Writer River is back!

I dropped Tom an email about Writer River last week and, in his reply, he alluded to some of the issues he mentions in his post. Little did I know that Writer River was soon to be hacked!

I love the idea behind the website though, so it’s good to see Tom is still keen on pushing things forward. If you previously registered, I’d urge you to go back and register on the new version of the website. The premise is the same: a website which will collate the best Technical Communications stories and blog posts.

Head over to Writer River; the more people who sign up and join in, the better it will become.

Consideration Layer Model

As a technical writer, every decision you make is influenced by several discrete considerations: the audience of the information, the process you’ll need to follow to collate and verify that information, and so on. Every decision requires such considerations, but is it possible to model them?

Some background first: I don’t revisit my old posts nearly as often as I should and, as there are certain topics that I tackle with the vague idea of covering them in greater detail at some point later on, it’s handy when someone else gives me a nudge about an old post (namely, The tool is not important).

That said, such topics are typically the hardest ones to consider: the big-picture things that leave my brain reeling as I try to narrow this wonderful profession down into something digestible without generalising (genericising?) so much that it becomes worthless. Still, that’s never stopped me trying, so I’ll bash on and see what falls out of my head.

My post about how the tool is not important looks at the other areas that need to be considered if you are investigating upgrading or changing your main authoring tool, and was largely prompted by our upcoming move from FrameMaker to AuthorIT. The post is focussed on tools (obviously) but, looking back, it only mentions a rather large consideration in passing, namely “focussing discussions on the audience, the expectations”.

Such is the way of things when it comes to Technical Communications: any time you take a step back you realise that there are many things to consider, all of which impact on one another even though they are distinctly different. The audience needs information delivered in a particular format (technical consideration) and in a particular voice (writing theory). They’d also like it structured a certain way (information design) and, of course, they’d quite like it to be accurate and up-to-date (working practice).

As the manager of a technical communications team, all of these things feature in my thinking almost every day. Any time something new lands on my desk, or a new issue is discovered that needs to be corrected, my brain runs through the gamut of considerations, trying to figure out how best to tackle the work. The more often this happens, the more I look for a model of how best to approach such things and, as I’ve not really found one, I thought I’d take a bash at creating something myself.

This is a first draft; it is still very crude and missing a lot of detail, but as a starting point I think it might work. The premise is simple enough: for each piece of work undertaken by a typical software technical writer (yes, I’m making some assumptions), there are various items that need to be taken into consideration, and these can be broadly broken down into four layers – Audience, Content, Theory, and Tools – with a number of categories of consideration within each layer.

Rather than try and tackle the entire thing, I’m going to focus on the big pieces first.

The following layers are the broad sweep of the model, and I think most technical communication considerations can be allocated to one of them (there’s a rough sketch of the model after the list):

  • Audience – everything from preferences for format and media, and the scenarios in which they require information, through to a detailed understanding of how they work.
  • Content – the main output needs to be considerate of the audience, and as such will be provided in different forms (written, graphical and so on). It also needs to be sourced properly, written, reviewed and published.
  • Theory – depending on where you are on the IPMM (Information Process Maturity Model), this layer may be thin but it will still exist; it covers working practice and in-house guidelines, as well as larger-view methodologies such as single sourcing, minimalism in writing and so on.
  • Tools – the lowest layer, as it is furthest removed from the user, but one which still has a significant impact as it is directly tied in to the writing process itself.
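To give that a slightly more concrete shape, here’s the model sketched as a simple data structure. This is purely illustrative – the layer names come from the list above, but the Python representation and the example category names are my own placeholders, not part of the model itself.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One layer of the consideration model."""
    name: str
    considerations: list[str] = field(default_factory=list)

# Ordered top to bottom: closest to the user first, furthest removed last.
# The category names are placeholder examples drawn from the list above.
CONSIDERATION_MODEL = [
    Layer("Audience", ["format and media preferences",
                       "usage scenarios",
                       "how the audience actually works"]),
    Layer("Content", ["output forms (written, graphical, ...)",
                      "sourcing", "writing", "review", "publication"]),
    Layer("Theory", ["working practice", "in-house guidelines",
                     "single sourcing", "minimalism"]),
    Layer("Tools", ["authoring tool", "publishing pipeline"]),
]

# Each new piece of work gets walked through every layer, top to bottom.
for layer in CONSIDERATION_MODEL:
    print(f"{layer.name}: {', '.join(layer.considerations)}")
```

Nothing clever is going on there; the point is simply that each piece of work is walked through every layer in order, with Audience at the top and Tools at the bottom, for the reasons given above.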

So, does any of this make sense to anyone? Or is it just me? Over the next few posts I’m aiming to delve a little deeper into each layer, presuming I’ve gotten them correct, and we’ll see what lies within.

Consider this very much a work in progress, and feel free to point out my errors. Comments are welcomed.

Muji Manifesto

Can’t recall where I saw this, but it struck a chord, so I grabbed the main tenets with a view to expounding on them at a later date.

However, as simplicity suggests, I really don’t need to bother.

  • Because there is complexity in purity.
  • Elegance in plainness.
  • Intricacy in streamlining.
  • Richness in reduction.
  • Depth in minimalism.
  • Surprise in uniformity.
  • Innovation in re-use.
  • Cool in the avoidance of cool.
  • And there is true sophistication in simplicity.

These were not written about Technical Communications but they might as well have been. I’m seriously considering printing these off and pinning them up on the wall.

What’s in store for 2008?

Back after a couple of weeks of merriment, over-eating and general lazing about. Hopefully the festive season was as good to you as it was to me.

But enough looking back, this time of year is all about looking forward. So what is coming up in the next 12 months?

Well, I’m hoping to start migrating some content from Structured FrameMaker to AuthorIT, having decided that the overheads required to get DITA up and running just don’t stack up against the cost of ownership of AuthorIT. I’m a big fan of the principles behind DITA, and I will keep up to speed with progress, but it doesn’t suit our needs here.

I’m also hoping to post a bit more often here, and I’m toying with writing up an article or two for the ISTC magazine, Communicator. As ever, those will be the first things to go when project deadlines need to be met, but I’ll give it a try. One thing I won’t be doing is undertaking an MA in Technical Communications. The course starts this month and there is just too much going on in my life at the moment… maybe I’ll join the September influx. We’ll see.

I will, of course, be expanding on the themes I’ve been posting about recently, specifically the role of the modern Technical Communicator in a forward-facing software company. I’m hoping to make some strides in this area and I’ll be sure to write up my thoughts on a variety of topics. I’m also hoping to hear more from YOU, dear reader. Whilst I did start this blog as a way of getting my own thoughts straight, it’s been great to read your comments over the past year. Blogging is all about the conversation, so please, don’t be shy.

Here’s to a wonderful year!

Right, I’m off to write up that article I had completely forgotten about.

DITA is not the answer

Single sourcing is good, and I’m sure most of us can agree on that, but I’ve recently been wondering whether DITA is quite good enough.

The thing is, I’ve been looking at DITA as a solution for our single source needs for a while now. I’ve attended conferences, read whitepapers, listened to vendors and everything else that goes with it, and I’ve got a pretty good handle on things. If applied correctly, the benefits you can gain are very large, although the same can be said of any other single source project; yet what seems to be consistently missing from all of these wonderfully theoretical discussions is the cost and impact of getting such a solution “applied correctly”.

A key part of planning to move to single source, of which DITA is only a part, is understanding the business needs and technological requirements of all of the content producers in your organisation. Traditionally that means Technical Communications, Training, Pre-Sales and Marketing, with perhaps other flavours to be considered depending on how your company is structured.

However, if those parts of your organisation aren’t yet ready to move, then the business case changes. At present this is the situation I’m in, so I find myself looking for a short-term (2-3 year) solution that won’t lock us in to a proprietary format and that can give us benefits as soon as possible.

Re-use is our main reason for moving to single source. We don’t (yet) localise, and there is only one other team that has any interest in re-using our content (and even then, they are likely to use it as a source of verification, not a source of content). With that in mind, and with the proviso that I’ve used it previously, we are looking at AuthorIT.

Yes, it does mean we forego a lot of the power of DITA, but as it allows us to tag topics accordingly (in keeping with the DITA model) and has an XML DITA output option, it shouldn’t lock us in. I’m willing to put up with a little pain further down the road to get the benefits now.
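For anyone wondering what tagging topics “in keeping with the DITA model” amounts to in practice: each DITA topic is a small, typed chunk of XML, and a task topic in particular is a single procedure with a title and discrete steps. The sketch below builds and prints the skeleton of such a topic using only the Python standard library – a minimal illustration, with the topic id and step text invented for the purpose; it has nothing to do with AuthorIT’s internals.

```python
import xml.etree.ElementTree as ET

# Build the skeleton of a DITA task topic. The element names (task, title,
# taskbody, steps, step, cmd) are standard DITA; the id and the step text
# are invented for illustration.
task = ET.Element("task", id="example_configure_output")
ET.SubElement(task, "title").text = "Configure the output format"
taskbody = ET.SubElement(task, "taskbody")
steps = ET.SubElement(taskbody, "steps")
for cmd_text in ("Open the publishing profile.",
                 "Select the required output format."):
    step = ET.SubElement(steps, "step")
    ET.SubElement(step, "cmd").text = cmd_text

ET.indent(task)  # pretty-printing; available from Python 3.9
print(ET.tostring(task, encoding="unicode"))
```

Keeping topics in that shape – one self-contained task, concept or reference per topic – is what should make a later export through the DITA output option relatively painless.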

I’m still not entirely sure what else we are missing. We publish PDFs, HTML and JavaHelp, all of which AuthorIT handles, and as yet we don’t have a need to dynamically publish information based on metadata. If that changes in the near future we’ll handle it appropriately, but it isn’t on anyone’s radar.

I am concerned about the versioning capabilities of AuthorIT, as we maintain the last three versions of all our publications, but I know there are ways to achieve this in AuthorIT. I doubt it will work as well as our current system (FrameMaker files in an SVN repository) but, as is always the case, I expect we may need to make some compromises to get ourselves moving towards single sourcing our publications. Versioning is our main pain point and so becomes the focus for any possible solution.

DITA remains the long-term goal but, as I’ve said before, until there is an all-in-one solution that is easy to roll out it remains marginalised as a viable option. Most of us need to produce some form of business case to justify such purchases and, at present, DITA is still too costly an option. I’m always happy to learn new things, and whilst I would love to be able to invest time and resource into creating and maintaining a DITA-based solution, I just can’t justify it.

All of my research suggests that, rather than being a simple installation and conversion process, creating a DITA solution requires a lot of technical know-how and a not insubstantial amount of time and resource. We can handle the former; the latter is (I believe) not yet at a level which makes it cost-effective.

Ultimately, for the moment, DITA costs too much.

Do you agree? Can you prove me wrong? I’d love to hear your thoughts on this, particularly if you have implemented DITA already. I’m keen to hear just how much more productive a DITA solution can be if you aren’t involved in localisation. Have you recouped your costs yet?

Perhaps DITA is only really applicable for those with large budgets and the chance to invest heavily upfront. Alas I’m not in such a position. For the moment.

This is not a video

As I mentioned previously, the opening presentation at TICAD was by Adobe and featured their vision of the future of Technical Communications and information development. Apparently that future includes video.

Video has been available to many for a few years now, yet it is never really the main focus of a documentation team. Tom has questioned this as well:

“For too long I’ve minimized the importance of the audiovisual. Captivate — the industry standard tool for creating screen demos — is actually a relatively simple application. Mastering it and integrating audiovisual into user help will take it to the next level.”

This echoes what Adobe suggest (no big surprise there), but I have to admit that I don’t fully agree.

As a quick learning tool, I’m sure videos (screen demos) are useful, but I wouldn’t really know, as I’ve never used one as a primary source for learning about a product and I’m not sure I know anyone who has. Of course, that’s not to say they don’t have value, and with some research into the intended audience I’m sure it can be proven that they have a valid place in the product documentation set.

However my initial thoughts on the matter are hard to shake.

It may be one of the unwritten rules of documentation, the kind of rule that few people question and that may well be inaccurately applied, but I’ve always operated under the assumption that people only use the documentation when they are stuck.

Of course this is a broad, sweeping statement, but I believe it is true for the majority of software users. So, if that is the case, what is their mindset when they finally give in (having asked a co-worker and searched Google to no avail) and fire up the online help or open the user guide? Typically they will be annoyed and want an answer or fix pretty damn sharpish.

Why, in that case, would they even consider sitting through a 2-minute video that explains how to use the functionality with which they are currently battling?

To be fair, Tom isn’t suggesting this approach, but I think it’s wise to counsel against the trend lest it be used too heavily. The best balance is a few short demos of how to complete core tasks, accompanied by a comprehensive help system or user guide.

My fear is that the “cool” effect will override sensibilities and we’ll be plagued by popup videos and worse in the future.

The written word certainly isn’t the only way to effectively communicate information, and as technology progresses we will all need to carefully match the available delivery mechanisms with the information we need to deliver. The key word here is “carefully”.

I’d love to hear from anyone who is already doing something like this. I’ve not used Captivate, nor offered any form of video as part of a documentation set before, as videos didn’t match the audience profile, but I’d be interested in hearing how successful yours have been.