What do we want?

At TCUK12 this year, I chatted with several people about authoring tools. Vendors, other technical writers, managers: I asked them all the same two questions, again and again.

What authoring application do you use, and why do you use it?

The answers were illuminating, interesting and always useful. There are many, many options out there, catering to many different needs, and all of them have a different set of strengths and weaknesses. Alas, no matter how hard I tried, regardless of how many ways I tried to bend our requirements, all of those conversations led me to the same conclusion.

No-one out there builds what we want, so we may have to build it ourselves.

As part of improvements to our content, one of my team has led the charge to restructure our information. She has a passion for information architecture and devised a three-pronged approach to our content. You can navigate in by role, by product area or… by something else we haven’t yet decided upon.

We’ve audited the topics we have and applied some simple structuring decisions, and it is looking good so far. The problem we will soon have is that we will need to build this new structure and make it usable by our customers.

What we would like is to be able to tag our topics, and use those tags to present a default structure to our information. The tags would also allow users to filter the topics they see and, either by addition or subtraction, create a unique set of information for their needs. Ultimately this would lead to personalisation of content in a basic form, but that could easily be enhanced to provide a smarter take on content for each user.
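
To make that last idea concrete, here is a minimal sketch, in Python, of how tag-based filtering might work. It is purely illustrative: the topics, tags and function names are invented for the example and aren’t taken from any tool we evaluated.

    from dataclasses import dataclass, field

    @dataclass
    class Topic:
        """A documentation topic carrying a set of descriptive tags."""
        title: str
        tags: set = field(default_factory=set)

    def filter_topics(topics, include=None, exclude=None):
        """Return topics that carry all 'include' tags and none of the 'exclude' tags."""
        include = set(include or [])
        exclude = set(exclude or [])
        return [t for t in topics if include <= t.tags and not (exclude & t.tags)]

    # Hypothetical topics and tags, for illustration only.
    topics = [
        Topic("Installing the development kit", {"administrator", "installation"}),
        Topic("Handling events", {"developer", "reference"}),
        Topic("Getting started", {"end-user", "concept"}),
    ]

    # A user building their own view of the content by addition and subtraction.
    print([t.title for t in filter_topics(topics, include={"developer"})])
    print([t.title for t in filter_topics(topics, exclude={"installation"})])

In this picture the default structure is just the result of a few standard filters run over the full topic set, and personalisation is a user-supplied variation on the same mechanism.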

Alas, it seems that without a lot of customising of an XML-based (most likely DITA) system we won’t get near that, and even the products that get close require a compromise somewhere. Most of the time that compromise would mean, for the team of writers, a step back to a less friendly interface and more exposure to the underlying technology of the tool they are using. At present Author-it provides a simple authoring environment that allows the writers to concentrate on writing content.

But perhaps that is the point. Maybe it’s time to try a different set of tools, adopt new working practices, take on a bigger challenge.

TCUK 12

The time has come, so Gordon said, to talk of many things, of slides and chats, and learning facts, and something else that rhymes but I’m rubbish at poetry (with sincere apologies to Lewis Carroll).

Enough of that though, what I want to talk about is the Technical Communications Conference 2012 (TCUK12) and why you should go.

Disclaimer: I serve on the Council of the ISTC, who organise this event. 

Let me tell you a story.

Once upon a time a young (ok, middle-aged) man had started a new job and was trying to figure out the best way to improve things and solve some of his problems. The year was 2007.

At the time, the young man (oh shut up) had started a blog and was finding a lot of interesting people writing about Technical Communications. From that he heard about something called DITA. To learn more, as it sounded very much like it might solve his problems, he went to a conference (X-Pubs) in Reading. He learned a lot, and met a lot of inspiring and interesting people. Turned out DITA wasn’t for him though (yet).

Later that year, he had the opportunity (which came about directly through this blog, and thanks to a lovely woman named Anne Gentle) to attend and speak at TICAD. It was a smaller conference in scale but just as rich in information.

Having set a precedent of attending conferences, he looked around for another the following year and, remembering how good he had found the Digitext conferences (many years ago now), he decided to attend the User Assistance conference in Edinburgh (2008). Again, he found himself surrounded by his peers, and took away some valuable lessons.

The following year he heard of a new conference and, as it had multiple streams of presentations, he thought it would give him the best chance of learning. He also felt foolhardy enough to present at it (but let us not dwell on that). The conference was called Technical Communications UK (2009).

And that’s quite enough babbling about me.

It’s always interesting to see what presentations and themes the conference will have. Each year has had a different third stream, and this year it’s focusing on Accessibility and Usability, something I know many technical writers working in a software environment inevitably get drawn into (if it’s easier to use, it’s easier to document). Add in the longer workshops on the first day and, for the money, it’s hard to beat.

Like many people I’ve had to convince my boss it is worthwhile letting members of our team attend, but I’m convinced that everyone will find a handful of topics they could learn about and look to apply at their own workplace; the trick is to plan to do just that.

Any time I’ve returned from a conference I’ve been excited and looking to apply ideas and techniques to what we do. If we hadn’t managed to implement some of those things it would be much harder to ask again the following year, as evidence of value is a hard thing to argue against!

Above all though, TCUK seems to have a good energy, a good ‘vibe’, and everyone who attends seems that little bit more driven and up for learning, discussion and basically getting stuck in. It does help that most people stay over, so you start to make friends over a glass of wine (or three), and that carries through into the next day, giving the entire conference a relaxed, friendly feel.

If you only plan on attending one conference this year, I would heartily suggest TCUK as a great starting point.

Hope to see you there!

Technology vs Emotion

Random thought: Has the rise of (talk of) emotional content (affective assistance) been driven by the concentration, over the last few years, on technological solutions?

Single sourcing, XML, DITA, DocBook, and all the rest have (rightly) taken our profession forward, so I guess it’s natural that the general trends, as well as refocussing on the content itself, are looking at how to better engage with a modern audience.

The evidence suggests that that modern audience is Facebooking, Twittering, and blogging, and wants content in easily digestible chunks.

That plays nicely into the hands of single sourcing (chunks) and the idea of emotional content: connecting to the user and using friendly language to make the content easily digestible.

So, if you’ve already got your technology sorted out, why aren’t you looking at how your content is presented?

Does single sourcing content work?

One of the more popular posts on this blog is titled DITA is not the answer and, whilst things are certainly moving forward, it’s a little sad that it is still valid.

A recent comment on that post suggested that it’s not just DITA that is lacking; it’s the working realities of single sourcing that are flawed.

Well, that and the fact that I keep referring to single source when I actually mean content reuse (for you can have one source for everything yet not reuse the content anywhere).

You can read the full comment yourself but the relevant bits are:

I have never seen single sourcing work. Maybe a single author who knows the topics thoroughly enough to reuse, or a tightly knit group of writers synched up at the same level.

The only place we are going to reuse content is in web mashups using semantic markup once the content is in the cloud.

It’s an interesting view and one which touches on something that has been on my mind these past couple of weeks as we are in mid-migration towards our single source solution.

Just how do you coordinate a team of writers, working in discrete areas of the documentation, with a large number (3000+) of topics?

There are a number of ways we are tackling this, and only time will tell if they are successful. Firstly, we spent some time discussing how best to structure the source topics. Do we group them by product area? By topic type? Or by some other arbitrary method?

We decided to group at the highest level (the top-level folder) by user persona, and below that we grouped topics according to how they are viewed from the product, so development-kit-wide ‘Events’ are stored in a single folder, whereas topics for a specific piece of functionality in the development kit are stored in their own folder. Your system will be different, of course, but this method suits our needs.

After that we need some way of knowing both what type of information a topic contains and where that topic is used. We are not authoring in a DITA-specific environment but decided to model our topic types on the DITA model to future-proof us as much as possible (we are using Author-it, which will export to DITA XML should we need it in the future). We have different templates for each type of topic (Concept, Procedure, Reference and so on), primarily to allow us to identify a topic (by default, Author-it shows which template a topic is using).

That leaves the final piece of our puzzle. How do we know where a topic is used? This is more than just a list of which deliverables the topic will appear in; it also has to hint at the context in which the topic is being used.
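
To show the kind of record we have in mind, here is a small, hypothetical Python sketch. The field names, deliverables and contexts are invented for the example; in practice this information might live in a content plan, a spreadsheet, or metadata attached to each Author-it object.

    from dataclasses import dataclass, field

    @dataclass
    class TopicRecord:
        """Tracks what a topic is, and where (and how) it is reused."""
        title: str
        topic_type: str                              # e.g. "Concept", "Procedure", "Reference"
        used_in: dict = field(default_factory=dict)  # deliverable -> how the topic is used there

    events_overview = TopicRecord(
        title="Events overview",
        topic_type="Concept",
        used_in={
            "Developer Guide": "introductory chapter",
            "API Reference": "linked background reading",
        },
    )

    # A quick 'where used' report for a writer about to edit the topic.
    for deliverable, context in events_overview.used_in.items():
        print(f"{events_overview.title} appears in {deliverable} as {context}")

The important part is the second half of each entry: knowing that a topic appears in a deliverable isn’t enough; you also want a hint of how it is being used there before you change it.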

Does any of this mean that we are more likely to reuse content? Not necessarily but it should give us a fighting chance, and once we’ve updated the content plans for all of our deliverables we will start to really see the benefits. Those content plans were the very things that suggested we could reuse content across multiple deliverables and I’m certain that, with a bit more analysis, we’ll get further gains.

Can single source and content reuse work? Of course it can. There are plenty of good examples out there and they all share one thing in common, something that isn’t really broadcast by the vendors: content reuse from a single source takes a lot of hard work.

But it is possible.

How do we structure our topics?

I’ve waffled on about single source and our plans for long enough so, as we are finally starting the process itself, I thought I’d capture some information as we go along. However, it’s probably good to set the scene, so I’ll cover that stuff first. Over time you’ll be able to see all the posts related to this work here.

Where should it live?

Next up in our journey towards Author-it nirvana is to decide how to store our content. Author-it stores information as topics, and as topics are designed to be reused, locating them is a key part of the Author-it solution.

One approach would be to simply dump a lot of the topics in loosely appropriate folders and let the built-in search help us find the topics we need. That way the topic names can be a little ambiguous as the content of the topic is what matters.

However, that feels a little like flying by the seat of our pants, so I’m keen to figure out the best way to store the content within Author-it, not only to make it easier for the technical writers but also to future-proof us as much as possible.

The Author-it Knowledge Center (sic) is chock full of useful information and includes a topic on folder structure which rightly states that:

You need to choose the approach that best suits your requirements. You can have as many folders as you need (but remember that too many, may get confusing…) and as many levels as are required. Also consider the reusability of your content. By burying objects in a myriad of sub folders, others may not know that these objects exist and end up creating multiple copies of the same information – meaning the information is duplicated in more than one place.

Another useful thing to know when creating folders is that when folders are created, they inherit the security of its parent. Therefore, when you design your initial folder structure, it is worthwhile creating some folders at the very top level to set security, and then creating any sub folders within these.

One thing my team and I are hoping to adopt is a DITA-based structure. Whilst built-in DITA support is not yet part of Author-it (but it’s coming), we do like the way DITA approaches topic-based writing and can easily map most of our content to the default DITA topic types. This also gives us an exit route out of Author-it should we ever decide to change our tooling in the coming years.

However, simply storing all of our content in three or four folders (one per topic type) would still leave us with a huge number of topics per folder, so obviously we need some other way of structuring the content logically. And, in a nice twist, we are also going to be restructuring how we offer the published content in the future, so we can’t base the folder structure on our current documentation set. That makes sense moving forward as well, as we may well start to offer different groupings of information anyway and I’d rather not perpetuate our current document-centric view.

So, what have we decided?

After some thought we realised that the only way to structure the content in Author-it and make it easy to locate is to focus on user role. We discounted using product terms here as some of the information we will be writing in the future doesn’t easily fall into a specific area of the product, so we’d end up with a generic “Other Stuff” area, which suggested that was the wrong approach.

Essentially we have three user types for our product set: Developer, Administrator and End User. Under those folders we then break the information down into areas of product information (for example “Installation”). We tried to steer away, again, from using product-specific areas, but as the largest part of our product is a development kit we realised it made sense to base that information on the “tools” within the development kit, rather than trying to conceptualise the information any further.

Beneath those folders we then break out into, loosely, DITA-focused folders of Concepts, Procedures, and Reference, with an additional folder to hold Graphics (screenshots, diagrams and so on). DITA suggests Tasks, not Procedures, but we consider a task to be at a higher level, with one task containing one or more procedures.

So we have a basic folder structure in Author-it that looks a little like this:

    Administrator [User Role]
    	Installation [Information Area]
    		Concepts [Topic Type]
    		Graphics
    		Procedures [Topic Type]
    		Reference [Topic Type]
    

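For illustration only, here is a tiny Python sketch of how that “User Role / Information Area / Topic Type” convention could be expressed and sanity-checked. The role and folder names are simply the examples from above; none of this is built into Author-it.

    # Illustrative helper: build folder paths following the
    # "User Role / Information Area / Topic Type" convention described above.

    USER_ROLES = {"Administrator", "Developer", "End User"}
    TOPIC_TYPE_FOLDERS = {"Concepts", "Graphics", "Procedures", "Reference"}

    def folder_path(role: str, area: str, topic_type_folder: str) -> str:
        """Return the folder path for a topic, validating the fixed levels."""
        if role not in USER_ROLES:
            raise ValueError(f"Unknown user role: {role}")
        if topic_type_folder not in TOPIC_TYPE_FOLDERS:
            raise ValueError(f"Unknown topic type folder: {topic_type_folder}")
        return f"{role}/{area}/{topic_type_folder}"

    print(folder_path("Administrator", "Installation", "Procedures"))
    # -> Administrator/Installation/Procedures

In this sketch only the top and bottom levels are fixed; the information area in the middle is free text, which is where we expect most change as the product grows.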
We think this will work for us, and we’ll be testing it with a sample chapter or two very soon. We definitely need to get this right before we start converting our content over, but the thoughts and details of that exercise are for another post.

How do we move to single source?

I’ve waffled on about single source and our plans for long enough so, as we are finally starting the process itself, I thought I’d capture some information as we go along. However, it’s probably good to set the scene, so I’ll cover that stuff first. Over time you’ll be able to see all the posts related to this work here.

How? – how do we do it?

Once we’d agreed that single source would provide us with a good solution (it’s still not ideal, but nothing ever is…), the next question was “How?”.

Having followed the technologies in this area quite closely over the past few years, my immediate thoughts went towards a DITA-enabled solution. The basic topic types and methodologies fit well with an Agile environment, so there would be fairly immediate benefits once we got the system up and running. We spent some time investigating our content and planning how best to leverage DITA to our advantage and, once we were happy that it would meet our needs (with less overhead than adopting DocBook), we looked at the technological challenges of adopting a DITA-based system.

And that’s where we hit the biggest block. DITA is an excellent methodology but still lacks simple, cheap tooling support (it would take upwards of several thousand to fully implement a DITA solution, whereas a bespoke solution could cost considerably less). Other considerations (we have JavaHelp as our online help format) also came into play and, after some investigation of other XML-based tools, we decided to go with Author-it and base our working practices around the DITA methodology and topic types.

We did consider upgrading our legacy applications (FrameMaker and Webworks) and configuring them to give us a solution that would meet our needs, but even the rough estimates for that work took us beyond the cost of our chosen solution.

One caveat to this is to note that I have used Author-it previously and, whilst it is not without its foibles (which application isn’t?), it hits the sweet spot of functionality versus cost. None of the rest of the team have used it, but that would be the same for any other new tool, and it was one of the points considered in favour of keeping the FrameMaker + Webworks solution.

A second caveat is that I’m fully aware that, in due time, the tool vendors will get on top of this problem (MadCap already seem to be ahead of the others in this area), but alas the timescales don’t suit us. The worst-case scenario is that we ditch Author-it in a few years, export the content to DITA XML, and import it into a compatible tool that meets whatever needs we have at that time.