Tag: DITA

At TCUK12 this year, I chatted with several people about authoring tools. Vendors, other technical writers, managers: I asked them all the same two questions, again and again.

What authoring application do you use, and why do you use it?

The answers were illuminating, interesting and always useful. There are many, many options out there, catering to many different needs, and all of them have a different set of strengths and weaknesses. Alas, no matter how hard I tried, regardless of how many ways I tried to bend our requirements, all of those conversations led me to the same conclusion.

No-one out there builds what we want, so we may have to build it ourselves.

As part of improvements to our content, one of my team has led the charge to restructure our information. She has a passion for information architecture and devised a three-pronged approach to our content: you can navigate in by role, by product area, or… by something else we haven’t yet decided upon.

We’ve audited the topics we have and applied some simple structuring decisions, and it is looking good so far. The problem we will soon face is that we will need to build this new structure and make it usable by our customers.

What we would like is to be able to tag our topics, and use those tags to present a default structure to our information. The tags would also allow users to filter the topics they see and, either by addition or subtraction, create a unique set of information for their needs. Ultimately this would lead to personalisation of content in a basic form, but that could easily be enhanced to provide a smarter take on content for each user.
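As a rough illustration of what we’re after, here’s a minimal sketch of tag-driven filtering; the topic names, tags, and helper function are all hypothetical, not our actual content model:

    # A minimal sketch of tag-driven topic filtering: each topic carries
    # a set of tags. All topic names and tags here are hypothetical.
    topics = {
        "installing-the-server": {"administrator", "installation"},
        "events-overview": {"developer", "concept"},
        "creating-a-report": {"end-user", "reporting"},
    }

    def filter_topics(topics, include=None, exclude=None):
        """Return topics matching every 'include' tag and no 'exclude' tag."""
        include = set(include or [])
        exclude = set(exclude or [])
        return [name for name, tags in topics.items()
                if include <= tags and not (exclude & tags)]

    # By addition: only administrator topics.
    print(filter_topics(topics, include={"administrator"}))
    # By subtraction: everything except developer material.
    print(filter_topics(topics, exclude={"developer"}))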

Alas, it seems that without doing a lot of customising of an XML-based (most likely DITA) system we won’t get near that, and even the products that get close require a compromise somewhere. Most of the time that compromise would mean, for the team of writers, a step back to a less friendly interface and more exposure to the underlying technology of the tool they are using. At present Author-it provides a simple authoring environment that allows the writers to concentrate on writing content.

But perhaps that is the point. Maybe it’s time to try a different set of tools, adopt new working practices, and take on a bigger challenge.


The time has come, so Gordon said, to talk of many things, of slides and chats, and learning facts, and something else that rhymes but I’m rubbish at poetry (with sincere apologies to Lewis Carroll).

Enough of that though; what I want to talk about is the Technical Communication UK 2012 conference (TCUK12) and why you should go.

Disclaimer: I serve on the Council of the ISTC, who organise this event. 

Let me tell you a story.

Once upon a time a young (ok, middle-aged) man had started a new job and was trying to figure out the best way to improve things and solve some of his problems. The year was 2007.

At the time, the young man (oh shut up) had started a blog and was finding a lot of interesting people writing about Technical Communications. From that he heard about something called DITA. To learn more, as it sounded very much like it might solve his problems, he went to a conference (X-Pubs) in Reading. He learned a lot, and met a lot of inspiring and interesting people. Turned out DITA wasn’t for him though (yet).

Later that year, he had the opportunity (which came about entirely thanks to this blog and a lovely woman named Anne Gentle) to attend and speak at TICAD. It was a smaller conference in scale but just as rich in information.

Having set a precedent of attending conferences, he looked around for another the following year and, remembering how good he had found the Digitext conferences (many years ago now), decided to attend the User Assistance conference in Edinburgh (2008). Again, he found himself surrounded by his peers, and took away some valuable lessons.

The following year he heard of a new conference and, as it had multiple streams of presentations, thought it would give him the best chance of learning. He also felt foolhardy enough to present at it (but let us not dwell on that). The conference was called Technical Communication UK (2009).

And that’s quite enough babbling about me.

It’s always interesting to see what presentations and themes the conference will have. Each year has had a different third stream, and this year it’s focusing on Accessibility and Usability, something I know many technical writers working in a software environment inevitably get drawn into (if it’s easier to use, it’s easier to document). Add in the longer workshops on the first day and, for the money, it’s hard to beat.

Like many people, I’ve had to convince my boss it is worthwhile letting members of our team attend, but I’m convinced that everyone will find a handful of topics that they can learn about and look to apply at their own workplace; the trick is to plan to do just that.

Any time I’ve returned from a conference I’ve been excited and looking to apply ideas and techniques to what we do. If we hadn’t managed to implement some of those things, it would be much harder to ask again the following year; evidence of value is a hard thing to argue against!

Above all though, TCUK seems to have a good energy, a good ‘vibe’, and everyone who attends seems that little bit more driven and up for learning, discussion, and basically getting stuck in. It does help that most people stay over, so you start to make friends over a glass of wine (or three), and that carries through into the next day, giving the entire conference a relaxed, friendly feel.

If you only plan on attending one conference this year, I would heartily suggest TCUK as a great starting point.

Hope to see you there!


Random thought: Has the rise of (talk of) emotional content (affective assistance) been driven by the concentration, over the last few years, on technological solutions?

Single sourcing, XML, DITA, DocBook, and all the rest have (rightly) taken our profession forward, so I guess it’s natural that the general trends, as well as refocussing on the content itself, are looking for how to better engage with a modern audience.

The evidence suggests that that modern audience is Facebooking, Twittering, and blogging, and wants content in easily digestible chunks.

That plays nicely into the hands of single sourcing (chunks) and the idea of emotional content: connecting to the user, using friendly language to make the content easily digestible.

So, if you’ve already got your technology sorted out, why aren’t you looking at how your content is presented?


One of the more popular posts on this blog is titled DITA is not the answer and, whilst things are certainly moving forward, it’s a little sad that it is still valid.

A recent comment on that post suggested that it’s not just DITA that is lacking; it’s the working realities of single sourcing that are flawed.

Well, that and the fact that I keep referring to single source when I actually mean content reuse (you can have one source for everything yet not reuse the content anywhere).

You can read the full comment yourself but the relevant bits are:

I have never seen single sourcing work. Maybe a single author who knows the topics thoroughly enough to reuse, or a tightly knit group of writers synched up at the same level.

The only place we are going to reuse content is in web mashups using semantic markup once the content is in the cloud.

It’s an interesting view and one which touches on something that has been on my mind these past couple of weeks as we are in mid-migration towards our single source solution.

Just how do you coordinate a team of writers, working in discrete areas of the documentation, with a large number (3000+) of topics?

There are a number of ways we are tackling this and only time will tell if they are successful. Firstly we spent some time discussing how best to structure the source topics. Do we group them by product area? By topic type? Or some other arbitrary method?

We decided to group at the highest level (the top-level folder) by user persona, and below that we grouped topics according to how they are viewed from the product, so development-kit-wide ‘Events’ are stored in a single folder, whereas topics for a specific piece of functionality in the development kit are stored in their own folder. Your system will be different, of course, but this method suits our needs.

After that we need some way of knowing both what type of information a topic contains and where that topic is used. We are not authoring in a DITA-specific environment but decided to model our topic types on the DITA model to future-proof ourselves as much as possible (we are using Author-it, which will export to DITA XML should we need it in the future). We have different templates for each type of topic (Concept, Procedure, Reference, and so on), primarily to allow us to identify a topic (by default, Author-it shows which template a topic is using).

That leaves the final piece of our puzzle. How do we know where a topic is used? This is more than just a list of which deliverables the topic will appear in; it also has to hint at the context in which the topic is being used.
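To make that concrete, here’s a minimal sketch of the sort of where-used index we have in mind; every topic, deliverable, and context name below is hypothetical:

    # Sketch of a where-used index: topic -> (deliverable, usage context).
    # In practice this would be driven by our content plans, not hand-typed.
    where_used = {
        "events-overview": [
            ("Developer Guide", "introductory concept"),
            ("API Reference", "background reading"),
        ],
        "installing-the-server": [
            ("Installation Guide", "core procedure"),
        ],
    }

    def report(topic):
        """List each deliverable a topic appears in, with its usage context."""
        for deliverable, context in where_used.get(topic, []):
            print(f"{topic}: used in {deliverable} ({context})")

    report("events-overview")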

Does any of this mean that we are more likely to reuse content? Not necessarily but it should give us a fighting chance, and once we’ve updated the content plans for all of our deliverables we will start to really see the benefits. Those content plans were the very things that suggested we could reuse content across multiple deliverables and I’m certain that, with a bit more analysis, we’ll get further gains.

Can single source and content reuse work? Of course it can. There are plenty of good examples out there and they all share one thing in common, something that isn’t really broadcast by the vendors: content reuse from a single source takes a lot of hard work.

But it is possible.


I’ve waffled on about single source and our plans for long enough so, as we are finally starting the process itself, I thought I’d capture some information as we go along. However, it’s probably good to set the scene, so I’ll cover that stuff first. Over time you’ll be able to see all the posts related to this work here.

Where should it live?

Next up in our journey towards Author-it nirvana is to decide how to store our content. Author-it stores information as topics, and as topics are designed to be reused, locating them is a key part of the Author-it solution.

One approach would be to simply dump a lot of the topics in loosely appropriate folders and let the built-in search help us find the topics we need. That way the topic names can be a little ambiguous as the content of the topic is what matters.

However, that feels a little like flying by the seat of our pants, so I’m keen to figure out the best way to store the content within Author-it, not only to make it easier for the technical writers but also to future-proof us as much as possible.

The Author-it Knowledge Center (sic) is chock full of useful information and includes a topic on folder structure which rightly states that:

You need to choose the approach that best suits your requirements. You can have as many folders as you need (but remember that too many, may get confusing…) and as many levels as are required. Also consider the reusability of your content. By burying objects in a myriad of sub folders, others may not know that these objects exist and end up creating multiple copies of the same information – meaning the information is duplicated in more than one place.

Another useful thing to know when creating folders is that when folders are created, they inherit the security of its parent. Therefore, when you design your initial folder structure, it is worthwhile creating some folders at the very top level to set security, and then creating any sub folders within these.

One thing my team and I are hoping to adopt is a DITA-based structure. Whilst built-in DITA support is not yet part of Author-it (but it’s coming), we do like the way DITA approaches topic-based writing and can easily map most of our content to the default topic types with which DITA is concerned. This also gives us an exit route out of Author-it should we ever decide to change our tooling in the coming years.

However, simply storing all of our content in 3 or 4 folders (1 per topic type) would still leave us with a huge number of topics per folder, so obviously we need some other way of structuring the content logically. And, in a nice twist, we are also going to be restructuring how we offer the published content in the future, so we can’t base the folder structure on our current documentation set. That makes sense moving forward as well, as we may well start offering different groupings of information anyway and I’d rather not perpetuate our current document-centric view.

So, what have we decided?

After some thought we realised that the only way to structure the content in Author-it to make it easy to locate is to focus on user role. We discounted using product terms here as some of the information we will be writing in the future doesn’t easily fall into a specific area of the product; we’d end up with a generic “Other Stuff” area, which suggested that it was the wrong approach.

Essentially we have three user types for our product set: Developer, Administrator, and End User. Under those folders we then break down the information into areas of product information (for example, “Installation”). We tried to steer away, again, from using product-specific areas, but as the largest part of our product is a development kit we realised it made sense to base that information on the “tools” within the development kit, rather than trying to conceptualise the information any further.

Beneath those folders we then break out into, loosely, DITA-focused folders of Concepts, Procedures, and References, with an additional folder to hold Graphics (screenshots, diagrams, and so on). DITA suggests Tasks, not Procedures, but we consider a task to be at a higher level, with one task containing one or more procedures.

So we have a basic folder structure in Author-it that looks a little like this:

    Administrator [User Role]
        Installation [Information Area]
            Concepts [Topic Type]
            Graphics
            Procedures [Topic Type]
            Reference [Topic Type]

We think this will work for us, and we’ll be testing it with a sample chapter or two very soon. We definitely need to get this right before we start converting our content over, but the thoughts and details of that exercise are for another post.
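Purely to illustrate the shape of that hierarchy (Author-it holds folders in its own library rather than on disk, and the roles and areas below are examples, not our full set), here’s a little sketch that enumerates the folders:

    # Sketch: enumerate the user role -> information area -> topic type
    # hierarchy. Roles and areas are examples, not our complete set.
    areas_by_role = {
        "Administrator": ["Installation"],
        "Developer": ["Events"],
        "End User": ["Reporting"],
    }
    topic_types = ["Concepts", "Graphics", "Procedures", "Reference"]

    for role, areas in areas_by_role.items():
        for area in areas:
            for topic_type in topic_types:
                print(f"{role}/{area}/{topic_type}")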


How? – how do we do it?

Once we’d agreed that single source would provide us with a good solution (it’s still not ideal, but nothing ever is…) the next question was “How?”.

Having followed the technologies in this area quite closely over the past few years, my immediate thoughts went towards a DITA-enabled solution. The basic topic types and methodologies fit well with an Agile environment, so there would be fairly immediate benefits once we got the system up and running. We spent some time investigating our content and planning how best to leverage DITA to our advantage and, once we were happy that it would meet our needs (with less overhead than adopting DocBook), we looked at the technological challenges of adopting a DITA-based system.

And that’s where we hit the biggest block. DITA is an excellent methodology but still lacks simple/cheap tooling support (it would take upwards of several thousand pounds to fully implement a DITA solution, whereas a bespoke solution could cost considerably less). Other considerations (we have JavaHelp as our online help format) also came into play and, after some investigation of other XML-based tools, we decided to go with Author-it and base our working practices around the DITA methodology and topic types.

We did consider upgrading our legacy applications (FrameMaker and Webworks) and configuring them to give us a solution that would meet our needs but even the rough estimates for that work took us beyond the cost of our chosen solution.

One caveat to this is to note that I have used Author-it previously and, whilst it is not without its foibles (which application isn’t?), it hits the sweet spot of functionality versus cost. None of the rest of the team have used it, but that would be true of any other new tool too; team familiarity was considered an upside of keeping the FrameMaker + Webworks solution.

A second caveat is that I’m fully aware that, in due time, the tool vendors will get on top of this problem (MadCap already seem to be ahead of the others in this area), but alas the timescales don’t suit us. The worst-case scenario is that we ditch Author-it in a few years, export the content to DITA XML, and import it into a compatible tool that meets whatever needs we have at that time.

Tom Johnson has had a look at the survey on help authoring recently published by the HATT matrix website and, by pulling in the results of some other surveys in the same area, has extrapolated some good conclusions from them.

He rightly points out that surveys need to be taken with a pinch of salt (he goes into the detail of why this is so), and that whilst the numbers involved would seem to be high enough it’s likely that the questions themselves need further consideration in future.

That said, there are two things I took from his post.

1. Know the problem before picking the tool
You may not be in the position to switch authoring tools, but if you are and you have investigated the market then please make sure that you are buying a tool that addresses the problems you have.

The presumption here is that if you have a legacy tool (like we currently do, FrameMaker 7.1) and it still works and meets your requirements, then there is no good reason to upgrade. I’ve been a victim of buying into the ‘keeping up’ frenzy that software manufacturers like to generate, but once a product is reasonably mature it is likely to have most of the features you need already.

I’d offer Microsoft Word as an example here: I could probably still use Word 2.0 for the few documents I maintain in that format, as the newer versions add functionality I don’t need (and which has ended up intruding on my workflow at times!).

The X-Pubs conference a couple of years ago had a common, if not publicised, theme. Almost all of the presentations included the advice to figure out what problems you have before deciding IF single sourcing (using XML as the base format) will help, and that’s before you even consider the tools themselves.

2. DITA is still a theory
Whilst it is true that the number of people leveraging DITA for their content is rising, the numbers remain low.

Partly that will be due to the fact that few organisations/teams/people are in a position to switch quickly just because a new technology has come along but, and I’ve said this before (in fact I’ve said that I’ve said this before!), the rollout of DITA remains harder than rolling out a bespoke authoring tool.

When costing an implementation of a new tool there are various factors, and it’s very easy to see that you can get MadCap Flare up and running quickly, whereas a DITA-based solution takes investment in developing the environment. This is beginning to change but, as yet, the phrase ‘DITA support’ really only means that you can output to a DITA-formatted XML file. The tools aren’t constructed around the DITA concepts, so you immediately lose a lot of the benefits that DITA can bring.

Until there is a tool that fully leverages DITA, building it into the workflow and helping the concepts become part of your daily working practice, DITA will continue to be a marginal player.

Which, in a way, is how it should be. DITA is not a tool; it is a technology and a methodology. It is there to support the toolset and the writer. It’s just a shame that tool vendors continue to believe that THEIR format is best, refusing to budge from that position and shoe-horning ‘DITA-esque’ features into their software.

Anyway, the rest of the survey write up is interesting and worth a read but, as Tom says:

“I do love these surveys, though; if for no other reason than they give us something to talk about”


Notes and thoughts from Day 1 of the User Assistance Conference

Session 1 – Tony Self – Emerging Help Delivery Technologies
It’s been quite a while since I heard Tony speak but, as ever, he provided an entertaining, if somewhat limited, presentation. Covering the various types of help viewing technologies, he nicely summarised some of the available choices and the features to look out for, including the ability to wrap up an online help system in its own application (using technology like Adobe AIR). It was interesting to hear of some Web 2.0 features making their way into online help technologies, including voting and commenting facilities, which would give you direct feedback from the people using your help system.

Session 2 – Joe Welinske – Write More, Write Less: Embracing the Value of Crafted Words and Images
Another regular speaker, Joe was certainly fired up, challenging us all from the outset of his presentation to consider how we work in far more detail than we currently do. First up, he suggested that we should be writing fewer words whilst making sure those words are correct, lessening the burden on the reader by providing just the information they need and nothing more.

And then he hit on something that I’ve previously mentioned here (although Joe nailed it much better than I did), namely allocating writing resource to the highest priority pieces of documentation work, rather than the traditional approach of documenting everything. It’s a simple approach that, when combined with better writing, leads the craft of technical communications to provide much higher value to the business which is good news for all of us.

Session 3 – Sonia Fuga – DITA & WordPress Solution for Flexible User Assistance
A showcase-style presentation of a stunningly simple concept. With a little bit of coding work (building a DITA importer to get XML content into the WordPress database), the team at Northgate offer a web-based help system which allows users to add their own notes and to vote for useful information, and which can receive new content with each release.

How? By using WordPress features. Notes are left as comments, votes are left using a WordPress plugin, and the updateable content is controlled by only allowing the customer (who has access to the WordPress admin screen) to create Pages, leaving the Posts controlled by Northgate. I use WordPress for this website, and spoke to Sonia in the evening to confirm some of the finer details. It’s a very clever use of WordPress, and I hope Northgate release their DITA importer to the open source community!
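Northgate haven’t (yet!) released their importer, and theirs writes straight into the WordPress database, but as a rough sketch of the general idea, here’s how you might push a DITA concept topic into WordPress using the WordPress REST API instead; the site URL, credentials, and the crude body handling are all placeholders and assumptions of mine:

    # Sketch: import a DITA concept topic into WordPress as a draft post
    # via the REST API. This is NOT Northgate's importer (theirs targets
    # the database directly); URL and credentials are placeholders.
    import requests
    import xml.etree.ElementTree as ET

    def import_dita_topic(path, site, auth):
        root = ET.parse(path).getroot()  # e.g. a <concept> element
        title = root.findtext("title", default="Untitled")
        body = root.find("conbody")
        # Crude conversion: serialise the body children as-is. A real
        # importer would map DITA elements to proper HTML.
        content = "" if body is None else "".join(
            ET.tostring(el, encoding="unicode") for el in body)
        resp = requests.post(f"{site}/wp-json/wp/v2/posts", auth=auth,
                             json={"title": title, "content": content,
                                   "status": "draft"})
        resp.raise_for_status()
        return resp.json()["id"]

    # import_dita_topic("topic.dita", "https://example.com", ("user", "app-password"))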

Session 4 – Questions and Rants
A short session with four speakers, each giving a two-minute ‘rant’ and then taking questions. Nothing particularly noteworthy came of it, but it was a good addition to the usual style of presentations and made for a little bit of light relief.

Session 5 – Dave Gash – True Separation of Content, Format, Structure and Behaviour
Another familiar name, Dave is always an entertaining and very dynamic speaker, and in this session he even managed to make the somewhat mundane topics of HTML, CSS, and JavaScript interesting!

Outlining some basic principles, he showed how you could take an HTML file full of embedded behaviours (JavaScript), style rules (CSS), structure, and content, and strip out all four parts into a more manageable set of files. This way, holding the style and behaviours in referenced files, you can make changes to either and have them ‘ripple’ through all of your deliverables.

Admittedly this was all done by hand but the basic principles are something that you should be following if you have that kind of output.

Session 6 – Matthew Ellison – User-centred Design of Context-sensitive Help
Interesting presentation by Matthew which started a little slowly, covering the history of context-sensitive help before taking us on to the idea of task support clusters. Originally presented by Michael Hughes at the WritersUA conference, the premise is to offer the user a smarter landing page, referred to here as a Keystone Concept topic.

The key to a successful Keystone Concept topic is not to limit what is presented, and to consider that it should differ depending on the context from which it was launched, with the ultimate aim of getting the user back on task as quickly as possible. This includes any form of tips and hints, and crucially it suggests NOT including the obvious stuff (don’t answer questions that users will never have!). This mirrors part of the theme from Joe’s talk earlier in the day, and certainly seems a sensible goal given the business (time and resource) pressures we are all under.

After that, I had a few beers and a chat with some other delegates, and as ever it was great to hear that most of us have similar issues, problems and solutions.

I’ll post my notes from Day 2 of the conference tomorrow.
