It’s a common topic, an oft-repeated complaint, and one on which many, many lines of advice have already been written and countless suggestions offered. So here’s a new slant.
Most technical reviews fail because of the writer, not the reviewer.
It’s not because the reviewers couldn’t find the time, not because they didn’t understand the need or reasoning behind the review, not because they didn’t know what to do, and most certainly not because they don’t value your contribution to the product and your part in the development process.
Because if any of the above reasons are true, it’s YOUR fault, your responsibility.
Katherine Brown covers this area well in her recent article, noting that:
reviews can often go awry for a number of reasons at a number of points in the overall process:
- Poor communication
- Lack of preparation
- Lack of management support
- Unclear expectations and objectives for the review
- Insufficient time planned for the review
- Lack of follow-up
- Wrong people involved, or right people involved at the wrong time
Katherine goes on to offer solutions to all of these issues, but I have a different slant, namely:
- Poor communication – if, as a technical author, you cannot communicate, there are bigger problems than the technical review!
- Lack of preparation – you are asking reviewers to give up their time; even if it has been agreed in advance, many will consider this an ‘extra’ piece of work. Not preparing for it is likely to come across as both unprofessional and arrogant. If you don’t seem to care about the review, why should they?
- Lack of management support – yes, we do need to fight a little harder to get support for our work from management. No, it isn’t the way it should be. As professionals we need to learn to promote ourselves and, by our actions, gain the support we need.
- Unclear expectations and objectives for the review – as I’ve said, many people treat reviews as an interruption, so it’s up to you to make the objectives and requirements clear. What should they be looking for? What should they report back, and how? If they have a general query how do they present it during the review?
- Insufficient time planned for the review – as a project manager once said to me, there is no such thing as “not enough time”, just something called bad planning. Yes, you may need to fight to get allocated time (and you should; ad-hoc reviews are never as productive as well-organised, scheduled sessions), but the review is an important part of the technical publications process, so fight your corner hard.
- Lack of follow-up – it’s not hard to send a short email summarising the main review comments or outcomes to those who were involved. This is something I am terrible at, but I know, from receiving similar communications myself, how well they work and how good they make the recipient feel.
- Wrong people involved, or right people involved at the wrong time – “the more eyes the better” doesn’t always hold true. You need to identify the best people and make sure they are reviewing the content at the correct time, enlisting the help of your friendly project manager if required.
This may seem harsh but, and this is something I’m guilty of myself, there can be a tendency to wrongly apportion blame: to presume that technical reviews are failing because no-one else is interested, or that the work you do doesn’t need a review anyway (and you hate to bother those busy developers, right?).
We are responsible for our work and for the information we produce, and as the technical review is part of that production process (it is NOT a QA check!), if it is failing… well, I’m sorry, but it’s your fault.
[…] Gordon McLean analyzes why technical reviews fail. […]