Formal SME Feedback Mechanisms

Ben Colborn & Nathan Jackson, Citrix Systems, Inc.

The Need for SME Feedback

Do you request that subject-matter experts (SMEs) review content before it is released? Almost certainly you do: such reviews are considered a critical step in developing quality information (Carey et al., 2004). Surprisingly, the “how” of gathering feedback has received little attention. In an unsystematic survey of technical writing textbooks, reference books, and articles, we found that the treatment of the mechanics of gathering feedback was typically limited to a single directive: “At this point in your process, get feedback from SMEs.” That’s it: no description of how to do so.

Given that working with SMEs can be one of the most difficult aspects of the technical communicator’s job (Lee & Mehlenbacher, 2000), and that getting SME feedback is one of the most critical aspects of content development, shouldn’t the mechanics of working with SMEs get a little more attention?

Ad Hoc Feedback Processes

Courseware developers in Citrix Education typically accepted SME feedback in any form: handwritten on hard copy, listed in a spreadsheet that was emailed around, noted in the body of email messages, or entered as notes on a PDF. From the SMEs’ perspective, the comments seemed to go into a black hole: they could not be certain that their feedback had been received, let alone incorporated. The problem was no smaller on the courseware developers’ side: once feedback was received, it was spread across several different formats and could not easily be shared and tracked within the project team.

The issues with this process are as follows:

  • Feedback is not registered
  • Feedback is not tracked
  • Feedback is not public
  • Feedback is received in multiple formats

Criteria for a Formal Feedback Mechanism

To address the shortcomings of ad hoc feedback processes, we identified the following criteria for a formal feedback mechanism.

  • Contextual: Feedback appears in the document itself
  • Unified: Feedback from multiple reviewers is accessible from a single source
  • Web-accessible: Feedback is kept in a web-accessible electronic source, not on paper or in email
  • Trackable: Feedback is tracked by priority/severity, impacted modules, submitter, owner, and status (open, in progress, closed)
  • Generic: The tool can be used for eLearning, instructor-led training (ILT), and any other deliverable type or modality
  • Usable: The tool is easy to use for both reviewers and developers

Feedback Models and Tools

In searching for alternatives to our ad hoc process, we identified the following feedback models and tools.

  • Issue tracking: Bugzilla, Trac, Jira, Arctic
  • Document review: Acrobat, SecurView
  • Collaborative writing: wikis
  • Hybrid: SharePoint, PleaseReview

Unfortunately, none of them met all of the identified criteria, as the following comparison shows. (Usability is rated separately for SMEs and courseware developers on a scale of 1-10, where 1 is most difficult and 10 is easiest.)

[Matrix comparing comments on paper, comments in a file, verbal comments, a spreadsheet, an issue tracker, and a collaborative writing system against the contextual, unified, web-accessible, trackable, generic, and usable criteria]

Usability ratings (SME / courseware developer):

  • Comments on paper: 10 / 5
  • Comments in file: 10 / 6
  • Verbal comments: 10 / 3
  • Spreadsheet: 5 / 8
  • Issue tracker: 5 / 5
  • Collaborative writing system: 6 / 8

We piloted two different issue trackers (Trac and Arctic) and SharePoint. We attempted to pilot the document review capabilities of Acrobat but met with a lack of cooperation from the SMEs, who reverted to their usual ad hoc ways of providing feedback. Ultimately we settled on an Arctic-based system for gathering and managing feedback.

The Citrix Education Formal Feedback Mechanism

Tracking feedback offers several advantages:

  • Reviewers are accountable for the feedback they send or fail to send
  • Reviewers have greater confidence that their feedback is being used
  • Developers are accountable for the feedback they receive
  • Developers can track the feedback they have incorporated
  • Project leads can track the feedback received and incorporated

The only drawback is that the courseware developers have a new tool to learn, though the learning curve is not steep. SMEs, who generally have some sort of engineering background, are very familiar with the issue tracking model and understand its value.

We have been using the Arctic Issue Tracker (www.arctictracker.com) for a little over a year to submit and track issues found in online courseware during testing. Arctic is a simple PHP-based web application with a MySQL back end that allows users to submit context-sensitive information about a course they are reviewing. There are several ways to implement an issue tracker within a workflow, but we wanted to make the experience as simple as possible so that users could submit comments quickly and easily.

To track issues for a particular course, the developer must set up a “project” within the system that will contain all the issues for that course. Each project is assigned a unique ID. Once a project is set up, the developers can grant users access to read, write, or modify the issues contained in that project. Reviewers can then begin adding issues to the project through the Arctic “Add Issue” form.
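For illustration, the project setup described above can be pictured as a small record like the following. This is a minimal sketch; the type and field names are ours, not Arctic’s actual schema.

```typescript
// Hypothetical sketch of the project and access model described above;
// field names and types are illustrative, not Arctic's actual schema.

type AccessLevel = "read" | "write" | "modify";

interface ReviewProject {
  id: number;                            // unique project ID assigned by the tracker
  courseTitle: string;                   // the course whose issues this project collects
  members: Record<string, AccessLevel>;  // reviewer username -> access level
}

// Example: one course review project with an SME and a courseware developer
const exampleProject: ReviewProject = {
  id: 42,
  courseTitle: "Example Online Course",
  members: {
    "sme.reviewer": "write",   // can submit issues
    "cd.owner": "modify",      // can edit, assign, and close issues
  },
};
```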

This method, however, relies on the user to provide context about the area in the course where the issue was found. Since reviewers do not always provide that information, we decided to do it for them. When a course is sent for review, the responsible developer publishes it to an internal online course framework. Because we author our courseware in DITA, we can use the same framework whether the courseware is intended for online or instructor-led use. Within this framework, we include code that links a course to a particular project in the Arctic system using the Arctic project ID. We also created custom fields in the “Add Issue” form to store context-sensitive information about where the issue was found, such as the page title, page filename, and page number.

When users view a course in “Review” mode, they see a small “Add Issue” button at the top of each page in the course. All they have to do is click that button, and the Arctic form opens pre-populated with all the contextual information. The users then enter their issue-related information in the provided fields.
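As a rough sketch of this flow, the “Add Issue” button on each review-mode page can simply open a pre-populated form URL. The tracker URL, function name, and parameter names below are hypothetical stand-ins, not Arctic’s actual field names; the real implementation relies on the custom fields we defined in the “Add Issue” form.

```typescript
// Sketch of how a page in "Review" mode could build its "Add Issue" link.
// The tracker URL and parameter names are hypothetical.

interface PageContext {
  projectId: number;   // Arctic project ID linked to this course
  pageTitle: string;
  pageFile: string;
  pageNumber: number;
}

function buildAddIssueUrl(trackerBase: string, ctx: PageContext): string {
  const params = new URLSearchParams({
    project: String(ctx.projectId),
    page_title: ctx.pageTitle,
    page_file: ctx.pageFile,
    page_number: String(ctx.pageNumber),
  });
  return `${trackerBase}/add_issue?${params.toString()}`;
}

// Example: the "Add Issue" button on one course page opens this URL,
// so the reviewer only has to type the issue itself.
const addIssueUrl = buildAddIssueUrl("https://tracker.example.com", {
  projectId: 42,
  pageTitle: "Configuring the Web Interface",
  pageFile: "web_interface.html",
  pageNumber: 17,
});
```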

When developers go into the Arctic system to track issues that have been submitted, it is easy for them to find the page that the user had an issue with and correct it. They can also add notes to the ticket, documenting follow-up questions. Arctic also tracks the status of changes (new, confirmed, fixed, and so on) and which developer is responsible for correcting the issue.
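The information tracked for each issue can be pictured as a record like the one below: the context captured automatically at submission plus the workflow fields developers use for follow-up. Again, this is a hypothetical sketch rather than Arctic’s actual database schema.

```typescript
// Hypothetical shape of a tracked issue; an illustration, not Arctic's schema.

type IssueStatus = "new" | "confirmed" | "fixed" | "closed";

interface TrackedIssue {
  id: number;
  projectId: number;     // course the issue belongs to
  pageTitle: string;     // context pre-populated by the review framework
  pageFile: string;
  pageNumber: number;
  description: string;   // the reviewer's comment
  submittedBy: string;
  assignedTo?: string;   // developer responsible for the correction
  status: IssueStatus;
  notes: string[];       // follow-up questions and answers on the ticket
}
```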

Conclusion

While the Arctic issue tracker has not solved all issues with SME review, it does meet five of the six criteria: unified, web-accessible, trackable, generic, and usable. It does not record the feedback precisely in the location where it applies, though it does record the topic. This system has been used for both technical and editorial reviews, and the results have been positive.

As always, the most important component of successful information review is communication between the information developers and reviewers. A formal mechanism to track feedback can assist with this communication.

References

Carey, M., Hargis, G., Hernandez, A., Hughes, P., Longo, D., Rouiller, S., & Wilde, E. (2004). Developing Quality Technical Information: A Handbook for Writers and Editors. Upper Saddle River, NJ: IBM Press.

Lee, M., & Mehlenbacher, B. (2000). Technical Writer/Subject-Matter Expert Interaction: The Writer’s Perspective, the Organizational Challenge. Technical Communication, 47(4), 544-552.
