Gaining…no, Earning Cross-Functional Ownership of Field Documentation


CIDM

February 2008




Chona Shumate, Cymer, Inc.

Sometimes the most amazing part of a success story is to realize, in hindsight, that you had all the pieces to make it a success all along—you just didn’t know it. I experienced this recently when I discovered that the combination of (1) collaborative product ownership and (2) learning who your real customers are can radically improve the value of technical publications.

For several years, poorly reviewed field service documentation was impacting product performance and customer satisfaction at my company. My technical publications team needed cross-functional organizations to support and participate in technical reviews and procedure validation for all technical procedures performed in the field. All attempts at achieving this cooperation had failed. There were obvious reasons. Specifically,

  • Responsible Engineers (REs) were not held accountable for the quality of their technical reviews or for doing technical reviews.
  • Technical Publications (TPs) had little support in getting all the non-TP resources needed to validate procedures before they were published.

And the results over time were predictable:

  • Procedures distributed to the field were inaccurate.
  • Field Service Engineers (FSEs) knew this and distrusted the procedures.
  • FSEs performed the procedures their own way, inconsistent from site to site, from FSE to FSE, with problematic results.

Eventually, these problematic results landed on my doorstep. Our field procedures had a bad reputation. The essential problem was how to change the behavior—of all the stakeholders—so that they understood the true value of, and their responsibility for, product documentation. From experience, I’ve learned that behavior can change if you communicate a compelling reason for the change and if there’s an ultimate gain for the stakeholders. But change can make people difficult to deal with. I needed to find out how behavior change had succeeded in the past. Based on research and interviews with our business process folks, I broke this effort into the following objectives:

  • Drive the need with data.
  • Correct the field perspective.
  • Standardize.
  • Use existing processes.
  • Convince engineering that design impacts serviceability.
  • Value and strengthen key relationships.
  • Work empathetically with your customers.

Drive the Need with Data

To counter the growing negative performance in the field, I needed persuasive ammunition. As at most data-driven companies, no matter how true you know your information to be, it won’t mean squat if it’s anecdotal. What I needed was hard evidence that at least some of the customer damage in the field was fixable with solid, accurate documentation.

But what data would be convincing enough to support a change in behavior and accountability? I investigated which data the company was already paying attention to and how I could contribute to it to gain support for better technical reviews and validation.

The data collection actually proved fairly easy. A standard metric in the semiconductor industry is MTTR: mean time to repair. Simply put, each part n on one of our products is assigned an MTTR value x: the average amount of time it takes an FSE to service and repair that part, its average service time. I argued that if FSEs had accurate documentation, they would be able to service the part correctly, consistently, more efficiently, and faster, thereby reducing the MTTR.
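
To make the arithmetic concrete, here is a minimal sketch in Python of how an MTTR value can be derived per part. The part names and service hours are invented for illustration; they are not Cymer data.

    # Hypothetical sketch: deriving MTTR (mean time to repair) per part
    # from a log of service events. Part names and hours are invented
    # for illustration only.
    from collections import defaultdict

    # Each record: (part serviced, hours the FSE spent on the repair)
    service_events = [
        ("chamber", 6.5), ("chamber", 5.0), ("chamber", 7.5),
        ("optics module", 3.0), ("optics module", 2.5),
    ]

    hours_by_part = defaultdict(list)
    for part, hours in service_events:
        hours_by_part[part].append(hours)

    # MTTR for a part is the average service time across its repairs.
    for part, hours in sorted(hours_by_part.items()):
        mttr = sum(hours) / len(hours)
        print(f"{part}: MTTR = {mttr:.1f} hours over {len(hours)} repairs")

A tighter spread of service times around that average is exactly what standardized, validated procedures are meant to buy.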

Further, to derive this MTTR value for any one part, we had to standardize the procedures used. Standardization was problematic; procedures were not always being followed because the FSEs held them suspect. They had their own methods, techniques, and tricks for getting the product up and running. Even though all FSEs attended six weeks of rigorous product training, our field veterans used this tribal knowledge to mentor new FSEs with on-the-job training (OJT) when they returned to their job sites, and this peer OJT often overruled whatever the formal training had taught. Obviously, this practice promoted inconsistent service and performance by both the FSE and the product.

Correct the Field Perspective—Fix What’s Out There First

This project began to take on the character of peeling the proverbial onion. We realized that to invoke technical discipline with new procedures, we had to clean up our own backyard first. We had to fix whatever errors existed in procedures already published. We had to regain the confidence of our FSEs. We created a simple feedback mechanism for FSEs to send in corrections via our intranet, documented the process, and held internal presentations and remote WebEx sessions on how to use it. We invited all FSEs, their managers, and Engineering. At this point, we thought we had it covered and could stand back and let the process work.

However, processes are only effective if they are used. Most FSEs were jaded by the belief that if they took the time to identify errors and submit them, Tech Pubs wouldn’t actually do anything about it. To change that belief, I worked from two angles: make it a required process and be transparent about the process. I met with the upper management of field service and explained that if we didn’t know about the errors, we couldn’t fix them. In addition, if their FSEs provided feedback, we would, by policy, not only respond publicly but also measure how well we did and post those metrics.

We quickly learned that unless we somehow acknowledged receipt of the input, FSEs continued to believe that their feedback went into a big black hole, their corrections never to see the light of day. We created an auto-reply via our email system confirming receipt of their entries and wrote simple scripts that copied the text they had entered in the form’s description field into that acknowledgment. We came through on our commitment and published metrics collected from the feedback database: number of errors per week vs. error closure rate. It wasn’t pretty at first, but the process was working.
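
As a rough illustration, here is a minimal sketch of that kind of roll-up, assuming a simple feedback-record layout. The fields, dates, and counts are invented; our actual database and scripts differed.

    # Hypothetical sketch: weekly error submissions vs. closure rate,
    # rolled up from a feedback database. Record layout is invented.
    from collections import defaultdict
    from datetime import date

    # Each entry: (date submitted, whether the correction is closed)
    feedback = [
        (date(2007, 3, 5), True),
        (date(2007, 3, 7), False),
        (date(2007, 3, 14), True),
        (date(2007, 3, 15), True),
    ]

    weekly = defaultdict(lambda: {"submitted": 0, "closed": 0})
    for submitted_on, closed in feedback:
        week = submitted_on.isocalendar()[1]  # ISO week number
        weekly[week]["submitted"] += 1
        weekly[week]["closed"] += int(closed)

    for week, counts in sorted(weekly.items()):
        rate = counts["closed"] / counts["submitted"]
        print(f"Week {week}: {counts['submitted']} errors submitted, "
              f"{rate:.0%} closure rate")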

Lo! The wall of cynicism began to crumble. Feedback started coming in quickly, and we made tremendous gains in correcting the procedures and building trust with our FSEs. I continued to solicit usability feedback via regional site visits with teams of FSEs, taking them to lunch and having candid conversations about what was working, what was not, and what ideas they had for improvement. From each visit, I came away with great ideas, as well as small, subtle changes we could make that made a big difference for them in the field. One such idea was to change the background color of our online documentation so that it was more readable under yellow fab lights. I can say Duh! in hindsight, but without listening and learning about their work environment, how could we know? Another idea was to provide status on their feedback: was it completed? In the works? Ignored? This status report became important when several corrections depended on an engineering fix and took a long time to incorporate; to the FSE, it appeared as if we were ignoring the feedback. I now publish the status of all feedback entries.
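
The status report itself needs nothing elaborate. Here is a sketch of such a rollup, with invented entry IDs and status labels, that tells an FSE at a glance whether a correction is done, in progress, or waiting on an engineering fix:

    # Hypothetical sketch: publishing the status of every feedback entry.
    # Entry IDs and statuses are invented for illustration.
    from collections import Counter

    entries = [
        ("FB-101", "completed"),
        ("FB-102", "in progress"),
        ("FB-103", "waiting on engineering fix"),
        ("FB-104", "completed"),
    ]

    print("Feedback status report")
    for entry_id, status in entries:
        print(f"  {entry_id}: {status}")

    # Summary line per status so no entry looks ignored.
    for status, count in sorted(Counter(s for _, s in entries).items()):
        print(f"{status}: {count} entries")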

Standardize

With trust building on our serious intent to improve documentation, we promoted the idea that if we could standardize service times for MTTR, we could standardize FSE performance and actually improve MTTR. This required all FSEs to follow the same procedures step by step. To make the procedures mandatory, we had to be diligent in making sure they were as accurate as we could get them. They required thorough technical reviews and in-situ procedure validation, particularly for newly developed procedures. Rather than explain our procedure validation process here (this is not design validation—big difference), I refer you to the flow chart (Figure 1). The key criterion is to perform the procedure exactly as written, step by step, on a working product, as close as possible to how it would be performed in the field.


Figure 1: Procedure Validation Flow Chart

While procedure validation was getting under way, a large FSE Certification initiative was in progress. I leveraged it because it went hand in hand with my objectives. I collaborated with the technical training manager to standardize the tasks each FSE was to perform so they could be certified on those tasks. This added to the need to standardize each procedure so that an FSE performing a chamber replacement in Korea used the same tools and procedures as one in Israel, Boise, or Singapore—all were to be certified to perform the identical task regardless of location.

Use Existing Processes

We are fortunate that process is king at our company: if you have a process, you can get something done. Not all processes are effective or efficient, but they are consistent and mostly documented. Over the years, as new people came on board, the ineffective processes became more obvious because new people could not rely on the disappearing tribal knowledge. At one point, the company went wild on process, and we’re better for it, albeit exhausted. So, being somewhat process-weary, my strategy was not to reinvent the wheel and introduce yet more new processes to follow. Rather, why not leverage existing Engineering processes and modify them for tech reviews and validation?

I decided to tackle technical reviews first. Engineers were already familiar with design reviews and took their accountability for them seriously. I worked with an Engineering Project Manager (EPM) to find out how we could use their design review process for technical reviews. The EPM liked the idea—nothing new to document or train. We used the same form to capture the review meeting activity. He even took on scheduling the reviews with the RE, the assigned writer, and the program leads. Using the same form and roughly the same review process (Figure 2) helped this process get adopted quickly.


Figure 2: Audit Checklist

Because we develop topic-based documentation, it aligns nicely with the module components of our products, which is also how REs are assigned work. This alignment simplified review scheduling: for each product module (i.e., topic), a writer, the RE, a trainer if available, and a program manager were invited. We provided a list of all the procedures (by topic) needing technical reviews, and the EPM scheduled a design review meeting for each. Group technical reviews are an interesting event. Although most attendees had not read the topic before the meeting, participants engaged in lively discussion. After all, technically, the procedure was a result of their design. This list of procedures also served as a deliverable to tech training, so trainers knew which procedures they had to modify for training lab exercises.
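
Because the topic-to-module mapping drives the scheduling, the invite list for each review practically builds itself. Here is a minimal sketch, with invented names, modules, and role assignments (not our actual roster):

    # Hypothetical sketch: building the technical-review invite list per
    # product module (topic). Names and modules are invented.
    assignments = {
        "laser chamber": {"writer": "A. Writer", "re": "B. Engineer"},
        "optics module": {"writer": "C. Writer", "re": "D. Engineer"},
    }
    trainers = {"laser chamber": "E. Trainer"}  # trainer, if available
    program_manager = "F. Manager"

    for topic, roles in assignments.items():
        invitees = [roles["writer"], roles["re"], program_manager]
        if topic in trainers:
            invitees.append(trainers[topic])
        print(f"Technical review for '{topic}': {', '.join(invitees)}")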

Convince Engineering that Design Impacts Serviceability and MTTR

After reaching a motivating level of success with technical reviews, I ventured into procedure validation. I anticipated validation would be the most difficult, but I knew it would have the greatest impact on the company. I set out to convince Program Managers that an engineering design wasn’t complete until the serviceability of the design was tested; that is, a measure of their design had to include how the design helped or hindered the FSE in doing his or her job. Poor design caused longer service times, so good design needed to include criteria for easier serviceability and a standardized MTTR. I’m not confident I convinced everyone of this principle, but I did get agreement that the RE was responsible for participating in procedure validation. There actually were occasions when, during validation, it became apparent that the design didn’t allow for proper service; for example, an FSE’s arm could not reach far enough behind the module to unscrew the bolts.

Because validated procedures help training deliver more robust exercises, the trainers were motivated to see this process work. Technical training offered their classroom labs and tools as a venue. Training classrooms are ideal settings for procedure validation: they’re typically designed for group activity, tools are available, and room availability is as predictable as the training schedule. The procedure validation has also served as a learning session for new FSEs, who are encouraged to participate if the procedure isn’t complex. The trainers are commonly part of the validation team because they recognize the value of testing these procedures before they have to teach them.

Strengthening Relationships

This process win for tech reviews and validation was not accomplished overnight, or even over months. It took several years of persistent convincing of the stakeholders. Engineers, by their training, design with a disciplined methodology. If you can incorporate your processes into a framework they understand and are familiar with, you stand a better chance of success. Note that this evolving set of processes is being refined with each new product program. The enlightenment comes when you realize that cross-functional participation has evolved into cross-functional ownership of the product. We all have skin in the game, and everyone is accountable until the rubber meets the road: no one’s responsibility to the product is complete until it is successfully installed in the field.

Building relationships with Field Service, Training, and Engineering is proving invaluable. Not all, but most REs now understand their role in completing the design process via the documentation review process. Over time, a solid, trusting relationship has allowed me to convince these teams that their participation and support of tech reviews and validation will promote better customer service and better success for our product. A successful product in the field is a success for everyone. Make no mistake, this is a huge behavior change at our company.

Success often comes not only with changed process but with a changed mindset. We were successful in bringing about change in how the field corps and Engineering viewed documentation. In the process, we changed our own perspective dramatically, too. We learned to view our FSEs as customers, our primary customers. If we could make their jobs easier, we had succeeded, and the end customer would be happier. After spending weeks shadowing them at various customer sites, I came to appreciate the difficulty of the job they do and how they take the brunt of the customer’s frustration when a product goes down. They work horrendous hours far from headquarters, often in a cold or hot, stuffy lab, under great pressure to keep the product up and running as much as possible. I make a point to meet every FSE at least once. I attend their training graduation and give a rally speech. I refer to them as our “soldiers in the field” and tell them that their feedback is a critical part of our success.

Work Empathetically—Applying Ideas from the Field

When we consider our users in this light, we begin to understand their world and work empathetically, trying to put ourselves in their shoes, adapting to their processes, and listening to their ideas, not just their complaints. Interestingly, this entire effort empowered our FSEs to feel they could make a difference, not only improving their own ability to do their job but also that of their fellow FSEs worldwide. They take pride in seeing their ideas grow into features we incorporate into our web site on an ongoing basis. Candid dialogue and strong communication paths with our field service corps and Engineering have paid back in spades. We’ve gained their trust, their cooperation, their respect, and a solid working relationship where we see ourselves on the same team.

Chona Shumate

chona92024@yahoo.com

For the past 12 years, Chona Shumate has been a senior technical publications manager at a semiconductor company. Prior to that, she worked as a technical writer for 8 years in Silicon Valley. She has a background in usability testing and topic-based/single-sourcing information design. She holds a master’s degree in Technical Communication. She and her team are currently implementing an XML/DITA-compliant content management system.


 
