Joyce Du, Alesk Savic, Mario Carriere & Colleen Edwards, CSI Global Education

Is there universal advice regarding the implementation of DITA? Is there a migration guidebook so thorough it will ensure success? Of course not! Each organization is different, operating with its own mission, goals, and competitive environment.

At Moody’s Analytics, the mission of the Financial Services Training and Certification (FSTC) group is to develop educational material for financial professionals at all stages in their careers. Developing learning content is central to our daily activity. While there is no universal advice we can provide, we have learned a great deal while migrating to DITA at our organization. We want to share with you what has helped us to move forward in our journey.

In a nutshell:

  • Don’t bite off more than you can chew
  • Tap into your organization’s implicit knowledge
  • Have a formal retrieval strategy

Don’t Bite Off More Than You Can Chew

When planning a migration, it’s tempting to do everything at once and convert all of your legacy content into beautifully structured DITA components. However, in our case, with hundreds of courses and thousands of learning objects, that would take years. We needed to change our definition of what a successful migration might look like.

Instead of undertaking the task of converting all of our legacy content, we prioritized our content into two groups: legacy material that can be stored in our CMS “as is,” and high-value content that will be broken down into DITA components.

For legacy material, our approach is to create a single, standalone object for each course and to wrap that object in a DITA wrapper. The content of that standalone object consists of unstructured, non-DITA components. The DITA wrapper allows us to publish our legacy content through our publishing engine and deploy it in the same way we would DITA-based course content.
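As an illustration, such a wrapper can be a minimal DITA map whose topicref points at the legacy package and declares its non-DITA format, so the publishing engine passes it through rather than parsing it. The file names and course title below are hypothetical, not our actual markup:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map>
  <title>Introduction to Mutual Funds (legacy)</title>
  <!-- format="html" tells DITA processors this reference is a
       non-DITA object to be carried through to output as-is -->
  <topicref href="legacy/intro-mutual-funds.html" format="html"/>
</map>
```

Because the wrapper is an ordinary DITA map, the legacy course flows through the same build and deployment pipeline as fully structured courses.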

This approach makes the scope of our migration much more manageable. We are not wasting time unnecessarily restructuring legacy content that we won’t ever repurpose or reuse and can focus our efforts on our library of reusable learning content.

Tap Into Your Organization’s Implicit Knowledge

Creating a usable library of structured content from a vast repository of unstructured material is a daunting task, but not impossible. Our subject matter experts (SMEs) know a great deal about what content appears in multiple locations, what content is unique, and what content is frequently changed. Tapping into the knowledge that our SME network has about the structure of our content repository is an essential piece of the puzzle.

Here’s an example of how we approached this challenge. We began by locating a topic that appears in seven different courses. We pulled the topic out of each course and put the seven versions side by side to compare them. The similarities allowed us to determine the generic content model that governs all of these topics. And our SMEs helped us to understand whether the differences mattered or not.

In some cases, the differences tell us about the conditional properties that we might need to manage. For example, the topic of Spousal RRSPs (Registered Retirement Savings Plans in Canada) appeared in several of our courses, but went into different levels of detail for an introductory student versus a wealth manager. In this example, the differences are relevant. But in other cases, the differences are actually unimportant and are just a product of having written a topic seven times.
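In DITA, audience-driven differences of this kind are commonly handled with conditional processing attributes such as audience, with a DITAVAL file deciding at publish time which conditions to include. The element content and attribute values below are illustrative, not our actual markup:

```xml
<topic id="spousal-rrsp">
  <title>Spousal RRSPs</title>
  <body>
    <!-- shown to every audience -->
    <p>A spousal RRSP allows one spouse to contribute to a plan
       owned by the other spouse.</p>
    <!-- included only when publishing for the advanced audience -->
    <p audience="wealth-manager">Contribution room is based on the
       contributor's own limit, which matters for income-splitting
       strategies at retirement.</p>
  </body>
</topic>
```

A DITAVAL rule such as excluding audience="wealth-manager" would then produce the introductory version of the same topic from a single source.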

Coming up with a single reusable topic usually involves combining pieces of several different topics and rewriting segments to fit into your DITA framework. For us to do this in an efficient way, we needed to make the implicit knowledge of our organization explicit.
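Once that single reusable topic exists, DITA’s content-reference (conref) mechanism lets other topics pull the shared segment in by reference instead of copying it. A sketch, with hypothetical file and id names:

```xml
<!-- in course-a.dita: reuse the shared definition rather than rewriting it -->
<topic id="rrsp-overview">
  <title>RRSP Overview</title>
  <body>
    <!-- conref pulls in the paragraph with id "definition" from the
         topic "rrsp-common" in the shared file -->
    <p conref="shared/rrsp-common.dita#rrsp-common/definition"/>
    <p>Course-specific commentary follows the shared definition.</p>
  </body>
</topic>
```

When the shared definition changes, every course that conrefs it picks up the change at the next publish.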

Have a Formal Retrieval Strategy

This SME-driven process of constructing reusable topics has helped us to consolidate our metadata strategy. Our DITA-based CMS features conditional publishing, but what are these conditions, exactly? Again, turning a critical eye to the differences in similar topics helped us to clarify this problem.

For example, let’s look at the topic of RRSPs. In one topic, we see that beneficiaries of RRSPs can be named directly on the RRSP application. In another, however, we see that beneficiaries must be named in a will. In both examples, RRSPs can have beneficiaries, but there is clearly a difference in how they are declared. The difference is based on location: in Quebec, a will must be used, but in all other provinces, it is sufficient to name the beneficiary with the financial institution. Geographic differences are therefore critical to our metadata strategy.
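Geography then becomes one of the conditions our publishing engine can filter on. In DITA this might be expressed with a conditional attribute such as otherprops; the attribute choice and values here are illustrative, since the standard leaves such taxonomies to each organization:

```xml
<!-- one topic, two region-conditioned paragraphs -->
<p otherprops="quebec">In Quebec, the beneficiary of an RRSP must be
   named in the account holder's will.</p>
<p otherprops="rest-of-canada">Outside Quebec, the beneficiary may be
   named directly on the RRSP application.</p>
```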

But as we said before, there is no universal guidebook: while geographic differences are critical for us, they may not be critical for all organizations. It is essential to approach the task of understanding your content with an open mind. We started with the IEEE Learning Object Metadata (LOM) standard, but the process of evaluating our content is helping us to narrow down the standard to metadata sufficient for our use, reflecting the real way we identify, classify, and search for information.

In addition to a metadata strategy, we also developed a rigorous file naming convention and standards for file and folder structure prior to migration.
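A naming convention is only rigorous if it is enforced. As a sketch, a pattern like the following can be checked automatically before files enter the CMS; the convention itself is hypothetical, not our actual standard:

```python
import re

# Hypothetical convention: <course-code>_<topic-slug>_<topic-type>.dita
# e.g. CSC101_spousal-rrsp_concept.dita
NAME_PATTERN = re.compile(
    r"^[A-Z]{3}\d{3}"             # course code, e.g. CSC101
    r"_[a-z0-9-]+"                # topic slug, lowercase with hyphens
    r"_(concept|task|reference)"  # DITA topic type
    r"\.dita$"
)

def is_valid_filename(name: str) -> bool:
    """Return True if the file name follows the naming convention."""
    return NAME_PATTERN.match(name) is not None

print(is_valid_filename("CSC101_spousal-rrsp_concept.dita"))  # True
print(is_valid_filename("Spousal RRSPs final v2.dita"))       # False
```

A check like this can run as a pre-commit or pre-ingest step, so inconsistent names never reach the repository.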

We are proponents of an iterative approach: build a retrieval strategy, migrate a few courses, test the strategy, and then use the results of the test to refine the retrieval strategy.

For us, migrating content to DITA is a strategic move. It will enable us to grow more rapidly. We approach development with a holistic vision, knowing that the new production processes will change how people work together. Given the scope of work, we use an incremental approach that allows us to reap the benefits as soon as possible.

Parts of the approach described here were adopted after a period of trial and error. We are still in the process of migrating and will learn other lessons while performing the work.