Sabine Ocker, Comtech Services
October 1, 2020

Last month, we invited Barry Grenon and Kevin Kuhns of Genesys to come to the CIDM Monthly Round Table and give members a demo of their dynamic publishing environment. Since a headless CMS is an essential component of that environment, I circled back with Barry and asked him a few questions. Below are his responses.

Sabine: Can you tell us about your role at Genesys—what are you responsible for, how long have you worked there, what kind of deliverables are created by your organization?

Barry: I have been working for Genesys for 13 years. Currently, my role is Director, Information Experience (IX) Platforms. My team is responsible for the administration and customization of our various IX websites. Deliverables include website content (prime delivery) covering all aspects of our technical documentation. This online content is also a source for generated PDF, in-application contextual help, and in-application notifications.

Sabine: Can you help readers who might not be familiar with a headless CMS environment understand what it entails?

Barry: A headless CMS separates content from output, which may not sound so revolutionary to the technical communications industry. But because it is API-based, it gives web development teams more flexibility to build their sites on whatever stack they want. It also allows for easy sharing of content across silos by leveraging output APIs. One significant aspect of a headless CMS is that all content is de facto highly structured. You start by building page schemas, which typically map conceptually to the “web page” as the unit of delivery, not to a more abstract “chunk” that maps to a different information-modeling theory (e.g., DITA). In my opinion, this frees up your mind to think about your content more naturally, that is, in terms of what your readers need to read. But it also forces you to make your content structured; you can’t really “get started” without a page schema: you need fields in your user forms and fields in the backend for the thing to work at all.
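To make the “page schema” idea concrete, here is a minimal sketch in Python. The field names and the “feature article” content type are invented for illustration, not Genesys’s actual model; the point is that every page must declare its fields before any content can be authored or delivered.

```python
# A hypothetical "feature article" page schema, sketched as a plain Python
# dict mapping each required field to its expected type. In a real headless
# CMS, this schema would drive both the authoring form and the delivery API.
FEATURE_ARTICLE_SCHEMA = {
    "title": str,
    "summary": str,
    "audience": str,
    "body_sections": list,
}

def validate_page(page: dict, schema: dict) -> list:
    """Return a list of problems; an empty list means the page fits the schema."""
    problems = []
    for field, expected_type in schema.items():
        if field not in page:
            problems.append(f"missing field: {field}")
        elif not isinstance(page[field], expected_type):
            problems.append(f"wrong type for {field}: expected {expected_type.__name__}")
    return problems

page = {
    "title": "Configuring callbacks",
    "summary": "How to enable callbacks in the routing engine.",
    "audience": "administrators",
    "body_sections": ["Overview", "Procedure", "Troubleshooting"],
}
print(validate_page(page, FEATURE_ARTICLE_SCHEMA))  # → []
```

A page missing any declared field fails validation up front, which is exactly the “you can’t get started without a schema” constraint Barry describes.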

Sabine: What made you consider a headless solution? What problem does it solve that other solutions couldn’t?

Barry: We were already working in MediaWiki as our CMS, using unstructured content. Once we realized we could expand our existing stack to “Enterprise MediaWiki,” letting us move to fully structured authoring without moving to a new stack, we went for it. In any case, Enterprise MediaWiki has a long history of solving idiosyncratic knowledge management problems across a range of industries. Our feeling was that, at the time, no existing content platforms provided anything near the feature flexibility we required, so we used Enterprise MediaWiki to solve our problems, fitted precisely to the needs of our content. It is also our belief that Enterprise MediaWiki is in some sense a victim of its power and flexibility; it deserves broader consideration than it currently receives. Still, because it is less a set of features than a set of tools you use to build features, it can be hard to “grok” the potential. With the rise of headless CMS and Jamstack, in my opinion the basic pattern of structured content plus a query engine that displays content algorithmically (either at runtime or at build time) validates the approach we’ve been taking in MediaWiki, and I’m glad to see there are more tools out there now that use it.

Sabine: What is the tool stack in place at Genesys?

Barry: We have a mix of tools, since, like many modern enterprises, we are essentially a mix of companies bought up into one. We have legacy RoboHelp web help output and .chm files, and we have WordPress sites that expose some content in a headless manner via the WordPress API. Still, our main platform is Enterprise MediaWiki, which is MediaWiki plus a set of extensions that enable storing content in structured, API-accessible database tables, with form-based editing simplifying the experience for authors. We customize these extensions to fit our specific “technical documentation” requirements, and those features get released back into the open-source tooling.
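As a rough illustration of what “structured, API-accessible database tables” looks like in practice, here is a hedged sketch of building a query against the MediaWiki action API with the Cargo extension, one common way Enterprise MediaWiki exposes page content as queryable tables. The wiki URL, table name, and field names here are all hypothetical.

```python
from urllib.parse import urlencode

def cargo_query_url(base_url: str, table: str, fields: list, where: str, limit: int = 50) -> str:
    """Build a MediaWiki action API request URL for a Cargo table query.

    Assumes the target wiki runs the Cargo extension, which registers the
    `cargoquery` action. This only constructs the URL; fetching it would
    return the matching rows as JSON.
    """
    params = {
        "action": "cargoquery",
        "tables": table,
        "fields": ",".join(fields),
        "where": where,
        "limit": limit,
        "format": "json",
    }
    return f"{base_url}/api.php?{urlencode(params)}"

# Hypothetical query: all article pages for a "Routing" product.
url = cargo_query_url(
    "https://docs.example.com",
    table="Articles",
    fields=["_pageName", "Summary", "Product"],
    where="Product='Routing'",
)
print(url)
```

The appeal is that the same tables that back the authors’ editing forms are also queryable by any downstream consumer: a portal page, a PDF pipeline, or in-application help.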

Sabine: How long did it take to put the current solution in place?

Barry: Several years. It was not an end-to-end project, but a transition. MediaWiki is very suitable for rapid prototyping, so we could learn what we wanted from our structured content as we went, spinning up individual “content-as-data” projects that could live seamlessly alongside (or embedded within) otherwise unstructured web pages. We recently completed a migration of a significant documentation set from unstructured MediaWiki into a structured, API-accessible, forms-based authoring environment (still MediaWiki under the hood, but otherwise radically different). This project took about a year. That should give a sense of scale for a complex “from scratch” implementation.

Sabine: What were the challenges or limitations you encountered?

Barry: The challenges are the typical ones of working with open-source tools: finding bugs, requesting bug fixes, or fixing them ourselves. One major challenge over the long term has been our need to prioritize customer-facing “new features” over improving the authoring experience. Some internal inefficiencies were allowed to languish, to the annoyance of our writers, as we focused on improving the look, feel, and organization of content “on the page.” Another big challenge is getting agreement on what the structure of a given page schema should look like. There is a trade-off as well: individual writers with a talent for ad hoc writing can feel muzzled by a form.

Sabine: There currently is a lot of buzz around moving to headless. Can you speculate why that might be?

Barry: With a headless CMS in a Jamstack environment, you can drop any given website architecture for a new one without migrating your content. My feeling is that the “buzz” around headless CMS comes from how well it fits with how pure web developers prefer to work (with the latest technology stacks). As docs-as-code progresses, adding a headless CMS allows for wider participation in the publishing process: more content, from a broader range of SMEs. However, fine-grained content re-use, something closer to algorithmic publication from disparate data sources that “automates” a lot of repetitive tech-writing grunt work, is in my opinion the bigger payoff from the technical communicator’s point of view. Intelligently engineered content can save your writers from wasting time on low-value-add tasks, and it allows for rich subject-affinity linking across the corpus of content, again algorithmically determined, with no link maintenance required from writers. I don’t think this comes up much in headless CMS discussions because those discussions focus more on the developer community and the toolchains they love. But it holds great promise for solving a lot of complex content engineering problems, the kinds only technical communicators have a real affinity for (necessity is the mother of invention).
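The “subject-affinity linking” idea above can be sketched in a few lines: if every page carries subject metadata, related links can be computed at build time rather than maintained by hand. The page names and tags below are invented for illustration; a real system would pull them from the CMS API.

```python
# Toy corpus: each page maps to a set of subject tags. In a headless CMS
# these tags would be schema fields fetched via the content API.
pages = {
    "Configuring callbacks": {"callbacks", "routing", "admin"},
    "Callback API reference": {"callbacks", "api"},
    "Routing overview": {"routing", "concepts"},
}

def related(page: str, min_shared: int = 1) -> list:
    """Rank other pages by how many subject tags they share with `page`."""
    tags = pages[page]
    scored = [
        (len(tags & other_tags), name)
        for name, other_tags in pages.items()
        if name != page and len(tags & other_tags) >= min_shared
    ]
    # Highest overlap first; ties broken alphabetically.
    scored.sort(key=lambda t: (-t[0], t[1]))
    return [name for _, name in scored]

print(related("Configuring callbacks"))  # → ['Callback API reference', 'Routing overview']
```

Because the links are derived from metadata at publish time, adding or retiring a page updates every “related content” list automatically, with no writer intervention.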

Sabine: What’s the next step for Genesys? Where do you want to take dynamic publishing?

Barry: We want to improve our document structures. The more explicit, “meaningful” structure you can add to your content, the more you can do downstream.

Sabine: If you had any words of wisdom or advice to give any organization considering a similar solution, what would it be?

Barry: Look for opportunities to learn the approach using your existing toolset, and get started. If that’s not possible, start by imposing templates on your content. For example, what types of paragraphs should a “feature article” consist of? Do not confuse this effort with adopting a DITA-type modeling scheme: the task-concept-reference troika is not a content model but something you can use to build one. What do you want your website architecture to look like? What kind of portal pages do you want? How should your pages be consistently structured for different content types? Those decisions are the hard work. Then build your page schemas around them, with your mind always on what your customers need, not what your tools need.

In short – the tool by itself does almost nothing for you. It will not think for you. You will always iterate on any content model, of course, but if you have a clear vision for your content, the whole thing can more or less fall into place, elbow grease notwithstanding.

Sabine: A big thanks to Barry for his thoughtful and inspiring answers!

The CIDM Member Round Table discussions are held each month on the fourth Wednesday. If you’ve also implemented a dynamic publishing solution you’d like to share with us, please reach out to me at [email protected].