Soft Systems Methodology Part Two

Robert N. Phillips
CEO, Lasotell Pty Ltd.
www.lasotell.com.au

We ended the first article in this series by saying we would look at how to determine the adequacy of the model we build in trying to understand the "system." (We also still need to pin down what we mean by "system.") Before we look any further at the Soft Systems Methodology aspects, it is worth taking a detour through Donald Norman's findings, in his book The Design of Everyday Things, about how we build internal models to understand how things work.

This article draws on just a few pages from Norman’s book that have parallels to Soft Systems Methodology models. The complete book is certainly worth reading.

Norman points out that the simple things in life, such as scissors, pens, and light switches, present us with a simple model in which form follows function, so it is easy to comprehend how they work.

More complex systems, however, are not always so easy to understand. Norman's classic example of the disconnect is the way his refrigerator ought to work, judging by the controls it presents to him: two dials, one for setting the freezer and one for setting the fresh food compartment, together with recommendations for changing the settings. The obvious assumption, based on the visual presentation, is that two dials mean two thermostats. Right? Wrong (see the illustration on page 15 of Norman's book). In reality the controls are not independent. One dial is connected to a single thermostat (but in which compartment?) and the other to a gate valve that controls the flow of cold air between the freezer and the fresh food compartment. The conceptual model derived from the physical presentation and the physical reality are very different.
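To make the mismatch concrete, here is a minimal sketch in Python. It is purely illustrative: the function names, dial ranges, and temperature numbers are all invented for this article, not taken from Norman or from any real refrigerator.

```python
# The model users assume: two dials, two independent thermostats.
def assumed_model(freezer_dial, fresh_dial):
    return {"freezer": -20 + freezer_dial,      # each dial sets only its own compartment
            "fresh_food": 2 + fresh_dial}

# The model Norman describes: one dial drives a single thermostat that sets
# total cooling; the other drives a gate valve that splits the cold air
# between the two compartments. (All numbers are invented.)
def actual_model(thermostat_dial, valve_dial):
    cooling = 10 + 2 * thermostat_dial          # total cold-air production
    to_freezer = valve_dial / 10                # fraction routed to the freezer
    return {"freezer": 0 - cooling * to_freezer,
            "fresh_food": 8 - cooling * (1 - to_freezer)}

# In the assumed model, each dial moves one temperature. In the actual model,
# turning either dial moves BOTH temperatures at once:
print(actual_model(5, 5))   # {'freezer': -10.0, 'fresh_food': -2.0}
print(actual_model(5, 7))   # freezer gets colder AND the fresh food gets warmer
```

The point of the sketch is not the numbers but the coupling: in the actual model there is no setting of the two dials that adjusts one compartment in isolation, which is exactly what the two-dial system image invites the user to expect.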

Norman shows that we construct our mental models through experience, training, and instruction, and by interpreting the visible structure of a device together with its observed behaviour. He calls the visible part of a device the system image. For relatively straightforward systems, such as a refrigerator, designers usually assume their model and the system image are perfectly aligned, and they provide user information accordingly. When this is not the case, as in Figure 1, users create their own conceptual models based on what they see, supplemented by how the device behaves. Such models often have little or no alignment with the designer's model of how the system actually works. The most obvious examples of such disconnects are the average user's conceptual models for operating or programming a mobile telephone, a fax machine, or a video recorder.

Figure 1—Model Misalignments

Norman points out that alignment between the designer's model and the system image is essential because the system image is the only channel through which the designer communicates with the user. When we think about that for a moment, we can see that the system image matters even more than any documentation, because users tend to start playing with the system long before they read the manual (if they ever do!).

Similar principles apply to human-based systems. Every human-based system was designed by someone, even if it was a long time ago. In organisations where the system is undocumented, the designer's model has long since been forgotten, so people develop their own mental models based on the image the system presents of how it seems to work. The reality is often quite different, particularly in large systems, and this is one of the reasons why "system" change or improvement projects fail: analysis of the failures often shows that the initial mental model was woefully inadequate. The important point is that in systems involving numerous people, there are usually numerous mental models.

The accuracy or adequacy of a mental model depends very much on the source of our knowledge. When we do not have certain or absolute knowledge, we rely on fragmentary evidence and, if there is none, we fall back on naïve (folklore) knowledge or even imaginary (made up) "knowledge." (Sometimes we give the latter a fancy name: hypothesis!) The consequence of inadequate knowledge is a faulty model. Norman's best example of folklore knowledge concerns a thermostat. If the room is cold, will it warm up more quickly if the thermostat is turned to maximum? People who subscribe to one of the common thermostat folk theories (the timer theory or the valve theory) will say yes. The right answer is no: the thermostat is just an on-off switch that turns the heating fully "on" or fully "off," so the room warms at the same rate regardless of the setting; a higher setting only changes the temperature at which the heating switches off. Another well known example: if a bullet is fired horizontally from a rifle at the same instant that an identical bullet is dropped from an identical height, which one will hit the ground first? The folk theory is that the dropped bullet will hit the ground before the rifle bullet because the rifle bullet is travelling so fast. In fact, ignoring air resistance, they land at the same time, because the vertical fall is independent of the horizontal motion.
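The thermostat case is simple enough to capture in a few lines. The following Python sketch models a thermostat as the on-off (bang-bang) switch it really is; the starting temperature, heating rate, and time step are invented for illustration only.

```python
# Minimal sketch: a thermostat is an on-off switch, not a valve. The heater
# either runs at full power or not at all, so a higher setpoint cannot warm
# the room any faster. All numbers here are invented for illustration.

def minutes_to_reach(target, setpoint, start=10.0, heat_rate=0.5, step=1.0):
    """Minutes until the room reaches `target` with the thermostat at `setpoint`."""
    temp, minutes = start, 0.0
    while temp < target:
        heater_on = temp < setpoint        # the only decision a thermostat makes
        if not heater_on:
            return float("inf")            # setpoint below target: never reached
        temp += heat_rate * step           # full power whenever the heater is on
        minutes += step
    return minutes

# The room reaches 20 degrees in the same time whether the dial is set
# to 20 or cranked to a maximum of 30:
print(minutes_to_reach(20.0, setpoint=20.0))  # 20.0 minutes
print(minutes_to_reach(20.0, setpoint=30.0))  # the same 20.0 minutes
```

Running the sketch shows why the valve and timer theories fail: the setpoint changes only when the heater switches off, never how hard it works.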

The difficulty with many folklore-based models is that testing their predictions in any meaningful way is beyond most people's resources. And therein lies a major difficulty in trying to create models of human systems: if the system is complex, people assume it is too hard to test a scenario and verify a predicted outcome.

In conclusion, the barriers we face, seen and unseen, in trying to understand human-based systems are much the same as those we face in trying to understand the workings of everyday things. We too easily create faulty models when situations project inappropriate images of the system that is actually in place. Norman's examples are a useful gauge for sanity-checking that we are not developing a faulty mental model. In the next article, we will examine what Soft Systems Methodology calls a "system."

Reference

Norman, Donald A. The Design of Everyday Things. New York, NY: Doubleday, 1988. ISBN 0385267746.