Clarifying requirements — asking what and how

CIDM eNews: Information Management News 09.03

Vesa Purho
Development Manager, Nokia

In my last article, I talked about how important it is to ask “why” enough times to get to the bottom of what people actually require, without specifying an implementation alternative. Starting from “the possibility to import an XML configuration in the system,” we ended up with “making changes in the configuration and changing the status of the modules has to be easy.” Two issues need to be clarified in this requirement: do we have one requirement or two, and, most importantly, what is meant by “easy”?

One can easily argue that there are actually two requirements: making changes in the configuration has to be easy, and changing the status of the modules has to be easy. The current requirement clearly reflects the current practice of using the configuration both to build the TOC of the document set and to monitor the approval status of the modules. Since these actions may not exist in the new system, it would be wise to separate the requirements. It may also be that “easy” is defined differently for a configuration change than for a module state change. These changes may be made at different stages in the process, which also argues for separating the requirements.

The second question, what is meant by “easy,” is more difficult to handle. There are no absolute answers. The requirement needs to be clarified for the people who will actually implement the tool, because their opinion of what is easy may differ considerably from the actual end users’ perspective. For example, entering a series of commands with variables and switches at a command prompt may well be the easiest way for an experienced Unix user to change the status of the modules, but it would not be easy for a technical writer without DOS experience, let alone coding experience.

The problem with defining a requirement in more detail is that it easily turns into an implementation solution. If you ask, for example, “What in your opinion would be ‘easy’ in changing the configuration?” you will probably get an answer like “drag-and-drop,” because that is what people are used to. However, that limits the implementers’ ability to innovate; they might come up with an easier way to make changes than drag-and-drop. For example, they might use “click-and-click,” a technique in which you simply click the module to be moved and then click the place where you want to put it, without having to hold the mouse button down. Naturally, everyone has already made some assumptions about the implementation at this phase: if the product is Web-based or PC-based, you can probably assume that users have a mouse and can handle clicking.

The answers to “what” and “how” questions should relate to measurable properties that you can actually test during and after the implementation. For example, “Changing the status of a module to approved should not take longer than three seconds” or “Changing a module’s place in a configuration should not take more than three clicks” are reasonable definitions of “easy” that can be objectively measured. Remember, however, that all time-bound requirements depend on factors like network load and may apply only under normal conditions. I have heard development people respond to performance requirements with “We cannot promise these response times if the network is congested or down.” Naturally, they cannot and should not promise that; everybody understands that if there are problems in the network, the tools may have problems as well.
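A time-bound requirement like this can be turned directly into an automated check. The sketch below is illustrative only: `change_module_status` is a hypothetical stand-in for whatever status-change operation the real tool exposes, and the three-second limit comes from the example requirement above.

```python
import time

def change_module_status(module_id, status):
    """Hypothetical stand-in for the tool's real status-change operation."""
    time.sleep(0.1)  # simulate the work the tool would do
    return status

def meets_response_time_requirement(max_seconds=3.0):
    """Check the requirement: changing a module's status to approved
    should not take longer than max_seconds (under normal conditions)."""
    start = time.perf_counter()
    change_module_status("module-42", "approved")
    elapsed = time.perf_counter() - start
    return elapsed <= max_seconds

print(meets_response_time_requirement())  # True under normal conditions
```

Because the requirement is stated as a number, the check needs no human judgment; it can run in every test round, with the caveat about network conditions noted above.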

Setting requirements for response times and the number of clicks is relatively easy, but more is needed to define “easy.” If the buttons are in strange places or the program forces the user to perform steps in a strange order, the tool may fulfill all the stated requirements but still not be easy to use. Some requirements need more definition and must be tested with actual users to see whether they have been fulfilled. Requirements should have measurements that can be verified. For example, you might write a measurement as “90% of the test users can do task x in y minutes with a maximum of one reference to help or documentation.”
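Once usability test sessions are recorded, a measurement like this can be scored mechanically. The sketch below is a hypothetical illustration: the session figures and the eight-minute task limit are invented for the example, while the 90% threshold and the one-help-reference limit come from the measurement above.

```python
# Each test session records (task time in minutes, number of help lookups).
# These figures are invented for illustration.
sessions = [
    (4.5, 0), (6.0, 1), (3.2, 2), (5.1, 0), (7.9, 1),
    (2.8, 0), (6.6, 1), (4.0, 0), (5.5, 0), (7.5, 0),
]

def measurement_met(sessions, max_minutes=8.0, max_help=1, target=0.90):
    """True if at least `target` of the test users did the task within
    `max_minutes` with at most `max_help` references to help or docs."""
    passed = sum(
        1 for minutes, help_lookups in sessions
        if minutes <= max_minutes and help_lookups <= max_help
    )
    return passed / len(sessions) >= target

print(measurement_met(sessions))  # 9 of 10 sessions pass, so True
```

The point is not the code itself but that the requirement, once phrased this way, leaves no room for argument about whether it was met.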

The only way to ensure usability is to use user-centered design methods in the design and implementation phases. That means creating user profiles, doing task analysis, prototyping, doing heuristic evaluations and usability tests, and so on. You can set requirements such as “The use of the program should follow a logical sequence of user tasks.”

The design group is responsible for designing use cases that show how they believe the users should interact with the product, and for reviewing those use cases with the actual users. I have sometimes encountered situations where the user representatives have written the use cases themselves, but I believe that also limits the innovation of the design and development team. The users start from their current tools as a reference point and may not be able to innovate, or they may create scenarios that are impossible to implement. Naturally, cooperation can help in these cases as well.

All in all, the requirements should not contain subjective words like “easy,” “fast,” or “with minimum effort” but should be further defined in measurable terms without implying an implementation alternative. The user interaction with the product should be defined by the design group and verified with the user community. During the entire development cycle, users should be involved in the decisions about the user interaction.


This article is the personal opinion of the author and does not necessarily reflect the opinion or practice of Nokia.

 
