An Anecdotal User Interface
In this article, we describe the process that our user interface design team followed to design a user interface for a cheque-settlement solution.
Our department started out, like many others, being responsible for only technical publications and online help. After a time, we added customer training. Then, over the last few years, especially with the emergence of Web-based applications, we became increasingly involved with user interface design. We see all these activities as being related in that they all occur at the point where the user interacts with the product. As information developers, our job has always been to approach the product from the users’ point of view to ensure that their needs are met. User interface design is simply an extension of that approach.
Our external audience consists mainly of the people who work at banks (supervisors, operators, and system administrators) and, more recently, the corporate customers of those banks.
Our previous experience on HTML-based projects includes
- an operator recovery system for cheque-processing transports
- product walkthroughs for operators
- online technical references for service technicians
- Web-based applications for querying and managing financial data
Because these projects were so successful (two of them garnered corporate research and development awards), we were invited by engineering to design the user interface of a cheque-settlement system for a large banking system in Nigeria.
Like many projects, this one had too few developers and an aggressive development schedule—initially the project was to be completed in seven months (it turned out to be thirteen). Based on the requirements, the solution proposed was Web-based, despite reservations by many, including us, who were well aware of the shortcomings of Web-based applications. Also, because of the time constraints, we were not able to conduct a user needs and task analysis. While we did have some familiarity with similar systems in North America, this customer was on the other side of the world and had many processes that were handled quite differently from their US counterparts. Additionally, they performed the current process manually; this attempt was the first at handling the process electronically. We were left trying to design a user interface for an unfamiliar environment and without a good understanding of our audience. In short, a challenge.
Our late introduction to the project began with a review of the system design specification. This introduction provided us with some of the system requirements of the solution, as well as a workflow of data through the system. The problems with this specification were that it did not focus on the user requirements and it provided little information about the people who would be using the system. Also, the workflow description was unclear about which users were responsible for which tasks. To better understand these requirements, we met with engineering and architecture to learn about the solution we were trying to develop. However, after a certain point, these discussions became very focused on system implementation issues, so our group disengaged to begin prototyping the user interfaces on our own.
Our first step was to assemble a GUI team. This team included three people in our department taking on the roles of information architect, core product knowledge person, and user interface designer. Based on the requirements from the system functional specification, we designed prototypes of the screens to accommodate the tasks performed by each of the four defined user types.
We developed the initial screen prototypes on paper to ensure we could support the tasks needed. Once these screens were somewhat stable, we created Visio “mock-ups.” To make these mock-ups, we started with a screen shot of a Web browser, sized to the maximum screen resolution. With this screen shot as the basis, we began including bitmaps of browser controls and sample data, until we had a decent representation of the screens. After several “quick-and-dirty” tests with anyone willing to participate, we eliminated the major design flaws and settled on the final design.
Once the screen prototypes were completed, our GUI team assembled the screen flows on ANSI-E sized pages (one for each user type) and printed them out on a plotter. Having this “big paper” view proved helpful because, due to the number of screens in our system, it was easy to lose track of how and when different users interacted with data in the workflow. Our team presented these screen-flows to architecture and engineering in separate sessions to identify any major problems with the design. Our intention was that the architects could identify business-related issues while the engineers could point out any inconsistencies with the back-end system.
The presentations went well; in fact, no one raised any strong objections to the design, which seemed somewhat peculiar because we had taken some liberties with how the “back-end” system worked. We were expecting more resistance.
Following the presentation, our team began documenting the screen-flows in what came to be called a User Interface Specification. This specification consisted of the following sections:
- a screen templates section, describing the screen types (for example, list screen, dialog screen, and so on), how they were laid out, their color code definitions, and so on
- the screen-flow for each user type, showing the movement of data in the workflow
- a definition for each screen, consisting of the following:
  - an example of the screen (taken from the Visio screen prototypes)
  - a high-level description of the screen’s purpose and its major interaction points (for example, the tasks users could perform on the screen)
  - descriptions of all the screen components (drop-downs, buttons, table columns, hypertext links, and so on) and how they corresponded to the underlying “business” logic. For example, “source” files could be added to a session only when they were in a certain state, so the “Add to Session” button could be active for those files only during those states.
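The mapping from business state to control behavior described above can be sketched in a few lines. This is a hypothetical illustration only; the state names and function are invented for this example and were not part of the actual product.

```typescript
// Hypothetical states for a "source" file; the real system's states
// and names are assumptions for illustration.
type FileState = "received" | "validated" | "in-session" | "settled";

// The "Add to Session" button is active only while a file is in a
// state that permits it to join a session.
function canAddToSession(state: FileState): boolean {
  return state === "received" || state === "validated";
}

// A screen would then enable or disable the button per file:
// addButton.disabled = !canAddToSession(file.state);
```

Recording rules like this in the specification, rather than leaving them to each coder's judgment, was what kept the screens consistent with the underlying business logic.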
Once these sections were complete, our team added screen-flows and details for screens relating to other parts of the system, including administration, research, and so on.
When the specification was complete, we sent it to the architects and engineers for review. When their review comments came back, we were somewhat horrified to see that much of what we had written had to be re-designed. The reason: while our team was documenting the screen-flows, engineering was making fundamental changes to the system design (in effect, rewriting much of the system functional specification), which in turn heavily impacted the user interface. It was time to go back to the white board.
Because of the severity of the review comments, everyone agreed that the three groups needed to get together to consolidate our disparate parts. Architecture, engineering, and our team spent a solid two weeks working through each screen and making design changes on-the-fly to address the underlying design concerns. This process was at times rewarding, frustrating, enraging, and hopeless. However, by the end of the two weeks, we came up with a design that was at least acceptable to each group.
Although the primary purpose of the redesign activity was to work out the problems with the screens, an important by-product was that we identified problems with the underlying system design as well. For example, at one screen, operators select one or more “exchange” files to send to a central system. However, in the course of determining how operators selected and sent these files, we discovered that the system design did not account for problem files that needed to be re-sent. Although not a significant problem at the screen level, the engineers needed to reevaluate how the underlying system would respond to the problem.
Once we were able to realign the user interface to the updated system design, we could revise the User Interface Specification. We reviewed this revised specification again and then released it to engineering to begin implementing the design.
But the process was far from over. Over the course of the next eight months, we took on a consulting role, helping coders implement the design. Before we were finished, we had updated the specification 18 times for the following reasons:
- Clarifications—In places, the User Interface Specification did not provide enough information for the coders. Our team worked with them to resolve any unclear portions of the specification and provided more detail where necessary.
- Software/developer limitations—Issues arose because of limitations in the technology or in what the developers could implement.
- Changing requirements—As product development proceeded, new customer requirements trickled in, resulting in updates to the system and to the user interface.
When the applications were ready for unit test, our group reviewed the screens to ensure that they followed the standards established by the specification. We noticed quickly that some parts of the application looked different from others, which was to be expected because different developers were responsible for different areas. After some consultation, we agreed that a single developer should look after the GUI issues, freeing the others to work on the interfaces to the back-end system. By the end, the user interface had a much more consistent look and feel.
Eventually, the project came to a close and the product was released to favorable reviews.
A few months after the release of version 1.0, our group began work on designing user interface changes to the next release of the product. Architecture, engineering, and our team were tasked with writing a Request for Change (RFC) for all the updates needed to add a large feature to the existing product.
This time around, we were involved earlier with architecture and engineering. Time was allotted on the development schedule to hold screen prototype sessions (with representatives from architecture, engineering, and our team) to review screens that needed to be updated as a result of the inclusion of the new feature.
When the three groups got together, we reviewed screen changes and discussed (argued over) interpretations of requirements. Although these sessions were at times excruciating, they were overall productive and ran more smoothly than for version 1.0. Once the changes were agreed upon, our team updated the old screen descriptions with the new ones.
Because we made these user interface changes early, engineering was better able to understand the impacts on the system-level design (although there were still a number of changes completely independent of the user interface), which resulted in an easier transition from design to implementation.
Several conclusions became evident to us over the course of the design:
- Early involvement is critical. Don’t wait until after developers start coding to become involved; once code has been written, developers are loath to change it. Waiting too long will also raise cries of “schedule impacting,” which is the developer’s defense against additional UI work. When you reach this point, there is little you can do.
- Include all responsible parties as early as possible. Even at the earliest prototyping sessions, include the people who will have the greatest personal stake in the project. In our case, these were the people who derived the requirements and were responsible for implementing the solution. Without their initial acceptance and “buy-in” you can expect an up-hill battle with the predictable results—none.
- Be realistic. You are not going to win every issue. Sometimes you may need to lose on smaller issues to win the larger ones. This idea is particularly true with first-out product releases where you will not always have first-hand user experience to support your position.