Case Study: User centred design at Reuters

Introduction

In order to get a feel for how usability is treated at the grass-roots level, I approached Greg Garrison of the Usability Group at Reuters, UK, to discuss how usability is addressed in the development of Reuters' interactive systems. Reuters are at the forefront of involving customers in the development of interfaces, an approach which many other commercial organisations are now emulating. I was invited to the Usability Laboratories at John Carpenter St, London, to meet Sandy Schmid of the group. After undertaking some background research I created an interviewing proforma (Appendix 2) that proved effective in keeping the interview with Sandy structured.

This chapter reports the results of my interview, together with other research into Reuters' usability methodology, and describes what the Usability Group is and why it was originally set up. It will demonstrate how Reuters subscribe to the concept of user centred design by detailing how their products are tested, evaluated and ultimately improved, which allows them to maintain their position as a world leader in information delivery.

Reuters Usability Group

Effective delivery of information is of paramount importance at Reuters. With the increasing sophistication of Reuters products it was becoming apparent (from a rise in help desk calls, among other signs) that the information was simply not getting through as well as it should. Strangely enough, usability didn't seem to be an issue in the 1850s when Paul Julius Reuter used pigeons to carry share prices between Brussels and Aachen [Pandya 1997]. The role hasn't changed much, but the hardware certainly has, and this in turn has introduced usability problems that could have threatened the effectiveness and credibility of the Reuters organisation.

As a result the Usability Group was set up in 1993 to "...tackle the problem of producing software that is efficient and intuitive to use, and easy to teach and support", reports Bray [1995]. Greg Garrison, the group's director, cited by Bray [1995], continues:

"It was becoming increasingly evident that we were beaming information to customers all round the world, only to have it crash in the last millimetre as it got to the screen, because of the usability... so we decided we needed a comprehensive usability methodology."

The importance that Reuters attach to usability cannot be overstated: the company has invested in usability testing "...a sum so large it won't even quote the figures" [Bray 1995]. It is evident, then, that for an organisation founded on the circulation of information, the ability of its customers to glean information with ease from a VDU screen is vital.

Reuters Definition of Usability & their Comprehensive Usability Methodology

Reuters' view of what usability means to them is simply stated: "The ease at which customers can use our systems" [RUG 1997]. However, this is a very broad definition which disguises the amount of research and testing done under the umbrella of their comprehensive usability methodology. Under this methodology the Reuters Usability Group's aim is to ensure that "all Reuters products are easy to understand, easy to learn and easy to use". The basic principles of "guessability, learnability and experienced user performance" [Jordan et al 1991] can thus be clearly seen in this mission. Products must also be "efficient to support" [RUG 1997]; if a system is hard to maintain, this may ultimately result in down time which, by definition, is not usable.

Reuters would argue that you can involve "customers in the creation of products that are intriguing not mystifying, reassuring not frightening and fun not frustrating" [RUG 1997]. This seems to address the fears of Bertino, cited in Booth [1992], who notes that "...novice users feel frustrated, insecure and even frightened ...[in dealing with a system]... whose behaviour is incomprehensible, mysterious and intimidating". Whether the involvement of users in the development and testing of products actually improves usability is discussed in the next section.

Customer Centred Design Process

Introduction

Reuters' vision of "truly usable products" [RUG 1997] is achieved, they claim, by putting the customer, rather than the system, at the centre of the design process. Their solution to the usability conundrum (the paradox of increasing sophistication against decreasing ease of use) is therefore based on a "Customer Centred Design" process [RUG 1997]. The benefits of using such a method (figure Q) should amount to "improved design efficiency, increased customer productivity, reduced training and support costs" as well as "enhanced brand identity" [RUG 1997].

Market Research

Initially this stage of the process was carried out in response to usability issues raised in help desk calls. In its infancy, the group's job was to review the current product line that Reuters customers used with reference to its usability (or lack of it). Now that the group has a permanent role in the assessment of usability, its research can be undertaken from the conception of a new product (or version) rather than after it has been released. In effect the assessment begins with a review of the previous version. From figure Q it can therefore be seen that the element of expert review appears in this part of the process rather than at the end, as might be expected. Hence a continuous, looping, iterative design process is established. The role of the expert review, discussed in general in Expert review, is more specifically described in Reuters Expert review.

Reuters support marketing teams who interview customers to identify what usability issues need addressing and what future products may be required. These teams also review the competition to ensure that Reuters products stay at the forefront of the marketplace.


Figure Q

Customer Requirements

Being an integral and vital part of the whole process, this stage attempts to improve the understanding of "who Reuters customers are, ...how they work" [RUG 1997] and what they require from the products.

Categories of customers have been documented over the years, ultimately producing a catalogue of generic "customer profiles" [RUG 1997]. This generalisation gives Reuters a head start in understanding their client base. It is used in preparation for subsequent "site visits, customer interviews and customer group meetings" [RUG 1997] which, building on this baseline, permit a more detailed profiling of the users involved. For example, the customer profile of the share dealer will describe certain attributes for that category of customer; however, interviews will reveal that a dealer in the USA, for example, operates in a significantly different way from a dealer in the Middle East. The tasks customers undertake are modelled too; for instance, the American operative will tend to concentrate on a smaller task area than one in Saudi Arabia.

During this stage, client visits are booked to enable the enhancement of the customer profiles. One method used to collect data is the completion of a participant questionnaire. It assesses detailed matters: for example, whether a mouse is to be used with the product and, if so, whether or not the customer has experience of using one. The completed questionnaire aids the modelling of tasks and, at the same time, enables an assessment of how the customers work. Assessing the task and the customer simultaneously helps prevent the task being given priority over the customer, since it is vital that the customer is not divorced from the usability process.
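
Purely as an illustration of the kind of data being gathered, the generic profile and participant questionnaire described above could be recorded along the lines of the following Python sketch. The field names and example values are my own assumptions and are not taken from Reuters' documentation.

from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    # Hypothetical generic customer profile drawn from the catalogue described above.
    category: str                                       # e.g. "share dealer"
    region: str                                         # e.g. "USA" or "Middle East"
    typical_tasks: list = field(default_factory=list)   # task areas this kind of customer concentrates on

@dataclass
class QuestionnaireResponse:
    # Hypothetical participant questionnaire completed during a site visit.
    profile: CustomerProfile
    uses_mouse: bool        # will a mouse be used with the product?
    mouse_experience: bool  # has the customer used a mouse before?
    notes: str = ""

# Example: the generic "share dealer" profile refined with data from one site visit.
dealer = CustomerProfile("share dealer", "USA", ["monitor equity prices"])
response = QuestionnaireResponse(dealer, uses_mouse=True, mouse_experience=False)

Keeping the questionnaire response attached to the profile in this way mirrors the simultaneous assessment of customer and task described above.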

Another method employed to model the users and the tasks that they do is job shadowing: a member of the team undertakes a site visit and shadows a customer over a period of time to observe what they do.

The final element in this stage is the creation of a product specification. By drawing on the "customer profiles... task models and usage scenarios" [RUG 1997], the resulting description indicates what product is needed and how the customer will want to use it, thereby enabling the design to match the customer's expectations more closely and avoiding any mismatch between task requirements and system function.

High Level Interface Design

In this stage of the process, customers' needs are turned into a high level design of the required product. This may take the form of a "customer-walkthrough of design ideas" [RUG 1997] which can outline the basic concepts involved, illustrated simply on paper or, more usually, with low fidelity prototypes. Ideally, each task that the product must support, and every other design element, is considered of equal importance with respect to usability; no prioritisation is assumed and no bias created. It is only during the development stages that trade-offs are made and decisions of priority are forced.

Not all decisions are made on usability grounds alone. One example of a cultural design priority is the expectation that the Reuters terminal screen should display a yellow coloured strip, which has been a historical identifier for the product. A brand identifier may therefore outweigh, and take priority over, a customer's interface requirement.

The main rationale behind this stage is to confirm, with the customer, that the Product Development Team is on the right track. This entails checking the product specification against the customer's requirements. Any changes required to the product are therefore clarified and resolved at an early stage of development; they are implemented, at low cost, in the low fidelity prototypes rather than against a high fidelity system further along the line of production.

This iterative design philosophy enables crucial usability feedback from customers to be incorporated at the earliest possible stage in a product's development. As Garrison, cited in Bray [1995], points out: "You don't let the process go too far until you've checked it". Thus all the effort put in at the design stages allows usability issues to be flagged at the earliest opportunity, which, as Reuters' success suggests, ultimately yields more usable products in the long run.

Detailed Visual Design

A selection of tools, Visual Basic and Adobe Illustrator to name only two, is used to create the high fidelity prototypes employed during this stage of the customer centred design process. Detailed prototypes establish an exact copy of what customers can expect to see on the finalised system. Moreover, they enable the customer to comment on, and offer their preferences for, the proposed interface: attributes such as screen layout, colour and control methods can not only be discussed but demonstrated to the customer, so that they can offer the often vital feedback used to effect a revised design.

Designs can then be checked against the "User Interface Design Guide" [RUG 1997], which Reuters has constructed from material previously researched by the group. This guide can be referred to at any stage of the process to give designers a benchmark from which similar designs may be drafted. Although it is not an expert system, this computer based training tool helps designers avoid re-inventing the interface wheel.

Development and Testing

During this stage of the process customers thoroughly test alpha and beta versions of a product, which allows time to resolve any problems that arise prior to a full launch. Expert reviews are undertaken to refine the design once more. Improvements are prioritised, targeting the most significant first to ensure benefit to the customer.

After the product is released onto the market there is a continuous process of review, incorporating customer questionnaires that provide valuable information for future versions of the same product.

The design loop is thus closed, and further enhancements take the process back to the first stage of the Customer Centred Design Process: market research.

Conclusion

The Customer Centred Design Process, the main component of Reuters' usability methodology, is one in which the customer plays a central and integral part in development, evaluation and feedback. Reuters would suggest that the inclusion of the customer in every stage of their iterative design process is fundamental in ensuring "the creation of products which satisfy customer requirements" [RUG 1997].

Therefore, even though I have listed each stage individually and quite formally, it must be noted that the design process is highly flexible: the usability group communicate with their customers in an open and informal atmosphere, and this exchange of information occurs during every stage of the design process. Indeed, the technique of design "cannot be represented statically", as characterised by Carroll and Rosson, cited in Shneiderman [1992], who also point out that "design is a process; ...[and] not a state".

The final stage, development and testing, contains not only a final test of usability but also the usability testing methods by which the earlier stages can be re-assessed and re-considered. It is to these methods that I now turn my attention.

Usability Testing

Introduction

The primary aim in testing is to "measure the effectiveness of the product user interface" [RUG 1997]. Shneiderman [1992] notes that it is important that the user "should be treated with respect and should be informed that it is not they who are being tested but rather it is the software and user interface that are under study". This is the case at Reuters, who stress that involving users in testing "enables them to influence the design of a product according to their needs and requirements" [RUG 1997]. Users therefore feel empowered rather than alienated when the system, one that they have helped to design, is released to them.

The ultimate result of testing, and of the iterative design and re-design that comes from involving users, should be a product which truly reflects what they need. I will now describe Reuters' user testing methods, which evaluate designs and effect revisions based on customer feedback.

Usability Laboratories

Reuters have permanent usability laboratories around the world: London, New York, Tokyo, Milan, Geneva and Singapore. They also use mobile laboratories that can be deployed on site in the heart of the world's financial centres. Within these laboratories prototypes of new products can be tested, or comparative tests can be undertaken to see whether enhancements to older versions of the same software make a real improvement to usability.

The laboratory is set up with a conventional computer workstation at which volunteer customers are asked to undertake a specific series of tasks (task scenarios) using the interface being tested. The layout of the laboratory is similar to figure R [NPL 1997c].

Figure R

Task Scenarios

The task scenarios used in the testing are created by the development teams and are based on the customers' description of what the system is required to do overall. By undertaking the scenarios the customers get hands-on experience of what the real product will look like, and the usability group are able to appreciate how it will behave in genuine situations. The sessions usually last for up to two hours and at least ten customers are tested individually to ensure accuracy. Each session is recorded for later evaluation: the whole scene is captured on a master tape made up of simultaneous views of the customer's face and hands and the workstation screen, and an audio tape is made of the user's comments.

After a session there is also a structured, but informal, debriefing, allowing the customer to home in on any specific areas of concern they may have had during the test. This too is audio taped. The customer is also asked to complete a psychometric usability questionnaire (a SUMI is used; see SUMI).

Feedback to Development Teams

The final report, in the form of a written document and video summary, is then sent to the development teams. The results are presented in such a way that individual users are not identified. The report gives quantitative measures such as "how many of the specified tasks did the customer successfully achieve" and how fast they were done in comparison "with a trained operator fully conversant with the system" [RUG 1997]. Qualitative measures can also be gleaned from the reports: aspects such as "attributes that are particularly liked or disliked, customer wish lists and recommendations for change", which is, after all, the main reason for testing, can be uncovered.
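
To make the arithmetic behind these quantitative measures concrete, the following is a minimal, self-contained Python sketch. The record structure, the function names and the example figures are all invented for illustration; they are not Reuters' reporting tools or data.

from dataclasses import dataclass

@dataclass
class TaskScenario:
    # One task the volunteer customer is asked to perform with the prototype.
    description: str       # what the customer is asked to achieve
    expert_seconds: float  # time taken by a trained operator fully conversant with the system

@dataclass
class TaskResult:
    # Outcome of one customer attempting one scenario during a test session.
    scenario: TaskScenario
    completed: bool        # did the customer successfully achieve the task?
    seconds_taken: float   # how long the attempt took

def completion_rate(results):
    # Fraction of the specified tasks the customer successfully achieved.
    return sum(1 for r in results if r.completed) / len(results)

def speed_relative_to_expert(results):
    # Average ratio of the customer's time to the trained operator's time,
    # over the tasks that were completed (1.0 = as fast as the expert).
    completed = [r for r in results if r.completed]
    return sum(r.seconds_taken / r.scenario.expert_seconds for r in completed) / len(completed)

# Invented example: one customer completed two of three tasks,
# taking roughly twice as long as the trained operator.
session = [
    TaskResult(TaskScenario("Display a price history chart", 30.0), True, 65.0),
    TaskResult(TaskScenario("Set a price alert", 20.0), True, 38.0),
    TaskResult(TaskScenario("Export prices to a spreadsheet", 45.0), False, 120.0),
]
print(completion_rate(session))           # 0.67 (two of three tasks achieved)
print(speed_relative_to_expert(session))  # about 2.03 (roughly twice the expert's time)

In the real reports these figures would, as described above, be aggregated across at least ten customers and presented without identifying individuals.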

Shorter tests that check a particular aspect of the interface can subsequently be run after any changes to ensure that the amendment is effective. A video link is often set up in these cases so that the development team can view the customer in a live situation and suggest alternative scenarios for trialling. In this way a product developer in, say, New York is able to test a product that is to be used in the UK with a UK customer. Elements relating to internationally specific design can thus be effectively tested and evaluated, avoiding problems that might be far harder to resolve at a later stage.

The value of customer feedback cannot be overstated. It can identify any shortcomings in the designs and is able to home in on problem areas quickly. Customers, who intrinsically know the task domain, will identify, almost subconsciously, the most effective way of performing a task where alternatives exist and, at appropriate stages of the testing process, are able to validate proposed navigation structures. The customers' view on, for example, the grouping of screen functions would be highly regarded, and the corresponding change would be incorporated by the development team at the next prototype stage.

Training

Client trainers, who coach customers after a product is released, are also involved in the usability testing because it helps them identify where and when customers may need training. Testing also allows the development teams to identify where additional software training support (help screens, for example) is likely to be useful in making the interface understandable.

Expert Review

Reuters use various consultants from differing fields of expertise when testing products. This enables a range of skills to be brought into testing, allowing each element of the design to be tested by the most appropriate expert. Because independent consultants are used there is no conflict of interest, and thus independent analysis is achieved.

Jeffries, cited in Cuomo & Bowen [1994], found that "heuristic evaluation is more effective when a group of independent evaluators is used". Reuters also take the view that reviewing products is more effective when undertaken as a team. Formal expert reviews may therefore involve a team made up of the following consultants: user interface (UI) experts, ergonomists, graphic designers, market analysts, psychologists and domain experts. The expertise involved depends on the product being reviewed, so this list is not exhaustive.

Experts are able to offer decisive refinements, based on specific knowledge and experience, which fine tune systems to customer requirements. They are able "to estimate what will happen with longer-term usage" [Shneiderman 1992], which would not become apparent during normal user testing. As a consequence Reuters use a combination of experts and users when evaluating products.

Finally the list of initial customer requirements is re-checked to ensure that the design specifications are met before the product is released.

Conclusion

In conclusion I would propose that the testing processes undertaken by Reuters are not just an end evaluation exercise on current products; the testing amounts to more than a quick nod of approval once a product has reached a certain level of acceptance (although a score of 50 on the SUMI questionnaire is set as a standard usability baseline). Testing is, in addition, used as an iterative evaluation technique which empowers users to effect design changes that ultimately increase the usability of Reuters products, version on version.

Perhaps one reason why laboratory usability testing has proved useful for Reuters is that their designers and programmers may increase their diligence if they know that in-depth testing is to take place. This point of view is supported by Gould, cited in Shneiderman [1992].

Conclusion

In this case study I have endeavoured to uncover whether the involvement of users in the development and testing of products does lead to improved usability. I have demonstrated that Reuters certainly believe this to be the case, and I have given examples of where their Customer Centred Design Process and user centred evaluation methods have led to a greater understanding of who the customer is and the context in which they work.

The underlying key principle in creating usable systems appears to be the absolute need to understand their users. This has been identified by Lazonder & Van Der Meir [1994], who propose that:

"since it is ultimately the users of the software system who decide its usability ...[they] suggest users be made an integral part of the software design and development process".

Over the last few years this viewpoint has gained greater prominence. As Morry & Dillon [1996] conclude: "research on usability has sought to become central to the design and selection of technology for large organisations".

It seems that Reuters knew this in 1993 and have been able to turn what was once a usability problem into an effective method of usability design that other companies now aspire to emulate.