What is usability anyway?

Introduction

The established protocol when discussing a subject is first to define it. In this dissertation I will not stray from that convention, and will attempt to define what usability actually is. In the course of my research I have found that usability can be defined in various ways, even though the definitions should all mean essentially the same thing.

One humorous definition of usability noted by Hix and Hartson [1993] asks: "If your computer were a person, how long 'til you punch it in the face?". Despite the flippancy, the point is made: a system should be a friend to the user. However, there is still a requirement to define what is understood by the term "user friendly". The definition also suggests that usability can be measured. As Shackel, cited in Booth [1992], points out, "Everyone knows what usability means until its recognition as a criterion implies evaluation...".

A quick, comprehensive and operational definition may not be as easy to find as first assumed!

Measurement

Introduction

In the search for a definition of usability, then, it can at the very least be considered a measurement. Eason, cited in Preece et al [1994], indeed explains that the "...major indicator of usability is whether a system or facility is used..." and that the "...crucial measure [of usability] is the pattern of [the user's] responses to options...". Booth [1992] also supports this view, commenting that "if we force an individual to use a system in order that we might assess its usability, then we may be destroying the best measure... whether or not a system is used".

There seems little argument, then, against the idea that any comprehensive definition of usability must involve, to a major extent, the property of measurement. Which elements should be involved in this measurement depends on the commentator. The International Organisation for Standardisation (ISO), as cited by Brooke et al in Jordan et al [1991], adds its say on the matter by defining usability from two views, one involving measurement and the other implying it:

"Usability measures: The effectiveness, efficiency, and satisfaction with which specified users can achieve specified goals in a particular environment." and

"Usability attributes: The features and characteristics of a product which influence the effectiveness, efficiency and satisfaction with which particular users can achieve specified goals in a particular environment".

However, despite the seemingly apt description of the attributes of usability in the second observation, the measurement itself, in terms of "effectiveness, efficiency, and satisfaction", remains fairly vague.

Even a subsequent edition of the definition (ISO 9241-11) still does not define specific metrics [NPL 1996]:

"Usability is the extent to which a product can be used to achieve specific goals with effectiveness, efficiency and satisfaction in a specified context of use"

Preece et al [1994] remark that usability is "... concerned with making systems easy to learn and easy to use". Mayhew [1992] also considers these criteria to be part of the "general principle of interface design" leading to usable systems. Even though these comments take us some way towards understanding what usability is, they do not, by themselves, define actual measurements. Ease of learning and ease of use are similarly identified by Jordan et al [1991], who also suggest the "appealing idea" that usability measurement depends on "three distinct components ...guessability, learnability and experienced user performance". I will now briefly describe these elements.

Guessability

Jordan et al [1991] suggest that "Guessability is a measure of time and effort required to get going with a system." Guessability is discussed in depth in chapter 5.

Learnability

Jordan et al [1991] propose that this element of usability "represents the amount of time and effort required to reach a user's peak level of performance with a system". Consider the following scenario:

I recently hired a casual member of staff to work in the general administration section of my office. Even though she was contracted for only a month's work, she was expected to produce simple graphics using an unfamiliar package after just a few days of informal coaching. The Windows-based package proved to be easy to learn, and consequently the time taken for her to become competent was quite short. Mayhew [1992] has identified this situation too, suggesting that "ease of learning should be compatible with the turnover rate".

The scenario therefore demonstrates a trade-off with training: either I employ a member of staff for a longer period, enabling them to become more familiar with a package, or I purchase a different package that is easier to learn. Either way there is a cost implication. Alternatively, I may be able to employ staff who already have the necessary skills in an existing package, but they may command greater remuneration; the decrease in costly training is set against an increase in salary cost. In addition, if I do purchase the easy-to-learn package it may well offer less functionality in exchange for its greater learnability, although this is not necessarily the case for all systems.
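The trade-off can be made concrete with some back-of-envelope arithmetic. Every figure in the sketch below is invented purely to illustrate the comparison, not drawn from the scenario above:

# Back-of-envelope comparison of the two staffing options discussed above.
# All rates and durations are invented for illustration only.

DAILY_RATE_CASUAL = 80.0    # casual staff who must learn the package
DAILY_RATE_SKILLED = 110.0  # staff already fluent in an existing package
CONTRACT_DAYS = 20          # roughly a month's work
TRAINING_DAYS = 3           # informal coaching before the casual is productive

options = {
    "train casual staff": (DAILY_RATE_CASUAL * CONTRACT_DAYS,
                           CONTRACT_DAYS - TRAINING_DAYS),
    "hire skilled staff": (DAILY_RATE_SKILLED * CONTRACT_DAYS,
                           CONTRACT_DAYS),
}

for label, (cost, productive_days) in options.items():
    print(f"{label}: total cost {cost:.0f}, "
          f"cost per productive day {cost / productive_days:.2f}")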

Indeed, I suggest that the goal of making complicated systems easy to learn should be seen as a general design challenge rather than a problem inherent in complex systems. Mayhew [1992] identifies that design goals are "often in direct conflict with one another". The extra challenge, therefore, is to identify which trade-offs can be expediently made when goals do conflict.

Experienced User Performance (EUP)

Jordan et al [1991] consider that this element of usability, Experienced User Performance (EUP), "corresponds to the asymptotic level of a user's performance with a system over time". Some systems, such as nuclear control systems or flight deck controls, require their users to be fully trained to the level of EUP before they enter a 'live' situation. However, I suggest that this element of usability measurement relates only to one particular user, since another user may reach a different plateau of experience. In other words, regardless of how long a system is used, different users will attain differing levels of performance.

Even Jordan et al [1991] consider that the "maximum potential performance" of a system may not actually be reached by an experienced user; they suggest that only an "expert user's asymptotic performance represents the most effective and efficient way to perform a task", and propose the existence of a "discoverability gulf" between EUP and the actual potential of the system.
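The asymptote that Jordan et al describe can be estimated by fitting a simple practice curve to observed task times. The sketch below is my own illustration, not their method; the session data and the expert benchmark are invented:

import numpy as np
from scipy.optimize import curve_fit

# Invented task-completion times (seconds) over successive practice sessions.
sessions = np.arange(1, 13)
task_time = np.array([310, 240, 205, 180, 168, 158, 154, 150, 149, 148, 148, 147])

def practice_curve(t, plateau, gain, rate):
    # Exponential approach to an asymptote: performance tends to the
    # plateau as t grows, i.e. the user's EUP.
    return plateau + gain * np.exp(-rate * t)

(plateau, gain, rate), _ = curve_fit(practice_curve, sessions, task_time,
                                     p0=(150.0, 200.0, 0.3))

EXPERT_TIME = 120.0  # assumed expert benchmark for the same task
print(f"estimated EUP (asymptotic task time): {plateau:.0f} s")
print(f"'discoverability gulf' versus expert: {plateau - EXPERT_TIME:.0f} s")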

Eason, cited in Booth [1992], also indicates this theme in his definition of usability, which he expounds as "...the extent to which a user can exploit the potential utility of a system". Eason, cited in Preece et al [1994], put this into practice by undertaking "a field study of a banking system that provided staff with 36 different ways of extracting information from a customer's account". According to Eason's definition, the usability of the system would increase as the user's choice of functions increases.

However "after examining the usage logs he [Eason] found that just four codes accounted for 75% of the usage" which could hardly be considered to be exploitative. The reason is that the users wouldn't consider the extra "effort in learning to use the extra searching strategies unless absolutely necessary" because they had already learnt to get the information in other ways. Considering system exploitation may only be limited by the imagination or indeed bounded by the lethargy (whether justified or not) of the user to learn new techniques, the concept of EUP may not be as definite as first cited.

Jordan et al [1991] acknowledge the former point and suggest the idea of "shells of competency", in which a user may "discover a more efficient method", resulting in a "step increase in performance" and a move to a "higher shell of competency". I would argue that if a user discovers a shortcut to perform the task, such as "coming across something new in the manual" as cited by Jordan et al [1991], then they had not fully learnt the system in the first place. As a consequence the user could not be deemed to have reached their level of EUP, let alone the system's potential performance. I suggest that only measurement at the expert level would create a definite standard, thus bypassing the differing levels of EUP between different users of the same system. However, considering that system utility may be bounded only by imagination, this too may prove difficult to define.
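If shells of competency exist, a user's performance history should show occasional step changes rather than a smooth decline. The naive detector below is my own sketch of how such steps might be flagged; the data, window and threshold are all invented:

# Flag sessions whose task time drops sharply below the recent average --
# the step improvements that 'shells of competency' would predict.
task_times = [300, 290, 285, 282, 280,   # first shell: slow but stable
              200, 198, 197, 196,        # shortcut found: step down
              145, 143, 144]             # second discovery: another step

WINDOW, DROP = 3, 0.75   # flag times below 75% of the last three sessions' mean

for i in range(WINDOW, len(task_times)):
    recent_mean = sum(task_times[i - WINDOW:i]) / WINDOW
    if task_times[i] < DROP * recent_mean:
        print(f"session {i + 1}: {task_times[i]} s against recent mean "
              f"{recent_mean:.0f} s -- a step into a higher shell?")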

Other Commentators

Universality of Usability Definition

Holcomb & Tharp [1991] postulate that "basic user interface principles exist that apply to all user interfaces...". If Holcomb & Tharp's idea is correct, these principles would be of great advantage: any design for an interactive device could be modelled on them and evaluated against them to prove usability. I fear, however, that the suggestion of global usability, even at the basic level, is an ideal rather than a reality. My view seems justified when considering Shackel, cited in Morrey & Dillon [1996], who considers usability to be

"technology's capacity (in human terms) to be used easily and effectively by the specified range of users, given specified training and user support to fulfil the specified range of tasks, with the specified range of [task] scenarios".

It can be seen that Shackel takes a more pragmatic angle on the question of usability, implying that a system's usability test should be limited to its task, its range of users, and the support and training given to those users. Indeed, it would seem unfair to assess a system's usability by testing it with inappropriate users who have had no training or support, on a task it was not designed to do. Unfortunately Shackel, in this definition, is not able to offer absolute metrics by which usability can be assessed.

Holcomb & Tharp [1991] continue with their idea that usability should be system independent, postulating that "... these [interface] principles result in user interfaces with superior usability [and suggest that] usability is not all or nothing but relative". Their paper outlines a usability model that gives designers an "initial usability decision", i.e. a baseline from which to work and a tool for evaluating a product. In summary, their view is that usability is a relative, rather than a clear-cut, concept.

Usability at Reuters plc

Academic and theoretical descriptions of usability are all very well, but what happens at the sharp end, where the market ultimately dictates a system's usability and fate? To get a feel for how usability is dealt with at the grass-roots level, the point at which the user actually uses (or tries to use) a system, I approached the Usability Group at Reuters, London, UK for their viewpoint. In short, Reuters define usability simply as "The ease with which customers can use our systems" [Reuters Usability Group (RUG) 1997]. I have undertaken a full study of the usability methodology at Reuters, which is documented in chapter 7.

Conclusion

In this chapter there has been much discussion of what usability actually is and how it is defined. I have suggested that, regardless of how usability is defined, all definitions should lead to the same conclusion. The conclusion I draw is that usability depends on how easily the user can learn to use the system, actually use the system, and exploit the system's potential. I would also argue that any significant definition of usability must include a measurement against which the system can be tested and evaluated, which in turn allows improvement goals to be set.

Empirical measurement can be considered an obvious consequence of usability criteria. However, there are other important elements mentioned in the definitions I have cited, such as the user himself and the ever-changing environment, the context, in which the system is used. These factors not only play a significant role in the definition of usability but are intrinsic to how products are developed into usable everyday items, whether they are computer interfaces or other interactive systems.