The Heuristic Evaluation method was applied to an office application's drawing editor by 99 analysts working in groups. Structured problem report formats eased the merging of analysts' predictions and their subsequent association with a set of actual problems extracted from user test data. The user tests were based on tasks designed to ensure that all predicted problems would be thoroughly addressed. Analysis of accurate and inaccurate predictions supported the derivation of the DR-AR model of usability inspection method (UIM) effectiveness. The model distinguishes between the discovery of candidate (possible) problems and their subsequent confirmation (or elimination) as probable problems. We confirm previous findings that heuristics do not support the discovery of possible usability problems. Our results also show that heuristics were mostly used appropriately only to confirm possible problems that turned out to have low impact or frequency; otherwise, heuristics were used inappropriately, in ways that could lead to poor design changes. Heuristics are also very poor at eliminating improbable problems (65% of all predictions were false), and thus mostly confirm false predictions incorrectly. Overall, heuristics are a poor analyst resource for eliminating improbable problem predictions and confirming probable ones. Analysis of false predictions reveals that more effective analyst resources are knowledge of users, tasks, interaction, application domains, and the application itself, together with design knowledge from HCI. Using the DR-AR model, we derive a strategy for UIM improvement.
Title of host publication: People and Computers XV—Interaction without Frontiers
Editors: Ann Blandford, Jean Vanderdonckt, Phil Gray
Place of publication: London
Number of pages: 593
Publication status: Published - 2001