This index allows the reader to find material in this review of timeliness in HCI. It is structured as a number of concepts listed under headings. Under each concept there are pointers to references and to the relevant paragraphs in the relevant sections of this review.
These concepts come from Section 2, Timing Concepts. Note that this is an index to a review of timeliness in HCI, not a review of work in real-time systems. Some of the parallels drawn in this index are, to say the least, tenuous.
"Real time" is given by clocks that measure time in different units. One clock might say it is 23 November 2000, another that it is 23 November 2000, 1005 a.m. and 10 seconds. Granularity affects what counts as simultaneity. Another problem is that a system may have access to different clocks that may disagree, e.g., "drift".
A "deadline" is an event, a point in time before which some activity or process must be completed, e.g., I must finish my lecture by November 23, 2000, 1005 a.m. and 10 seconds. Less intuitively, a delay is also a point in time. This event is a time before which some activity must not start, e.g., I must not start my lecture before November 23, 2000, 0905 a.m. and 10 seconds.
Some activities or processes have to be repeated at regular intervals. Such activities exhibit "jitter" (inaccuracy in the delay between repetitions). The rate of repetition is referred to as "pace".
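A rough sketch of a periodic activity and the jitter it exhibits, in Python; `run_periodic` is illustrative only (sleep-based timing is itself a source of jitter), and the period and cycle count are arbitrary:

```python
import time

def run_periodic(period_s: float, cycles: int, work) -> list[float]:
    """Run `work` once per period and record the jitter on each cycle:
    how far the actual start deviated from the nominal start time."""
    jitters = []
    start = time.monotonic()
    for i in range(cycles):
        nominal = start + i * period_s              # when this cycle should begin
        jitters.append(time.monotonic() - nominal)  # positive means late
        work()
        # Sleep until the next nominal start, if any time remains.
        remaining = start + (i + 1) * period_s - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
    return jitters

print(run_periodic(0.1, 5, lambda: None))  # five cycles at a pace of 10 Hz
```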
Real-time data has a temporal validity: data may lose value as it gets older. There are also problems when data items are time-stamped by different clocks, so that the order in which they were actually recorded cannot be established.
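A sketch of temporal validity, with an invented five-second validity interval:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

VALIDITY = timedelta(seconds=5)  # assumed validity interval for this data

@dataclass
class Reading:
    value: float
    stamped_at: datetime  # meaningful only relative to the stamping clock

def is_valid(reading: Reading, now: datetime) -> bool:
    """A reading is usable only while younger than its validity interval.
    Comparing stamps from *different* clocks is exactly the ordering
    problem described above."""
    return now - reading.stamped_at <= VALIDITY

r = Reading(21.5, stamped_at=datetime(2000, 11, 23, 10, 5, 10))
print(is_valid(r, now=datetime(2000, 11, 23, 10, 5, 12)))  # True: 2 s old
print(is_valid(r, now=datetime(2000, 11, 23, 10, 5, 20)))  # False: 10 s old
```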
This concerns proving that a system can meet some real-time performance specification described using the above formalisms. It involves estimating worst-case execution times for the different processes or activities.
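One standard such proof technique (our choice of example, not necessarily the review's) is the rate-monotonic utilisation bound of Liu and Layland (1973): n periodic tasks with worst-case execution times C_i and periods T_i are schedulable if the total utilisation, the sum of C_i/T_i, is at most n(2^(1/n) - 1).

```python
def rm_schedulable(tasks: list[tuple[float, float]]) -> bool:
    """Sufficient (not necessary) test for rate-monotonic scheduling.
    Each task is (worst_case_execution_time, period) in the same units."""
    n = len(tasks)
    utilisation = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilisation <= bound

# Three tasks with pessimistically estimated WCETs, periods in milliseconds:
# utilisation = 0.25 + 0.20 + 0.20 = 0.65; the bound for n=3 is about 0.78.
print(rm_schedulable([(1, 4), (1, 5), (2, 10)]))  # True
```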
Work is commonly analysed hierarchically into tasks, sub-tasks, sub-sub-tasks and so on. The granularity at which you start and stop is entirely arbitrary and depends on the use to which the task analysis is to be put. Concerns are the background motivations a user may have (e.g., keep my job, maximise profit, avoid accidents). Functions are tasks described at a level of granularity that matches the system designer's concept of machine function. There are several notations for reasoning about tasks, concerns and functions.
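The hierarchical decomposition is naturally represented as a tree; the example task below ("make a phone call") and its decomposition are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A node in a hierarchical task analysis. How deep the
    decomposition goes is arbitrary and depends on its purpose."""
    name: str
    subtasks: list["Task"] = field(default_factory=list)

def show(task: Task, depth: int = 0) -> None:
    print("  " * depth + task.name)
    for sub in task.subtasks:
        show(sub, depth + 1)

make_call = Task("make a phone call", [
    Task("look up number", [Task("find directory"), Task("read entry")]),
    Task("dial number"),
])
show(make_call)
```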
In some design contexts there is a question as to whether a particular function should be carried out by the machine or by the person. The problem is that automating the simple repetitive tasks that machines are better at may take the human "out of the loop", making it hard for them to step in and solve the hard judgmental problems that humans are good at. For this reason, effective allocation of function requires a very detailed understanding of the task.
There are schemes for computing the approximate time it will take someone to complete a well-learned task under ideal circumstances. These are really only effective at a fine level of task granularity (i.e., seconds or fractions of a second). Fitts' Law allows one to predict movement time (e.g., of a mouse pointer) from the distance moved and the required accuracy of the movement. The keystroke model allows one to compute the time it takes to do something from a low-level task analysis, and includes a heuristic for adding thinking time.
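Both schemes reduce to simple arithmetic, as the sketch below shows. The Fitts' Law constants a and b must be fitted to a particular device, so the defaults here are placeholders; the Keystroke-Level Model operator times are the commonly quoted Card, Moran and Newell estimates:

```python
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.1, b: float = 0.15) -> float:
    """Fitts' Law (Shannon form): MT = a + b * log2(D/W + 1),
    where D is distance to the target and W is its width."""
    return a + b * math.log2(distance / width + 1)

# Keystroke-Level Model: total time is the sum of operator times (seconds).
# M is the heuristic for "thinking time" mentioned above.
KLM = {"K": 0.2, "P": 1.1, "H": 0.4, "M": 1.35}

def klm_time(operators: str) -> float:
    return sum(KLM[op] for op in operators)

print(fitts_movement_time(distance=160, width=16))  # about 0.6 s to point
print(klm_time("MHPK"))  # think, home on mouse, point, press: about 3 s
```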
There is a community of psychologists, starting with Newell and Simon, who apply and develop techniques from artificial intelligence to the problem of modelling human information processing ("cognitive modelling"). Card, Moran and Newell developed this into an engineering approach. GOMS (Goals, Operators, Methods and Selection rules) is a set of heuristics and a notation for describing low-level interaction with a machine. The concept of a goal is very similar to that of a low-level task. Operators are key presses, mouse clicks, etc. Methods are sequences of operators, and IF-THEN selection rules choose between alternative methods.
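A toy GOMS-style description, with invented method names and an invented selection rule:

```python
# A goal, two alternative methods (sequences of operators), and a
# selection rule that chooses between them.
methods = {
    "delete-word-by-mouse":    ["POINT", "DOUBLE-CLICK", "PRESS-DELETE"],
    "delete-word-by-keyboard": ["PRESS-CTRL-BACKSPACE"],
}

def select_method(hands_on_keyboard: bool) -> str:
    """Selection rule: IF the hands are on the keyboard THEN use the
    keyboard method ELSE use the mouse method."""
    if hands_on_keyboard:
        return "delete-word-by-keyboard"
    return "delete-word-by-mouse"

goal = "delete a word"
chosen = select_method(hands_on_keyboard=True)
print(goal, "->", chosen, "->", methods[chosen])
```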
There have been a number of experimental studies in which people judge the duration of time between stimuli or events. In general, time seems to pass more quickly when we are absorbed in a task. There are various theories to explain this, couched in terms of how processing resources are allocated.
There is also a large body of evidence on the recall of temporal order, i.e., sequences. Short-term serial recall requires active rehearsal and has a limited capacity of four to ten "items", depending on the items in question (consider the task of remembering a telephone number between looking it up and dialling it). Short-term serial recall exhibits primacy and recency effects: recall is better for items presented early and late in the list.
Predictability in a user interface is, all things being equal, a good thing. For example, an action should have a predictable effect. Where the same action has different effects depending on the context (modes), and the user does not expect these differences, errors will result. The same applies to response times: if users can predict how long it will take a computer to respond, they can adapt the way they work accordingly, for example by doing something else while they wait. Of course, a regular delay will only be predictable to the extent that users can estimate that duration.