Edited by Mike Scott and Geoff Thompson †
[Not in series 107] 2001
pp. 287–314
Editors’ introduction
Sinclair, like Coulthard, Darnton and Edge & Wharton in this volume, concerns himself with the uses and usability of text. Where Coulthard concerns himself with a text user’s need to detect and investigate plagiarism in existing sets of text, and Darnton and Edge & Wharton with learners, however, Sinclair is interested in identifying, interpreting and thence in questioning patterns in dialogue between people and computers.
Sinclair’s argument concerns feedback between the human and the computer, Human-Machine Interaction (HMI). Before the advent of complex switching systems (of which a computer is an example), a simple electric light switch provided immediate feedback to the user by an easily noticeable change in the environment. The very position, up or down, of a light switch generally indicates in a given culture the presence or absence of power, so there was often no need for a label on the switch itself: HMI of the very simplest kind possible. Similarly, and earlier still, the use of spurs or a twitch of the reins usually gave feedback in the form of a change of direction or pace, with no need for an hourglass display on the horse’s neck, a Horse-Human Interaction which was never formalised. With the development of switching systems where the current state could not be easily inferred at a glance, e.g. a railway control system, more complex and varied systems of interaction between the system and its different classes of users (train drivers, current passengers, prospective passengers, controllers, etc.) became necessary; in the Information Age, this problem has become greater still.
This increasing complexity of switching systems is like a Rubik’s Cube, that plastic toy with rotating coloured segments where a very few changes of state bring about confusion for the human interactant. The human user cannot make sense of the current state of the system, and the Rubik’s Cube does not store a “history list” of its own states, either. Nor does the human know how to alter its states systematically to reach a comprehensible state (such as the starting position). The system is too complex to grasp but provides no feedback other than its current state. In the case of computers it gets worse: the interaction is not one-way; the computer program and user need mutual knowledge. Each participant (and there may be more than two) needs to know, but often does not, not only what the “current state of play” is, but also what representation the other participants probably have of the current state of play.
As Sinclair points out, natural conversation has evolved through millennia to deal with this problem: in a discussion, discourse methods are effortlessly used to control and order the state of play, so that even idle gossip manages to avoid stating the obvious, proceeding systematically by, as it were, a managed series of tips and hints.
In the seventies Sinclair was working concurrently on adapting computers to handle language data as well as numbers, and on devising linguistic means of analysing human discourse. The strength of this paper is that, building on the notions of Discourse Analysis which he was largely instrumental in creating, he takes a step back from the paradox that computerised systems are highly complex but interact with users in over-simple ways. He considers a set of aspects of the flow of communication, both in natural discourse and in the discourse between the user and the computer system, and shows how a radical shift will need to be made before there can be any hope of easy communication between humans and virtual systems.
Such is the pace of technological change and consumer avidity, contrasting with the slower processes of production of academic text, that Sinclair’s paper, though first committed to paper only a very few years ago, now looks dated in some of the time-scales mentioned and in some of its references, e.g. to the Superhighway, a term which had a positive semantic prosody in the middle 1990s but which seems to have languished since. However, the problems which he addresses have not changed materially. Indeed, Microsoft’s Office Assistant and “Intellisense” have arguably made them worse. The software we have come to interact with is now more powerful, but it still seems to operate with its own mysterious agenda; we users are still all too often mere “clickers”, as Sinclair puts it, and are likely to remain so until software engineering learns from the linguistic descriptions which Sinclair and others are attempting to provide. The problem is increasingly urgent.