PLANS AND SITUATED ACTIONS:
The problem of human-machine
communication
Lucy A. Suchman
February 1985
ISL-6
Corporate Accession P85-0000S
Copyright Lucy A. Suchman 1985. All rights reserved.
Xerox Corporation
Palo Alto Research Centers
XEROX 3333 Coyote Hill Road
Palo Alto, California 94304
Abstract
This thesis considers two alternative views of purposeful action and shared understanding. The
first, adopted by researchers in Cognitive Science, views the organization and significance of action
as derived from plans, which are prerequisite to and prescribe action at whatever level of detail one
might imagine. Mutual intelligibility on this view is a matter of the recognizability of plans, due to
common conventions for the expression of intent, and common knowledge about typical situations
and appropriate actions. The second view, drawn from recent work in social science, treats plans as
derivative from situated action. Situated action as such comprises necessarily ad hoc responses to
the actions of others and to the contingencies of particular situations. Rather than depend upon the
reliable recognition of intent, successful interaction consists in the collaborative production of
intelligibility through mutual access to situation resources, and through the detection, repair or
exploitation of differences in understanding.
As common sense formulations designed to accommodate the unforeseeable contingencies of
situated action, plans are inherently vague. Researchers interested in machine intelligence attempt
to remedy the vagueness of plans, to make them the basis for artifacts intended to embody
intelligent behavior, including the ability to interact with their human users. The idea that
computational artifacts might interact with their users is supported by their reactive, linguistic, and
internally opaque properties. Those properties suggest the possibility that computers might 'explain
themselves,' thereby providing a solution to the problem of conveying the designer's purposes to the
user, and a means of establishing the intelligence of the artifact itself.
I examine the problem of human-machine communication through a case study of people using
a machine designed on the planning model, and intended to be intelligent and interactive. A
conversation analysis of "interactions" between users and the machine reveals that the machine's
insensitivity to particular circumstances is a central design resource, and a fundamental limitation. I
conclude that problems in Cognitive Science's theorizing about purposeful action as a basis for
machine intelligence are due to the project of substituting plans for actions, and representations of
the situation of action, for action's actual circumstances.
XEROX PARC, ISL-6, FEBRUARY 1985
Acknowledgements
The greatest contribution to this project over the last several years has been the combination of
time, resources, freedom to work, faith that something good would come of it, and intellectual
support provided by John Seely Brown, Manager of the Intelligent Systems Laboratory at Xerox
Palo Alto Research Center. He and other colleagues have nourished my slowly emerging
appreciation for the 'anthropologically strange' community that is Cognitive Science and its related
disciplines. In particular I have benefited from discussions with Danny Bobrow, Johan deKleer,
Sarah Douglas, Richard Fikes, Austin Henderson, David Levy, Tom Moran, Brian Smith, Kurt
VanLehn, and Terry Winograd. Recent conversations with Stan Rosenschein on 'situated automata'
have opened yet another window for me onto the problem of how this community proposes that we
understand action. Needless to say, while I am deeply grateful for their contributions, none of them
is responsible for the results.
In my own field of anthropology, I have enjoyed the intellectual and personal friendship of
Brigitte Jordan, whose creative energy and respectful sensibilities toward her own work and life are
an example for mine. I am deeply grateful to Doug Macbeth, with whom I shared my discovery
and exploration of the field of social studies and its possibilities. While he would doubtless argue
innumerable points in the pages that follow, his influence is there. Mike Lynch and Steve Woolgar
both provided thoughtful responses to early drafts. Particularly in its early stages, this project
benefited greatly from lively discussions at the Interaction Analysis Lab at Michigan State
University, where Fred Erickson, Rich Frankel, Brigitte Jordan, Willett Kempton, Bill Rittenberg,
Ron Simons and others helped me first to penetrate the thickness of a video analysis. Jeanette
Blomberg, and more recently Julian Orr, are anthropological colleagues in the PARC community,
and I have enjoyed their company. I am grateful to Hubert Dreyfus and John Gumperz, as
members of my thesis committee, for their substantive and stylistic contributions, and for their
enthusiasm for the project. My thesis advisor, Gerald Berreman, is through his own career an
example of what ethical scholarship can be. While finding my work increasingly foreign and exotic,
he has remained an unflagging supporter.
Finally, of course, I thank my friends new and old, in particular Mimi Montgomery who over
the last several years, along with this thesis, has been my most constant companion.
Table of Contents
PREFACE
CHAPTER 1: INTRODUCTION
CHAPTER 2: INTERACTIVE ARTIFACTS
CHAPTER 3: PLANS
CHAPTER 4: SITUATED ACTIONS
CHAPTER 5: COMMUNICATIVE RESOURCES
CHAPTER 6: CASE AND METHODS
CHAPTER 7: HUMAN-MACHINE COMMUNICATION
CHAPTER 8: CONCLUSION
REFERENCES
APPENDIX
Preface
Thomas Gladwin (1964) has written a brilliant article contrasting the method by which the
Trukese navigate the open sea, with that by which Europeans navigate. He points out that
the European navigator begins with a plan-a course-which he has charted according to
certain universal principles, and he carries out his voyage by relating his every move to
that plan. His effort throughout his voyage is directed to remaining 'on course.' If
unexpected events occur, he must first alter the plan, then respond accordingly. The
Trukese navigator begins with an objective rather than a plan. He sets off toward the
objective and responds to conditions as they arise in an ad hoc fashion. He utilizes
information provided by the wind, the waves, the tide and current, the fauna, the stars,
the clouds, the sound of the water on the side of the boat, and he steers accordingly. His
effort is directed to doing whatever is necessary to reach the objective. If asked, he can
point to his objective at any moment, but he cannot describe his course (Gerald Berreman
1966).
The subject of this thesis is the two alternative views of human intelligence and directed action
represented by the Trukese and the European navigators. The European navigator embodies the
prevailing scientific model of purposeful action, for reasons that are implicit in the final sentence of
the quote above. That is to say, the Trukese navigator is hard pressed to tell us how he actually
steers his course, while the comparable account for the European seems to be ready-at-hand, in the
form of the very plan that is taken to guide his actions. While the objective of the Trukese
navigator is clear from the outset, his projected course is necessarily vague, insofar as his actual
course is contingent on unique circumstances that he cannot anticipate in advance. The plan of the
European, in contrast, is derived from universal principles of navigation, and is essentially
independent of the exigencies of his particular situation.
The image of the European navigator, deeply entrenched in the Western human sciences as the
correct model of the purposeful actor, is now in the process of being reified in the form of new,
computational artifacts. In this thesis I examine one such artifact, as a way of investigating the
strengths and limitations of the general view that the design embodies. The properties of the plan
make it attractive for the purpose of constructing a computational model of purposeful action, to
the extent that for those fields devoted to what is now called Cognitive Science, the analysis and
synthesis of plans effectively constitutes the study of action. The contention of this thesis, however,
is that as students of human action we ignore the Trukese navigator at our peril, because while an
account of how the European navigates may be ready-at-hand, the essential nature of situated
action, however planned or unplanned, is Trukese. It behooves us, therefore, to study, and to begin
to find ways to describe, the Trukese system.
There is an injunction in social studies of science to eschew interest in the validity of the
products of science, in favor of an interest in their production. While I generally agree with this
injunction, my investigation of one of the prevailing models of human action in Cognitive Science is
admittedly and unabashedly interested. That is to say, I take it that there is a reality of human
action beyond either the Cognitive Scientist's models or my own accounts, to which both are trying
to do justice. In that sense, I am not just examining the Cognitive Science model with the