Communication Institute for Online Scholarship

Discourse Management Strategies in Face-to-Face and Computer-Mediated Decision Making Interactions
The Electronic Journal of Communication / La Revue Electronique de Communication
******* CONDON ********** EJC/REC Vol. 6, No. 3, 1996 ******


Sherri L. Condon
University of Southwestern Louisiana

Claude G. Cech
University of Southwestern Louisiana

        Abstract.  This article compares discourse
     management strategies in face-to-face and
     computer-mediated interactions involving four
     decision-making tasks.  A schema-based approach to
     discourse processing is adopted which claims that
     participants will employ the same decision-making
     routine in both modalities.  However, due to
     differing constraints in the two modalities, we
     predict that management strategies should differ
     across modality.  In particular, the greater
     effort and attentional demands of
     computer-mediated communication suggest that users
     will attempt to find more efficient ways of
     communicating.

        We examine these issues in qualitative and
     quantitative analyses of the data using an
     utterance-unit coding system to identify discourse
     functions.  The analyses show that participants in
     computer-mediated interaction encode discourse
     management functions more explicitly than those in
     face-to-face interactions.  In addition, there is
     a clear preference for encoding management
     functions as first pair-parts of adjacency pairs,
     which results in utterances that manage turn-
     taking as well.  The results suggest that
     participants in electronic communications
     compensate for decreased efficiency by adopting
     management strategies that pack more information
     into fewer utterances, i.e., by relying heavily on
     the implicit knowledge of a shared problem-solving
     schema.


     Computer-mediated interaction has enormous potential
for increasing our understanding of discourse structure and
discourse processing.  It has many features that permit
carefully controlled experimental studies not possible with
face-to-face interaction.  For example, all understandings
are achieved using linguistic forms only, so that nonverbal
communication does not complicate investigations.
Transcripts are ready made and can be annotated with
information about the amount of time participants took to
produce and respond to utterances.  Since participants
cannot see each other, the effects of knowledge about
participants can be investigated by simply providing
information about the partner, such as claiming that the
partner is female or is a machine.  In addition, it is
possible to intervene in the discourse by adding or removing
language to test predictions about processing.

     Many researchers have claimed that discourses conform
to scripts (Schank & Abelson, 1977; Schank, 1982), frames
(Minsky, 1975; Tannen, 1979; Goffman, 1974) and related
notions.  These structures pattern the discourse at an
abstract level, and provide the participants with a set of
assumptions regarding their shared knowledge.  As a result,
the understandings achieved in conversation are under-
determined by the linguistic forms used.  If discourse
schemas are truly essential to the achievement of verbal
interaction, then similar structures ought also to undergird
electronic talk.

     However, as soon as we begin asking questions about
discourse routines in computer-mediated communication, how
little researchers know about the interaction between
linguistic forms and script-like knowledge structures
becomes apparent.  As Kellerman et al. (1989, p. 29) note:

     discourse analysis of ordinary conversation has
     typically stressed the conventional, script-like
     sequences of language within speech events such as
     greetings, compliments, apologies, and leave-
     takings...[A]lthough consistent patterning in such
     linguistically fixed events has been detected, the
     _ordering_ [sic] of such events has generally been
     ignored; how these events are embedded into a
     conversation is unknown.

If we assume that discourse is structured into routines of
various sorts, then it is necessary to answer a host of
questions about types of routines, interactions among these
types, and their roles in discourse processing.  Moreover,
it is essential to investigate non-routine events and the
means by which conversants weave routine and non-routine
sequences into the fabric of the communicative event.  We
use the term _discourse management_ to refer to the
strategies that speakers employ to structure and sequence
the routine (and non-routine) elements of their talk into
successful discourses.

     In the present study, we address some of the high-
level functions and routines that we believe modulate
discourse.  How these routines are realized within the
constraints of a computer-mediated communication environment
will be our primary concern.  Whereas the same general or
high-level discourse frames or schemas should structure both
electronic and face-to-face communication, the differing
constraints in the two modalities suggest that discourse
management is likely to differ in these different
modalities.  For example, we note the presence of major
constraints of time and effort in computer-mediated
communication.  Most participants are not accomplished
typists, and so will compose messages much more slowly on
keyboard than verbally.  The effort required to type
messages in turn will result in an inevitable delay of
communication, requiring yet further sustained attention to
the emerging structure and content of the talk.  Thus, to
anticipate one prediction, we expect electronic
communication to be made more efficient via compression:
the use of linguistic forms to convey multiple functions
simultaneously.  We address this and other issues in more
detail below.


     In this study, we focus on decision making as one
common type of discourse activity associated with a routine.
To permit a quantitative comparison of face-to-face and
computer- mediated decision-making interactions, a corpus
consisting of 16 face-to-face and 16 computer-mediated
interactions was collected and analyzed.  The interactions
involved conversations between pairs of people chosen at
random from the subject pool of the Psychology Department at
the University of Southwestern Louisiana.  Pairs in each
case consisted of a male and a female between the ages of 18
and 26.  Each pair was given a planning task to solve
jointly.  Planning tasks involved planning a barbecue, a
weekend getaway, a picnic, or a sightseeing tour for a
visitor.  In the analyses to be presented below, there were
four interactions for each of these tasks within the two
modalities.  Face-to-face interactions were tape-recorded
for later transcription whereas computer-mediated
interactions were recorded automatically by computer.

     In the computer-mediated interactions, participants in
separate rooms communicated with one another in real time by
means of a message screen on their respective monitors
capable of exhibiting about 3.5 lines of text at a time.
These message screens displayed the last message sent by the
partner, or, when the participant started composing a
message, the message to be sent.  This method was chosen
rather than a split-screen method like that used in e-mail
TALK mode to provide greater comparability with face-to-face
communication.  The method generally resulted in turn-taking
rather than simultaneous communication, much as in the
face-to-face interactions.  It also required _remembering_
the partner's message when starting a response, rather than
being able to refer to that message while replying.  Further
details of the methodology employed to collect these data
can be found in Condon and Cech (1996).

     The resulting interactions were divided into utterance
units and coded by student assistants.  We defined our
utterance unit to be the clause, including all adjuncts and
complements (even clausal ones).  Discourse markers and
other interjections such as _yeah_ and _oh_ were treated as
separate utterances.  The coding scheme we used is a
modified version of the one described in Condon et al.
(1984) and employed by Cooper and Grotevant in their
research on family interaction (Cooper et al., 1982, 1983).
Coders were trained using a comprehensive coding manual and
were tested frequently for reliability.  With reliability
measured as the percentage of unit-agreement with a standard
coding, scores ranged from 70% for difficult, low-frequency
categories such as requests for validation and
acknowledgment, to 90-100% for salient, high-frequency
categories such as discourse markers and requests for
information.

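The reliability measure just described is plain unit-by-unit percent agreement with a standard coding. It can be sketched in a few lines; the category labels and sample codings below are invented for illustration, not drawn from the corpus.

```python
# Percent agreement: the proportion of utterance units on which a
# coder's labels match the standard coding exactly.

def percent_agreement(coder, standard):
    """Return the proportion of units coded identically (0.0-1.0)."""
    assert len(coder) == len(standard), "codings must align unit by unit"
    matches = sum(1 for c, s in zip(coder, standard) if c == s)
    return matches / len(standard)

# Hypothetical codings for four utterance units; the coder misses one.
standard = ["Suggests Action", "Agrees", "Discourse Marker",
            "Requests Information"]
coder    = ["Suggests Action", "Agrees", "Orients Suggestion",
            "Requests Information"]

print(round(percent_agreement(coder, standard), 2))  # 0.75
```

A measure of this kind does not correct for chance agreement, which is one reason the authors report it separately for low- and high-frequency categories.
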
     In the numbered examples that we will present below,
data from interactions conform to the following conventions.
When a list of single turns from different participants in
different interactions is presented, the ordinary
alphabetical delimiters (a, b, and so on) are used.  When an
excerpt of sequential turns produced by participants in a
single interaction is presented, the ordinary alphabetical
delimiters are used, but additionally, the turns are labeled
"P1:"  and "P2:"  to attribute turns to each person.  Each
turn or excerpt is annotated "e" (electronic) for data from
the computer-mediated corpus and "o" (oral) for data from
the face-to-face corpus examined in Condon and Cech (1996).
Occasionally, an unannotated example from the (oral) family
interactions examined in Condon (1986) is included to
illustrate that the same kinds of forms were observed in
that corpus as well.  Data from computer-mediated
interactions are presented exactly as they appeared when
transmitted to the partner.  Transcriptions of audio data do
not attempt to reflect intonation and prosody except that
question marks are used to indicate question intonation and
ellipses are used to indicate pauses.

                     The Coding Scheme

     Our methodology focuses on the functions that
participants must accomplish in order to complete a simple
decision-making task.  This task requires speakers to
generate suggestions that must be evaluated.  There must be
procedures or criteria for determining how a suggestion
acquires the status of a decision.  In this study, we will
be particularly concerned with the fact that these
procedures are often not explicit.  For example, Condon
(1986) observed that families asked to plan an imaginary
two-week vacation rarely verbalized the understandings they
achieved, as illustrated in (1).

(1)  a. Father:      OK (long pause and shuffling papers)
                     OK...two whole weeks
     b. Mother:      OK
     c. Teenager A:  Hawaii
     d. Mother:      Hawaii OK
     e. Teenager B:  Hawaii
     f. Father:      for all fourteen days?

     Despite the apparent paucity of verbal content, (1)
demonstrates that participants have performed the basic
functions required for decision-making.  First, they have
generated a suggestion, a potential decision which
represents a (partial) solution to the decision-making
problem.  In (1c) the single word _Hawaii_ is understood to
function in this way.  Second, they have evaluated the
suggestion, which is evident from the fact that they signal
agreement, as in (1d,e).  Again, the linguistic forms used
are minimal:  no explicit discussion of the merits of the
suggestion took place.  Third, the family has determined
from the evaluations that the suggestion has acquired the
status of a decision.

     Based on the data of the family corpus in Condon
(1986), Condon claimed that decisions in these tasks usually
conform to the simple routine illustrated in Figure 1. In
fact, Condon (1986) and Condon and Cech (1996) argue that this
routine represents the operation of a schema for goal-
oriented problem-solving discourse, and thus accounts for
how a suggestion might be adopted implicitly:  for
participants to engage in extremely under-determined
dialogue like (1), they _must_ rely on a system of
expectations that exerts powerful constraints on the
interpretation of linguistic forms in decision-making
interactions.


Figure 1.  The decision routine

   Goal-Solving     Prototypical    Marked (Atypical)
      Schema         Discourse         Discourse

       GOAL         orientation
        |                |
       \|/              \|/
      INPUT          suggestion   <----------|
        |                |                   |
       \|/              \|/                  |
    EVALUATION       agreement   or    disagreement
        |                |                  &
       \|/              \|/             elaboration
    CRITERIA       group consensus
        |                |
       \|/              \|/
     OUTPUT           writing
        |                |
       \|/              \|/
    NEXT GOAL       orientation


     The functions represented in block capitals in the
first column of Figure 1 are intended to suggest that the
structure observed in the interactions instantiates a more
general schema.  That is, an initial specific goal (GOAL) is
selected or agreed upon, and this goal serves to focus the
ensuing discourse.  At the simplest level, a solution
satisfying the constraints is proposed (INPUT), and that
solution must then be tested for adequacy by each
participant (EVALUATION).  As attainment of the goal
requires a group decision, there must be some means for
determining when there is group agreement (CRITERIA).  At
that point, whatever wrap-up actions are required that
relate to that goal may be taken (OUTPUT).  Finally, the
participants cycle on to the next goal.  Given that most
tasks will require a hierarchy of goals (in order to go on a
picnic, one must determine what day to go, what foods to
bring, who to invite, what to do for entertainment, etc.),
the participants will cycle through a number of tasks (NEXT
GOAL) until finally all the planning has been accomplished
to meet the constraints of the high-level goal (PLAN A
PICNIC).

     In contrast, the functions represented in lower case
type in the second column of Figure 1 relate the general
schema to utterance functions in actual discourse.
Orientations like (1a) establish constraints for each
decision.  Suggestions like (1c) formulate a proposal that
meets the constraints established in the orientation.
Agreements (1d,e) and disagreements evaluate the proposal
and test for consensus.

     Finally, the third column of Figure 1 is meant to
suggest both that disagreements are departures from the
expected course of the conversation, _and_ that
disagreements will need to be explicitly marked.  In this
model, a lack of explicit consensus does not signal a
disagreement, since consensus is the assumed or unmarked
function.  In contrast, disagreements are _dispreferred
seconds_ (Levinson, 1983) with the characteristic features
of these, including prefacing by _well_, the presence of
mitigating devices, and, especially, explanations of reasons
for not accepting the suggestion.  Consequently, more
linguistic form is expended when utterances fail to conform
to the decision routine, whereas little form is required
when the talk satisfies routine expectations.  Thus, in
Condon (1986), families appeared to consider a proposal
accepted if it received at least one agreement and no
disagreements, but they never verbally recognized either
these criteria or the achievement of consensus.  Instead
these must be inferred from the subsequent talk, such as
(1f), which presupposes the decision to go to Hawaii.
Consequently, decisions tended to follow the simple sequence
of orientation, suggestion, and agreement in the
interactions we examined.
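
One way to picture the routine of Figure 1 is as a cycle of discourse states in which agreement is the unmarked continuation and disagreement loops back to a new suggestion. The following is a speculative sketch only: the state names come from the figure, but the transition logic is our assumption.

```python
# Prototypical (unmarked) transitions from Figure 1.
ROUTINE = {
    "orientation": "suggestion",
    "suggestion":  "evaluation",
    "consensus":   "writing",
    "writing":     "orientation",   # cycle on to the next goal
}

def next_state(state, agreed=True):
    """Advance one step through the decision routine."""
    if state == "evaluation":
        # Agreement yields consensus; a (marked) disagreement
        # re-enters the routine with a new suggestion.
        return "consensus" if agreed else "suggestion"
    return ROUTINE[state]

# Tracing the unmarked path reproduces the prototypical sequence.
seq = ["orientation"]
while len(seq) < 6:
    seq.append(next_state(seq[-1]))
print(seq)
# ['orientation', 'suggestion', 'evaluation', 'consensus',
#  'writing', 'orientation']
```

The sketch makes the marked/unmarked asymmetry concrete: only at the evaluation step does the path branch, which is exactly where the data show extra linguistic form being expended.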

     Based on this simple sequence, the coding scheme we
employ identifies three broad classes of discourse function:
MOVE, RESPONSE, and OTHER.  As utterances could often be
coded in several different subcategories, coders assigned an
utterance to the highest possible function within each of
these three categories.  The functions and their
hierarchical arrangement are presented in Table 1. Each
category is described briefly below.  More complete
descriptions can be found in Condon and Cech (1992, 1996).


Table 1.  The coding categories

       A. MOVES                      B. RESPONSES
    Suggests Action               Agrees
    Requests Action               Disagrees
    Requests Validation           Complies with Request
    Requests Information          Acknowledges Only
    Elaborates-Repeats            No Clear RESPONSE
    No Clear MOVE

                        C. OTHER
                     Discourse Marker
                     Orients Suggestion
                     Personal Information
                     No Clear OTHER Function

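The "highest possible function" rule amounts to a priority lookup within each class. A minimal sketch follows; the MOVE ranking reproduces Table 1, but the idea of passing in a precomputed set of applicable labels is our simplification of what coders actually judged from the manual.

```python
# MOVE categories in descending priority, as in Table 1.
MOVE_RANK = ["Suggests Action", "Requests Action", "Requests Validation",
             "Requests Information", "Elaborates-Repeats", "No Clear MOVE"]

def code_move(applicable):
    """Given the set of MOVE labels an utterance could take,
    return the highest-ranked one (default: No Clear MOVE)."""
    for label in MOVE_RANK:
        if label in applicable:
            return label
    return "No Clear MOVE"

# An utterance that both elaborates and requests information is
# coded with the higher function.
print(code_move({"Elaborates-Repeats", "Requests Information"}))
# Requests Information
```

The same lookup would apply, with their own rankings, to the RESPONSE and OTHER classes.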

     MOVE functions are those which invite a response.  In
fact, most MOVE functions are first pair-parts with
obligatory second pair-parts.  The highest MOVE function is
Suggests-Action and it corresponds to the suggestion/input
function in the decision routine, as in (1c).  Below that
function is the category Requests Action, which often
corresponds to the output function in the decision routine.
In this category are placed utterances that propose
behaviors in the speech event.  For example, most
interactions included requests concerned with recording
answers on forms provided to participants, as in (2).

(2)  a.  write it in activities                          (o)
     b.  Hey you write down the details                  (e)
     c.  well list your two down there

     Utterances coded as Requests Information seek
information not already provided in the discourse, as in
(1f), while utterances coded as Requests Validation seek
confirmation or verification of information provided in the
discourse.  The final MOVE category, Elaborates-Repeats,
serves as a catch-all for utterances with comprehensible
content that do not serve any other MOVE or RESPONSE
functions.  Frequently these are repetitions and utterances
that support or comment on suggestions.

     RESPONSE functions generally are the second-pair parts
of MOVE functions.  The highest of these is Agrees, the
routine continuation of Suggestions, followed by Disagrees,
which includes refusals to comply with requests.  The
category Complies with Request identifies utterances that
indicate compliance with any of the three types of requests,
and the category Acknowledges Only was restricted to forms
like _yeah_ that acknowledge previous utterances and to
repetitions of a partner's previous utterance.

     The OTHER function types combine categories designed to
reflect discourse management strategies as well as two
categories included to assess affective functions.  The
category Requests/Offers Personal Information identifies
utterances in which participants discuss personal
information or make other personal comments not required to
complete the task.  The Jokes-Exaggerates category includes
utterances that inject humor.

     The highest OTHER function is Discourse Marker, which
is used for a limited set of forms:  _Ok_, _well_, _anyway_,
_so_, _now_, _let's see_, and _alright_ are the forms coded
as discourse markers.  The Metalanguage category was used to
code utterances about the talk.  Because most utterances
coded as Metalanguage function in discourse management, we
will discuss this category extensively in the next section.
Similarly, the remaining OTHER function, Orientation, will
also receive considerable attention in the next section.

    Managerial Functions in Decision-Making Interactions

     We anticipate that participants in decision-making
interactions will rely on the decision routine in computer-
mediated interaction for the same reasons they rely on the
routine in face-to-face interaction.  The routine provides a
structure of shared understandings and expectations that
make it possible to express and interpret decision-making
functions using a minimal amount of linguistic form.
Therefore, differences between the two modalities should be
limited to differences in the ways that interactants manage
routines along with other discourse functions.  In this
section, we examine the strategies that participants employ
to sequence discourse routines and identify additional
managerial activities in which they engage.

     It is possible to divide the discourse management
activities involved in our tasks into two basic types:
cognitive and interactional.  Cognitively, the decision-
making tasks of planning weekends and social events can be
organized into subtasks in various ways.  One possible
structure is provided on a form that participants were
required to complete for each task.  The form for weekend
planning tasks was divided into sections marked _morning_,
_afternoon_ and _evening_ for each of two days.  The form
for social event tasks was divided into sections for the
time, location, food, beverages, entertainment, and
activities.  Even if these structures are adopted, each
subtask may still be divided into additional subtasks and so
on, until we arrive at the decision routines themselves.
Participants must determine the structure of decision
routines as they talk, which requires both recognizing the
structure that has emerged in previous utterances and
elaborating or modifying the structure in subsequent
contributions.  Consequently, much can be learned about
discourse structure and discourse processing by observing
how participants establish the sequence of decision routines
that they engage in.

     The interactional demands of the task follow from the
fact that cooperative action is required by both consensual
decision-making and verbal interaction in general.  For
example, participants in both face-to-face and computer-
mediated interactions must engage in turn management, which
is constrained by the decision routine.  The continuation of
suggestions by agreements in Figure 1 is obligatorily
associated with two separate turns in the typical manner of
adjacency pairs (Sacks, 1973; Goffman, 1981).  However, the
continuation of orientations by suggestions is not:  the
same speaker often produces both the orientation of a
suggestion and the suggestion, as in (3d,h).  Alternatively,
orientations and suggestions may be structured into turns in
which one speaker orients the suggestion using a request for
information as in (3a,g), while the other complies with the
request by formulating a suggestion as in (3b,h).

(3)  a.  P1:  what do you want to do in the morning?
     b.  P2:  sleep
     c.  P1:  cool I say we lay out in the afternoon
     d.  P2:  ok and at night we party
     e.  P1:  yea
     f.  P2:  ?
     g.  P2:  whats next?
     h.  P1:  I say we shop in the evening then go party
              late night
     i.  P2:  cool you writing this down                 (e)

(3) illustrates how densely the decision routines can be
sequenced, especially in the electronic interactions.

     Since participants' primary concern is completing the
task, we assume that their basic managerial problem is
management of decision routines, and we can observe that
much of this work is accomplished in what we call the
_orientation_ function that typically initiates each
routine.  At the same time that orienting phrases establish
goals for subsequent suggestions, they also locate the talk
within the structure of subtasks that evolves as the
discourse proceeds.  Orientations are often included in the
structure of suggesting clauses as fronted adverbials (3d,
4a).  In (3d) "we party" is understood as something to do at
night, in the same way that "sleep" is understood as
something "to do in the morning."  When orientations are
expressed as fronted adverbials in suggestions, the
utterance is coded Suggests Action in the MOVE class and
Orients in the OTHER class.  Short orienting phrases may
also occur as separate utterances preceding suggestions.
The suggestion may be formulated by the same speaker (4b,c),
although the orienting phrase may also stand alone as an
invitation for a suggestion from another participant
(4d).

(4)  a.  in the evening go to dinner or something        (o)
     b.  ok the food...chicken                           (o)
     c.  FOOD HAMBURGERS, PORK CHOP                      (e)
     d.  ok um in the evening                            (o)

When orientations are expressed in short phrases, the
utterances are coded as Elaborates-Repeats in the MOVE class
and Orients in the OTHER class.

     As we saw in (3a,g), another common way of expressing
orientations is to formulate a request for a suggestion.
Like (3a,g), these are usually interrogatives, as
illustrated in (1f) and (5).

(5)  a.  where are we going to go?                       (o)
     b.  what would you want to do in London Rob this was
         your choice

This strategy structures the routine continuation of
orientations by suggestions into an adjacency pair, which
both provides a specific goal for the suggestion and
contributes to turn management.  In fact, as (5b)
illustrates, interrogative orientations not only structure
the talk into turns, but also provide an opportunity to
select the next speaker in the family interactions where
there are more than two participants.  Orientations
expressed as interrogatives are coded as Requests
Information in the MOVE class and as Orients in the OTHER
class.

     Orientations are also structured in adjacency pairs by
a less frequently-used strategy of expressing orientations
in indirect requests, as in (6).

(6)  a.  we need to find out how many places we want to go
         and how much--how many days we want to spend there
     b.  well, I know, but let's decide when we--when we
         go--want to go across the Channel to France
     c.  we have to do something while we're in Germany  (o)
     d.  We need to decide on somewhere to have it.      (e)

In our coding system, utterances like (6) are coded as
Requests Action in the MOVE class and as Metalanguage in the
OTHER class because they refer to the decision-making
interaction itself.  They are the most overtly managerial
strategies that participants adopt because they explicitly
direct the talk.  These requests can be used to establish a
structure of subtasks (6a,b) or to orient a single decision
sequence (6d), and they have the distinguishing property
that no single person can perform a single action to comply
with the request.  Instead, compliance must be achieved by
cooperative effort from participants in a sequence of
utterances.  They differ in this respect from requests like
(2) and from other requests also coded as Requests Action in
the MOVE class and Metalanguage in the OTHER class such as
those in (7).

(7)  a.  remember we have un--unlimited funds            (o)
     b.  You may decide                                  (e)
     c.  you decide now                                  (o)
     d.  press escape                                    (e)

As (7) illustrates, aside from orientation, metalinguistic
requests often serve managerial functions such as clarifying
the task (7a) and changing the decision routine.  The
participants who produced (7b,c) changed the decision
routine, particularly the criteria for consensus and
adoption of the suggestion, by approving their partners'
suggestions in advance.

     Most utterances coded as Metalanguage in our corpora
can be considered managerial.  In addition to clarifying the
task (7a, 8a,b), metalanguage is used for repairs (8c), to
manage closings (8d,e), and, like metalinguistic requests,
to orient suggestions (8f,g).

(8)  a.  Do we have to stay in Lafayette?                (o)
     b.  It says a friend                                (o)
     c.  sorry i meant to type saturday                  (e)
     d.  ready to do the next problem?                   (o)
     e.  I think we are finished                         (e)
     f.  we have two afternoons to fill up               (o)
     g.  now we have two mornings                        (o)

     In the computer-mediated interactions, the transmission
of the message becomes an object of metalinguistic
expressions, as (9) illustrates.

(9)  a.  Are you there?    		                 (e)
     b.  WAIT MY TYPING SUCKS                            (e)
     c.  I keep pressing the enter key by accident and
         its messing up the message                      (e)
     d.  I'm trying to send you a message, don't press
         anything until i tell you to                    (e)

Participants found it frustrating to wait while their
partners typed messages, and occasionally interrupted their
partners by sending a message while the partner was typing a
reply to a previous message.  The communication software is
designed so that if a participant sends a message while the
partner is typing a message, the latter's unsent message is
replaced by the interrupting message on the screen.
However, the incomplete message remains in the buffer to be
transmitted when the SEND key is pressed.  Clearly,
some participants did not understand this, which caused
them to repeat their messages unnecessarily and to produce
utterances like (9d).  Although
these interruptions were rare, the metalanguage elicited by
these and other transmission management problems contributed
to the large proportion of metalanguage observed in
computer-mediated interactions (see below).  Though they are
rarely topics of conversation in face-to-face interaction,
turn taking and message transmission are not uncommon
subjects of metalanguage in the electronic interactions.
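
The screen-and-buffer behavior described above can be modeled in a few lines. The class and method names below are invented for illustration; they do not come from the actual communication software.

```python
class MessageScreen:
    """Toy model: an incoming message overwrites the display,
    but unsent composed text survives in the buffer."""

    def __init__(self):
        self.screen = ""   # what the participant currently sees
        self.buffer = ""   # unsent text being composed

    def type(self, text):
        self.buffer += text
        self.screen = self.buffer      # composing replaces partner's message

    def receive(self, message):
        self.screen = message          # interrupting message takes the screen
        # note: self.buffer is deliberately NOT cleared

    def send(self):
        sent, self.buffer, self.screen = self.buffer, "", ""
        return sent

p = MessageScreen()
p.type("in the evening we ")
p.receive("Are you there?")   # screen now shows the interruption...
print(p.send())               # ...but the partial message is still sent
```

The model shows why the interruptions were confusing: what a participant saw on screen and what would actually be transmitted could diverge.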

     A final category of linguistic forms that serve
managerial functions is discourse markers.  Pursuing the
idea that these markers are keying or bracketing devices
that bear on frame management (Merritt, 1980; Goffman,
1981), Condon (1986) observed that discourse _ok_ occurs at
routine transitions where several levels of the discourse
coincide, most typically at the beginnings of decision
routines.  In contrast, _well_ introduces non-routine
continuations such as disagreements and requests for action.
These ideas are supported by the corpus collected for the
present study (Condon & Cech, 1995), and we are currently
quantifying additional evidence.  However, to anticipate one
of the results we will discuss further below, discourse
markers are used far less frequently in computer-mediated
interactions.  (10) illustrates how _ok_ and _so_ are
associated with language that serves orientation functions.

(10) a.  OK so in the afternoon first day we there we...
         maybe go sightseeing                            (o)
     b.  OK so let's decide what to do                   (o)
     c.  OK next day what should we do?                  (o)

(10) also illustrates how participants, particularly those
in oral interactions, combine several of the orientation
strategies identified in this section.

     We have observed that participants in our decision-
making tasks have many strategies at their disposal for
managing decision routines.  The orientation portion of the
routine is used extensively to establish a structure and
sequence of subtasks, and orientations may be expressed as
fronted adverbials, interrogatives, and short phrases.
Orientation functions may be accomplished by metalinguistic
requests and statements as well.  Other managerial
activities we observed are clarification of the task,
management of recording answers on the answer forms, repair,
closing, turn management, and in the computer-mediated
condition, management of message transmission.  We also saw
how metalanguage can be used to change the decision routine
itself.  In the next section, we provide some quantitative
data to compare management strategies in the face-to-face
and computer-mediated modalities.

    Quantifying Management Strategies in Two Modalities

     We have asserted that the decision routine provides a
basic structure for the interactions in both oral and
computer-mediated modalities.  The decision routine should
be relied on by participants in both conditions, and any
differences observed should be confined to the management
strategies adopted to structure the routines.  To quantify
these differences, we analyze the proportion of times per
discourse a given function was used.  This has the advantage
of normalizing function use not only across interaction
modality, but also across problem type and, most important,
across dyads.  Thus, in the analyses below, we need not be
concerned with whether one dyad was more talkative than
another.

     Face-to-face interactions produced more language than
computer-mediated ones.  Such a result is hardly surprising
since typing messages requires additional time and effort.
Face-to-face interactions averaged 259 utterances per task,
while computer-mediated interactions averaged only 57
utterances.  Though the lengths of interactions in the two
conditions are very different, the first issue we need to
consider is whether the proportions of functions in those
interactions also differ.  As Table 2 illustrates, most do.


Table 2.  Proportions of functions in two modalities

Function                Average proportion per discourse

                            Face-to-Face    Computer-Mediated
MOVES
Suggests Action             .18               .29
Requests Action             .03               .08
Requests Validation         .04               .03
Requests Information        .06               .15
Elaborates, Repeats         .29               .15
No Clear MOVE               .40               .30

RESPONSES
Agrees                      .11               .17
Disagrees                   .02               .01
Complies with Request       .07               .14
Acknowledges Only           .09               .03
No Clear RESPONSE           .72               .65

OTHER
Discourse Marker            .12               .03
Metalanguage                .05               .17
Orients Suggestion          .07               .17
Personal Information        .03               .04
Jokes, Exaggerates          .01               .01
No Clear OTHER Function     .71               .58


     Analyses of variance that treated discourse (dyad) as
the random variable were performed on the data within each
of the three broad categories, excluding the No Clear
MOVE/RESPONSE/OTHER functions where inclusion would force
levels of the between-discourse factor to the same value.
We found no significant effect of problem type or order (for
details see Condon & Cech, 1996).  However, the interaction
of function type with discourse modality was significant at
the .001-level for all three (MOVE, RESPONSE, OTHER)
function classes.  Tests of simple effects of modality type
for each function indicated that only four proportions did
not differ significantly between the two modalities:
Requests Validation in the MOVE class, Disagrees in the
RESPONSE class, and, in the OTHER class, Personal
Information and Jokes-Exaggerates.

     In spite of the many differences reflected in Table 2,
there is evidence that participants are relying on the
decision routine and other routine sequences such as
adjacency pairs.  The ratio of suggestions to agreements in
both modalities is nearly identical:  1.64 in the oral
condition and 1.71 in the computer-mediated condition.
Moreover, the ratio of requests to compliances is identical:
if we sum the average proportions of requests for action,
validation and information, the ratio of these to the
proportions of compliance is 1.86 in both modalities.
Finally, a rough approximation of the proportion of
management functions in the discourse can be obtained by
summing the average proportions of discourse marker,
metalanguage, and orientation functions.  Then it can be
observed that the ratio of management functions to
suggestions is 1.33 in the oral condition and 1.27 in the
computer-mediated condition.  These similarities in the
midst of the many differences evident in Table 2 illustrate
how the same processing mechanisms can accomplish the same
goals using different strategies.
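As a sanity check, these ratios can be recomputed directly from the
Table 2 proportions.  The following sketch (the variable names are
ours, not the coding manual's) reproduces the arithmetic:

```python
# Average proportions per discourse from Table 2:
# (face-to-face, computer-mediated)
table2 = {
    "suggests_action":      (0.18, 0.29),
    "requests_action":      (0.03, 0.08),
    "requests_validation":  (0.04, 0.03),
    "requests_information": (0.06, 0.15),
    "agrees":               (0.11, 0.17),
    "complies":             (0.07, 0.14),
    "discourse_marker":     (0.12, 0.03),
    "metalanguage":         (0.05, 0.17),
    "orients":              (0.07, 0.17),
}

for i, modality in enumerate(("face-to-face", "computer-mediated")):
    t = {name: props[i] for name, props in table2.items()}
    # Suggestions per agreement: 1.64 oral, 1.71 computer-mediated.
    sugg_per_agree = t["suggests_action"] / t["agrees"]
    # All requests (action + validation + information) per
    # compliance: 1.86 in both modalities.
    requests = (t["requests_action"] + t["requests_validation"]
                + t["requests_information"])
    req_per_comply = requests / t["complies"]
    # Management functions (markers + metalanguage + orientation)
    # per suggestion: 1.33 oral, 1.28 computer-mediated (reported
    # as 1.27 in the text).
    management = t["discourse_marker"] + t["metalanguage"] + t["orients"]
    mgmt_per_sugg = management / t["suggests_action"]
    print(f"{modality}: {sugg_per_agree:.2f} {req_per_comply:.2f} "
          f"{mgmt_per_sugg:.2f}")
```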

     A further prediction that follows from the claim that
the decision routine provides a basic structure for
interactions is that utterances functioning in the routine
should occur more frequently than utterances serving other
functions.  Conforming to this prediction, Suggests Action
was the most frequent MOVE function and Agrees was the most
frequent RESPONSE function in the computer-mediated
interactions.  Agreements were also the most frequent
RESPONSE function in the oral condition.  However,
suggestions were not the most frequent MOVE function in the
oral interactions.  Though the proportion of suggestions in
the oral condition far exceeds the proportions of requests,
the catch-all category Elaborates-Repeats was more frequent.
Of course, this category includes writing talk and
orientations, both of which are associated with the decision
routine.  If the proportions of these were combined with the
proportion of suggestions, then the proportion of MOVE
functions associated with the decision sequence would be
higher in the oral condition.  In contrast, writing talk
does not occur in the computer-mediated condition and much
of the elaboration and repetition found in the oral
interactions was eliminated in the computer-mediated
modality.  In fact, it is remarkable that the proportions of
Suggests Action and Elaborates-Repeats in Table 2 are almost
exactly reversed in the two conditions.  We suggest the
possibility that this reversal demonstrates increased
efficiency of processing in computer-mediated communication.
As participants maximize the efficiency of their
interactions to achieve the same goals using less linguistic
form, their talk conforms more closely to the default
decision routines.

     Why should processing be more efficient in computer
talk?  As we indicated in the Introduction, this medium
requires greater effort to compose and send messages.  It
also requires more sustained attention due to the slower
pace of the interaction caused by the delay between sending
a message and receiving a reply.  Thus, there appear to be
powerful constraints in this condition pressuring
participants to increase the efficiency of their
communications.  Conforming more closely to the decision
sequence and packing more functionality into an utterance
are management techniques that enable this increased
efficiency.  We will refer to such effects as _compression_.

     One consequence of compression appears to be that a
larger portion of the discourse is devoted to generating
suggestions, which are essential to making decisions.
Moreover, if we measure _functional load_ in terms of the
number of clear MOVE, RESPONSE, and OTHER functions served
by an utterance, we observe that the functional load of
utterances in the computer-mediated condition far exceeds
that of utterances in the oral condition.  Thus, in the oral
condition, if we exclude the No-Clear-MOVE/RESPONSE/OTHER-
Function categories, an average of 60% of utterances served
some MOVE function, an average 29% served some RESPONSE
function, and an average 28% served some OTHER function.  In
contrast, in the computer-mediated condition, the averages
are 70% of utterances serving some MOVE function, 35%
serving some RESPONSE function, and 42% serving some OTHER
function.  Clearly, utterances in electronic interactions
are serving more discourse functions than those in the
face-to-face interactions.

     Compression and the fact that writing talk does not
occur in the computer-mediated interactions can account for
most of the differences in proportions of Suggests Action
and Elaborates-Repeats in the two conditions.  Moreover,
because the proportions of MOVES to RESPONSES in the two
conditions are the same, differences in proportions of
RESPONSE functions can be assumed to follow for the most
part from differences in the proportions of MOVE functions.
If our assumptions are correct, then the remaining
differences in the proportions of utterances classified as
Requests Action, Requests Information, Discourse Markers,
Metalanguage, and Orients should be attributable to
differences in discourse management strategies.  Based on
our qualitative observations, we know that utterances in the
last three categories reflect discourse management
strategies.  Therefore, the differences in proportions of
discourse markers, metalanguage and orientations are
consistent with the prediction that differences should occur
in discourse management strategies rather than in the
discourse routines themselves.

     Despite nearly identical overall proportions of
managerial functions to suggestions in the two modalities,
the specific managerial functions do differ qualitatively.
Thus, as Table 2 demonstrates, discourse markers occur on
average four times more frequently in the oral condition
than in the electronic condition, but metalanguage and
orientation are more frequent in the electronic condition
(more than twice as frequent in each case).  Similarly, we
may look at the differences in terms of the ratio of
suggestions to a given managerial function in each
condition.  The ratio of suggestions to discourse markers in
the face-to-face interactions is 1.5, while the ratio of
suggestions to discourse markers in the computer-mediated
interactions is considerably larger (9.67).  The
corresponding ratio differences are not so extreme for
metalanguage and orientation, but they remain considerable.
In short, a decrease in discourse markers corresponds to an
increase in metalanguage and utterances fitting the Orients
category.
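These ratios follow directly from the Table 2 proportions.  A
minimal sketch of the arithmetic (labels are ours):

```python
# Average proportions from Table 2: (face-to-face, computer-mediated)
suggests = (0.18, 0.29)
functions = {
    "discourse markers": (0.12, 0.03),
    "metalanguage":      (0.05, 0.17),
    "orientation":       (0.07, 0.17),
}

for name, props in functions.items():
    oral, electronic = (suggests[i] / props[i] for i in range(2))
    # Markers: 1.50 oral vs. 9.67 computer-mediated; the contrast is
    # milder for metalanguage (3.60 vs. 1.71) and orientation
    # (2.57 vs. 1.71).
    print(f"suggestions per {name}: {oral:.2f} oral, "
          f"{electronic:.2f} computer-mediated")
```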

     The tradeoff between discourse markers and more
elaborate management strategies supports Condon's (1986)
suggestion that discourse markers signal whether the talk is
expected (e.g., conforms to a discourse routine) or
unexpected (as in dispreferred seconds).  Moreover, markers
like _ok_ signal expected continuations where several levels
of discourse structure coincide.  Thus, _ok_ and _so_
frequently precede the orientation function of decision
routines because these initiate not only the single act of
orienting, but also an entire structure of acts, the
decision routine.  Furthermore, the goal for the decision
routine is determined by the larger structuring of the task
into subtasks, so that the initiation of a decision routine
also locates the talk within that larger structure.

     Since so many levels of structure coincide at the
beginnings of decision routines, participants seem to find
it useful to employ a form which indicates that default
expectations are active for each level.  In this respect,
then, the markers function much like back-channel cues
(Schegloff, 1982), and this may explain why participants in
the computer-mediated interactions made relatively little
use of discourse markers.  Since the only channel available
in computer-mediated interaction is the written form, back-
channelling is impossible, and the many functions that back-
channel cues serve, such as signalling that participants are
paying attention or that messages are being received and
understood, must be accomplished using other strategies,
which we observe as increases in metalanguage and
orientation.

     Therefore, as anticipated, different management
strategies in the two modalities are reflected in the
proportions of utterances coded as Discourse Marker, Meta-
language, and Orients.  Moreover, we can now suggest that
the differences in proportions of requests in the two
modalities also reflect different discourse management
strategies.  In the previous section, we explained how the
coding system makes it possible to discriminate among
strategies for encoding orientation.  Utterances in which
orientations are expressed as fronted adverbials as in
(3d,h) are coded as Suggests Action and Orients, while
interrogative orientations (5) are coded as Requests
Information and Orients.  Orientations expressed as short
phrases (4b-d) are coded as Elaborates-Repeats and Orients.
Metalinguistic requests (6) are coded as Requests Action and
Metalanguage, while metalanguage in interrogative form
(8a,d) is coded as Requests Information and Metalanguage.
Finally, metalanguage in declarative form (9c,d) is coded as
Elaborates-Repeats and Metalanguage.  Table 3 presents the
average proportions of these paired categories in each
modality.


Table 3.  Managerial pair proportions in the two modalities

Pair                       Face-to-Face    Computer-Mediated

Suggests Action/Orients        .01              .06
Requests Info/Orients          .02              .10
Elaborates/Orients             .03              .01

Requests Action/Metalanguage   .01              .07
Requests Info/Metalanguage     .01              .03
Elaborates/Metalanguage        .02              .05


     We can summarize the results in Table 3 by observing
that encoding managerial functions using adjacency pairs is
more likely in computer-mediated interactions.  The
proportions of orientations and other managerial functions
that are expressed as requests range from 3 to 7 times
higher in the computer-mediated interactions than in the
oral interactions.  The proportion of orientations
incorporated into suggestions is 6 times larger in the
electronic interactions, and provides evidence that the
strategies adopted in the computer-mediated condition do not
specifically favor requests, but adjacency pairings in
general.  These figures are especially impressive when
compared to the proportions in Table 2. Since the total
proportion of Suggests Action in the oral condition is .18
and the proportion of Suggests Action also coded as Orients
is .01, only an average of 6% of suggestions in that
condition included orientation in the form of a fronted
adverbial.  In contrast, by the same reasoning, an average
of 21% of suggestions in the electronic condition
incorporated this orientation strategy.  Similarly, in
face-to-face interactions, 33% of requests for information
also function as orientation compared to 67% in electronic
interactions.  In the oral condition, 33% of requests for
action were metalinguistic, while 88% of requests for action
in the electronic condition used metalanguage.  These
figures provide a dramatic illustration of the increase in
functional efficiency brought about by compression.
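The percentage figures in this paragraph are obtained by dividing
each Table 3 paired proportion by the corresponding Table 2 total.
A sketch of that arithmetic (the descriptive labels are ours):

```python
# For each strategy: (paired proportion from Table 3,
# total proportion from Table 2), per modality.
pairs = {
    # face-to-face            computer-mediated
    "suggestions with fronted-adverbial orientation":
        ((0.01, 0.18), (0.06, 0.29)),   # 6% vs. 21%
    "information requests that also orient":
        ((0.02, 0.06), (0.10, 0.15)),   # 33% vs. 67%
    "action requests that are metalinguistic":
        ((0.01, 0.03), (0.07, 0.08)),   # 33% vs. 88%
}

for label, (ftf, cmc) in pairs.items():
    share_ftf = 100 * ftf[0] / ftf[1]
    share_cmc = 100 * cmc[0] / cmc[1]
    print(f"{label}: {share_ftf:.0f}% face-to-face, "
          f"{share_cmc:.0f}% computer-mediated")
```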

     It is also worth noting that the differences in Table 3
comport well with the differences in the proportions of
requests that occur in Table 2. Thus the difference in the
proportions of Requests Information in Table 2, which is
.09, matches the difference in Table 3 between the
proportions of utterances coded both as Requests Information
and as Orients, which is .08, plus the difference in the
proportions of utterances coded both as Requests Information
and as Metalanguage, which is .02.  Similarly, the
difference in the proportions of Requests Action in Table 2,
which is .05, mirrors the difference in Table 3 between the
proportion of utterances coded as both Requests Action and
Metalanguage, which is .06.  Consequently, the higher Table
2 proportions of requests for action and requests for
information in computer-mediated conditions are largely due
to the higher proportions of requests with managerial
functions.  In contrast, expression of orientations in short
phrases (Elaborates/Orients) decreases in the
computer-mediated modality along with use of discourse
markers.
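The correspondence described here is simple arithmetic over the two
tables, and it holds only up to the rounding of the published
two-decimal proportions.  A sketch:

```python
# Requests Information: Table 2 difference between modalities
# vs. the summed Table 3 differences for its managerial pairings.
t2_req_info = 0.15 - 0.06                       # .09 in Table 2
t3_req_info = (0.10 - 0.02) + (0.03 - 0.01)     # .08 + .02 = .10
print(round(t2_req_info, 2), round(t3_req_info, 2))

# Requests Action: Table 2 difference vs. the Table 3 difference
# for the Requests Action/Metalanguage pairing.
t2_req_act = 0.08 - 0.03                        # .05 in Table 2
t3_req_act = 0.07 - 0.01                        # .06 in Table 3
print(round(t2_req_act, 2), round(t3_req_act, 2))
```

The small residuals (.09 vs. .10, .05 vs. .06) are what one would
expect from proportions reported to two decimal places.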

     Given the adverse conditions for turn management in the
computer-mediated interactions, it is not surprising that
participants prefer to encode orientations as first
pair-parts of adjacency pairs.  By requiring a second
pair-part from the next speaker, the first pair-part
provides a turn-taking structure for the discourse,
including the possibility of selecting the next speaker as
in (5b).  Furthermore, adjacency pairs can be viewed as the
simplest instantiation of a discourse routine consisting of
the single continuation of a first pair-part by a second
pair-part.  Consequently, the processing efficiency afforded
by discourse routines applies as well to adjacency pairs.
These properties make them powerful tools for structuring
discourse into manageable units.

                         Discussion

     The data are consistent with the claim that discourse
is structured according to the problem-solving decision
sequence presented in Figure 1. Participants in both the
face-to-face and computer-mediated modalities appeared to
rely on the decision routine and other common routine
structures such as adjacency pairs.  Differences between the
two modalities occurred primarily in use of discourse
management strategies.  Participants employ different
strategies to perform managerial functions and we identified
a variety of these activities:  cognitive structuring of the
task into decision routines, clarification of the task,
orientation of decision routines, turn management, repair,
closings, and, especially in the computer-mediated modality,
management of message transmission.

     Management strategies encode managerial functions in a
variety of linguistic forms (discourse markers, short
phrases, declaratives, interrogatives, imperatives), and the
functions are frequently overlaid on familiar routines such
as requesting action, suggesting, and requesting
information.  Though participants in the computer-mediated
interactions eliminate unnecessary elaborations and
repetitions, they devote more linguistic form to orientation
and other managerial functions.  In contrast, participants
in the oral interactions relied on discourse markers and
short orienting phrases to perform much of this work.
Furthermore, there is a marked preference for encoding
managerial functions as the first pair-parts of adjacency
pairs in the computer-mediated interactions, allowing turn
management to be combined with other management functions.

     The data are also consistent with the claim that
participants in electronic communications seek to increase
the efficiency of the discourse.  A particularly good
example of this is the exchange in (11).  This exchange is
remarkable because it relies so heavily on the tacit
expectations of the decision routine.  In (11) no linguistic
forms express the agreement as participants move directly
from a suggestion (11b,d) to the next orientation (11c,e).

(11) a.  P1:  WHAT DAY
     b.  P2:  Sunday
     d.  P2:  touch football,volleyball,softball
     f.  P2:  we could hire a magician and comedian
     g.  P1:  MUSIC SHA!                                 (e)

     The discourse model outlined above anticipates
sequences like (11) because of the minimal criteria needed
for consensus:  although normally one would expect a
requirement of at least one agreement and no disagreement,
as (11) demonstrates, the interaction can indeed proceed _in
the absence_ of explicitly marked agreement.  This nicely
illustrates our argument that routines are useful because
they make it possible to communicate efficiently by reducing
the amount of linguistic encoding necessary to express
discourse functions (Condon & Cech, 1996).  This reduction
is accomplished by relying on shared understandings such as
discourse routines and, especially, the understanding that
functions anticipated in routines do not need to be made
explicit in the language.  Since agreement is expected
following a suggestion, it is often reduced to a minimal
encoding such as _OK_, _yeah_ or _cool_.  In contrast, a
disagreement would require some additional linguistic form
to signal the dispreferred function.  Consequently, an
absence of any linguistic form at all should signal
agreement.  We rarely observe sequences like (11) involving
no marking of an agreement, but clearly, they exist, and
clearly, they are sanctioned by our model.  Moreover, the
few examples that we do observe all occur in the
computer-mediated condition, in which we expect compression
to increase reliance on discourse routines.

     The compression effect suggests that computer-mediated
modalities can provide a focused environment for more
efficient decision-making in domains, such as the workplace,
where efficient decision-making is valued.  However, care
should be taken in interpreting this finding.  Our measures
of the number of utterances used and the proportions of
functions served do not take into account the amount of time
spent typing, which varies according to the ability of
individual participants.  Furthermore, we did not measure
the quality of decisions.  In a recent study that assessed
decision quality, Olaniran (1994) found that participants in
computer-mediated conditions generated more ideas and took
longer to reach consensus than in face-to-face conditions.
The quality of decisions was highest in conditions that
combined face-to-face and computer-mediated sessions.  Like
other forms of interaction, therefore, computer-mediated
interaction is a complex phenomenon that will require
careful research before generalizations about overall
efficiency can be made.

     The line of research described here suggests a number
of directions for future study.  Many additional
quantitative analyses remain to be performed in order to
test further predictions of the decision-making model.  We
are currently analyzing data on utterance length to test the
prediction that routine functions are encoded using less
linguistic form than non-routine continuations.  Because the
communication software records the time at which
participants begin typing a message and the time at which
the message is sent, we can calculate reaction time measures
to test predictions about processing efficiency for routine
functions.  In addition, we plan to recode utterances with
management functions to specify the functions identified
above and refine our understanding of differences in the
proportions of metalanguage observed.  For example, we would
like to be able to specify how much of the threefold
increase in metalanguage in computer-mediated interactions
is due to message transmission management.  We look forward
to exploring these possibilities in our future research.

                       Acknowledgments

     This paper has benefitted from discussion of an earlier
version presented to the 1995 GURT Presession on Computer-
Mediated Discourse Analysis and from the comments of two
anonymous reviewers.  The project has received support from
the University of Southwestern Louisiana through Faculty
Development and Summer Research grants.  We are grateful to
the following students who assisted in refining the coding
system and coding the data:  Ryan Aubert, Eileen Barton,
Joyce Lane, Tom Petitjean, Tracy Smrcka and John Strawn.

                         References

Condon, S. (1986).  The discourse functions of OK.
     Semiotica, 60, 73-101.

Condon, S., & Cech, C. (1992).  Manual for coding decision-
     making interactions.  Unpublished manuscript.
     University of Southwestern Louisiana.  (Revised 1995).

Condon, S., & Cech, C. (1995, March).  Discourse markers
     signal markedness of continuations.  Paper presented at
     the International Linguistics Association Annual
     Meeting, Georgetown University.

Condon, S., & Cech, C. (1996).  Functional comparison of
     face-to-face and computer-mediated decision-making
     interactions.  In S. Herring (Ed.), Computer-mediated
     communication:  Linguistic, social, and cross-cultural
     perspectives (pp. 65-80).  Philadelphia:  John
     Benjamins.

Condon, S., Cooper C., & Grotevant, H. (1984).  Manual for
     the analysis of family discourse.  Psychological
     Documents, 14 (1), Document no. 2616.

Cooper, C., Grotevant, H., & Condon, S. (1982).
     Methodological challenges of selectivity in family
     interaction:  Addressing temporal patterns of
     individuation.  Journal of Marriage and the Family, 44,

Cooper, C., Grotevant, H., & Condon, S. (1983).
     Individuality and connectedness in the family as a
     context for adolescent identity formation and
     role-taking skill.  In H. Grotevant and C. Cooper
     (Eds.), Adolescent development in the family (pp.
     43-60).  San Francisco:  Jossey-Bass Inc.

Goffman, E. (1974).  Frame analysis.  New York:  Harper and
     Row.

Goffman, E. (1981).  Forms of talk.  Philadelphia:
     University of Pennsylvania Press.

Herring, S. (Ed.)  (1996).  Computer-mediated communication:
     Linguistic, social, and cross-cultural perspectives.
     Philadelphia:  John Benjamins.

Kellerman, K., Broetzmann, S., Lim, T., & Kitao, K. (1989).
     The conversation MOP:  Scenes in the stream of
     discourse.  Discourse Processes, 12, 27-61.

Levinson, S. (1983).  Pragmatics.  New York:  Cambridge
     University Press.

Merritt, M. (1980).  On the use of `O.K.' in service
     encounters.  In R. Bauman & J. Sherzer (Eds.), Language
     and speech in American society.  Austin, Texas:
     Southwestern Educational Development Laboratory.

Minsky, M. (1975).  A framework for representing knowledge.
     In P. Winston (Ed.), The psychology of computer vision.
     New York:  McGraw-Hill.

Olaniran, B. (1994).  Group performance in computer-mediated
     and face-to-face communication media.  Management
     Communication Quarterly, 7 (3), 256-281.

Sacks, H. (1973).  Lecture notes, Summer Institute of
     Linguistics.  Ann Arbor, Michigan.

Schank, R. (1982).  Dynamic memory.  New York:  Cambridge
     University Press.

Schank, R., & Abelson, R. (1977).  Scripts, plans, goals and
     understanding.  Hillsdale, New Jersey:  Lawrence
     Erlbaum.

Schegloff, E. (1982).  Discourse as an interactional
     achievement:  Some uses of 'uh huh' and other things
     that come between sentences.  In D. Tannen (Ed.),
     Analyzing discourse:  Text and talk (pp. 71-93).
     Washington, D.C.:  Georgetown University Press.

Tannen, D. (1979).  What's in a frame?  Surface evidence for
     underlying expectations.  In R. Freedle (Ed.), New
     directions in discourse processing.  Norwood, New
     Jersey:  Ablex.
                      Copyright 1996
   Communication Institute for Online Scholarship, Inc.

     This file may not be publicly distributed or reproduced
without written permission of the Communication Institute
for Online Scholarship, P.O.  Box 57, Rotterdam Jct., NY
12150 USA (phone:  518-887-2443).