Emotions in Humans and Artifacts
Robert Trappl, Paolo Petta, and Sabine Payr
Emotions have been much studied and discussed in recent years.
Most books, however, treat only one aspect of emotions, such as
emotions and the brain, emotions and well-being, or emotions and
computer agents. This interdisciplinary book presents recent work
on emotions in neuroscience, cognitive science, philosophy,
computer science, artificial intelligence, and software and game
development. The book discusses the components of human
emotion and how they might be incorporated into machines,
whether artificial agents should convey emotional responses to
human users and how such responses could be made believable,
and whether agents should accept and interpret the emotions of
users without displaying emotions of their own. It also covers the
evolution and brain architecture of emotions, offers vocabularies
and classifications for defining emotions, and examines emotions
in relation to machines, games, virtual worlds, and music.

April 2003
ISBN 0262201429
400 pp.
35 illus.
$52.00 (hardback)
$41.60 (hardback)
TABLE OF CONTENTS
PREFACE
1 EMOTIONS: FROM BRAIN RESEARCH TO COMPUTER GAME DEVELOPMENT
BY ROBERT TRAPPL AND SABINE PAYR
2 A THEORY OF EMOTION, ITS FUNCTIONS, AND ITS ADAPTIVE VALUE
BY EDMUND T. ROLLS
3 HOW MANY SEPARATELY EVOLVED EMOTIONAL BEASTIES LIVE WITHIN US?
BY AARON SLOMAN
4 DESIGNING EMOTIONS FOR ACTIVITY SELECTION IN AUTONOMOUS AGENTS
BY LOLA D. CAÑAMERO
5 EMOTIONS: MEANINGFUL MAPPINGS BETWEEN THE INDIVIDUAL AND ITS WORLD
BY KIRSTIE L. BELLMAN
6 ON MAKING BELIEVABLE EMOTIONAL AGENTS BELIEVABLE
BY ANDREW ORTONY
7 WHAT DOES IT MEAN FOR A COMPUTER TO "HAVE" EMOTIONS?
BY ROSALIND W. PICARD
8 THE ROLE OF ELEGANCE IN EMOTION AND PERSONALITY: REASONING FOR BELIEVABLE AGENTS
BY CLARK ELLIOTT
9 THE ROLE OF EMOTIONS IN A TRACTABLE ARCHITECTURE FOR SITUATED COGNIZERS
BY PAOLO PETTA
10 THE WOLFGANG SYSTEM: A ROLE OF "EMOTIONS" TO BIAS LEARNING AND PROBLEM SOLVING WHEN LEARNING TO COMPOSE MUSIC
BY DOUGLAS RIECKEN
11 A BAYESIAN HEART: COMPUTER RECOGNITION AND SIMULATION OF EMOTION
BY EUGENE BALL
12 CREATING EMOTIONAL RELATIONSHIPS WITH VIRTUAL CHARACTERS
BY ANDREW STERN
CONCLUDING REMARKS
BY ROBERT TRAPPL
Preface
In Steven Spielberg’s movie Artificial Intelligence the question is
raised: Do robots have emotions, especially love? Can they? Should
they?
The study of emotions has become a hot research area during
recent years. However, there is no book that covers this topic from
different perspectives—from the brain researcher, the cognitive
scientist, the philosopher, the AI researcher, the software developer,
the interface designer, the computer game developer—thus
enabling an overview of the current approaches and models.
We therefore invited leading scientists and practitioners from
these research and development areas to present their positions
and discuss them in a two-day workshop held at the Austrian
Research Institute for Artificial Intelligence in Vienna. We had all
discussions both video- and audiotaped and the transcripts sent to
the participants.
This book opens with an overview chapter, which also presents
the motivation—both for the book and the research covered—with
greater precision. It is followed by the chapters prepared by the
participants especially for the book. Most of the chapters are followed
by a condensation of the often vivid and intense discussions.
However, on several occasions, questions were raised or comments
were expressed at specific moments during the presentations—
which would hardly be understandable if placed at the end of the
chapter. We decided to place them at the appropriate spot, but to
separate them clearly from the body of the chapter as dialogs, thus
enabling the
reader to skip them if desired. The book ends with a short chapter
‘‘Concluding Remarks,’’ followed by ‘‘Recommended Readings,’’
short biographies, and the index.
First, we want to thank the contributors who took great pains to
enhance their original position papers into book chapters by
including new material and by considering the comments in and
outside the discussions.
Furthermore, we want to thank our colleagues at the Austrian
Research Institute for Artificial Intelligence for their support—
especially Isabella Ghobrial-Willmann for efficiently organizing the
travel and the hotels for the participants; Gerda Helscher, for her
hard work preparing transcripts from the tape recordings; Christian
Holzbaur for his technical expertise; and Ulrike Schulz, for carefully
helping us in the final preparation of the book manuscript.
Bob Prior of MIT Press was an ideal partner in our endeavor.
Finally, we want to thank the Austrian taxpayers whose money
enabled us to pay for the travel and hotels of the participants at the
workshop. We are grateful to Dr. Norbert Rozsenich, Sektionschef,
and Dr. René Fries, Ministerialrat, both currently at the Federal
Ministry of Transport, Innovation, and Technology, who channeled
this money to a project of which this workshop formed an
integral part.
It is our hope that this book will serve as a useful guide to the
different approaches for understanding the purpose and function
of emotions, both in humans and in artifacts, and assist in
developing the corresponding computer models.
1 Emotions: From Brain Research to Computer Game Development
Robert Trappl and Sabine Payr
1.1 Motivation
There are at least three reasons why the topic of emotions has
become such a prominent one during the last several years.
In the early nineties, a strange fact surfaced when people with
specific brain lesions (which left their intellectual capacities intact
but drastically reduced their capacity to experience emotions)
were asked to make rational decisions: This capacity was severely
diminished. These psychological experiments were repeated several
times with similar results (Damásio 1994). As a result, the
surprising conclusion had to be drawn that rationality and
emotionality are not opposed or even contradictory but that the
reverse is true: Emotionality—evidently not including tantrums—is a
prerequisite of rational behavior. In contrast, in interviews with
twenty leading cognitive scientists in the United States (Baumgartner
and Payr 1995), none of them mentioned emotions! Also,
AI research, which had proceeded for decades without considering
emotions, had to incorporate this new aspect of research (figure 1.1).
A second reason: Humans—not only naive ones—treat computers
like persons. This can be shown in several cleverly designed
experiments (e.g., Reeves and Nass 1996). In one of these
experiments, the test persons had to work with a dialog-oriented
training program on a computer. Afterward, they were divided into
two groups to evaluate the computer’s performance. One group had
to answer the evaluation questions on the same computer that was
used for the training task; the other group used a different
computer. The persons who had to answer the questions on the same
computer gave significantly more positive responses than the
others—that is, they were less honest and more cautious—which
means they automatically applied rules of politeness just as they
would have done in interactions with persons. This happened
even when the persons asked were students of computer science!
Therefore, if people have emotional relations to computers, why
not make the computer recognize these emotions, or make it
express emotions (e.g., Picard 1997)?
Figure 1.1 The development of AI research paradigms: physical
symbol systems, neural networks, embodied artificial intelligence,
intelligent software agents, emotions.
From yet another area: Computer animation has made such rapid
progress that it is now possible to generate faces and bodies that
are hardly distinguishable from human ones, such as the main
character in the movie Final Fantasy by Hironobu Sakaguchi,
released in 2001. At present, mimic, gesture, and general motion of
these synthetic actors have to be either programmed or ‘‘motion
captured’’ by cameras from human actors. Given the increasing
number of such synthetic actors, this will become more and more
difficult and, in the case of real-time interaction in, for example,
computer games or educational programs, simply impossible. In
order to give these synthetic actors some autonomy, personality
models, at least simple ones initially, have to be developed in
which emotions play a major role (Trappl and Petta 1997).
As a result, the number of books and papers published on this
subject during recent years has increased considerably. However,
they usually treat emotions only in one respect—for example:
emotions and the brain, emotions and agents, emotions and
computer users.
1.2 Aims of the Book
But what are the different aspects of emotions—emotions in
humans and in artifacts? How are emotions viewed, analyzed, and,
especially, modeled in different disciplines, for different purposes?
What would happen if scientists and developers from different
disciplines (from brain research, cognitive science, philosophy,
computer science, and artificial intelligence, to developers of
software and computer games) came together to present and discuss
their views? The result of this interdisciplinary endeavor is this
book, with the original position papers enhanced by incorporating
additional, new material and by presenting condensations of the
often intense and vivid discussions of the differing positions.
1.3 Contents
Chapter 2 is the contribution by Edmund Rolls, ‘‘A Theory of
Emotion, Its Functions, and Its Adaptive Value.’’ The logical—but
not the easiest—way into the broad issue of emotions in humans
and artifacts is to ask what emotions are, or better: to try to delineate
the class of those states in animals (including humans) that should
be called emotional. Edmund Rolls offers a definition wherein
emotional states are necessarily tied to reward and punishment:
Emotions are states produced by instrumental reinforcers. Whether
the reinforcers are primary (unlearned) or secondary (learned by
association with primary reinforcers) is not a criterion; rather, he
draws the dividing line between those stimuli that are (positively
or negatively) reinforcing and those that are not—in which case
they can evoke sensations, but not emotions.
Rolls’s evolutionary approach to emotions leads him to search
for the function of the emotional system in increasing fitness (i.e.,
the chance to pass on one’s genes). He finds this potential in
the reaction-independent nature of emotions, or, in his words:
‘‘rewards and punishments provide a common currency for inputs
to response selection mechanisms.’’ The evolutionary advantage,
then, is to allow the animal to select arbitrarily among several
possible (re)actions, thus increasing its adaptability to complex and
new environments.
As to the underlying brain architecture, Rolls distinguishes
two different paths to action: On the one hand, there is the
‘‘implicit’’ system involving the orbitofrontal cortex and the
amygdala. This system is geared toward rapid, immediate reactions
and tends to accumulate the greatest possible reward in the present.
On the
other hand, there is the ‘‘explicit’’ system, in which the symbol-
processing capabilities and short-term memory play an important
part. This system allows some mammals, including humans, to
make multistep plans that defer expected rewards to some point in
the future, overriding the implicit system’s promise of immediate
rewards. This second system would be, for Rolls, related to
consciousness as the ability to reflect on one’s own thoughts (and
some emotional states, as we might add), an ability that is necessary
in order to evaluate and correct the execution of complex
plans.
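Rolls’s two-route picture lends itself to a small illustration. The sketch below is our own construction, not code from the chapter: rewards form a single common currency, the implicit route maximizes immediate reward, and the explicit route can defer reward through a multistep plan.

```python
# Illustrative sketch (ours, not Rolls's): two routes to action over a
# shared "common currency" of rewards. The implicit route greedily takes
# the best immediate reward; the explicit route evaluates whole plans,
# including deferred future reward, and can override the implicit choice.

def implicit_choice(options):
    """Pick the action with the highest immediate reward."""
    return max(options, key=lambda o: o["now"])

def explicit_choice(options):
    """Pick the action with the highest total reward across a plan,
    counting rewards deferred to the future."""
    return max(options, key=lambda o: o["now"] + o["later"])

def act(options, can_plan):
    """Agents with symbol processing and short-term memory (can_plan)
    may override the implicit system's immediate-reward choice."""
    return explicit_choice(options) if can_plan else implicit_choice(options)
```

Only when `can_plan` is true does a smaller immediate reward lose to a larger deferred one, mirroring the override Rolls describes.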
Aaron Sloman’s chapter ‘‘How Many Separately Evolved
Emotional Beasties Live within Us?’’ is concerned with the confusion
surrounding the use in AI of emotion and related concepts that
are not yet well understood. Before discussing, for example, the
necessity of emotions for artifacts or the functions of emotions in
animals, an effort has to be made to understand what is meant and
involved in each case.
Sloman proposes a complex architecture that could allow an
approach to these widely varying and ill-defined phenomena
from an information-processing perspective. The architecture is
based on an overlay of the input-output information flow model
(perception-processing-action) and a model of three processing
levels (reactive-deliberative-reflective)—reminiscent of the brain
evolution model (‘‘old’’ reptilian and ‘‘newer’’ mammalian brains).
This architecture results in what Sloman terms an ‘‘ecology of
mind,’’ in that it accommodates coevolved suborganisms that
acquire and use different kinds of information and process it in
different ways, sometimes cooperating and sometimes competing
with each other. This architecture—contrary to many proposals
currently made by AI research—does not contain a specific
‘‘emotional system’’ whose function supposedly is to produce emotions.
Instead, emotions are emergent properties of interactions between
components that are introduced for other reasons and whose
functioning may or may not generate emotion.
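That claim can be illustrated with a minimal three-layer sketch; every class, percept, and label here is our own illustrative assumption, not Sloman’s code. No component produces emotion: the ‘‘anxious’’ label is merely the reflective layer’s description of a pattern of interruptions between the other two layers.

```python
# Toy sketch of an emergent "emotion": no emotion module exists, only a
# reactive layer that can seize control from a deliberative layer, and a
# reflective layer that labels the resulting interaction pattern.

class ReactiveLayer:
    """Fast, hard-wired responses to alarming percepts."""
    def react(self, percept):
        if percept == "threat":
            return "flee"  # immediate action, bypassing the current plan
        return None

class DeliberativeLayer:
    """Slower, plan-based action selection."""
    def __init__(self):
        self.plan = ["gather", "build", "rest"]
    def next_step(self):
        return self.plan.pop(0) if self.plan else "idle"

class ReflectiveLayer:
    """Monitors the other layers and labels emergent disturbances."""
    def assess(self, interruptions):
        # "anxious" names an interaction pattern, not a module's output
        return "anxious" if interruptions >= 2 else "calm"

def run_agent(percepts):
    reactive = ReactiveLayer()
    deliberative = DeliberativeLayer()
    reflective = ReflectiveLayer()
    interruptions = 0
    actions = []
    for p in percepts:
        urgent = reactive.react(p)
        if urgent is not None:  # reactive layer seizes control
            interruptions += 1
            actions.append(urgent)
        else:
            actions.append(deliberative.next_step())
    return actions, reflective.assess(interruptions)
```

Note that removing the `assess` method changes nothing about the agent’s behavior, which is the point: the emotion-like state is a description of the interaction, not a causal component.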
Sloman’s view is that with the progress of research in this field,
emotion, as well as other categories originating from naive
psychology, may either become useless or change their meaning
radically—in analogy to some categories used in the early days of
physics.
With chapter 4, ‘‘Designing Emotions for Activity Selection
in Autonomous Agents,’’ by Lola Cañamero, we fully enter the
domain of emotions in artifacts. Cañamero first presents an
overview and categorization of different approaches to emotional
modeling and evaluates them with regard to their adequacy for
the design of mechanisms where emotions guide the selection of
actions. From the point of view of the modeling goal, she considers
phenomenon-based (or black box) models as arising from an
engineering motivation, whereas design-based (or process) models are
motivated by more scientific concerns. From the perspective of
their view on emotions, she distinguishes component-based and
functional models. Component-based models (e.g., Picard’s)
postulate that an artificial agent has emotions when it has certain
components characterizing human (animal) emotional systems.
Functional models (e.g., Frijda’s), by contrast, focus on how to
transpose the properties of humans and their environment to a
structurally different system so that the same functions and roles
arise. Not surprisingly, functional models turn out to be more
useful for action selection.
Cañamero explores the relationship between emotion, motivation,
and behavior in the experimental system ‘‘Gridland,’’ a
dynamic environment with simple virtual beings. The connection
between emotions, motivations, and behavior is achieved through
the physiological metaphor of homeostatically controlled variables
(drives) and ‘‘hormones.’’ Her motivation is clearly that of the
engineer when she seeks to make virtual beings benefit from the
better survival chances of emotional animals (including humans):
rapid reactions, resolution of goal conflicts, and the social function
of signaling relevant events to others.
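The coupling described here, homeostatically controlled variables whose deficits become drive intensities that bias action selection, can be sketched roughly as follows. The variable names, setpoints, and winner-take-all rule are our own illustrative assumptions, not the published Gridland model.

```python
# Hypothetical homeostatic action selection: each controlled variable
# has a setpoint; its deficit, scaled by a global "hormone" gain, is the
# intensity of the corresponding drive, and the strongest drive wins.

SETPOINTS = {"energy": 1.0, "warmth": 1.0}

# The corrective behavior each drive pushes for (illustrative names).
BEHAVIOR_FOR = {"energy": "eat", "warmth": "seek_heat"}

def drive_intensities(state, hormone=1.0):
    """Deficit of each variable below its setpoint, modulated by a
    global 'hormone' gain."""
    return {var: hormone * max(0.0, SETPOINTS[var] - level)
            for var, level in state.items()}

def select_action(state, hormone=1.0):
    """Winner-take-all: the most intense drive selects the behavior;
    with no deficit, the agent defaults to exploring."""
    drives = drive_intensities(state, hormone)
    strongest = max(drives, key=drives.get)
    return BEHAVIOR_FOR[strongest] if drives[strongest] > 0 else "explore"
```

Raising the `hormone` gain amplifies all deficits at once, a crude stand-in for the global modulatory role the chapter assigns to ‘‘hormones.’’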
Kirstie Bellman’s ‘‘Emotions: Meaningful Mappings Between the
Individual and Its World’’ (chapter 5) first describes a specific
function of emotions and its modeling (as presented in chapter 4).
Then it presents a broader perspective on the part played by
emotions in cognition, social interaction, and the emergence of the self.
Based on that, virtual worlds as open test beds with a hybrid
(agent and human) population are presented. From an information-
processing perspective—which has been, historically, dominant
in cognitive science and artificial intelligence—the functions of
emotions are primarily seen in their implications for decision
making, arousal, and motivation. Bellman, however, goes on to
explore the role of emotions in establishing (a sense of) ‘‘self’’:
emotions are always personal and experienced from a first-person
point of view. The ‘‘self’’ is always a perceived and felt self.
Emotions, then, and the self-perception of them, play a crucial part in
integrating cognitive and vital processes into a continuous and global
construction. Without this integration, Bellman argues, it is not
possible for a being to ‘‘make sense of the world.’’
She then advocates virtual worlds as test beds for collecting a
new level of observations on the interactions among humans
and agents. Virtual worlds have the advantage of offering well-
specified environments where human and virtual actors can meet
on common ground, and, at the same time, of allowing the capture
and analysis of interactions among humans, among agents, and
between humans and agents.
In chapter 6, ‘‘On Making Believable Emotional Agents
Believable,’’ Andrew Ortony starts out from the requirement that for the
behavior of an emotional agent to be believable, it has to be
consistent across similar situations, but also coherent across different
kinds of situations and over longer periods of time. Emotions,
motivations, and behaviors are not randomly related to the
situations that give rise to them. While the mapping from types of
emotions to classes of response tendencies is flexible, it is not
totally accidental, in either form or intensity.
Ortony distinguishes three major types of emotion response
tendencies—expressive, information processing, and coping. Which
tendencies will prevail in an individual (or in an artifact) is
to some degree a question of the individual’s personality.
Personality, in this view, is not a mere description of observed
regularities, but a generative mechanism that drives behavior. Only
personality models that reduce the vast number of observed
personality traits to a small number of dimensions are capable of
providing such a ‘‘behavior engine.’’ Factor structure is one such
model, offering between two and five basic factors. Regulatory
focus (‘‘promotion’’ vs. ‘‘prevention’’ focus), characterized as a
preference for either gain/no-gain or nonloss/loss situations, is
another option.
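How a low-dimensional personality model could serve as such a ‘‘behavior engine’’ can be hinted at with a toy sketch. The two dimensions chosen, the numeric weights, and the mapping to response tendencies are illustrative assumptions of ours, not Ortony’s model.

```python
# Toy "behavior engine": a two-dimensional personality vector weights
# the three response-tendency types named in the chapter, so the same
# emotion-eliciting situation yields different tendency profiles for
# differently parameterized individuals.

TENDENCIES = ("expressive", "information_processing", "coping")

def response_weights(extraversion, promotion_focus):
    """Map two illustrative personality dimensions (each in 0..1) to
    normalized weights over the three response-tendency types."""
    raw = {
        "expressive": 0.2 + 0.8 * extraversion,        # outgoing -> expressive
        "information_processing": 0.5,                 # fixed baseline
        "coping": 0.2 + 0.8 * (1.0 - promotion_focus), # prevention -> coping
    }
    total = sum(raw.values())
    return {t: raw[t] / total for t in TENDENCIES}

def dominant_tendency(extraversion, promotion_focus):
    """The response tendency this personality makes most likely."""
    weights = response_weights(extraversion, promotion_focus)
    return max(weights, key=weights.get)
```

The point of the sketch is structural: behavior is generated from a few dimensions rather than read off a long trait list, which is what makes such a model usable as a generative mechanism.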
For building a believable agent, it is therefore necessary to
ensure appropriate internal responses (emotions) and appropriate
external responses (behavior and behavioral inclinations), to
arrange for coordination between internal and external responses,
and to provide for individual appropriateness through a personality model.
In chapter 7, ‘‘What Does It Mean for a Computer to ‘Have’
Emotions?’’ Rosalind Picard approaches this question from two angles.
First she lists possible reasons why computers should have
certain emotional abilities. Her concern has been with ways in
which computers could be more adaptive to users’ feelings—
especially those of frustration. It may not be necessary for such an
adaptive system to ‘‘have’’ emotions as long as it recognizes and
deals with frustration in a useful way, but the nature of this task,