219-7677 10 7500817 John Benjamins Publishing Company Marketing Department / Karin Plijnaar, Pieter Lamers onix@benjamins.nl 201611101209 ONIX title feed eng 01 EUR
3007632 03 01 01 JB John Benjamins Publishing Company 01 JB code NLP 8 Eb 15 9789027288400 06 10.1075/nlp.8 13 2009048316 DG 002 02 01 NLP 02 1567-8202 Natural Language Processing 8 <TitleType>01</TitleType> <TitleText textformat="02">Close Engagements with Artificial Companions</TitleText> <Subtitle textformat="02">Key social, psychological, ethical and design issues</Subtitle> 01 nlp.8 01 https://benjamins.com 02 https://benjamins.com/catalog/nlp.8 1 B01 Yorick Wilks Wilks, Yorick Yorick Wilks University of Oxford 01 eng 340 xxii 315 COM004000 v.2006 UYZ 2 24 JB Subject Scheme CONS.GEN Consciousness research 24 JB Subject Scheme LIN.COMPUT Computational & corpus linguistics 24 JB Subject Scheme PHIL.GEN Philosophy 06 01 What will it be like to admit Artificial Companions into our society? How will they change our relations with each other? How important will they be in the emotional and practical lives of their owners – since we know that people became emotionally dependent even on simple devices like the Tamagotchi? How much social life might they have in contacting each other? The contributors to this book discuss the possibility and desirability of long-term computer Companions, some form of which now seems certain to arrive in the coming years. It is a good moment to consider, from a set of wide interdisciplinary perspectives, both how we shall construct them technically and what their personal, philosophical and social consequences will be. By Companions we mean conversationalists or confidants – not robots – but rather computer software agents whose function will be to get to know their owners over a long period. Those owners may well be elderly or lonely, and the contributions in the book focus not only on assistance via the internet (contacts, travel, doctors etc.) but also on providing company and Companionship, by offering aspects of real personalization.
04 09 01 https://benjamins.com/covers/475/nlp.8.png 04 03 01 https://benjamins.com/covers/475_jpg/9789027249944.jpg 04 03 01 https://benjamins.com/covers/475_tif/9789027249944.tif 06 09 01 https://benjamins.com/covers/1200_front/nlp.8.hb.png 07 09 01 https://benjamins.com/covers/125/nlp.8.png 25 09 01 https://benjamins.com/covers/1200_back/nlp.8.hb.png 27 09 01 https://benjamins.com/covers/3d_web/nlp.8.hb.png 10 01 JB code nlp.8.00for xi xii 2 Miscellaneous 1 <TitleType>01</TitleType> <TitleText textformat="02">Foreword</TitleText> 10 01 JB code nlp.8.02ack xii 1 Miscellaneous 2 <TitleType>01</TitleType> <TitleText textformat="02">Acknowledgements</TitleText> 10 01 JB code nlp.8.01contr xiii xxii 10 Miscellaneous 3 <TitleType>01</TitleType> <TitleText textformat="02">Contributors</TitleText> 10 01 JB code nlp.8.02s1 Section header 4 <TitleType>01</TitleType> <TitleText textformat="02">Section I. Setting the scene</TitleText> 10 01 JB code nlp.8.03tur 3 10 8 Article 5 <TitleType>01</TitleType> <TitleText textformat="02">In good company?</TitleText> <Subtitle textformat="02">On the threshold of robotic Companions</Subtitle> 1 A01 Sherry Turkle Turkle, Sherry Sherry Turkle 01 Most contributors to this volume believe that only technical matters stand between where we are now and a time when robots will be our Companions and teachers. Robots need to expand their domains of understanding, and if those domains should be emotional, well, that will be a technical matter as well. So while this volume expresses designers&#8217; enthusiasm about robots as technical objects, it challenges us to see robots as something more, as evocative objects. What are we thinking about when we are thinking about robots? We are thinking about aliveness and authenticity, love and spirituality. We are thinking about what it means to build a psychology. We are thinking about what makes people special. Or perhaps that they are not so special after all.
10 01 JB code nlp.8.04wil 11 20 10 Article 6 <TitleType>01</TitleType> <TitleText textformat="02">Introducing artificial Companions</TitleText> 1 A01 Yorick Wilks Wilks, Yorick Yorick Wilks 01 This introductory chapter, like the whole book itself, concerns a range of closely related topics: the possibility of machines having identifiable personalities, the possible future legal responsibilities of such companionable machines, and the design characteristics of such machines, including their technical implementation and the kinds of computational theory they will need to embody. As will become clear, I wish to explore these topics in terms of software entities, rather than robots, and in particular the sort of software agents now being encountered on the web, ranging at present from technical advisers to mere chatbots. I shall call them Companions. This introduction explores the following related aspects of a Companion in more detail: what kind and level of personality should be in a machine agent so as to be acceptable to a human user, more particularly to one who may fear technology and have no experience of it; and what levels of responsibility and legal attribution for responsibility can we expect from entities like complex web agents in the near future? 10 01 JB code nlp.8.05s2 Section header 7 <TitleType>01</TitleType> <TitleText textformat="02">Section II. Ethical and philosophical issues</TitleText> 10 01 JB code nlp.8.06flo 23 28 6 Article 8 <TitleType>01</TitleType> <TitleText textformat="02">Artificial Companions and their philosophical challenges</TitleText> 1 A01 Luciano Floridi Floridi, Luciano Luciano Floridi 01 The technology for Artificial Companions is already largely available, and the question is when rather than whether ACs will become commodities (Benyon and Mival, 2007). The difficulties are still formidable, but they are not insurmountable.
On the contrary, they seem rather well-understood, and the path from theoretical problems to technical solutions looks steep but climbable. In the following pages, I wish to concentrate not on the technological challenges, which are important, but on some philosophical issues that a growing population of ACs will make increasingly pressing. 10 01 JB code nlp.8.07pul 29 34 6 Article 9 <TitleType>01</TitleType> <TitleText textformat="02">Conditions for Companionhood</TitleText> 1 A01 Stephen G. Pulman Pulman, Stephen G. Stephen G. Pulman 01 This chapter is an attempt to outline a set of conditions that are jointly necessary and sufficient for any entity, virtual or real, to be regarded as displaying the properties characteristic of our ordinary everyday understanding of the notion of a Companion. 10 01 JB code nlp.8.08oha 35 56 22 Article 10 <TitleType>01</TitleType> <TitleText textformat="02">Arius in cyberspace</TitleText> <Subtitle textformat="02">Digital Companions and the limits of the person</Subtitle> 1 A01 Kieron O'Hara O'Hara, Kieron Kieron O'Hara 01 The relationship between a Companion and a person will become increasingly problematic as Companion technology improves and as models of users become increasingly sophisticated, and the simple dichotomies that make the Turing Test so plausible as a means of determining intelligence will become harder to maintain. As with any kind of content-storing technology, such as writing, or in more recent years laptops, the amount and quality of cognition that a human can &#8216;export&#8217; to these outside technologies is significant. The &#8216;person&#8217; or &#8216;agent&#8217; can be seen as an extended system including the technologies as well as the human, in which the technologies, among other things, can help in the extension of trust towards the human. 10 01 JB code nlp.8.09s3 Section header 11 <TitleType>01</TitleType> <TitleText textformat="02">Section III.
Social and psychological issues</TitleText> <Subtitle textformat="02">What should a Companion be like?</Subtitle> 10 01 JB code nlp.8.10bod 59 61 3 Article 12 <TitleType>01</TitleType> <TitleText textformat="02">Conversationalists and confidants</TitleText> 1 A01 Margaret A. Boden Boden, Margaret A. Margaret A. Boden 01 If there were an abstract for this very short paper, it would be this: &#8220;Conversationalists, maybe &#8211; but confidants?&#8221; 10 01 JB code nlp.8.11bry 63 74 12 Article 13 <TitleType>01</TitleType> <TitleText textformat="02">Robots should be slaves</TitleText> 1 A01 Joanna J. Bryson Bryson, Joanna J. Joanna J. Bryson 01 Robots should not be described as persons, nor given legal nor moral responsibility for their actions. Robots are fully owned by us. We determine their goals and behavior, either directly or indirectly through specifying their intelligence or how their intelligence is acquired. In humanising them, we not only further dehumanise real people, but also encourage poor human decision making in the allocation of resources and responsibility. This is true at both the individual and the institutional level. This chapter describes both causes and consequences of these errors, including consequences already present in society. I make specific proposals for best incorporating robots into our society. The potential of robotics should be understood as the potential to extend our own abilities and to address our own goals. 
10 01 JB code nlp.8.12eva 75 88 14 Article 14 <TitleType>01</TitleType> <TitleText textformat="02">Wanting the impossible</TitleText> <Subtitle textformat="02">The dilemma at the heart of intimate human-robot relationships</Subtitle> 1 A01 Dylan Evans Evans, Dylan Dylan Evans 01 In a recent book entitled Love and Sex with Robots, the British scholar David Levy has argued that relationships with robot Companions might be more satisfying than relationships with humans, a claim which I call &#8220;the greater satisfaction thesis&#8221; (GST). The main reason Levy provides in support of GST is that people will be able to specify the features of robot Companions precisely in accordance with their wishes (which I call the total specification argument or TSA). In this paper, I argue that TSA is wrong. In particular, the argument breaks down when we consider certain behavioral characteristics that we desire in our partners. I illustrate my argument with a thought-experiment involving two kinds of robot &#8211; the FREEBOT, which is capable of rejecting its owner permanently, and the RELIABOT, which is not. 10 01 JB code nlp.8.13lev 89 94 6 Article 15 <TitleType>01</TitleType> <TitleText textformat="02">Falling in love with a Companion</TitleText> 1 A01 David Levy Levy, David David Levy 01 In 1984, in her groundbreaking book The Second Self, Sherry Turkle made us aware of the tendency of some people to develop relationships with their computers. Turkle described one such example, an MIT computer hacker whom she called Anthony, who had &#8220;tried out&#8221; having girlfriends but preferred to relate to computers. I believe that the developments in AI since then have demonstrated a progression in human-computer relationships to the point where we can now say with confidence that, in the foreseeable future, significant numbers of Anthonys and their female counterparts will be falling in love with software Companions.
This position paper summarizes my arguments. 10 01 JB code nlp.8.14low 95 100 6 Article 16 <TitleType>01</TitleType> <TitleText textformat="02">Identifying your accompanist</TitleText> 1 A01 Will Lowe Lowe, Will Will Lowe 01 What Companions are or should be will depend on what we want them to do for us. And this depends on what they can do for us. In the end, I shall argue that whether Companions should best be thought of, programmed and regulated as others, or as extensions of ourselves, is a tactical question. One path will simply be more effective than the other. Either way, it&#8217;s going to be all about us. To make a start let us switch to the second person: What can a Companion do for you? 10 01 JB code nlp.8.15rom 101 106 6 Article 17 <TitleType>01</TitleType> <TitleText textformat="02">Look, emotion, language and behavior in&#160;a&#160;believable virtual Companion</TitleText> 1 A01 Daniela M. Romano Romano, Daniela M. Daniela M. Romano 01 A good friend is one who makes you laugh, shares your deepest emotions and is there to listen and help you when you need it. Can we have a synthetic Companion with these qualities? He/she (we assume it has the status of a person at this stage) should be able to understand and express emotions, to talk and listen, and to know who you are and what you feel and like. The Companion considered in this chapter is a synthetic creature living in a virtual world, able to display believable human-like qualities. The concept of believability within virtual environments is also discussed here, together with some of the problems connected with the creation of such a Companion.
10 01 JB code nlp.8.16tay 107 120 14 Article 18 <TitleType>01</TitleType> <TitleText textformat="02">New Companions</TitleText> 1 A01 Alex Taylor Taylor, Alex Alex Taylor 2 A01 Anab Jain Jain, Anab Anab Jain 3 A01 Laurel Swan Swan, Laurel Laurel Swan 01 This chapter draws on advancements in Artificial Intelligence (AI) and robotics to question the orthodoxy of artificial Companions research. Two areas of work are outlined and used to suggest artificial Companions need not be restricted to simulacra of humans or animals. First, it is argued that developments in AI have given rise to the prospect of very different kinds of machines, machines that are unlike humans or animals but that we may still want to form relationships with. Second, details are presented of a project exploring energy autonomous robots. Ecobot, an example of such a robot, is shown to exhibit unique characteristics that may afford new, distinctive forms of Companionship. Finally, a design concept of an autonomously powered household radio is presented to illustrate how these new kinds of relationships might be investigated further. 10 01 JB code nlp.8.17wil 121 128 8 Article 19 <TitleType>01</TitleType> <TitleText textformat="02">On being a Victorian Companion</TitleText> 1 A01 Yorick Wilks Wilks, Yorick Yorick Wilks 01 I have argued or suggested: &#8211; English Common Law already, in dogs, has a legal category of entities that are not human but are in some degree responsible for their actions and have &#8220;characters&#8221; that can be assessed. &#8211; Users may not want Companions prone to immediately expressed emotions, and a restrained personality, like that of a Victorian Lady&#8217;s Companion, might provide a better model. &#8211; Language behavior is a complex repository of triggers for emotion, both expressed and causal, and this is often under-rated in the world of ECAs and theories of emotion based on them.
&#8211; Companion-to-Companion communications will be important and helpful to a user, and there is nothing in principle to make one believe that &#8220;secrets&#8221; cannot be handled sensitively in such an environment. &#8211; It is easy to underestimate the role of a user&#8217;s preference in selecting the personality appropriate to a Companion: it is not even clear that users want Companions to be polite or agreeable &#8211; it may depend on personal choice or their functional role. &#8211; For many it may be appropriate for a Companion to become progressively more like its owner in voice, face, personality, memories etc. &#8211; exaggerating the way dogs are believed to adapt to owners &#8211; and if and when this becomes possible, for the Companion to become a self-avatar of its owner, there may well be other unseen consequences after the owner&#8217;s death. 10 01 JB code nlp.8.18s4 Section header 20 <TitleType>01</TitleType> <TitleText textformat="02">Section IV. Design issues</TitleText> <Subtitle textformat="02">Building a Companion</Subtitle> 10 01 JB code nlp.8.19bee 131 142 12 Article 21 <TitleType>01</TitleType> <TitleText textformat="02">The use of affective and attentive cues in&#160;an&#160;empathic computer-based Companion</TitleText> <TitlePrefix>The </TitlePrefix> <TitleWithoutPrefix textformat="02">use of affective and attentive cues in&#160;an&#160;empathic computer-based Companion</TitleWithoutPrefix> 1 A01 Nikolaus Bee Bee, Nikolaus Nikolaus Bee 2 A01 Elisabeth Andre Andre, Elisabeth Elisabeth Andre 3 A01 Thurid Vogt Vogt, Thurid Thurid Vogt 4 A01 Patrick Gebhard Gebhard, Patrick Patrick Gebhard 01 Recently, a number of research projects have been started to create virtual agents that do not just serve as assistants to which tasks may be delegated, but that may even take on the role of a Companion.
Such agents require a great deal of social intelligence, such as the ability to detect the user&#8217;s affective state and to respond to it in an empathic manner. The objective of our work is to create an empathetic listener that is capable of reacting to affective and attentive input cues from the user. In particular, we discuss various forms of empathy and how they may be realized based on these cues. 10 01 JB code nlp.8.20bev 143 156 14 Article 22 <TitleType>01</TitleType> <TitleText textformat="02">GRETA</TitleText> <Subtitle textformat="02">Towards an interactive conversational virtual&#160;Companion</Subtitle> 1 A01 Elisabetta Bevacqua Bevacqua, Elisabetta Elisabetta Bevacqua 2 A01 Ken Prepin Prepin, Ken Ken Prepin 3 A01 Radoslaw Niewiadomski Niewiadomski, Radoslaw Radoslaw Niewiadomski 4 A01 Etienne de Sevin Sevin, Etienne de Etienne de Sevin 5 A01 Catherine Pelachaud Pelachaud, Catherine Catherine Pelachaud 01 In this chapter we present our work toward building a conversational Companion. Conversing with partner(s) means being able to express one&#8217;s mental and emotional state, and to be a speaker or a listener. One also needs to adapt to one&#8217;s partner&#8217;s reactions to what one is saying. We have developed an interactive ECA platform, Greta (Pelachaud, 2005). It is a 3D virtual agent capable of communicating expressive verbal and nonverbal behaviors as well as listening. It can use its gaze, facial expressions and gestures to convey a meaning, an attitude or an emotion. Multimodal behaviors are tightly tied with each other. A synchronization scheme has been elaborated allowing the agent to display a raised eyebrow or a beat gesture on a given word. According to its emotional or mental state, the agent may vary the quality of its behaviors: it may use a more or less extended gesture, and the arms can move at different speeds and with different accelerations (Mancini &amp; Pelachaud, 2008). The agent can also display listener behavior (Bevacqua et al., 2008).
It interacts actively with users and/or other agents, providing appropriately timed backchannels. Interaction also means that the interactants ought to adapt to each other&#8217;s behaviors, and the dynamic coupling between them needs to be considered (Prepin &amp; Revel, 2007). 10 01 JB code nlp.8.21cat 157 168 12 Article 23 <TitleType>01</TitleType> <TitleText textformat="02">A world-hybrid approach to a conversational Companion for reminiscing about images</TitleText> <TitlePrefix>A </TitlePrefix> <TitleWithoutPrefix textformat="02">world-hybrid approach to a conversational Companion for reminiscing about images</TitleWithoutPrefix> 1 A01 Roberta Catizone Catizone, Roberta Roberta Catizone 2 A01 Simon F. Worgan Worgan, Simon F. Simon F. Worgan 3 A01 Yorick Wilks Wilks, Yorick Yorick Wilks 4 A01 Alexiei Dingli Dingli, Alexiei Alexiei Dingli 5 A01 Weiwei Cheng Cheng, Weiwei Weiwei Cheng 10 01 JB code nlp.8.22cow 169 172 4 Article 24 <TitleType>01</TitleType> <TitleText textformat="02">Companionship is an emotional business</TitleText> 1 A01 Roddy Cowie Cowie, Roddy Roddy Cowie 01 This is written from the perspective of someone who was trained as a psychologist, and has been working for a decade on emotion-oriented/affective computing. That background highlights two kinds of issue: how emotion enters into the Companion scenario, and how computing can relate to emotion. In both areas, there is a difference between the intuitions of people who are not deeply involved, and the realities as they appear to people working in the area. The goal of this paper is to consider how the realities of emotion and emotion-oriented technology impact on the prospects for artificial Companions. The concern behind it is that otherwise, we may misjudge both the prospects and the risks. In particular, the ability to address the emotional side of Companionship may play a key part in acceptance; and the necessary resources, conceptual as well as technical, cannot be taken for granted.
We should be concerned about inserting Companions into emotionally sensitive roles without engineering them to take that into account. 10 01 JB code nlp.8.23new 173 178 6 Article 25 <TitleType>01</TitleType> <TitleText textformat="02">Artificial Companions in society</TitleText> <Subtitle textformat="02">Consulting the users</Subtitle> 1 A01 Alan Newell Newell, Alan Alan Newell 01 The idea that Artificial Companions could provide Companionship and help people with their daily tasks is an intriguing and exciting one. In addition to the important technical challenges of providing such systems, it is also clearly important to consider: 1. What facilities they should and should not provide. 2. The characteristics, needs and wants of potential users, and 3. The personal and social consequences of such technologies. These are complex, multi-faceted issues and it is necessary to consider the most effective and beneficial ways of addressing them. 10 01 JB code nlp.8.24slo 179 200 22 Article 26 <TitleType>01</TitleType> <TitleText textformat="02">Requirements for Artificial Companions</TitleText> <Subtitle textformat="02">It&#8217;s harder than you think</Subtitle> 1 A01 Aaron Sloman Sloman, Aaron Aaron Sloman 01 Producing a system that meets plausible requirements for Artificial Companions (ACs), without arbitrary restrictions, will involve solving a great many problems that are currently beyond the state of the art in Artificial Intelligence (AI), including problems that would arise in the design of robotic Companions helping an owner by performing practical tasks in the physical environment.
In other words, even if the AC is not itself a robot and interacts with the user only via input devices such as camera, microphone, keyboard, mouse, touch-pad, and touch-screen, and output devices such as screen and audio output devices, nevertheless it will, in some circumstances, need the visual competences, the ontology, the representational resources, the reasoning competences, the planning competences, and the problem-solving competences that a helpful domestic robot would need. This is because some of the intended beneficiaries of ACs will need to be given advice about what physical actions to perform, what physical devices to acquire, and how to use such devices. I shall give examples illustrating the need for such competences. 10 01 JB code nlp.8.25win 201 208 8 Article 27 <TitleType>01</TitleType> <TitleText textformat="02">You really need to know what your bot(s) are&#160;thinking about you</TitleText> 1 A01 Alan FT Winfield Winfield, Alan FT Alan FT Winfield 01 The projected ubiquity of personal Companion robots raises a range of interesting but also challenging questions. There can be little doubt that an effective artificial Companion, whether embodied or not, will need to be both sensitive to the emotional state of its human partner and able to respond sensitively. It will, in other words, need artificial theory of mind &#8211; such an artificial Companion would need to behave as if it has feelings and as if it understands how its human partner is feeling. This chapter explores the implementation and implications of artificial theory of mind, and raises concerns over the asymmetry between an artificial Companion&#8217;s theory of mind for its human partner and the human&#8217;s theory of mind for his or her artificial Companion. The essay argues that social learning (imitation) is an additional requirement of artificial Companion robots, then goes on to develop the idea that an artificial Companion robot will not be one robot but several.
A surprising consequence of these ideas is that a family of artificial Companion robots could acquire an artificial culture of its own, and the essay concludes by speculating on what this might mean for human(s) interacting with their artificial Companion robots. 10 01 JB code nlp.8.26s5 Section header 28 <TitleType>01</TitleType> <TitleText textformat="02">Section V. Special purpose Companions</TitleText> 10 01 JB code nlp.8.27eyn 211 220 10 Article 29 <TitleType>01</TitleType> <TitleText textformat="02">A Companion for learning in everyday life</TitleText> <TitlePrefix>A </TitlePrefix> <TitleWithoutPrefix textformat="02">Companion for learning in everyday life</TitleWithoutPrefix> 1 A01 Rebecca Eynon Eynon, Rebecca Rebecca Eynon 2 A01 Chris Davies Davies, Chris Chris Davies 01 The use of the Companion could have broad learning benefits. For example, enabling learners to have control over their projects of learning could potentially lead to an improved level of self-efficacy beliefs (Kim and Baylor, 2006), improved IT skills or a general confidence to try new things. We believe that this broader view of the benefits of such a Companion makes the positive implications outweigh the negative. 10 01 JB code nlp.8.28nir 221 244 24 Article 30 <TitleType>01</TitleType> <TitleText textformat="02">The Maryland virtual patient as a task-oriented conversational Companion</TitleText> <TitlePrefix>The </TitlePrefix> <TitleWithoutPrefix textformat="02">Maryland virtual patient as a task-oriented conversational Companion</TitleWithoutPrefix> 1 A01 Sergei Nirenburg Nirenburg, Sergei Sergei Nirenburg 01 This chapter describes a conversational agent environment, the Maryland Virtual Patient (MVP). MVP models the process of disease progression, diagnosis and treatment in virtual patients endowed with a &#8220;body,&#8221; a simulation of their physiological and pathological processes, and a &#8220;mind,&#8221; a set of capabilities of perception, reasoning and action
that allows the virtual patient to exhibit independent behavior, participate in a natural language dialog, remember events, hold beliefs about other agents and about specific object and event instances, make decisions and learn. 10 01 JB code nlp.8.29sha 245 256 12 Article 31 <TitleType>01</TitleType> <TitleText textformat="02">Living with robots</TitleText> <Subtitle textformat="02">Ethical tradeoffs in eldercare</Subtitle> 1 A01 Noel Sharkey Sharkey, Noel Noel Sharkey 2 A01 Amanda Sharkey Sharkey, Amanda Amanda Sharkey 01 We discuss some of the research and ideas for developing robot carers and Companions for people with aging brains. We speculate a little about a near future when it may be possible to keep people at home for longer in the almost exclusive care of robots and smart homes. We examine the benefits of robot care and Companionship and at the same time raise key ethical questions and concerns. We point to a series of trade-offs between the unethical and the beneficial that must be considered before robot care/Companionship becomes commonplace. 10 01 JB code nlp.8.30s6 Section header 32 <TitleType>01</TitleType> <TitleText textformat="02">Section VI. Afterword</TitleText> 10 01 JB code nlp.8.31pel 259 286 28 Article 33 <TitleType>01</TitleType> <TitleText textformat="02">Summary and discussion of the issues</TitleText> 1 A01 Malcolm Peltu Peltu, Malcolm Malcolm Peltu 2 A01 Yorick Wilks Wilks, Yorick Yorick Wilks 01 The COMPANIONS project, which inspired this book, is studying conversational software-based artificial agents that will get to know their owners over a substantial period. These could be developed to advise, comfort and carry out a wide range of functions to support diverse personal and social needs, such as to be &#8216;artificial Companions&#8217; for the elderly, helping their owners to learn, or assisting to sustain their owners&#8217; fitness and health.
This chapter summarizes the main issues raised in the workshop that gave rise to this book. Most direct quotes from participants in this chapter come from their own chapters. Appendix 1 contains examples of current artificial Companions and related research projects mentioned at the workshop. 10 01 JB code nlp.8.32ref 287 308 22 Miscellaneous 34 <TitleType>01</TitleType> <TitleText textformat="02">References</TitleText> 10 01 JB code nlp.8.33index 309 316 8 Miscellaneous 35 <TitleType>01</TitleType> <TitleText textformat="02">Index</TitleText> 02 JBENJAMINS John Benjamins Publishing Company 01 John Benjamins Publishing Company Amsterdam/Philadelphia NL 04 20100324 2010 John Benjamins 02 WORLD 13 15 9789027249944 01 JB 3 John Benjamins e-Platform 03 jbe-platform.com 09 WORLD 21 01 00 99.00 EUR R 01 00 83.00 GBP Z 01 gen 00 149.00 USD S 526007631 03 01 01 JB John Benjamins Publishing Company 01 JB code NLP 8 Hb 15 9789027249944 13 2009048316 BB 01 NLP 02 1567-8202 Natural Language Processing 8 <TitleType>01</TitleType> <TitleText textformat="02">Close Engagements with Artificial Companions</TitleText> <Subtitle textformat="02">Key social, psychological, ethical and design issues</Subtitle> 01 nlp.8 01 https://benjamins.com 02 https://benjamins.com/catalog/nlp.8 1 B01 Yorick Wilks Wilks, Yorick Yorick Wilks University of Oxford 01 eng 340 xxii 315 COM004000 v.2006 UYZ 2 24 JB Subject Scheme CONS.GEN Consciousness research 24 JB Subject Scheme LIN.COMPUT Computational & corpus linguistics 24 JB Subject Scheme PHIL.GEN Philosophy 06 01 What will it be like to admit Artificial Companions into our society? How will they change our relations with each other? How important will they be in the emotional and practical lives of their owners – since we know that people became emotionally dependent even on simple devices like the Tamagotchi? How much social life might they have in contacting each other? 
The contributors to this book discuss the possibility and desirability of long-term computer Companions, some form of which now seems certain to arrive in the coming years. It is a good moment to consider, from a set of wide interdisciplinary perspectives, both how we shall construct them technically and what their personal, philosophical and social consequences will be. By Companions we mean conversationalists or confidants – not robots – but rather computer software agents whose function will be to get to know their owners over a long period. Those owners may well be elderly or lonely, and the contributions in the book focus not only on assistance via the internet (contacts, travel, doctors etc.) but also on providing company and Companionship, by offering aspects of real personalization. 04 09 01 https://benjamins.com/covers/475/nlp.8.png 04 03 01 https://benjamins.com/covers/475_jpg/9789027249944.jpg 04 03 01 https://benjamins.com/covers/475_tif/9789027249944.tif 06 09 01 https://benjamins.com/covers/1200_front/nlp.8.hb.png 07 09 01 https://benjamins.com/covers/125/nlp.8.png 25 09 01 https://benjamins.com/covers/1200_back/nlp.8.hb.png 27 09 01 https://benjamins.com/covers/3d_web/nlp.8.hb.png 10 01 JB code nlp.8.00for xi xii 2 Miscellaneous 1 <TitleType>01</TitleType> <TitleText textformat="02">Foreword</TitleText> 10 01 JB code nlp.8.02ack xii 1 Miscellaneous 2 <TitleType>01</TitleType> <TitleText textformat="02">Acknowledgements</TitleText> 10 01 JB code nlp.8.01contr xiii xxii 10 Miscellaneous 3 <TitleType>01</TitleType> <TitleText textformat="02">Contributors</TitleText> 10 01 JB code nlp.8.02s1 Section header 4 <TitleType>01</TitleType> <TitleText textformat="02">Section I.
Setting the scene</TitleText> 10 01 JB code nlp.8.03tur 3 10 8 Article 5 <TitleType>01</TitleType> <TitleText textformat="02">In good company?</TitleText> <Subtitle textformat="02">On the threshold of robotic Companions</Subtitle> 1 A01 Sherry Turkle Turkle, Sherry Sherry Turkle 01 Most contributors to this volume believe that only technical matters stand between where we are now and a time when robots will be our Companions and teachers. Robots need to expand their domains of understanding, and if those domains should be emotional, well, that will be a technical matter as well. So while this volume expresses designers&#8217; enthusiasm about robots as technical objects, it challenges us to see robots as something more, as evocative objects. What are we thinking about when we are thinking about robots? We are thinking about aliveness and authenticity, love and spirituality. We are thinking about what it means to build a psychology. We are thinking about what makes people special. Or perhaps that they are not so special after all. 10 01 JB code nlp.8.04wil 11 20 10 Article 6 <TitleType>01</TitleType> <TitleText textformat="02">Introducing artificial Companions</TitleText> 1 A01 Yorick Wilks Wilks, Yorick Yorick Wilks 01 This introductory chapter, like the whole book itself, concerns a range of closely related topics: the possibility of machines having identifiable personalities, the possible future legal responsibilities of such companionable machines, and the design characteristics of such machines, including their technical implementation and the kinds of computational theory they will need to embody. As will become clear, I wish to explore these topics in terms of software entities, rather than robots, and in particular the sort of software agents now being encountered on the web, ranging at present from technical advisers to mere chatbots. I shall call them Companions. 
This introduction explores the following related aspects of a Companion in more detail: what kind and level of personality should be in a machine agent so as to be acceptable to a human user, more particularly to one who may fear technology and have no experience of it; and what levels of responsibility and legal attribution of responsibility we can expect from entities like complex web agents in the near future. 10 01 JB code nlp.8.05s2 Section header 7 <TitleType>01</TitleType> <TitleText textformat="02">Section II. Ethical and philosophical issues</TitleText> 10 01 JB code nlp.8.06flo 23 28 6 Article 8 <TitleType>01</TitleType> <TitleText textformat="02">Artificial Companions and their philosophical challenges</TitleText> 1 A01 Luciano Floridi Floridi, Luciano Luciano Floridi 01 The technology for Artificial Companions is already largely available, and the question is when rather than whether ACs will become commodities (Benyon and Mival 2007). The difficulties are still formidable, but they are not insurmountable. On the contrary, they seem rather well understood, and the path from theoretical problems to technical solutions looks steep but climbable. In the following pages, I wish to concentrate not on the technological challenges, which are important, but on some philosophical issues that a growing population of ACs will make increasingly pressing. 10 01 JB code nlp.8.07pul 29 34 6 Article 9 <TitleType>01</TitleType> <TitleText textformat="02">Conditions for Companionhood</TitleText> 1 A01 Stephen G. Pulman Pulman, Stephen G. Stephen G. Pulman 01 This chapter is an attempt to outline a set of conditions that are jointly necessary and sufficient for any entity, virtual or real, to be regarded as displaying the properties characteristic of our ordinary everyday understanding of the notion of a Companion. 
10 01 JB code nlp.8.08oha 35 56 22 Article 10 <TitleType>01</TitleType> <TitleText textformat="02">Arius in cyberspace</TitleText> <Subtitle textformat="02">Digital Companions and the limits of the person</Subtitle> 1 A01 Kieron O'Hara O'Hara, Kieron Kieron O'Hara 01 The relationship between a Companion and a person will become increasingly problematic as Companion technology improves and as models of users become increasingly sophisticated, and the simple dichotomies that make the Turing Test so plausible as a means of determining intelligence will become harder to maintain. As with any kind of content-storing technology, such as writing, or in more recent years laptops, the amount and quality of cognition that a human can &#8216;export&#8217; to these outside technologies is significant. The &#8216;person&#8217; or &#8216;agent&#8217; can be seen as an extended system including the technologies as well as the human, in which the technologies, among other things, can help in the extension of trust towards the human. 10 01 JB code nlp.8.09s3 Section header 11 <TitleType>01</TitleType> <TitleText textformat="02">Section III. Social and psychological issues</TitleText> <Subtitle textformat="02">What should a Companion be like?</Subtitle> 10 01 JB code nlp.8.10bod 59 61 3 Article 12 <TitleType>01</TitleType> <TitleText textformat="02">Conversationalists and confidants</TitleText> 1 A01 Margaret A. Boden Boden, Margaret A. Margaret A. Boden 01 If there were an abstract for this very short paper, it would be this: &#8220;Conversationalists, maybe &#8211; but confidants?&#8221; 10 01 JB code nlp.8.11bry 63 74 12 Article 13 <TitleType>01</TitleType> <TitleText textformat="02">Robots should be slaves</TitleText> 1 A01 Joanna J. Bryson Bryson, Joanna J. Joanna J. Bryson 01 Robots should not be described as persons, nor given legal nor moral responsibility for their actions. Robots are fully owned by us. 
We determine their goals and behavior, either directly or indirectly through specifying their intelligence or how their intelligence is acquired. In humanising them, we not only further dehumanise real people, but also encourage poor human decision making in the allocation of resources and responsibility. This is true at both the individual and the institutional level. This chapter describes both causes and consequences of these errors, including consequences already present in society. I make specific proposals for best incorporating robots into our society. The potential of robotics should be understood as the potential to extend our own abilities and to address our own goals. 10 01 JB code nlp.8.12eva 75 88 14 Article 14 <TitleType>01</TitleType> <TitleText textformat="02">Wanting the impossible</TitleText> <Subtitle textformat="02">The dilemma at the heart of intimate human-robot relationships</Subtitle> 1 A01 Dylan Evans Evans, Dylan Dylan Evans 01 In a recent book entitled Love and Sex with Robots, the British scholar David Levy has argued that relationships with robot Companions might be more satisfying than relationships with humans, a claim which I call &#8220;the greater satisfaction thesis&#8221; (GST). &#160;The main reason Levy provides in support of GST is that people will be able to specify the features of robot Companions precisely in accordance with their wishes (which I call the total specification argument or TSA). &#160;In this paper, I argue that TSA is wrong. &#160;In particular, the argument breaks down when we consider certain behavioral characteristics that we desire in our partners. 
I illustrate my argument with a thought-experiment involving two kinds of robot &#8211; the FREEBOT, which is capable of rejecting its owner permanently, and the RELIABOT, which is not. 10 01 JB code nlp.8.13lev 89 94 6 Article 15 <TitleType>01</TitleType> <TitleText textformat="02">Falling in love with a Companion</TitleText> 1 A01 David Levy Levy, David David Levy 01 In 1984, in her groundbreaking book The Second Self, Sherry Turkle made us aware of the tendency of some people to develop relationships with their computers. Turkle described one such example, an MIT computer hacker whom she called Anthony, who had &#8220;tried out&#8221; having girlfriends but preferred to relate to computers. I believe that the developments in AI since then have demonstrated a progression in human-computer relationships to the point where we can now say with confidence that, in the foreseeable future, significant numbers of Anthonys and their female counterparts will be falling in love with software Companions. This position paper summarizes my arguments. 10 01 JB code nlp.8.14low 95 100 6 Article 16 <TitleType>01</TitleType> <TitleText textformat="02">Identifying your accompanist</TitleText> 1 A01 Will Lowe Lowe, Will Will Lowe 01 What Companions are or should be will depend on what we want them to do for us. And this depends on what they can do for us. In the end, I shall argue that whether Companions should best be thought of, programmed and regulated as others, or as extensions of ourselves, is a tactical question. One path will simply be more effective than the other. Either way, it&#8217;s going to be all about us. To make a start, let us switch to the second person: What can a Companion do for you? 10 01 JB code nlp.8.15rom 101 106 6 Article 17 <TitleType>01</TitleType> <TitleText textformat="02">Look, emotion, language and behavior in&#160;a&#160;believable virtual Companion</TitleText> 1 A01 Daniela M. Romano Romano, Daniela M. Daniela M. 
Romano 01 A good friend is the one who makes you laugh, shares your deepest emotions and is there to listen and help you when you need it. Can we have a synthetic Companion with these qualities? He/she (we assume it has the status of a person at this stage) should be able to understand and express emotions, to talk and listen, and to know who you are and what you feel and like. The Companion considered in this chapter is a synthetic creature living in a virtual world, able to display believable human-like qualities. The concept of believability within virtual environments is also discussed here together with some of the problems connected with the creation of such a Companion. 10 01 JB code nlp.8.16tay 107 120 14 Article 18 <TitleType>01</TitleType> <TitleText textformat="02">New Companions</TitleText> 1 A01 Alex Taylor Taylor, Alex Alex Taylor 2 A01 Anab Jain Jain, Anab Anab Jain 3 A01 Laurel Swan Swan, Laurel Laurel Swan 01 This chapter draws on advancements in Artificial Intelligence (AI) and robotics to question the orthodoxy of artificial Companions research. Two areas of work are outlined and used to suggest that artificial Companions need not be restricted to simulacra of humans or animals. First, it is argued that the developments in AI have given rise to the prospect of very different kinds of machines, machines that are unlike humans or animals but that we may still want to form relationships with. Second, details are presented of a project exploring energy autonomous robots. Ecobot, an example of such a robot, is shown to exhibit unique characteristics that may afford new, distinctive forms of Companionship. Finally, a design concept of an autonomously powered household radio is presented to illustrate how these new kinds of relationships might be investigated further. 
10 01 JB code nlp.8.17wil 121 128 8 Article 19 <TitleType>01</TitleType> <TitleText textformat="02">On being a Victorian Companion</TitleText> 1 A01 Yorick Wilks Wilks, Yorick Yorick Wilks 01 I have argued or suggested: &#8211; English Common Law already, in dogs, has a legal category of entities that are not human but are in some degree responsible for their actions and have &#8220;characters&#8221; that can be assessed. &#8211; Users may not want Companions prone to immediately expressed emotions, and a restrained personality, like that of a Victorian Lady&#8217;s Companion, might provide a better model. &#8211; Language behavior is a complex repository of triggers for emotion, both expressed and causal, and this is often under-rated in the world of ECAs and theories of emotion based on them. &#8211; Companion-to-Companion communications will be important and helpful to a user, and there is nothing in principle to make one believe that &#8220;secrets&#8221; cannot be handled sensitively in such an environment. &#8211; It is easy to underestimate the role of a user&#8217;s preference in selecting the personality appropriate to a Companion: it is not even clear that users want Companions to be polite or agreeable &#8211; it may depend on personal choice or their functional role. &#8211; For many it may be appropriate for a Companion to become progressively more like its owner in voice, face, personality, memories etc. &#8211; exaggerating the way dogs are believed to adapt to owners &#8211; and if and when this becomes possible, for the Companion to become a self-avatar of its owner, there may well be other unseen consequences after the owner&#8217;s death. 10 01 JB code nlp.8.18s4 Section header 20 <TitleType>01</TitleType> <TitleText textformat="02">Section IV. 
Design issues</TitleText> <Subtitle textformat="02">Building a Companion</Subtitle> 10 01 JB code nlp.8.19bee 131 142 12 Article 21 <TitleType>01</TitleType> <TitleText textformat="02">The use of affective and attentive cues in&#160;an&#160;empathic computer-based Companions</TitleText> <TitlePrefix>The </TitlePrefix> <TitleWithoutPrefix textformat="02">use of affective and attentive cues in&#160;an&#160;empathic computer-based Companions</TitleWithoutPrefix> 1 A01 Nikolaus Bee Bee, Nikolaus Nikolaus Bee 2 A01 Elisabeth Andre Andre, Elisabeth Elisabeth Andre 3 A01 Thurid Vogt Vogt, Thurid Thurid Vogt 4 A01 Patrick Gebhard Gebhard, Patrick Patrick Gebhard 01 Recently, a number of research projects have been started to create virtual agents that do not just serve as assistants to which tasks may be delegated, but that may even take on the role of a Companion. Such agents require a great deal of social intelligence, such as the ability to detect the user&#8217;s affective state and to respond to it in an empathic manner. The objective of our work is to create an empathetic listener that is capable of reacting to the affective and attentive input cues of the user. In particular, we discuss various forms of empathy and how they may be realized based on these cues. 10 01 JB code nlp.8.20bev 143 156 14 Article 22 <TitleType>01</TitleType> <TitleText textformat="02">GRETA</TitleText> <Subtitle textformat="02">Towards an interactive conversational virtual&#160;Companion</Subtitle> 1 A01 Elisabetta Bevacqua Bevacqua, Elisabetta Elisabetta Bevacqua 2 A01 Ken Prepin Prepin, Ken Ken Prepin 3 A01 Radoslaw Niewiadomski Niewiadomski, Radoslaw Radoslaw Niewiadomski 4 A01 Etienne de Sevin Sevin, Etienne de Etienne de Sevin 5 A01 Catherine Pelachaud Pelachaud, Catherine Catherine Pelachaud 01 In this chapter we present our work toward building a conversational Companion. 
Conversing with partner(s) means being able to express one&#8217;s mental and emotional state, to be a speaker or a listener. One also needs to adapt to one&#8217;s partner&#8217;s reactions to what one is saying. We have developed an interactive ECA platform, Greta (Pelachaud, 2005). It is a 3D virtual agent capable of communicating expressive verbal and nonverbal behaviors as well as listening. It can use its gaze, facial expressions and gestures to convey a meaning, an attitude or an emotion. Multimodal behaviors are tightly tied to each other. A synchronization scheme has been elaborated allowing the agent to display a raised eyebrow or a beat gesture on a given word. According to its emotional or mental state, the agent may vary the quality of its behaviors: it may use a more or less extended gesture, the arms can move at different speeds and with different accelerations (Mancini &amp; Pelachaud, 2008). The agent can also display listener behavior (Bevacqua et al., 2008). It interacts actively with users and/or other agents, providing appropriately timed backchannels. Interaction also means that the interactants ought to adapt to each other&#8217;s behaviors, and dynamic coupling between them needs to be considered (Prepin &amp; Revel, 2007). 10 01 JB code nlp.8.21cat 157 168 12 Article 23 <TitleType>01</TitleType> <TitleText textformat="02">A world-hybrid approach to a conversational Companion for reminiscing about images</TitleText> <TitlePrefix>A </TitlePrefix> <TitleWithoutPrefix textformat="02">world-hybrid approach to a conversational Companion for reminiscing about images</TitleWithoutPrefix> 1 A01 Roberta Catizone Catizone, Roberta Roberta Catizone 2 A01 Simon F. Worgan Worgan, Simon F. Simon F. 
Worgan 3 A01 Yorick Wilks Wilks, Yorick Yorick Wilks 4 A01 Alexiei Dingli Dingli, Alexiei Alexiei Dingli 5 A01 Weiwei Cheng Cheng, Weiwei Weiwei Cheng 10 01 JB code nlp.8.22cow 169 172 4 Article 24 <TitleType>01</TitleType> <TitleText textformat="02">Companionship is an emotional business</TitleText> 1 A01 Roddy Cowie Cowie, Roddy Roddy Cowie 01 This is written from the perspective of someone who was trained as a psychologist, and has been working for a decade on emotion-oriented/affective computing. That background highlights two kinds of issue: how emotion enters into the Companion scenario, and how computing can relate to emotion. In both areas, there is a difference between the intuitions of people who are not deeply involved, and the realities as they appear to people working in the area. The goal of this paper is to consider how the realities of emotion and emotion-oriented technology impact on the prospects for artificial Companions. The concern behind it is that otherwise, we may misjudge both the prospects and the risks. In particular, the ability to address the emotional side of Companionship may play a key part in acceptance; and the necessary resources, conceptual as well as technical, cannot be taken for granted. We should be concerned about inserting Companions into emotionally sensitive roles without engineering them to take that into account. 10 01 JB code nlp.8.23new 173 178 6 Article 25 <TitleType>01</TitleType> <TitleText textformat="02">Artificial Companions in society</TitleText> <Subtitle textformat="02">Consulting the users</Subtitle> 1 A01 Alan Newell Newell, Alan Alan Newell 01 That Artificial Companions should provide Companionship and help people with their daily tasks is an intriguing and exciting concept. In addition to the important technical challenges of providing such systems, it is also clearly important to consider: 1. What facilities they should and should not provide. 2. The characteristics, needs and wants of potential users and 3. 
The personal and social consequences of such technologies. These are complex, multi-faceted issues, and it is necessary to consider the most effective and beneficial ways of addressing them. 10 01 JB code nlp.8.24slo 179 200 22 Article 26 <TitleType>01</TitleType> <TitleText textformat="02">Requirements for Artificial Companions</TitleText> <Subtitle textformat="02">It&#8217;s harder than you think</Subtitle> 1 A01 Aaron Sloman Sloman, Aaron Aaron Sloman 01 Producing a system that meets plausible requirements for Artificial Companions (ACs), without arbitrary restrictions, will involve solving a great many problems that are currently beyond the state of the art in Artificial Intelligence (AI), including problems that would arise in the design of robotic Companions helping an owner by performing practical tasks in the physical environment. In other words, even if the AC is not itself a robot and interacts with the user only via input devices such as camera, microphone, keyboard, mouse, touch-pad, and touch-screen, and output devices such as screen and audio output devices, nevertheless it will, in some circumstances, need the visual competences, the ontology, the representational resources, the reasoning competences, the planning competences, and the problem-solving competences that a helpful domestic robot would need. This is because some of the intended beneficiaries of ACs will need to be given advice about what physical actions to perform, what physical devices to acquire, and how to use such devices. I shall give examples illustrating the need for such competences. 10 01 JB code nlp.8.25win 201 208 8 Article 27 <TitleType>01</TitleType> <TitleText textformat="02">You really need to know what your bot(s) are&#160;thinking about you</TitleText> 1 A01 Alan FT Winfield Winfield, Alan FT Alan FT Winfield 01 The projected ubiquity of personal Companion robots raises a range of interesting but also challenging questions. 
There can be little doubt that an effective artificial Companion, whether embodied or not, will need to be both sensitive to the emotional state of its human partner and able to respond sensitively. It will, in other words, need an artificial theory of mind &#8211; such an artificial Companion would need to behave as if it has feelings and as if it understands how its human partner is feeling. This chapter explores the implementation and implications of artificial theory of mind, and raises concerns over the asymmetry between an artificial Companion&#8217;s theory of mind for its human partner and the human&#8217;s theory of mind for his or her artificial Companion. The essay argues that social learning (imitation) is an additional requirement of artificial Companion robots, then goes on to develop the idea that an artificial Companion robot will not be one robot but several. A surprising consequence of these ideas is that a family of artificial Companion robots could acquire an artificial culture of its own, and the essay concludes by speculating on what this might mean for humans interacting with their artificial Companion robots. 10 01 JB code nlp.8.26s5 Section header 28 <TitleType>01</TitleType> <TitleText textformat="02">Section V. Special purpose Companions</TitleText> 10 01 JB code nlp.8.27eyn 211 220 10 Article 29 <TitleType>01</TitleType> <TitleText textformat="02">A Companion for learning in everyday life</TitleText> <TitlePrefix>A </TitlePrefix> <TitleWithoutPrefix textformat="02">Companion for learning in everyday life</TitleWithoutPrefix> 1 A01 Rebecca Eynon Eynon, Rebecca Rebecca Eynon 2 A01 Chris Davies Davies, Chris Chris Davies 01 The use of the Companion could have broad learning benefits. For example, enabling learners to have control over their projects of learning could potentially lead to an improved level of self-efficacy beliefs (Kim and Baylor, 2006), improved IT skills or a general confidence to try new things. 
We believe that this broader view of the benefits of such a Companion makes the positive implications outweigh the negative. 10 01 JB code nlp.8.28nir 221 244 24 Article 30 <TitleType>01</TitleType> <TitleText textformat="02">The Maryland virtual patient as a task-oriented conversational Companion</TitleText> <TitlePrefix>The </TitlePrefix> <TitleWithoutPrefix textformat="02">Maryland virtual patient as a task-oriented conversational Companion</TitleWithoutPrefix> 1 A01 Sergei Nirenburg Nirenburg, Sergei Sergei Nirenburg 01 This chapter describes a conversational agent environment, the Maryland Virtual Patient (MVP). MVP models the process of disease progression, diagnosis and treatment in virtual patients endowed with a &#8220;body,&#8221; a simulation of their physiological and pathological processes, and a &#8220;mind,&#8221; a set of capabilities of perception, reasoning and action that allows the virtual patient to exhibit independent behavior, participate in a natural language dialog, remember events, hold beliefs about other agents and about specific object and event instances, make decisions and learn. 10 01 JB code nlp.8.29sha 245 256 12 Article 31 <TitleType>01</TitleType> <TitleText textformat="02">Living with robots</TitleText> <Subtitle textformat="02">Ethical tradeoffs in eldercare</Subtitle> 1 A01 Noel Sharkey Sharkey, Noel Noel Sharkey 2 A01 Amanda Sharkey Sharkey, Amanda Amanda Sharkey 01 We discuss some of the research and ideas for developing robot carers and Companions for people with aging brains. We speculate a little about a near future when it may be possible to keep people at home for longer in the almost exclusive care of robots and smart homes. We examine the benefits of robot care and Companionship and at the same time raise key ethical questions and concerns. We point to a series of trade-offs between the unethical and the beneficial that must be considered before robot care/Companionship becomes commonplace. 
10 01 JB code nlp.8.30s6 Section header 32 <TitleType>01</TitleType> <TitleText textformat="02">Section VI. Afterword</TitleText> 10 01 JB code nlp.8.31pel 259 286 28 Article 33 <TitleType>01</TitleType> <TitleText textformat="02">Summary and discussion of the issues</TitleText> 1 A01 Malcom Peltu Peltu, Malcom Malcom Peltu 2 A01 Yorick Wilks Wilks, Yorick Yorick Wilks 01 The COMPANIONS project, which inspired this book, is studying conversational software-based artificial agents that will get to know their owners over a substantial period. These could be developed to advise, comfort and carry out a wide range of functions to support diverse personal and social needs, such as to be &#8216;artificial Companions&#8217; for the elderly, helping their owners to learn, or assisting to sustain their owners&#8217; fitness and health. This chapter summarizes the main issues raised in the workshop that gave rise to this book. Most direct quotes from participants in this chapter come from their own chapters. Appendix 1 contains examples of current artificial Companions and related research projects mentioned at the workshop. 
10 01 JB code nlp.8.32ref 287 308 22 Miscellaneous 34 <TitleType>01</TitleType> <TitleText textformat="02">References</TitleText> 10 01 JB code nlp.8.33index 309 316 8 Miscellaneous 35 <TitleType>01</TitleType> <TitleText textformat="02">Index</TitleText> 02 JBENJAMINS John Benjamins Publishing Company 01 John Benjamins Publishing Company Amsterdam/Philadelphia NL 04 20100324 2010 John Benjamins 02 WORLD 01 245 mm 02 164 mm 08 760 gr 01 JB 1 John Benjamins Publishing Company +31 20 6304747 +31 20 6739773 bookorder@benjamins.nl 01 https://benjamins.com 01 WORLD US CA MX 21 6 18 01 02 JB 1 00 99.00 EUR R 02 02 JB 1 00 104.94 EUR R 01 JB 10 bebc +44 1202 712 934 +44 1202 712 913 sales@bebc.co.uk 03 GB 21 18 02 02 JB 1 00 83.00 GBP Z 01 JB 2 John Benjamins North America +1 800 562-5666 +1 703 661-1501 benjamins@presswarehouse.com 01 https://benjamins.com 01 US CA MX 21 1 18 01 gen 02 JB 1 00 149.00 USD