Article published in: Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues
Edited by Yorick Wilks
[Natural Language Processing 8] 2010
► pp. 201–208
You really need to know what your bot(s) are thinking about you
The projected ubiquity of personal Companion robots raises a range of interesting but also challenging questions. There can be little doubt that an effective artificial Companion, whether embodied or not, will need to be both sensitive to the emotional state of its human partner and able to respond sensitively. It will, in other words, need an artificial theory of mind – such an artificial Companion would need to behave as if it has feelings and as if it understands how its human partner is feeling. This chapter explores the implementation and implications of artificial theory of mind, and raises concerns over the asymmetry between an artificial Companion’s theory of mind for its human partner and the human’s theory of mind for his or her artificial Companion. The essay argues that social learning (imitation) is an additional requirement of artificial Companion robots, then goes on to develop the idea that an artificial Companion robot will not be one robot but several. A surprising consequence of these ideas is that a family of artificial Companion robots could acquire an artificial culture of its own, and the essay concludes by speculating on what this might mean for humans interacting with their artificial Companion robots.
Published online: 24 March 2010
Cited by 5 other publications
Moore, Roger K.
Peltu, Malcolm & Yorick Wilks
Reiter-Palmon, Roni, Tanmay Sinha, Josette Gevers, Jean-Marc Odobez & Gualtiero Volpe
Vechinski, Matthew James
Winfield, Alan F. T.
This list is based on CrossRef data as of 24 May 2021. Please note that it may not be complete. Sources presented here have been supplied by the respective publishers. Any errors therein should be reported to them.