From vocal prosody to movement prosody, from HRI to understanding humans
Human–Human and Human–Robot Interaction are known to be influenced by a variety of modalities and parameters. Nevertheless, it remains a challenge to anticipate how a given mobile robot's navigation and appearance will shape how humans perceive it. Drawing a parallel with vocal prosody, we introduce the notion of movement prosody, which encompasses the spatio-temporal and appearance dimensions involved in a person's perceptual experience of interacting with a mobile robot. We design a novel robot motion corpus covering variables related to the kinematics, gaze, and appearance of the robot, which we hypothesize are involved in movement prosody. Initial results of three perception experiments suggest that these variables significantly influence participants' perceptions of the robot's socio-affects and physical attributes.
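To illustrate the kind of corpus structure described above, the sketch below enumerates factorial combinations of movement-prosody variables of the type the corpus manipulates (kinematic profile, appearance, eye shape, head movements). The factor names and levels are illustrative assumptions, not the exact conditions or levels used in the study.

```python
# Minimal sketch: enumerating factorial combinations of hypothesized
# movement-prosody variables for a robot motion corpus.
# Factor names and levels below are assumptions for illustration only,
# not the exact conditions used in the study.
from itertools import product

factors = {
    "kinematic_type": ["smooth", "abrupt", "hesitant"],  # velocity profile shape (assumed levels)
    "appearance":     ["frail", "robust"],               # robot body styling (assumed levels)
    "eye_shape":      ["round", "narrow"],               # eye rendering (assumed levels)
    "head_movement":  ["static", "tracking"],            # whether gaze follows the person (assumed)
}

# Each combination corresponds to one candidate video stimulus in the corpus.
conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]

print(f"{len(conditions)} candidate stimuli")  # 3 * 2 * 2 * 2 = 24
print(conditions[0])
```

In practice, a corpus of this kind would typically record only a subset of such combinations, balancing coverage of the hypothesized variables against the number of videos participants can reasonably evaluate.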
Article outline
- 1. Introduction
- 2. Related works
- 2.1 Exploring the variables of social navigation
- 2.2 Evaluating perceptions of robot socio-affects
- 2.3 Interactions between HRI modalities
- 2.4 Robot motion and video corpus
- 3. Robot motion corpus design
- 3.1 Velocity profile design
- 3.1.1 Motion sequences
- 3.1.2 Kinematic types
- 3.1.3 Profile variants
- 3.2 Beyond velocity: Robot appearance and body dynamics
- 3.2.1 Frail or robust robot
- 3.2.2 Eye shape and head movements
- 3.2.3 Audio recording
- 3.3 Summary of corpus variables
- 4. Video corpus acquisition
- 4.1 Robot movement consistency and framing
- 4.2 Environment characteristics
- 4.3 Camera configuration and parameters
- 5. Perception experiments
- 5.1 First online experiment: Likert scale
- 5.1.1 Likert scale results
- 5.1.2 Likert scale analysis
- 5.2 Second online experiment: Binary choice
- 5.2.1 Binary choice results
- 5.2.2 Binary choice analysis
- 5.3 Embodied experiment
- 5.3.1 Embodied experiment results
- 5.3.2 Embodied experiment analysis
- 6. Discussion
- 6.1 Limitations
- 6.2 Implications
- 6.3 Future work
- 7. Conclusion
- Acknowledgements
- Notes
- References