Article published in:
Social Cues in Robot Interaction, Trust and Acceptance
Edited by Alessandra Rossi, Kheng Lee Koay, Silvia Moros, Patrick Holthaus and Marcus Scheunemann
[Interaction Studies 20:3] 2019
pp. 530–560
Reshaping human intention in Human-Robot Interactions by robot moves
A comparative analysis of HMM and OOM methods
Akif Durdu | Konya Technical University
Aydan M. Erkmen | Middle East Technical University
Alper Yilmaz | The Ohio State University
This paper outlines the methodology and experiments associated with reshaping human intentions based on robot movements within Human-Robot Interactions (HRIs). Although the estimation of human intentions is well studied in the literature, reshaping intentions through robot-initiated interactions is a new and significant branch in the field of HRI. In this paper, we analyze how estimated human intentions can be intentionally changed through cooperation with mobile robots in real human-robot environments. The paper proposes an intention-reshaping system that uses either Observable Operator Models (OOMs) or Hidden Markov Models (HMMs) to estimate human intention and to decide which moves a robot should perform to reshape previously estimated human intentions into desired ones. At the low level, the system tracks the locations of all mobile agents using cameras. We test our system on videos taken in a real HRI environment developed as our experimental setup. The results show that OOMs are faster than HMMs and that both models give correct decisions for the test sequences.
Keywords: Human-Robot Interaction, Hidden Markov Models, Observable Operator Models, intention reshaping, intention recognition
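The article itself contains no code, but the HMM half of the pipeline described in the abstract can be sketched briefly: discretized, camera-tracked location symbols are decoded into a most-likely intention sequence with the Viterbi algorithm. The sketch below is a minimal illustration under assumed placeholders; the intention states, observation symbols, and model parameters are hypothetical and not values from the study.

```python
import numpy as np

# Hypothetical intention states and discretized location observations;
# in the paper these would come from the HRI setup and camera tracking.
STATES = ["approach_robot", "avoid_robot"]          # hidden intentions (assumed)
OBS    = ["near_robot", "mid_range", "far_away"]    # location symbols (assumed)

# Illustrative HMM parameters (transition A, emission B, prior pi);
# a real system would learn these from labeled interaction sequences.
A  = np.array([[0.8, 0.2],
               [0.3, 0.7]])
B  = np.array([[0.6, 0.3, 0.1],
               [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.5])

def viterbi(obs_seq):
    """Return the most likely intention sequence for a list of observation symbols."""
    obs_idx = [OBS.index(o) for o in obs_seq]
    T, N = len(obs_idx), len(STATES)
    delta = np.zeros((T, N))            # best-path log-probabilities
    psi = np.zeros((T, N), dtype=int)   # back-pointers to best predecessors
    delta[0] = np.log(pi) + np.log(B[:, obs_idx[0]])
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] + np.log(A[:, j])
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] + np.log(B[j, obs_idx[t]])
    # Backtrack the best state sequence from the final time step.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]

if __name__ == "__main__":
    # With these toy parameters, the later "near_robot" observations
    # decode as the "approach_robot" intention.
    print(viterbi(["far_away", "mid_range", "near_robot", "near_robot"]))
```

In the system described above, such an estimated intention sequence would then feed the higher-level decision about which robot moves to perform to reshape the intention; an OOM-based estimator could be substituted for the HMM in the same role.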
Published online: 18 November 2019
https://doi.org/10.1075/is.18068.dur