This article investigates the challenge of developing a robot capable of determining whether a social situation demands trust. Solving this challenge may allow a robot to react when a person over- or under-trusts the system. Prior work in this area has focused on understanding the factors that influence a person’s trust in a robot (Hancock et al., 2011). In contrast, by using game-theoretic representations to frame the problem, we are able to develop a set of conditions for determining whether an interactive situation demands trust. In two separate experiments, human subjects were asked to evaluate either written narratives or mazes in terms of whether or not they require trust. The results indicate correlations of Φ1 = +0.592 and Φ2 = +0.406, respectively, between the subjects’ evaluations and the conditions’ predictions. These are strong correlations for a study involving human subjects.
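The Φ coefficient reported above is Pearson’s r applied to two binary variables: here, each item’s trust/no-trust rating by subjects versus the condition’s trust/no-trust prediction. A minimal sketch of the computation from a 2×2 agreement table (the function name and counts are illustrative, not the paper’s data):

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi correlation for two binary variables, from the 2x2 table:

                        predicted: trust   predicted: no trust
    rated: trust               a                   b
    rated: no trust            c                   d
    """
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    # Degenerate tables (an empty row or column) have undefined phi;
    # return 0.0 in that case rather than dividing by zero.
    return (a * d - b * c) / denom if denom else 0.0

# Perfect agreement between ratings and predictions gives +1.0,
# perfect disagreement gives -1.0.
print(phi_coefficient(10, 0, 0, 10))  # 1.0
print(phi_coefficient(0, 10, 10, 0))  # -1.0
```

A value like +0.592 thus indicates that subjects’ judgments and the condition’s predictions agree well above chance without matching perfectly.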
(2010) When giving is good: Ventromedial prefrontal cortex activation for others’ intentions. Neuron, 67(3), 511–521.
Desai, M., Kaniarasu, P., Medvedev, M., Steinfeld, A., & Yanco, H. (2013) Impact of robot failures and feedback on real-time trust. Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (pp. 251–258). Tokyo, Japan.
Deutsch, M. (1962) Cooperation and trust: Some theoretical notes. In M.R. Jones (Ed.), Nebraska symposium on motivation (pp. 275–315). Lincoln, NE: University of Nebraska.
Deutsch, M. (1973) The resolution of conflict: Constructive and destructive processes. New Haven, CT: Yale University Press.
Economist. (2006) Trust me, I’m a robot. The Economist, 18–19.
Engle-Warnick, J., & Slonim, R.L. (2006) Learning to trust in indefinitely repeated games. Games and Economic Behavior, 95–114.
Fisher, R.J. (1993) Social desirability bias and the validity of indirect questioning. Journal of Consumer Research, 20(2), 303–315.
Gambetta, D. (1990) Can we trust trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relationships (pp. 213–237). Oxford, England: Basil Blackwell.
Hancock, P.A., Billings, D.R., Schaefer, K.E., Chen, J.Y., de Visser, E.J., & Parasuraman, R. (2011) A meta-analysis of factors affecting trust in human-robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 53(5), 517–527.
Hemphill, J.F. (2003) Interpreting the magnitudes of correlation coefficients. American Psychologist, 58(1), 78–80.
Hung, Y.C., Dennis, A.R., & Robert, L. (2004) Trust in virtual teams: Towards an integrative model of trust formation. International Conference on System Sciences. Hawaii.
Josang, A., & Pope, S. (2005) Semantic constraints for trust transitivity. Second Asia-Pacific Conference on Conceptual Modeling. Newcastle, Australia.
Kelley, H.H., & Thibaut, J.W. (1978) Interpersonal relations: A theory of interdependence. New York, NY: John Wiley & Sons.
Osborne, M.J., & Rubinstein, A. (1994) A course in game theory. Cambridge, MA: MIT Press.
Paolacci, G., Chandler, J., & Ipeirotis, P.G. (2010) Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411–419.
Prietula, M.J., & Carley, K.M. (2001) Boundedly rational and emotional agents. In C. Castelfranchi & Y.-H. Tan (Eds.), Trust and deception in virtual society (pp. 169–194). Kluwer Academic Publishers.
Rilling, J.K., Gutman, D.A., Zeh, T.R., Pagnoni, G., Berns, G.S., & Kilts, C.D. (2002) A neural basis for social cooperation. Neuron, 395–405.
Robinette, P., Wagner, A.R., & Howard, A. (2013) Building and maintaining trust between humans and guidance robots in an emergency. AAAI Spring Symposium, Stanford University (pp. 78–83). Palo Alto, CA.
Robinette, P., Wagner, A.R., & Howard, A. (2014) Assessment of robot guidance modalities conveying instructions to humans in emergency situations. Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 14). Edinburgh, UK.
Robinette, P., Wagner, A.R., & Howard, A. (2014) The effect of robot performance on human-robot trust in time-critical situations. Human Factors, under review.
Rousseau, D.M., Sitkin, S.B., Burt, R.S., & Camerer, C. (1998) Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23(3), 393–404.
Runyon, R.P., & Haber, A. (1991) Fundamentals of behavioral statistics. New York, NY: McGraw-Hill.
Sabater, J., & Sierra, C. (2005) Review of computational trust and reputation models. Artificial Intelligence Review, 24(1), 33–60.
Schillo, M., Funk, P., & Rovatsos, M. (2000) Using trust for detecting deceitful agents in artificial societies. Applied Artificial Intelligence Journal, Special Issue on Trust, Deception and Fraud in Agent Societies.
Sears, D.O., Peplau, L.A., & Taylor, S.E. (1991) Social psychology. Englewood Cliffs, NJ: Prentice Hall.
Shafir, E., & LeBoeuf, R.A. (2002) Rationality. Annual Review of Psychology, 491–517.
Tversky, A., & Kahneman, D. (1974) Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Wagner, A.R. (2009) Creating and using matrix representations of social interaction. Proceedings of the 4th International Conference on Human-Robot Interaction (HRI 2009). San Diego, CA.
Wagner, A.R. (2009) The role of trust and relationships in human-robot social interaction. Ph.D. diss., School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA.
Wagner, A.R. (2012) Using cluster-based stereotyping to foster human-robot cooperation. Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS 2012) (pp. 1615–1622). Vilamoura, Portugal.
Wagner, A.R. (2013) Developing robots that recognize when they are being trusted. AAAI Spring Symposium. Stanford, CA.
Wagner, A.R., & Arkin, R.C. (2011) Acting deceptively: Providing robots with the capacity for deception. The International Journal of Social Robotics, 3(1), 5–26.