Article published in:
Interaction Studies, Vol. 22:1 (2021), pp. 55–80

References
Ardissono, L., Boella, G., and Lesmo, L. (2000). A plan-based agent architecture for interpreting natural language dialogue. International Journal of Human-Computer Studies, 52(4):583–635.
Bartneck, C., Reichenbach, J., and Carpenter, J. (2008). The carrot and the stick – the role of praise and punishment in human-robot interaction. Interaction Studies – Social Behaviour and Communication in Biological and Artificial Systems, 9(2):179–203.
Bozdogan, H. (1987). Model selection and Akaike's information criterion (AIC): The general theory and its analytical extensions. Psychometrika, 52(3):345–370.
Brahnam, S. (2005). Strategies for handling customer abuse of ECAs. Abuse: The darker side of human-computer interaction, pages 62–67.
Brahnam, S. and De Angeli, A. (2012). Gender affordances of conversational agents. Interacting with Computers, 24(3):139–153.
Brščić, D., Kidokoro, H., Suehiro, Y., and Kanda, T. (2015). Escaping from children's abuse of social robots. In Proceedings of the International Conference on Human-Robot Interaction, pages 59–66, Portland, USA. ACM/IEEE.
Burnham, K. P. and Anderson, D. R. (2003). Model selection and multimodel inference: A practical information-theoretic approach. Springer Science & Business Media, New York.
Chin, H. and Yi, M. Y. (2019). Should an agent be ignoring it?: A study of verbal abuse types and conversational agents' response styles. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1–6. ACM.
Connolly, J. (2020). Preventing robot abuse through emotional robot responses. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pages 558–560.
Cowie, H. and Berdondini, L. (2002). The expression of emotion in response to bullying. Emotional and Behavioural Difficulties, 7(4):207–214.
Curry, A. C. and Rieser, V. (2018). #MeToo Alexa: How conversational systems respond to sexual harassment. In Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing, pages 7–14.
Darling, K. (2012). Extending legal rights to social robots. In We Robot Conference, University of Miami, pages 1–24, Miami, USA. University of Miami.
De Angeli, A. (2009). Ethical implications of verbal disinhibition with conversational agents. PsychNology Journal, 7(1):49–57.
De Angeli, A. and Brahnam, S. (2006). Sex stereotypes and conversational agents. In Gender and Interaction: Real and virtual women in a male world, Venice, Italy, pages 1–4.
De Angeli, A. and Brahnam, S. (2008). I hate you! Disinhibition with virtual partners. Interacting with Computers, 20(3):302–310.
De Angeli, A., Brahnam, S., Wallis, P., and Dix, A. (2006). Misuse and abuse of interactive technologies. In CHI'06 Extended Abstracts on Human Factors in Computing Systems, pages 1647–1650, Montreal, Canada. ACM.
De Angeli, A. and Carpenter, R. (2005). Stupid computer! Abuse and social identities. In Proceedings of Abuse: The dark side of human-computer interaction, an INTERACT 2005 workshop, pages 19–25.
De Angeli, A., Johnson, G. I., and Coventry, L. (2001). The unfriendly user: Exploring social reactions to chatterbots. In Proceedings of the International Conference on Affective Human Factors Design, London, pages 467–474.
De Swert, K. (2012). Calculating inter-coder reliability in media content analysis using Krippendorff's Alpha. Center for Politics and Communication, University of Amsterdam, the Netherlands.
Dindia, K., Fitzpatrick, M. A., and Kenny, D. A. (1997). Self-disclosure in spouse and stranger interaction: A social relations analysis. Human Communication Research, 23(3):388–412.
Fessler, L. (2017a). Apple and Amazon are under fire for Siri and Alexa's responses to sexual harassment. [URL]
Fessler, L. (2017b). We tested bots like Siri and Alexa to see who would stand up to sexual harassment. [URL]
Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review, 10(3):252–264.
Haslam, N., Loughnan, S., Kashima, Y., and Bain, P. (2008). Attributing and denying humanness to others. European Review of Social Psychology, 19(1):55–85.
Hern, A. (2019). Apple made Siri deflect questions on feminism, leaked papers reveal. [URL]. [Online; accessed 23 June 2020].
Hill, J., Ford, W. R., and Farreras, I. G. (2015). Real conversations with artificial intelligence: A comparison between human-human online conversations and human-chatbot conversations. Computers in Human Behavior, 49:245–250.
Ho, A., Hancock, J., and Miner, A. S. (2018). Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication, 68(4):712–733.
Hutchinson, M. K. and Holtman, M. C. (2005). Analysis of count data using Poisson regression. Research in Nursing & Health, 28(5):408–418.
Jay, T. (2009). The utility and ubiquity of taboo words. Perspectives on Psychological Science, 4(2):153–161.
Kätsyri, J., Förger, K., Mäkäräinen, M., and Takala, T. (2015). A review of empirical evidence on different uncanny valley hypotheses: Support for perceptual mismatch as one road to the valley of eeriness. Frontiers in Psychology, 6.
Keijsers, M. and Bartneck, C. (2018). Mindless robots get bullied. In Proceedings of the International Conference on Human-Robot Interaction, pages 205–214, New York, USA. ACM/IEEE.
Keijsers, M., Bartneck, C., and Kazmi, H. S. (2019). Cloud-based sentiment analysis for interactive agents. In Proceedings of the 7th International Conference on Human-Agent Interaction (HAI), pages 43–50.
Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., and Kircher, T. (2008). Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE, 3(7):e2597.
Lee, M. K., Kiesler, S., and Forlizzi, J. (2010). Receptionist or information kiosk: How do people talk with a robot? In Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, pages 31–40.
Lortie, C. L. and Guitton, M. J. (2011). Judgment of the humanness of an interlocutor is in the eye of the beholder. PLoS ONE, 6(9):e25085.
Lowry, P. B., Zhang, J., Wang, C., and Siponen, M. (2016). Why do adults engage in cyberbullying on social media? An integration of online disinhibition and deindividuation effects with the social structure and social learning model. Information Systems Research, 27(4):962–986.
MacDorman, K. F. and Chattopadhyay, D. (2016). Reducing consistency in human realism increases the uncanny valley effect; increasing category uncertainty does not. Cognition, 146:190–205.
Mauldin, M. L. (1994). Chatterbots, TinyMUDs, and the Turing test: Entering the Loebner Prize competition. In AAAI, volume 94, pages 16–21.
Moore, S. (2018). Gartner says 25 percent of customer service operations will use virtual customer assistants by 2020.
Mori, M. et al. (1970). The uncanny valley. Energy, 7(4):33–35.
Nass, C., Steuer, J., and Tauber, E. R. (1994). Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 72–78, Boston, USA. ACM.
Nomura, T., Kanda, T., Kidokoro, H., Suehiro, Y., and Yamada, S. (2017). Why do children abuse robots? Interaction Studies, 17(3):347–369.
Oberman, L. M., McCleery, J. P., Ramachandran, V. S., and Pineda, J. A. (2007). EEG evidence for mirror neuron activity during the observation of human and robot actions: Toward an analysis of the human qualities of interactive robots. Neurocomputing, 70(13–15):2194–2203.
Paetzel, M., Peters, C., Nyström, I., and Castellano, G. (2016). Congruency matters – how ambiguous gender cues increase a robot's uncanniness. In International Conference on Social Robotics, pages 402–412. Springer.
Pennebaker, J. W., Booth, R. J., Boyd, R. L., and Francis, M. E. (2015a). Linguistic Inquiry and Word Count: LIWC2015 [Computer software]. Pennebaker Conglomerates.
Pennebaker, J. W., Boyd, R. L., Jordan, K., and Blackburn, K. (2015b). The development and psychometric properties of LIWC2015. Technical report, The University of Texas at Austin.
Reeves, B. and Nass, C. (1996). The Media Equation. CSLI Publications and Cambridge University Press, Cambridge.
Rehm, M. and Krogsager, A. (2013). Negative affect in human robot interaction – impoliteness in unexpected encounters with robots. In Proceedings of the 22nd IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages 45–50. IEEE.
Slater, M., Antley, A., Davison, A., Swapp, D., Guger, C., Barker, C., Pistrang, N., and Sanchez-Vives, M. V. (2006). A virtual reprise of the Stanley Milgram obedience experiments. PLoS ONE, 1(1):e39.
Sokol, N., Bussey, K., and Rapee, R. M. (2016). Victims' responses to bullying: The gap between students' evaluations and reported responses. School Mental Health, 8(4):461–475.
Strait, M., Contreras, V., and Vela, C. D. (2018). Verbal disinhibition towards robots is associated with general antisociality. arXiv e-prints.
Suler, J. (2004). The online disinhibition effect. Cyberpsychology & Behavior, 7(3):321–326.
Tan, X. Z., Vázquez, M., Carter, E. J., Morales, C. G., and Steinfeld, A. (2018). Inducing bystander interventions during robot abuse with social mechanisms. In Proceedings of the International Conference on Human-Robot Interaction, pages 169–177, New York, USA. ACM/IEEE.
Veletsianos, G., Scharber, C., and Doering, A. (2008). When sex, drugs, and violence enter the classroom: Conversations between adolescents and a female pedagogical agent. Interacting with Computers, 20(3):292–301.
Whitby, B. (2008). Sometimes it's hard to be a robot: A call for action on the ethics of abusing artificial agents. Interacting with Computers, 20(3):326–333.
Zhang, Z. (2016). Variable selection with stepwise and best subset approaches. Annals of Translational Medicine, 4(7).
Cited by (5)

De Cicco, Roberta
2024. Exploring the Dark Corners of Human-Chatbot Interactions: A Literature Review on Conversational Agent Abuse. In Chatbot Research and Design [Lecture Notes in Computer Science, 14524], pp. 185 ff.
Guan, Biyu, Xin Li, Zhenshuo Luo & Pei Liu
2024. Can (A)I Arouse You? The Impact of AI Services on Consumer Pro-Environmental Behavior. Journal of Hospitality & Tourism Research
Ladak, Ali, Jamie Harris & Jacy Reese Anthis
2024. Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 1 ff.
Xi, Yipeng, Aitong Ji & Weihua Yu
2024. Enhancing or impeding? Exploring the dual impact of anthropomorphism in large language models on user aggression. Telematics and Informatics 95, pp. 102194 ff.
Jones, Mirabelle, Christina Neumayer & Irina Shklovski
2023. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1 ff.

This list is based on CrossRef data as of 18 October 2024. Please note that it may not be complete. Sources presented here have been supplied by the respective publishers. Any errors therein should be reported to them.