
2021 | Original Paper | Book Chapter

16. Vertrauen und Vertrauenswürdigkeit bei sozialen Robotern

Stärkung von Mensch-Roboter-Vertrauensbeziehungen mithilfe Erklärbarer Künstlicher Intelligenz

Author: Katharina Weitz

Published in: Soziale Roboter

Publisher: Springer Fachmedien Wiesbaden


Abstract

This chapter addresses the trust relationship between humans and social robots, since trust is a key component of the acceptance of social robots. Starting from the characteristics of social interaction between humans and robots, it gives an overview of the various definitions of trust in this context. It then outlines theoretical models of trust and practical approaches to measuring trust, and considers the loss of trust that results from robot errors. The chapter examines how Explainable Artificial Intelligence can help enable transparent interaction between robot and human and thereby (re-)establish trust in social robots. In particular, it discusses the design options and challenges involved in using explanations in robotics. The chapter closes with the effect that explanations given by robots have on users' mental models.
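
To make the kind of explanation technique referred to in the abstract more concrete, the following minimal Python sketch (an illustration added here, not taken from the chapter) shows how a local, post-hoc explanation for a robot's image-classification decision could be generated with the LIME library and verbalized to a user. The classifier `classify_fn` and the camera frame `frame` are hypothetical placeholders, not part of any real robot system.

```python
# Minimal, hypothetical sketch: a local LIME explanation for an image
# classification decision, verbalized so a social robot could present it.
import numpy as np
from lime import lime_image

def classify_fn(images: np.ndarray) -> np.ndarray:
    """Placeholder for the robot's perception model: takes a batch of RGB
    images and returns class probabilities (here a fixed dummy output)."""
    return np.tile(np.array([[0.3, 0.7]]), (images.shape[0], 1))

# Dummy camera frame; in practice this would come from the robot's camera.
frame = np.random.rand(64, 64, 3)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    frame, classify_fn, top_labels=1, hide_color=0, num_samples=100
)

top_label = explanation.top_labels[0]
# Superpixels that contributed most to the predicted class.
_, mask = explanation.get_image_and_mask(
    top_label, positive_only=True, num_features=3, hide_rest=False
)

# A very simple verbalization that makes the decision basis transparent.
print(f"I decided on option {top_label}; the highlighted image regions in "
      "my explanation are the parts of the scene that influenced me most.")
```

In a real system, the explanation would be computed from the robot's actual perception model and delivered through its speech or display output, which is the kind of transparent interaction the chapter discusses.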


Footnotes
1
The English terms are used here because the German translation "Misstrauen" does not sharply capture the subtle distinctions between distrust, untrust, and mistrust.
 
Metadata
Title
Vertrauen und Vertrauenswürdigkeit bei sozialen Robotern
Author
Katharina Weitz
Copyright year
2021
DOI
https://doi.org/10.1007/978-3-658-31114-8_16
