
2024 | Original Paper | Book Chapter

Fairness and Explainability for Enabling Trust in AI Systems


Abstract

This chapter discusses the ethical complications and challenges arising from the use of AI systems in our everyday lives. It outlines recent and upcoming regulations and policies governing the use of AI systems, and then examines the topics of explainability and fairness in depth. We argue that explainability lies at the heart of trustworthiness, and thus we present ideas and techniques aimed at making AI systems more understandable and ultimately more trustworthy for humans. Moreover, we discuss algorithmic fairness and the requirement that AI systems be free from biases and not discriminate against individuals on the basis of protected attributes. In all cases, we give a brief summary of the most important concepts and results in the literature. We conclude with ideas for future research and open challenges in the field.
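The group-fairness requirement mentioned in the abstract (no discrimination on the basis of protected attributes) can be made concrete with a minimal sketch. The following is our own illustration, not code from the chapter: it measures demographic parity, one of the standard group-fairness criteria, as the gap in positive-decision rates between the two groups defined by a binary protected attribute. The toy data are hypothetical.

```python
def positive_rate(decisions):
    """Fraction of positive (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions, protected):
    """Absolute difference in positive-decision rates between the two
    groups defined by a binary protected attribute (0 / 1).
    A gap of 0 means the classifier satisfies demographic parity."""
    group0 = [d for d, p in zip(decisions, protected) if p == 0]
    group1 = [d for d, p in zip(decisions, protected) if p == 1]
    return abs(positive_rate(group0) - positive_rate(group1))

# Hypothetical toy data: binary decisions for 8 individuals, 4 per group.
decisions = [1, 1, 0, 1, 1, 0, 0, 0]
protected = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, protected))  # 0.5
```

Here group 0 receives positive decisions at rate 0.75 and group 1 at rate 0.25, so the gap of 0.5 signals a disparity that fairness-aware methods aim to reduce.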


Metadata
Title
Fairness and Explainability for Enabling Trust in AI Systems
Author
Dimitris Sacharidis
Copyright Year
2024
DOI
https://doi.org/10.1007/978-3-031-55109-3_3