
2022 | Original Paper | Book Chapter

4. Künstliche Intelligenz im Management

Chancen und Risiken von Künstlicher Intelligenz als Entscheidungsunterstützung

Authors: Jeanette Kalimeris, Sabrina Renz, Sebastian Hofreiter, Matthias Spörrle

Published in: Praxisbeispiele der Digitalisierung

Publisher: Springer Fachmedien Wiesbaden


Abstract

Human decisions are error-prone and frequently subject to cognitive biases, particularly when those decisions are marked by uncertainty, urgency, and complexity. A distinction must be drawn between errors, which can be genuinely valuable for gaining insight, and mistakes: the latter rest on an incorrect assessment and cannot always be recognized as such. Many management decisions are likewise affected, with biases surfacing in personnel decisions as well as in strategic and organizational contexts. The use of artificial intelligence (AI) in management can counteract human biases and bring transparency to decision processes. AI can also reduce the growing complexity, ambiguity, and uncertainty involved in working with large data structures. At the same time, potential pitfalls must be kept in view: an AI system can itself be error-prone and will carry structural flaws (e.g., biased training data) into practical scenarios. In addition, ethical and moral aspects of the interaction between humans and AI in symbiotic decision processes need to be considered and implemented. This chapter examines the use of AI in management decisions, along with the benefits and the challenges inherent in the current state of the technology.
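The point that structural flaws such as biased training data carry over into practical scenarios can be made concrete with a small simulation. The sketch below is not taken from the chapter; the synthetic data, variable names, and thresholds are illustrative assumptions. It trains a plain logistic-regression model on fabricated historical hiring records in which one group was systematically disadvantaged, then audits the model's recommendations with a simple demographic-parity check.

```python
# Minimal sketch (assumptions, not the chapter's method): a model trained on
# biased historical hiring data reproduces that bias, and a basic audit shows it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)
n = 5000

# Synthetic applicants: a protected group attribute (0/1) and a qualification
# score that is distributed identically in both groups.
group = rng.integers(0, 2, size=n)
skill = rng.normal(loc=0.0, scale=1.0, size=n)

# Biased past decisions: hiring depends on skill, but group 1 was
# systematically penalized by the historical (human) decision makers.
logit = 1.5 * skill - 1.0 * group
hired = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# A model trained naively on these records learns the penalty as if it were signal.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Audit: compare predicted selection rates between groups (demographic parity).
pred = model.predict(X)
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"selection rate group 0: {rate_0:.2f}")
print(f"selection rate group 1: {rate_1:.2f}")
print(f"disparate impact ratio: {rate_1 / rate_0:.2f}  (four-fifths rule flags < 0.80)")
```

In this toy setup, a disparate-impact ratio well below 0.80 would indicate that the model has absorbed the historical penalty against group 1 rather than the underlying qualification, which is exactly the kind of structural error the chapter warns about.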


Metadata
Title
Künstliche Intelligenz im Management
Authors
Jeanette Kalimeris
Sabrina Renz
Sebastian Hofreiter
Matthias Spörrle
Copyright year
2022
DOI
https://doi.org/10.1007/978-3-658-37903-2_4
