2022 | Original Paper | Book Chapter

35. Explainable AI in Medical Diagnosis – Successes and Challenges

Written by: Adriano Lucieri, Muhammad Naseer Bajwa, Andreas Dengel, Sheraz Ahmed

Published in: Künstliche Intelligenz im Gesundheitswesen

Publisher: Springer Fachmedien Wiesbaden

Abstract

The great success of modern, image-based AI methods and the accompanying interest in applying AI to critical decision-making processes have led to growing efforts to make intelligent systems transparent and explainable. Transparency is particularly important in the medical context, where computer-aided decisions can directly affect the treatment and well-being of patients, and it is essential for a safe transition from research to clinical practice. This chapter examines the current state of methods for explaining and interpreting deep-learning-based AI algorithms in applications of medical research and disease diagnosis. It first presents notable early successes in using explainable AI to validate known biomarkers and to explore potential new ones, along with methods for the post-hoc correction of AI models. It then critically discusses several remaining challenges that stand in the way of using AI for clinical decision support, and offers recommendations for the direction of future research.
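To make the notion of post-hoc explanation concrete, the sketch below shows one of the simplest attribution techniques for an image classifier: a plain gradient saliency map. It is a minimal, generic illustration of the class of methods the chapter surveys, not the authors' own approach; the ResNet-18 model, the file name lesion.jpg, and the preprocessing constants are placeholder assumptions.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Placeholder model: any differentiable image classifier can be probed this way.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "lesion.jpg" is a hypothetical input; substitute any RGB image.
x = preprocess(Image.open("lesion.jpg").convert("RGB")).unsqueeze(0)
x.requires_grad_(True)

# Forward pass: take the score of the class the model predicts.
logits = model(x)
score = logits[0, logits.argmax()]

# Backward pass: the gradient of that score with respect to the input
# pixels indicates which pixels most influence the decision.
score.backward()
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # (224, 224) heat map
```

Overlaying such a map on the input image highlights the regions driving the prediction; more robust attribution schemes refine this basic mechanic, but the question being asked of the trained network is the same.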

Metadata
Title
Explainable AI in Medical Diagnosis – Successes and Challenges
Written by
Adriano Lucieri
Muhammad Naseer Bajwa
Andreas Dengel
Sheraz Ahmed
Copyright year
2022
DOI
https://doi.org/10.1007/978-3-658-33597-7_35
