
2024 | Original Paper | Book Chapter

Improving Deep Learning Transparency: Leveraging the Power of LIME Heatmap

Authors: Helia Farhood, Mohammad Najafi, Morteza Saberi

Published in: Service-Oriented Computing – ICSOC 2023 Workshops

Publisher: Springer Nature Singapore


Abstract

Deep learning techniques have recently demonstrated remarkable precision, particularly in image classification. However, their intricate structures make them opaque even to knowledgeable users, obscuring the rationale behind their decisions. Interpretability methods have therefore emerged to bring clarity to these techniques; among them, Local Interpretable Model-Agnostic Explanations (LIME) stands out as a means of enhancing comprehensibility. We believe that interpretable deep learning methods have unrealised potential across a variety of application domains, an aspect largely neglected in the existing literature. This research demonstrates the utility of features such as the LIME heatmap for advancing classification accuracy within a designated decision-support framework. Using real-world contexts, we illustrate how the heatmap identifies the image segments that exert the greatest influence on class scores. This insight empowers users to formulate sensitivity analyses and to discover how manipulating the identified features could mislead the deep learning classifier. As a second significant contribution, we examine the LIME heatmap data of GoogLeNet and SqueezeNet, two prevalent network models, in an effort to improve comprehension of these models. Furthermore, we compare LIME with another recognised interpretive method, Gradient-weighted Class Activation Mapping (Grad-CAM), and evaluate their performance comprehensively. Experiments and evaluations on real-world datasets containing images of fish demonstrate the superiority of the method, validating our hypothesis.
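The heatmap mechanism the abstract describes can be sketched in miniature. This is not the authors' implementation, only a self-contained illustration of LIME's core idea: perturb an image's segments on and off, query the black-box classifier on each perturbation, and fit a proximity-weighted linear surrogate whose per-segment coefficients become the heatmap values. The segment count, kernel width, and toy classifier below are illustrative assumptions; in practice one would apply a LIME library to real superpixels of a real network such as GoogLeNet or SqueezeNet.

```python
import numpy as np

def lime_segment_weights(predict, n_segments, n_samples=1000, kernel_width=0.25, seed=0):
    """Estimate per-segment importance for a black-box `predict` function.

    `predict` maps a binary mask (which segments are kept "on") to a class
    score. The returned vector holds one weight per segment: the values a
    LIME heatmap would visualise.
    """
    rng = np.random.default_rng(seed)
    # Perturb the image by randomly switching segments on or off.
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    scores = np.array([predict(m) for m in masks])
    # Weight each perturbed sample by its similarity to the original
    # (all-segments-on) image, so nearby samples dominate the fit.
    distances = 1.0 - masks.mean(axis=1)
    sample_w = np.exp(-(distances ** 2) / kernel_width ** 2)
    # Fit a weighted linear surrogate: score ~ masks @ coef + intercept.
    X = np.hstack([masks, np.ones((n_samples, 1))])
    W = np.sqrt(sample_w)[:, None]
    coef, *_ = np.linalg.lstsq(W * X, W[:, 0] * scores, rcond=None)
    return coef[:-1]  # drop the intercept

# Hypothetical classifier: only segments 2 and 5 drive the class score.
toy = lambda m: 0.7 * m[2] + 0.3 * m[5]
w = lime_segment_weights(toy, n_segments=8)
print(np.argsort(w)[::-1][:2])  # → [2 5], the two most influential segments
```

Ranking the weights recovers exactly the segments the toy classifier depends on, which is the insight the paper exploits: once the dominant segments are known, a sensitivity analysis can manipulate them to probe when the classifier is misled.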


Metadata
Title
Improving Deep Learning Transparency: Leveraging the Power of LIME Heatmap
Authors
Helia Farhood
Mohammad Najafi
Morteza Saberi
Copyright Year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-97-0989-2_7
