
2023 | Original Paper | Book Chapter

Applications of Large Language Models (LLMs) in Business Analytics – Exemplary Use Cases in Data Preparation Tasks

Authors: Mehran Nasseri, Patrick Brandtner, Robert Zimmermann, Taha Falatouri, Farzaneh Darbanian, Tobechi Obinwanne

Published in: HCI International 2023 – Late Breaking Papers

Publisher: Springer Nature Switzerland


Abstract

The application of data analytics in management has become a crucial success factor for the modern enterprise. To apply analytical models, appropriately prepared data must be available. Preparing this data can be cumbersome, time-consuming, and error-prone. In the current era of Artificial Intelligence (AI), Large Language Models (LLMs) such as OpenAI’s ChatGPT offer a promising way to support these tasks. However, their potential to enhance the efficiency and effectiveness of data preparation remains largely unexplored. In this paper, we apply and evaluate OpenAI’s ChatGPT for data preparation. Based on four real-life use cases, we show that ChatGPT performs well at translating text, assigning products to given categories, classifying the sentiment of customer reviews, and extracting information from textual requests. Our results indicate that ChatGPT can be a valuable tool for many companies, helping with daily data preparation tasks. We demonstrate that ChatGPT can handle different languages and data formats, and that LLMs can perform multiple tasks with minimal or no fine-tuning, leveraging their pre-trained knowledge and generalization abilities. However, we also observed that ChatGPT sometimes produces incorrect outputs, especially when input data is noisy or ambiguous, and that it may struggle with tasks requiring more complex reasoning or domain-specific knowledge. Future research should focus on improving the robustness and reliability of LLMs for data preparation tasks, as well as on developing more efficient and user-friendly ways to deploy and interact with them.
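To make the prompt-based setup concrete, the sketch below shows how one of the four use cases described above (sentiment classification of customer reviews) could be driven through OpenAI's chat API. It is a minimal illustration, not the authors' actual pipeline: the prompt wording, the helper name classify_sentiment, and the model choice are assumptions, and it presupposes the openai Python package (v1 client) with an API key in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch of LLM-based data preparation: labeling customer
# reviews by sentiment. Assumes the openai Python package (v1 client);
# the prompt text and helper name are illustrative, not from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_sentiment(review: str) -> str:
    """Label a customer review as positive, negative, or neutral."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the ChatGPT-era model family
        temperature=0,          # deterministic output for a labeling task
        messages=[
            {
                "role": "system",
                "content": (
                    "You label customer reviews. Answer with exactly "
                    "one word: positive, negative, or neutral."
                ),
            },
            {"role": "user", "content": review},
        ],
    )
    return response.choices[0].message.content.strip().lower()


if __name__ == "__main__":
    print(classify_sentiment("Delivery was late and the box was damaged."))
```

Setting temperature to 0 keeps the labeling as deterministic as possible, which matters when the same records are re-processed in a recurring data preparation pipeline; the other use cases (translation, product categorization, information extraction) would follow the same pattern with a different system prompt.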


Metadata
Title
Applications of Large Language Models (LLMs) in Business Analytics – Exemplary Use Cases in Data Preparation Tasks
Authors
Mehran Nasseri
Patrick Brandtner
Robert Zimmermann
Taha Falatouri
Farzaneh Darbanian
Tobechi Obinwanne
Copyright year
2023
DOI
https://doi.org/10.1007/978-3-031-48057-7_12