
2024 | Original Paper | Book Chapter

KiProL: A Knowledge-Injected Prompt Learning Framework for Language Generation

Authors: Yaru Zhao, Yakun Huang, Bo Cheng

Published in: Advances in Knowledge Discovery and Data Mining

Publisher: Springer Nature Singapore


Abstract

Despite the success of prompt learning-based models in text generation tasks, they still struggle to incorporate external commonsense knowledge and, in particular, suffer from biased knowledge introduction. In this work, we propose KiProL, a knowledge-injected prompt learning framework that improves both language generation quality and training efficiency. KiProL tackles the ineffective learning and utilization of knowledge, reduces biased knowledge introduction, and lowers training expense. It injects the recommended knowledge into the prompt learning encoder to optimize guiding prefixes without modifying the pre-trained model's parameters, resulting in reduced computational cost and shorter training time. Our experiments on two publicly available datasets (Explanation Generation and Story Ending Generation) show that KiProL outperforms baseline models: it improves fluency by an average of 2% and diversity by 3.4% compared with advanced prompt learning-based methods, and it trains 45% faster than the state-of-the-art knowledgeable prompt learning method.
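To make the core mechanism concrete, below is a minimal numerical sketch of the prefix-tuning idea the abstract describes: the pre-trained model's parameters stay frozen, and only a small guiding prefix, initialized here from an external knowledge embedding, is optimized. This is an illustrative toy (the names `knowledge_embedding` and the additive-prefix forward pass are assumptions for exposition, not KiProL's actual architecture).

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained" weights: stand-in for the PLM's parameters,
# which KiProL-style training never updates.
W_frozen = rng.normal(size=(4, 4))

# Trainable prefix, initialized from an (assumed) external knowledge
# embedding so that injected knowledge steers the prefix rather than
# a random initialization.
knowledge_embedding = rng.normal(size=4)
prefix = knowledge_embedding.copy()

def forward(x, prefix):
    # Frozen model output, steered by the learned prefix.
    return W_frozen @ x + prefix

def loss(x, target, prefix):
    return 0.5 * np.sum((forward(x, prefix) - target) ** 2)

x = rng.normal(size=4)
target = rng.normal(size=4)

lr = 0.1
for _ in range(200):
    # Gradient of the squared-error loss w.r.t. the prefix only;
    # W_frozen receives no updates.
    grad = forward(x, prefix) - target
    prefix -= lr * grad

print(f"final loss: {loss(x, target, prefix):.2e}")
```

Only the prefix (a handful of parameters) is trained, which is why such methods cut computational cost relative to full fine-tuning: the loss is driven to near zero while the frozen weights are untouched.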


Metadata
DOI: https://doi.org/10.1007/978-981-97-2266-2_6
