
2024 | Original Paper | Book Chapter

A Weighted Cross-Modal Feature Aggregation Network for Rumor Detection

Authors: Jia Li, Zihan Hu, Zhenguo Yang, Lap-Kei Lee, Fu Lee Wang

Published in: Advances in Knowledge Discovery and Data Mining

Publisher: Springer Nature Singapore


Abstract

In this paper, we propose a Weighted Cross-modal Aggregation Network (WCAN) for rumor detection, which combines highly correlated features across modalities to obtain a unified representation in a shared space. WCAN exploits an adversarial training method that adds perturbations to the text features to enhance model robustness. Specifically, we devise a weighted cross-modal aggregation (WCA) module that measures the distance between the text, image, and social-graph modality distributions using KL divergence, thereby leveraging correlations between modalities. Guided by an MSE loss, the fused features are drawn progressively closer to the original image and social-graph features while retaining the information of each modality. In addition, WCAN includes a feature fusion module that uses dual-modal co-attention blocks to dynamically adjust the features of the three modalities. Experiments conducted on two datasets, WEIBO and PHEME, demonstrate the superior performance of the proposed method.
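To make the idea of KL-weighted aggregation concrete, the following is a minimal sketch of how one *might* weight auxiliary modalities by their divergence from the text distribution and anchor the fused representation with MSE terms, assuming PyTorch tensors of equal dimension. All function names, the exp(-KL) weighting scheme, and the loss composition are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of KL-weighted cross-modal aggregation (not the paper's code).
import torch
import torch.nn.functional as F


def kl_weight(p_feat: torch.Tensor, q_feat: torch.Tensor) -> torch.Tensor:
    """KL(p || q) between softmax-normalized feature distributions, averaged over
    the batch; a smaller divergence is mapped to a larger aggregation weight."""
    p_log = F.log_softmax(p_feat, dim=-1)   # input must be log-probabilities
    q = F.softmax(q_feat, dim=-1)           # target must be probabilities
    kl = F.kl_div(p_log, q, reduction="batchmean")
    return torch.exp(-kl)                   # divergence -> (0, 1] weight


def weighted_aggregate(text, image, graph):
    """Fuse text with image and social-graph features, weighting each auxiliary
    modality by how close its distribution is to the text distribution."""
    w = torch.softmax(torch.stack([kl_weight(text, image),
                                   kl_weight(text, graph)]), dim=0)
    return text + w[0] * image + w[1] * graph


def alignment_loss(fused, image, graph):
    """MSE terms pulling the fused features toward the original image and
    social-graph features, as described at a high level in the abstract."""
    return F.mse_loss(fused, image) + F.mse_loss(fused, graph)


if __name__ == "__main__":
    torch.manual_seed(0)
    text = torch.randn(8, 256)   # batch of 8 posts, 256-dim text features
    image = torch.randn(8, 256)  # image features projected to the shared space
    graph = torch.randn(8, 256)  # social-graph features projected to the shared space
    fused = weighted_aggregate(text, image, graph)
    print(fused.shape, alignment_loss(fused, image, graph).item())
```

In this sketch the alignment loss would be added to the classification loss during training; the paper's actual weighting, loss balancing, and the dual-modal co-attention fusion are not reproduced here.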


Metadata
Title
A Weighted Cross-Modal Feature Aggregation Network for Rumor Detection
Authors
Jia Li
Zihan Hu
Zhenguo Yang
Lap-Kei Lee
Fu Lee Wang
Copyright year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-97-2266-2_4
