
2024 | OriginalPaper | Chapter

Unveiling Backdoor Risks Brought by Foundation Models in Heterogeneous Federated Learning

Authors : Xi Li, Chen Wu, Jiaqi Wang

Published in: Advances in Knowledge Discovery and Data Mining

Publisher: Springer Nature Singapore


Abstract

Foundation models (FMs) have been used to generate synthetic public datasets for heterogeneous federated learning (HFL), where each client uses a unique model architecture. However, the vulnerabilities introduced by integrating FMs, especially to backdoor attacks, remain under-explored in HFL contexts. In this paper, we introduce a novel backdoor attack mechanism for HFL that requires neither compromising clients nor ongoing participation in the FL process. This method plants and transfers the backdoor through the generated synthetic public dataset, which can evade existing backdoor defenses in FL because all clients exhibit normal behavior. Empirical experiments across different HFL configurations and benchmark datasets demonstrate the effectiveness of our attack compared to traditional client-based attacks. Our findings reveal significant security risks in developing robust FM-assisted HFL systems. This research contributes to enhancing the safety and integrity of FL systems, highlighting the need for advanced security measures in the era of FMs. The source code is available at https://github.com/lixi1994/backdoor_FM_hete_FL.
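The attack vector described in the abstract hinges on poisoning the FM-generated synthetic public dataset before clients use it (e.g., for knowledge distillation), so that heterogeneous client models all inherit the backdoor without any client being compromised. The sketch below illustrates the general idea with a classic BadNets-style pixel-patch trigger; it is a hypothetical, minimal illustration of dataset poisoning, not the paper's actual implementation, and the function name and parameters are assumptions.

```python
import numpy as np

def poison_synthetic_dataset(images, labels, target_class,
                             poison_rate=0.1, patch_size=3,
                             trigger_value=1.0, seed=0):
    """Stamp a small trigger patch on a fraction of the synthetic images
    and relabel those samples to the attacker's target class.

    images: float array of shape (N, H, W); labels: int array of shape (N,).
    Returns poisoned copies plus the indices of the poisoned samples.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each poisoned image.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    # Mislabel poisoned samples so models learn trigger -> target_class.
    labels[idx] = target_class
    return images, labels, idx
```

Any client that distills from (or fine-tunes on) this public dataset then associates the trigger pattern with the target class, regardless of its local architecture, which is why the attack transfers across heterogeneous models while client-side behavior looks benign to server defenses.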


Metadata
Title
Unveiling Backdoor Risks Brought by Foundation Models in Heterogeneous Federated Learning
Authors
Xi Li
Chen Wu
Jiaqi Wang
Copyright Year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-97-2259-4_13
