
2024 | Original Paper | Book Chapter

Unlearnable Examples for Time Series

Authors: Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey

Published in: Advances in Knowledge Discovery and Data Mining

Publisher: Springer Nature Singapore


Abstract

Unlearnable examples (UEs) are training samples modified to be unlearnable by Deep Neural Networks (DNNs). These examples are usually generated by adding error-minimizing noise that fools a DNN model into believing there is nothing (no error) left to learn from the data. The concept of UEs has been proposed as a countermeasure against the unauthorized exploitation of personal data. While UEs have been extensively studied for images, it is unclear how to craft effective UEs for time series data. In this work, we introduce the first UE generation method to protect time series data from unauthorized training by deep learning models. To this end, we propose a new form of error-minimizing noise that can be selectively applied to specific segments of a time series, rendering them unlearnable to DNN models while remaining imperceptible to human observers. Through extensive experiments on a wide range of time series datasets, we demonstrate that the proposed UE generation method is effective in both classification and generation tasks. It can protect time series data against unauthorized exploitation while preserving its utility for legitimate use, thereby contributing to the development of secure and trustworthy machine learning systems.
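To make the idea concrete, the sketch below illustrates error-minimizing noise in the spirit of the abstract: a PGD-style loop that *descends* (rather than ascends) the training loss with respect to the input, confined by a segment mask to selected time steps. The full approach in the literature alternates such an inner noise step with normal model updates (a min-min optimization); this sketch shows only the inner step on a toy PyTorch 1D-CNN. All names, shapes, and hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy 1D-CNN classifier standing in for the target model (hypothetical).
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2),
)

def error_minimizing_noise(model, x, y, mask, eps=0.02, alpha=0.005, steps=20):
    """PGD-style noise that descends the training loss w.r.t. the input,
    so the perturbed series appears to contain nothing left to learn.
    `mask` (same shape as x) confines the noise to selected segments."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta * mask), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()   # gradient *descent*: minimize error
            delta.clamp_(-eps, eps)        # L-inf bound keeps noise small
    return (delta * mask).detach()

# Demo: make only the first quarter of each series unlearnable.
x = torch.randn(8, 1, 128)                 # (batch, channels, length)
y = torch.randint(0, 2, (8,))              # class labels
mask = torch.zeros_like(x)
mask[..., :32] = 1.0                       # protected segment
x_unlearnable = x + error_minimizing_noise(model, x, y, mask)
```

The sign flip relative to adversarial PGD is the key design point: adversarial perturbations maximize the loss to cause misclassification, whereas error-minimizing noise drives the loss toward zero so that a model training on the data receives no useful gradient signal from the protected segments.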


Metadata
Title
Unlearnable Examples for Time Series
Authors
Yujing Jiang
Xingjun Ma
Sarah Monazam Erfani
James Bailey
Copyright Year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-97-2266-2_17
