
2020 | Original Paper | Book Chapter

Sparse Adversarial Attack via Perturbation Factorization

Authors: Yanbo Fan, Baoyuan Wu, Tuanhui Li, Yong Zhang, Mingyang Li, Zhifeng Li, Yujiu Yang

Published in: Computer Vision – ECCV 2020

Publisher: Springer International Publishing


Abstract

This work studies the sparse adversarial attack, which aims to generate adversarial perturbations at a subset of positions of a benign image, such that the perturbed image is incorrectly predicted by a deep neural network (DNN) model. The sparse adversarial attack involves two challenges: where to perturb, and how to determine the perturbation magnitudes. Many existing works determine the perturbed positions manually or heuristically and then optimize the magnitudes with an algorithm designed for the dense adversarial attack. In this work, we propose to factorize the perturbation at each pixel into the product of two variables: the perturbation magnitude and a binary selection factor (i.e., 0 or 1). A pixel is perturbed if its selection factor is 1 and left unchanged otherwise. Based on this factorization, we formulate the sparse attack as a mixed integer programming (MIP) problem that jointly optimizes the binary selection factors and the continuous perturbation magnitudes of all pixels, with a cardinality constraint on the selection factors to explicitly control the degree of sparsity. Moreover, the perturbation factorization provides extra flexibility to incorporate other meaningful constraints on the selection factors or magnitudes, such as group-wise sparsity or enhanced visual imperceptibility. We develop an efficient algorithm by equivalently reformulating the MIP problem as a continuous optimization problem. Extensive experiments demonstrate the superiority of the proposed method over several state-of-the-art sparse attack methods. The implementation of the proposed method is available at https://github.com/wubaoyuan/Sparse-Adversarial-Attack.
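To make the factorization concrete, the formulation below is a sketch reconstructed from the abstract alone; the symbols (benign image x, label y, attacked model f, attack loss L, magnitudes δ, binary selectors g, sparsity budget k) are our own illustrative notation, not necessarily the paper's:

% Sketch of the factorized sparse-attack objective (illustrative notation):
\begin{align*}
  \epsilon &= \delta \odot g, \qquad g \in \{0,1\}^{n}, \\
  \min_{\delta,\, g} \;& \mathcal{L}\bigl(f(x + \delta \odot g),\, y\bigr)
      \quad \text{s.t.} \quad \|g\|_{0} \le k ,
\end{align*}

where ⊙ is element-wise multiplication and the cardinality constraint caps the number of perturbed pixels at k. The paper solves the MIP through an equivalent continuous reformulation; as a rough stand-in only, the toy NumPy sketch below alternates a gradient step on the dense magnitudes δ with a hard top-k re-selection of g, attacking a random linear "classifier". This is a simplified heuristic under our own assumptions, not the authors' algorithm:

import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier" standing in for a DNN f; purely illustrative.
n, num_classes = 64, 10
W = rng.standard_normal((num_classes, n))
x = rng.random(n)                 # benign "image", flattened
y = int(np.argmax(W @ x))         # its clean prediction

def margin_grad(x_adv, y):
    """Gradient (w.r.t. x_adv) of the margin logits[y] - max_{j != y} logits[j];
    driving the margin below zero flips the prediction (untargeted attack)."""
    logits = W @ x_adv
    masked = logits.copy()
    masked[y] = -np.inf
    j = int(np.argmax(masked))
    return W[y] - W[j]

def top_k_mask(v, k):
    """Binary selection factors g: 1 on the k largest-|v| coordinates."""
    g = np.zeros_like(v)
    g[np.argsort(-np.abs(v))[:k]] = 1.0
    return g

k, lr, steps = 8, 0.05, 300       # sparsity budget, step size, iterations
delta = np.zeros(n)               # continuous per-pixel magnitudes
g = np.ones(n)                    # start dense, then enforce sparsity

for _ in range(steps):
    grad = margin_grad(x + delta * g, y)
    delta -= lr * grad            # update magnitudes everywhere
    g = top_k_mask(delta, k)      # re-select the k strongest pixels

x_adv = x + delta * g             # only k coordinates of x are changed
print("clean label:", y, "adversarial label:", int(np.argmax(W @ x_adv)))
print("perturbed coordinates:", int(np.count_nonzero(x_adv != x)))

A real attack would replace the linear model with a DNN and back-propagate through it; the point here is only how the δ ⊙ g split lets the selection (where to perturb) and the magnitude (how much to perturb) be updated by different rules.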


Footnotes
2
Note that some sparse attack methods fail to reach a 100% attack success rate (ASR) in our experiments.
Metadata
Title
Sparse Adversarial Attack via Perturbation Factorization
Authors
Yanbo Fan
Baoyuan Wu
Tuanhui Li
Yong Zhang
Mingyang Li
Zhifeng Li
Yujiu Yang
Copyright year
2020
DOI
https://doi.org/10.1007/978-3-030-58542-6_3
