
2024 | Original Paper | Book Chapter

Bi-CryptoNets: Leveraging Different-Level Privacy for Encrypted Inference

Authors: Man-Jie Yuan, Zheng Zou, Wei Gao

Published in: Advances in Knowledge Discovery and Data Mining

Publisher: Springer Nature Singapore


Abstract

Privacy-preserving neural networks have attracted increasing attention in recent years, and various algorithms have been developed to balance accuracy, computational complexity, and information security from a cryptographic perspective. This work takes a different view, focusing on the input data and the structure of the neural network. We decompose the input data (e.g., images) into sensitive and insensitive segments according to importance and privacy. The sensitive segment contains important and private information, such as human faces, and is protected with strong homomorphic encryption, whereas the insensitive segment contains background content and is protected by adding perturbations. We propose bi-CryptoNets, consisting of a plaintext branch and a ciphertext branch that process the two segments respectively; the ciphertext branch can utilize information from the plaintext branch through unidirectional connections. We adopt knowledge distillation for our bi-CryptoNets by transferring representations from a well-trained teacher neural network. Empirical studies show the effectiveness of our bi-CryptoNets and a decrease in inference latency.
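As a rough illustration of the decomposition and the unidirectional branch connections described in the abstract, the following Python sketch is a minimal, hypothetical rendering: the bounding box, the noise-based perturbation, and the additive feature fusion are placeholders for exposition, not the paper's actual scheme, and homomorphic encryption is omitted entirely.

```python
import random


def decompose(image, box):
    """Split an image (2-D list of pixels) into a sensitive segment
    (pixels inside `box`) and an insensitive segment (everything else).
    `box` = (top, left, bottom, right), half-open intervals."""
    top, left, bottom, right = box
    sensitive, insensitive = [], []
    for i, row in enumerate(image):
        for j, px in enumerate(row):
            if top <= i < bottom and left <= j < right:
                sensitive.append(((i, j), px))
            else:
                insensitive.append(((i, j), px))
    return sensitive, insensitive


def perturb(segment, scale=0.1, seed=0):
    """Add bounded noise to the insensitive segment (a stand-in for the
    paper's perturbation mechanism, which is not specified here)."""
    rng = random.Random(seed)
    return [(pos, px + rng.uniform(-scale, scale)) for pos, px in segment]


def unidirectional_fuse(plain_features, cipher_features):
    """Ciphertext branch reads plaintext features, never the reverse,
    so no information flows from ciphertext back into plaintext.
    (Under HE, plaintext-ciphertext addition is a cheap operation.)"""
    return [c + p for c, p in zip(cipher_features, plain_features)]
```

In this sketch the sensitive pixels would be encrypted and fed to the ciphertext branch, while the perturbed insensitive pixels go to the plaintext branch; the fusion step mirrors the one-way connections between the two branches.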


Footnotes
1
Downloaded from yann.lecun.com/exdb/mnist and www.cs.toronto.edu/~kriz/cifar.
 
Metadata
Title
Bi-CryptoNets: Leveraging Different-Level Privacy for Encrypted Inference
Authors
Man-Jie Yuan
Zheng Zou
Wei Gao
Copyright year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-97-2253-2_17
