
[1] Zhao Ningning, Jiang Rui. Poisoning attack detection scheme based on data integrity sampling audit algorithm in neural network[J]. Journal of Southeast University (English Edition), 2023, 39(3): 314-322. DOI: 10.3969/j.issn.1003-7985.2023.03.012.

Poisoning attack detection scheme based on data integrity sampling audit algorithm in neural network

Journal of Southeast University (English Edition)[ISSN:1003-7985/CN:32-1325/N]

Volume:
39
Issue:
3
Page:
314-322
Research Field:
Computer Science and Engineering
Publishing date:
2023-09-20

Info

Title:
Poisoning attack detection scheme based on data integrity sampling audit algorithm in neural network
Author(s):
Zhao Ningning Jiang Rui
School of Cyber Science and Engineering, Southeast University, Nanjing 210096, China
Keywords:
poisoning attack; neural network; deep learning; data integrity; sampling audit
PACS:
TP339
DOI:
10.3969/j.issn.1003-7985.2023.03.012
Abstract:
To address the issue that most existing detection and defense methods can only detect known poisoning attacks and cannot defend against other types, a poisoning attack detection scheme with data recovery (PAD-DR) is proposed to effectively detect poisoning attacks and recover the poisoned data in a neural network. First, the PAD-DR scheme can detect all types of poisoning attacks. A data sampling detection algorithm is combined with real-time data detection at the input-layer nodes of the neural network, so that the system can ensure the integrity and availability of the training data and prevent it from being altered or corrupted. Second, the PAD-DR scheme can recover training data corrupted or poisoned by an attack. Cauchy Reed-Solomon (CRS) coding is used to encode the training data and store the resulting blocks separately; once a poisoning attack is detected, the system can retrieve data from any k of the n storage nodes to recover the original training data. Finally, the security objectives of the PAD-DR scheme, namely withstanding poisoning attacks, resisting forgery and tampering attacks, and recovering the data accurately, are formally proved.
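The sampling-based integrity audit described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's exact PAD-DR algorithm: it assumes each training record is tagged with an HMAC at storage time, and a random sample of records is verified before each training round.

```python
# Hypothetical sketch of a sampling-based data integrity audit:
# tag records with an HMAC when stored, then audit a random sample.
import hmac
import hashlib
import random

def tag(record: bytes, key: bytes) -> bytes:
    """Integrity tag for one training record."""
    return hmac.new(key, record, hashlib.sha256).digest()

def audit(records, tags, key, sample_size, rng=random):
    """Verify a random subset; any mismatch signals tampering/poisoning."""
    idx = rng.sample(range(len(records)), sample_size)
    return [i for i in idx
            if not hmac.compare_digest(tag(records[i], key), tags[i])]

# usage: one record is poisoned after its tag was computed
key = b"audit-key"
records = [b"img0,label3", b"img1,label7", b"img2,label1"]
tags = [tag(r, key) for r in records]
records[1] = b"img1,label0"            # simulated label-flip poisoning
bad = audit(records, tags, key, sample_size=3)
assert bad == [1]                      # the corrupted record is flagged
```

Auditing a sample rather than every record trades detection latency for per-round cost; the real scheme's sampling strategy and tag construction are specified in the paper.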
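The k-of-n recovery step can likewise be sketched. The toy below works over GF(257) with single-symbol blocks and a non-systematic Cauchy matrix; the `encode`/`decode` helpers are illustrative assumptions, not the paper's CRS implementation.

```python
# Toy k-of-n erasure coding with a Cauchy matrix over GF(257):
# every k x k submatrix of a Cauchy matrix is invertible, so any
# k of the n stored shares suffice to recover the data.
P = 257  # prime field; byte values 0..255 are valid symbols

def inv(a):
    """Multiplicative inverse in GF(P) via Fermat's little theorem."""
    return pow(a, P - 2, P)

def cauchy_matrix(n, k):
    # xs and ys disjoint, so x + y != 0 and all entries are defined
    xs, ys = range(k, k + n), range(k)
    return [[inv(x + y) for y in ys] for x in xs]

def encode(data, n, k):
    """Node i stores the inner product of Cauchy row i with the k symbols."""
    C = cauchy_matrix(n, k)
    return [sum(C[i][j] * data[j] for j in range(k)) % P for i in range(n)]

def decode(shares, rows, n, k):
    """Recover data from any k shares by Gaussian elimination over GF(P)."""
    C = cauchy_matrix(n, k)
    A = [C[r][:] + [s] for r, s in zip(rows, shares)]  # augmented matrix
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        f = inv(A[col][col])
        A[col] = [v * f % P for v in A[col]]
        for r in range(k):
            if r != col and A[r][col]:
                m = A[r][col]
                A[r] = [(v - m * w) % P for v, w in zip(A[r], A[col])]
    return [A[r][k] for r in range(k)]

# usage: split 3 data symbols across 5 nodes; any 3 nodes suffice
data = [10, 20, 30]
shares = encode(data, n=5, k=3)
assert decode([shares[i] for i in (1, 3, 4)], rows=(1, 3, 4), n=5, k=3) == data
```

Production CRS codes operate on packed bit matrices over GF(2^w) for XOR-only encoding; the prime field here keeps the invertibility argument visible in a few lines.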


Memo:
Biographies: Zhao Ningning (1998—), female, graduate student; Jiang Rui (corresponding author), male, Ph.D., professor, R.Jiang@seu.edu.cn.
Foundation items: The National Natural Science Foundation of China (No. 61372103), the Natural Science Foundation of Jiangsu Province (No. BK20201265), the Project of the National Engineering Research Center of Classified Protection and Safeguard Technology for Cyber Security (No. C21640-2).
Citation: Zhao Ningning, Jiang Rui. Poisoning attack detection scheme based on data integrity sampling audit algorithm in neural network[J]. Journal of Southeast University (English Edition), 2023, 39(3): 314-322. DOI: 10.3969/j.issn.1003-7985.2023.03.012.
Last Update: 2023-09-20