Proceedings of the
Nineteenth International Conference on Computational Intelligence and Security (CIS 2023)
December 1 – 4, 2023, Haikou, China

Refining Adversarial Perturbations by Evolutionary Strategies in Black-Box Attack

Shu You, Zhenhua Li and Yining Li

School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China.

ABSTRACT

Recent research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples: inputs with subtle perturbations that cause a DNN model to make incorrect predictions. In the black-box setting, the attacker only has access to the inputs and outputs of the target model. Existing black-box attack methods often require a large number of queries to generate adversarial examples, and the examples they produce are of low quality. To address this problem, we propose Refining Adversarial Perturbations by Evolutionary Strategies, a black-box attack that efficiently finds high-quality adversarial examples under a limited query budget. Furthermore, our method can generate many adversarial examples with a small l2 norm, making them difficult for defense systems to detect. Experimental results demonstrate that our method outperforms state-of-the-art black-box methods in both quality and efficiency.
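The abstract gives no implementation details, so for illustration only the sketch below shows a generic (1+1) evolution-strategy loop that refines an adversarial perturbation toward a smaller l2 norm using only hard-label queries, in the spirit of the approach described above. It is not the authors' algorithm; the function refine_perturbation, the toy linear black box, and all parameter values are assumptions made for this sketch.

    import numpy as np

    def refine_perturbation(query_label, x_orig, true_label, x_adv_init,
                            max_queries=1000, sigma=0.1, shrink=0.97):
        """(1+1)-ES-style refinement: shrink the l2 norm of an adversarial
        perturbation while keeping the input misclassified, using only
        hard-label queries to the black-box model."""
        delta = x_adv_init - x_orig
        for _ in range(max_queries):
            # Mutate: propose a slightly contracted, noisy perturbation.
            candidate = shrink * delta + sigma * np.random.randn(*delta.shape)
            if query_label(x_orig + candidate) != true_label:
                delta = candidate      # accept: still adversarial, usually smaller
                sigma *= 1.05          # simple step-size adaptation
            else:
                sigma *= 0.95
        return x_orig + delta, float(np.linalg.norm(delta))

    # Toy usage: a linear classifier stands in for the unknown target model.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        w = rng.normal(size=32)
        query_label = lambda x: int(x @ w > 0)              # black box: returns 0 or 1
        x = rng.normal(size=32)
        y = query_label(x)
        x_adv0 = x - 2.0 * np.sign(w) * (1 if y else -1)    # crude initial adversarial point
        x_adv, l2 = refine_perturbation(query_label, x, y, x_adv0, max_queries=500)
        print("refined l2 norm:", l2, "still adversarial:", query_label(x_adv) != y)

The acceptance rule only keeps candidates that remain misclassified, so the query count directly bounds the cost of the refinement; the paper's method presumably uses a more sophisticated evolutionary strategy than this minimal loop.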

Keywords: Refining adversarial perturbations by evolutionary strategies, Black-box attack, High-quality, Efficient.


