Proceedings of the
The Nineteenth International Conference on Computational Intelligence and Security (CIS 2023)
December 1 – 4, 2023, Haikou, China
SODM: Stealing Object Detection Models via Diffusion Model
1College of Cyberspace Security, Hainan University, China.
2National Computer Network Intrusion Prevention Center, University of Chinese Academy of Sciences; School of Cyber Engineering, Xidian University, China
ABSTRACT
As deep neural networks (DNNs) demonstrate exceptional performance across multiple domains, Machine Learning as a Service (MLaaS) has gained popularity within cloud services. However, deploying machine learning models as cloud services exposes them to the threat of model stealing attacks. Existing model stealing attacks concentrate primarily on image classification models; attacks targeting another vital class of computer vision models, object detectors, remain largely unexplored. We introduce SODM, an approach for stealing object detection models. SODM expands the scope of attack scenarios in black-box settings, further relaxes attack assumptions, and reduces the associated attack costs, all while achieving high-fidelity stealing of object detection models. Extensive experiments across various settings demonstrate that our approach outperforms other model stealing methods under relaxed attack assumptions. Before sample filtering, the substitute model reaches 93% of the victim model's accuracy. By applying mutual-information-based filtering, we reduce attack costs by 6.7% while maintaining the substitute model's fidelity at 90%.
Keywords: Deep learning, Privacy security, Model stealing attacks, Substitute models, Diffusion models.
