Great Haste Makes Great Waste: Exploiting and Attacking Efficient Deep Learning



Sanghyun Hong (Oregon State University)

Sanghyun Hong is an Assistant Professor of Computer Science at Oregon State University. He works on building trustworthy and socially responsible AI systems for the future. He is the recipient of the Samsung Global Research Outreach (GRO) Award 2022 and was selected as a 2022 DARPA Riser. He was also an invited speaker at USENIX Enigma 2021, where he talked about practical hardware attacks on deep learning. He earned his Ph.D. at the University of Maryland, College Park, under the guidance of Prof. Tudor Dumitras, and was a recipient of the Ann G. Wylie Dissertation Fellowship. He received his B.S. from Seoul National University.



Short Abstract: Recent increases in the computational demands of deep neural networks have sparked interest in efficient deep learning mechanisms, such as neural network quantization and input-adaptive multi-exit inference. These mechanisms provide significant computational savings while preserving a model's accuracy, making it practical to run commercial-scale models in resource-constrained settings. However, most work focuses on "hastiness," i.e., how quickly and efficiently a model reaches correct predictions, and overlooks the security vulnerabilities that can "waste" this practicality. In this talk, I will revisit efficient deep learning from a security perspective and introduce emerging research on exploiting and attacking these mechanisms to achieve malicious objectives. First, I will show how an adversary can exploit neural network quantization to induce malicious behaviors: an adversary can manipulate a pre-trained model so that it behaves maliciously only after quantization. Next, I will show how input-adaptive mechanisms, such as multi-exit models, fail to deliver their promised computational efficiency in adversarial settings: by adding human-imperceptible input perturbations, an attacker can completely offset the computational savings these models provide. Finally, I will conclude by encouraging the audience to examine efficient deep learning practices with an adversarial lens and by discussing future research directions for building defense mechanisms. I believe this is the best moment to heed Benjamin Franklin's advice: "Take time for all things."
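
For readers who want a concrete picture of the first attack before the talk, below is a minimal, self-contained numpy sketch of the underlying failure mode. It is not the training-based manipulation discussed in the talk; it only illustrates the lever that manipulation pulls: int8 rounding error, accumulated across weights, can flip a decision that is benign in fp32. All weights, inputs, and thresholds here are hypothetical and chosen purely for illustration.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor int8 quantization, a common post-training scheme:
    # round(w / scale) clamped to [-127, 127].
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Toy linear "classifier": sign(w @ x + b) decides benign (+) vs. malicious (-).
# The +/-1 anchor weights pin the quantization scale to 1/127; the remaining
# weights sit just below a rounding boundary, so each loses ~0.4% of its
# magnitude under int8. These small per-weight errors accumulate and flip the
# decision. (All numbers are hypothetical, crafted for illustration.)
scale = 1.0 / 127.0
c = 38.49 * scale                        # rounds down to 38 when quantized
w_fp32 = np.array([1.0, -1.0, c, c, c, c], dtype=np.float32)
b = -1.2                                 # biases typically stay in fp32
x = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 1.0], dtype=np.float32)

q, s = quantize_int8(w_fp32)
w_int8 = dequantize(q, s)

print("fp32 score:", w_fp32 @ x + b)     # ~ +0.012 -> benign
print("int8 score:", w_int8 @ x + b)     # ~ -0.003 -> flips after quantization
```

The attack in the talk is stronger than this hand-crafted toy: per the abstract, the adversary manipulates a pre-trained model so that the fp32 version stays well-behaved while the malicious behavior surfaces only upon quantization, but the rounding arithmetic above is the mechanism being exploited.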
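The slowdown attack on multi-exit models can be sketched in a similar spirit. The PyTorch toy below (a hypothetical MultiExitMLP with confidence-thresholded exits; the architecture, bound, and step sizes are all assumptions for illustration) crafts a small bounded perturbation that pushes every internal exit's output distribution toward uniform, a simplified version of a slowdown objective. On a trained model, driving exit confidence below the threshold forces full-depth inference and erases the computational savings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitMLP(nn.Module):
    """Hypothetical input-adaptive model: inference stops at the first exit
    whose softmax confidence clears the threshold."""
    def __init__(self, dim=32, n_classes=10, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_blocks)])
        self.exits = nn.ModuleList(
            [nn.Linear(dim, n_classes) for _ in range(n_blocks)])

    def all_logits(self, x):
        # Logits at every exit; used by the attacker's objective.
        logits, h = [], x
        for block, head in zip(self.blocks, self.exits):
            h = block(h)
            logits.append(head(h))
        return logits

    def blocks_executed(self, x, threshold=0.9):
        # Input-adaptive inference: count how many blocks actually run.
        h = x
        for i, (block, head) in enumerate(zip(self.blocks, self.exits)):
            h = block(h)
            if F.softmax(head(h), dim=-1).max().item() >= threshold:
                return i + 1
        return len(self.blocks)

torch.manual_seed(0)
model = MultiExitMLP().eval()
x = torch.randn(1, 32)

# Simplified slowdown objective: push every exit's output distribution toward
# uniform, so no exit is ever confident enough to stop early.
delta = torch.zeros_like(x, requires_grad=True)
uniform = torch.full((1, 10), 1.0 / 10)
for _ in range(100):
    loss = sum(F.kl_div(F.log_softmax(z, dim=-1), uniform, reduction="batchmean")
               for z in model.all_logits(x + delta))
    loss.backward()
    with torch.no_grad():
        delta -= 0.01 * delta.grad.sign()   # descend: move exits toward uniform
        delta.clamp_(-0.1, 0.1)             # keep the perturbation small
    delta.grad.zero_()

conf = lambda t: [round(F.softmax(z, dim=-1).max().item(), 3)
                  for z in model.all_logits(t)]
print("max confidence per exit (clean):    ", conf(x))
print("max confidence per exit (perturbed):", conf(x + delta))
```

On this untrained toy the absolute numbers are not meaningful; the point is the mechanism: the perturbation lowers confidence at every exit simultaneously, so a confidence-thresholded model falls through to its final layer on every such input.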