Towards Adversarial Robustness of Deep Vision Algorithms



Hanshu Yan (ByteDance)

Dr Hanshu Yan is a Research Scientist at TikTok/ByteDance Singapore, working on efficient and controllable diffusion-based generative models. He obtained his PhD from NUS in September 2022, advised by Prof. Vincent Y. F. Tan and Dr Jiashi Feng. He also worked closely with Dr Jingfeng Zhang and Prof. Masashi Sugiyama from RIKEN-AIP on the topic of machine learning robustness. Previously, he received an M.Sc. from NUS and a B.Eng. and B.Sc. from Beihang University. His research interests include machine learning (generative modeling, efficiency, and robustness) and computer vision (image/video generation, editing, and processing).



Short Abstract: Deep learning methods have achieved great success in solving computer vision tasks, and they are widely used in artificial intelligence systems for image processing, analysis, and understanding. However, deep neural networks have been shown to be vulnerable to adversarial perturbations of their input data, bringing their security issues to the fore. It is therefore imperative to study the adversarial robustness of deep vision algorithms comprehensively. This talk focuses on the adversarial robustness of image classification models and image denoisers. We will discuss the robustness of deep vision algorithms from three perspectives: 1) robustness evaluation (we propose ObsAtk to evaluate the robustness of denoisers), 2) robustness improvement (HAT, TisODE, and CIFS are developed to robustify vision models), and 3) the connection between adversarial robustness and generalization to new domains (we find that adversarially robust denoisers can handle unseen types of real-world noise).
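
For readers unfamiliar with adversarial perturbations, below is a minimal, generic FGSM-style attack sketch in PyTorch. It only illustrates the kind of input perturbation the talk is concerned with; it is not ObsAtk or any of the speaker's methods, and the model, labels, and perturbation budget epsilon are placeholder assumptions.

    # Minimal FGSM-style adversarial perturbation (generic illustration, not ObsAtk).
    # Assumes a differentiable classifier `model`, an input batch `x` with values in [0, 1],
    # ground-truth labels `y`, and an L-infinity perturbation budget `epsilon`.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=8 / 255):
        # Enable gradient tracking on a copy of the clean input.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the gradient-sign direction to increase the loss,
        # then clip back to the valid pixel range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Even such a single-step perturbation, imperceptible to humans at small epsilon, can flip the predictions of standard classifiers, which motivates the robustness evaluation and improvement methods discussed in the talk.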