A generative approach to robust machine learning



Vikash Sehwag (Princeton University)

Vikash is a PhD candidate in Electrical Engineering at Princeton University. He is interested in research problems at the intersection of security, privacy, and machine learning. Some topics he has worked on are adversarially robust supervised and self-supervised learning, adversarial robustness in compressed neural networks, self-supervised detection of outliers, robust open-world machine learning, and privacy leakage in large-scale deep learning. He is co-advised by Prateek Mittal and Mung Chiang. Before joining Princeton, he completed his undergraduate degree in E&ECE (with a minor in CS) at IIT Kharagpur, India. He previously had an amazing summer internship experience at Microsoft Research, Redmond, and before that spent a wonderful summer working with Heinz Koeppl at TU Darmstadt. He received the Qualcomm Innovation Fellowship in 2019.



Short Abstract: My talk will focus on the emerging direction of making machine learning reliable by incorporating data-distribution information through generative models. It is natural to ask: why would generative models help, how much can they help, and which generative models are most helpful? I will present a framework catered to answering these questions. In particular, I'll demonstrate why we should use diffusion-based generative models, rather than generative adversarial networks (GANs), in robust learning applications. Although diffusion-based generative models offer an effectively unbounded source of data, their sampling process is hard to guide toward specific regions of the data distribution. I will discuss the techniques we developed to enable guided sampling from regions critical to robust learning.