Towards Robust Representation Learning and Beyond



Cihang Xie (UCSC)

Cihang Xie is an Assistant Professor of Computer Science and Engineering at the University of California, Santa Cruz. His research interests lie at the intersection of computer vision and machine learning, with the goal of building robust and explainable AI systems. Cihang received his Ph.D. degree from Johns Hopkins University and was awarded the Facebook PhD Fellowship. He has served as an Area Chair for ICCV'21, ECCV'22, ICLR'22, and NeurIPS'22. For more information, please visit his website at https://cihangxie.github.io/.



Short Abstract: Deep learning has transformed computer vision in the past few years. Fueled by powerful computational resources and massive amounts of data, deep networks achieve compelling, sometimes even superhuman, performance on a wide range of visual benchmarks. Nonetheless, these success stories come with bitterness---deep networks are vulnerable to adversarial examples. The existence of adversarial examples reveals that the computations performed by current deep networks differ dramatically from those performed by human brains and, on the other hand, provides opportunities for understanding and improving these models. In this talk, I will first show that the vulnerability of deep networks is a much more severe issue than previously thought---the threats from adversarial examples are ubiquitous and catastrophic. Then I will discuss how to equip deep networks with robust representations for defending against adversarial examples. We approach the solution from the perspective of neural architecture design, and show that incorporating architectural elements like feature-level denoisers or smooth activation functions can effectively boost model robustness. The last part of this talk will rethink the value of adversarial examples. Rather than treating adversarial examples as a threat to deep networks, we go a step further and show that adversarial examples can help deep networks substantially improve their generalization ability.
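To make the notion of an adversarial example concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM, Goodfellow et al.), one standard way to craft such perturbations. This is an illustrative toy on a two-dimensional logistic model, not the specific attacks or defenses discussed in the talk; the function name and the toy weights are invented for the example.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM on a logistic model: x_adv = x + eps * sign(dL/dx).

    For logit z = w.x + b with sigmoid cross-entropy loss,
    the input gradient is (sigmoid(z) - y) * w.
    """
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1
    grad_x = (p - y) * w              # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)  # move each coordinate to increase the loss

# toy example (hypothetical weights): a confidently classified point
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])              # logit = 1.5, so class 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.3)
# the perturbed input has a lower logit, i.e. reduced confidence in class 1
```

Even this one-step, tiny-epsilon perturbation reliably lowers the model's confidence, which is the basic phenomenon the talk's defenses (feature-level denoising, smooth activations) aim to counteract at scale.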