Adversarial attacks in black-box settings



Yiwen Guo (Formerly Bytedance AI)

Dr Yiwen Guo received his PhD degree from Tsinghua University. He previously worked at ByteDance AI Lab and Intel Labs as a staff research scientist. His research interests lie at the intersection of machine learning and security, and he has published more than 30 papers at top-tier conferences and in journals, including CVPR, ECCV, NeurIPS, ICLR, and TPAMI.



Short Abstract: Adversarial examples have attracted great attention from the community. In particular, adversarial examples generated in black-box settings, where the architecture of the target model is unknown, are of interest to both academia and industry, owing to their power to compromise real-world learning-based systems. In this talk, the speaker will share some old and new thoughts on ways of generating powerful black-box adversarial examples. Both query-based and transfer-based methods will be discussed, which will hopefully also inspire research on optimization and model generalization.
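To give a flavor of the query-based methods mentioned above, here is a minimal sketch of a random-search attack in the spirit of SimBA (Guo et al., 2019): the attacker only queries the model's output scores, never its weights or gradients, and keeps a coordinate-wise perturbation whenever it lowers the true-class score. The linear "model" and all parameter values below are toy assumptions for illustration, not the speaker's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a black-box classifier: we may only call query(),
# as if hitting a remote API; W is hidden from the attacker.
W = rng.normal(size=(2, 16))

def query(x):
    """Return class scores for input vector x (the only access allowed)."""
    return W @ x

def simba_attack(x, label, eps=0.2, max_queries=500):
    """Random-search attack: try +/- eps along random coordinates,
    keeping a step only if it lowers the score of the true class."""
    x_adv = x.copy()
    best = query(x_adv)[label]
    queries = 0
    for d in rng.permutation(x.size):
        for sign in (+1.0, -1.0):
            cand = x_adv.copy()
            cand[d] += sign * eps
            queries += 1
            if query(cand)[label] < best:   # keep the step if it helps
                x_adv, best = cand, query(cand)[label]
                break                       # move on to the next coordinate
            if queries >= max_queries:
                return x_adv
    return x_adv

x = rng.normal(size=16)
label = int(np.argmax(query(x)))  # the class the model currently predicts
x_adv = simba_attack(x, label)
```

Transfer-based methods take the opposite route: instead of querying the target, they craft adversarial examples on a substitute model they can inspect, relying on the perturbations transferring across architectures.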