【Academic Forum】AI Security and Privacy Forum 16th Session
Adversarial Robustness of Deep Vision Models and Black-Box Attacks
Time: 10:00–12:00 (Beijing Time)
Date: Friday, September 16, 2022
Seminar Information
Speaker: Prof. Yahong Han, Professor, Tianjin University
Topic: Adversarial Robustness of Deep Vision Models and Black-Box Attacks
Speaker: Prof. Baoyuan Wu, Associate Professor, The Chinese University of Hong Kong, Shenzhen
Topic: Our recent advances on black-box attacks and a comprehensive benchmark
Moderator
Prof. Baoyuan Wu, Associate Professor, School of Data Science, CUHK-Shenzhen
Agenda
10:00-10:05 AM
Welcome and Opening Remarks
Prof. Baoyuan Wu
10:05-11:00 AM
Invited Talk
Prof. Yahong Han
11:00-11:25 AM
Invited Talk
Prof. Baoyuan Wu
11:25-11:50 AM
Q&A
Prof. Baoyuan Wu and Prof. Yahong Han
Host
Shenzhen Research Institute of Big Data (SRIBD)
China Society of Image and Graphics (CSIG)
Organizer
College of Intelligence and Computing at Tianjin University
School of Data Science, CUHK-Shenzhen
Co-organizer
Hisense National Key Laboratory of Digital Multimedia Technology
School of Artificial Intelligence,
Hebei University of Technology
Format
Online
http://live.bilibili.com/22947067
Biography
Prof. Yahong Han
Yahong Han is an Outstanding Professor in the College of Intelligence and Computing at Tianjin University. He received his Ph.D. degree from the College of Computer Science and Technology at Zhejiang University. His current research interests include multimedia analysis, computer vision, and AI security. He received the CCF Outstanding Dissertation Award in 2012 and was selected for the Program for New Century Excellent Talents in University by the Ministry of Education of China in 2013. He was a Visiting Scholar at UC Berkeley from Nov. 2014 to Nov. 2015. He received the Best Paper Finalist and the Grand Challenge Honorable Mention Award at ACM Multimedia 2017, and won the Large-Scale Video QA Challenge at ICCV 2017. In 2021, as the Ph.D. supervisor, he received the CSIG Outstanding Dissertation Award. Recently, Yahong has received funding for key programs from China's Ministry of Science and Technology, NSFC, and other agencies.
Prof. Baoyuan Wu
Dr. Baoyuan Wu is an Associate Professor in the School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen). His research interests are AI security and privacy, machine learning, computer vision, and optimization. He has published more than 50 papers in top-tier conferences and journals, including TPAMI, IJCV, NeurIPS, CVPR, ICCV, ECCV, ICLR, and AAAI. He currently serves as an Associate Editor of Neurocomputing; an Area Chair of NeurIPS 2022, ICLR 2022/2023, AAAI 2022, and ICIG 2021; and a Senior Program Committee Member of AAAI 2021 and IJCAI 2020/2021.
Abstract
Adversarial Robustness of Deep Vision Models and Black-Box Attacks
Abstract: With the rapid development of techniques such as deep learning, we are witnessing an unprecedented boom in AI and its applications. The vulnerability of deep learning models to adversarial noise has drawn great attention to trustworthy machine learning and AI security from both academia and industry. In this talk, we will analyze the vulnerability of CNNs and Vision Transformers (ViTs). Toward evaluating a model's adversarial robustness, we will discuss the robustness/safety radius and noise compression. We will then introduce a new framework for query-efficient black-box adversarial attacks that bridges transfer-based and decision-based attacks, as well as a new decision-based black-box attack for ViTs. Finally, we will briefly discuss adversarial games and federated domain adaptation for open-environment applications.
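As background for the decision-based setting mentioned in the abstract, the following is a minimal illustrative sketch (not the speaker's method) of a decision-based black-box attack on a toy linear classifier: the attacker sees only hard labels and performs a random walk that moves the adversarial point toward the original input, accepting only queries that keep the label adversarial. The names `decision_oracle` and `boundary_attack` and all parameters here are hypothetical.

```python
import numpy as np

def decision_oracle(x, w=np.array([1.0, -1.0]), b=0.0):
    """Hard-label oracle: the attacker only observes the predicted class."""
    return int(np.dot(w, x) + b > 0)

def boundary_attack(x_orig, x_adv, oracle, steps=200, rng=None):
    """Toy decision-based attack: random walk that shrinks the distance to
    the original input while staying on the adversarial side of the boundary."""
    rng = rng or np.random.default_rng(0)
    target = oracle(x_adv)  # adversarial label that must be preserved
    for _ in range(steps):
        # step toward the original input, plus a small random perturbation
        direction = x_orig - x_adv
        candidate = x_adv + 0.05 * direction + 0.01 * rng.standard_normal(x_adv.shape)
        if oracle(candidate) == target:  # accept only queries that stay adversarial
            x_adv = candidate
    return x_adv
```

Practical decision-based attacks, such as those discussed in the talk, use far more sophisticated proposal and step-size schemes; this sketch only illustrates the query model in which the attacker observes nothing but hard labels.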
Our recent advances on black-box attacks and a comprehensive benchmark
Abstract: First, we adopt the conditional flow-based model c-Glow to closely approximate the conditional adversarial distribution (CAD). We then propose two effective black-box methods that utilize the approximated CAD. In the CG-Attack method, we design a partial transfer mechanism that transfers part of the parameters of the surrogate models' CAD, so that both model-level adversarial transferability and query feedback are exploited simultaneously. In the MCG method, we propose a meta-learning framework that captures both example-level and model-level adversarial transferability, so that the fine-tuned CAD better fits the target model. Moreover, the MCG framework can be naturally combined with any query-based attack method to boost its performance. Finally, I will introduce BlackboxBench, a comprehensive benchmark covering mainstream black-box adversarial attack methods. It was recently released at https://blackboxbench.github.io/.
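To make the "transfer prior plus query feedback" idea concrete, here is a hedged toy sketch (an illustration of the general attack class, not the CG-Attack or MCG implementation): the perturbation is initialized from a surrogate model's gradient sign, then refined by score-based random search using the target model's loss as query feedback. The function `hybrid_attack` and all parameters are hypothetical.

```python
import numpy as np

def hybrid_attack(x, target_loss, surrogate_grad, eps=0.5, queries=100, rng=None):
    """Toy combination of transfer-based and query-based attacks: start from
    the surrogate gradient's sign (transfer prior), then refine the
    perturbation by random search on the target model's loss (query feedback)."""
    rng = rng or np.random.default_rng(0)
    delta = eps * np.sign(surrogate_grad(x))   # transfer step (FGSM-style)
    best = target_loss(x + delta)
    for _ in range(queries):
        # propose a nearby perturbation, projected back into the eps-ball
        trial = np.clip(delta + 0.1 * rng.standard_normal(x.shape), -eps, eps)
        score = target_loss(x + trial)
        if score > best:                       # keep queries that raise the loss
            best, delta = score, trial
    return x + delta
```

In CG-Attack, what is transferred is part of the parameters of the surrogate models' CAD rather than a single gradient direction, as the abstract describes; the sketch only illustrates why combining a transfer prior with query feedback can reduce the number of queries.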