【Academic Forum】AI Security and Privacy Forum 17th Session
Difference between Forward and Backward Information can Defend Score-Based Query Attack
Time: 10:00 to 11:30 AM, Beijing Time
Date: October 18 (Tuesday), 2022
Seminar Information
Speaker: Prof. Xiaolin Huang, Associate Professor, Shanghai Jiao Tong University
Topic: Difference between Forward and Backward Information can Defend Score-Based Query Attack

Host
Prof. Baoyuan Wu, Associate Professor, School of Data Science, CUHK-Shenzhen
Agenda
10:00-10:05 AM: Welcome and opening speech - Prof. Baoyuan Wu
10:05-11:00 AM: Guest speech - Prof. Xiaolin Huang
11:00-11:30 AM: Q&A - Prof. Baoyuan Wu and Prof. Xiaolin Huang
Format
Live on Bilibili
http://live.bilibili.com/22947067
Biography
Prof. Xiaolin Huang
Xiaolin Huang received the B.S. degree from Xi’an Jiaotong University, Xi’an, China, in 2006, and the Ph.D. degree from Tsinghua University, Beijing, China. From 2012 to 2015, he worked as a postdoctoral researcher at ESAT-STADIUS, KU Leuven, Leuven, Belgium. He was then selected as an Alexander von Humboldt fellow and worked in the Pattern Recognition Lab at Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany. Since 2016, he has been an associate professor with the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China. In 2017, he was selected for the "1000-Talent Plan" (Young Program).
His current research interests include machine learning and optimization, especially the robustness and sparsity of both kernel learning and deep neural networks. On these topics, he has published over 50 papers, some of which appear in Nature Reviews Methods Primers, Journal of Machine Learning Research, IEEE Transactions on Pattern Analysis and Machine Intelligence, Applied and Computational Harmonic Analysis, etc.
Abstract
Difference between Forward and Backward Information can Defend Score-Based Query Attack
Abstract: Score-based query attacks (SQAs) pose practical threats to real-world deep neural networks by crafting adversarial perturbations within dozens of queries, using only the model’s output scores. To deal with SQAs, we face two seemingly contradictory goals: changing the output to defend against attackers, and keeping the output to serve users. We find that the difference between forward and backward information is the key we can exploit. Following this idea, we propose novel defenses that confound SQAs toward incorrect attack directions by imposing an Adversarial Attack on Attackers (AAA), or that hide the correct attack directions by Unifying Gradients (UniG) within a batch. Numerical experiments demonstrate the effectiveness of our defenses against SQAs on several CIFAR-10/ImageNet models, compared to state-of-the-art defenses. In addition, we will introduce some results on robust learning for classical kernel methods, which differs from adversarial robustness for deep learning but may offer inspiration to both sides.
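To illustrate the idea in the abstract, here is a minimal sketch (not the authors' actual AAA implementation) of an output-score post-processing defense. The function name and the specific margin-reversal formula are illustrative assumptions: the served label (argmax) is unchanged, while the margin reported to a score-based attacker is made a decreasing function of the true margin, so the attacker's score-difference estimates point in an incorrect direction.

```python
import numpy as np

def confound_scores(logits):
    """Illustrative AAA-style defense (a sketch, not the paper's method).

    A score-based query attacker estimates attack directions from changes
    in the output scores. This post-processing keeps the top-1 prediction
    (what users are served) but replaces the true margin with a value that
    decreases as the true margin grows, misleading finite-difference
    estimates made by the attacker.
    """
    logits = np.asarray(logits, dtype=float)
    top = int(np.argmax(logits))
    others = np.delete(logits, top)
    true_margin = logits[top] - others.max()   # >= 0 by construction
    fake_margin = 1.0 / (1.0 + true_margin)    # positive, decreasing in true_margin
    out = logits.copy()
    out[top] = others.max() + fake_margin      # argmax is preserved
    return out
```

For example, an input with true margin 2.0 is reported with a smaller observed margin than one with true margin 1.0, so an attacker that nudges the input to shrink the observed margin actually increases the true margin, moving away from the decision boundary.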
Host
Shenzhen Research Institute of Big Data (SRIBD)
China Society of Image and Graphics (CSIG)
Organizer
Department of Automation, Shanghai Jiao Tong University
School of Data Science, CUHK-Shenzhen
Co-organizer
Shenzhen Institute of Electronics
Shenzhen Key Laboratory of Pattern Analysis and Perceptual Computing (Piloting phase)