DAI, Zhongxiang
Assistant Professor
Ph.D. in Computer Science, National University of Singapore, 2017-2021
B.Eng. in Electrical Engineering, National University of Singapore, 2011-2015
Dr. Zhongxiang Dai is an Assistant Professor at the School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-SZ). He worked as a Postdoctoral Associate at the Massachusetts Institute of Technology (MIT) in 2024 and as a Postdoctoral Fellow at the National University of Singapore (NUS) from 2021 to 2023. He received his B.Eng. in Electrical Engineering with First Class Honors in 2015 and his Ph.D. in Computer Science in 2021, both from NUS. During his Ph.D. at the NUS School of Computing, he was awarded the Dean's Graduate Research Excellence Award and multiple Research Achievement Awards.
Dr. Dai’s research interests encompass both the theory and practice of machine learning. On the theory side, he focuses on multi-armed bandits (MAB) and Bayesian optimization (BO). On the application side, he is interested in using MAB and BO to (1) solve real-world black-box optimization problems (e.g., AutoML and AI4Science) and (2) achieve Data-Centric AI, for example through data-efficient prompt optimization for large language models (LLMs) and data-efficient reinforcement learning from human feedback (RLHF) for LLMs. His research has resulted in over 25 publications in top AI conferences and journals, including more than 20 papers in ICML, NeurIPS, and ICLR (i.e., the top 3 AI conferences). He regularly serves as a program committee member/reviewer for leading AI conferences and journals, including ICML, NeurIPS, ICLR, AAAI, and TPAMI, and was a Senior Program Committee (SPC) member for IJCAI 2021.
Selected Publications (# corresponding author, * equal contribution):
1. X. Lin, Z. Dai#, A. Verma, S. K. Ng, P. Jaillet and B. K. H. Low, “Prompt optimization with human feedback,” arXiv preprint 2024.
2. X. Lin*, Z. Wu*, Z. Dai#, W. Hu, Y. Shu, S. K. Ng, P. Jaillet and B. K. H. Low, “Use Your INSTINCT: INSTruction optimization for LLMs usIng Neural bandits Coupled with Transformers,” in ICML 2024.
3. Z. Dai*, G. K. R. Lau*, A. Verma, Y. Shu, B. K. H. Low and P. Jaillet, “Quantum Bayesian optimization,” in NeurIPS 2023.
4. Z. Dai, Q. P. Nguyen, S. S. Tay, D. Urano, R. Leong, B. K. H. Low and P. Jaillet, “Batch Bayesian optimization for replicable experimental design,” in NeurIPS 2023.
5. A. Hemachandra, Z. Dai#, J. Singh, S. K. Ng and B. K. H. Low, “Training-free neural active learning with initialization-robustness guarantees,” in ICML 2023.
6. Z. Dai, Y. Shu, A. Verma, F. X. Fan, B. K. H. Low and P. Jaillet, “Federated neural bandits,” in ICLR 2023.
7. Y. Shu*, Z. Dai*, W. Sng, A. Verma, P. Jaillet and B. K. H. Low, “Zeroth-order optimization with trajectory-informed derivative estimation,” in ICLR 2023.
8. Z. Dai, Y. Shu, B. K. H. Low and P. Jaillet, “Sample-then-optimize batch neural Thompson sampling,” in NeurIPS 2022.
9. A. Verma*, Z. Dai* and B. K. H. Low, “Bayesian optimization under stochastic delayed feedback,” in ICML 2022.
10. Z. Dai, B. K. H. Low and P. Jaillet, “Differentially private federated Bayesian optimization with distributed exploration,” in NeurIPS 2021.
11. Z. Dai, B. K. H. Low and P. Jaillet, “Federated Bayesian optimization via Thompson sampling,” in NeurIPS 2020.
12. Z. Dai, Y. Chen, B. K. H. Low, P. Jaillet and T.-H. Ho, “R2-B2: Recursive reasoning-based Bayesian optimization for no-regret learning in games,” in ICML 2020.
13. Z. Dai, H. Yu, B. K. H. Low and P. Jaillet, “Bayesian optimization meets Bayesian optimal stopping,” in ICML 2019.
A complete list of publications can be found at https://daizhongxiang.github.io