DAI, Zhongxiang
Assistant Professor
Ph.D. in Computer Science, National University of Singapore, 2017-2021
B.Eng. in Electrical Engineering, National University of Singapore, 2011-2015
Dr. Zhongxiang Dai is an Assistant Professor (Presidential Young Fellow) and Ph.D. Supervisor at the School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen). His research is supported by multiple funding sources; he serves as Principal Investigator (PI) on projects including the NSFC Young Scientist Fund, the Guangdong Provincial Natural Science Foundation Excellent Young Scientists Fund, the Shenzhen Natural Science Foundation General Program, the Huawei LLM Agent Cooperation Project, and the Huawei Young Scholars Program.
Prior to joining CUHK-Shenzhen, Dr. Dai was a Postdoctoral Fellow at the Massachusetts Institute of Technology (MIT) in 2024 and at the National University of Singapore (NUS) from 2021 to 2023. He received his B.Eng. in Electrical Engineering (First Class Honors) in 2015 and his Ph.D. in Computer Science in 2021, both from the National University of Singapore. His doctoral studies were supported by the Singapore-MIT Alliance for Research and Technology (SMART) Graduate Fellowship.
Dr. Dai's research interests span both the theory and applications of machine learning. On the application side, he focuses on Large Language Models (LLMs), with research directions including LLM-based agents, LLM personalization, LLM online routing, LLM-based social simulation, and prompt optimization for LLMs. On the theoretical side, he conducts in-depth research on multi-armed bandit algorithms. He has published over 36 papers in top-tier AI conferences and journals, including over 28 at ICML, NeurIPS, and ICLR. He serves as an Area Chair for NeurIPS and ICLR and regularly reviews for multiple top-tier AI conferences.
Selected Publications (# corresponding author, * equal contribution):
1. X. Lin, Z. Dai#, A. Verma, S. K. Ng, P. Jaillet and B. K. H. Low, “Prompt optimization with human feedback,” arXiv preprint 2024.
2. X. Lin*, Z. Wu*, Z. Dai#, W. Hu, Y. Shu, S. K. Ng, P. Jaillet and B. K. H. Low, “Use Your INSTINCT: INSTruction optimization for LLMs usIng Neural bandits Coupled with Transformers,” in ICML 2024.
3. Z. Dai*, G. K. R. Lau*, A. Verma, Y. Shu, B. K. H. Low and P. Jaillet, “Quantum Bayesian optimization,” in NeurIPS 2023.
4. Z. Dai, Q. P. Nguyen, S. S. Tay, D. Urano, R. Leong, B. K. H. Low and P. Jaillet, “Batch Bayesian optimization for replicable experimental design,” in NeurIPS 2023.
5. A. Hemachandra, Z. Dai#, J. Singh, S. K. Ng and B. K. H. Low, “Training-free neural active learning with initialization-robustness guarantees,” in ICML 2023.
6. Z. Dai, Y. Shu, A. Verma, F. X. Fan, B. K. H. Low and P. Jaillet, “Federated neural bandits,” in ICLR 2023.
7. Y. Shu*, Z. Dai*, W. Sng, A. Verma, P. Jaillet and B. K. H. Low, “Zeroth-order optimization with trajectory-informed derivative estimation,” in ICLR 2023.
8. Z. Dai, Y. Shu, B. K. H. Low and P. Jaillet, “Sample-then-optimize batch neural Thompson sampling,” in NeurIPS 2022.
9. A. Verma*, Z. Dai* and B. K. H. Low, “Bayesian optimization under stochastic delayed feedback,” in ICML 2022.
10. Z. Dai, B. K. H. Low and P. Jaillet, “Differentially private federated Bayesian optimization with distributed exploration,” in NeurIPS 2021.
11. Z. Dai, B. K. H. Low and P. Jaillet, “Federated Bayesian optimization via Thompson sampling,” in NeurIPS 2020.
12. Z. Dai, Y. Chen, B. K. H. Low, P. Jaillet and T.-H. Ho, “R2-B2: Recursive reasoning-based Bayesian optimization for no-regret learning in games,” in ICML 2020.
13. Z. Dai, H. Yu, B. K. H. Low and P. Jaillet, “Bayesian optimization meets Bayesian optimal stopping,” in ICML 2019.
A complete list of publications can be found at https://daizhongxiang.github.io