LIU, Guiliang
Assistant Professor
Ph.D. in Computing Science, Simon Fraser University, 2020
B.S. in Computer Science & Engineering, South China University of Technology, 2016
Dr. Guiliang Liu is an Assistant Professor at the School of Data Science, The Chinese University of Hong Kong, Shenzhen. He earned his Ph.D. in Computing Science from Simon Fraser University, Canada, and later conducted postdoctoral research at both the University of Waterloo and the Vector Institute in Canada.
Dr. Liu’s research centers on reinforcement learning and embodied intelligent decision-making. He has pioneered the development of data engines for embodied intelligence, facilitating the generation and deployment of robotic manipulation skills through sim-to-real generalization. He has also introduced inverse constrained reinforcement learning models to improve the safety and stability of reinforcement learning control systems.
In addition to his academic roles, Dr. Liu serves as the Chief Scientist for Reinforcement Learning at DexForce, the Director of the Embodied Decision Making (Edem) Lab, and a Research Fellow at the Shenzhen Loop Area Institute. He has authored over 50 papers in leading international machine learning conferences and journals, including NeurIPS, ICML, and ICLR.
Dr. Liu is currently an Area Chair for NeurIPS and ICLR and has been recognized with several prestigious accolades. These include selection for the “Qiming Talent Program,” “Pengcheng Talent Program,” and “Presidential Young Scholars Program.” Moreover, Dr. Liu leads multiple research projects at the provincial and municipal levels and serves as a co-principal investigator for a major sub-project under Shenzhen’s key research initiatives.
1. Guiliang Liu#, Yueci Deng#, Runyi Zhao, Huayi Zhou, Jian Chen, Jietao Chen, Ruiyan Xu, Yunxin Tai, Kui Jia. DexScale: Automating Data Scaling for Sim2Real Generalizable Robot Control. International Conference on Machine Learning (ICML) 2025.
2. Guiliang Liu, Ashutosh Adhikari, Amir-massoud Farahmand, Pascal Poupart. Learning Object-Oriented Dynamics for Planning from Text. International Conference on Learning Representations (ICLR) 2022.
3. Yudong Luo, Guiliang Liu, Haonan Duan, Oliver Schulte, Pascal Poupart. Distributional Reinforcement Learning with Monotonic Splines. International Conference on Learning Representations (ICLR) 2022.
4. Guiliang Liu, Xiangyu Sun, Oliver Schulte, Pascal Poupart. Learning Tree Interpretation from Object Representation for Deep Reinforcement Learning. Advances in Neural Information Processing Systems (NeurIPS) 2021.
5. Guiliang Liu, Oliver Schulte, Pascal Poupart, Mike Rudd, Mehrsan Javan. Learning Agent Representations for Ice Hockey. Advances in Neural Information Processing Systems (NeurIPS) 2020.