YAN, Ming
Associate Professor
Assistant Dean (Undergraduate Affairs)
Undergraduate Program Director for CSE Major
Ph.D. in Mathematics, University of California, Los Angeles, 2012
M.S. in Mathematics, University of Science and Technology of China, 2008
B.S. in Mathematics, University of Science and Technology of China, 2005
Ming Yan is an Associate Professor and Assistant Dean (Undergraduate Affairs) in the School of Data Science at The Chinese University of Hong Kong, Shenzhen. He received his B.S. and M.S. in Mathematics from the University of Science and Technology of China in 2005 and 2008, respectively, and his Ph.D. in Mathematics from the University of California, Los Angeles in 2012. After completing his Ph.D., he was a Postdoctoral Fellow in the Department of Computational and Applied Mathematics at Rice University from 2012 to 2013. He then moved to the University of California, Los Angeles as a Postdoctoral Scholar and an Assistant Adjunct Professor from 2013 to 2015. From 2015 to 2022, he was a tenure-track Assistant Professor and then a tenured Associate Professor in the Department of Computational Mathematics, Science and Engineering (CMSE) and the Department of Mathematics at Michigan State University.
His research interests lie in large-scale optimization and its applications in image processing, machine learning, and other data-science problems. He has published more than 50 papers in machine learning conferences, including ICML, ICLR, NeurIPS, and AISTATS, and in journals including the IEEE journals TPAMI and TSP and the SIAM journals SIIMS, SISC, and MMS. He received a Facebook Faculty Award in 2020.
Z. Song, L. Shi, S. Pu, and M. Yan, Compressed gradient tracking for decentralized optimization over general directed networks, IEEE Transactions on Signal Processing, 70 (2022), 1775–1787.
H. Tang, Y. Li, J. Liu, and M. Yan, ErrorCompensatedX: error compensation for variance reduced algorithms, In: Proceedings of the Conference on Neural Information Processing Systems (NeurIPS 2021).
X. Liu, W. Jin, Y. Ma, Y. Li, H. Liu, Y. Wang, M. Yan, and J. Tang, Elastic graph neural networks, In: Proceedings of International Conference on Machine Learning (ICML 2021), PMLR 139 (2021), 6837–6849.
Y. Li and M. Yan, On linear convergence of two decentralized algorithms, Journal of Optimization Theory and Applications, 189 (2021), 271–290.
X. Liu, Y. Li, R. Wang, J. Tang, and M. Yan, Linear convergent decentralized optimization with compression, In: Proceedings of the International Conference on Learning Representations (ICLR 2021).
J. Liu, M. Yan, and T. Zeng, Surface-aware blind image deblurring, IEEE Transactions on Pattern Analysis and Machine Intelligence, 43 (2021), 1041–1055.
X. Liu, Y. Li, J. Tang, and M. Yan, A double residual compression algorithm for efficient distributed learning, In: Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS 2020), 133–143.
Z. Li, W. Shi, and M. Yan, A decentralized proximal-gradient method with network independent stepsizes and separated convergence rates, IEEE Transactions on Signal Processing, 67 (2019), 4494–4506.
H. Tang, X. Lian, M. Yan, C. Zhang, and J. Liu, D2: Decentralized training over decentralized data, In: Proceedings of International Conference on Machine Learning (ICML 2018), PMLR 80 (2018), 4848–4856.
M. Yan, A new primal-dual algorithm for minimizing the sum of three functions with a linear operator, Journal of Scientific Computing, 76 (2018), 1698–1717.
Z. Peng, Y. Xu, M. Yan, and W. Yin, ARock: an algorithmic framework for asynchronous parallel coordinate updates, SIAM Journal on Scientific Computing, 38 (2016), A2851–A2879.
M. Yan, Restoration of images corrupted by impulse noise and mixed Gaussian impulse noise using blind inpainting, SIAM Journal on Imaging Sciences, 6 (2013), 1227–1245.
A complete list of publications is available at https://mingyan08.github.io/