Distinguished Lecture Preview | The Past, Present, and Future of Artificial Intelligence: From Black-Box to White-Box, From Open-Loop to Closed-Loop

At 10:30 am on January 16, The Chinese University of Hong Kong, Shenzhen is honored to welcome Professor Yi Ma as the speaker of this Distinguished Lecture. He will give a talk titled "The Past, Present, and Future of Artificial Intelligence: From Black-Box to White-Box, From Open-Loop to Closed-Loop". All students and staff are welcome to attend the lecture in person.

Event Details
Topic: The Past, Present, and Future of Artificial Intelligence: From Black-Box to White-Box, From Open-Loop to Closed-Loop
Speaker: Professor Yi Ma
Date: Tuesday, January 16, 2024
Time: 10:30 am - 11:45 am
Venue: W201, West Wing, Administration Building
Language: English
Host: Professor Kui Jia
Speaker Profile

Professor Yi Ma
Yi Ma is the inaugural director of the Data Science Institute and the new head of the Computer Science Department at the University of Hong Kong. He has also been a professor in the EECS Department at the University of California, Berkeley since 2018. His research interests include computer vision, high-dimensional data analysis, and integrated intelligent systems. He received two bachelor's degrees, in Automation and Applied Mathematics, from Tsinghua University in 1995, followed by two master's degrees in EECS and Mathematics (1997) and a PhD in EECS (2000), all from UC Berkeley.
He was on the faculty of the ECE Department at UIUC from 2000 to 2011, a principal researcher and research manager of the Visual Computing group at Microsoft Research Asia from 2009 to 2014, and the Executive Dean of the School of Information Science and Technology at ShanghaiTech University from 2014 to 2017, before joining the UC Berkeley EECS faculty in 2018. He has published over 65 journal papers and 130 conference papers, as well as three textbooks on computer vision, generalized principal component analysis (PCA), and high-dimensional data analysis. He received the NSF CAREER Award in 2004, the ONR Young Investigator Award in 2005, the David Marr Prize in computer vision at ICCV 1999, and best paper awards at ECCV 2004 and ACCV 2009. He served as Program Chair of ICCV 2013 and General Chair of ICCV 2015, and is a Fellow of the IEEE, ACM, and SIAM.
Abstract
In this talk, we provide a more systematic and principled view of the practice of artificial intelligence in the past decade, seen from the perspective of the history of the study of intelligence. We argue that the most fundamental objective of intelligence is to learn a compact and structured representation, or memory, of the sensed world that maximizes information gain, measurable by the coding rate of the learned representation.
We contend that optimizing this principled objective provides a unifying, white-box explanation for almost all past and current practices of artificial intelligence based on deep networks, including ResNets and Transformers. As a result, mathematically interpretable, practically competitive, and semantically meaningful deep networks are now within our reach; see our latest release: https://ma-lab-berkeley.github.io/CRATE/
Furthermore, our study shows that to learn such representations correctly and automatically, one needs to integrate fundamental ideas from coding theory, optimization, feedback control, and game theory. In particular, one needs to close the loop between the encoding and decoding networks, in contrast to the current practice of training them end-to-end as open-loop networks. This connects us back to the true origin of the study of intelligence 80 years ago. Perhaps most importantly, this new framework reveals a much broader and brighter future for developing next-generation autonomous intelligent systems that could truly emulate the computational mechanisms of natural intelligence.
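The "coding rate" invoked above can be made concrete with a small numerical sketch. In the rate-reduction line of work the talk builds on, the rate-distortion estimate R(Z) = 1/2 · logdet(I + d/(nε²) · ZZᵀ) measures how many bits are needed to encode the n columns of a d×n feature matrix Z up to precision ε; a representation whose features spread across more directions carries more information and thus has a higher coding rate. The matrix sizes and ε here are arbitrary illustrative choices, not values from the papers:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Estimate R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z @ Z.T): the number of
    bits needed to encode the n columns of a d x n feature matrix Z up to
    precision eps."""
    d, n = Z.shape
    sign, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * Z @ Z.T)
    return 0.5 * logdet

rng = np.random.default_rng(0)
Z_spread = rng.standard_normal((8, 100))         # features span all 8 directions
u = rng.standard_normal((8, 1))
Z_collapsed = u @ rng.standard_normal((1, 100))  # all features on a single line

# A representation occupying more directions of the feature space has a
# strictly higher coding rate than one collapsed onto a single direction.
print(coding_rate(Z_spread) > coding_rate(Z_collapsed))
```

Maximizing the information gain of the learned representation then amounts to preferring structured representations whose coding rate is as high as possible given the data.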
Related papers can be found at:
1. https://ma-lab-berkeley.github.io/CRATE/
2. https://jmlr.org/papers/v23/21-0631.html
3. https://www.mdpi.com/1099-4300/24/4/456/htm
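The open-loop versus closed-loop contrast in the abstract can also be illustrated with a toy numerical sketch. Everything below (linear maps, dimensions, step size) is a hypothetical simplification — the actual framework in the papers above trains deep encoding and decoding networks through a rate-reduction game — but it shows where the feedback signal comes from: decoded data are re-encoded, and the error is measured inside the loop rather than against an external end-to-end target.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 200))   # toy "sensed" data: 200 samples in R^8

# Hypothetical linear encoder f(x) = F @ x and decoder g(z) = G @ z.
F = rng.standard_normal((4, 8))
G = rng.standard_normal((8, 4))

def loop_error(F, G, X):
    Z = F @ X             # encode the data
    Z_hat = F @ (G @ Z)   # decode, then RE-encode: this closes the loop
    # The discrepancy lives in the encoder's own feature space; no external
    # pixel-space reconstruction target is ever consulted.
    return Z_hat - Z, Z

err0 = np.linalg.norm(loop_error(F, G, X)[0])
lr = 5e-4  # hypothetical step size
for _ in range(200):
    err, Z = loop_error(F, G, X)
    # Gradient step on the consistency loss (1/2n)*||Z_hat - Z||^2 w.r.t. G:
    # only the internally fed-back error drives the decoder update.
    G -= lr * (F.T @ err @ Z.T) / X.shape[1]
err1 = np.linalg.norm(loop_error(F, G, X)[0])
print(err1 < err0)   # the feedback loop shrinks the feature-space discrepancy
```

In an open-loop pipeline, by contrast, the encoder-decoder pair would be trained end-to-end against an external reconstruction target, with no internal check that the decoded output re-encodes to the same representation.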