Dear all,
Our next AI seminar on "New Frontiers In Adaptive Experimental Design For Multi-objective Optimization" by Syrine Belakaria is scheduled to be on Nov 10th (Friday), 1-2 PM. It will be followed by a 30-minute Q&A session with the graduate students.
Location: KEC 1001
Zoom link: https://oregonstate.zoom.us/j/98684050301?pwd=ZzhianQxUFBPUmdYVWJKOFhaVURCQ…
New Frontiers in Adaptive Experimental Design for Multi-objective Optimization
Syrine Belakaria
Data Science Postdoctoral Fellow
Computer Science
Stanford University
Abstract:
Many design optimization problems in science, engineering, and industrial domains are instantiations of the following general problem: adaptive optimization over complex design spaces guided by expensive experiments, where the expense is measured in terms of the resources consumed by the experiments. This talk focuses on the problem of multi-objective optimization (MOO) using expensive black-box function evaluations (also referred to as experiments), where the goal is to approximate the optimal Pareto set of solutions while minimizing the total resource cost of experiments. For example, in drug design optimization, we need to find designs that trade off effectiveness, safety, and cost using expensive experiments. The key challenge is to select the sequence of experiments that uncovers high-quality solutions while minimizing the total resource cost. In this talk, I will describe a general framework for solving MOO problems based on the output space entropy (OSE) search principle: select the experiment that maximizes the information gained per unit resource cost about the optimal Pareto front. I will also explain how to instantiate the principle of OSE search to derive efficient algorithms for the following three MOO problem settings: 1) the most basic single-fidelity setting, where experiments are expensive and accurate; 2) handling black-box constraints that cannot be evaluated without performing experiments; and 3) the multi-fidelity setting, where candidate experiments vary in the amount of resources consumed and their evaluation accuracy. I will present experimental results on real-world engineering and science applications to demonstrate the effectiveness of the OSE framework in terms of the accuracy of MOO solutions and computational efficiency.
Speaker Bio:
Syrine Belakaria is a Data Science Postdoctoral Fellow in the Computer Science Department at Stanford University, working with Professor Stefano Ermon and Professor Barbara Engelhardt. She obtained her PhD in Computer Science from Washington State University, where she was advised by Professor Jana Doppa; an MS in Electrical Engineering from the University of Idaho; and an engineering degree in Information Technology from the Higher School of Communication of Tunis, Tunisia. She won the IBM PhD Fellowship (2021-2023), was a finalist for the Microsoft Research Fellowship (2021), was selected for MIT Rising Stars in EECS (2021), and received the WSU Harriet Rigas Outstanding Woman in Doctoral Studies Award (2023), the Outstanding TA Award in CS (2019), and two Outstanding Reviewer Awards from the ICML conference. She spent time as a research intern at Microsoft Research and Meta Research. Her general research interests are in the broad area of AI for science and engineering, with a current focus on adaptive experiment design to solve real-world problems including hardware design, materials design, electric transportation systems, and AutoML.
Please watch this space for future AI Seminars:
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly encouraged
Dear all,
Our next AI seminar on "Learning Interpretable Models on Complex Medical Data" by Jennifer G. Dy is scheduled to be on Nov 03rd (Tomorrow), 1-2 PM. It will be followed by a 30-minute Q&A session with the graduate students.
Location: KEC 1001
Zoom link: https://oregonstate.zoom.us/j/98684050301?pwd=ZzhianQxUFBPUmdYVWJKOFhaVURCQ…
Learning Interpretable Models on Complex Medical Data
Jennifer G. Dy
Professor
Department of Electrical and Computer Engineering
Northeastern University
Abstract:
Machine learning as a field has become more and more important due to the ubiquity of data collection in various disciplines. Coupled with this data collection is the hope that new discoveries or knowledge can be learned. My research spans both fundamental research in machine learning and its application to biomedical imaging, health, science, and engineering. Multi-disciplinary research is instrumental to the growth of the various areas involved. In many applications, data is often complex, high-dimensional, and multi-faceted, with multiple possible interpretations inherent in the data. Fortunately, domain scientists often have rich knowledge that can guide data-driven methods. Thus, it is important to enable the incorporation of domain input into the design of algorithms. Furthermore, for clinicians and domain scientists to trust and use the results of learning algorithms, models must not only be accurate; it is also imperative that they be interpretable. In this talk, I highlight these challenges through our experience in collaborative research on discovering disease subtypes, and then provide examples of how these challenges led to innovations in machine learning and to new discoveries.
Speaker Bio:
Jennifer G. Dy is a Full Professor in the Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, where she first joined the faculty in 2002. She received her M.S. and Ph.D. in 1997 and 2001, respectively, from the School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, and her B.S. degree from the Department of Electrical Engineering, University of the Philippines, in 1993. Her research spans both foundations in machine learning and its application to biomedical imaging, health, science, and engineering, with research contributions in unsupervised learning, interpretable models, explainable AI, dimensionality reduction, feature selection/sparse methods, learning from uncertain experts, active learning, Bayesian models, and deep representation learning. She is Director of AI Faculty at the Institute for Experiential AI, an institute with 90+ faculty across all colleges at Northeastern. She is also the Director of the Machine Learning Lab and a founding faculty member of the SPIRAL (Signal Processing, Imaging, Reasoning, and Learning) Center at Northeastern. She received an NSF CAREER award in 2004. She has served or is serving as Secretary of the ICML Board (formerly the International Machine Learning Society); as an associate editor/editorial board member for the Journal of Machine Learning Research, the Machine Learning journal, and IEEE Transactions on Pattern Analysis and Machine Intelligence; as an organizing and/or technical program committee member for premier conferences in machine learning, AI, and data mining (ICML, NeurIPS, ACM SIGKDD, AAAI, IJCAI, UAI, AISTATS, ICLR, SIAM SDM); and as Program Chair for SIAM SDM 2013, ICML 2018, AISTATS 2023, and AAAI 2024.
Please watch this space for future AI Seminars:
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly encouraged
Hi all,
We’ll have Jennifer Dy speaking this week and Syrine Belakaria on November 17 at the AI Seminar.
Here is an early announcement of a “Halloween Special” on December 1 by our own Heather Knight.
Happy Halloween (or Dia de los Muertos),
Prasad
Title: Best Practices for Robot Death
Abstract: Have you ever been sad after losing a pet, lost all the data on your phone, or spilled coffee on your laptop? Imagine how a robot owner might feel after losing their companion robot, or seeing its memory and capability erased. We often replace phones after mere years of use, and robot software goes out of date much faster than a dog or cat's lifespan, yet there are few social considerations for the grief a human might feel in losing software support for a robot system, or over robot hardware for which no new parts are made. Yet, just like living systems, impermanence in technological systems is a given. This talk considers ways in which perspectives from Social Robotics — a field that leverages anthropomorphism, i.e., the ways in which humans automatically imbue humanlike characteristics to non-human things — can inform future social considerations and best practices for Robot Death: for example, social structures and legal mechanisms that take human-robot bonding into account by compassionately delivering news of a robot's decline and validating the sadness that naturally follows. Drawing from my personal experience having a Nao robot stolen out of my car in Corvallis, OR, from analyses of robot funerals after Starship robots are hit by cars or trains, and from international examples of ceremonies conducted for lost machines, this talk acknowledges that digital lives can also experience existential 1's and 0's. What can Social Robotics as a field teach us about compassionately handling robot and machine death? Moreover, how might robots and machines help us compassionately navigate impermanence and suffering in our own lives? We finish with a discussion of opportunities for robot-facilitated mindfulness practices, including "maranasati," which means mindfulness of death and has surprising benefits for one's mental health.
Bio: Dr. Heather Knight is an Assistant Professor of Computer Science at Oregon State University, where she directs the CHARISMA Robotics Lab, advising students in Robotics, EECS, AI, and MIME. Her students investigate multi-robot expressive motion, human-robot interactive communication, and interfaces for human-in-the-loop robot control, and frequently use entertainment methods to bootstrap the development of everyday social robots. Past projects include Cyberflora, a robot flower garden installation at the Smithsonian/Cooper-Hewitt Design Museum; Silicon Comedy, an online learning system for interactive robot comedy featured on TED.com; and This Too Shall Pass, a music video featuring a two-floor Rube Goldberg machine that won a British Video Music Award. Her academic background includes a postdoc at Stanford University exploring minimal robots and autonomous car interfaces, a Robotics PhD at Carnegie Mellon University on expressive motion for low degree-of-freedom robots, and M.S. and B.S. degrees in Electrical Engineering & Computer Science from the Massachusetts Institute of Technology, where she developed a sensate skin for a robot teddy bear at the MIT Media Lab. Outside of academia, she has developed robotics and instrumentation at NASA’s Jet Propulsion Laboratory, and sensor and field applications as an early employee of Aldebaran Robotics, the developers of the Nao and Pepper robots (since acquired by SoftBank).
AI Seminar
by Mangannavar, Rajesh Devaraddi
26 Oct '23
Dear all,
Our next AI seminar on "Symbolic AI 3.0 (S3): Rise of the LLMs" by Scott Sanner is scheduled to be on Oct 27th (Tomorrow), 1-2 PM. It will be followed by a 30-minute Q&A session with the graduate students.
Location: KEC 1001
Symbolic AI 3.0 (S3): Rise of the LLMs
Scott Sanner
Associate Professor
Industrial Engineering and Computer Science
University of Toronto
Abstract:
Large Language Models (LLMs) such as ChatGPT have emerged as a revolutionary technology for natural language reasoning and numerous related AI applications. I'll discuss some of my group's own work on abstract reasoning and interactive conversational systems leveraging LLMs and the game-changing realizations that I have taken away from these investigations. This talk will then discuss some general implications of the LLM era and my conjectures as to how it will shift research foci in the near future and enable levels of user-facing AI deployment that were unthinkable just two years ago.
Speaker Bio:
Scott Sanner is an Associate Professor in Industrial Engineering, cross-appointed in Computer Science, at the University of Toronto. Scott’s research focuses on a broad range of AI topics spanning sequential decision-making, (conversational) recommender systems, and applications of machine/deep learning. Scott is currently an Associate Editor for ACM Transactions on Recommender Systems (TORS), the Machine Learning Journal (MLJ), and the Journal of Artificial Intelligence Research (JAIR). Scott was a co-recipient of paper awards from the AI Journal (2014), the Transportation Research Board (2016), and CPAIOR (2018). He received a Google Faculty Research Award in 2020 and was a Visiting Researcher at Google while on sabbatical during 2022-23.
Please watch this space for future AI Seminars:
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly encouraged
Dear all,
Our next AI seminar on "Self-Driving Vehicles: A Cautionary Tale" by Michael J. Quinn is scheduled to be on Oct 20th (Tomorrow), 1-2 PM. It will be followed by a 30-minute Q&A session with the graduate students.
Location: KEC 1001
Self-Driving Vehicles: A Cautionary Tale
Michael J. Quinn
Abstract:
Elaine Herzberg was the first pedestrian to be killed by a self-driving vehicle. I summarize Uber’s effort to develop an autonomous vehicle, focusing on the engineering and management decisions that contributed to the March 18, 2018, accident. The story of Herzberg’s death illustrates some of the challenges faced by developers of new AI-driven technologies. I conclude by suggesting some practical ways that regulators can help ensure public safety as autonomous vehicles are deployed.
Speaker Bio:
Michael J. Quinn is a computer scientist and author. His early research was in parallel computing, and his textbooks on that subject have been used by hundreds of universities worldwide. In the early 2000s his focus shifted to computer ethics, and in 2004 he published a textbook, Ethics for the Information Age, that explores moral problems related to modern uses of information technology, such as privacy, intellectual property rights, computer security, computerized system failures, the relationship between automation and unemployment, and the impact of social media on democracy. The book, now in its eighth edition, has been used at more than 250 colleges and universities in the United States and many more internationally. Dr. Quinn was a computer science professor at Oregon State University from 1989 to 2007, where he served as head of the Department of Computer Science for five years. From 1983 to 1989 he was a professor at the University of New Hampshire, and from 2007 to 2022 he was dean of the College of Science and Engineering at Seattle University.
Please watch this space for future AI Seminars:
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly encouraged
Dear all,
Our next AI seminar on "The Past, Present, and Future of Artificial Intelligence: from black-box to white-box, from open-loop to closed-loop" by Yi Ma is scheduled to be on Oct 13th (Tomorrow), 1-2 PM. It will be followed by a 30-minute Q&A session with the graduate students.
Location: KEC 1001 (please note that the speaker will join via Zoom, but the event will be set up in the room for everyone to attend).
Zoom Link: https://oregonstate.zoom.us/j/98684050301?pwd=ZzhianQxUFBPUmdYVWJKOFhaVURCQ…
The Past, Present, and Future of Artificial Intelligence: from black-box to white-box, from open-loop to closed-loop
Yi Ma
Director, Data Science Institute
Head, Computer Science Department
University of Hong Kong
Abstract:
In this talk, we provide a more systematic and principled view of the practice of artificial intelligence over the past decade, drawn from the history of the study of intelligence. We argue that the most fundamental objective of intelligence is to learn a compact and structured representation of the sensed world that maximizes information gain, measurable by the coding rates of the learned representation. We contend that optimizing this principled objective provides a unifying white-box explanation for almost all past and current practices of artificial intelligence based on deep networks, including CNNs, ResNets, and Transformers. Hence, mathematically interpretable, practically competitive, and semantically meaningful deep networks are now within our reach; see our latest release: https://ma-lab-berkeley.github.io/CRATE/. Furthermore, our study shows that to learn such a representation correctly and automatically, additional computational mechanisms are necessary besides deep networks. For intelligence to become autonomous, one needs to integrate fundamental ideas from coding theory, optimization, feedback control, and game theory. This connects us back to the true origin of the study of intelligence 80 years ago. Perhaps most importantly, this new framework reveals a much broader and brighter future for developing next-generation autonomous intelligent systems that could truly emulate the computational mechanisms of natural intelligence.
Related papers can be found at:
1. https://ma-lab-berkeley.github.io/CRATE/
2. https://jmlr.org/papers/v23/21-0631.html
3. https://www.mdpi.com/1099-4300/24/4/456/htm
Speaker Bio:
Yi Ma is the inaugural director of the Data Science Institute and the new head of the Computer Science Department of the University of Hong Kong. He has been a professor in the EECS Department at the University of California, Berkeley since 2018. His research interests include computer vision, high-dimensional data analysis, and integrated intelligent systems. Yi received two bachelor’s degrees, in Automation and Applied Mathematics, from Tsinghua University in 1995; two master’s degrees, in EECS and Mathematics, in 1997; and a PhD in EECS from UC Berkeley in 2000. He was on the faculty of UIUC ECE from 2000 to 2011, principal researcher and manager of the Visual Computing group at Microsoft Research Asia from 2009 to 2014, and Executive Dean of the School of Information Science and Technology of ShanghaiTech University from 2014 to 2017. He has published over 60 journal papers, 120 conference papers, and three textbooks on computer vision, generalized PCA, and high-dimensional data analysis. He received the NSF CAREER Award in 2004 and the ONR Young Investigator Award in 2005. He also received the David Marr Prize in computer vision at ICCV 1999 and best paper awards at ECCV 2004 and ACCV 2009. He has served as Program Chair for ICCV 2013 and General Chair for ICCV 2015. He is a Fellow of IEEE, ACM, and SIAM.
Please watch this space for future AI Seminars:
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly encouraged