Dear all,
Our next AI seminar on *"Integrated Planning and Reinforcement Learning for
Compositional Domains" *by Harsha Kokel is scheduled to be on May 26th
(Friday), 1-2 PM PST (Add to Google Calendar
<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcalendar.…>
). It will be followed by a 30-minute Q&A session with the graduate
students.
*Location:* Crop Science Building 122
*Integrated Planning and Reinforcement Learning for Compositional Domains*
Harsha Kokel, Research Scientist
IBM Research
*Abstract:*
Many real-world domains involving sequential decision-making exhibit a
compositional nature. Such domains possess an inherent hierarchy that
allows decision-makers to abstract away certain details and focus on making
high-level decisions using abstractions. In this talk, I will present an
overview of some of our recent efforts in leveraging a combination of
planning and reinforcement learning for such domains. First, I will present
our approach to constructing task-specific abstractions from the influence
information in the domain. Second, I will talk about using those abstractions in a
framework for learning efficient and generalizable agents. Finally, I will
discuss the difference between the action spaces of two
sequential decision-making formulations (an MDP and a PDDL task) and propose an
approach to bridge that gap.
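To make the action-space gap mentioned above concrete, here is a minimal,
hypothetical Python sketch (not the speaker's approach): a lifted PDDL-style
action schema grounds into an object-dependent set of actions, while an RL
agent solving an MDP typically expects a fixed, integer-indexed action space.
The naive "bridge" below simply enumerates all groundings; every name and
object in it is invented for illustration.

```python
# Toy illustration (not from the talk) of the action-space gap between a
# PDDL-style task and an MDP: a lifted schema grounds into a variable,
# object-dependent action set, while an RL agent usually expects a fixed
# discrete action space. A naive bridge enumerates all groundings up front
# and exposes them as integer-indexed MDP actions.
from itertools import permutations

# A lifted PDDL-style schema: move(?from, ?to) over typed objects.
schema = ("move", ("?from", "?to"))
locations = ["roomA", "roomB", "roomC"]  # hypothetical objects

# PDDL view: actions are groundings of the schema over the objects.
grounded = [("move", frm, to) for frm, to in permutations(locations, 2)]

# MDP view: a fixed discrete action space, here just indices into that list.
action_to_index = {a: i for i, a in enumerate(grounded)}
index_to_action = {i: a for a, i in action_to_index.items()}

print(f"{len(grounded)} grounded actions for {len(locations)} locations")
print("MDP action 0 corresponds to PDDL grounding:", index_to_action[0])
# Adding a fourth location changes the grounded set (12 actions instead of 6),
# which is exactly the kind of mismatch a principled bridge has to handle.
```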
*Speaker Bio:*
Harsha Kokel is a Research Scientist at IBM Research. Her research focuses
on efficient knowledge-guided learning in structured, relational domains.
She is interested in sequential decision-making problems and exploring the
combination of planning and reinforcement learning. She earned her Ph.D.
at the University of Texas at Dallas. Her research has been published in
top AI/ML conferences, including AAAI, IJCAI, and NeurIPS. She is
currently co-organizing workshops on bridging the gap between planning and
RL at ICAPS and IJCAI 2023. She also serves as an assistant electronic
publishing editor for JAIR.
*Please watch this space for future AI Seminars:*
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fengineeri…
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"What's wrong with LLMs and what we should be
building instead" *by Thomas G. Dietterich is scheduled to be on May 19th
(Friday), 1-2 PM PST (Add to Google Calendar
<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcalendar.…>
). It will be followed by a 30-minute Q&A session with the graduate
students.
*Location:* Crop Science Building 122
*What's wrong with LLMs and what we should be building instead*
Thomas G. Dietterich
Distinguished Professor Emeritus
School of Electrical Engineering and Computer Science
Oregon State University
*Abstract:*
Large Language Models provide a pre-trained foundation for training many
interesting AI systems. However, they have many shortcomings. They are
expensive to train and to update, their non-linguistic knowledge is poor,
they make false and self-contradictory statements, and these statements can
be socially and ethically inappropriate. This talk will review these
shortcomings and current efforts to address them within the existing LLM
framework. It will then argue for a different, more modular architecture
that decomposes the functions of existing LLMs and adds several additional
components. We believe this alternative can address all of the shortcomings
of LLMs. We will speculate about how this modular architecture could be
built through a combination of machine learning and engineering.
*Speaker Bio:*
Thomas G. Dietterich (AB Oberlin College 1977; MS University of Illinois
1979; PhD Stanford University 1984) is Distinguished Professor Emeritus in
the School of Electrical Engineering and Computer Science at Oregon State
University. Dietterich is one of the pioneers of the field of Machine
Learning and has authored more than 225 refereed publications and two
books. His current research topics include robust artificial intelligence,
robust human-AI systems, and applications in sustainability. Dietterich has
devoted many years of service to the research community. He is a former
President of the Association for the Advancement of Artificial
Intelligence, and the founding President of the International Machine
Learning Society. Other major roles include Executive Editor of the journal
Machine Learning, co-founder of the Journal for Machine Learning Research,
and program chair of AAAI 1990 and NIPS 2000. He currently serves as one of
the moderators for the cs.LG category on arXiv.
*Please watch this space for future AI Seminars:*
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fengineeri…
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"Designing Interactive AI for Writers" *by Ken
Arnold is scheduled to be on May 12th (Friday), 1-2 PM PST (Add to Google
Calendar
<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcalendar.…>
). It will be followed by a 30-minute Q&A session with the graduate
students.
*Location:* Crop Science Building 122
*Designing Interactive AI for Writers*
Ken Arnold
Assistant Professor
Calvin University
*Abstract:*
When AI systems (Gmail, ChatGPT, etc.) suggest words, writers frequently
appropriate them as their own. Although this interaction can help writers,
it also threatens accuracy, usefulness, and even integrity. I will show
empirical results from controlled studies that found that predictive text
systems nudge writers to conform both their writing and their opinions to
the system’s suggestions. Since I conjecture that these threats are
inherent to autocomplete-style predictive interactions, I ask: can large
language models help writers without casting doubt on their authorship? I
will show prototypes that explore intelligence-augmentation approaches for
structuring and revising documents. I hope to spark conversation about what
visions and values might shape how we design interactions with emerging AI
systems.
*Speaker Bio:*
Ken Arnold (B.S., Cornell; M.S., MIT; Ph.D., Harvard) is an assistant
professor of computer science and data science at Calvin University. His
research has shown how predictive text interfaces, like those in smartphone
keyboards and email apps, can shape the content of what people communicate.
He is currently working on intelligence augmentation to help writers craft
words that are fully their own. His current research interests include
human-AI interaction in communication, creativity, and education.
*Please watch this space for future AI Seminars:*
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fengineeri…
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"A tutorial on the Bayesian statistical approach to
inverse problems" *by Cory Simon is scheduled to be on May 5th (Friday),
1-2 PM PST (Add to Google Calendar
<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcalendar.…>
). It will be followed by a 30-minute Q&A session with the graduate
students.
*Location:* Crop Science Building 122
*A tutorial on the Bayesian statistical approach to inverse problems*
Cory Simon
Assistant Professor
School of Chemical, Biological, and Environmental Engineering
Oregon State University
*Abstract:*
We provide a tutorial on the Bayesian statistical approach to inverse
problems. Two categories of inverse problems are (1) inferring parameters in a
model of a system from observations of input-output pairs and (2)
reconstructing the input to a system that caused an observed output. Bayesian
statistical inversion (BSI) provides a solution to inverse problems that
(i) quantifies uncertainty by assigning a probability to each region of
parameter/input space and (ii) allows for incorporation of prior
information/beliefs about the parameters/inputs. We demonstrate BSI on
problems pertaining to heat transfer into a lime fruit, e.g., reconstructing
the initial temperature of a lime from a measurement of its temperature
later in time.
[Joint work with: Faaiq Waqar and Swati Patel at Oregon State University.]
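As a rough illustration of points (i) and (ii) in the abstract, the following
minimal Python sketch runs Bayesian statistical inversion on a toy version of
the lime-cooling example: a Newton's-law-of-cooling forward model, a Gaussian
prior over the unknown initial temperature, and a posterior computed on a grid
from one noisy measurement. All numerical values (cooling rate, ambient
temperature, noise level, prior) are invented for illustration and are not
taken from the talk.

```python
# Minimal sketch of Bayesian statistical inversion (BSI) on a toy version of
# the lime-cooling example. All numbers below are illustrative assumptions.
import numpy as np

k = 0.03        # assumed cooling-rate constant (1/min), treated as known
T_air = 25.0    # assumed ambient air temperature (deg C)
sigma = 0.5     # assumed measurement-noise standard deviation (deg C)
t_obs, T_obs = 30.0, 16.9   # hypothetical measurement: lime temperature at t = 30 min

# Forward model (Newton's law of cooling): T(t) = T_air + (T0 - T_air) * exp(-k t)
def forward(T0, t):
    return T_air + (T0 - T_air) * np.exp(-k * t)

# Prior over the unknown initial temperature T0, e.g. "the lime came out of a
# refrigerator", encoded as Normal(5, 3^2). This is point (ii): prior beliefs.
T0_grid = np.linspace(-5.0, 25.0, 2001)
dT = T0_grid[1] - T0_grid[0]
log_prior = -0.5 * ((T0_grid - 5.0) / 3.0) ** 2

# Gaussian likelihood of the observed temperature under each candidate T0.
log_lik = -0.5 * ((T_obs - forward(T0_grid, t_obs)) / sigma) ** 2

# Unnormalized log-posterior -> normalized posterior on the grid. This is
# point (i): a probability assigned to each region of the input space.
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum() * dT

mean = (T0_grid * post).sum() * dT
sd = np.sqrt(((T0_grid - mean) ** 2 * post).sum() * dT)
print(f"Posterior over initial lime temperature T0: mean {mean:.1f} C, sd {sd:.1f} C")
```

For higher-dimensional unknowns the grid would be replaced by a sampler such
as MCMC, but the structure (prior, forward model, likelihood, posterior) is
unchanged.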
*Speaker Bio:*
Cory Simon is an assistant professor of chemical engineering at Oregon
State University. He earned his PhD in Chemical Engineering from the
University of California, Berkeley. His research group develops
mathematical models, trains machine learning models, and conducts computer
simulations to tackle or deliver insights into problems in chemistry and
materials science.
*Please watch this space for future AI Seminars:*
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fengineeri…
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
We will not have an AI seminar this week on Friday at the regular time, but we will have two special seminars by Wu Feng, a Computer Science professor at Virginia Tech. The first is at 10 AM in KEC 1005 and the second is at noon in KEC 1003. Details are linked below. Please attend if possible.
Special Seminar: Confessions of an Accidental Greenie: From Green Destiny to the Green500<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fevents.or…>
Special Seminar: At the Synergistic Intersection of Parallel Computing, Data Analytics, and Machine Learning<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fevents.or…>
Thanks,
Prasad
Dear all,
Our next AI seminar on *"Planning and Learning for Reliable Autonomy in the
Open World" *by Sandhya Saisubramanian is scheduled to be on April 21st
(Friday), 1-2 PM PST (Add to Google Calendar
<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcalendar.…>
). It will be followed by a 30-minute Q&A session with the graduate
students.
*Location:* CRPS 122 (Crop Science Building)
*Planning and Learning for Reliable Autonomy in the Open World*
Sandhya Saisubramanian
Assistant Professor
Electrical Engineering and Computer Science
Oregon State University
*Abstract:*
Safe and reliable decision-making is critical for long-term deployment of
autonomous systems. Despite the recent advances in artificial intelligence
and robotics, ensuring safe and reliable operation of human-aligned
autonomous systems in open-world environments remains a challenge because
these systems often operate based on incomplete information. In this talk,
I will present an overview of some of our recent efforts in mitigating the
undesirable impacts arising from model incompleteness. First, I will
present techniques to overcome Markovian and non-Markovian negative side
effects. Second, I will present an approach for reward alignment using
explanations. Finally, I will present a technique to maintain and restore
safety of autonomous systems using meta-reasoning.
*Speaker Bio:*
Sandhya Saisubramanian is an Assistant Professor in EECS at Oregon State
University. Her research focuses on reliable decision-making by single and
multiple agents operating in fully and partially observable open-world
environments. She is a recipient of the Outstanding Program Committee
Member award at ICAPS 2022 and a Distinguished Paper award at IJCAI
2020. She received her Ph.D. from the University of Massachusetts Amherst.
*Please watch this space for future AI Seminars:*
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fengineeri…
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.