Dear all,
Our next AI seminar, "Possible Impossibilities and Impossible Possibilities" by Dr. Yejin Choi, is scheduled for Oct 6th (Friday), 1-2 PM. It will be followed by a 30-minute Q&A session with the graduate students.
Location: Please note that the speaker will join via Zoom, but the talk will be streamed in KEC 1001 for everyone to attend.
Zoom Link: https://oregonstate.zoom.us/j/98684050301?pwd=ZzhianQxUFBPUmdYVWJKOFhaVURCQ…
Possible Impossibilities and Impossible Possibilities
Yejin Choi
Wissner-Slivka Professor and MacArthur Fellow
Paul G. Allen School of Computer Science & Engineering
University of Washington
Abstract:
In this talk, I will question if there can be possible impossibilities of large language models (i.e., the fundamental limits of transformers, if any) and the impossible possibilities of language models (i.e., seemingly impossible alternative paths beyond scale, if at all).
Speaker Bio:
Yejin Choi is the Wissner-Slivka Professor and a MacArthur Fellow at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She is also a senior director at AI2, overseeing Project Mosaic, and a Distinguished Research Fellow at the Institute for Ethics in AI at the University of Oxford. Her research investigates whether (and how) AI systems can learn commonsense knowledge and reasoning, whether machines can (and should) learn moral reasoning, and various other problems in NLP, AI, and vision, including neuro-symbolic integration, language grounding with vision and interactions, and AI for social good. She is a co-recipient of two Test of Time Awards (at ACL 2021 and ICCV 2021) and seven Best/Outstanding Paper Awards (at ACL 2023, NAACL 2022, ICML 2022, NeurIPS 2021, AAAI 2019, and ICCV 2013), and received the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, and IEEE AI's 10 to Watch in 2016.
Please watch this space for future AI Seminars:
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly encouraged
Dear all,
Our first AI seminar of the fall term, "Enhancing healthcare with AI-in-the-loop" by Sriraam Natarajan, is scheduled for Sept 29th (Friday), 1-2 PM PST. It will be followed by a 30-minute Q&A session with the graduate students.
Location: KEC 1001
Enhancing healthcare with AI-in-the-loop
Sriraam Natarajan
Professor and Director of the Center for Machine Learning, Department of Computer Science
University of Texas at Dallas
Abstract:
Historically, Artificial Intelligence has taken a symbolic route for representing and reasoning about objects at a higher level or a statistical route for learning complex models from large data. To achieve true AI in complex domains such as healthcare, it is necessary to make these different paths meet and enable seamless human interaction. First, I will introduce learning from rich, structured, complex and noisy data. One of the key attractive properties of the learned models is that they use a rich representation for modeling the domain that potentially allows for seamless human interaction. I will present recent progress that allows for more reasonable human interaction, where the human input is taken as “advice” and the learning algorithm combines this advice with data. I will present these algorithms in the context of several healthcare problems -- learning from electronic health records, clinical studies, and surveys -- and demonstrate the value of involving experts during learning.
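The advice-plus-data combination described above can be illustrated with a toy sketch. This is a generic illustration, not the speaker's actual algorithm: the dataset, the penalty form, and all variable names are invented. An expert's "advice" states the sign of each feature's effect, and a hinge penalty nudges the learned weights toward those signs while the data term does the main work.

```python
import numpy as np

# Toy sketch of knowledge-guided learning (invented example, not the
# speaker's formulation): logistic regression whose gradient adds a hinge
# penalty whenever a weight contradicts the expert's stated sign.

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = (X @ w_true > 0).astype(float)         # synthetic binary labels

advice_sign = np.array([+1.0, -1.0, 0.0])  # expert: feature 0 raises risk,
lam = 0.5                                  # feature 1 lowers it, none on 2

def loss_grad(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # data term: logistic loss gradient
    grad = X.T @ (p - y) / len(y)
    # Advice term: subgradient of lam * max(0, -advice_sign * w), which
    # penalizes weights whose sign contradicts the advice.
    grad += lam * np.where(advice_sign * w < 0, -advice_sign, 0.0)
    return grad

w = np.zeros(3)
for _ in range(3000):                      # plain gradient descent
    w -= 0.5 * loss_grad(w)

print("learned weights:", np.round(w, 2))
```

In the healthcare settings the talk covers, the advice would come from clinicians (e.g., a feature known to raise risk), and richer relational representations would replace the flat feature vector used here.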
Speaker Bio:
Sriraam Natarajan is a Professor and Director of the Center for ML in the Department of Computer Science at the University of Texas at Dallas, a hessian.AI fellow at TU Darmstadt, and an RBDSCAII Distinguished Faculty Fellow at IIT Madras. His research interests lie in the field of Artificial Intelligence, with emphasis on Machine Learning, Statistical Relational Learning and AI, Reinforcement Learning, Graphical Models, and Biomedical Applications. He is an AAAI senior member and has received the Young Investigator Award from the US Army Research Office, the Amazon Faculty Research Award, the Intel Faculty Award, the XEROX Faculty Award, the Verisk Faculty Award, the ECSS Graduate Teaching Award from UTD, and the IU Trustees Teaching Award from Indiana University. He is the program chair of AAAI 2024, the general chair of CoDS-COMAD 2024, AI and Society track chair of AAAI 2023 and 2022, senior member track chair of AAAI 2023, demo chair of IJCAI 2022, and program co-chair of the SDM 2020 and ACM CoDS-COMAD 2020 conferences. He was the specialty chief editor of the Frontiers in ML and AI journal, and is an associate editor of the JAIR, DAMI, and Big Data journals.
Please watch this space for future AI Seminars:
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
Hi everyone,
I hope you had a good summer. We will resume the regular AI seminar on Fridays at 1 pm in KEC 1001, starting September 29th. It is open to the public. Grads, please register for AI 507 to get 1 credit.
We will also have a special virtual seminar this week on Thursday, August 14, at 4 pm PST in KEC 1001 by Jonathan Ferrer Mestres, a research scientist at CSIRO, Australia. I hope to see you there.
Zoom link: https://oregonstate.zoom.us/j/98684050301?pwd=ZzhianQxUFBPUmdYVWJKOFhaVURCQ…
Title: Towards more interpretable solutions for conservation problems
Abstract:
Markov Decision Processes (MDPs) provide a convenient model for representing sequential decision-making optimization problems when the decision maker has complete information about the current state of the system and dynamics are non-deterministic. MDPs have been applied to help recover populations of threatened species under limited resources, to control invasive species, to perform adaptive management of natural resources, and to test behavioral ecology theories. These domains are human-operated systems, where MDP policies provide recommendations. Solutions computed for MDPs with thousands of states are difficult to understand. In human-operated systems, it is crucial that solutions provided by artificial intelligence algorithms can be interpreted and explained in order to increase uptake of MDP solutions. Explainable artificial intelligence, also known as the interpretability problem, aims to generate decisions in which one of the criteria is how easily a human can understand these decisions. We propose to increase the interpretability of MDPs by providing explainable artificial intelligence algorithms that can be used to solve conservation decision problems. We define the problem of solving K-MDPs, i.e., given an original MDP and a number of states (K), generate a reduced state space MDP that minimizes the difference between the original and reduced optimal solutions. Abstracting states aims to reduce the size of large state spaces by aggregating states which are equivalent given a metric. We found that K-MDPs can achieve a substantial reduction of the number of states with a small loss of performance on a number of case studies of increasing complexity from the literature.
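The K-MDP idea can be made concrete with a small sketch. This is a simplified illustration with invented dynamics, not the authors' algorithm: here ground states are aggregated by binning their optimal values, which is just one possible equivalence metric.

```python
import numpy as np

# Minimal K-MDP-style sketch: solve a toy chain MDP, aggregate its states
# into K abstract states, solve the reduced MDP, and measure the value loss.
# (Illustrative only; the aggregation metric and dynamics are invented.)

n_states, n_actions, gamma, K = 6, 2, 0.9, 3
P = np.zeros((n_actions, n_states, n_states))  # P[a, s, s']
for s in range(n_states):
    P[0, s, s] = 1.0                           # action 0: stay
    P[1, s, min(s + 1, n_states - 1)] = 1.0    # action 1: advance
R = np.zeros((n_actions, n_states))
R[:, -1] = 1.0                                 # reward only in the goal state

def value_iteration(P, R, gamma, iters=500):
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        V = (R + gamma * P @ V).max(axis=0)    # Bellman optimality backup
    return V

V = value_iteration(P, R, gamma)

# Abstract states: bin ground states by their optimal value.
edges = np.linspace(V.min(), V.max(), K + 1)[1:-1]
bins = np.digitize(V, edges)                   # cluster index for each state

# Reduced MDP: uniformly average dynamics and rewards within each cluster.
Pk = np.zeros((n_actions, K, K))
Rk = np.zeros((n_actions, K))
for k in range(K):
    members = np.where(bins == k)[0]
    for a in range(n_actions):
        Rk[a, k] = R[a, members].mean()
        for j in range(K):
            cols = np.where(bins == j)[0]
            Pk[a, k, j] = P[a][members][:, cols].sum(axis=1).mean()

Vk = value_iteration(Pk, Rk, gamma)
loss = np.abs(V - Vk[bins]).max()              # lift abstract values back down
print(f"states: {n_states} -> {K}, max value loss: {loss:.3f}")
```

A 3-state policy over the reduced MDP is far easier for a manager to inspect than the 6-state (or, in practice, thousand-state) original; the printed loss quantifies the trade-off the talk studies.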
Bio:
Jonathan is a Research Scientist within the Conservation Decisions Team, where he focuses on developing trustworthy and explainable artificial intelligence solutions for environmental decision-making. His primary goal is to ensure that artificial intelligence systems are equipped with transparent capabilities, fostering trust among users and experts who rely on these solutions.
Jonathan's research centers on making informed decisions in the face of uncertainty to provide effective environmental solutions.
Prior to his current role, he earned his PhD from Universitat Pompeu Fabra, in the Artificial Intelligence and Machine Learning Group in Barcelona in 2018, under the supervision of Dr. Hector Geffner. His doctoral thesis explored the integration of task and motion planning. Jonathan's educational journey also includes a Master's degree in Intelligent Interactive Systems and a Bachelor's degree in Computer Science.
-------------------------------------------------------
Prasad Tadepalli
Director, AI Program
School of Electrical Engineering and Computer Science
Oregon State University
Corvallis, OR 97330
Dear all,
Our next AI seminar, "Lipstick on a Pig: Using Language Models as Few-Shot Learners" by Sameer Singh, is scheduled for June 9th (Friday), 1-2 PM PST.
Location: https://oregonsta…
Please note that this is a Zoom-only event.
Lipstick on a Pig: Using Language Models as Few-Shot Learners
Sameer Singh
Associate Professor, Computer Science
University of California, Irvine
Abstract:
Today's Natural Language Processing (NLP) strategies are heavily reliant on
pre-trained language models due to their ability to deliver semantically
rich representations. While these models provide impressive few-shot
natural language understanding and reasoning capabilities, simply using
them as "fill in the blank" prompts may not be a one-size-fits-all
solution. Despite the allure of direct application of these models - an
allure that grows with model and dataset sizes - the objectives of language
modeling and few-shot learning are not perfectly aligned. Unpacking this
disparity is crucial.
In this talk, I will describe some of our work in characterizing the
differences between language modeling and few-shot learning. I will show
how language modeling comes with crucial shortcomings for few-shot
adaptation and describe a simple approach to address them. Then, focusing
on numerical reasoning, I will show that the reasoning ability of the
language models depends strongly on simple statistics of the pretraining
corpus, performing much more accurately for more common terms. These
results suggest language modeling may not be sufficient to learn robust
reasoners and that we need to take the pretraining data into account when
interpreting few-shot evaluation results. While language models hold
substantial promise, making these 'pigs' presentable with 'lipstick' may
require a more comprehensive approach than currently anticipated.
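The "fill in the blank" usage the abstract refers to can be sketched as plain prompt construction. The review/sentiment format and label words below are a common convention chosen for illustration, not the specific setup from the talk:

```python
# Few-shot "fill in the blank" prompting: demonstrations are serialized into
# one string and the language model is asked to complete the final blank.
# The talk's point is that the completion also depends on pretraining-corpus
# statistics (e.g., how frequent the label words are), not just on these
# demonstrations.

examples = [
    ("The movie was a complete waste of time.", "negative"),
    ("An absolute delight from start to finish.", "positive"),
]
query = "I would happily watch it again."

def build_prompt(examples, query):
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")  # the blank the LM fills in
    return "\n\n".join(blocks)

prompt = build_prompt(examples, query)
print(prompt)
```

The misalignment discussed in the talk means that two prompts with identical demonstrations but different surface wording can yield very different completions, which is why prompt-based few-shot evaluation needs care.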
Speaker Bio:
Dr. Sameer Singh is an Associate Professor of Computer Science at the
University of California, Irvine (UCI). He is working primarily on the
robustness and interpretability of machine learning algorithms and models
that reason with text and structure for natural language processing. Sameer
was a postdoctoral researcher at the University of Washington and received
his Ph.D. from the University of Massachusetts, Amherst. He has received
the NSF CAREER award, UCI Distinguished Early Career Faculty award, the
Hellman Faculty Fellowship, and was selected as a DARPA Riser. His group
has received funding from Allen Institute for AI, Amazon, NSF, DARPA, Adobe
Research, Hasso Plattner Institute, NEC, Base 11, and FICO. Sameer has
published extensively at machine learning and natural language processing
venues and received conference paper awards at KDD 2016, ACL 2018, EMNLP
2019, AKBC 2020, ACL 2020, and NAACL 2022.
Please watch this space for future AI Seminars:
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
Dear all,
Our next AI seminar, "Convergence Analysis Framework for Fixed-Point Algorithms in Machine Learning and Signal Processing" by Raviv Raich, is scheduled for June 2nd (Friday), 1-2 PM PST. It will be followed by a 30-minute Q&A session with the graduate students.
Location: Crop Science Building 122
Convergence Analysis Framework for Fixed-Point Algorithms in Machine Learning and Signal Processing
Raviv Raich
Associate Professor
Electrical and Computer Engineering
Oregon State University
Abstract:
Many iterative algorithms are designed to solve challenging signal
processing and machine learning optimization problems. Among such
algorithms are iterative hard thresholding for sparse reconstruction,
projected gradient descent for matrix completion, and alternating
projection for phase retrieval. These and other algorithms can be viewed
through the fixed-point iteration lens. In this talk, we will examine the
fixed-point view of iterative algorithms. We will present results on the
analysis of projected gradient descent for the well-known constrained least
squares problem and show how such analysis can be used to optimize the
iterative solution. As a special case of this framework, we will examine
iterative solutions to the matrix completion problem. Our approach provides
a stepping stone to the optimization of acceleration approaches across
multiple algorithms.
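The fixed-point view can be made concrete for the constrained least-squares case mentioned above. This is a generic sketch with an invented problem instance and a nonnegativity constraint standing in for the constraint set; the speaker's analysis is more general:

```python
import numpy as np

# Projected gradient descent for min (1/2)||Ax - b||^2 subject to x >= 0,
# written as the fixed-point iteration x <- T(x), where T is a gradient step
# followed by projection onto the constraint set. Iterating T until
# ||T(x) - x|| is tiny lands (numerically) on a fixed point, which is an
# optimum of the constrained problem. (Invented instance for illustration.)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.abs(rng.standard_normal(5))     # feasible (nonnegative) solution
b = A @ x_true

eta = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from the Lipschitz bound

def T(x):
    """One fixed-point step: gradient step, then projection onto x >= 0."""
    return np.maximum(x - eta * A.T @ (A @ x - b), 0.0)

x = np.zeros(5)
for _ in range(5000):
    x_next = T(x)
    if np.linalg.norm(x_next - x) < 1e-12:  # stopped moving: a fixed point
        break
    x = x_next

print("residual ||Ax - b||:", np.linalg.norm(A @ x - b))
```

The convergence question such an analysis addresses is exactly when and how fast the iterates of T approach the fixed point, and how the step size eta controls that rate.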
Speaker Bio:
Raviv Raich received the B.Sc. and M.Sc. degrees in electrical engineering
from Tel-Aviv University, Tel-Aviv, Israel, in 1994 and 1998, respectively,
and the Ph.D. degree in electrical engineering from the Georgia Institute
of Technology, Atlanta, GA, in 2004. Between 1999 and 2000, he was a
Researcher with the Communications Team, Industrial Research, Ltd.,
Wellington, New Zealand. From 2004 to 2007, he was a Postdoctoral Fellow
with the University of Michigan, Ann Arbor, MI. Raich has been an assistant
professor (2007-2013) and an associate professor (2013-2023) with the
School of Electrical Engineering and Computer Science, Oregon State
University, Corvallis, OR. His research interests include probabilistic
modeling and optimization in signal processing and machine learning. From
2011 to 2014, he was an Associate Editor for the IEEE Transactions on
Signal Processing. He was a Member during 2011–2016 and the Chair during
2017–2018 of the Machine Learning for Signal Processing Technical Committee
(TC) of the IEEE Signal Processing Society. Since 2019, he has been a
Member of the Signal Processing Theory and Methods TC of the IEEE Signal
Processing Society.
Please watch this space for future AI Seminars:
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
Dear all,
Our next AI seminar, "Integrated Planning and Reinforcement Learning for Compositional Domains" by Harsha Kokel, is scheduled for May 26th (Friday), 1-2 PM PST. It will be followed by a 30-minute Q&A session with the graduate students.
Location: Crop Science Building 122
Integrated Planning and Reinforcement Learning for Compositional Domains
Harsha Kokel, Research Scientist
IBM Research
Abstract:
Many real-world domains involving sequential decision-making exhibit a
compositional nature. Such domains possess an inherent hierarchy that
allows decision-makers to abstract away certain details and focus on making
high-level decisions using abstractions. In this talk, I will present an
overview of some of our recent efforts in leveraging a combination of
planning and reinforcement learning for such domains. First, I will present
our approach to constructing task-specific abstractions from the influence
information in the domain. Second, I will talk about using that in a
framework for learning efficient and generalizable agents. Finally, I will
discuss the difference between the action spaces of two
sequential-decision-making formulations (MDP & PDDL task) and propose an
approach to bridge that gap.
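The planner-plus-RL split described above can be sketched schematically. The operators, facts, and greedy search below are invented for illustration and are not the speaker's actual framework: a symbolic high-level planner fixes a subgoal sequence over abstract facts, and each step would then be handed to a learned low-level policy.

```python
# High level: PDDL-style operators as (preconditions, effects) over a set of
# boolean facts; a greedy forward search produces an abstract plan.
# Low level: each abstract step is a subtask for an RL policy (stubbed here).
# (Invented toy domain, not the speaker's system.)

operators = {
    "pick": ({"at_obj"}, {"holding"}),
    "move": ({"holding"}, {"at_goal"}),
    "drop": ({"at_goal", "holding"}, {"delivered"}),
}

def plan(state, goal):
    """Greedily apply the first operator whose preconditions hold and whose
    effects add something new, until the goal fact is reached."""
    steps = []
    while goal not in state:
        for name, (pre, eff) in operators.items():
            if pre <= state and not eff <= state:
                state = state | eff
                steps.append(name)
                break
        else:
            return None  # no applicable operator: the abstraction is incomplete
    return steps

def execute(step):
    # Stand-in for a learned low-level policy handling one subgoal.
    return f"RL policy for '{step}' runs until its termination condition"

high_level = plan({"at_obj"}, "delivered")
print(high_level)  # ['pick', 'move', 'drop']
for step in high_level:
    print(execute(step))
```

The gap mentioned at the end of the abstract shows up even here: the planner's actions are set-level operators, while an RL agent's actions are low-level controls, so some mapping between the two action spaces is needed.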
Speaker Bio:
Harsha Kokel is a Research Scientist at IBM Research. Her research focuses
on efficient knowledge-guided learning in structured, relational domains.
She is interested in sequential decision-making problems and exploring the
combination of planning and reinforcement learning. She earned her Ph.D.
at the University of Texas at Dallas. Her research has been published in
the top AI/ML conferences including AAAI, IJCAI, and NeurIPS. She is
currently co-organizing workshops on bridging the gap between planning and
RL at ICAPS and IJCAI 2023. She also serves as an assistant electronic
publishing editor for JAIR.
Please watch this space for future AI Seminars:
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University