Dear all,
Our next AI seminar on *"AI can learn from data. But can it learn to
reason?" *by Guy Van den Broeck is scheduled to be on February 3rd
(Friday), 1-2 PM PST (Add to Google Calendar
<https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=M2dldjQ2cG…>
). It will be followed by a 30-minute Q&A session with the graduate
students.
*Location: Rogers 230*
*AI can learn from data. But can it learn to reason?*
Guy Van den Broeck
University of California at Los Angeles
*Abstract:*
Modern artificial intelligence, and deep learning in particular, is
extremely capable at learning predictive models from vast amounts of data.
Many expect that AI will go from powering customer service chatbots to
providing mental health services, and from personalized advertising to
deciding who is granted bail. The expectation is that AI will solve
society’s problems simply by being more intelligent than we are.
Implicit in this bullish perspective is the assumption that AI will
naturally learn to reason from data: that it can form trains of thought
that “make sense,” similar to how a mental health professional, a judge, or
a lawyer might reason about a case, or more formally, how a mathematician
might prove a theorem. This talk will investigate whether this behavior can
be learned from data, and how we can design the next generation of
artificial intelligence techniques to achieve such capabilities, focusing
on neuro-symbolic learning and tractable deep generative models.
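As a hedged, illustrative aside (not necessarily the formulation covered in
the talk): one well-known neuro-symbolic construction turns a symbolic
constraint into a differentiable penalty, so a network is rewarded for
outputs that satisfy the constraint. The sketch below computes such a loss
for an exactly-one constraint over sigmoid outputs; the constraint choice
and the toy probabilities are assumptions made only for this example.
import math

def exactly_one_probability(probs):
    # Probability that exactly one of the independent Bernoulli variables
    # with success probabilities `probs` is true.
    total = 0.0
    for i, p_i in enumerate(probs):
        term = p_i
        for j, p_j in enumerate(probs):
            if j != i:
                term *= (1.0 - p_j)
        total += term
    return total

def constraint_loss(probs):
    # Negative log-probability that the exactly-one constraint holds;
    # adding this to a task loss nudges a network toward outputs that
    # satisfy the symbolic constraint.
    return -math.log(exactly_one_probability(probs) + 1e-12)

# Toy sigmoid outputs for a 3-way "pick exactly one" decision (assumed values).
print(constraint_loss([0.9, 0.1, 0.05]))  # small loss: constraint nearly satisfied
print(constraint_loss([0.5, 0.5, 0.5]))   # larger loss: constraint poorly satisfied
For richer constraints, computing this satisfaction probability efficiently
is where tractable representations of the constraint come into play.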
*Speaker Bio:*
Guy Van den Broeck is an Associate Professor and Samueli Fellow at UCLA, in
the Computer Science Department, where he directs the Statistical and
Relational Artificial Intelligence (StarAI) lab. His research interests are
in Machine Learning, Knowledge Representation and Reasoning, and Artificial
Intelligence in general. His papers have been recognized with awards from
key conferences such as AAAI, UAI, KR, OOPSLA, and ILP. Guy is the
recipient of an NSF CAREER award, a Sloan Fellowship, and the IJCAI-19
Computers and Thought Award.
*Please watch this space for future AI Seminars:*
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"Microbial "Language" Model: Using Natural Language
Processing Techniques to Understand Microbiomes" *by Xiaoli Fern is scheduled
to be on January 27th(Tomorrow), 1-2 PM PST. It will be followed by a
30-minute Q&A session with the graduate students.
*Location: Rogers 230*
*Microbial "Language" Model: Using Natural Language Processing Techniques
to Understand Microbiomes*
Xiaoli Fern
Associate Professor
Oregon State University
*Abstract:*
Human microbiomes and their interactions with various body systems have
been linked to a wide range of diseases and lifestyle variables. To
understand these links, citizen science projects such as the American Gut
Project (AGP) and Human Food Project (HFP) have provided large open source
datasets for microbiome investigation. In this talk, I will present our
recent work that leverages such open source datasets by learning a
microbial “language” model using techniques originally developed for
Natural Language Processing (NLP). Our microbial “language” model is
trained in a self-supervised fashion to capture the interactions among
different microbial species (taxa) and the common compositional patterns in
forming microbial communities, much like the language model in NLP trained
to capture word interactions and the grammatical patterns in natural texts.
Importantly, the learned model allows individual bacterial species to be
interpreted and represented differently depending on the context of the
microbial environment, and it produces a representation of a sample by
collectively interpreting the different bacterial species in the sample and
their interactions as a whole. To demonstrate the power of our model, we
show that our sample representation consistently leads to improved
prediction performance over baseline representations across multiple tasks,
including predicting disease states and diet patterns. We
also show that the learned representation, coupled with a simple ensemble
strategy, can produce highly robust models that can generalize well to
microbiome data independently collected from different populations.
Finally, I will present some interpretation results that help understand
our model and its behavior.
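For readers curious how a microbial "language" model can be instantiated,
here is a minimal, hedged sketch of the masked-prediction recipe the
abstract borrows from NLP: mask some taxa in a sample and train an encoder
to predict them from the remaining taxa. The vocabulary size, architecture,
and training details below are illustrative assumptions, not the authors'
actual implementation.
import torch
import torch.nn as nn

VOCAB = 1000   # number of distinct taxa, plus a reserved [MASK] id (assumed size)
MASK_ID = 0
D_MODEL = 64

class MicrobialLM(nn.Module):
    def __init__(self):
        super().__init__()
        # No positional encoding: a sample is treated as a set of taxa.
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB)   # predict a taxon id per position

    def forward(self, taxa_ids):                # taxa_ids: (batch, n_taxa)
        h = self.encoder(self.embed(taxa_ids))  # context-dependent taxon states
        return self.head(h)

def mask_taxa(taxa_ids, mask_prob=0.15):
    # Randomly replace a fraction of taxa with the [MASK] id.
    mask = torch.rand(taxa_ids.shape) < mask_prob
    return taxa_ids.masked_fill(mask, MASK_ID), mask

model = MicrobialLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random "samples"; real inputs would be taxa ids
# derived from profiles such as those in the American Gut Project data.
samples = torch.randint(1, VOCAB, (8, 32))   # 8 samples, 32 taxa each
corrupted, mask = mask_taxa(samples)
logits = model(corrupted)
loss = loss_fn(logits[mask], samples[mask])  # predict only the masked taxa
loss.backward()
opt.step()
In such a setup, pooled encoder states could then serve as the sample
representation used for downstream tasks like the disease-state and diet
prediction mentioned in the abstract.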
*Speaker Bio:*
Xiaoli Fern is an associate professor of Computer Science at Oregon State
University. She received her Ph.D. (2005) in computer engineering from
Purdue University and her M.S. (2000) and B.S. (1997) degrees from Shanghai
Jiao Tong University. Dr. Fern is broadly interested in applied machine
learning and data mining, where she draws inspiration from practical
challenges in real-world applications to develop new methods and new
understanding of machine learning. Her current research focuses on
self-supervised learning from large and complex data, with application areas
spanning from material characterization and design to learning “rules of
life” from microbial and metabolomics data. Her work is sponsored by NSF,
DARPA, DOE and USDA.
*Please watch this space for future AI Seminars:*
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"Microbial "Language" Model: Using Natural Language
Processing Techniques to Understand Microbiomes" *by Xiaoli Fern is scheduled
to be on January 27th(Friday), 1-2 PM PST. It will be followed by a
30-minute Q&A session with the graduate students.
*Location: Rogers 230*
*Microbial "Language" Model: Using Natural Language Processing Techniques
to Understand Microbiomes*
Xiaoli Fern
Associate Professor
Oregon State University
*Abstract:*
Human microbiomes and their interactions with various body systems have
been linked to a wide range of diseases and lifestyle variables. To
understand these links, citizen science projects such as the American Gut
Project (AGP) and Human Food Project (HFP) have provided large open source
datasets for microbiome investigation. In this talk, I will present our
recent work that leverages such open source datasets by learning a
microbial “language” model using techniques originally developed for
Natural Language Processing (NLP). Our microbial “language” model is
trained in a self-supervised fashion to capture the interactions among
different microbial species (taxa) and the common compositional patterns in
forming microbial communities, much like the language model in NLP trained
to capture word interactions and the grammatical patterns in natural texts.
Importantly, the learned model allows individual bacterial species to be
interpreted and represented differently depending on the context of the
microbial environment, and it produces a representation of a sample by
collectively interpreting the different bacterial species in the sample and
their interactions as a whole. To demonstrate the power of our model, we
show that our sample representation consistently leads to improved
prediction performance over baseline representations across multiple tasks,
including predicting disease states and diet patterns. We
also show that the learned representation, coupled with a simple ensemble
strategy, can produce highly robust models that can generalize well to
microbiome data independently collected from different populations.
Finally, I will present some interpretation results that help understand
our model and its behavior.
*Speaker Bio:*
Xiaoli Fern is an associate professor of Computer Science at Oregon State
University. She received her Ph.D. (2005) in computer engineering from
Purdue University and her M.S. (2000) and B.S. (1997) degrees from Shanghai
Jiao Tong University. Dr. Fern is broadly interested in applied machine
learning and data mining, where she draws inspiration from practical
challenges in real-world applications to develop new methods and new
understanding of machine learning. Her current research focuses on
self-supervised learning from large and complex data, with application areas
spanning from material characterization and design to learning “rules of
life” from microbial and metabolomics data. Her work is sponsored by NSF,
DARPA, DOE and USDA.
*Please watch this space for future AI Seminars:*
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"Symbols as a Lingua Franca for Supporting Human-AI
Interaction For Explainable and Advisable AI Systems" *by Subbarao
Kambhampati is scheduled to be on January 20th(Tomorrow), 1-2 PM PST. It
will be followed by a 30-minute Q&A session with the graduate students.
*Please note that the speaker will join over Zoom, but the talk will be set
up in Rogers 230 for everyone to attend.*
*Symbols as a Lingua Franca for Supporting Human-AI Interaction For
Explainable and Advisable AI Systems*
Subbarao Kambhampati
Professor
Arizona State University
*Abstract:*
Despite the surprising power of many modern AI systems that often learn
their own representations, there is significant discontent about their
inscrutability and the attendant problems in their ability to interact with
humans. While alternatives such as neuro-symbolic approaches have been
proposed, there is a lack of consensus on what they are about. There are
often two independent motivations: (i) symbols as a lingua franca for
human-AI interaction, and (ii) symbols as system-produced abstractions used
by the AI system in its internal reasoning. The jury is still out on
whether AI systems will need to use symbols in their internal reasoning to
achieve general intelligence capabilities. Whatever the answer turns out to
be, the need for (human-understandable) symbols in human-AI interaction
seems quite compelling. In particular, humans would be interested in providing
explicit (symbolic) knowledge and advice -- and expect machine explanations
in kind. This alone requires AI systems to maintain a symbolic interface
for interaction with humans. In this talk, I will motivate this point of
view, and describe recent efforts in our research group along this
direction.
*Speaker Bio:*
Subbarao Kambhampati is a professor of computer science at Arizona State
University. Kambhampati studies fundamental problems in planning and
decision making, motivated in particular by the challenges of human-aware
AI systems. He is a fellow of the Association for the Advancement of
Artificial Intelligence, the American Association for the Advancement of
Science, and the Association for Computing Machinery, and was an NSF Young
Investigator. He
served as the president of the Association for the Advancement of
Artificial Intelligence, a trustee of the International Joint Conference on
Artificial Intelligence, the chair of AAAS Section T (Information,
Communication and Computation), and a founding board member of Partnership
on AI. Kambhampati’s research as well as his views on the progress and
societal impacts of AI have been featured in multiple national and
international media outlets. He can be followed on Twitter @rao2z.
*Please watch this space for future AI Seminars:*
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"Symbols as a Lingua Franca for Supporting Human-AI
Interaction For Explainable and Advisable AI Systems" *by Subbarao
Kambhampati is scheduled to be on January 20th(Friday), 1-2 PM PST. It
will be followed by a 30-minute Q&A session with the graduate students.
*Please note that the speaker will join over Zoom, but the talk will be set
up in Rogers 230 for everyone to attend.*
*Symbols as a Lingua Franca for Supporting Human-AI Interaction For
Explainable and Advisable AI Systems*
Subbarao Kambhampati
Professor
Arizona State University
*Abstract:*
Despite the surprising power of many modern AI systems that often learn
their own representations, there is significant discontent about their
inscrutability and the attendant problems in their ability to interact with
humans. While alternatives such as neuro-symbolic approaches have been
proposed, there is a lack of consensus on what they are about. There are
often two independent motivations: (i) symbols as a lingua franca for
human-AI interaction, and (ii) symbols as system-produced abstractions used
by the AI system in its internal reasoning. The jury is still out on
whether AI systems will need to use symbols in their internal reasoning to
achieve general intelligence capabilities. Whatever the answer turns out to
be, the need for (human-understandable) symbols in human-AI interaction
seems quite compelling. In particular, humans would be interested in providing
explicit (symbolic) knowledge and advice -- and expect machine explanations
in kind. This alone requires AI systems to maintain a symbolic interface
for interaction with humans. In this talk, I will motivate this point of
view, and describe recent efforts in our research group along this
direction.
*Speaker Bio:*
Subbarao Kambhampati is a professor of computer science at Arizona State
University. Kambhampati studies fundamental problems in planning and
decision making, motivated in particular by the challenges of human-aware
AI systems. He is a fellow of the Association for the Advancement of
Artificial Intelligence, the American Association for the Advancement of
Science, and the Association for Computing Machinery, and was an NSF Young
Investigator. He
served as the president of the Association for the Advancement of
Artificial Intelligence, a trustee of the International Joint Conference on
Artificial Intelligence, the chair of AAAS Section T (Information,
Communication and Computation), and a founding board member of Partnership
on AI. Kambhampati’s research as well as his views on the progress and
societal impacts of AI have been featured in multiple national and
international media outlets. He can be followed on Twitter @rao2z.
*Please watch this space for future AI Seminars:*
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"Seeing outside the image: Space and time
completion for video tracking and scene parsing" *by Katerina Fragkiadakiis
is scheduled to be on January 13th(Tomorrow), 1-2 PM PST. It will be
followed by a 30-minute Q&A session with the graduate students.
*Please note that the speaker will join over Zoom, but the talk will be set
up in Rogers 230 (different from the usual location) for everyone to
attend.*
*Seeing outside the image: Space and time completion for video tracking and
scene parsing*
Katerina Fragkiadaki
Assistant Professor
Carnegie Mellon University
*Abstract:*
We investigate methods that allow computer vision architectures to
self-improve on unlabelled data by exploiting rich regularities of the
natural world. As a
starting point, we embrace the fact that the world is 3D, and design neural
architectures that map RGB-D observations into 3D feature maps. This
representation allows us to generate self-supervision objectives using
other regularities: we know that two objects cannot be in the same location
at once, and that multiple views can be related with geometry. We use these
facts to train viewpoint-invariant 3D features (unsupervised), and yield
improvements in object detection and tracking. We then discuss
entity-centric architectures where entities are informed by associative
retrieval or by reconstruction feedback, and show their superior
generalization over models without memory or without
reconstruction feedback. We then shift focus to extracting information
from dynamic scenes. We propose a way to improve motion estimation itself,
by revisiting the classic concept of “particle videos”. Using learned
temporal priors and within-inference optimization, we can track points
across occlusions, and outperform flow-based and feature-matching methods
on fine-grained multi-frame correspondence tasks.
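As a hedged aside on what "mapping RGB-D observations into 3D feature maps"
builds on, the sketch below shows the standard pinhole-camera unprojection
of a depth image into a 3D point cloud, followed by a simple occupancy
voxelization; the intrinsics, image size, and grid extent are made-up values
for illustration, not the speaker's architecture.
import numpy as np

H, W = 480, 640                      # image size (assumed)
fx = fy = 525.0                      # focal lengths in pixels (assumed)
cx, cy = W / 2.0, H / 2.0            # principal point (assumed)

def unproject(depth):
    # depth: (H, W) array of metric depths -> (H*W, 3) camera-frame points.
    v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def voxelize(points, grid=64, extent=4.0):
    # Bin points into a (grid, grid, grid) occupancy volume covering
    # [-extent/2, extent/2] in x/y and [0, extent] in z.
    vol = np.zeros((grid, grid, grid), dtype=np.float32)
    idx = np.empty_like(points)
    idx[:, 0] = (points[:, 0] + extent / 2) / extent * grid
    idx[:, 1] = (points[:, 1] + extent / 2) / extent * grid
    idx[:, 2] = points[:, 2] / extent * grid
    idx = np.floor(idx).astype(int)
    keep = np.all((idx >= 0) & (idx < grid), axis=1)
    vol[idx[keep, 0], idx[keep, 1], idx[keep, 2]] = 1.0
    return vol                        # input for a 3D convolutional encoder

depth = np.random.uniform(0.5, 3.5, size=(H, W))   # fake depth map for illustration
occupancy = voxelize(unproject(depth))
print(occupancy.shape, occupancy.sum())
Once clouds from different viewpoints are brought into a common world frame
using the relative camera poses, the same physical surfaces occupy the same
voxels, which is the geometric regularity the self-supervision objectives
above can exploit.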
*Speaker Bio:*
Katerina Fragkiadaki is an Assistant Professor in the Machine Learning
Department at Carnegie Mellon University. She received her Ph.D. from the
University of Pennsylvania and was subsequently a postdoctoral fellow at UC
Berkeley and Google Research. Her work is on learning visual
representations with little supervision and on incorporating spatial
reasoning into deep visual learning. Her group develops algorithms for
mobile computer vision and for learning physics and common sense for agents
that move around and interact with the world. Her work has been recognized
with a best Ph.D. thesis award, an NSF CAREER award, an AFOSR Young
Investigator award, a DARPA Young Investigator award, and faculty research
awards from Google, TRI, Amazon, UPMC, and Sony.
*Please watch this space for future AI Seminars:*
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"Seeing outside the image: Space and time
completion for video tracking and scene parsing" *by Katerina
Fragkiadakiis scheduled
to be on January 13th(Friday), 1-2 PM PST. It will be followed by a
30-minute Q&A session with the graduate students.
*Please note that the speaker will join over Zoom, but the talk will be set
up in Rogers 230 (different from the usual location) for everyone to
attend.*
*Seeing outside the image: Space and time completion for video tracking and
scene parsing*
Katerina Fragkiadaki
Assistant Professor
Carnegie Mellon University
*Abstract:*
We investigate methods that allow computer vision architectures to
self-improve on unlabelled data by exploiting rich regularities of the
natural world. As a
starting point, we embrace the fact that the world is 3D, and design neural
architectures that map RGB-D observations into 3D feature maps. This
representation allows us to generate self-supervision objectives using
other regularities: we know that two objects cannot be in the same location
at once, and that multiple views can be related with geometry. We use these
facts to train viewpoint-invariant 3D features (unsupervised), and yield
improvements in object detection and tracking. We then discuss
entity-centric architectures where entities are informed by associative
retrieval or by reconstruction feedback, and show their superior
generalization over models without memory or without
reconstruction feedback. We then shift focus to extracting information
from dynamic scenes. We propose a way to improve motion estimation itself,
by revisiting the classic concept of “particle videos”. Using learned
temporal priors and within-inference optimization, we can track points
across occlusions, and outperform flow-based and feature-matching methods
on fine-grained multi-frame correspondence tasks.
*Speaker Bio:*
Katerina Fragkiadaki is an Assistant Professor in the Machine Learning
Department at Carnegie Mellon University. She received her Ph.D. from the
University of Pennsylvania and was subsequently a postdoctoral fellow at UC
Berkeley and Google Research. Her work is on learning visual
representations with little supervision and on incorporating spatial
reasoning into deep visual learning. Her group develops algorithms for
mobile computer vision and for learning physics and common sense for agents
that move around and interact with the world. Her work has been recognized
with a best Ph.D. thesis award, an NSF CAREER award, an AFOSR Young
Investigator award, a DARPA Young Investigator award, and faculty research
awards from Google, TRI, Amazon, UPMC, and Sony.
*Please watch this space for future AI Seminars:*
https://engineering.oregonstate.edu/EECS/research/AI
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our last AI seminar of the fall term, *"Data Science Consulting at AWS"* by
Patrick Roberts and Neville Mehta, is scheduled for December
2nd (Tomorrow), 1-2 PM PST. It will be followed by a 30-minute Q&A session
with the graduate students.
*Please note that the speakers will join over Zoom, but the talk will be set
up in KEC 1001 for everyone to attend.*
*Data Science Consulting at AWS*
Patrick Roberts and Neville Mehta
AWS
*Abstract:*
AWS Data Scientists, Patrick and Neville, will discuss how machine learning
and statistical analyses are being applied to problems in agriculture,
education, energy, manufacturing, and medicine on the AWS cloud, and tell
stories of the daily challenges and adventures in the life of a data
scientist.
*Speaker Bio:*
Patrick Roberts is a Principal Data Scientist (at AWS since 2017) and tech
lead for the AI/ML Global Specialty Practice. Following a PhD in
theoretical physics, Patrick developed an academic research program at
Oregon Health & Science University in computational neuroscience and
modeling biological systems. He later consulted with the pharmaceutical
industry, developing quantitative systems models for predicting clinical
outcomes for novel treatments. His data science experience spans wearable
devices and the use of AWS services to solve problems in education, energy
management, health care, and life sciences.
Neville Mehta is a Senior Data Scientist with the AI/ML Global Specialty
Practice at AWS, where he helps customers solve business problems by
leveraging ML and optimization technologies. Before AWS, he was a research
scientist at a systematic hedge fund, where he developed profitable trading
and execution systems for commodity markets. His PhD in ML from Oregon
State University focused on making reinforcement learning more efficient by
leveraging hierarchical task structure discovered within problems through
causal analysis.
*Please watch this space for future AI Seminars:*
https://eecs.oregonstate.edu/ai-events
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Hi everyone,
Axel Saenz in the OSU math department is teaching a class on Markov
processes next term (see description below), and asked me to advertise it
in computer science. The class sounds really cool, and I think it may be of
particular interest to theory and AI/ML graduate students.
- Huck
---------- Forwarded message ---------
From: Saenz Rodriguez, Axel <saenzroa@oregonstate.edu>
Date: Mon, 28 Nov 2022 at 15:20
Subject: Course Announcement: MTH 665
To: <math_all@math.oregonstate.edu>
Hi all,
I’m teaching a class on Markov processes with applications to Bayesian
inference. The course is intended for a wide audience and focuses on
practical applications related to data science and statistics. Please
consider registering for the course.
*Course:*
MTH 665, Probability Theory (Markov processes)
*Description:*
Markov processes, a type of random dynamical process, have a variety of
applications with rich and interesting phenomena that are fully accessible
through tools and methods from linear algebra. For instance, a random walk
on a graph is the prototypical example of a Markov process: the random
walker moves from vertex to vertex along edges based on independent
rolls of a die. In the course, we will develop the theory of Markov
processes and consider applications related to data science such as Markov
chain Monte Carlo (MCMC) and Bayesian inference.
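As a hedged illustration of the linear-algebra viewpoint described above
(the graph, transition probabilities, and step count are made-up for this
example, not course material), the sketch below simulates a random walk on
a small graph and compares the empirical visit frequencies with the
stationary distribution computed directly from the transition matrix.
import numpy as np

# Transition matrix of a random walk on a 4-vertex graph: row i gives the
# probabilities of stepping from vertex i to each neighbor.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.5, 0.0, 0.25, 0.25],
    [0.5, 0.25, 0.0, 0.25],
    [0.0, 0.5, 0.5, 0.0],
])

rng = np.random.default_rng(0)
state, visits = 0, np.zeros(4)
for _ in range(100_000):              # simulate the walk and count visits
    state = rng.choice(4, p=P[state])
    visits[state] += 1

# Stationary distribution via linear algebra: the eigenvector of P^T with
# eigenvalue 1, normalized to sum to one.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

print("empirical  :", visits / visits.sum())
print("stationary :", pi)
The same matrix view is what MCMC builds on: one designs a chain whose
stationary distribution is the posterior one wants to sample from.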
*Textbooks:*
The textbooks are available electronically through the OSU library.
- Introduction to stochastic processes with R
<https://search.library.oregonstate.edu/permalink/01ALLIANCE_OSU/jbiqoo/alma…>
(Chapters 2, 3, and 5)
- Statistical rethinking: a Bayesian course with examples in R and Stan
<https://search.library.oregonstate.edu/permalink/01ALLIANCE_OSU/jbiqoo/alma…>
(Chapters 3 and 8)
*Requirements:*
Linear algebra is the only background necessary; no previous probability
background or experience is required, and measure theory will not be used
in the course. There are no formal course prerequisites.
If you have any questions, please contact me directly at
saenzroa@oregonstate.edu
Regards,
Axel Saenz Rodriguez
--
Axel Saenz Rodriguez (he/him/his)
Assistant Professor
Department of Mathematics
Oregon State University
Dear all,
Our next AI seminar on *"Data Science Consulting at AWS" *by Patrick
Roberts and Neville Mehta is scheduled to be on December 2nd(Friday), 1-2
PM PST. It will be followed by a 30-minute Q&A session with the graduate
students.
*Please note that the speakers will join over Zoom, but the talk will be set
up in KEC 1001 for everyone to attend.*
*Data Science Consulting at AWS*
Patrick Roberts and Neville Mehta
AWS
*Abstract:*
AWS Data Scientists, Patrick and Neville, will discuss how machine learning
and statistical analyses are being applied to problems in agriculture,
education, energy, manufacturing, and medicine on the AWS cloud, and tell
stories of the daily challenges and adventures in the life of a data
scientist.
*Speaker Bio:*
Patrick Roberts is a Principal Data Scientist (at AWS since 2017) and tech
lead for the AI/ML Global Specialty Practice. Following a PhD in
theoretical physics, Patrick developed an academic research program at
Oregon Health & Science University in computational neuroscience and
modeling biological systems. He later consulted with the pharmaceutical
industry, developing quantitative systems models for predicting clinical
outcomes for novel treatments. His data science experience spans wearable
devices and the use of AWS services to solve problems in education, energy
management, health care, and life sciences.
Neville Mehta is a Senior Data Scientist with the AI/ML Global Specialty
Practice at AWS, where he helps customers solve business problems by
leveraging ML and optimization technologies. Before AWS, he was a research
scientist at a systematic hedge fund, where he developed profitable trading
and execution systems for commodity markets. His PhD in ML from Oregon
State University focused on making reinforcement learning more efficient by
leveraging hierarchical task structure discovered within problems through
causal analysis.
*Please watch this space for future AI Seminars:*
https://eecs.oregonstate.edu/ai-events
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.