Dear all,
Our next AI seminar on *"Considerations for More Scalable Trustworthy AI" *by
Richard Mallah is scheduled to be on March 2nd, 1-2 PM PST. It will be
followed by a 30 minute Q&A session by the graduate students.
Zoom Link:
https://oregonstate.zoom.us/j/93591935144?pwd=YjZaSjBYS0NmNUtjQzBEdzhPeDZ5U…
*Considerations for More Scalable Trustworthy AI*
Richard Mallah
Director of AI Projects
The Future of Life Institute
*Abstract:*
As AI systems become less narrow in their capabilities and more
general-purpose, new classes and levels of pitfalls present themselves. The
techniques we bring to bear for the safety and ethical alignment of these
systems will therefore need to scale in new ways, requiring innovation
increasingly on par with the sophistication of the system's primary
learning functions. As the gap between what a system can do and what it
should do grows, desiderata we encounter even with narrower AI systems,
such as establishing safe bounds, practical interpretability, verification
of key models or algorithms, minimizing negative side effects, maintaining
operator control, and mitigating unwanted biases, will each need to
account for levels of indirection previously unseen.
In this talk, we explore the cultivation of a security mindset with respect
to what more general systems can do wrong, and the application of that
mindset to the critical evaluation and design of safety techniques with
regard to their scalability and amenability to generality. It is notable,
for instance, that with increasing generality, modeling context becomes
increasingly important and AI safety and AI ethics increasingly overlap.
*Speaker Bio:*
Richard Mallah is Director of AI Projects at The Future of Life Institute,
where he does metaresearch, analysis, advocacy, strategy, and field
building regarding technical, strategic, and policy aspects of
transformative AI safety. In December 2015, Richard joined the founding
team of the IEEE Global Initiative on Ethics of Autonomous and Intelligent
Systems, and he continues to serve on its Executive Committee. He co-chairs the
recurring SafeAI and AISafety technical safety workshops at AAAI and IJCAI,
and in 2021 was the founding Executive Director of the Consortium on the
Landscape of AI Safety, for which he was drafted because of his AI safety
field landscaping & synthesis work at FLI. Richard has served in the Safety
and the Labor & Economy Working Groups at Partnership on AI, has chaired
the IEEE GIEAIS committees on AGI and on lethal autonomous weapons, and is
an Honorary Senior Fellow at the Foresight Institute.
Mr. Mallah has been working in machine learning and AI in industry for over
twenty years, spanning many roles across R&D including algorithms research,
research management, product team management, CTO, chief scientist, and
strategy; in total he’s worked on over a hundred AI/ML-related technical
projects from these different perspectives. Ever-focused on innovation yet
mindful of managing risks, Richard has regularly aligned applied research
drivers with novel research directions in trustworthy AI. In so doing,
safety-related R&D he’s led has included: multiobjective GP agents with
safety objectives, debiasing novel latent spaces, active-learning-enhanced
automated ontology refactoring and alignment, explainability-enhanced
conditional quasi-metric spaces, uncertainty-aware tight blends of symbolic
and subsymbolic knowledge representations, liability-bearing-anomaly
recognition, confidence-enhanced Bayesian Monte Carlo analysis of
operational risk, more robust multimodal LSTM systems,
information-retrieval-enhanced transformer-based systems for more truthful
NLG, and AI auditing methods. Richard advises AI safety startups, VC funds,
incubators, academics, governments, international multistakeholder bodies,
and NGOs on trustworthy AI, scalable AI safety, scalable AI ethics,
wide-angle sustainability, ML model risk management, and assurance. Mr.
Mallah has given dozens of invited talks globally on long-termist
foresight, assurance, robustness, interpretability, and control of advanced
AI and autonomous systems. He holds a degree in Intelligent Systems from
Columbia University.
*Please watch this space for future AI Seminars:*
https://eecs.oregonstate.edu/ai-events
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"**Over a trillion bases and counting: why
leveraging public databases is essential for microbiome analysis**" *by
professor Maude David is scheduled to be on February 23rd (Today), 1-2 PM
PST.
Zoom Link:
https://oregonstate.zoom.us/j/93591935144?pwd=YjZaSjBYS0NmNUtjQzBEdzhPeDZ5U…
*Over a trillion bases and counting: why leveraging public databases is
essential for microbiome analysis*
Maude David
Assistant Professor
Department of Microbiology
Oregon State University
*Abstract:*
The amount of publicly available sequencing data has doubled approximately
every 18 months since 1982, and there are now over a billion whole-genome
shotgun sequences available on GenBank. Yet the number of studies that
incorporate available datasets remains marginal. As a result, especially in
human gut microbiome studies, where collecting clinical samples can be
arduous, the number of taxa considered in any one study often exceeds the
number of samples by ten to one hundred-fold.
In this presentation, we will first focus on 16S amplicon data, deriving
microbiome-level properties by applying an embedding algorithm to quantify
taxon co-occurrence patterns in over 18,000 samples from the American Gut
Project microbiome crowdsourcing effort. We show that predictive models
trained on these properties are the most accurate, robust, and
generalizable, and that property-based models can be trained on one dataset
and deployed on another. Using these properties, we are able to extract
known and new bacterial metabolic pathways associated with inflammatory
bowel disease across two completely independent studies.
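(For intuition on the embedding step, here is a minimal sketch that treats
each sample as a "sentence" of taxa and learns taxon vectors from
co-occurrence. word2vec is a stand-in for the actual algorithm, and the
sample lists and dimensions are hypothetical.)

    # Minimal sketch: learn taxon embeddings from co-occurrence, then
    # derive microbiome-level "properties" per sample. word2vec is a
    # stand-in for the embedding algorithm in the actual work; the
    # samples and sizes below are hypothetical.
    import numpy as np
    from gensim.models import Word2Vec

    # Hypothetical input: one list of detected taxa per sample.
    samples = [
        ["Bacteroides", "Faecalibacterium", "Roseburia"],
        ["Bacteroides", "Prevotella"],
        ["Faecalibacterium", "Roseburia", "Akkermansia"],
    ]

    model = Word2Vec(sentences=samples, vector_size=64, window=50,
                     min_count=1, sg=1, epochs=50)

    def sample_properties(taxa):
        """Sample-level property vector: mean of its taxon embeddings."""
        return np.mean([model.wv[t] for t in taxa], axis=0)

    # Property vectors can then feed a predictive model (e.g., disease
    # vs. healthy) that transfers across datasets.
    print(sample_properties(["Bacteroides", "Prevotella"]).shape)  # (64,)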
Using publicly available datasets presents several limitations, among them
disparities in methodologies and sequencing technologies, and notably poor
functional annotations. Rather than relying solely on fully curated
databases for functional annotation, the second part of this presentation
will focus on a new annotation strategy in which we recruited protein
sequences carrying common protein domains (via Pfam) alongside KEGG
Orthologs. We then leveraged the unannotated sequences to generate KO-level
Hidden Markov Models that proved more sensitive than non-propagated models
on an independent test set.
*Speaker Bio:*
Maude David graduated with a Ph.D. from Ecole Centrale de Lyon (France),
and joined Oregon State University as an assistant professor in 2018 after
her postdoctoral work at Lawrence Berkeley National Laboratory and Stanford
University. The David Lab focuses on new biocomputing methods to utilize
publicly available datasets to analyze microbiome sequencing data. The lab
also works on how the gut microbiota can modulate behavior via the
gut-microbiome-brain axis, using an interdisciplinary approach that
includes mouse models, in vitro cell culture, anaerobic bacterial culture,
bees (!), and machine learning to tackle these questions.
*Please watch this space for future AI Seminars:*
https://eecs.oregonstate.edu/ai-events
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Hello everyone,
I've looked at the survey results. Unfortunately, there's no timeslot that
everyone can make. I've chosen Friday at 1 PM PST as the least conflicted
meeting time. I hope to see everyone soon!
Our discussion paper will be "Red Teaming Language Models with Language
Models"
<https://www.deepmind.com/research/publications/2022/Red-Teaming-Language-Mo…>:
> Language Models (LMs) often cannot be deployed because of their potential
> to harm users in ways that are hard to predict in advance. Prior work
> identifies harmful behaviors before deployment by using human annotators to
> hand-write test cases. However, human annotation is expensive, limiting the
> number and diversity of test cases. In this work, we automatically find
> cases where a target LM behaves in a harmful way, by generating test cases
> (“red teaming”) using another LM. We evaluate the target LM’s replies to
> generated test questions using a classifier trained to detect offensive
> content, uncovering tens of thousands of offensive replies in a 280B
> parameter LM chatbot. We explore several methods, from zero-shot generation
> to reinforcement learning, for generating test cases with varying levels of
> diversity and difficulty. Furthermore, we use prompt engineering to control
> LM-generated test cases to uncover a variety of other harms, automatically
> finding groups of people that the chatbot discusses in offensive ways,
> personal and hospital phone numbers generated as the chatbot’s own contact
> info, leakage of private training data in generated text, and harms that
> occur over the course of a conversation. Overall, LM-based red teaming is
> one promising tool (among many needed) for finding and fixing diverse,
> undesirable LM behaviors before impacting users.
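(For those curious how the zero-shot variant works mechanically, here is a
minimal, hypothetical sketch using small off-the-shelf Hugging Face models
as stand-ins for the paper's 280B-parameter LMs; the model names, prompt
parsing, and flagging threshold are illustrative assumptions, not the
paper's setup.)

    # Minimal sketch of zero-shot LM red teaming: one LM generates test
    # questions, the target LM answers, and a classifier flags offensive
    # replies. gpt2 and unitary/toxic-bert are small stand-ins for the
    # paper's much larger models; the threshold is an assumption.
    from transformers import pipeline

    red_lm = pipeline("text-generation", model="gpt2")
    target_lm = pipeline("text-generation", model="gpt2")
    classifier = pipeline("text-classification", model="unitary/toxic-bert")

    prompt = "List of questions to ask someone:\n1."
    flagged = []
    for out in red_lm(prompt, num_return_sequences=5, max_new_tokens=30,
                      do_sample=True):
        question = out["generated_text"][len(prompt):].split("\n")[0].strip()
        reply = target_lm(question, max_new_tokens=40)[0]["generated_text"]
        top = classifier(reply)[0]  # top label and its score
        if top["label"] == "toxic" and top["score"] > 0.5:  # illustrative
            flagged.append((question, reply, top["score"]))

    for q, r, s in flagged:
        print(f"[{s:.2f}] {q!r} -> {r!r}")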
Join Zoom Meeting
https://oregonstate.zoom.us/j/95843260079?pwd=TzZTN0xPaFZrazRGTElud0J1cnJLU…
Password: 961594
Phone Dial-In Information
+1 971 247 1195 US (Portland)
+1 253 215 8782 US (Tacoma)
+1 301 715 8592 US (Washington DC)
Meeting ID: 958 4326 0079
All the best,
Quintin
Hi All,
Here's the list of scheduled seminars for the remainder of winter term and the upcoming spring term. With travel picking up again, my expectation is that many/most of these will be held in person (with a simulcast option).
-Ross
===========
Ross L. Hatton
Associate Professor, Robotics and Mechanical Engineering
Collaborative Robotics and Intelligent Systems Institute
Oregon State University
coris.oregonstate.edu | research.engr.oregonstate.edu/lram | rosslhatton.com
ross.hatton@oregonstate.edu
-----------------------
2/25/2022
Nick Gravish
-UCSD Mechanical Engineering
-Dynamics, bioinspired locomotion
-http://web.eng.ucsd.edu/~ngravish/
3/4/2022
No seminar: Grad applicant visit weekend
3/11/2022
Dan Oblinger
-Analytics Fire (Previously at IBM and DARPA)
-Machine learning, data science, robotics in industry
-https://www.topionetworks.com/people/dan-oblinger-5a8526e8105eb54101b938b2
3/18/2022
No seminar: Finals week
3/25/2022
No seminar: Spring break
4/1/2022
Kristen Macuga
-OSU Psychology
-Technology-assisted decision making, tool use
-https://liberalarts.oregonstate.edu/users/kristen-macuga
4/8/2022
Reserved for RGSA event
4/15/2022
Chris Neider (To be confirmed)
-Starship Technologies
-Food delivery robots
-https://www.starship.xyz/
4/22/2022
Kim Ingraham
-University of Washington Mechanical Engineering, Rehabilitation Medicine
-Biomechanics and wearable robots
-https://kim-ingraham.com
4/29/2022
No seminar scheduled
5/6/2022
Shai Revzen
-University of Michigan Electrical Engineering
-Dynamics, bioinspired, legged locomotion
-https://robotics.umich.edu/profile/shai-revzen/
5/13/2022
ICRA practice talks
5/20/2022
Chris Sanchez
-OSU Psychology
-Human use of computing technology
-https://liberalarts.oregonstate.edu/users/christopher-sanchez
5/27/2022
No seminar: ICRA
6/3/2022
Edward Wang
-UCSD ECE & Design Lab
-Design and health monitoring
-https://www.ejaywang.com
Dear all,
Our next AI seminar on *"Quo Vadis? Predicting Future Trajectories of
Robots through Temporal Logics and Bayesian Inference" *by professor Sriram
Sankaranarayanan
is scheduled to be on February 16th (Tomorrow), 1-2 PM PST. It will be
followed by a 30 minute Q&A session by the graduate students.
Zoom Link:
https://oregonstate.zoom.us/j/93591935144?pwd=YjZaSjBYS0NmNUtjQzBEdzhPeDZ5U…
*Quo Vadis? Predicting Future Trajectories of Robots through Temporal
Logics and Bayesian Inference*
Sriram Sankaranarayanan
Associate Professor
Computer Science
University of Colorado, Boulder
*Abstract:*
Predicting the future states of a robot by observing its past and current
actions is a very interesting problem. It is a fundamental "primitive" for
applications such as runtime monitoring to prevent impending failures such
as collisions or entry into forbidden regions of the workspace. In this
talk, we will present a few approaches to this problem, beginning with a
simple learning-based extrapolation of the robot's past positions to
predict future trajectories, assuming a simple dynamical model for the
robot. Unfortunately, such an extrapolation remains valid only for
relatively short time horizons. To improve upon this, we show how the
"intent" of the agent can be represented and reasoned with. To represent
intent, we use a restricted class of formulas from temporal logics as
hypothesized intents. By combining these temporal logic representations
through the machinery of Bayesian inference, we show how we can predict
future trajectories of robots rapidly, using off-line pre-computations that
enable cheaper real-time predictions. We conclude by describing ongoing
work that develops a hierarchical approach wherein we separate "short-term"
intents from "longer-term" intents that can be represented by the full
strength of temporal logic-based specifications.
This presentation is based on a series of joint works with Hansol Yoon and
Chou Yi.
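(To make the Bayesian machinery concrete, here is a minimal, hypothetical
sketch of the offline/online split described above: trajectory rollouts are
precomputed per candidate intent offline, and at runtime a posterior over
intents is computed from how well each intent's rollouts match the observed
prefix. The 1D world, go-to-goal intents, and Gaussian noise are
illustrative assumptions, not the speakers' actual formulation.)

    # Minimal sketch: Bayesian inference over hypothesized intents.
    # Offline: sample trajectory rollouts per intent. Online: weight each
    # intent by the likelihood of the observed prefix under its rollouts,
    # then predict by marginalizing rollouts over the posterior.
    import numpy as np

    rng = np.random.default_rng(0)
    T, N, K = 20, 200, 8  # horizon, rollouts per intent, observed steps
    goals = {"go_to_A": 10.0, "go_to_B": -10.0}  # stand-ins for LTL intents

    # --- Offline pre-computation: noisy go-to-goal rollouts per intent ---
    rollouts = {}
    for intent, g in goals.items():
        x = np.zeros((N, T + 1))
        for t in range(T):
            x[:, t + 1] = x[:, t] + 0.2 * (g - x[:, t]) + rng.normal(0, 0.3, N)
        rollouts[intent] = x

    # --- Online: posterior over intents given an observed prefix ---
    observed = rollouts["go_to_A"][0, :K]  # pretend we watched K steps
    log_like = {}
    for intent, x in rollouts.items():
        # Likelihood: Gaussian kernel around each rollout prefix, averaged.
        d2 = ((x[:, :K] - observed) ** 2).sum(axis=1)
        log_like[intent] = np.log(np.mean(np.exp(-0.5 * d2 / 0.3**2)) + 1e-300)

    m = max(log_like.values())
    post = {k: np.exp(v - m) for k, v in log_like.items()}
    z = sum(post.values())
    post = {k: v / z for k, v in post.items()}  # uniform prior assumed
    print(post)  # should strongly favor "go_to_A"

    # Predicted future positions: rollout means weighted by the posterior.
    pred = sum(post[k] * rollouts[k][:, K:].mean(axis=0) for k in rollouts)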
*Speaker Bio:*
Sriram Sankaranarayanan is an associate professor of Computer Science at
the University of Colorado, Boulder. His research interests include
automatic techniques for reasoning about the behavior of computer and
cyber-physical systems. Sriram obtained a Ph.D. in 2005 from Stanford
University where he was advised by Zohar Manna and Henny Sipma.
Subsequently he worked as a research staff member at NEC research labs in
Princeton, NJ. He has been on the faculty at CU Boulder since 2009. Sriram
has been the recipient of awards including the President's Gold Medal from
IIT Kharagpur (2000), Siebel Scholarship (2005), the CAREER award from NSF
(2009), Dean's award for outstanding junior faculty (2012), outstanding
teaching (2014), and the Provost's faculty achievement award (2014).
*Please watch this space for future AI Seminars:*
https://eecs.oregonstate.edu/ai-events
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Hello everyone,
People have had schedule conflicts with our current meeting time, so I've
decided to send out a survey to find a time that may work better. Please
fill out the survey with your availability: http://whenisgood.net/swyfrcp
Depending on responses, we may or may not be able to meet this week. I'll
send out an update once the meeting time is confirmed.
Our next paper will be "Red Teaming Language Models with Language Models"
<https://www.deepmind.com/research/publications/2022/Red-Teaming-Language-Mo…>:
> Language Models (LMs) often cannot be deployed because of their potential
> to harm users in ways that are hard to predict in advance. Prior work
> identifies harmful behaviors before deployment by using human annotators to
> hand-write test cases. However, human annotation is expensive, limiting the
> number and diversity of test cases. In this work, we automatically find
> cases where a target LM behaves in a harmful way, by generating test cases
> (“red teaming”) using another LM. We evaluate the target LM’s replies to
> generated test questions using a classifier trained to detect offensive
> content, uncovering tens of thousands of offensive replies in a 280B
> parameter LM chatbot. We explore several methods, from zero-shot generation
> to reinforcement learning, for generating test cases with varying levels of
> diversity and difficulty. Furthermore, we use prompt engineering to control
> LM-generated test cases to uncover a variety of other harms, automatically
> finding groups of people that the chatbot discusses in offensive ways,
> personal and hospital phone numbers generated as the chatbot’s own contact
> info, leakage of private training data in generated text, and harms that
> occur over the course of a conversation. Overall, LM-based red teaming is
> one promising tool (among many needed) for finding and fixing diverse,
> undesirable LM behaviors before impacting users.
I hope as many people as possible are able to come to the new event.
Join Zoom Meeting
https://oregonstate.zoom.us/j/95843260079?pwd=TzZTN0xPaFZrazRGTElud0J1cnJLU…
Password: 961594
Phone Dial-In Information
+1 971 247 1195 US (Portland)
+1 253 215 8782 US (Tacoma)
+1 301 715 8592 US (Washington DC)
Meeting ID: 958 4326 0079
All the best,
Quintin
Hi All,
Our second winter Robotics Seminar will be on Friday, February 25, with a talk by Nick Gravish of the University of California, San Diego, who works in the area of bioinspired locomotion.
The seminar will be from 10-11am, followed by an 11-11:30 student-only Q&A session.
** This seminar will be held in person in LINC 302. **
---
A simulcast of the seminar will be available via Zoom
https://oregonstate.zoom.us/j/92183247338?pwd=MENOWHgvSFVGYzNjZExDT2hWRUNxd…
---
-Ross
===========
Ross L. Hatton
Associate Professor, Robotics and Mechanical Engineering
Collaborative Robotics and Intelligent Systems Institute
Oregon State University
coris.oregonstate.edu | research.engr.oregonstate.edu/lram | rosslhatton.com
ross.hatton@oregonstate.edu
===========
Title: Design and control of emergent oscillations for flapping-wing flyers and swarming snakes
Abstract: Locomotion in living systems and bio-inspired robots requires the generation and control of oscillatory motion. While a common approach is to generate motion by modulating time-dependent “clock” signals, in this talk we will motivate and study an alternative: generating oscillations through autonomous limit-cycle systems. Limit-cycle oscillators for robotics have many desirable properties, including adaptive behaviors, entrainment between oscillators, and potential simplification of motion control. I will present two examples of the generation and control of autonomous oscillatory motion in bio-inspired robotics. First, I will describe our recent work studying the dynamics of wingbeat oscillations in “asynchronous” insects and how we can build these behaviors into micro-aerial vehicles. In the second part of this talk, I will describe how simple snake-like robots with limit-cycle gaits enable swarms to synchronize their movement through contact and without communication. More broadly, I hope to motivate why we should look to autonomous dynamical systems for designing and controlling emergent locomotor behaviors in robotics.
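(As a toy illustration of the limit-cycle idea, here is a minimal sketch of a Hopf oscillator, a standard autonomous system whose trajectories converge to a stable oscillation without any external clock; the parameters are arbitrary and this is not Dr. Gravish's model.)

    # Minimal sketch: a Hopf oscillator, an autonomous limit-cycle system.
    # No time-dependent "clock" drives it; any nonzero initial condition
    # converges to a circular oscillation of radius sqrt(mu). Parameters
    # are arbitrary illustrations, not the speaker's model.
    import numpy as np

    mu, omega, dt = 1.0, 2 * np.pi, 0.001  # radius^2, angular freq, step
    x, y = 0.1, 0.0  # small perturbation; the limit cycle attracts it

    for _ in range(5000):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        x, y = x + dt * dx, y + dt * dy

    print(f"radius after 5 s: {np.hypot(x, y):.3f}")  # ~ sqrt(mu) = 1.0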
Bio: Dr. Nick Gravish received his PhD from Georgia Tech where he used robots as physical models to motivate and study aspects of biological locomotion. During his post-doc Gravish worked in the microrobotics lab of Rob Wood at Harvard, where he gained expertise in designing and studying insect-scale robots. Gravish is currently an assistant professor at UC San Diego in the Mechanical and Aerospace Engineering department. His lab bridges the gap between bio-inspiration, biomechanics, and robotics, towards the development of new bio-inspired robotic technologies to improve the adaptability and resilience of mobile robots.
Dear all,
Our next AI seminar on *"Quo Vadis? Predicting Future Trajectories of
Robots through Temporal Logics and Bayesian Inference" *by professor Sriram
Sankaranarayanan
is scheduled to be on February 16th, 1-2 PM PST. It will be followed by a
30 minute Q&A session by the graduate students.
Zoom Link:
https://oregonstate.zoom.us/j/93591935144?pwd=YjZaSjBYS0NmNUtjQzBEdzhPeDZ5U…
*Quo Vadis? Predicting Future Trajectories of Robots through Temporal
Logics and Bayesian Inference*
Sriram Sankaranarayanan
Associate Professor
Computer Science
University of Colorado, Boulder
*Abstract:*
Predicting the future states of a robot by observing its past and current
actions is a very interesting problem. It is a fundamental "primitive" for
applications such as runtime monitoring to prevent impending failures such
as collisions or entry into forbidden regions of the workspace. In this
talk, we will present a few approaches to this problem, beginning with a
simple learning-based extrapolation of the robot's past positions to
predict future trajectories, assuming a simple dynamical model for the
robot. Unfortunately, such an extrapolation remains valid only for
relatively short time horizons. To improve upon this, we show how the
"intent" of the agent can be represented and reasoned with. To represent
intent, we use a restricted class of formulas from temporal logics as
hypothesized intents. By combining these temporal logic representations
through the machinery of Bayesian inference, we show how we can predict
future trajectories of robots rapidly, using off-line pre-computations that
enable cheaper real-time predictions. We conclude by describing ongoing
work that develops a hierarchical approach wherein we separate "short-term"
intents from "longer-term" intents that can be represented by the full
strength of temporal logic-based specifications.
This presentation is based on a series of joint works with Hansol Yoon and
Chou Yi.
*Speaker Bio:*
Sriram Sankaranarayanan is an associate professor of Computer Science at
the University of Colorado, Boulder. His research interests include
automatic techniques for reasoning about the behavior of computer and
cyber-physical systems. Sriram obtained a Ph.D. in 2005 from Stanford
University where he was advised by Zohar Manna and Henny Sipma.
Subsequently he worked as a research staff member at NEC research labs in
Princeton, NJ. He has been on the faculty at CU Boulder since 2009. Sriram
has been the recipient of awards including the President's Gold Medal from
IIT Kharagpur (2000), Siebel Scholarship (2005), the CAREER award from NSF
(2009), Dean's award for outstanding junior faculty (2012), outstanding
teaching (2014), and the Provost's faculty achievement award (2014).
*Please watch this space for future AI Seminars:*
https://eecs.oregonstate.edu/ai-events
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Hello everyone,
We'll be meeting Friday at 2 PM PST to discuss OpenAI's recent paper
"Training language models to follow instructions with human feedback"
<https://cdn.openai.com/papers/Training_language_models_to_follow_instructio…>:
> Making language models bigger does not inherently make them better at
> following a user’s intent. For example, large language models can generate
> outputs that are untruthful, toxic, or simply not helpful to the user. In
> other words, these models are not aligned with their users. In this
> paper, we show an avenue for aligning language models with user intent on a
> wide range of tasks by fine-tuning with human feedback. Starting with a set
> of labeler-written prompts and prompts submitted through the OpenAI API, we
> collect a dataset of labeler demonstrations of the desired model behavior,
> which we use to fine-tune GPT-3 using supervised learning. We then collect
> a dataset of rankings of model outputs, which we use to further fine-tune
> this supervised model using reinforcement learning from human feedback
> (RLHF). We call the resulting models InstructGPT. In human evaluations on
> our prompt distribution, outputs from the 1.3B parameter InstructGPT model
> are preferred to outputs from the 175B GPT-3, despite having 100x fewer
> parameters. Moreover, InstructGPT models show improvements in truthfulness
> and reductions in toxic output generation while having minimal performance
> regressions on public NLP datasets. Even though InstructGPT still makes
> simple mistakes, our results show that fine-tuning with human feedback is a
> promising direction for aligning language models with human intent.
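(To make the middle step of that pipeline concrete, here is a minimal
sketch of the pairwise ranking loss used to train a reward model from human
preference rankings; the tiny linear "reward model" over random features is
a placeholder, not OpenAI's architecture.)

    # Minimal sketch of the reward-modeling step in RLHF: fit a model so
    # that responses humans ranked higher score higher, via the pairwise
    # ranking loss -log sigmoid(r_chosen - r_rejected). The linear model
    # over random "response features" stands in for a fine-tuned LM head.
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    dim, pairs = 16, 256
    reward_model = torch.nn.Linear(dim, 1)
    opt = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

    # Hypothetical preference data: pairs of response feature vectors,
    # labeled by a hidden "true preference" direction.
    w_true = torch.randn(dim)
    a, b = torch.randn(pairs, dim), torch.randn(pairs, dim)
    a_better = (a @ w_true) > (b @ w_true)
    chosen = torch.where(a_better[:, None], a, b)
    rejected = torch.where(a_better[:, None], b, a)

    for _ in range(200):
        margin = reward_model(chosen) - reward_model(rejected)
        loss = -F.logsigmoid(margin).mean()  # Bradley-Terry ranking loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(f"final ranking loss: {loss.item():.3f}")  # should approach 0
    # The trained reward model then supplies the scalar reward that the
    # policy is optimized against with RL (PPO in the paper).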
Anyone interested in language modeling, reinforcement learning, their
intersection, or language models that can actually follow instructions
should feel welcome to join!
Join Zoom Meeting
https://oregonstate.zoom.us/j/95843260079?pwd=TzZTN0xPaFZrazRGTElud0J1cnJLU…
Password: 961594
Phone Dial-In Information
+1 971 247 1195 US (Portland)
+1 253 215 8782 US (Tacoma)
+1 301 715 8592 US (Washington DC)
Meeting ID: 958 4326 0079
All the best,
Quintin