Dear all,
Our next AI seminar on *"Some Perspectives on Stochastic Gradient
Learning and an Application in Neuroprosthesis"* by John Mathews is scheduled
for April 14th (Friday), 1-2 PM PST (Add to Google Calendar
<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcalendar.…>
). It will be followed by a 30-minute Q&A session with the graduate
students.
*Location: CRPS 122 (Location changed from the previous seminar)*
*Some Perspectives on Stochastic Gradient Learning and an Application in
Neuroprosthesis*
V John Mathews
Professor, Electrical and Computer Engineering
Oregon State University
*Abstract:*
This talk will be divided into two parts. The first part will involve some
signal processing/control theory-based perspectives on stochastic gradient
learning. In particular, we will discuss a lowpass-regularized framework
for accelerated learning that contains many momentum-based learning
algorithms as special cases. We will discuss how a control systems
perspective may be employed to derive parameter update algorithms and
characterize their learning behavior. The second part of this talk will
describe efforts to interpret human motor intent from bioelectrical
signals. The estimated movement intent may be used to intuitively control
(i.e., control by thought) prosthetic limbs. We will present a
semi-supervised online learning algorithm for movement intent estimation
and share experimental results demonstrating the ability of this approach
to tackle time-varying characteristics of the neuro-musculoskeletal system.
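To give a flavor of the lowpass view, here is a minimal sketch (my own illustration under simplifying assumptions, not code from the talk): passing each stochastic gradient through a first-order lowpass filter before the update yields a momentum-style algorithm, and setting the filter coefficient to zero recovers plain SGD.
```python
import numpy as np

def lowpass_sgd(grad_fn, w0, lr=0.1, beta=0.9, steps=200):
    """Sketch: filter each stochastic gradient with a first-order
    lowpass filter (an exponential moving average) before updating.
    beta=0 gives plain SGD; beta>0 gives a momentum-style method,
    one special case of a lowpass-regularized framework."""
    w = np.asarray(w0, dtype=float)
    g_filt = np.zeros_like(w)                      # filter state
    for _ in range(steps):
        g = grad_fn(w)                             # noisy gradient sample
        g_filt = beta * g_filt + (1.0 - beta) * g  # lowpass filtering
        w -= lr * g_filt                           # parameter update
    return w

# Toy objective f(w) = ||w||^2 with additive gradient noise.
rng = np.random.default_rng(0)
grad = lambda w: 2.0 * w + 0.1 * rng.standard_normal(w.shape)
print(lowpass_sgd(grad, w0=np.ones(3)))
```
Viewed this way, the filter's frequency response determines how much gradient noise is attenuated, which is where a control-systems analysis can enter.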
*Speaker Bio:*
V John Mathews is a professor in the School of Electrical Engineering and
Computer Science at Oregon State University. He received his Ph.D. and
M.S. degrees in electrical and computer engineering from the University of
Iowa, Iowa City, Iowa in 1984 and 1981, respectively, and the B.E. (Hons.)
degree in electronics and communication engineering from the Regional
Engineering College (now National Institute of Technology),
Tiruchirappalli, India in 1980. Prior to 2015, he was with the Department
of Electrical & Computer Engineering at the University of Utah. He served
as the chairman of the ECE department at Utah from 1999 to 2003, and as the
head of the School of Electrical Engineering and Computer Science at Oregon
State from 2015 to 2017. His current research interests are in nonlinear
and adaptive signal processing and application of signal processing and
machine learning techniques in neural engineering, biomedicine, and
structural health management. Mathews is a Fellow of the IEEE. He has held
many leadership positions in the IEEE Signal Processing Society. He is a
recipient of the 2008-09 Distinguished Alumni Award from the National
Institute of Technology, Tiruchirappalli, India, the IEEE Utah Section’s
Engineer of the Year Award in 2010, and the Utah Engineers Council’s
Engineer of the Year Award in 2011. He was a Distinguished Lecturer of the
IEEE Signal Processing Society for 2013 and 2014, and is the recipient of
the 2014 IEEE Signal Processing Society Meritorious Service Award.
*Please watch this space for future AI Seminars:*
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fengineeri…
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"Great Haste Makes Great Waste: Exploiting and
Attacking Efficient Deep Learning" *by Sanghyun Hong is scheduled to be on
April 7th (Tomorrow), 1-2 PM PST (Add to Google Calendar
<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcalendar.…>
). It will be followed by a 30-minute Q&A session with the graduate
students.
*Location: KEAR 212*
*Great Haste Makes Great Waste: Exploiting and Attacking Efficient Deep
Learning*
Sanghyun Hong
Assistant Professor, Computer Science
Oregon State University
*Abstract:*
Recent increases in the computational demands of deep neural networks have
sparked interest in efficient deep learning mechanisms, such as neural
network quantization or input-adaptive multi-exit inferences. Those
mechanisms provide significant computational savings while preserving a
model's accuracy, making it practical to run commercial-scale models in
resource-constrained settings. However, most methods focus on
"hastiness," i.e., how fast and efficiently they get correct predictions, and
overlook the security vulnerabilities that can "waste" their practicality.
In this talk, I will revisit efficient deep learning from a security
perspective and introduce emerging research on exploiting and attacking
these mechanisms to achieve malicious objectives. First, I will show how an adversary
can exploit neural network quantization to induce malicious behaviors. An
adversary can manipulate a pre-trained model to behave maliciously upon
quantization. Next, I will show how input-adaptive mechanisms, such as
multi-exit models, fail to promise computational efficiency in adversarial
settings. By adding human-imperceptible input perturbations, an attacker
can completely offset the computational savings provided by these
input-adaptive models. Finally, I will conclude my talk by encouraging the
audience to examine efficient deep learning practices with an adversarial
lens and discuss future research directions for building defense
mechanisms. I believe that this is the best moment to listen to Benjamin
Franklin's advice: "Take time for all things."
*Speaker Bio:*
Sanghyun Hong is an Assistant Professor of Computer Science at Oregon State
University. He works on building trustworthy and socially responsible AI
systems for the future. He is the recipient of the Samsung Global Research
Outreach (GRO) Award in 2022 and was selected as a DARPA Riser in 2022. He was also an
invited speaker at USENIX Enigma 2021, where he talked about practical
hardware attacks on deep learning. He earned his Ph.D. at the University of
Maryland, College Park, under the guidance of Prof. Tudor Dumitras. He was
also a recipient of the Ann G. Wylie Dissertation Fellowship. He received
his B.S. at Seoul National University.
*Please watch this space for future AI Seminars:*
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fengineeri…
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Hi Folks,
I would like to mention that our special topic course “ECE599/AI539 Matrix Analysis for Signal Processing and Machine Learning” has been converted to a regular-numbered course, “ECE586/AI586 Applied Matrix Analysis”.
The course contents can be seen in the attached slides. The new regular-numbered course will be offered in Spring 2023. For those interested in machine learning/signal processing theory and algorithms (and their deep connections with linear algebra), please consider signing up.
Best,
Xiao
=======================
Xiao Fu, Assistant Professor
School of Electrical Engineering and Computer Science
Oregon State University
Corvallis OR 97331
3003 Kelley Engineering Center
Homepage: https://web.engr.oregonstate.edu/~fuxia/
Dear all,
Our next AI seminar on *"The mathematics of neural networks: recent
advances, thoughts, and the path forward"* by Mikhail Belkin is scheduled
for February 24th (Friday), 1-2 PM PST (Add to Google Calendar
<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcalendar.…>
). It will be followed by a 30-minute Q&A session with the graduate
students.
*Location: Rogers 230*
*The mathematics of neural networks: recent advances, thoughts, and the
path forward*
Mikhail Belkin
Professor
University of California, San Diego
*Abstract:*
The recent remarkable practical achievements of neural networks have far
outpaced our theoretical understanding of their properties. Yet it is hard
to imagine that progress can continue indefinitely without a deeper
understanding of their fundamental principles and limitations. In this talk
I will discuss some recent advances in the mathematics of neural networks,
including some of our recent work, and outline what are, in my opinion,
some promising directions for future research.
*Speaker Bio:*
Mikhail Belkin received his Ph.D. in 2003 from the Department of
Mathematics at the University of Chicago. His research interests are in
theory and applications of machine learning and data analysis. Some of his
well-known work includes widely used Laplacian Eigenmaps, Graph
Regularization and Manifold Regularization algorithms, which brought ideas
from classical differential geometry and spectral analysis to data science.
His recent work has been concerned with understanding remarkable
mathematical and statistical phenomena observed in deep learning. This
empirical evidence necessitated revisiting some of the basic concepts in
statistics and optimization. One of his key recent findings is the "double
descent" risk curve that extends the textbook U-shaped bias-variance
trade-off curve beyond the point of interpolation. He has served on the
editorial boards of the Journal of Machine Learning Research, IEEE
Transactions on Pattern Analysis and Machine Intelligence, and the SIAM
Journal on Mathematics of Data Science.
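As a hedged illustration of the "double descent" curve mentioned above (a toy sketch of my own, not taken from Prof. Belkin's papers): fitting minimum-norm polynomial regressors of growing degree to a few noisy points typically shows test error rising toward the interpolation threshold (degree + 1 ≈ number of training points) and falling again beyond it.
```python
import numpy as np

rng = np.random.default_rng(1)
n_train = 15
x_tr = rng.uniform(-1, 1, n_train)
y_tr = np.sin(np.pi * x_tr) + 0.3 * rng.standard_normal(n_train)
x_te = np.linspace(-1, 1, 200)
y_te = np.sin(np.pi * x_te)

# Legendre features keep the design matrix numerically well behaved.
phi = lambda x, degree: np.polynomial.legendre.legvander(x, degree)

for degree in (2, 5, 14, 30, 100):
    # lstsq returns the minimum-norm fit once degree + 1 > n_train,
    # i.e., past the interpolation threshold where double descent appears.
    coef, *_ = np.linalg.lstsq(phi(x_tr, degree), y_tr, rcond=None)
    test_mse = np.mean((phi(x_te, degree) @ coef - y_te) ** 2)
    print(f"degree={degree:3d}  test MSE={test_mse:.3f}")
```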
*Please watch this space for future AI Seminars:*
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fengineeri…
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"**The Data Pyramid for Building Generalist Agents*
*" *by Yuke Zhu is scheduled to be on February 17th (Tomorrow), 1-2 PM PST (Add
to Google Calendar
<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcalendar.…>
). It will be followed by a 30-minute Q&A session with the graduate
students.
*Location: Rogers 230*
*The Data Pyramid for Building Generalist Agents*
Yuke Zhu
Assistant Professor
UT-Austin
*Abstract:*
Recent advances in AI and Machine Learning have enabled great strides in
developing robust and adaptive agents in the real world. Nonetheless,
unlike the recent remarkable multi-task consolidations in Natural Language
Processing and Computer Vision, today’s Embodied AI research has mainly
focused on building siloed systems for narrow tasks. We argue that the crux
of building generalist agents is harnessing massive, diverse, and
multimodal data altogether. This talk will examine various sources of
data available for training embodied agents, from Internet-scale
corpora to task demonstrations. We will discuss the complementary values
and limitations of these data in a pyramid structure and introduce our
recent efforts in building generalist agents with this data pyramid.
*Speaker Bio:*
Yuke Zhu is an Assistant Professor in the Computer Science Department at
UT-Austin, where he directs the Robot Perception and Learning Lab. He is
also a core faculty member at Texas Robotics and a senior research
scientist at NVIDIA. His research lies at the intersection of robotics,
machine learning, and computer vision. He received his Master's and Ph.D.
degrees from Stanford University. His work has won several awards and
nominations, including the Best Conference Paper Award at ICRA 2019,
Outstanding Learning Paper at ICRA 2022, Outstanding Paper at NeurIPS 2022,
and Best Paper finalist nominations at IROS 2019 and 2021. He is the
recipient of the NSF CAREER Award and Amazon Research Awards.
*Please watch this space for future AI Seminars:*
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fengineeri…
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.
Dear all,
Our next AI seminar on *"AI can learn from data. But can it learn to
reason?" *by Guy Van den Broeck is scheduled to be on February 3rd
(Tomorrow), 1-2 PM PST (Add to Google Calendar
<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcalendar.…>
). It will be followed by a 30-minute Q&A session with the graduate
students.
*Location: Rogers 230*
*AI can learn from data. But can it learn to reason?*
Guy Van den Broeck
Associate Professor, Computer Science
University of California, Los Angeles
*Abstract:*
Modern artificial intelligence, and deep learning in particular, is
extremely capable at learning predictive models from vast amounts of data.
Many expect that AI will go from powering customer service chatbots to
providing mental health services, and from personalized advertisement to
deciding who is given bail. The expectation is that AI will solve society’s
problems by simply being more intelligent than we are.
Implicit in this bullish perspective is the assumption that AI will
naturally learn to reason from data: that it can form trains of thought
that “make sense,” similar to how a mental health professional, a judge, or
a lawyer might reason about a case, or more formally, how a mathematician
might prove a theorem. This talk will investigate whether this
behavior can be learned from data, and how we can design the next
generation of artificial intelligence techniques that can achieve such
capabilities, focusing on neuro-symbolic learning and tractable deep
generative models.
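For one concrete, hypothetical flavor of the neuro-symbolic direction (my own toy sketch, in the spirit of semantic-loss-style training rather than code from the talk): when a network's outputs must satisfy a logical constraint, one can compute the probability that the constraint holds under the predicted distribution and minimize its negative log-likelihood, keeping training end-to-end differentiable.
```python
import torch

def constraint_prob_exactly_one(probs):
    """Probability that exactly one of n independent Bernoulli outputs is
    true: sum_i p_i * prod_{j != i} (1 - p_j). Tractable here because the
    constraint is simple; tractable circuits generalize this idea."""
    n = probs.shape[-1]
    total = 0.0
    for i in range(n):
        term = probs[..., i]
        for j in range(n):
            if j != i:
                term = term * (1 - probs[..., j])
        total = total + term
    return total

# Semantic-loss-style penalty: -log P(constraint satisfied).
logits = torch.randn(4, 3, requires_grad=True)   # batch of 4, 3 outputs
probs = torch.sigmoid(logits)
loss = -torch.log(constraint_prob_exactly_one(probs)).mean()
loss.backward()                                  # differentiable end to end
print(loss.item())
```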
*Speaker Bio:*
Guy Van den Broeck is an Associate Professor and Samueli Fellow at UCLA, in
the Computer Science Department, where he directs the Statistical and
Relational Artificial Intelligence (StarAI) lab. His research interests are
in Machine Learning, Knowledge Representation and Reasoning, and Artificial
Intelligence in general. His papers have been recognized with awards from
key conferences such as AAAI, UAI, KR, OOPSLA, and ILP. Guy is the
recipient of an NSF CAREER award, a Sloan Fellowship, and the IJCAI-19
Computers and Thought Award.
*Please watch this space for future AI Seminars:*
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fengineeri…
Rajesh Mangannavar,
Graduate Student
Oregon State University
----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly
encouraged.