
Dear all,

Our next AI seminar is scheduled for Friday, April 26th, 2-3 PM.

Seminar location: BEXL 320
Student meeting: 3-3:30 PM, KEC 2057 (sign up here: https://docs.google.com/spreadsheets/d/1B7jr4V8FHXTCq9EU_rHxd6UIeLL1c5gLFxDycdYGGds/edit?usp=sharing)
Zoom link: https://oregonstate.zoom.us/j/91611213801?pwd=Wm9JSkN1eW84RUpiS2JEd0E5TEVkdz09

Towards Safe and Actionable AI: Strategies for Robust Adaptation and Proactive Failure Detection

Jay Thiagarajan
Machine Learning Researcher
Machine Intelligence Group
Lawrence Livermore National Labs

Abstract: As AI technologies and large-scale models continue to integrate into critical applications, prioritizing the robustness and safety of model design has become imperative. Current approaches focus on refining pre-trained models to establish personalized decision rules, enabling reliable predictions and supporting decision-making. The inherent challenges in the generalization and safety of these protocols have sparked increased research interest in characterizing model behavior (e.g., diagnosing statistical biases, detecting distribution shifts, predicting generalization capabilities, and evaluating confidence levels), as well as in effectively leveraging general-purpose knowledge sources (e.g., pre-trained representations, generative models, and multimodal embeddings) to devise safer solutions. This presentation will delve into innovative strategies for securely and efficiently adapting predictive models, followed by an exploration of methods for proactively detecting failure modes in classification and regression models.

Speaker Bio: Jay Thiagarajan is a machine learning researcher in the Machine Intelligence Group at Lawrence Livermore National Labs. His research broadly spans deep learning, AI/ML safety, generative AI, and human-centric evaluation. He received his PhD from Arizona State University. He has served as the PI for projects funded by the DOE, DARPA, and the Office of Science on representation learning, high-dimensional sampling, uncertainty quantification, and knowledge-driven ML. He has received the LLNL early career award, the WCI gold award for his contributions to the COVID-19 efforts organized by the CDC, and multiple best paper awards. He serves on the applied math visioning committee of the DOE Advanced Scientific Computing Research program and is part of the ML Commons initiative.
Please watch this space for future AI seminars: https://engineering.oregonstate.edu/EECS/research/AI

Rajesh Mangannavar, Graduate Student
Oregon State University

----
AI Seminar Important Reminders:
-> For graduate students in the AI program, attendance is strongly encouraged