
Dear all,

Reminder that our next AI seminar is scheduled for Friday, February 14th.

Talk details:

AI Seminar: Understanding the promises and limits of fine-tuning
Speaker: Dr. Aditi Raghunathan, Assistant Professor of Computer Science, Carnegie Mellon University
Time: 2:00 PM
Location: KEC 1001 and Zoom
Zoom link: https://oregonstate.zoom.us/s/98357211915

Talk Abstract: In recent years, foundation models (large pretrained models that can be adapted to a wide range of tasks) have achieved state-of-the-art performance across a variety of tasks. The adaptation or fine-tuning process is a crucial component that enables specialization to the task of interest, and it is the de facto standard for mitigating risks such as toxic and harmful generations from large language models. While pretrained models are trained on broad data, the adaptation (or fine-tuning) process is often performed on limited, well-curated data. How well does fine-tuning generalize beyond this narrow training distribution? Via theory and experiments, we show how to improve current fine-tuning approaches so that they better leverage diverse pretraining knowledge and improve downstream performance across settings broader than the narrow fine-tuning data. On the flip side, we show that pretrained knowledge can be hard to remove, underlining the potential perils of overreliance on fine-tuning for safety.

Speaker Bio: Aditi Raghunathan is an Assistant Professor in the Computer Science Department at CMU. She received her PhD from Stanford in 2021 and her Bachelor of Technology from IIT Madras in 2016. She is a recipient of the Okawa Research Grant, the Schmidt AI2050 Early Career Fellowship, the Google Research Scholar Award, Rising Stars in EECS, the Google PhD Fellowship, the Open Philanthropy AI Fellowship, the Stanford School of Engineering Fellowship, and the Google Anita Borg Memorial Fellowship. She was featured in the Forbes 30 Under 30 list for her contributions to reliable machine learning. Her PhD thesis was awarded the Arthur Samuel Best Thesis Award at Stanford, and her research has also been recognized with multiple orals and spotlights at top conferences and a Best Paper Award at the Data Problems in ML Workshop at ICLR 2024.

For future AI seminars, please visit: https://engineering.oregonstate.edu/EECS/research/AI-seminars

Best,
Christian Abou Mrad
Graduate Student
Oregon State University