
Dear all,

Reminder that our next AI seminar is scheduled for tomorrow, Friday, April 18th.

Talk details:

AI Seminar: Efficiently Adapting Large Language Models for Safety, Multilinguality, and Long Context
Speaker: Thien Huu Nguyen, Associate Professor, University of Oregon
Time: 2:00 PM
Location: LINC 302 and Zoom
Zoom link: https://oregonstate.zoom.us/s/98357211915

Talk Abstract:

Large Language Models (LLMs) have become essential to artificial intelligence, showcasing impressive fluency, reasoning, and adaptability. However, as these models scale to hundreds of billions of parameters, adapting them to new learning settings or desired behaviors can be costly, often requiring substantial data and compute for fine-tuning. In this talk, I will present three of our recent works on developing efficient methods for adapting LLMs to new learning scenarios and expected behaviors. I will highlight three key challenges: ensuring safe responses, enhancing multilingual capabilities for low-resource languages, and improving long-context efficiency. For each, I will introduce our approach to efficient LLM adaptation, featuring activation editing, encoder-decoder combination, and linear models. I will conclude with our vision for future work.

Speaker Bio:

Thien Huu Nguyen is an Associate Professor in the Department of Computer Science at the University of Oregon. He obtained his Ph.D. in Natural Language Processing (NLP) from New York University, advised by Professors Ralph Grishman and Kyunghyun Cho, and completed a postdoctoral fellowship with Professor Yoshua Bengio at the University of Montreal. Thien's research focuses on Information Extraction, Text Analysis, Large Language Models (LLMs), Multilingual Learning, Representation Learning, and Deep Learning. He pioneered some of the initial deep learning models for Entity Recognition, Relation Extraction, Event Extraction, and Event Coreference Resolution in NLP. His recent work explores efficient LLMs and multilingual/multi-domain NLP models that achieve robust performance across diverse languages and domains. At the University of Oregon, Thien directs the NSF IUCRC Center for Big Learning (CBL). He has received the NSF CAREER Award for his work on Multilingual Learning for Information Extraction and was recognized as an AI 2000 Most Influential Scholar (Honorable Mention) in NLP. His toolkits for multilingual NLP and acronym identification/disambiguation won Outstanding and Best Demo Paper Awards at EACL. Thien was also an IBM Ph.D. Fellow, and his research has been supported by NSF, IARPA, the Army Research Office, Adobe Research, and IBM Research.

For future AI seminars, please visit: https://engineering.oregonstate.edu/EECS/research/AI-seminars

Best,

Christian Abou-Mrad
Graduate Student
Oregon State University