Dear all,
Tomorrow’s AI seminar will be given by Yoon Kim, who will be speaking about large language models. The speaker will join remotely over Zoom, but please attend in person if possible.
Prasad
Location: KEC 1001
Zoom link: https://oregonstate.zoom.us/j/96491555190?pwd=azJHSXZ0TFQwTFFJdkZCWFhnTW04UT09
Large Language Models and Symbolic Structures
Yoon Kim
EECS, Massachusetts Institute of Technology
Abstract:
Over the past decade, the field of NLP has shifted from a pipelined approach (wherein intermediate symbolic structures such as parse trees are explicitly predicted and utilized for downstream tasks) to an end-to-end approach wherein pretrained large language models (LLMs) are adapted to various downstream tasks via finetuning or prompting. What role (if any) can symbolic structures play in the era of LLMs? In the first part of the talk, we will see how latent symbolic structures in the form of hierarchical alignments can be used to guide LM-based neural machine translation systems, improving translation of low-resource languages and even enabling the use of new translation rules during inference. In the second part, we will see how expert-derived grammars can be used to control LLMs via prompting for tasks such as semantic parsing, where the output structure must obey strict domain-specific constraints.
Speaker Biography:
Yoon Kim is an assistant professor at MIT EECS. He received his PhD from Harvard University, where he was advised by Alexander Rush.