
Dear all,

Tomorrow's AI seminar will be by Yoon Kim, who will be talking about large language models. This seminar will be remote, but please attend in person if possible.

Prasad

Location: KEC 1001
Zoom link: https://oregonstate.zoom.us/j/96491555190?pwd=azJHSXZ0TFQwTFFJdkZCWFhnTW04UT09

Large Language Models and Symbolic Structures
Yoon Kim
EECS, Massachusetts Institute of Technology

Abstract: Over the past decade, the field of NLP has shifted from a pipelined approach (wherein intermediate symbolic structures such as parse trees are explicitly predicted and used for downstream tasks) to an end-to-end approach wherein pretrained large language models (LLMs) are adapted to various downstream tasks via finetuning or prompting. What role (if any) can symbolic structures play in the era of LLMs? In the first part of the talk, we will see how latent symbolic structures in the form of hierarchical alignments can be used to guide LM-based neural machine translation systems to improve translation of low-resource languages and even enable the use of new translation rules during inference. In the second part, we will see how expert-derived grammars can be used to control LLMs via prompting for tasks such as semantic parsing, where the output structure must obey strict domain-specific constraints.

Speaker Biography: Yoon Kim is an assistant professor at MIT EECS. He received his PhD from Harvard University, advised by Alexander Rush.

How do I sign up for AI seminar?

Thanks,
Zac

From: eecs-grads <eecs-grads-bounces@engr.oregonstate.edu> on behalf of Tadepalli, Prasad <prasad.tadepalli@oregonstate.edu>
Sent: Thursday, February 29, 2024 11:50 AM
To: ai@engr.oregonstate.edu; eecs-faculty@engr.oregonstate.edu; eecs-grads@engr.oregonstate.edu; ai-seminar@engr.oregonstate.edu
Subject: [eecs-grads] AI Seminar: March 1

Dear Colleagues and Students,

I highly recommend this talk, as the speaker (from MIT) is a rising star in NLP who combines LLMs with classical concepts such as grammars and syntactic structures, which is quite different from the purely empirical mainstream.

Liang
_______________________________________________
Ai mailing list
Ai@engr.oregonstate.edu
https://it.engineering.oregonstate.edu/mailman/listinfo/ai

It’s 2pm today:

Location: KEC 1001
OR Zoom link: https://oregonstate.zoom.us/j/96491555190?pwd=azJHSXZ0TFQwTFFJdkZCWFhnTW04UT09

Event page: https://engineering.oregonstate.edu/events/ai-seminar-large-language-models-symbolic-structures
participants (3)
- Huang, Liang
- Tadepalli, Prasad
- Yeh, Chan-Wei