Hello everyone,

Just a quick reminder that the alignment reading group is meeting at 1 PM PST today. Anyone interested is welcome to attend.

All the best, 
Quintin 


On Mar 8, 2022, at 7:10 PM, Pope, Quintin <popeq@oregonstate.edu> wrote:


Hello everyone,

We'll again be meeting at 1 PM PST this Friday. We'll discuss In-context Learning and Induction Heads. This work attempts to understand how GPT-style transformers adapt so effectively to their current linguistic context. The authors propose a specific mechanism for in-context learning (induction heads) and argue, using several complementary lines of evidence, that induction heads account for most in-context learning. In particular, they find no evidence that mesa-optimization contributes to in-context learning. If you're at all interested in the internal organization or behavior of transformers, please feel free to attend!
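For anyone who wants a concrete picture before the meeting: an induction head is an attention head that, roughly, finds an earlier occurrence of the current token and copies whatever followed it, completing patterns of the form [A][B] ... [A] -> [B]. Here's a toy Python sketch of that rule (my own illustration, not the paper's code; the function name and example are mine, and real induction heads implement a soft, learned version of this via attention):

# Toy illustration of the induction-head pattern: to predict the next
# token, find the most recent earlier occurrence of the current token
# and copy the token that followed it.

def induction_predict(tokens):
    """Predict the next token via the [A][B] ... [A] -> [B] rule."""
    current = tokens[-1]
    # Scan backwards over earlier positions for a match to the current token.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            return tokens[i + 1]  # copy what followed last time
    return None  # no earlier occurrence, so the rule makes no prediction

# The repeated token "Dursley" lets the rule complete the phrase:
seq = "Mr and Mrs Dursley of number four , Mr and Mrs Dursley".split()
print(induction_predict(seq))  # prints "of"

The interesting empirical claim in the paper is that heads behaving like this appear abruptly during training and seem to drive much of the model's in-context learning ability.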

The paper is from the group behind A Mathematical Framework for Transformer Circuits, which we discussed previously. However, this work is more empirical and very well explained, so I found it more approachable.

Join Zoom Meeting
https://oregonstate.zoom.us/j/95843260079?pwd=TzZTN0xPaFZrazRGTElud0J1cnJLUT09

Password: 961594

Phone Dial-In Information
+1 971 247 1195 US (Portland)
+1 253 215 8782 US (Tacoma)
+1 301 715 8592 US (Washington DC)

Meeting ID: 958 4326 0079

All the best,
Quintin