Alignment Reading Group: Interpretability

Hello everyone,

The alignment reading group will meet this Friday at 2 PM PST. We'll be discussing "Interpretability in ML: A Broad Overview" <https://www.lesswrong.com/posts/57fTWCpsAyjeAimTp/interpretability-in-ml-a-broad-overview-2>.

The article introduces many different aspects of interpretability, such as why we might want interpretable models, how to define or think about interpretability, the different techniques used for different kinds of models, and how to evaluate the quality of interpretability techniques.

Please join if you're interested in ML interpretability. Newcomers are welcome!

Join Zoom Meeting
https://oregonstate.zoom.us/j/95843260079?pwd=TzZTN0xPaFZrazRGTElud0J1cnJLUT...
Password: 961594

Phone Dial-In Information
+1 971 247 1195 US (Portland)
+1 253 215 8782 US (Tacoma)
+1 301 715 8592 US (Washington DC)
Meeting ID: 958 4326 0079

All the best,
Quintin