Why you should care about AI #interpretability - Mark Bissell, #Goodfire #AI
https://www.youtube.com/watch?v=6AVMHZPjpTQ
  
The goal of mechanistic interpretability is to reverse engineer neural networks. Having direct, programmable access to the internal neurons of models unlocks new ways for developers and users to interact with AI — from more precise steering to guardrails…