Research in mechanistic interpretability and neuroscience often relies on interpreting internal representations to understand systems, or on manipulating representations to improve models. I gave a talk at the UniReps workshop at NeurIPS on a few challenges for this area; summary thread: 1/12 #ai #ml #neuroscience #computationalneuroscience #interpretability #NeuralRepresentations #neurips2023