Under Construction, More Soon!
Many more details about the CCN2024 Event and the larger BotBM3 collaboration will be posted here soon! Stay tuned.
Narrative Event Outline
This event is motivated by two overarching questions: What can our models teach us about the brain, and what criteria should we be using to evaluate these models? These questions have attained new urgency in recent years as AI engineering (sometimes biologically inspired, increasingly not) has delivered models that push the state of the art not only in performing the kinds of perceptual and cognitive tasks we think the brain performs, but also in predicting measurements of brain responses to task-adjacent (naturalistic) sensory stimuli. These developments would be uniformly positive if not for one major catch: Many different kinds of models (with different architectures, tasks, learning rules, and sensory diets) are equally “good” at many different kinds of task performance and brain prediction alike. This “many-to-many” correspondence problem is by no means alien to cognitive neuroscience and is at best a persistent nuisance. At worst, it is an active epistemic crisis – a sentiment summarized succinctly in the subtitle of recent work by one of the contributors to our event: “If we got it right, would we know?” (Han et al., 2022)
To their enduring credit, the diverse community of model-to-brain mappers has addressed the “many-to-many” correspondence problems of representational modeling in diverse, innovative, and principled ways. All of these attempts, however, are unified by one key challenge: measuring whether we have actually done whatever our model-to-brain mappings were intended to do. In this Community Event, we introduce the “battle of the metrics”: a concerted, collaborative effort spanning multiple research groups, designed to survey the various methodologies used to measure progress in model-to-brain mapping and to systematically distill the strengths and weaknesses of each. Metrics covered will include classic “alignment metrics” (e.g. RSA, CKA, encoding models) as well as newer “super metrics” (e.g. controversial stimuli and manifold statistics). In effect, the intention of this collaboration is to “lay our cards on the table” – to first outline in detail what we believe our metrics of model-to-brain correspondence tell us about the brain, and then to use a theoretically motivated, but fundamentally data-driven analysis to take stock of whether our expectations map meaningfully to empirical results.
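
For readers less familiar with the “classic” alignment metrics named above, the sketch below shows, under simple assumptions, how two of them (RSA and linear CKA) compare a model representation to a brain measurement for a shared stimulus set. The array names, shapes, and distance/correlation choices here are illustrative only; this is not the collaboration's actual evaluation pipeline.

```python
# Illustrative sketch only (not the BotBM3 evaluation code): two classic
# alignment metrics, RSA and linear CKA, computed between hypothetical
# (stimuli x features) model activations and (stimuli x voxels) brain data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr


def rsa_score(model_acts: np.ndarray, brain_resps: np.ndarray) -> float:
    """RSA: Spearman correlation between the two representational
    dissimilarity matrices (condensed upper triangles), here built
    with correlation distance over stimuli."""
    model_rdm = pdist(model_acts, metric="correlation")
    brain_rdm = pdist(brain_resps, metric="correlation")
    rho, _ = spearmanr(model_rdm, brain_rdm)
    return rho


def linear_cka(model_acts: np.ndarray, brain_resps: np.ndarray) -> float:
    """Linear CKA (Kornblith et al., 2019): normalized similarity of the
    column-centered Gram matrices of the two representations."""
    X = model_acts - model_acts.mean(axis=0)
    Y = brain_resps - brain_resps.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))


# Toy usage: random "model" and "brain" representations of the same 100 stimuli.
rng = np.random.default_rng(0)
model_acts = rng.standard_normal((100, 512))
brain_resps = rng.standard_normal((100, 200))
print(rsa_score(model_acts, brain_resps), linear_cka(model_acts, brain_resps))
```

Even in this toy form, the two metrics make different commitments (rank correlation of pairwise stimulus dissimilarities vs. similarity of centered Gram matrices), which is exactly the kind of difference the “battle of the metrics” aims to make explicit.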
Our Main Metric of Progress
(The People who Make this Possible)
A central premise of this event and the larger BotBM3 collaboration is that this kind of work cannot be done alone! To that end, we’d like to express our gratitude to the many wonderful people who have contributed to this effort so far.
- Leyla Isik (Johns Hopkins)
- Mick Bonner (Johns Hopkins)
- Srijani Saha (Harvard)
- Seda Akbiyik (Harvard)
- Talia Konkle (Harvard)
- Dan Yamins (Stanford)
- Jacob Prince (Harvard)
- Meenakshi Khosla (UCSD)
- Noga Zaslavsky (NYU)
- Wenxuan Guo (Columbia)
- Nina Miolane (UCSB)
- Eghbal Hosseini (MIT)
- Imran Thobani (Stanford)
- Abdul Canatar (Flatiron)
- Binxu Wang (Harvard)
- Brian Cheung (MIT)
- Andrei Barbu (MIT)
- Chris Z. Wang (MIT)
- Francisco Acosta (UCSB)
- Lotem Elber-Dorezko (CMU)
- Vighnesh Subramaniam (MIT)
And a special thanks to the scientist-programmers whose work inspired or directly facilitated our “empirical assay”:
- Jacob Prince (Harvard)
- Imran Thobani (Stanford)
- Alex Williams+ (NYU)
- Nina Miolane+ (UCSB)
- Eric Elmoznino (MILA)
- Nathan Cloos+ (MIT)
- SueYeon Chung+ (Flatiron)
- Ben Sorscher+ (Harvard)
- JohnMark Taylor (Columbia)
- The RSA Toolbox Team
- The Brain-Score Team
- The Net2Brain Team
- Himalaya (Gallant Lab)
- The ThingsData Team
- The Allen Brain Institute
(+ = “and their Collaborators”)