Can neuroscience localizers uncover brain-like functional specializations in LLMs? Yes! We analyzed 18 LLMs and found units mirroring the brain’s language, theory of mind, and multiple demand networks!
[preprint] [Social Media]
For a while, gains in AI have translated into gains in modeling the brain. We test whether that continues to hold with recent advances in scaling. Surprisingly, we get mixed results: while increased scale improves model alignment to behavior, neural alignment saturates.
[preprint] [social media]
Functional responses in the brain to linguistic inputs are spatially organized — but why? We show that a simple smoothness loss added to language model training explains a range of topographic phenomena in neuroscience: arxiv.org/abs/2410.11516, Twitter Thread.
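For readers curious what such a smoothness loss might look like, here is a minimal sketch: each hidden unit is assigned a fixed position on a 2D grid, and an auxiliary penalty rewards correlated responses between nearby units. The grid assignment, inverse-distance weighting, and loss coefficient are illustrative assumptions, not necessarily the exact formulation in the paper.

```python
import torch

def spatial_smoothness_loss(activations: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
    """Illustrative smoothness penalty over one layer's hidden units.

    activations: (batch, units) responses of the layer to a batch of inputs.
    positions:   (units, 2) fixed 2D coordinates assigned to each unit.
    Rewards high response correlation between units that are close on the grid.
    """
    # z-score each unit's responses across the batch, then correlate units.
    z = (activations - activations.mean(0)) / (activations.std(0) + 1e-8)
    corr = (z.T @ z) / activations.shape[0]               # (units, units)
    # Weight unit pairs by inverse grid distance: nearby pairs count more.
    weight = 1.0 / (1.0 + torch.cdist(positions, positions))
    # Negative sign: correlated nearby units lower the total loss.
    return -(weight * corr).mean()

# Added to the usual language-modeling objective with a small coefficient, e.g.:
# loss = lm_loss + 0.1 * spatial_smoothness_loss(hidden, grid_positions)
```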
In 2021 we were surprised to find that untrained language models are already decent predictors of activity in the human language system (http://doi.org/10.1073/pnas.2105646118). Badr Alkhamissi in the lab figured out the core components underlying the alignment of untrained models: tokenization and aggregation. With these findings, we built a simple untrained network, “SUMA”, with state-of-the-art alignment to brain and behavioral data; this feature encoder provides representations that are then useful for efficient language modeling. Directly mapping our model onto the brain, these results characterize the human language system as a generic feature encoder that aggregates incoming sensory representations for downstream use (a toy sketch of this recipe follows the links below). If you disagree, we hope you will try to break our model (soon on Brain-Score/GitHub).
See here for social media posts:
https://x.com/bkhmsi/status/1805595986510717136
https://x.com/martin_schrimpf/status/1805599047098470793
https://x.com/GretaTuckute/status/1805676221189308491
https://x.com/ABosselut/status/1805600725537370119
My group will present 5 abstracts at the Cognitive Computational Neuroscience conference at MIT in Boston this fall! The projects cover new models of vision and language, new ways to evaluate these models on their brain alignment, and ideas to make use of the best models. Titles include:
Current DNNs are Unable to Integrate Visual Information Across Object Discontinuities
Topographic Deep ANN Models Predict the Perceptual Effects of Direct IT Cortical Interventions
A Simple Untrained Recurrent Attention Architecture Aligns to the Human Language Network
Scaling Laws for Task-Optimized Models of the Primate Visual Ventral Stream
Announcing the 2024 Brain-Score Benchmarking Competition! This year, we have turned the tables: we invite experimentalists and the community at large to expose the explanatory gaps between current models of primate vision and the biological brain.
We previously found GPT-2 to be a strong model of the human language system (https://www.pnas.org/doi/10.1073/pnas.2105646118).
Our paper on reducing the number of supervised synaptic updates in computational models of vision was accepted to ICLR as a Spotlight! https://openreview.net/forum?id=g1SzIRLQXMM
I think the paper improved quite a bit since the preprint; in particular, we made a stronger connection to machine learning by showing that our proposed techniques outperform other approaches for drastically reducing the number of parameters. We retain over 40% ImageNet top-1 accuracy with only ~3% of the parameters of a fully trained network.
We are excited to announce that submissions to the 2022 Brain-Score competition are open until February 15, 2022!
The first edition of the Brain-Score Competition will evaluate computational models of primate object recognition on over 30 neuronal and behavioral benchmarks and will award $6,000 to the best submissions across three tracks: overall Brain-Score, V1, and object recognition behavior. In addition, selected participants will be invited to present their work at a Cosyne workshop featuring some of the leading experts in vision neuroscience and computer vision.
For more information, please visit the competition website, follow Brain-Score on twitter, and join our Slack workspace! Good luck!
Our virtual ThreeDWorld is now public: www.threedworld.org
We provide a fully controllable, near-photorealistic virtual world built on the Unity engine. ThreeDWorld provides visual and audio rendering with physically realistic behavior that users can interact with through an extensive Python API (a minimal session is sketched below). Check out the code here: github.com/threedworld-mit/tdw
MIT News also wrote a great article summarizing the platform: news.mit.edu
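The basic controller pattern looks like the following (a sketch based on the public documentation; specific command names are assumptions to verify against the GitHub repository):

```python
from tdw.controller import Controller

# Commands are JSON-style dictionaries sent to the Unity build.
c = Controller()  # launches and connects to the build
c.communicate({"$type": "create_empty_environment"})  # assumed command name
c.communicate({"$type": "terminate"})  # shut the build down cleanly
```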
Our work modeling the human language system with neural network language models is published in PNAS! https://www.pnas.org/content/118/45/e2105646118
The article received widespread press coverage, e.g. by MIT News, Axios, and Scientific American (Press).
I was awarded a Friends of the McGovern fellowship, and won an Open Science Prize from the Neuro – Irv and Helga Cooper Foundation for my work on Brain-Score.
I was awarded the Walle Nauta Award for Continuing Dedication in Teaching for the Systems 2 class (Neural Mechanisms of Cognitive Computations) that Mike Halassa and I have been teaching for the past 3 years.
Computational neuroscience has lately had great success at modeling perception with ANNs – but it has been unclear if this approach translates to higher cognitive systems. We made some exciting progress in modeling human language processing https://www.biorxiv.org/content/10.1101/2020.06.26.174482v1.
This work is the result of a terrific collaboration with Idan A. Blank, Greta Tuckute, Carina Kauf, Eghbal A. Hosseini, Nancy Kanwisher, Josh Tenenbaum and Ev Fedorenko.
Work by Ev Fedorenko and others has localized the language network, a set of brain regions that support high-level language processing (e.g., https://www.sciencedirect.com/science/article/pii/S136466131300288X), but the actual mechanisms underlying human language processing have remained unknown.
To evaluate candidate models of these mechanisms, we use previously published human recordings: fMRI activations to short passages (Pereira et al., 2018), ECoG recordings to single words in diverse sentences (Fedorenko et al., 2016), and fMRI to story fragments (Blank et al., 2014). More specifically, we present the same stimuli to models that were presented to humans and “record” model activations. We then fit a regression on a subset of the stimuli and compute a correlation score of how well the model activations predict the held-out human recordings.
Since we also want to figure out how close model predictions are to the internal reliability of the data, we extrapolate a ceiling of how well an “infinite number of subjects” could predict individual subjects in the data. Scores are normalized by this estimated ceiling.
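In code, the evaluation is roughly the following (a minimal sketch with assumed hyperparameters such as the ridge penalty and fold count; the paper's exact procedure is in the methods):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def brain_predictivity(model_acts: np.ndarray, brain_recs: np.ndarray, ceiling: float) -> float:
    """Cross-validated regression from model activations to brain recordings.

    model_acts: (stimuli, model_units); brain_recs: (stimuli, recording_sites), floats.
    Returns the ceiling-normalized median correlation across recording sites.
    """
    preds = np.zeros_like(brain_recs)
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(model_acts):
        # Fit on a subset of stimuli, predict the held-out stimuli.
        reg = Ridge(alpha=1.0).fit(model_acts[train], brain_recs[train])
        preds[test] = reg.predict(model_acts[test])
    # Pearson r per recording site between predicted and measured responses.
    r = [np.corrcoef(preds[:, i], brain_recs[:, i])[0, 1] for i in range(brain_recs.shape[1])]
    return float(np.median(r)) / ceiling
```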
So how well do models actually predict our recordings? We tested 43 diverse language models, including embedding, recurrent, and transformer models. Specific models (GPT2-xl) predict some of the data near perfectly, and consistently across datasets. Embeddings like GloVe do not.
Models' brain scores are further predicted by their performance at next-word prediction on the WikiText-2 language modeling dataset (evaluated as perplexity; lower is better), but NOT by their performance on any of the GLUE benchmarks.
Since we only care about neurons because they support interesting behaviors, we tested how well models predict human reading times: specific models again do well and their success correlates with 1) their neural scores, and 2) their performance on the next-word prediction task.
We also explored the relative contributions to brain predictivity of two different aspects of model design: network architecture and training experience, roughly akin to evolutionary and learning-based optimization (see also this recent work). Intrinsic architectural properties (like size and directionality) in some models already yield representational spaces that, without any training, reliably predict brain activity, and these untrained scores predict scores after training. While deep learning mostly focuses on the learning part, architecture alone works surprisingly well even on the next-word prediction task. Critically, for the brain datasets, a random embedding with the same number of features as GPT2-xl does not yield reliable predictions.
Summary: 1) specific models accurately predict human language data; 2) their neural predictivity correlates with their performance at next-word prediction, 3) and with their ability to predict human reading times; 4) architecture alone already yields reasonable scores. These results suggest that predicting future inputs may shape human language processing, and they enable using ANNs as embodied hypotheses of brain mechanisms. To fuel future generations of neurally plausible models, we will soon release all our code and data.
Certain ANNs are surprisingly good models of primate vision, but require millions of supervised synaptic updates — this unbiological development has been the recent focus of many discussions in neuroscience. Is all this training really necessary? We approach this in new work https://www.biorxiv.org/content/10.1101/2020.06.08.140111v1.
Neuroscientists have argued for innate structure with only thin learning on top, i.e., where structure encoded in the genome dictates brain connectivity and is leveraged for rapid experience-dependent development. We took first steps toward this with more brain-like neural networks.
We started from CORnet-S, the current top model on neural and behavioral benchmarks on Brain-Score.org. We first found that variants of this model trained for only 2% of supervised updates (epochs x images) already achieve 80% of the trained model’s score.
Even without any updates, the models’ brain predictivities are well above chance. Examining this “at-birth” synaptic connectivity and improving it with a new method, “Weight Compression”, we can reach 54% without any training at all.
However, to be more brain-like we require at least some training — but ideally this would not change millions of synapses requiring precise machinery to coordinate the updates. By training only critical down-sampling layers, we achieve 80% when updating only 5% of synapses.
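As a sketch of this selective-training idea (using a torchvision ResNet as a stand-in for CORnet-S, and strided convolutions as an assumed proxy for the "critical down-sampling layers"):

```python
import torch
from torchvision.models import resnet50

model = resnet50()  # stand-in architecture; the paper works with CORnet-S

# Freeze everything, then re-enable gradients only for down-sampling
# (stride > 1) convolutions.
for p in model.parameters():
    p.requires_grad = False
for m in model.modules():
    if isinstance(m, torch.nn.Conv2d) and max(m.stride) > 1:
        for p in m.parameters():
            p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"updating {trainable / total:.1%} of parameters")
```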
Applying these three strategies in combination (reducing supervised epochs x images + improved at-birth connectivity + reducing synaptic updates), we achieve ~80% of a fully trained model’s brain predictivity with two orders of magnitude fewer supervised synaptic updates.
Taking a step back, we think these are first steps to model not just primate adult visual processing during inference, but also how the system is wired up from an evolutionary birth state encoded in the genome and by developmental update rules. Lots more work to do!