Greta Tuckute

@GretaTuckute

2,336
Followers
598
Following
38
Media
475
Statuses

PhD candidate, Brain and Cognitive Sciences MIT. Interested in language in biological brains and artificial ones.

Cambridge, Massachusetts, USA
Joined August 2011
Pinned Tweet
@GretaTuckute
Greta Tuckute
12 days
1/ Really excited to share: Language in Brains, Minds, and Machines w @Nancy_Kanwisher @ev_fedorenko @AnnualReviews We survey the insights that language models (LMs) provide on the question of how language is represented and processed in the human brain.
6
164
655
@GretaTuckute
Greta Tuckute
1 year
There is much discussion about capabilities of GPT models. But how accurate are such models as models of language processing in the 🧠? Here, we demonstrate that GPT is accurate enough to noninvasively drive and suppress 🧠activity in the language network of new individuals. 1/
9
78
313
@GretaTuckute
Greta Tuckute
4 months
1/ Our paper is published! @NatureHumBehav Driving and suppressing brain activity in the human language network with model (GPT)-selected stimuli W. @aloxatel @ShashankSrikant @MayaTaliaferro Mingye (Christina) Wang @martin_schrimpf @cvnlab @ev_fedorenko
2
56
209
@GretaTuckute
Greta Tuckute
2 years
PINEAPPLE, LIGHT, HAPPY, AVALANCHE, BURDEN Some of these words are consistently remembered better than others. Why is that? Here we provide a simple Bayesian account and show that it explains >80% of variance in word memorability
6
37
187
@GretaTuckute
Greta Tuckute
5 months
Our work on models of the auditory cortex is hereby published in @PLOSBiology ! The paper presents an extensive analysis of audio neural networks as models of auditory processing in the brain. Paper: w/ @jenellefeather (co-1st) @dlboebinger @JoshHMcDermott
7
27
132
@GretaTuckute
Greta Tuckute
10 months
Very honored to receive the Young Scientist Award at #VCCA2023 ! Thanks so much to the @CompAudiology community & conference organizers!!
@CompAudiology
Computational Audiology
10 months
🏆 VCCA2023 Young Scientist Award: @GretaTuckute from MIT, US. Her talk on "Driving and suppressing the human language network using large language models" demonstrated exceptional clarity, scientific merit, and novelty. The jury praised her for her smooth presentation flow.
1
0
6
6
12
121
@GretaTuckute
Greta Tuckute
2 years
Feel very lucky to be among the recipients of the Amazon Fellowship within AI/Robotics. Thanks so much to Amazon / @MIT_SCC , @ev_fedorenko , @JoshHMcDermott , the @mitbrainandcog community and all my wonderful collaborators - incredibly grateful!
@AmazonScience
Amazon Science
2 years
Amazon and @MIT_SCC announced their first set of Amazon Fellows as part of their Science Hub, which aims to expand participation in AI, robotics, and other fields. They will receive funding to conduct independent research projects at MIT. Meet the fellows. #MachineLearning
0
5
16
10
3
111
@GretaTuckute
Greta Tuckute
2 years
Given any location in the brain, what is the probability of that location being selective to language? We present a probabilistic language atlas (LanA) based on functional localization from >800 participants!
@ev_fedorenko
Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦
2 years
Thx to the hard work of @ben_lipkin @GretaTuckute and others + @NIDCD funding, we are ready to present to you LanA (Language Atlas)—a probabilistic atlas for the lang network based on lang localizer data from >800 inds (available for both volume and surface brain spaces). 1/n
9
139
504
0
21
105
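At its core, the probabilistic atlas described above reduces to averaging binary localizer masks across subjects. A minimal sketch on toy data (the function name and data are hypothetical; the published LanA pipeline involves registration and thresholding steps not shown here):

```python
import numpy as np

def probabilistic_atlas(subject_masks):
    """Per-voxel probability of language selectivity: the fraction of
    subjects whose localizer contrast marks that voxel as significant.

    subject_masks: (n_subjects, n_voxels) binary array, one row per
    subject's thresholded localizer map (in a common brain space).
    """
    return np.asarray(subject_masks, dtype=float).mean(axis=0)

# Toy example: 4 subjects, 3 voxels
masks = np.array([[1, 0, 1],
                  [1, 0, 0],
                  [1, 1, 0],
                  [1, 0, 0]])
atlas = probabilistic_atlas(masks)  # voxel 0 is selective in all 4 subjects
```

With these masks, voxel 0 gets probability 1.0 and voxels 1 and 2 get 0.25 each, since one of four subjects shows selectivity there.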
@GretaTuckute
Greta Tuckute
1 year
Many studies have now shown that LLMs are able to predict human brain activity during language processing. In this work, we ask which linguistic features are critical to LLM-to-brain alignment. Paper: And tweeprint below!
@KaufCarina
Carina Kauf
1 year
New paper with @GretaTuckute (co-first), @roger_p_levy , @jacobandreas and @ev_fedorenko ! ANN language models are able to predict fMRI brain responses during language processing. But which aspects of the linguistic stimulus contribute to this ANN-brain similarity? A thread 🧵 1/10
8
29
134
0
11
99
@GretaTuckute
Greta Tuckute
5 years
Thanks @mne_python for providing an awesome framework for visualization of neural data! I keep discovering new implementations - never thought I would be making GIFs in Python (EEG dynamics of my visually evoked response)! @matplotlib
4
8
83
@GretaTuckute
Greta Tuckute
2 years
We are excited to present SentSpace at #NAACL2022 on Tuesday (System Demo session, 4.15pm PT, 7th floor)! SentSpace is a tool for characterizing text using diverse lexical, syntactic, and semantic features related to how humans process and understand language.
2
15
75
@GretaTuckute
Greta Tuckute
4 years
Can we exploit artificial neural networks to inform us about high-level language processing in the brain? In this work, we ask which SOTA language models best capture human neural and behavioral responses - and how it relates to model architecture and predictive processing!
@martin_schrimpf
Martin Schrimpf
4 years
Computational neuroscience has lately had great success at modeling perception with ANNs - but it has been unclear if this approach translates to higher cognitive systems. We made some exciting progress in modeling human language processing #tweeprint 1/
3
154
533
2
17
72
@GretaTuckute
Greta Tuckute
9 months
What kinds of linguistic input maximally drive or suppress the language network? Come hear about our work on generating model-based ‘super stimuli’ to learn more about language processing in the 🧠! Saturday (26th) 1-3pm P-3A.25 (North Schools #CCN2023 ) @CogCompNeuro
1
8
63
@GretaTuckute
Greta Tuckute
2 years
Do various deep neural networks for audio predict human brain responses? Do they map onto the cortical hierarchy? Mostly, yes. Hear about these findings tonight (Fri) #cosyne2022 poster II-115, with @jenellefeather (joint), @dlboebinger , @JoshHMcDermott
1
8
62
@GretaTuckute
Greta Tuckute
4 months
Our @CogCompNeuro GAC write-up is out! Our two main questions: 1⃣ How should we use neuroscientific data in model development (raw experimental data vs. qualitative insights)? 2⃣ How should we collect experimental data for model development (model-free vs. model-based)?
@KohitijKar
Kohitij Kar
4 months
Our 2022 @CogCompNeuro GAC is now a preprint! Glad I could share and discuss ideas with this awesome team led by @GretaTuckute and I, comprising @dfinz @eshedmargalit @jcbyts @JoelZylber16090 @s_y_chung @alonamarie @ev_fedorenko @kalatwt @KriegeskorteLab
2
14
77
1
9
60
@GretaTuckute
Greta Tuckute
2 years
Our paper on EG (individual with no left temporal lobe) is out! We investigated the emergence of the language network (left-lateralized frontotemporal network) and our results suggest that temporal language areas are necessary for the emergence of frontal language areas.
@ev_fedorenko
Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦
2 years
The first paper on this individual is now out: @GretaTuckute @ampaunov @HopeKean @smallhannahe @ZachMineroff @IbanDlank with a note from EG herself!
2
36
165
2
7
53
@GretaTuckute
Greta Tuckute
5 years
Exactly one year ago I traveled to Boston for the first time (and yes, fell in love with it). This fall, I am returning to Boston - a bit more permanently - since I will be joining @mit @medialab ! Endlessly grateful for the support I have received throughout this CRAZY journey!!
1
0
46
@GretaTuckute
Greta Tuckute
2 months
Submission deadlines for Conference on Cognitive Computational Neuroscience (CCN) coming up -- come hang out in Cambridge @MIT August 6th-9th!
@CogCompNeuro
CogCompNeuro
2 months
📢Submissions for #CCN2024 are now open at ! 📢 We welcome submissions for 2-page papers (deadline: 12 April) and Generative Adversarial Collaborations (GACs), Keynote+Tutorials, and (new this year!) Community Events (deadline: 5 April).
2
32
59
0
10
43
@GretaTuckute
Greta Tuckute
1 month
Big review paper on language in the 🧠 from @ev_fedorenko @neuranna @tamaregev !
@ev_fedorenko
Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦
1 month
Thrilled to share a review on THE LANGUAGE NETWORK AS A NATURAL KIND—a culmination of ~20 yrs of thinking about+studying language from linguistic, psycholinguistic, and cog neuro perspectives. @NatRevNeurosci With the amazing @neuranna @tamaregev 🥳 🧵1/n
19
247
833
0
1
39
@GretaTuckute
Greta Tuckute
5 years
Based on a couple of requests: a script for creating GIFs of neural time series data in Python (including some simple sample data)! Please let me know if you have any questions :) @mne_news
0
11
36
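The kind of GIF described in the tweet above can be sketched with matplotlib's animation API. This is a stand-in on simulated data, not the shared script: the channel count, time axis, and output filename are all illustrative.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen (no display needed)
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter

# Simulated "evoked response": 8 channels x 120 time points (stand-in for EEG)
rng = np.random.default_rng(0)
times = np.linspace(-0.1, 0.5, 120)
data = np.cumsum(rng.standard_normal((8, times.size)), axis=1)

fig, ax = plt.subplots()
ax.set(xlim=(times[0], times[-1]), ylim=(data.min(), data.max()),
       xlabel="Time (s)", ylabel="Amplitude (a.u.)")
lines = [ax.plot([], [], lw=1)[0] for _ in range(data.shape[0])]

def update(frame):
    # Progressively reveal each channel's trace up to the current frame
    for ch, line in enumerate(lines):
        line.set_data(times[:frame], data[ch, :frame])
    return lines

anim = FuncAnimation(fig, update, frames=range(0, times.size + 1, 10), blit=True)
anim.save("evoked.gif", writer=PillowWriter(fps=10))
plt.close(fig)
```

For real data, `data` and `times` would come from e.g. an MNE `Evoked` object (`evoked.data`, `evoked.times`).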
@GretaTuckute
Greta Tuckute
2 years
Join our #ccneuro22 GAC workshop Aug 26 (1.30pm PT) -- we will focus on how to optimally use neuroscience data to guide the next generation of brain models. Current use of data is often limited to post-hoc model evaluation or vague ‘inspirations’ for model development.
2
8
35
@GretaTuckute
Greta Tuckute
5 years
In this work, we propose a framework for a closed-loop neurofeedback system in EEG. We ask how we can exploit neurofeedback to modulate top-down attentional states in real-time. Further, we visualize discriminative information of decoded attentional states using sensitivity maps.
@Tweetteresearch
Lars Kai Hansen
5 years
Tuckute et al.: Closed loop attention control by real-time EEG decoding...
0
4
5
3
11
35
@GretaTuckute
Greta Tuckute
7 months
Grad school applications are not easy — we provide help and resources!
@mitbrainandcog
MIT Brain and Cognitive Sciences
7 months
The MIT BCS PhD Application Assistance Program is open for applications! This student-run initiative seeks to support individuals from underserved & non-traditional academic backgrounds through the PhD application process. Learn more:
1
41
95
0
2
33
@GretaTuckute
Greta Tuckute
5 months
Also -- at #NeurIPS2023 , reach out to chat about sounds, language, brains, LLMs etc! @MycalTucker and I will present our preliminary work on information-theoretic approaches to investigate brain-model similarity on Fri 3pm CT at @unireps .
1
0
30
@GretaTuckute
Greta Tuckute
2 years
Come hear about our work (with @jenellefeather @dlboebinger @JoshHMcDermott ) on DNNs as candidate models of the auditory cortex! Which models are best? How do network layers map onto the cortical hierarchy? How does training task affect predictions? Poster 2.20, 7.30pm today!
@jenellefeather
Jenelle Feather
2 years
Excited to present new work tonight at @CogCompNeuro ! Come by poster 2.53 to hear about model metamers (also as a talk Sunday at 10:40am). Then swing by poster 2.20 and chat with @GretaTuckute about deep neural network models of the auditory system! #CCN2022 (links in thread)
1
4
32
0
4
30
@GretaTuckute
Greta Tuckute
29 days
Special issue @jneurolang on leveraging LLMs to study the mind / brain, including some of our work on trying to understand which aspects of the linguistic stimulus—linguistic structure or meaning— contribute to LLM-brain similarity.
@ev_fedorenko
Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦
29 days
The Special Issue (SI) of @jneurolang (the OA flagship journal of the Neurobiology of Language Society @MITPress ) on cognitive computational neuroscience of language🧠🤖 is finally out: Co-edited with AlessandroLopopolo MilenaRabovsky @roger_p_levy 🧵1/n
2
32
137
0
0
25
@GretaTuckute
Greta Tuckute
2 years
If you are ever in need of the probability that any location in the brain (volume/surface) is language-selective, please check out our LanA Language Atlas (published today @ScientificData !)
@ev_fedorenko
Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦
2 years
Now out in @NaturePortfolio @ScientificData . Congrats to @ben_lipkin @GretaTuckute and the rest of the team! The paper: The website where the atlas and the individual maps can be explored and downloaded:
0
24
93
0
5
23
@GretaTuckute
Greta Tuckute
5 years
Python toolbox for computing EEG-based effect size maps: as implemented in "Single-Trial Decoding of Scalp EEG under Natural Conditions"
@Tweetteresearch
Lars Kai Hansen
5 years
Tuckute et al.: Single-Trial Decoding of Scalp EEG under Natural Conditions Brain state decoding and visualization of classifiers' effect size maps = attention to channel/time
0
3
12
0
6
22
@GretaTuckute
Greta Tuckute
6 months
Sad for me as my brilliant and thoughtful (!) office mate is leaving me.. but lucky Georgia Tech! Go work with Anna!
@neuranna
Anna Ivanova
6 months
The final paperwork is still pending, but it’s time to spread the word, so here goes--- I am starting as an Assistant Professor in Psychology at Georgia Tech! thread ⬇️
51
57
476
1
0
22
@GretaTuckute
Greta Tuckute
3 years
Neurofeedback is a powerful tool to investigate how neural states link to behavior. Our work on EEG neurofeedback for visual attention is out (part of my MSc work @DTUtweet , with @SThereseH @TWKjaer @Tweetteresearch )
1
1
22
@GretaTuckute
Greta Tuckute
2 years
Thomas is starting up his podcasts series again — thanks a lot for a great conversation! (escaping from everything else happening in the world for ~1hr)
@ModusMirandi
Modus Mirandi Podcast
2 years
Check out my conversation with @GretaTuckute , in which we discuss #language , #computers , and the #brain .
0
7
29
0
1
22
@GretaTuckute
Greta Tuckute
2 years
Check out @ev_fedorenko 's thread on some of the things we have learned about brain reorganization -- and importantly, what we are yet to learn! Lots of cool questions!
@ev_fedorenko
Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦
2 years
What happens when parts of the🧠that eventually become a highly functionally specialized language system (left frontal and temporal areas) get damaged at birth or shortly thereafter? Where does the language system live in such brains? Long answer short: it's complicated! 1/n
8
90
365
1
1
22
@GretaTuckute
Greta Tuckute
3 months
Spend 3 weeks in Woods Hole with a great NeuroAI crowd! (note that the application is open to internationals)
@gkreiman
Gabriel Kreiman
3 months
Applications are invited to participate in the Brains, Minds and Machines summer school [08/04/2024 -- 08/25/2024] in Woods Hole, MA. Deadline = 03/20/2024. More information here: .
0
47
112
0
5
21
@GretaTuckute
Greta Tuckute
5 years
Honored to finish my last MSc course credits in one of the most inspiring places I have ever been to, Japan, and wrapping up these 5 years of study/research at @uni_copenhagen , @DTU_Compute , @Caltech , @MIT and now @HokkaidoUni !
2
1
21
@GretaTuckute
Greta Tuckute
5 years
Stop by poster A14 tomorrow morning at @SNLmtg in Helsinki to hear about our work on the emergence of language regions in the human brain ()! Work in collab with @ev_fedorenko @IdanAsherBlank @HopeKean and Zach Mineroff
1
4
20
@GretaTuckute
Greta Tuckute
1 year
Lovely piece by @matteo_wong about using AI to understand the brain, featuring some of the many (!) cool studies that have come out within the very last few months!
@TheAtlantic
The Atlantic
1 year
AI is very far from reading your mind—but it has already begun to transform how we study the brain, enabling scientists to test new, mathematically precise theories, writes @matteo_wong :
2
12
32
0
2
16
@GretaTuckute
Greta Tuckute
1 year
Highlighting just 3 of the main predictors: Sentences that are surprising and fall in the middle of the grammaticality and plausibility ranges elicit the highest brain responses🧠. (see paper for systematic analysis across 11 features) 10/
2
0
14
@GretaTuckute
Greta Tuckute
1 year
How accurate are the predictions at the single-sentence level? r=0.43 (predictive performance for new stimuli AND new participants). The model predictions captured most (69%) of the explainable variance (variance not attributable to inter-participant variability / measurement noise). 7/
3
0
13
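The "explainable variance" quantity mentioned above is typically computed against a split-half noise ceiling. One common convention, sketched on synthetic data (this is an assumption, not necessarily the paper's exact estimator):

```python
import numpy as np

def explainable_variance_fraction(pred, half1, half2):
    """Fraction of explainable (reliable) variance captured by a model.

    half1/half2: responses from two independent splits of the data
    (e.g., two participant groups). The split-half correlation, after
    Spearman-Brown correction, estimates the reliability of the averaged
    response; the model's squared correlation is compared against it.
    """
    r_model = np.corrcoef(pred, (half1 + half2) / 2)[0, 1]
    r_split = np.corrcoef(half1, half2)[0, 1]
    ceiling = 2 * r_split / (1 + r_split)  # Spearman-Brown correction
    return r_model**2 / ceiling

# Synthetic check: a perfect model of the true signal should land near 1.0
rng = np.random.default_rng(0)
signal = rng.standard_normal(2000)
h1 = signal + 0.5 * rng.standard_normal(2000)
h2 = signal + 0.5 * rng.standard_normal(2000)
frac = explainable_variance_fraction(signal, h1, h2)
```

An imperfect model drives `r_model` down while the ceiling stays fixed, so the fraction falls below 1; a value like 0.69 means the model recovers 69% of the variance that is in principle predictable given measurement noise.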
@GretaTuckute
Greta Tuckute
5 months
Finally, we are grateful to our funding sources @NIDCD @MIT_SCC @AmazonScience @AAUW We thank @SamNormanH & @Nancy_Kanwisher for collecting some of the brain data, @alexjkell & @dyamins for the original work that inspired this project, and Ian Griffith for help training models.
0
0
11
@GretaTuckute
Greta Tuckute
1 year
We collected brain responses to these novel drive/suppress sentences using two different fMRI designs (event-related vs blocked). We found that the model-selected sentences indeed modulate brain responses🧠 in new individuals as predicted, demonstrating non-invasive control. 6/
1
0
12
@GretaTuckute
Greta Tuckute
12 days
6/ LMs vary along many dimensions, which we separate into 3 categories: model architecture, behavior & training. We summarize work from @martin_schrimpf @RichardAntone13 @c_caucheteux @KaufCarina @Khai_Loong_Aw @eghbal_hosseini @shaileeejain & many others, but a few take-aways:
1
1
12
@GretaTuckute
Greta Tuckute
5 years
Had the pleasure to participate, absorb massive amounts of knowledge (!) and talk about closed-loop neurofeedback (work in collaboration with @SThereseH , @Tweetteresearch and @TWKjaer ) at @FENSorg Brain Conference this week! Thanks for some awesome days!
1
2
12
@GretaTuckute
Greta Tuckute
27 days
1
0
11
@GretaTuckute
Greta Tuckute
1 year
Using our encoding model, we identified novel sentences to activate the language network maximally (drive sentences) or minimally (suppress sentences). We searched across ~1.8M sentences to identify these novel drive/suppress sentences and recorded 🧠data in new individuals. 5/
1
0
11
@GretaTuckute
Greta Tuckute
1 year
Approach: We recorded brain data🧠 while participants read 1,000 linguistically diverse sentences using fMRI. We fit an encoding model to predict the left hemisphere language network’s response to an arbitrary sentence from GPT embeddings🤖. 4/
1
0
11
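The encoding-model setup described above (sentence embeddings in, language-network response out) is, at its simplest, cross-validated ridge regression. A sketch on synthetic data — the dimensions, the `RidgeCV` choice, and the noise level are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins: 1,000 sentences, each with a fixed-size embedding
# (e.g., a GPT hidden state) and one averaged BOLD response per sentence.
rng = np.random.default_rng(0)
n_sentences, n_dims = 1000, 256
embeddings = rng.standard_normal((n_sentences, n_dims))
true_w = rng.standard_normal(n_dims)
bold = embeddings @ true_w + 0.5 * rng.standard_normal(n_sentences)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, bold, test_size=0.2, random_state=0)

# Cross-validated ridge regression: the standard linear "encoding model"
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)
r = np.corrcoef(model.predict(X_test), y_test)[0, 1]  # held-out predictivity
```

Once fit, the same model can score arbitrary unseen sentences, which is what makes the drive/suppress search over ~1.8M candidate sentences in the later tweets possible.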
@GretaTuckute
Greta Tuckute
12 days
2/ First, what are we trying to model 🧠? We discuss the human language system as a well-defined target for modeling efforts aimed at understanding the representations and computations underlying language (see also e.g., ).
@ev_fedorenko
Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦
1 month
Thrilled to share a review on THE LANGUAGE NETWORK AS A NATURAL KIND—a culmination of ~20 yrs of thinking about+studying language from linguistic, psycholinguistic, and cog neuro perspectives. @NatRevNeurosci With the amazing @neuranna @tamaregev 🥳 🧵1/n
19
247
833
1
0
10
@GretaTuckute
Greta Tuckute
12 days
3/ Next, why, a priori, would we expect LMs to share similarities with the human language system? 1⃣ similar behavioral “outputs” 2⃣ prediction as a key shared objective 3⃣ sensitivity to multiple levels of linguistic structure 4⃣ dissociation between ling and non-ling abilities
1
0
10
@GretaTuckute
Greta Tuckute
4 months
8/ Highlighting one key finding: Surprising sentences with unusual grammar and/or meaning elicit the highest activity in the language network. In other words, the language network responds strongly to sentences that are “normal” enough to engage it, but unusual enough to tax it.
1
0
9
@GretaTuckute
Greta Tuckute
1 year
In conclusion: We show that GPT can drive/suppress 🧠 responses in the language network of new individuals in a model-based, ‘closed-loop’ manner. We establish the ability of accurate models to control 🧠 activity in higher-level cortical areas, like the language network. 11/
1
1
10
@GretaTuckute
Greta Tuckute
2 years
Apply to work with us on a really cool project on brain plasticity!
@ev_fedorenko
Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦
2 years
I am looking for a post-bac RA to help with our growing InterestingBrains project on brain function in atypical brains (including perinatal stroke, cysts, resection, etc.). Apply here:
0
38
51
0
1
10
@GretaTuckute
Greta Tuckute
12 days
12/ While the paper is in press, the supplemental information can be found here:
0
0
10
@GretaTuckute
Greta Tuckute
12 days
7/ ARCHITECTURE Contextualized LMs significantly improve brain-LM alignment over decontextualized semantic models. However, within contextualized LMs, many architectures fit brain data well. Larger LMs predict brain data better, but become worse at predicting human lang behavior.
1
0
8
@GretaTuckute
Greta Tuckute
1 year
We address 2 key questions: 1⃣ How accurate are GPT models as models of language processing in the human🧠? Can we use these models to noninvasively control brain activity in new individuals? 2⃣ What are the stimulus properties that drive responses in the language network? 3/
2
1
9
@GretaTuckute
Greta Tuckute
12 days
11/ The lang LM-neuro field is moving so fast, but hopefully this review will serve as a useful ‘checkpoint’ – a big thanks to everyone who contributed to this paper & let us reprint their work! PS. The current paper is a pre-proof version; the final version will be out in July.
1
0
9
@GretaTuckute
Greta Tuckute
12 days
10/ Finally, we discuss the use of LMs as in silico lang networks: LMs are valuable tools for running expt simulations and identifying stimuli that expand the hypothesis space beyond our preconceived notions as experimenters (e.g., )
1
1
8
@GretaTuckute
Greta Tuckute
12 days
4/ And why might we not? In contrast to humans, LMs: 1⃣ learn from different amounts/kinds of data without the broader context of interacting with the world 2⃣ have access to exact long-range “memory” of preceding ling input 3⃣ have different hardware (e.g. lack of spatial pressures)
1
0
8
@GretaTuckute
Greta Tuckute
5 months
Our finding that a model’s input “diet” plays a key role in the development of brain-like representations is in line with recent work in vision neuroscience, such as:
@talia_konkle
talia konkle
10 months
7/ But visual input diet *does* appear to matter. Several of our experiments point to the impact of visual input – not in the size of the image database, but in the diversity of image content – on a model’s emergent brain predictivity.
2
6
42
1
0
8
@GretaTuckute
Greta Tuckute
1 year
PS: This approach takes inspiration from adaptive stimulus design in vision (e.g. @PouyaBashivan @KohitijKar @HombreCerebro ) where gradient-based modifications are used to generate optimal stimuli for modulating neurons/regions. (we attempted something similar, see SI 13) 13/13
1
1
8
@GretaTuckute
Greta Tuckute
5 years
Super productive 24 months for Neuralink et al!
0
1
8
@GretaTuckute
Greta Tuckute
2 years
How well does your computational model replicate various aspects of visual processing (brain & behavior)? Submissions for the @brain_score competition are open until Feb 15 @CosyneMeeting
@brain_score
Brain-Score
2 years
There are already over 30 submissions to the Brain-Score competition with some improving the behavioral alignment when compared to the previous best models! However, modeling low-level visual areas remains a challenge. Can you do better? Less than 1 month to submit!
1
0
14
0
1
8
@GretaTuckute
Greta Tuckute
5 months
Covered by MIT news: We previously summarized this work here: See below for two of our favorite results, one of which was added in revision:
@GretaTuckute
Greta Tuckute
2 years
Excited to present a comprehensive analysis on the extent to which deep neural network (DNNs) can account for brain responses in the human auditory cortex! Work with @jenellefeather (co-1st) @dlboebinger @JoshHMcDermott
1
31
131
1
0
7
@GretaTuckute
Greta Tuckute
1 year
Finally, why do certain sentences elicit higher responses than others? We obtained 11 features to characterize our sentences: 1⃣ Surprisal; 2⃣ 10 behavioral norms (3,600 👥) reflecting 5 broad aspects of linguistic input (form/meaning,content,emotion,imageability, frequency). 9/
2
0
7
@GretaTuckute
Greta Tuckute
3 months
0
0
7
@GretaTuckute
Greta Tuckute
12 days
5/ We describe evidence that LMs represent linguistic information similarly enough to humans to enable relatively accurate brain encoding and decoding, and then turn to the question: Which properties of LMs enable them to capture human responses to language?
1
0
7
@GretaTuckute
Greta Tuckute
2 years
Brilliant framework for comparing language models against each other using controversial sentence pairs
0
1
7
@GretaTuckute
Greta Tuckute
2 years
We observe that fluent GPT2-XL-generated text 🤖 nevertheless exhibits fine-grained differences in psycholinguistic norms compared to that generated by humans 🧠
1
1
7
@GretaTuckute
Greta Tuckute
2 years
Building on past work that suggested that words are encoded by their meanings, we hypothesize that words that uniquely pick out a meaning in semantic memory (i.e., unambiguous words with no/few synonyms) are more memorable.
1
0
7
@GretaTuckute
Greta Tuckute
12 days
9/ TRAINING Even LMs trained on a developmentally plausible amount of data align with brain responses. Semantic properties of training data appear more key for brain-LM alignment than morphosyntactic ones. Fine-tuning can increase brain-LM alignment (task- and model dependent).
1
0
6
@GretaTuckute
Greta Tuckute
1 year
We also show that language regions🧠 have higher noise ceilings (high degree of stimulus-related activity to linguistic input) and are predicted better by GPT🤖 features compared to various other high-level brain regions. 8/
1
0
7
@GretaTuckute
Greta Tuckute
12 days
8/ BEHAVIOR In line with the idea that neural representations are shaped by behavioral demands, LMs’ ability to predict text correlates with alignment. It remains unknown whether prediction per se, or factors like representational generality, is the key driver of alignment.
1
0
5
@GretaTuckute
Greta Tuckute
2 years
Memorability can be used to answer cool questions about how the mind and brain prioritize and organize information during semantic memory encoding. Understanding which words lead to longer-lasting memory traces can be leveraged to enable more effective information sharing.
1
0
6
@GretaTuckute
Greta Tuckute
5 months
In general, most models (but not all) showed greater brain-model similarity than a traditional filter-bank model, demonstrating that if the core goal is to obtain the most quantitatively accurate model of the auditory system, machine learning models move us closer to this goal.
1
0
6
@GretaTuckute
Greta Tuckute
5 years
Python-based toolbox for neurofeedback available here: - and thanks to @mne_python for great tools for neural time series analyses
0
3
6
@GretaTuckute
Greta Tuckute
4 months
4/ Next, we used our encoding model to identify new sentences to activate the language network maximally (drive sentences ⬆️) or minimally (suppress sentences ⬇️). We searched across ~1.8M sentences to identify these new drive/suppress sentences.
1
0
6
@GretaTuckute
Greta Tuckute
2 years
How can we most efficiently use neuroscience data to guide the next generation of brain models? Please fill out our poll with 7 questions! We will share the results by the end of today's GAC workshop.
@KohitijKar
Kohitij Kar
2 years
For folks in-person at #ccneuro22 (or not), our GAC workshop is starting in a bit. We are conducting an audience poll to gauge where we stand on few of these issues. If you can please send us your responses that will be great! Please RT and distribute 🙏
1
7
10
0
1
6
@GretaTuckute
Greta Tuckute
2 years
2⃣ Certain words are consistently remembered better than others across participants – so although individuals differ in their exposure to the amount and kinds of linguistic information across their lifetimes, memorability is largely an intrinsic word property.
1
0
6
@GretaTuckute
Greta Tuckute
2 years
3⃣ Critically, most memorable words have a 1:1 relationship with their meaning (e.g. PINEAPPLE). They uniquely pick out a particular meaning in semantic memory, in contrast to ambiguous words (LIGHT) or words with many synonyms (HAPPY). The number of synonyms is the more important predictor.
1
0
6
@GretaTuckute
Greta Tuckute
2 years
This study wouldn’t have been possible without the data collected by @SamNormanH and @dlboebinger (papers: & ), and the original DNN-auditory cortex work by @alexjkell (paper: ) & developers sharing models!
1
0
5
@GretaTuckute
Greta Tuckute
4 months
2/ We address two main questions: 1⃣. Can we leverage GPT language models to drive ⬆️ or suppress ⬇️ brain responses in the human language network? 2⃣. What is the “preferred” stimulus of the language network, and why?
1
0
5
@GretaTuckute
Greta Tuckute
5 years
A couple of images from the very unique wilderness of rain forests, reptiles and lemurs in Madagascar!
1
0
5
@GretaTuckute
Greta Tuckute
10 months
@talia_konkle We found something similar in our work on audition: models that are trained to perform word/speaker recognition in the presence of background noise are much better at accounting for fMRI data -- also across architectures & for regression and RSA:
0
0
5
@GretaTuckute
Greta Tuckute
4 months
3/ Approach: We recorded brain data while participants read 1,000 linguistically diverse sentences using fMRI. We fit an encoding model to predict the left hemisphere language network’s response to an arbitrary sentence from GPT embeddings.
1
0
5
@GretaTuckute
Greta Tuckute
5 months
2⃣ A model trained on multiple tasks was the best model of the auditory cortex overall, and also accounted for neural tuning to speech and music. Training on particular tasks improved predictions for specific types of tuning, with the MultiTask model getting “best of all worlds”.
1
0
5
@GretaTuckute
Greta Tuckute
4 years
with incredible team: @martin_schrimpf , @IbanDlank , @KaufCarina , EghbalHosseini, @Nancy_Kanwisher , JoshTenenbaum and @ev_fedorenko
1
0
5
@GretaTuckute
Greta Tuckute
27 days
@lexfridman @LanguageMIT Hey Ted, were you able to have a 3-hour long conversation without talking about rowing?
1
0
5
@GretaTuckute
Greta Tuckute
2 years
Finally, in light of recent discussion suggesting that the dimensionality of a model’s representation correlates with regression-based brain predictions, we evaluated how the effective dimensionality (ED) of each network stage correlated with both the regression and RSA metrics.
1
1
5
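The effective dimensionality mentioned above is commonly operationalized as the participation ratio of the covariance eigenspectrum. A sketch under that assumption (the paper may use a different estimator):

```python
import numpy as np

def effective_dimensionality(features):
    """Participation ratio: (sum of eigenvalues)^2 / sum of squared
    eigenvalues of the feature covariance. Equals the number of units
    when all directions carry equal variance, and ~1 when one dominates."""
    centered = features - features.mean(axis=0)
    lam = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    lam = np.clip(lam, 0.0, None)  # guard against tiny negative rounding error
    return lam.sum() ** 2 / np.square(lam).sum()

rng = np.random.default_rng(0)
iso = rng.standard_normal((5000, 10))             # isotropic: ED near 10
low = iso[:, :1] @ np.ones((1, 10)) + 0.01 * iso  # near rank-1: ED near 1
```

Applied to a network stage's activations (samples x units), this gives a single scalar that can then be correlated with that stage's regression or RSA score across models.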
@GretaTuckute
Greta Tuckute
4 months
@neuranna Seriously.. also cutting 150 for a review.. :'(
1
0
5
@GretaTuckute
Greta Tuckute
2 years
SentSpace is open source! Access the Python module at and the hosted API (no installation) at . Work jointly with @aloxatel *, M. Wang^, H. Yoder^, @coryshain , and @ev_fedorenko . We thank @AAUW , NINDS, NIDCD for their support.
1
1
5
@GretaTuckute
Greta Tuckute
2 months
@NeuroTaha So excited for all the amazing things you'll do!
1
0
5
@GretaTuckute
Greta Tuckute
4 months
10/ Our work establishes the ability of models to control brain activity in higher-level cortical areas, like the language network. So grateful to everyone who contributed to this project! Thank you for the support @MIT_SCC @AmazonScience @AAUW #ICoN @mcgovernmit @mitbrainandcog
1
1
5
@GretaTuckute
Greta Tuckute
4 months
5/ We show that these model-selected new sentences indeed drive and suppress the activity of human language areas in new individuals. This generalization performance indicates that our encoding model has discovered features of language processing that are shared across humans.
1
0
5
@GretaTuckute
Greta Tuckute
5 months
1⃣ A model’s training data significantly influences its similarity to brain responses: models trained to perform word/speaker recognition in the presence of background noise are much better at accounting for auditory cortex responses than models trained in quiet.
1
0
5
@GretaTuckute
Greta Tuckute
5 months
However, our findings also highlight the explanatory gap that remains (predictions of all models are well below the noise ceiling), as well as the need for better model-brain evaluation metrics and finer-grained neural recordings to better distinguish models.
1
0
4
@GretaTuckute
Greta Tuckute
4 months
7/ What is the ‘preferred’ stimulus of the language network? What kinds of stimulus properties engage this network? To answer this question, we obtained 11 linguistic properties to characterize our sentences (crowd-sourced ratings of e.g., grammaticality, imageability, valence).
1
0
4
@GretaTuckute
Greta Tuckute
2 years
This study () adds to the body of evidence showing that a single hemisphere appears to be sufficient for language function — and in this case, the right hemisphere (which is the non language-dominant hemisphere in most individuals):
@ylecun
Yann LeCun
3 years
If you don't happen to have a left temporal lobe, language centers will locate themselves someplace else.
8
31
272
0
0
4