Delighted to share our new preprint with @AlecMarantz, @DavidPoeppel and @JeanRemiKing:
"Hierarchical dynamic coding coordinates speech comprehension in the brain"
Summary below 👇
1/8
🧠 how does the brain process rapid speech-sound sequences?
💬 how is auditory content maintained over time?
🧮 how is temporal order encoded using an evolving spatial pattern?
for answers, see our latest paper!
@JeanRemiKing @davidpoeppel @AlecMarantz
incredibly excited to share I was selected for the @cogsci_soc Glushko Dissertation Award!!
a summary of my thesis "Towards a mechanistic account of speech comprehension" is online here:
and in honour of #LesbianVisibilityWeek - yup, I'm a lesbian too 🔥
really happy to share our extensive MEG naturalistic listening dataset! official publication now out:
delighted to see research teams already using the data to answer really interesting and varied questions! 🧠👂🗣️
our new paper "Neural dynamics of phoneme sequencing" is now on bioRxiv!
conducted with dream-team @jrking0, @AlecMarantz and @davidpoeppel, we use MEG to study how phonemes are processed in continuous naturalistic speech
short summary in thread below:
1/8
so extremely grateful to be selected for the @SNLmtg dissertation award!
a huge thank you to my amazing PhD supervisors @AlecMarantz and @davidpoeppel!!
Interested in studying speech neuroscience using time-resolved neural measurements and machine learning methods? I'm recruiting PhD students through Stanford's Neuroscience and Psychology programs this fall!
Find me at @CogCompNeuro (Aug) and @SNLmtg (Oct) - very happy to chat!
"Recurrent Processes Emulate a Cascade of Hierarchical Decisions"
latest project with @jrking0, now available as a preprint, here!
tracking hierarchical, feedforward and recurrent processes during the categorical perception of inputs of varying ambiguity
very happy to share @DrMattDavis's and my review chapter preprint!
an overview and interpretation of studies using information-theoretic measures (e.g. surprisal, entropy) to test how linguistic information is stored and accessed during speech comprehension
ecstatic to see this paper out! made possible through extraordinary teamwork, and special co-first duo with @matt_k_leonard. an astounding first insight into what Neuropixels can tell us about neural computation in human cortex, in support of language processing
Interested in recurrent processes in the brain? Visual perception? Ambiguity? Decoding hierarchical representations from MEG data? Check out my and @jrking0's new paper "Recurrent processes support a cascade of hierarchical decisions", now in eLife! 1/14
Our latest paper is out, in the special issue "patterns of language"! with @AlecMarantz, @DavidPoeppel and @JeanRemiKing:
"Top-down information shapes lexical processing when listening to continuous speech"
Summary below 👇
(free!)
1/9
updated preprint now online!
on morpho-syntactic processing in the brain; combining insight from theoretical linguistics, natural language processing and neuroscience.
⚡️The Laboratory of Speech Neuroscience @StanfordBrain is hiring a Lab Manager!⚡️
Successful candidates will have a passion for scientific discovery and a love of IRB protocols :)
To be considered, please complete this application:
Grateful for RTs! 🙌
NEW EPISODE OUT!!!!
@anjie_cao chats with @GwilliamsL on understanding speech in the brain through the use of Neuropixels, tiny needle-like probes placed directly in the human brain!
Very happy to share our new preprint with @AlecMarantz, @davidpoeppel and @JeanRemiKing:
"Top-down information flow drives lexical access when listening to continuous speech"
Summary below 👇
1/8
just uploaded a new preprint, which is a review/opinion piece on morphological processing in the brain. I focus on comprehension in listening and reading, but also cover the literature on production.
happy to receive comments and feedback!
interested in the topic of "Hierarchical oscillators in speech comprehension"? take a look at my commentary, here:
discussing the provocative ideas of Meyer, Sun & Martin (2019), found here:
naturalistic experimentation is great, but stimulus annotation remains a HUGE barrier
🚨LET'S CHANGE THIS🚨
know an annotation tool, pre-annotated stimulus, or a pre-existing neural dataset? fill this form!
we will make a website listing the resources!
our (@BlancoElorrieta (co-first author), @AlecMarantz, @liinapy) MEG study on neural adaptation to mispronunciation is now online!
tl;dr: short-term (<1hr) accent exposure recruits a frontal-cortex repair mechanism, not re-mapping in auditory cortex
looking forward to being part of this 2-day discussion on Large Language Models, Language Structure, and the Cognitive & Neural Basis of Language 🧠🤖💬 tune in to the live stream!
website:
webinar:
Join us online on May 13–14 for a star-studded #NSF-sponsored workshop: New Horizons in Language Science: Large Language Models, Language Structure, and the Cognitive & Neural Basis of Language! Interdisciplinary talks & discussion on three themes: 1/
interested in how phoneme sequences are processed in continuous speech?
come along to my talk tomorrow @SNLmtg in the exciting symposium starting at 8am PDT!
with a healthy sprinkling of MEG, decoding and speech theory :)
preprint:
That's a wrap for the first Bay Area Language Processing Interest Group meeting! Featuring a diverse collection of talks from Stanford, UCSF, UC Berkeley and UC Davis, and bringing together around 100 language researchers from our community. Looking forward to the next one!
What does it say about me that I see an advert: “it’s never too late to consider a career in modelling” .. and I completely assume they mean computational modelling 🤔
The Colosseum will be standing for many more years to come; I'll only be standing at poster 318 for a couple of hours today!! Let's talk speech, linguistic representations, processing architectures, MEG, decoding/encoding models.
@OHBM #OHBM2019 (back right of the top floor)
our (@BlancoElorrieta (co-first author), @AlecMarantz, @liinapy) new preprint is now online!!
we use MEG to explore the neural processes supporting adaptation to accented speech
It was a true pleasure to work with @GwilliamsL, @anne_kosem, @FlorAssaneo & Lin Wang on this review on MEG & Language. I hope we get to collaborate again in the future!
Head over to poster EEE16 to hear about decoding linguistic representations from MEG recordings of people listening to continuous speech - Hall A at the Trainee Professional Development Award session! #SfN18. I will also be presenting Tuesday morning :)
here is a copy of the poster of my latest project. we use MEG to decode linguistic representations from brain responses to continuous speech. catch me at HHH27 on Tues morning for the real-life run-down!
#SfN2018 @SfNtweets
At CCN 2023, attendees discussed ways to improve DEI - summary here:
CCN24 now has double-blind reviews, a non-weekend conference, gender-neutral bathrooms, and a DEI representative (@GwilliamsL). We will continue the conversation at CCN this year!
on how DNN random initialisations can lead to sizeable differences in the representations that networks generate -- a subtle degree of freedom that shouldn't be ignored when applying these models to neural data!
@TimKietzmann @KriegeskorteLab
Language Neuroscience Podcast #26: a great conversation with the brilliant Laura Gwilliams @GwilliamsL. We talk about her unique & circuitous path to the cognitive neuroscience of language and her fascinating new paper on neural coding of sequential order.
artificial neural networks clearly have the potential to be powerful analysis tools and “top-down” models of neural data. though, in work using these techniques, I often find myself learning more about how the models work than how the brain works..
#SfNThoughts
How does sound become meaning in the human brain?
From eardrum vibrations to the storage and manipulation of concepts, neuro-linguist and faculty scholar Laura Gwilliams @GwilliamsL shares the complexities of speech comprehension and language processing.
I am really looking forward to speaking at the @SNLmtg tomorrow (23/10) and so grateful to receive this year's dissertation award!!
10.15am PDT - "Towards a mechanistic account of speech comprehension"
see you there! 🍿
interested in learning how single neurons in human STG encode speech properties?
#SfN2022
come find me - you've got options!
10:15am Nov 10, @apanhearing poster 9 👂
6:30pm Nov 12, @SfNtweets trainee award 🎉
8:00am Nov 16, @SfNtweets poster X9 🧠
our discussion of diversity, equity and inclusion at @CogCompNeuro was really productive! looking forward to turning the conversation into action items.
here are my opening slides in case anyone would like to use or borrow from them:
excited to be part of the Minds, Brains, and Machines Initiative- bringing together neuroscience and machine learning to understand how the human brain achieves speech comprehension
Welcome both @neurograce and @GwilliamsL to the NYU Minds, Brains, and Machines Initiative (now listed on our key faculty page and recruiting collaborators!)
don't miss this special issue on language composition in the brain! the 15 papers cover a wide range of topics, pooling insight from linguistics, neuroscience, cognitive psychology and computer science
my contribution can be freely accessed here:
so excited to be part of the @SNLmtg symposium this year :) a huge thank you and congrats to @AriannaZuanazzi for organising such a great topic and line-up!!
October 24th, 8am PDT -- don't miss out!
beautiful study! comparing the neural dynamics of speech sequencing from my MEG work to a self-supervised RNN. they find similar dynamic encoding, and ~position-invariant representations 🗣️👂🧠
Paper with @larryniven4, Naomi Feldman, and Sharon Goldwater to appear in CogSci 2024: "A predictive learning model can simulate temporal dynamics and context effects found in neural representations of continuous speech" (1/7)
Our online seminar series starts again this Friday (3:30pm local time) with Dr Laura Gwilliams (@GwilliamsL @UCSF) presenting on her excellent research into "Decoding the neural architecture of speech comprehension". Register for free at:
#AllWelcome
Not able to see "Neurobiology of language: Key issues and ways forward II" in real time? Fear not! The talk recordings are here:
Including how I think we should progress in understanding the neural architecture of speech comprehension
Really excited to be a part of this meeting! Looking forward to many interesting talks and useful discussions on the future of the neurobiology of language
@AlexWoolgar and I are seeking a postdoc with strong analysis skills! The project applies ML decoding to EEG recordings of autistic non-speakers during audiobook listening, to understand receptive language processing. Based in Cambridge, UK. More details here:
Join BBI as William Idsardi hosts Laura Gwilliams, who will discuss her theoretically grounded, empirically tested, and computationally explicit account of how the brain achieves an understanding of speech with such speed and accuracy.
📅 12/8
⏰ 4PM
💻
So grateful for the opportunity to take part in @Salzburg_SAMBA, and ecstatic to top it off with a poster prize! 😁 thank you so much for a fantastic meeting #SAMBA2018
Congratulations to the @Salzburg_SAMBA 2018 poster prize winners: Laura Gwilliams, Lisa A. Velenosi and Mariya Manahova!
Something about this picture is telling me that there will be no shortage of brilliant female cog neuro speakers in future SAMBA meetings.
#SAMBA2018
When we group the features into 6 hierarchical levels, we find the hierarchy overlaps a lot in time. Also, the higher-order features are decodable earlier than lower-order features. Thanks to prediction, the meaning of a word is decodable before the person hears it!
6/8
a few people asked me about decoding analyses after my talk at #SNLmtg18 - in case it's helpful, here's a link to a chapter preprint on exactly these issues, to appear in The Cognitive Neurosciences (co-written with other MNE-Python contributors)
Excited to speak and discuss "New Approaches and Technologies in Auditory Neuroscience" with the fantastic line-up of speakers at #ARO2024 on Sunday, 1:45pm, Salon F 🧠👂
Here is the provisional schedule for the CNSP-Workshop 2021. We are looking forward to talks from an amazing lineup of speakers. More information to come!
really enjoyed the language break-out session today. we had a great discussion about how linguistics, NLP and neuroscience can come together to better understand human language production and perception. summary of discussion topics coming soon :)
This week, Nicole Liddle (Sr. Clinical Research Specialist) presented @GwilliamsL and colleagues' recent preprint on hierarchical dynamic coding of linguistic features during speech comprehension. This 🧵 explores her thoughts (🤍 & ❔)
Ilina Bhaya-Grossman @bhaya_ilina is a PhD student in the UC Berkeley-UCSF Bioengineering program and Deb Levy @deb_f_levy is a postdoc at @UCSF. Their video is a wonderful balance between being educational and engaging.
#CogSciMindChallenge
don't miss out on the language cross-collaboration break-out session, tomorrow 2pm! on how we can bring together linguistic theory, computational models and neuroscience to understand human language perception and production
@CogCompNeuro #CCN2019 #keynotemagicmove
Very excited that our submission "Tracking the building blocks of pitch perception in auditory cortex", with @elliebeanabrams and @AlecMarantz, got accepted for a talk at #smpc2019!! See you in NYC in August!
make sure this is on your radar! fantastic, intense, week-long summer course with an incredible line up of instructors. the location is pretty great, too :)
"Neurobiology of Language: Key Issues & Ways Forward, Part II" is an upcoming 2-day online meeting on 16-17 March '22, with an outstanding lineup of speakers, live sign language interpretation + virtual posters. Full programme & free registration here:
EARS resumes next week! Tuesday, Feb 13 at 1 pm ET
@GwilliamsL, Stanford University: "Neural architecture of speech comprehension"
Chris Rodgers, Emory University: "Active auditory processing in mice before and after hearing loss"
great talk by @annakasdan - bringing EEG out of the lab and "into the woods"! studying brain-to-brain synchrony between musicians listening to their own performances and professional performances @smpc2019
Introducing Simply Neuroscience's new Action Potential Advising Program (APAP)! APAP is a free virtual mentorship program that will pair high school and undergraduate students with more experienced professionals in the fields of neuroscience and psychology.
#science #opportunity #brain