Published today in Journal of Neurolinguistics!
A new neural architecture for language (ROSE) that builds a compositional model of syntax from single units to inter-areal brain dynamics. The culmination of an 8-year project 🧠⚡
Paper link:
Chomsky, 1963:
“A computer program that succeeded in generating sentences of a language would be, in itself, of no scientific interest unless it also shed some light on the kinds of structural features that distinguish languages from arbitrary, recursively enumerable sets”.
Published today in PNAS: "A model for learning strings is not a model of language".
We respond to recent work from Yang & @spiantado attempting to derive syntax development from domain-general learning (open access, w/ @EvelinaLeivada).
Some comments on Ted Gibson’s (@LanguageMIT) appearance on Lex Fridman’s three-hour podcast and his statements concerning Chomsky, generative grammar, and neuroscience ⬇️
Very happy to be sharing this today!
🧵[1/19]
Very happy to be sharing new intracranial results! We conducted whole-brain ECoG mapping of sentence reading, combining high spatiotemporal resolution recordings from depth & grid electrodes across a large cohort (58 patients, over 6 years)🧠
🧵[1/20]
New book published this October, on syntax and oscillations. It develops a novel neurocomputational model of language comprehension and presents what I hope is a thorough, critical review of the existing literature.
Pre-ordering available soon...
New paper with @EvelinaLeivada and @GaryMarcus!
We show that DALL·E 2 fails to capture syntactic processes reflexively parsed by young children.
A summary of results 🧵
“Of course that’s your contention, you’re a first year linguistics student, you just finished reading some usage-based account of ditransitives, probably Evans or Christiansen. You’re gonna be convinced of that until next month when you get to Berwick and Hornstein…”
New intracranial work just published on music and language, mapping melodies and syntax:
We used ECoG and cortical stimulation mapping during an awake craniotomy, while the patient (a professional musician) performed comprehension and production tasks.
Chomsky gave a lecture at the Linguistic Society of Japan in November 2020, which has recently been put on YouTube. In it, he eliminates control theory, the pro element, and the SM/CI interfaces from the grammar.
Below is a thread of the lecture.
New intracranial work published today!
We show that a narrow portion of pSTS hosts a cortical mosaic coding for linguistic phrase structure building with data from 19 epilepsy patients. Likely to be the core language region (the site of MERGE).
New intracranial work published today in @NatureComms!
We conducted whole-brain ECoG mapping during sentence reading, combining high spatiotemporal resolution recordings from depth & grid electrodes across a large cohort (58 patients, over 6 years) 🧠
@adelegoldberg1 These are not really “thoughts on the op-ed”, though. You’re not giving much in the way of direct rebuttal.
You have only said things like “oh, ok”, “sure, right”, “but ok”, “who believes this?”, “what?”, “err yeh sure”.
Not super convincing to anyone sitting on the fence.
Brilliant video series on linguistics with my PhD supervisor, Andrew Nevins, including some important advice for younger scholars.
"How Linguistics is Making our World Better".
New pre-print out!
With @Emma_Holmes_90 and Karl Friston, we show how Lempel-Ziv complexity can predict grammaticality, in line with formulations of free energy minimization. This motivates a new principle of language design, Turing–Chomsky Compression.
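For readers unfamiliar with the measure: Lempel-Ziv complexity counts the distinct phrases produced by an incremental parse of a string, so repetitive sequences score low and irregular ones score high. A minimal sketch of an LZ78-style phrase count (my own illustration; the paper's actual estimator and preprocessing may differ):

```python
def lz_complexity(s: str) -> int:
    """LZ78-style phrase count: the number of distinct phrases found
    in a single left-to-right incremental parse of the sequence.
    Repetitive strings yield fewer phrases (lower complexity) than
    irregular strings of the same length."""
    phrases = set()
    current = ""
    for ch in s:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""  # phrase complete; start a new one
    # A leftover prefix at the end counts as a final phrase.
    return len(phrases) + (1 if current else 0)

# Repetitive vs. irregular sequences of equal length:
low = lz_complexity("abababab")    # parses into few phrases
high = lz_complexity("abcdbdca")   # parses into more phrases
```

The same count can be run over word or part-of-speech sequences rather than characters, which is closer to how a compressibility measure would be related to grammaticality.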
New paper with Gary Marcus (@GaryMarcus) and Evelina Leivada (@EvelinaLeivada): a brief dissection of what Large Language Models are "missing", and why sophisticated statistics can never replace theory.
Steven and I will be talking about Chomsky and large language models on April 25th, 11:00 CST!
The conversation will be live streamed on @InferenceActive and available here:
I was planning on writing a blog post about @spiantado's Chomsky paper, but there are too many points of major divergence and, to my mind, lapses in logic.
Steven: Would you be open to debating this at a mutually agreed venue?
Would this debate be of general interest to others?
New paper out today with Jill de Villiers and Sofia Lucero Morales.
If anybody needs more convincing that DALL·E lacks a facility for compositional syntax-semantics, it fails where 2-year-olds succeed.
🧵 Some notable findings (1/12) ⬇️
My talk, "A Neurocomputational Perspective on Syntax", for @abralin_oficial, discussing the neural code for phrase structure building, based mainly on ideas from my book "The Oscillatory Nature of Language".
Some brief notes on my conversation yesterday with @spiantado, picking up on a few themes and adding some critiques that we didn't have enough time to get to.
Short summary: Large language models still do not refute the generative linguistics enterprise.
What is a word?
This brief piece explores this question with an eye to functioning as a succinct pedagogical aid, and to help refine psycholinguistic experimental designs that address lexicality, morphosyntax, and semantic composition.
Some Sunday reading: "Why brain oscillations are improving our understanding of language" on @PsyArXiv + @LingBuzz, with @abenitezburraco.
Addressing the What, How, Who, Why, and When questions of the neural code for language.
Chic recursion >>
Jacobs by Marc Jacobs For Marc by Marc Jacobs in Collaboration with Marc Jacobs for Marc by Marc Jacobs
Size ‘M’ for Marc by Marc Jacobs
If the purpose of language is communication, why is it that Fedorenko and Gibson regularly ignore invitations to debate this very point and dismiss criticisms with invective rather than open conversation and dialogue?
MIT Department of Brain and Cognitive Sciences faculty members Ev Fedorenko, Ted Gibson, and Roger Levy believe they can answer a fundamental question: What is the purpose of language?
Read more:
Very happy to be sharing new work with @Emma_Holmes_90 and Karl Friston on language and active inference 🧠
We show how syntactic theories invoking computational efficiency can find a first principle grounding within the free-energy principle [THREAD]
New blog critiquing a recent paper in Nature Communications reporting that deep language models align with intracranial ECoG activity in frontal cortex during language processing.
"Brain activity aligns with artificial contextual embeddings: What next?"
Very happy to have contributed a chapter to this new volume on Nietzsche, where I look at the relation between his philosophy of language and social thought. A short thread 🧵
Pre-print:
Book:
Chomsky, 1992: Language is a perfect computational system designed for expressing unique concepts.
MIT linguists: But what about these problematic cases from Icelandic syntax?
Chomsky: … these just prove that language is even more perfect than we thought.
Howard Lasnik:
Very happy to share new intracranial work on minimal phrase structure building. Posterior temporal regions display exclusive sensitivity to phrase composition, and not lexicality. We also track structural prediction and connectivity (with @TandonLab).
'Mathematical Structure of Syntactic Merge'
Marcolli, Chomsky, Berwick
"The syntactic Merge operation [...] can be described mathematically in terms of Hopf algebras, with a formalism similar to the one arising in the physics of renormalization."
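The object being formalized is simple to state even if the Hopf-algebra treatment is not: binary Merge takes two syntactic objects and forms the unordered set containing them. A toy sketch of that set-formation core (my own illustration, not Marcolli, Chomsky & Berwick's formalism):

```python
def merge(x, y):
    """Toy binary Merge: Merge(X, Y) = {X, Y}.
    frozenset keeps the output hashable, so Merge can re-apply to
    its own output and build unbounded hierarchy; like the
    theoretical operation, the result carries no linear order."""
    return frozenset({x, y})

# {ate, {the, cake}}: hierarchical structure without word order
vp = merge("ate", merge("the", "cake"))
```

One consequence falls out of the set notation for free: self-Merge, merge(x, x), returns the singleton {x} rather than a two-membered set.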
Very happy to announce that I’ll be writing regularly for @PsychToday with a new column, “Language and Its Place in Nature”.
Today’s piece is about excellent work from @MBroderick8 in @SciReports looking at prediction in naturalistic speech processing 🧠
For @PsychToday: Artificial Language Models Teach Us Nothing About Language
A critique of some recent papers that have argued that large language models can pave the way towards genuine understanding of language processing in the brain.
Very excited to be sharing new work with Koji Hoshi and @abenitezburraco critiquing perennial cortico-centrism in the language sciences. Free to access for the next 2 months!
We address one of the oldest, and most vexing topics in cognitive neuroscience.
Published today in @biolinguistics, on formulations of Merge and their implications for philosophy of mind.
A review of Nirmalangshu Mukherji’s book “The Human Mind Through the Lens of Language”.
@AvrahamCooperMD "Because our patient was right handed, we assume that speech function was localized to the left hemisphere. Although this has not been tested, this case confirms that motor functions can be maintained [...]."
So perhaps the patient was simply right-hemisphere language-dominant?
I’ll be discussing this lecture with Greg Hickok tomorrow — sure to be a fascinating and engaging event, and it’s also free to join on YouTube and ask questions!
In this talk, @GregoryHickok will give a brief history of the neurology of syntax and describe a new model built on a motor control architecture but preserving the representational core of modern linguistic theory.
📅 21 Jun 10PM UTC @
On the ancient debate of whether language is for thought or communication. Includes a critique of recent literature.
Language design and communicative competence: The minimalist perspective
**New paper**
Why Brain Oscillations Are Improving Our Understanding of Language
Joint work with @abenitezburraco on why neural oscillations can help explain how language develops, evolved, and is processed. Covering the What, How, Where, Who, Why + When.
@focusfronting As DFW said of Dostoevsky:
“It’s a well-known irony that Dostoevsky, whose work is famous for its compassion and moral rigor, was in many ways a prick in real life – vain, arrogant, spiteful, selfish.”
What do compositionality, reference, and cognitive models have in common?
They are all basic components of human intelligence that are seemingly missing from cutting edge large language models!
Published today with @GaryMarcus.
Hey @Cosmopolitan, DALL-E isn’t nearly as smart as you seem to think. @elliotmurphy & I explain why in a new post today:
Sample (from DALL-E mini) to whet your appetite: Where’s the bowl with more cucumbers than tomatoes?
0 for 9 super-genius!
I think these kinds of ideas, which Martin often has, demonstrate how psychologically implausible most non-generative theories of language are.
To even raise this question and only offer these options (double lexicons, or one that is “twice the size”?). Truly ridiculous.
Excellent paper making a frustratingly underappreciated point: Lexical items don't just have meanings; they also host syntactic information. Hence "any functional neuroimaging experiment that manipulates lexicality will almost assuredly tax both syntactic and semantic resources".
Just published in Frontiers - I claim that the notion of an integrated lexico-semantic system has proven an obstacle to our understanding of the brain bases of syntax, given that lexical items are syntactic objects.
The world’s leading philosopher of causality cannot find the connection between openly shutting off food, water and power, consistent dehumanizing language, telling hospitals to evacuate due to imminent bombing, and already bombing a wing of the al-Ahli hospital two days ago.
The moment a hospital is bombed, the vultures howl: "Israel did it!" They can't get it into their vulture-heads that Israel does not target hospitals, period.
@jacksonhinklle
“Godfather of AI”: Geoffrey Hinton
“Father of AI”: Jürgen Schmidhuber
First-Born Son of AI: Yann LeCun (@ylecun)
Holy Spirit of AI: Gary Marcus (@GaryMarcus)
“—and therefore fails to block successive cyclic movement of ‘what’ to the main clause? You got that from Phillips, ‘Language Processing and Reductionist Accounts’, page 72. Yeah I read that too. Were you gonna plagiarize the whole thing for us? Have any thoughts of your own?”
Excited to present new work at #SfN2023 this Sunday at the language Nanosymposium!
I’ll be talking about our attempts to isolate syntactic structure building by focusing on orthographic parsing of elementary functional grammatical structure, mitigating semantic confounds 🧠 ⚡️
Our lab will be in attendance at SfN this year! @SfNtweets #SfN2023
We will be presenting 6 talks over the course of various Nanosymposia, showing new intracranial sEEG and ECoG research into speech decoding, syntactic structure building, visual attention, and lexical access!
Forthcoming in Journal of Linguistics, my review of @cedricboeckx's recent book "Reflections on Language Evolution: From Minimalism to Pluralism".
A thread of my major objections to the book (1/18)🧵
"Distinct spatiotemporal patterns of syntactic and semantic processing in human inferior frontal gyrus", in @NatureHumBehav.
Ten intraoperative brain tumour patients. First paper I've seen that uses ECoG data to defend minimalism against construction grammar!
My piece today for Declassified UK on academic-arms trade connections.
@declassifiedUK is publishing impeccable public-interest journalism at a time when mainstream outlets have effectively given up reporting on military, intelligence and surveillance issues.
Many UK universities are increasing their investments in arms companies and acting as their research and development partners. They risk becoming militarised spaces, losing their public ethos and giving the military industry a humane public face.
#linguistweets
#TW2300
Predicate order impacts copredication acceptability.
1/8🧵
Consider:
a. The White House is being repainted and issued a statement concerning taxes.
b. # The White House issued a statement concerning taxes and is being repainted.
(Research funded by @ESRC)
But the claim that human communication is efficient does not refute the language-as-thought view—what we need is to find cases where communicative efficiency and computational efficiency clash. This paper provides such an overview:
Very happy to join the editorial board for the John Benjamins book series ‘Language Faculty and Beyond’.
Many new projects coming soon and we are accepting book proposals!
Modern Language Models Refute Nothing (Rawski & Baumont)
The clearest - and shortest - refutation of @spiantado's propaganda campaign.
"Explanatory power, not predictive adequacy, is directly responsible for modern physics".
@VicBergerIV He also had Glenn Greenwald on the same week - Joe's friendships are not restricted by ideological tint.
Early on in the podcast Joe also brings up Sandy as a clear example of when Jones screwed up big time (ditto last time Jones was on).
Very happy to share new work with @EvelinaLeivada discussing a range of ambiguous terms in linguistics -- sure to prove completely uncontroversial.
We cover I-/E-language, third factor, (un)grammaticality, entrainment, and even polysemy itself.
It doesn't help that the basic tenets of linguistics (the logic of Turing machines: read/write memory, recursive functions, symbolic representations, etc.) are totally anathema to machine learning orthodoxy and much of modern neuroscience.
I dipped my toes into neuro because I was interested in finding neural evidence of theoretical constructs from linguistics--proving that things like merge or cyclic movement were REAL.
My neuro education has been a slow process of unlearning this impulse.
Excellent rebuttal to @haspelmath from the indomitable Peter Ludlow.
Friendly as always, @haspelmath says: “Ludlow is not a prominent figure and I could simply ignore him”.
Ludlow responds: “No one works in a framework-free way. No one can.”
A three hour break-down of my recent paper “ROSE: A neurocomputational architecture for syntax”, where I go through the paper section-by-section and give some overview and context.
Hosted by @InferenceActive!
Very happy to share a new pre-print on semantic internalism and philosophy of language, using complex polysemy as a case study.
A brief thread of some major positions developed in the paper.
“Did you read those new papers that show that Chomsky’s theories of language are totally on the wrong track? There’s obviously no innate linguistic structure, inductive biases or mechanisms in infant brains because ChatGPT can tell me all about astrophysics and chemistry and—”
Happy 93rd birthday Noam Chomsky. A towering mind and humanitarian who stands alongside the likes of Turing, von Neumann, Hume and Goethe as one of the most important figures in intellectual history.
Negation, ellipsis, theta-role reversal, the binding principles, comparatives, structural ambiguity and its various guises… not to mention common sense reasoning.
Current text-to-image models fail across the board with basic language, despite widespread media celebration.
Just learned about the amusing "negation" failure case for text-to-image models. I wonder how this will be overcome, aside from massive dataset augmentation with "not x," "not y," etc.
Another new pre-print: "Language Design and Communicative Competence: The Minimalist Perspective". On @PsyArXiv + @LingBuzz.
Reviewing the (broadly) communicative vs. computational perspectives on language use and evolution.