IIT has many problems. but "pseudoscience" is like dropping a nuclear bomb over a regional dispute. it's disproportionate, unsupported by good reasoning, and does vast collateral damage to the field far beyond IIT. as in vietnam: "we had to destroy the field in order to save it."
who saw LLMs coming?
e.g. decades (or even 5+ years) ago, X said: when machine learning systems have enough compute and data to learn to predict text well, this will be a primary path to near-human-level AI.
behavior(+) without consciousness: philosophical zombie
consciousness without affect: philosophical vulcan
conscious thought without senses: philosophical ...?
the video for my #NeurIPS2022 talk on "could a large language model be conscious?" is now online. i've given versions of the same material at adelaide, deepmind, NYU, and the #LearningSalon, but this is the best version. written version coming soon!
so, what's the best reason to think large language models are not sentient? more precisely: what's the best candidate for X such that LLMs clearly lack X and X is required for sentience?
i'd like to publish "could a large language model be conscious?", the written version of my 2022 neurips talk, before it's entirely obsolete. any suggestions for where? it's a little informal for an academic journal and a little academic for general media.
tom nagel's keynote talk at @ASSC26NYC: "psychophysical monism as an ideal". this is a rare video of tom discussing consciousness. now emeritus at NYU, tom rarely speaks at conferences, but this one was just two blocks from his home so he couldn't say no.
my APA presidential address on "does thought require sensory grounding? from pure thinkers to large language models" is now published. i argue that the answer is no: even if LLMs lack sensory grounding, this doesn't entail that they can't think or understand.
one criticism of large language models: they only model (represent) text with no models of the world. i take it this is an empirical (and conceptual) issue: all LMs handle text, some may develop world models to do so. what's the best evidence that LLMs do/don't have world models?
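(for concreteness: the most common way researchers gather that evidence is probing, i.e. checking whether a world-state feature is linearly decodable from the model's hidden states. a minimal sketch, where the files, labels, and setup are hypothetical placeholders rather than any particular study:)

# probing sketch: is a world-state feature linearly decodable from LM hidden states?
# hidden_states.npy and world_states.npy are hypothetical placeholder files.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X = np.load("hidden_states.npy")  # shape (n_examples, hidden_dim), from one LM layer
y = np.load("world_states.npy")   # shape (n_examples,), discrete world-state labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")

(high probe accuracy against a suitable control is evidence that the feature is represented; it doesn't yet show the model uses that representation.)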
the association for the scientific study of consciousness (ASSC) is meeting in june 2023 at NYU. #ASSC26 @ASSC26nyc @theASSC
here's the conference poster. thanks to luke roelofs for the amazing image of NYC in a brain. [1/7]
Remember: The argument for AGI ruin is *never* that ruin happens down some weird special pathway that we can predict because we're amazing predictors.
The argument is *always* that ordinary normal roads converge on AGI ruin, and purported roads away are weird special hopium.
what cognitive capacities does a mouse have that no current AI system has?
(extra points for intelligence-related capacities with associated behavioral tests. the AI system can have a virtual/robot body if necessary. capacities like eating that require a bio-body don't count.)
"could a large language model be conscious?" is now published (with an afterword, updating 8 months after i gave this as a
@NeurIPSConf
talk) in the
@BostonReview
.
my slides for last friday's #phildeeplearning debate on "do language models need sensory grounding for meaning and understanding?" are now online at . i was on the "no" side. my final summary slide, with a slightly more nuanced view, is below.
i first knew sydney shoemaker through his writing. it was always complex, dense, and rich with ideas.
sydney's 1970s work on functionalism and qualia ran deep. many times i had what i thought was a philosophical insight, only to find that shoemaker had gotten there long ago.
a new blockbuster 2023 issue of "philosophical perspectives" is out, on philosophy of mind, with many major pieces.
first up: Claudia Passos-Ferreira (@cpassosf) on "Are Infants Conscious?": a definitive philosophical analysis of infant consciousness.
are large language models sentient? all will be revealed october 13, in my talk launching the NYU program in mind, ethics, and policy. MEP (directed by @jeffrsebo, ) will be devoted to the nature and value of animal and AI minds.
call for abstracts for a conference on the philosophy of deep learning, at NYU march 24-26. co-organized by @raphaelmilliere (columbia), @De_dicto, and me, with many excellent speakers. deadline jan 22.
somebody might tell wikipedia that although i've met alan chalmers twice and i'm very fond of his book "what is this thing called science", he is pretty definitely not my father.
descartes lectures on "large language models and the philosophy of mind" in tilburg july 29-31 (alas the lectures are not by descartes, just by me). plus a workshop on the same topic to which anyone can apply.
a familiar thought experiment posits a being with no senses who is nevertheless conscious and able to think. the classic source is perhaps avicenna's "flying man" from the 11th century. where else does this thought experiment appear, in philosophy, science fiction, or elsewhere?
our long-planned conference on the philosophy of deep learning is coming March 24-26 at NYU, starting with a debate on meaning/understanding in language models, followed by a weekend of talks, posters, and panels. there will be a livestream in case you can't make it in person.
Very excited to share the final line-up and program of our upcoming conference on the Philosophy of Deep Learning!
Co-organized with @davidchalmers42 & @De_dicto and co-sponsored by @columbiacss's PSSN & @nyuconscious.
Info, registration & full program:
GPT-3: hallelujah for zombies:
Now I've heard there was a secret chord
That a zombie played, and it pleased the Lord
But you don't really feel anything, do you?
Your eyes are dead, your skin is cold
You stagger around with an empty soul
And you don't even know the Hallelujah
thread summary: although the scaling hypothesis is often discussed as if it's widely held, few seem to publicly endorse it. apparent exceptions: @ilyasut, @irinarish (some version), scott alexander (40%), and some say amodei and hinton. a survey & metasurvey would be useful!
@ESYudkowsky that's true of most arguments. it's still good to have a canonical version, so that (serious) people who reject the conclusion can be asked which premise they're rejecting, and can be directed to a canonical argument for that premise in turn.
where are there good analyses of the text-to-meaning problem? i.e. given a body of text in an unknown language (and pretty much nothing else), figure out what it means?
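(one formal handle on this, from the unsupervised-translation literature: treat it as aligning the unknown language's distributional structure with a known language's, e.g. via orthogonal procrustes. a toy sketch, with random placeholder embeddings and an assumed seed pairing of rows, nothing more:)

# orthogonal procrustes alignment: map embeddings X, learned from the unknown
# text alone, onto a known language's embedding space Y.
# the arrays below are random placeholders; a real run would use trained embeddings.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 300))  # unknown-language word embeddings
Y = rng.standard_normal((1000, 300))  # known-language embeddings, rows paired
                                      # via a seed dictionary (or bootstrapping)

U, _, Vt = np.linalg.svd(X.T @ Y)     # solves min_W ||XW - Y|| over orthogonal W
W = U @ Vt
X_aligned = X @ W                     # nearest neighbors in Y's space now induce
                                      # a candidate translation for each word

(how far this generalizes to "pretty much nothing else", with no seed pairing at all, is exactly the hard part.)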
in which noam chomsky seems to endorse both russellian monism (not so far from panpsychism?) and hard-problem eliminativism (not so far from illusionism?).
these are my #NeurIPS2022 challenges for building extended large language models that may be conscious.
if these challenges (esp. 5-11) are met, will the result be conscious AI? (needn't be human-level AGI.) if no: what else is needed?
(slides: )
after 200 replies: here are the top 15 cognitive capacities where mice beat current AI. the top 5 by votes are closely tied to sensing, feeling, and drives:
1. survival (drive/ability)
2. consciousness/sentience/feeling
3. empathy/social cognition
4. emotions (many)
5. olfaction
a key point about impact on the field, from the perspective of a policy maker: when you say "IIT is pseudoscience" loud enough, many people hear/infer "consciousness research is pseudoscience". well worth reading.
@hakwanlau @kanair Absolutely. But the wording is really the issue here. Basically, you're telling policymakers such as myself that a) one of the important theories in consciousness science is pseudoscience (which a policy maker reads as 'bullshit') 1/n
one place where sydney shoemaker strongly influenced me: his 1975 argument against zombies and for functionalist theories of consciousness was a "debunking argument" years ahead of its time. by this 2020 article he's got me arguing for illusionism!
machine learning pioneer #yoshuabengio sets out to "make a dent in the hard problem" of consciousness, in his keynote talk at @ASSC26nyc: "sources of richness and ineffability for phenomenally conscious states".
on two versions of the language of thought hypothesis. i've been meaning to write something about this for years. thanks to @quiltydunn, @NicolasPorot, and @Ericmandelbaum for prompting this with their cool new BBS paper on LOT ().
thanks for all the names for conscious thinkers without senses! background: i'm talking at the APA in two weeks on "can large language models think?". one issue: do AI systems need sensory grounding to think? i need a name for the key case! egghead? floater? pure thinker?
what is the state of the art in using language models (and extensions thereof) for goal-directed action? which systems among these are closest to showing signs of agency (of various kinds), and what are the biggest challenges on the path to more robust agency?
ASSC 26 starts tomorrow in NYC with an amazing lineup and a record 700 people attending. the hashtag is #ASSC26, and you can tag @ASSC26nyc and @theASSC if you like. there are still free tickets available for friday's "25 years of consciousness" event: .
@GaryMarcus yes, it's correct. mice are widely regarded as sentient (even insects are serious candidates). i think it's quite likely that within ten years we'll have AI systems with at least the cognitive capacities of mice. if so, those systems will be serious candidates for sentience.
an enjoyable conversation about consciousness and reality with swami sarvapriyananda, a hindu monk who is resident swami and head of @VedantaNY and who has a strong interest in both ancient and contemporary philosophical ideas.
i'm not exactly a doomer, but for a discussion of how to make one key argument without the concept "intelligence", see pp. 16-19 on "the intelligence explosion without intelligence" in .
What happens to the doomer arguments if you remove the "intelligence" concept (and similar concepts)? Mostly, they fall apart. They turn out to be of the form "AI gets us X, and Y leads to apocalypse, now let's call both X and Y 'intelligence'".
results and analysis of the 2020 philpapers survey: now published in philosophers' imprint as "philosophers on philosophy: the 2020 philpapers survey".
@PhilosophersIm1 @dbourget @PhilPapers_CDP
i tweet from @davidchalmers42 because both @davidchalmers and @david_chalmers were already taken. as it happens, both are associated with accounts that have never tweeted. any ideas on how to reach them? davids, if you're out there, let's talk!
@keithfrankish a more common view is that introspection reveals "C exists" and some of its properties; a long theoretical inference then can get to the fundamental. cf: in newtonian physics, perception reveals that motion exists and some of its properties; then inference gets to the fundamental.
a well-filmed BBC piece on AI sentience, made by , featuring @alexhanna, adrienne williams, and me, along with the dramatized words of @cajundiscordian and lamda 2.
what is the best evidence that grounding in perception and action will (or won't) improve a large language model's performance on text-only tasks? bonus points if the same improvement couldn't have been made with more text data.
with 231 votes in, "eleatic principle" beats out "alexander's dictum" by 55% to 45% as the name of the thesis that to be real is to have causal powers.
the hive mind got this right! everyone should call it the eleatic principle. this 🧵 explains why (warning: scholastic rant).
p.s. yes, i'm interested in this thought experiment partly because of "symbol grounding" views that say an AI system would need connections to the environment in order to genuinely think or mean anything at all. if a system without senses could think, that can't quite be right.
what are some new and interesting results about the relative capacities of multimodal models and pure language models on the same (text) tasks?
(yes, i just happen to be thinking about "do language models need sensory grounding for meaning and understanding?".)
there are rumors that the "consciousness wager" (), a 25-year bet between christof koch and me about the neural correlates of consciousness, made at the bremen ASSC conference in 1998, will be resolved at ASSC 2023. [5/7]
congratulations to @jenmcweeny and keya maitra on the publication of their volume "feminist philosophy of mind" (articles by @ediazleon, @amykind, @susanbrison, @pauladroege, and many others) in the philosophy of mind series at OUP. it's an instant classic.
@GaryMarcus @MetaAI @ylecun when i moderated the @ylecun vs @garymarcus debate at NYU in 2017, both sides agreed that current approaches wouldn't get us to human-level AI, and they disagreed about what more was needed (lecun: better learning, marcus: more innate machinery). has that changed?
you may have heard about the outrageous proposal by the australian catholic university @acumedia to close many institutes, including the extraordinary dianoia institute of philosophy @DIP_ACU, and to fire most of the academics involved.
TFW you get back from dinner to find not one but three extensive streams of live tweets about your talk. thanks to @togelius, @kchonyc, and @rgblong for their fine tweeting, and to @jeffrsebo for putting the event together.
joe ledoux's presidential address at @ASSC26nyc, on "our four realms of existence": biology, neurobiology, cognition, and consciousness.
@theamygdaloid and check out , for "neuroscience meets rock and roll".
all of my earliest memories (age 3-6) are old memories: memories of things i've remembered before. i find it surprisingly difficult to come up with new memories even from teenage years. is there a good way to generate them?
elisabeth parés pujolràs (@elisabethpares), winner of the @theASSC william james prize for 2023, presents her prize-winning paper on "the neural bases of motor awareness" at @ASSC26nyc.
bonus homework question/tongue-twister (and the philosophical question i'm ultimately interested in):
what sort of thing could a pure thinker think, if a pure thinker could think things?
@ESYudkowsky anyway, the best i could do is a good argument for AI risk, not AI ruin. i lack a decisive argument that (ruin-avoiding) alignment is superhard.