“A beacon of clarity”. Spoke at US Senate AI Oversight committee. Founder/CEO Geometric Intelligence (acq. by Uber). Rebooting AI & Taming Silicon Valley.
When I started working on AI four decades ago, it simply didn’t occur to me that one of the biggest use cases would be derivative mimicry, transferring value from artists and other creators to megacorporations, using massive amounts of energy.
This is not the AI I dreamed of.
OpenAI just released a model that can generate 1-minute videos.
You simply cannot argue that these models don't / won't compete with the content they're trained on, and the human creators behind that content.
What is the model trained on? Did the training data providers consent…
Rough Translation: We won’t get fabulously rich if you don’t let us steal, so please don’t make stealing a crime!
Don’t make us pay 𝘭𝘪𝘤𝘦𝘯𝘴𝘪𝘯𝘨 fees, either!
Sure Netflix might pay billions a year in licensing fees, but *we* shouldn’t have to!
More money for us, moar!
Amazing! But, um, ants have six legs.
We are about to have a whole generation of children educated by fake videos that are completely plausible to naive audiences yet biologically incorrect. 🤯
Black Mirror has arrived, ahead of schedule.
An entire cast of deepfaked people tricked a CFO out of $25 million. “(In the) multi-person video conference, it turns out that everyone was fake”
Deepfaked shit is getting real.
I will always stand in awe of Noam Chomsky. I just sent him my recent essay; 12 minutes later he replied with smart comments, including a point I should have thought to include. He’s 93.
Since
@OpenAI
still has not changed its misleading blog post about "solving the Rubik's cube", I attach a detailed analysis, comparing what they say and imply with what they actually did. IMHO most of it would not be obvious to nonexperts.
Please zoom in to read & judge for yourself.
If all we had was ChatGPT, we could say, hmm “maybe hallucinations are just a bug”, and fantasize that they weren’t hard to fix.
If all we had was Gemini, we could say, hmm “maybe hallucinations are just a bug”.
If all we had was Mistral, we could say, hmm “maybe hallucinations…
"This is the same climate of antisemitism that has led to the massacre of Jews throughout the centuries. This is not just harassment. This is our lives on the line."
@MIT
student Talia Khan highlights the rise of antisemitism at MIT.
Remember how Sam Altman told the US Senate he had no “direct” investment in OpenAI? and how they gushed over his apparent selflessness?
👉He didn’t mention that (presumably) he had equity in YC which likely has equity in OpenAI.
And now this:
👉“OpenAI Agreed to Buy $51 Million…
Now we know why Sam Altman went around the world last summer meeting world leaders: his company won’t make it big unless they can convince governments to give them one of the biggest handouts in history.
OpenAI lobbying:
(a) ChatGPT useless without copyrighted material,
(b) Requires special exemptions from the law.
So, LLMs are thinly disguised plagiarization algorithms and pilfering copyright is the only way to make them profitable, right?
@GaryMarcus
👉Seismic from
@geoffreyhinton
: ‘The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it”’
Let us invent then a new breed of AI systems that mix an awareness of the past with values that represent the future that we aspire to.
Our focus should be on figuring out how to build AI that can represent and reason about *values*, rather than simply perpetuating past data.
The number of people simply unable to imagine that the Board acted in good faith with a legitimate and serious concern boggles my mind.
We don’t of course know what happened, but simply dismissing that possibility seems foolish to me, especially given that they were not…
Take this seriously:
@geoffreyhinton
on AI possibly wiping out humanity: ‘It's not inconceivable’
Is coding faster and having fun chatbots to play with worth a 1% risk of that coming true?
Common view; almost certainly wrong. The board’s decision 𝘸𝘢𝘴𝘯’𝘵 a “𝖿𝗎𝖼𝗄 𝗎𝗉”. Instead, it was almost certainly a Hail Mary:
👉This Board’s sole job was to look out for humanity - NOT to protect the brand.
👉They must have seen danger in something Sam was doing
👉They…
@ilex_ulmus
@GaryMarcus
But something here really doesn't make sense. It seems like a huge and pointless fuck up by the board and they seem more capable than that
Prediction: By end of 2024 we will see
• 7-10 GPT-4 level models
• No massive advance (no GPT-5, or disappointing GPT-5)
• Price wars
• Very little moat for anyone
• No robust solution to hallucinations
• Modest lasting corporate adoption
• Modest profits, split 7-10 ways
Appalled that
@timnitgebru
, AI ethics icon, can’t lay off a lovely 87-year-old man, one of the great scientists of our time, whose journalist son was kidnapped and publicly beheaded in an act of political violence.
Her power has gone to her head;
her humanity has vanished.
A lot of people are missing the funny part. Let me break it down.
If OpenAI has actually achieved AGI, they get their software back from Microsoft.
Elon’s lawsuit has put them in the position of having to prove that they *haven’t* reached AGI, even though OpenAI likes to hint that…
@OpenAI
Dear
@openAI
This is baloney: “We are dedicated to the OpenAI mission and have pursued it every step of the way.”
The original mission was to be “unconstrained by a need to generate a financial return”, “not organized for … private gain”, “seek[ing] to open source technology…
Just watched Noam Chomsky give a fascinating and up-to-the-minute talk on deep learning, science, and the nature of human language.
I loved the first half and found myself deeply skeptical of the second.
A 🧵summarizing what he said, and my own take.
I *am* an AI expert, and I don’t think AGI is coming soon. My track record is quite good:
- I anticipated the challenges of out-of-distribution generalization in 1998
- hallucination errors in 2001
- troubles w driverless cars in 2016
- that radiologists would not be quickly…
If you don't agree that AGI is coming soon, you need to explain why your views are more informed than expert AI researchers. The experts might be wrong -- but it's irrational for you to assert with confidence that you know better than them.
Open letter to all European leaders,
The recent events at OpenAI are likely going to lead to considerable, unpredictable instability.
This highlights the fact that we can’t really trust the companies to self-regulate AI where even their own internal governance can be deeply…
Hot take on Google Gemini and GPT-4:
👉Google Gemini seems to have by many measures matched (or slightly exceeded) GPT-4, but not to have blown it away.
👉From a commercial standpoint GPT-4 is no longer unique. That’s a huge problem for OpenAI, especially post drama, when many…
The Gemini era is here. Thrilled to launch Gemini 1.0, our most capable & general AI model. Built to be natively multimodal, it can understand many types of info. Efficient & flexible, it comes in 3 sizes each best-in-class & optimized for different uses
"absolutely brilliant"
—Nobel Laureate Danny Kahneman
The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence
It's what I wish I had had time to say at the
#AIDebate
:)
Finally ready, free, on arXiv. Happy Reading!
No. Tesla recall, MSFT Bing fail, and Google Bard fail are NOT independent; each reflects the fact that you cannot build AI in the real world from Big Data and deep learning alone.
Too many edge cases and not enough reasoning. We need new approaches; current AI has been oversold.
No, we are not even close. AGI would require systems that
👉essentially never hallucinate
👉reliably reason over abstractions
👉can form long term plans
👉understand causality
👉reliably maintain models of the world
👉reliably handle outliers
We currently have none of that.
@GaryMarcus
Gary, I’m non-technical… but the hype around agi coming very soon and annihilating us all is kind of stressing me out. Is this a valid concern?
The number of great AI-generated feature films created in the next three years will be approximately equal to the number of great AI-generated novels that were produced in the last 14 months.
For people who suddenly think they're going to be 'filmmakers' with AI, how are you going to:
Block a scene?
Get coverage?
Choreograph a fight?
Move specific lights?
Give acting notes?
Deal with continuity?
The confidence vs cluelessness on display is something else.
Holy shit! OpenAI just gave me sneak preview early access to GPT-5 (to do some red-teaming) — and it’s incredible!
What really makes me happy is that they let me look at the training data, too, so I could do proper tests of its generalization. This thing is LIT!
And wow,…
If OpenAI does collapse, WeWork style, it will likely be seen as a tale of hubris.
They knew infringement was going to be a major issue.
They proceeded anyway.
That will not play well in front of juries.
In the end it could devour them.
OpenAI’s reply to
@ElonMusk
may turn out to be one of the biggest own-goals of all time, because (as explained below) it inadvertently revealed that they have been lying from the jump.
It may also put them in legal jeopardy over their 501(c)(3) non-profit filings.
Huge…
What a mendacious company to the very core.
For years OpenAI promised to “open source technology for the public benefit” when possible, even in legal filings as a “charitable” organization.
And yet they never meant it, except as a recruiting ploy.
@AGRobBonta
@Public_Citizen
GPT-3 is a better bullshit artist than its predecessor, but it's still a bullshit artist.
an investigation,
@techreview
, co-authored with Ernest Davis.
I was watching this AI-generated video, and at one point in it I saw an image that I immediately recognized where it was "inspired" from. It was Leonardo DiCaprio in Wolf of Wall Street. Left: the AI-generated image; right: where I think it came from:
@GaryMarcus
@geoffreyhinton
So many people are confused about the relation between human cognitive errors and LLM hallucinations that I wrote this short explainer:
Humans say things that aren't true for many different reasons
• Sometimes they lie
• Sometimes they misremember things
• Sometimes they fail…
OpenAI is in serious trouble.
👉The excerpt below is particularly damning, because the prompts that elicited the plagiarism in no way requested that the system draw on the NYT at all.
👉
@jason_kint
&
@CeciliaZin
largely converge on the overall seriousness of the suit.
👉OpenAI…
Here are four examples. Again, the lawsuit includes one hundred of them. You get the point. I find this exhibit to be an incredibly powerful illustration for a lawsuit that will go before a jury of Americans. Again, it's impossible to argue with this. /14
“Cesspools of automatically-generated fake websites, rather than ChatGPT search, may ultimately come to be the single biggest threat that Google ever faces. After all, if users are left sifting through sewers full of useless misinformation, the value of search would go to…
*Fabulous* question: How come smart assistants have virtually no ability to converse, despite all the spectacular progress with large language models?
Thread (and substack essay), inspired by a reader question from
@___merc___
Deep Learning Is Hitting a Wall. What would it take for artificial intelligence to make real progress?
#longread
in
@NautilusMag
on one of the key technical questions in AI.
Board is completely on the ropes, almost everyone hates them, they have no equity at stake, and they are in a no-win situation.
Yet they are still apparently fighting to have a replacement board that respects the mission.
That’s integrity.
And suggests they can’t unsee something.
X has a moral duty to take steps to reduce bots,
@elonmusk
.
Something has clearly broken in the codebase in recent days.
Now getting pages and pages of these; most w no pic, no followers, just joined, blatantly autogenerated names.
5 minutes of coding could get rid of most.
Midjourney’s Merry Christmas to all
- We are likely using copyrighted materials
- We will ban you if you try to find out
- We may sue you if you try to find out
- If you get sued because the output that you generated violates copyright or trademark laws, don’t look at us.
Ok, game on. Since you asked, here are a few of my credentials; looking forward to comparing and hearing yours.
👉 My 2001 technical book on cognition and neural networks (The Algebraic Mind) anticipated many of the problems current AI is facing (including many that
@ylecun
now…
What has
@GaryMarcus
built in the field of machine learning so far? How is he relevant in the AI industry? We need to stop celebrating these self-proclaimed AI experts who haven't even written a single technical AI book and yet speak on behalf of the AI community about AI…
Counterpoint: scaling alone hasn’t even brought LLMs to reliable multi-digit integer arithmetic.
Also: exponential progress in Go playing hasn’t led AlphaGo to take any interest whatsoever in human territory or even led it to ask what a stone is.
must-read new study from
@Google
confirms all central claims of Deep Learning: A Critical Appraisal (2018):
- machine learning often generalizes poorly
- extrapolation beyond training data is key
- urgent need for better ways of adding in domain expertise
Top 10 reasons
#deeplearning
isn’t getting us to artificial general intelligence. A critique of deep learning, 5 years into its resurgence, by
@garymarcus
Gotta love how the opening of
@ylecun
’s recent lecture
- completely confirms that the diagnosis and problem formulation of his NYU colleagues Davis and Marcus in Rebooting AI (2019) was, and remains, correct.
- yet somehow entirely fails to cite his colleagues’ analysis.…
Yann LeCun
@ylecun
delivered a lecture on Objective-Driven AI.
He began with a reality check: "Machine Learning falls short compared to humans and animals!"
Here's his insight on constructing AI systems that learn, reason, plan, and prioritize safety:
1/5
Holy shit! “citing what he characterized as Mr. Altman’s history of manipulative behavior”
Murati and Sutskever both raised concerns about Sam. Huge NYT scoop. What Ilya saw was bad behavior.
PS
@karaswisher
, this is what real journalism looks like. Unbelievable that you…
Wait, what? We are going to address injustices by putting most humans out of work, funneling almost all cash to a few AI-lords? How’s that going to work exactly?
What if one could *prove* that hallucinations are inevitable within LLMs?
Would that change
• How you view LLMs?
• How much investment you would make in them?
• How much you would prioritize research in alternatives?
New paper makes the case:
h/t…
This Ultraman infringement decision in China is a big deal.
If things shake out the same way in the U.S., GenAI will likely be forced to either
A. Make a ton of licensing agreements
or
B. Go out of business
The history of Napster may well repeat itself.
Conjecture: Elon has given up on Tesla.
He once said that “Solving self-driving was the difference between Tesla being worth a lot of money and being worth basically zero”
And now he has just let the vision lead at the FSD project walk out the door, without matching comp. He…
not a vote of confidence from Elon in FSD leading to fleets of robotaxis any time soon. sounds like he has all but given up.
if he thought that was imminent, he would match the compensation.
👇
@ilyasut
really does not look happy in this recent clip. he looks scared.
I doubt he has discovered something as significant as he seems to think, but if he has, and is this worried, I just wonder whether everyone has been cheering for the right team.
If
@elonmusk
loses against OpenAI
• Every new company in the valley is going to file as a “nonprofit”
• Each will give away a shitty, crippled version of its software in order to claim that it works for the benefit of humanity
Sorry but no, intelligence is not a fundamental property of matter.
Most arrangements of matter don’t have it.
(Also, anyone who studies animal cognition long ago realized that intelligence is not uniquely human; that’s not the news flash it seems to be.)
Q: "After doing AI for so long, what have you learned about humans?"
Sam Altman: "I grew up implicitly thinking that intelligence was this, like really special human thing and kind of somewhat magical. And I now think that it's sort of a fundamental property of matter..."
It's…
.
@openAI
in a nutshell:
Feb 2019: GPT is “too dangerous to release”; organization’s priority is “not enabling malicious or abusive uses of the technology“, a “very tough balancing act for us.”
Sept 2020: it costs $400/month
OpenAI’s singular quest to keep humanity safe.
Mayday.
The sudden pollution of science with LLM-generated content, known to yield plausible-sounding but sometimes difficult-to-detect errors (“hallucinations”), is serious, and its impact will be lasting.
Please share this short essay (link below) with scientists and with…
Potential copyright infringement & why generative AI may be in for a rough ride
Video below on how widespread the problem is – and a 5,000 word deep dive w
@rahll
@
👉 Why film & game studios, artists & actors may sue
👉 Why users are at risk
👉 Why the…
Stunning reversal of fortune: In less than a year, ChatGPT has gone from being mistaken for AGI to being an insulting shorthand for robotic, incoherent, unreliable, and untrustworthy.
BREAKING: Company valued at $86 billion tries to offer publishers peanuts in exchange for cutting massively into their business with factuality-impaired alternative.
@theinformation
Free nonlegal advice?
Publishers, just say no; wait for the NYT lawsuit to shake out.
Calling it now: The $86B OpenAI tender will someday be seen as the WeWork moment of AI.
👉GPT-5 will either be significantly delayed or not meet expectations.
👉Companies will struggle to put GPT-4 and 5 into production (see below)
👉Competition will increase, margins will be…
I know I say it a lot, but using LLMs to build customer service bots with RAG access to your data is not the low-hanging fruit it seems to be. It is, in fact, right in the weak spot of current LLMs - you risk both hallucinations & data exfiltration.
George Hotz “[GPT] is what kills Google”
Google: Game on!
——————————————-
Below, Round 1: Which country won the most Eurovision contests?
Google on left; GPT on right. Not sure I want that much personality in my search results…
People arguing for AI rights based on complex text processing algorithms need to ask whether they would assign the same rights to calculators, smart watches, and the internet.
“I don’t quite get how it works” + “it surprises me” ≠ it could maybe be sentient if I squint.
My new essay
@TheHill
argues we need an AI rights movement. AIs are no longer just tools. They are quickly becoming digital minds integrated in society as friends and coworkers. The future turns on whether and how we include them in the moral circle. 🧵
OpenAI is in much, much deeper doo-doo than they let on yesterday.
👉Fundamentally, they claimed that “regurgitation” stemmed from “misuse that is not typical or allowed user activity, and is not a substitute” for The New York Times.
👉 Without speaking to the NYT case in…
How much was OpenAI really worth, a week ago? And what are the employees thinking about right now?
Here’s a thought:
If OpenAI has a lot of breakthrough, difficult-to-replicate IP (code, data, infrastructure, etc.), with a real business case, a lot of employees would stick…
Literally every conversation I have on Twitter about long-term risk leaves me more worried than when I started.
Standard counterarguments are mostly these
- Ad hominem, about who is in the long-termist movement, which is *entirely* irrelevant to the core question: do we have a…
GenAI is starting to look like Typhoid Mary.
Last May, the celebrated 54-year-old LexisNexis touted hallucination-free legal citations produced by Generative AI. Instead, it is making up cases — from 2025 and 2026!!!
Talk about torching one’s reputation on the altar of GenAI.
—…
With Sam Altman (
@sama
) acknowledging that scaling is not all that we need, & that AI has a need for new ideas, I am re-upping my February 2020 arXiv article (which got lost in the early days of the pandemic), The Next Decade in AI.