Christopher Manning

@chrmanning

126,509
Followers
116
Following
132
Media
2,337
Statuses

Director, @StanfordAILab. Assoc. Director, @StanfordHAI. Founder, @stanfordnlp. Prof. CS & Linguistics, @Stanford. IP @aixventureshq. 🇦🇺 Do #NLProc & #AI. 👋

Palo Alto
Joined September 2014
@chrmanning
Christopher Manning
2 months
I do not believe human-level AI (artificial superintelligence, or the commonest sense of #AGI ) is close at hand. AI has made breakthroughs, but the claim of AGI by 2030 is as laughable as claims of AGI by 1980 are in retrospect. Look how similar the rhetoric was in @LIFE in 1970!
Tweet media one
117
392
2K
@chrmanning
Christopher Manning
6 years
Machine Learning just ate Algorithms in one large bite, thx to @tim_kraska , @alexbeutel , @edchi , @JeffDean & Polyzotis at @Google —faster, smaller trees, hashes, bloom filters
Tweet media one
15
656
1K
@chrmanning
Christopher Manning
5 months
I’ve kept quiet on the @OpenAI fiasco, since I also don’t know what’s going on, 🤷 but I can’t possibly support today’s interim CEO—the below in a thread on “50/50 everyone gets paperclipped & dies”—or a residue board that believes in these EA-infused fantasy lands. HT @vkhosla .
@eshear
Emmett Shear
11 months
@BarbettiJames @ApriiSR @BellaRudd1 The Nazis were very evil, but I'd rather the actual literal Nazis take over the world forever than flip a coin on the end of all value.
73
11
115
69
141
1K
@chrmanning
Christopher Manning
1 month
“The fact that [transformer neural nets] model language is probably one of the biggest discoveries in history. That you can learn language by just predicting the next word with a Markov chain—that’s just shocking to me,” Mikhail Belkin says. By @strwbilly .
32
197
1K
@chrmanning
Christopher Manning
5 years
This paper gives some really nice insights and mathematical depth to what had previously (for us) been “the mystery of squared distance” in revealing the representation of parse trees in deep contextual representations (BERT, ELMo, etc.). Great to read!
@wattenberg
Martin Wattenberg
5 years
How does a neural net represent language? See the visualizations and geometry in this PAIR team paper and blog post
Tweet media one
9
332
984
4
288
1K
@chrmanning
Christopher Manning
1 year
This is truly an opinion piece. Not even a cursory attempt is made to check easily refutable claims (“they may well predict, incorrectly”). Melodramatic claims of inadequacy are made not of specific current models but of any possible machine learning approach
40
161
1K
@chrmanning
Christopher Manning
4 years
Artificial Intelligence Definitions: This (northern) summer, I spent more time than I’d like to admit coming up with a handout defining key terms in AI in 1 page, trying to be informative and suitable for non-specialists – let me know if you like them!
Tweet media one
27
276
1K
@chrmanning
Christopher Manning
2 years
Meanwhile at @Stanford , we just encourage all students to take as many CS courses as they would like …
@IDoTheThinking
Darrell Owens
2 years
UC Berkeley to limit Computer Science degrees even harder. Even with intensely hard lower div courses, a 3.3 lower div GPA requirement to declare CS, they could still only weed down +2,000 intro CS courses to 800 a semester. A sad state of education.
40
68
703
49
29
779
@chrmanning
Christopher Manning
11 months
But most AI people work in the quiet middle: We see huge benefits from people using AI in healthcare, education, …, and we see serious AI risks & harms but believe we can minimize them with careful engineering & regulation, just as happened with electricity, cars, planes, ….
35
136
728
@chrmanning
Christopher Manning
1 year
Dear @emilymbender —and @Abebab —you need to keep “reminding” people of your viewpoint because it is not an argument that is convincing to all or a self-evident truth. It is a particular academic position, which lots of people support but a good number of others disagree with. 1/8
Yes, exactly this. I wish we didn't need to keep reminding people, and @Abebab is commendable for being gentle about it! For the long form of this argument, see Bender & @alkoller 2020:
12
25
165
19
130
686
@chrmanning
Christopher Manning
9 months
Reflecting again on how knowing all the architecture & equations of the Transformer model is really of no use at all in convincingly explaining to someone how an LLM like ChatGPT can write paragraphs of lucid text in response to a prompt. I guess I’m saying “Beware reductionism”.
40
80
673
@chrmanning
Christopher Manning
2 months
LLMs and other generative AI are enormously powerful, because they soak up, abstract, and can mashup the work of millions of humans. But they are only a bit more intelligent than an encyclopedia. Central to intelligence is the ability to learn, adapt, and act in novel situations.
30
73
614
@chrmanning
Christopher Manning
5 years
People most-cited by #AAAI papers shows 25 years of AI history. 1990s greats: Pearl—Kautz—Weld—Selman; rise&fall: Conitzer—Sandholm—Sutton—Domingos—Tambe—Littman—Jordan—Veloso—Koller—Boutilier—Ng—Barto; 2010s neural boomers: Bengio—Sutskever—Hinton—Manning
Tweet media one
3
188
583
@chrmanning
Christopher Manning
1 month
LLMs like ChatGPT are an amazingly powerful breakthrough in AI and a transformative general purpose technology, like electricity or the internet. LLMs will reshape work and our lives this decade. They are not just a blurry photocopier or an extruder of meaningless word sequences.
Tweet media one
Tweet media two
16
89
571
@chrmanning
Christopher Manning
3 years
👇 Honestly, this thread is 80% wrong. This is treating science like front-end frameworks. Yes, if you’re a front-end developer who only knows 3-year-old JavaScript frameworks, then you’ll have trouble getting a gig. But that’s not what we’re teaching students 1/7
@tunguz
Bojan Tunguz
3 years
Say you are interested in furthering your career, and you are interested in NLP. You decide to pursue an MS that specializes in that topic. 1/
8
19
92
13
79
566
@chrmanning
Christopher Manning
11 months
AI has 2 loud groups: “AI Safety” builds hype by evoking existential risks from AI to distract from the real harms, while developing AI at full speed; “AI Ethics” sees AI faults & dangers everywhere—building their brand of “criti-hype”, claiming the wise path is to not use AI.
Tweet media one
22
101
538
@chrmanning
Christopher Manning
5 months
Let’s not get distracted from the main news of the day!
@satyanadella
Satya Nadella
5 months
Congratulations to Australia on winning the World Cup! Great run to the finals, India.
473
1K
41K
15
14
484
@chrmanning
Christopher Manning
4 years
I’d long wondered whether physicists were making good use of all those supercomputer clusters or just using really inefficient algorithms – it looks like some answers might be emerging. Simple AI shortcuts speed up simulations by billions of times | AAAS
13
105
457
@chrmanning
Christopher Manning
4 years
COVID-19 and AI: A Virtual Conference – Stanford’s Human-Centered Artificial Intelligence Institute (HAI) presents a special 1-day online conference, live-streamed starting 9am Pacific time, tomorrow, Wed April 1 (no joke!)
Tweet media one
5
196
451
@chrmanning
Christopher Manning
3 months
🏅 To me, this feels more like the kind of neural model interpretability research we should be doing than much of the recent work on interpretability of transformer models.
@zdeborova
Lenka Zdeborova
3 months
Emergence in LLMs is a mystery. Emergence in physics is linked to phase transitions. We identify a phase transition between semantic and positional learning in a toy model of dot-product attention. Very excited about this one!
Tweet media one
15
246
1K
4
56
431
@chrmanning
Christopher Manning
5 months
I agree. Too many PhD guidebooks recommend choosing a senior prof as advisor. Going with a new faculty may lead to a few rough edges, but most students do very well, benefiting from the top current knowledge, enthusiasm, time commitment, and aligned goals of the new faculty.
@kaiwei_chang
Kai-Wei Chang
5 months
If you're seeking an advisor in NLP, consider exploring options with recently appointed faculty. While established senior folks are well-known, junior folks shine as rising stars. In fact, many successful PhDs are the first few students of their advisor. Don't miss the chance!
3
59
484
8
33
431
@chrmanning
Christopher Manning
6 years
Pleased to be promoting Human-Centered Artificial Intelligence. Our focus areas: developing AI inspired by human intelligence; guiding, forecasting, and studying the impact of AI on human society; designing AI applications that enhance human capabilities.
4
93
424
@chrmanning
Christopher Manning
1 year
Happy to be at @CarnegieMellon today for the graduation of my oldest kid 🧑‍🎓
Tweet media one
14
0
422
@chrmanning
Christopher Manning
7 years
I’m delighted to be inaugural Thomas M. Siebel Professor in Machine Learning at @StanfordEng . Thx, @siebelfdn @TomSiebel @SiebelScholars
@SiebelScholars
Siebel Scholars
7 years
Computational Linguistics and #NLP scholar Chris Manning, has been named the 1st Thomas M. Siebel Professor in Machine Learning @Stanford .
1
15
36
21
53
406
@chrmanning
Christopher Manning
3 years
The amazing rise of reinforcement learning! (With graph neural networks and meta-learning in hot pursuit. ConvNets? Tired.) Based on #ICLR2021 keywords HT @PetarV_93
Tweet media one
8
88
391
@chrmanning
Christopher Manning
3 years
My attempt at understandable but technically correct definitions for key terms in Artificial Intelligence in one page for @StanfordHAI . With thanks for helpful feedback from people on Twitter, I’ve revised and hopefully improved a few of the definitions:
Tweet media one
15
102
387
@chrmanning
Christopher Manning
1 month
Now that everyone is writing LLM programs, the idea of doing approximate Bayesian inference by sampling along linguistic pipelines (rather than k-best, etc.) is more relevant again
7
47
385
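The idea in the tweet above can be sketched in a few lines. This is a minimal, hypothetical illustration (the stages, candidates, and probabilities are all invented): instead of propagating only the k-best analyses from one pipeline stage to the next, we sample intermediate analyses from the first stage's distribution and average the downstream score, a Monte Carlo approximation to marginalizing over the intermediate structure.

```python
import random

def stage1_candidates(text):
    # Hypothetical first stage (e.g., a tagger): returns
    # (analysis, probability) pairs over intermediate analyses.
    return [("NOUN", 0.6), ("VERB", 0.4)]

def stage2_score(analysis):
    # Hypothetical second stage: downstream score conditioned
    # on a single stage-1 analysis.
    return {"NOUN": 0.9, "VERB": 0.2}[analysis]

def pipeline_kbest(text, k=1):
    # Standard pipeline: keep only the k most probable stage-1
    # analyses and take the best downstream result among them.
    top = sorted(stage1_candidates(text), key=lambda ap: -ap[1])[:k]
    return max(stage2_score(a) for a, _ in top)

def pipeline_sampled(text, n=1000, seed=0):
    # Sampling alternative: draw stage-1 analyses in proportion to
    # their probabilities and average the downstream score, which
    # approximates the marginal over intermediate analyses.
    rng = random.Random(seed)
    cands, weights = zip(*stage1_candidates(text))
    total = 0.0
    for _ in range(n):
        a = rng.choices(cands, weights=weights)[0]
        total += stage2_score(a)
    return total / n
```

With these toy numbers, `pipeline_kbest` commits to the single best intermediate analysis, while `pipeline_sampled` converges on the marginal expectation (0.6 × 0.9 + 0.4 × 0.2 = 0.62), reflecting uncertainty that a 1-best pipeline throws away.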
@chrmanning
Christopher Manning
3 years
The way I would improve spreadsheets is by allowing row numbering to start from any integer (including negative). Then row numbers could count what it makes sense to count. Books have done this forever via Roman numerals for front matter. Surely this isn’t so hard to do in 2021?
Tweet media one
12
22
364
@chrmanning
Christopher Manning
4 years
Standing desk, #COVID19 edition
Tweet media one
11
7
358
@chrmanning
Christopher Manning
2 months
One of the simplest but most useful and appropriate pieces of AI regulation to adopt at the moment is to require model providers to document the training data they used. This is something that the @EU_Commission AI Act gets right … on p.62 of its 272 pages (!).
Tweet media one
@bcmerchant
Brian Merchant
2 months
So when *the CTO* of OpenAI is asked if Sora was trained on YouTube videos, she says “actually I’m not sure” and refuses to discuss all further questions about the training data. Either a rather stunning level of ignorance of her own product, or a lie—pretty damning either way!
30
534
3K
12
68
360
@chrmanning
Christopher Manning
2 years
I’m happy to share the published version of our ConVIRT algorithm, appearing in #MLHC2022 (PMLR 182). In 2020, this was a pioneering work in contrastive learning of perception by using naturally occurring paired text. Unfortunately, things took a winding path from there. 🧵👇
Tweet media one
10
53
341
@chrmanning
Christopher Manning
5 years
Just worry less about AI hype? Lots 50 years ago—and world didn’t end: “In 1970, Life Magazine, overstating [Shakey the robot’s] abilities, called it ‘the first electronic person’ and suggested that true ‘thinking’ machines would arrive in the near future”
10
109
328
@chrmanning
Christopher Manning
4 years
The need for open data & benchmarks in modern ML research has led to an outpouring of #NLProc data creation. But @harm_devries , @DBahdanau & I suggest the low ecological validity of most of this data undermines the resulting research. Comments welcome!
Tweet media one
11
84
337
@chrmanning
Christopher Manning
3 months
GPT-4 is still 🥇, but I stare, amazed at how good “mostly open” LLMs have become in the last 6 months. On LMSYS Chatbot Arena , the best open LLMs—Mixtral-8x7b-Instruct & Yi-34B-Chat—are roughly tied vs. ChatGPT-3.5 and get ~30% wins against GPT-4-Turbo.
10
44
317
@chrmanning
Christopher Manning
3 years
🧐 @GoogleAI ’s neural machine translation isn’t yet perfect This is a good example of how neural language models still go haywire, especially when training data is sparse. See the discussion in @YejinChoinka ’s .
Tweet media one
10
57
309
@chrmanning
Christopher Manning
5 years
In Artificial Intelligence, all the good ideas are on @arxiv —together with many mediocre and bad ones. Students come to @StanfordAILab not to steal ideas but for training and community—to learn the creativity and boldness of thought that advances science.
4
81
304
@chrmanning
Christopher Manning
2 years
I co-chaired the 1st @iclr_conf with @AaronCourville & @rob_fergus in 2013. An exciting 3 days in Arizona but it was small: no area chairs—we did all paper decisions. Now—in < 10 years—ICLR is the highest h5-rank ML conference—above NeurIPS & ICML. Wow! 😲
Tweet media one
3
14
303
@chrmanning
Christopher Manning
4 months
. @Thom_Wolf : “Academia is back as we saw at NeurIPS 2023. With many private and open-source labs closing the doors on publishing their results and data, academia rises again in visibility and is shining with many impactful papers in 2023 and exciting new work coming.”
@Thom_Wolf
Thomas Wolf
4 months
Some predictions for 2024 – keeping only the more controversial ones. You certainly saw the non-controversial ones (multimodality, etc) already 1. At least 10 new unicorn companies building SOTA open foundation models in 2024 Stars are so aligned: - a smart, small and dedicated…
19
77
411
3
39
228
@chrmanning
Christopher Manning
5 months
Seriously, @OpenAI ? The future of AI? One of the new directors and the overall board composition are just … 🤦
@emilybell
emily bell
5 months
OpenAI fires women on the board- (board chair who oversaw fuck up stays🤷‍♀️) - joining board is *Larry Summers* who once said women don’t have the same ‘intrinsic aptitude’ for STEM, and associated with Jeffrey Epstein even after he was convicted of sex offences
81
444
1K
33
26
294
@chrmanning
Christopher Manning
4 years
. @SuryaGanguli & I are co-organizing the next @StanfordHAI Conference Apr 1, 2020—no joke!—on the topic Triangulating Intelligence: Melding Neuroscience, Psychology, and AI. Botvinick— @YejinChoinka @chelseabfinn @AudeOliva —Tenenbaum— @dyamins —save the date!
9
67
285
@chrmanning
Christopher Manning
7 months
This is also my vote.
@fchollet
François Chollet
7 months
Human-level AI is harder than it seemed in February 2023.
96
105
1K
9
19
277
@chrmanning
Christopher Manning
1 year
Early 2023 vibes: The AI Ethics crowd continues to promote a narrative of generative AI models being too biased, unreliable & dangerous to use, but, upon deployment, people love how these models give new possibilities to transform how we work, find information & amuse ourselves
Tweet media one
36
22
274
@chrmanning
Christopher Manning
5 years
BERT and ELMo are present improving #NLProc #Cs224n
Tweet media one
2
25
269
@chrmanning
Christopher Manning
4 years
Four @StanfordAILab faculty kicking off the @StanfordHAI seminar series for 2020-21, starting with @AndrewYNg next week. Register for events at
Tweet media one
0
59
269
@chrmanning
Christopher Manning
4 years
Twitter has decided I want to see tweets about Transformers, lol
Tweet media one
Tweet media two
7
13
266
@chrmanning
Christopher Manning
6 years
Beat Cal! #nips2017 papers
Tweet media one
4
68
243
@chrmanning
Christopher Manning
5 years
Just a couple of years ago, I found it hard to believe that vision people were still all working with rectangular bounding boxes. I guess they’ve fixed that now. 🙂
@fvsmassa
Francisco Massa
6 years
Today we are releasing Mask R-CNN Benchmark: a fast and modular implementation for Faster R-CNN and Mask R-CNN written entirely in @PyTorch 1.0. It brings up to 30% speedup compared to mmdetection during training. Check out the webcam demo!
Tweet media one
10
298
942
2
48
245
@chrmanning
Christopher Manning
1 month
Re-upping a piece from last year by @hamandcheese on LLMs and language meaning: “I see the success of LLMs as vindicating the use theory of meaning, especially when contrasted with the failure of symbolic approaches to natural language processing.”
18
53
251
@chrmanning
Christopher Manning
8 months
The enthusiasm of big generative AI/foundation model companies to claim AI existential risks and ask to be regulated by the government is the same old story of “Regulation is the friend of the incumbent”, and especially damaging for open source, argues @bgurley . HT @s_batzoglou
@alexandrosM
Alexandros Marinos 🏴‍☠️
8 months
Bill Gurley at the All-In Summit. STOP WHAT YOU'RE DOING AND WATCH THIS RIGHT NOW.
30
60
208
8
52
248
@chrmanning
Christopher Manning
5 years
Excited that—after a lot of work—the @Stanford Institute for Human-Centered AI is launching. We’re aiming at new AI applications that augment human capabilities through both developing new AI technologies and studying and guiding the human and societal impact of AI.
@StanfordHAI
Stanford HAI
5 years
Hi, we are the @Stanford Institute for Human-Centered Artificial Intelligence. AI has the potential to transform our world – how will we ensure it improves life for all of us? Join us in our work to explore this dream of a better future. #StanfordHumanAI
14
191
561
5
48
245
@chrmanning
Christopher Manning
1 year
I’d thought this swing to self-supervised learning was meant to reduce the need for data annotation? 🤔
@alexandr_wang
Alexandr Wang
1 year
we're starting to see top companies spend the same amount on RLHF and compute in training ChatGPT-like LLMs for example, OpenAI hired >1000 devs to RLHF their code models crazy—but soon companies will start spending $ hundreds of Ms or $ billions on RLHF, just as w/compute
37
105
881
24
24
245
@chrmanning
Christopher Manning
3 months
“Open source is indisputably one of the biggest drivers of progress in software and by extension AI. But it is under existential threat from regulation that will advantage entrenched interests. We believe that open AI is vital for research, innovation, competition, and safety.”
@nathanbenaich
Nathan Benaich
3 months
Open source is one of the biggest drivers of progress in software - AI would be unrecognizable without it. However, it is under existential threat from both regulation and well-funded lobby groups. The community needs to defend it vigorously. 🧵
Tweet media one
36
120
394
3
54
243
@chrmanning
Christopher Manning
5 years
. @dbamman has many really great slides for his class Info 159/259, Natural Language Processing—beautiful examples and beautiful opening slides
1
49
240
@chrmanning
Christopher Manning
13 days
Good news for the future of humanity!
Tweet media one
9
11
242
@chrmanning
Christopher Manning
2 years
Heading to Seattle for #NAACL2022 . This will be my first travel to an in-person conference in over 2 ½ years (NeurIPS2019 in Vancouver to NAACL2022 in Seattle—but not via Puget Sound)
Tweet media one
3
0
232
@chrmanning
Christopher Manning
6 years
Surprised normally rigorous @beenwrekt calls this blog post excellent—I’d say poorly argued. 1st argument for deep learning stalling: @AndrewYNg tweeting less. 🤔 I put his data in a chart—because #infovis . Anyway, does rate correlate with AI or Ng’s jobs?
Tweet media one
@beenwrekt
Ben Recht
6 years
Excellent post by @filippie509 on saturation of the deep learning revolution. The only thing I’d add is that user-facing AI inside the big companies is already failing us at scale (recommendations, ads, engagement). More depth won't fix these problems.
11
52
150
12
28
218
@chrmanning
Christopher Manning
1 year
Human-in-the-loop reinforcement learning—DaVinci instruct—may be the most impactful 2022 development in foundation models. What can we achieve by reinventing the AI design process to start from people’s needs? Watch tomorrow’s @StanfordHAI conference 9 PST
Tweet media one
2
37
206
@chrmanning
Christopher Manning
2 months
@Simeon_Cps @LIFE A system that gains memories from one event, develops novel plans consistent with constraints, understands the implications of a changed environment, & reasons about new circumstances—without regular dumb goofs showing there’s no real world model & reasoning behind the curtain
20
13
214
@chrmanning
Christopher Manning
1 year
“The most important thing to remember about tech doomerism in general is that it’s a form of advertising, a species of hype.” The apocalypse isn’t coming. We must resist cynicism and fear about AI | Stephen Marche | The Guardian
19
48
209
@chrmanning
Christopher Manning
4 months
If you’re interested in LLMs, RLHF, etc., @natolambert has been doing a great series of interesting posts at interconnects dot ai. But I think I’m not meant to link to them these days on Twitter, right?
5
15
212
@chrmanning
Christopher Manning
1 year
As a Professor of Linguistics myself, I find it a little sad that someone who, while young, was a profound innovator in linguistics and more is now conservatively trying to block exciting new approaches. For more detailed critiques, I recommend my colleagues @adelegoldberg1 and
@adelegoldberg1
Adele Goldberg
1 year
Thoughts on Chomsky's NYT op-ed🧵 “Jorge Luis Borges once wrote” great beginning, we all appreciate Borges
19
77
493
16
20
211
@chrmanning
Christopher Manning
2 years
It’s great to see Bean Machine, a new Probabilistic Programming Language (a bit like @mcmc_stan ) built on @PyTorch . But how much impact will this have? Somehow Bayesian modeling has gone from the center of AI in the 2000s decade to the margins since 2015.
3
45
204
@chrmanning
Christopher Manning
5 years
1.5 MB really feels too low to me … but maybe I should read the article first or spend more time on compressing neural language models before commenting further. 🤔 [Kids store 1.5 megabytes of information to master their native language | Berkeley News]
11
49
204
@chrmanning
Christopher Manning
8 years
I’ve put up the slides for my #sigir2016 invited talk at – including a voiceover for the last slide so it’s clearer.
4
131
201
@chrmanning
Christopher Manning
9 months
NLP is having a moment, where LLMs have become the Swiss Army knife of almost all AI, but, nevertheless, I was only trying to give a brief history of #NLProc not AI. However, I’ll go with it being wonderful and easy to read. 😊
4
29
199
@chrmanning
Christopher Manning
2 years
. @ylecun & J Browning’s What AI Can Tell Us About Intelligence in Noēma is excellent! 👍 It clearly & dispassionately contrasts two main views on the place of symbols, as hard-coded at the outset or learned through experience, arguing well for the latter.
5
47
196
@chrmanning
Christopher Manning
3 years
To use the currently trendy terminology, what we’re teaching students is ✨meta-learning✨—a strong foundation of approaches, ideas, understanding, and tools so that they will be able to quickly learn and evolve over the following decades, as science and engineering changes 2/7
3
15
194
@chrmanning
Christopher Manning
3 years
I succumbed to threats and wrote my 2019–20 faculty report. Hot papers last year: Electra: Pre-training text encoders as discriminators, Stanza: A Python toolkit for many languages & Universal Dependencies v2
Tweet media one
Tweet media two
Tweet media three
1
34
191
@chrmanning
Christopher Manning
6 months
I can question particular classifications (SHRDLU equal to unskilled human or Grammarly at Level 3 seems generous), but: This paper is a sensible, concrete framework for assessing progress towards AGI. Congrats to @Stanford grads @merrierm & @jaschasd !
Tweet media one
14
33
181
@chrmanning
Christopher Manning
11 months
It is admirable to apply the precautionary principle, and build & deploy transformative AI technology with exceptional care. But how is that best achieved by distracting from very real AI risks by making a remote, fanciful risk of extinction from AI a global priority? 🤔
@demishassabis
Demis Hassabis
11 months
I’ve worked my whole life on AI because I believe in its incredible potential to advance science & medicine, and improve billions of people's lives. But as with any transformative technology we should apply the precautionary principle, and build & deploy it with exceptional care
71
188
1K
10
20
187
@chrmanning
Christopher Manning
3 years
I really recommend this 20 minute video. It makes really tangible the obstacles coming from a lack of AI community, teachers, and mentors—oh, and electricity—but also creative and successful ways to circumvent these obstacles
Watch legendary @black_in_ai DJ Hassan discuss his journey in machine learning, distributed research outside the confines of academia, & how he was influenced by BAI & @DeepIndaba
2
65
202
4
40
177
@chrmanning
Christopher Manning
3 years
. @kahneman_daniel calls it: “Clearly AI is going to win” [against human intelligence], and lots of other interesting thoughts on system noise, exponentials, and human judgments. The big remaining question is how to use AI advances to augment human lives.
5
42
172
@chrmanning
Christopher Manning
19 days
A picture is worth … less than a word?
@DhruvBatraDB
Dhruv Batra
19 days
I have been working on vision+language models (VLMs) for a decade. And every few years, this community re-discovers the same lesson -- that on difficult tasks, VLMs regress to being nearly blind! Visual content provides minor improvement to a VLM over an LLM, even when these…
Tweet media one
23
116
780
6
14
179
@chrmanning
Christopher Manning
5 months
Congratulations to @demi_guo_ and @chenlin_meng – for leading the rapid development towards great quality video!
@pika_labs
Pika
5 months
Introducing Pika 1.0, the idea-to-video platform that brings your creativity to life. Create and edit your videos with AI. Rolling out to new users on web and discord, starting today. Sign up at
1K
6K
26K
6
15
178
@chrmanning
Christopher Manning
9 months
I’m on Jeremy’s side! A paper—with the same issues of the inscrutability of GPT-4—claims “GPT-4 Can’t Reason” by examining “21 diverse reasoning problems”. Showing that a let’s-think-step-by-step prompt is enough for it to solve the first 3 seems a worthy contribution for a tweet.
@jeremyphoward
Jeremy Howard
9 months
I ran a small experiment, reported my results, and quoted from a paper. That's it. I really didn't expect that anyone would find that so threatening.
22
18
587
4
13
174
@chrmanning
Christopher Manning
11 months
Sensible words from a sensible bloke: “The Senate hearings, I felt a bit sad. AI has potential to optimize healthcare so we can implement a better, more equitable system, but none of that was actually discussed.”—@kchonyc via @sharongoldman
2
40
171
@chrmanning
Christopher Manning
4 months
My 2 point plan for improving productivity and world GDP: @Apple , @Google , @Microsoft coordinate a change so: • {Cmd|Ctrl}+V is Paste {Without|and Match} Formatting • In documents with paragraphs/lines {Cmd|Ctrl}+A first selects a paragraph/line; press it twice to “Select All”.
6
12
171
@chrmanning
Christopher Manning
5 years
Many computer scientists have been slow to appreciate the possibilities of neural networks – “What a waste of talent,” Alan Eustace said – but not visionary @JeffDean
2
62
165
@chrmanning
Christopher Manning
10 months
“The success of this ‘misdirected’ effort [i.e., building LLMs] has tended to support theories of meaning that explain it instead as a collective phenomenon—like Lévi-Strauss’s ‘universe made up of meanings’ or Foucault’s Archaeology of Knowledge (1969).”
8
33
164
@chrmanning
Christopher Manning
2 years
I finally read @boazbaraktcs ’s blog on DL vs Stats. A great mind-clearing read! 👍 “Yes, that was how we thought about NNs losing out due to bias/variance in ~2000” “Yes, pre-trained models really are different to classical stats, even if math is the same”
3
21
161
@chrmanning
Christopher Manning
2 years
50 years after John McCarthy’s Turing Award lecture on The Present State of Research on AI, what are now the key issues? Join @drfeifei & me on Tuesday as we focus on Foundation Models, achieving accountable AI, and AI modeling physical & simulated worlds.
Tweet media one
14
51
160
@chrmanning
Christopher Manning
3 years
Looking back on the hype, VC funding, and huge genuine progress in AI, ML, and autonomous vehicles in the 2010s, I think this will come to be seen as an inflection point: Uber, After Years of Trying, Is Handing off Its Self-Driving Car Project
4
39
158
@chrmanning
Christopher Manning
8 months
Thanks to the @huggingface trl team for their implementation and blog of using DPO training of LLMs with human feedback, following our paper !
@RisingSayak
Sayak Paul
8 months
Huge props to the `trl` team at @huggingface for authoring the best content around doing all kinds of policy optimizations for LLMs. They do it keeping accessibility at the forefront 🤗 Hope to bring some of that to 🧨 diffusers someday. For now, enjoy
4
28
140
3
19
161
@chrmanning
Christopher Manning
1 month
But it is shocking that next word prediction can drive learning the fine structure and meaning of human languages, profoundly so given Chomsky’s claims that dominated much of linguistics, while LLMs work less well on unnatural signals; see @JulieKallini :
14
19
159
@chrmanning
Christopher Manning
2 years
It must take a very particular kind of blindness to not be able to see that we have made substantial steps—indeed, amazing progress—towards AI over the last decade …
@VentureBeat
VentureBeat
2 years
10 years after deep learning's breakthrough year, @venturebeat spoke to AI pioneers Geoffrey Hinton, Yann LeCun and Fei-Fei Li, who say rapid progress in #AI will continue. But critics push back on hype, limitations and ethics/bias issues. Read more:
6
36
102
20
18
154
@chrmanning
Christopher Manning
4 years
A pandemic is such a good opportunity to teach (almost) grown kids basic life skills
Tweet media one
1
4
146
@chrmanning
Christopher Manning
6 months
My “Human Language Understanding & Reasoning” in @americanacad ’s Dædalus is a short, readable intro to language understanding and generation by computers (“artificial intelligence”). Thousands have read it in the last 3 months … so maybe you should too?
1
32
147
@chrmanning
Christopher Manning
6 years
Hey @spacy_io people ( @honnibal , @_inesmontani ), those speed comparisons on are not only outdated—as you note—but the speed for the Stanford Tokenizer is just way wrong. Time to take them down? Here are our measurements: #NLProc
3
22
141
@chrmanning
Christopher Manning
5 months
I’ll be at #EMNLP2023 ! 🇸🇬
2
3
141
@chrmanning
Christopher Manning
8 years
I made a page of all my Ph.D. graduates (procrastinating…). I have 20 of them now. Congratulations everyone! #NLProc
3
30
141
@chrmanning
Christopher Manning
1 year
Interestingly, ChatGPT presents a much more balanced perspective on this issue than you do! 8/8
Tweet media one
Tweet media two
7
13
137
@chrmanning
Christopher Manning
4 years
. @SuryaGanguli and I discuss @StanfordHAI ’s 6 hour online conference coming on Wed Oct 7, exploring the latest in machine learning, artificial intelligence, neuroscience, psychology, and how better to meld their insights; hashtag: #neuroHAI . Register now.
1
42
128
@chrmanning
Christopher Manning
8 months
When deep learning took off 2010–20, so few in Systems knew NNets or even matmuls. AI folk had to learn Systems to be stars like Krizhevsky & @ilyasut ! Now, many great Systems folk can make NNets go brrr. It’s high time for AI scientists to focus on novel AI modeling ideas again!
Tweet media one
2
14
133
@chrmanning
Christopher Manning
1 year
This does show something fascinating! But not that linguists’ knowledge of language is “bunk”. Rather, what has mainly been a descriptive science—despite what Chomsky claims!—hasn’t provided the necessary insights for engineering systems that acquire and understand language use.
@FelixHill84
Felix Hill
1 year
Happy New Year. Class that everyone's getting into language Weird that the folks doing it best are folks who've not really spent that time studying 'language' Suspect this reveals underlying truth - that extant knowledge of lang, of the sort that folks like me have, was bunk
7
2
40
11
7
129
@chrmanning
Christopher Manning
4 years
It’ll be interesting to see in a year’s time how the distribution of number of citations varies between Findings of EMNLP 2020 and regular EMNLP 2020 papers. Just randomly skimming papers as they appear on social media, a lot of the time they look equally interesting to me. 🤔
3
7
128
@chrmanning
Christopher Manning
6 years
Details in #nips2017 paper stats did surprise me—some cognitive bias? Stronger than would have guessed: @Princeton , @MSFTResearch ; weaker: @uwcse_ai , @facebook .
Tweet media one
4
51
124
@chrmanning
Christopher Manning
5 years
At #AICan . Canada really gets the importance of Artificial Intelligence. 29 CIFAR AI Chaired Professors just named. 9 women.
Tweet media one
1
26
122
@chrmanning
Christopher Manning
2 years
I would suggest that this thread errs by over-representing the proportion of the time in which human “reasoning” is actually anything akin to mathematical reasoning, such as the example of solving SAT instances. 1/
@rao2z
Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
2 years
The impressive deep pattern recognition abilities of #DNN 's such as #LLM 's are sometimes confused for reasoning abilities I can learn to guess, with high accuracy, whether a SAT instance is satisfiable or not, but this not the same as knowing how to solve SAT. Let me explain. 1/
5
60
260
4
21
123
@chrmanning
Christopher Manning
2 years
Opening theme at @DigEconLab workshop: Human-level AI is the wrong goal! We should not seek AI that does what humans do—leading to AI competing with humans, reducing their power/wages—but should look away from lamplight for AI that augments humans, increasing the value of people
11
31
119