Christopher Potts

@ChrisGPotts

10,980 Followers · 621 Following · 84 Media · 1,891 Statuses

Stanford Professor of Linguistics and, by courtesy, of Computer Science, and member of @stanfordnlp and @StanfordAILab. He/Him/His.

Joined November 2011
Pinned Tweet
@ChrisGPotts
Christopher Potts
9 months
All the screencasts for this course are now freely available on YouTube; if you take the official course, you also get access to an expert teaching team, and I myself do some webinar/discussion events:
@StanfordOnline
Stanford Online
9 months
Master the tech transforming AI. Natural Language Understanding taught by @ChrisGPotts starts 8/21 and runs 10 weeks. Course updated to reflect the latest developments in LLMs and generative AI. Enroll now! @stanfordnlp @StanfordAILab
0
19
65
6
39
195
@ChrisGPotts
Christopher Potts
9 months
According to the first sentence of 100% of the papers I have reviewed this year, large language models have achieved amazing success in recent years. Perhaps we could settle on the abbreviation "Language Models Are Outstanding", so papers could begin "LMAO, but …" to save space.
10
105
855
@ChrisGPotts
Christopher Potts
2 years
I'm extremely grateful to @StanfordOnline for making my 2021 Natural Language Understanding course videos accessible and available on YouTube. I've created a version of the 2021 course site with direct links to the YouTube videos:
7
112
623
@ChrisGPotts
Christopher Potts
5 years
The remaining lectures from my Natural Language Understanding course are now up: I have highest hopes for the contextual word reps one; I tried to methodically walk through those models with diagrams, to supplement the great tutorials already out there.
6
122
529
@ChrisGPotts
Christopher Potts
2 years
For my spring NLP/NLU course, I had a series of conversations with outstanding researchers, aiming to provide students a sense for these people and how they think about the field. The conversations were really rewarding, so I turned them into a podcast:
17
87
430
@ChrisGPotts
Christopher Potts
2 years
Sad news: Lauri Karttunen passed away peacefully this morning. Lauri was a towering figure in linguistics and NLP, and a vibrant presence at Stanford in Linguistics, @stanfordnlp & @StanfordCSLI. So many observations and concepts that we all take for granted trace to his work!
13
72
418
@ChrisGPotts
Christopher Potts
3 years
In nervous anticipation of my #ACL2021NLP keynote, I recorded myself giving my talk, and I've posted that version for people who can't attend the live event. The video has high-quality captions, and you don't particularly need video to follow along:
10
73
367
@ChrisGPotts
Christopher Potts
2 years
For my Introduction to Semantics and Pragmatics course at Stanford this quarter, I made screencasts of all the content, with high-quality transcripts, and put them on YouTube. All these videos and associated materials are available here:
7
79
319
@ChrisGPotts
Christopher Potts
5 years
Stanford has begun posting lectures from my course Natural Language Understanding on YouTube (a few are still to come): I'm actually happiest with the "bake-offs", which don't appear much in the videos but can be found here:
2
105
288
@ChrisGPotts
Christopher Potts
4 years
I'll be asking "Is it possible for language models to achieve language understanding?" at an upcoming @StanfordHAI / @OpenAI workshop on GPT-3. My current answer: "We don’t currently have compelling reasons to think they can't":
12
58
279
@ChrisGPotts
Christopher Potts
5 months
AI research has progressed so rapidly that a crisis is upon us, much earlier than any of us anticipated: the ACL anthology.bib file is now larger than the largest allowable file size for Overleaf.
4
18
248
@ChrisGPotts
Christopher Potts
2 years
Congratulations @jurafsky of @Stanford, winner of a 2022 @theNASciences Atkinson Prize in Psychological and Cognitive Sciences for landmark contributions to computational linguistics and the sociology of language! #NASaward
7
18
240
@ChrisGPotts
Christopher Potts
7 months
Jing Huang, Atticus Geiger, @KarelDoostrlnck, @ZhengxuanZenWu & I found this OpenAI proposal inspiring and decided to assess it. We find that the method has low precision and recall, and we find no evidence for causal efficacy. To appear at BlackboxNLP:
@OpenAI
OpenAI
1 year
We applied GPT-4 to interpretability — automatically proposing explanations for GPT-2's 300k neurons — and found neurons responding to concepts like similes, “things done correctly,” or expressions of certainty. We aim to use AI to help us understand AI:
355
999
5K
5
29
239
@ChrisGPotts
Christopher Potts
2 months
I know I am late in the project cycle for this, but I do have suggested edits for the team behind it. My overall comment is that the central claims in the original lack empirical support.
11
38
226
@ChrisGPotts
Christopher Potts
2 years
This is the OFFICIAL poster for my DistCurate Workshop talk at #NAACL2022 on Thursday (joint work with Erika Petersen), along with a pre-final recording of the talk on YouTube:
1
20
177
@ChrisGPotts
Christopher Potts
1 year
This is kind of you! Here's the flowchart slide. Do reach out to me if you answer "Yes", as your strengths, as it were, complement my own, shall we say.
@vivekkalyansk
Vivek Kalyan
1 year
I love the slide on how researchers can contribute to NLU when all headlines are dominated by gargantuan models @ChrisGPotts
0
2
12
3
22
170
@ChrisGPotts
Christopher Potts
1 year
@andriy_mulyar @sleepinyourhat @srush_nlp @chrmanning @mdredze There are so many topics that are made richer and more exciting by models being better! Explainability, language games, benchmarking and assessment, designing systems with in-context learning, everything relating to cognitive science.
4
16
157
@ChrisGPotts
Christopher Potts
1 year
@jayelmnop I assure you that Noam Chomsky does not now, and did not ever, need to do empirical work to support his claims about language. This is THE Noam Chomsky we're talking about! Also, he said "may" so it's safe.
4
1
131
@ChrisGPotts
Christopher Potts
2 years
We in Stanford Linguistics have just posted an ad for a tenure-track Assistant Professor (applications due Oct 14, 2022). It's an open area search, and we take a very expansive view of linguistics:
2
72
126
@ChrisGPotts
Christopher Potts
2 years
Could a purely self-supervised Foundation Model achieve grounded language understanding? Yes! (I don't see why not.) I'll give a fuller answer at the @sfiscience workshop "Embodied, Situated & Grounded Intelligence" today. Practice run:
2
17
120
@ChrisGPotts
Christopher Potts
5 months
There is an episode of Parks & Rec that is actually about AI research. Ron and Chris have a cooking competition. Chris spends all day crafting a fancy custom sandwich. Ron buys hamburger meat from a convenience store. The hamburger wins. Often, in AI, the hamburger wins.
6
9
118
@ChrisGPotts
Christopher Potts
2 years
For our NLU course, Omar Khattab & I have just posted a homework+bakeoff on "Few-shot OpenQA". Until recently, all systems would have completely failed at this new task. Now we predict students will find ways to do it robustly. Overview + notebook links:
1
17
115
@ChrisGPotts
Christopher Potts
6 years
As part of our forthcoming NAACL paper, @nick_dingwall and I released very fast vectorized GloVe implementations (TensorFlow and pure NumPy):
0
47
114
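For a sense of what a vectorized GloVe implementation involves, here is a minimal NumPy sketch of one full-batch gradient step on the GloVe objective. This illustrates the general technique only, not the released code; the toy co-occurrence matrix, dimensions, and hyperparameters are all assumptions.

```python
import numpy as np

def glove_step(X, W, C, bw, bc, lr=0.05, xmax=100, alpha=0.75):
    """One full-batch gradient step on the GloVe objective
    J = sum_ij f(X_ij) * (w_i . c_j + bw_i + bc_j - log X_ij)^2,
    vectorized over the whole co-occurrence matrix X (n x n).
    The constant factor 2 in the gradients is folded into lr."""
    mask = X > 0                                   # only observed co-occurrences count
    logX = np.log(np.where(mask, X, 1.0))
    f = np.where(mask, np.minimum(X / xmax, 1.0) ** alpha, 0.0)
    err = W @ C.T + bw[:, None] + bc[None, :] - logX
    werr = f * err                                 # weighted error
    gW, gC = werr @ C, werr.T @ W                  # compute all gradients first
    W -= lr * gW
    C -= lr * gC
    bw -= lr * werr.sum(axis=1)
    bc -= lr * werr.sum(axis=0)
    return float((f * err ** 2).sum())             # current objective value

# Toy usage: 50-word vocabulary, 10-dimensional vectors.
rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.poisson(1.0, size=(n, n)).astype(float)
W, C = 0.1 * rng.standard_normal((n, d)), 0.1 * rng.standard_normal((n, d))
bw, bc = np.zeros(n), np.zeros(n)
for _ in range(100):
    loss = glove_step(X, W, C, bw, bc)
```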
@ChrisGPotts
Christopher Potts
3 years
Some stats about #EMNLP2021 ethics review (I was committee co-chair):
* 280 papers were flagged by a reviewer for ethics review
* 211 were deemed in need of ethics review
* 8 led to no ethics-related suggestions for revisions
* 58: optional revisions
* 145: required revisions
5
8
110
@ChrisGPotts
Christopher Potts
3 years
We propose the AI Shibboleth Rule: "All autonomous AIs must identify themselves as such if asked to by any agent". Joint work with Tino Cuéllar, @thegricean, @mcxfrank, Noah Goodman, Thomas Icard, @DorsaSadigh; supported by @StanfordHAI:
5
27
107
@ChrisGPotts
Christopher Potts
1 year
Current information retrieval benchmarks are obsessively focused on accuracy metrics, which overstates progress and hides incredible recent efficiency innovations. In this new paper, we advocate for multidimensional leaderboards:
4
18
103
@ChrisGPotts
Christopher Potts
2 years
New podcast episode with @Diyi_Yang – Moving to Stanford and @StanfordNLP, linguistic and social variation, interventional studies, and shared stories and lessons learned from an ACL Young Rising Star:
1
12
103
@ChrisGPotts
Christopher Potts
2 years
My NLU podcast is, I assure you, full of deep scholarly discussion, but my guests are also often very funny. Here's a thread with some of my favorite funny (and/or random-seeming) moments, to perhaps entice you into some summer weekend listening:
13
20
99
@ChrisGPotts
Christopher Potts
3 years
I'm glad that #EMNLP2021 imposes no length limits on supplementary materials, and I hope that remains the norm. NLP has moved towards more disclosure, and this is at odds with length limits here, especially where it is specified that the main paper be evaluable on its own.
1
6
101
@ChrisGPotts
Christopher Potts
4 years
The very impressive new ConvoKit from @Cristian_DNM and his Cornell NLP crew provides easy access to lots of conversational datasets and tools:
0
32
94
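For anyone who wants to try ConvoKit, a minimal usage sketch (assuming the ConvoKit corpus name "movie-corpus" for the Cornell Movie-Dialogs data; the first call downloads it):

```python
from convokit import Corpus, download

# Load one of ConvoKit's bundled conversational datasets.
corpus = Corpus(filename=download("movie-corpus"))
corpus.print_summary_stats()  # numbers of speakers, utterances, conversations

# Inspect a few utterances.
for utt in list(corpus.iter_utterances())[:3]:
    print(utt.speaker.id, "->", utt.text)
```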
@ChrisGPotts
Christopher Potts
1 year
In filling out the checklist for ACL submission, I nearly panicked at "Have you used AI writing assistants when working on this paper?" My OS caught a typo in my response. This could be sophisticated contextual spelling correction. I averted my gaze and made a correction by hand.
3
5
93
@ChrisGPotts
Christopher Potts
3 years
DynaSent is a new benchmark for (currently, English-language) sentiment analysis. We think it is already a substantial resource, but our primary hope is that people use it to create amazing models that drive another DynaSent round:
1
14
89
@ChrisGPotts
Christopher Potts
4 months
DSPy's BootstrapFewShotWithRandomSearch optimizer will bootstrap demonstrations, but weaker LLMs may struggle with complex bootstrapping (e.g., CoT). This notebook shows how easy it is to invoke a stronger LLM just for this (leading to SoTA for ScoNe):
1
12
84
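The linked notebook is not reproduced here, but the core move is compact in DSPy. A hedged sketch, assuming the dspy-ai API of that period; the signature, model names, metric, and training examples are illustrative, not the notebook's actual code:

```python
import dspy
from dspy.teleprompt import BootstrapFewShotWithRandomSearch

# The weak LM answers at inference time; the strong LM serves only as the
# "teacher" that bootstraps chain-of-thought demonstrations.
weak = dspy.OpenAI(model="gpt-3.5-turbo")
strong = dspy.OpenAI(model="gpt-4")
dspy.settings.configure(lm=weak)

class NLI(dspy.Module):
    def __init__(self):
        super().__init__()
        self.cot = dspy.ChainOfThought("premise, hypothesis -> label")

    def forward(self, premise, hypothesis):
        return self.cot(premise=premise, hypothesis=hypothesis)

def exact_match(example, pred, trace=None):
    return example.label == pred.label

trainset = [
    dspy.Example(premise="The cat slept.", hypothesis="An animal slept.",
                 label="entailment").with_inputs("premise", "hypothesis"),
    dspy.Example(premise="No dog barked.", hypothesis="A dog barked.",
                 label="contradiction").with_inputs("premise", "hypothesis"),
]

optimizer = BootstrapFewShotWithRandomSearch(
    metric=exact_match,
    teacher_settings=dict(lm=strong),  # stronger LM used only for bootstrapping
)
compiled_nli = optimizer.compile(NLI(), trainset=trainset)
```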
@ChrisGPotts
Christopher Potts
3 years
I'm struck by the many tweets assuming the #FoundationModels paper was shaped by a corporate PR effort of some sort. In truth, it was just @RishiBommasani and @percyliang keeping us all together, and 100% of it was done by diverse Stanford researchers in a deliberative, open way.
5
5
84
@ChrisGPotts
Christopher Potts
1 year
New paper from Atticus Geiger, me, and Thomas Icard. Among other things, we prove that constructive causal abstraction decomposes into three more basic operations, and show that LIME, causal mediation, INLP, and Circuits can be cast as causal abstractions:
2
14
81
@ChrisGPotts
Christopher Potts
11 months
Last fall @sleepinyourhat and I debated whether current LLMs can handle negation. This led to a collaboration, new dataset, and first publication for our intern @she_jingyuan! @AtticusGeiger mentored. Short answer on the core question: it's complicated!
@ChrisGPotts
Christopher Potts
2 years
@sleepinyourhat Could you say more about the evidence for your claim about negation? We created an NLI dataset of negation interacting and not interacting with lexical entailment pairs that determine the labels, and the examples seem very hard for davinci-002 across a range of prompt types.
2
1
19
3
12
79
@ChrisGPotts
Christopher Potts
5 years
We're now accepting applications for the 6th CSLI Undergraduate Summer Internship Program, which places students in Stanford labs for 8 weeks of mentored research. Housing and a stipend provided. Prior research experience not required:
4
72
78
@ChrisGPotts
Christopher Potts
1 year
Thank you! Quick summary: LLMs are probably inducing a semantics (mapping from linguistic forms to concepts), which may be the essence of understanding. This will make their behavior increasingly systematic. This does not imply that they will be trustworthy!
@Luohan_Academy
Luohan Academy
1 year
@carlbfrey We just spoke with Prof. @ChrisGPotts, Chair of Stanford Linguistics, also @stanfordnlp, and an expert in NLP. This exact question was one we discussed. Full talk will be posted tomorrow, but here's a sneak peek of his partial answer.
0
1
5
3
12
77
@ChrisGPotts
Christopher Potts
4 years
As an area chair for @aclmeeting , I am finding the author responses to be incredibly valuable. I wonder if people who grew cynical about the value of these responses should reconsider in a system where the area chairs have just a few papers and so can watch closely. #ACL2020
1
5
76
@ChrisGPotts
Christopher Potts
6 years
Exceptionally useful overview of new TensorFlow features from Guillaume Genthial @roamanalytics , including excellent tips for #NLProc and lots of self-contained code snippets:
0
24
73
@ChrisGPotts
Christopher Potts
2 years
New podcast episode with @percyliang: Realizing that Foundation Models are a big deal, scaling, why Percy founded CRFM, Stanford's position in the field, benchmarking, privacy, and CRFM's first and next 30 years:
2
13
70
@ChrisGPotts
Christopher Potts
3 years
I am pleased to be part of the Dec 9 @coling2020 panel "Should GPT3 Have the Right to Free Speech?" with Robert Dale, @emilymbender, and @pascalefung. I expect it to give me lots to think about. The panelists will make only short remarks before open discussion. Current thoughts:
4
14
72
@ChrisGPotts
Christopher Potts
6 months
This was a wonderful week for me, full of rewarding conversations about AI research and policy. Both my talks blossomed into wide-ranging discussions thanks to all the thoughtful questions and comments from the audiences. Thanks so much for hosting me!
@eth_cle
ETH Center for Law & Economics
6 months
🚀Last week, @ChrisGPotts @Stanford unveiled the secrets behind retrieval-augmented models, and explained cutting-edge methods to enhance interpretability and explainability of such models, as well as his exciting vision of the future! 🤖🖥️ @eth_cle & @UZH_en
0
2
8
1
5
68
@ChrisGPotts
Christopher Potts
2 months
Despite having almost no money, academics still developed (just since 2020) diffusion models, FlashAttention, prefix tuning, DPO, essentially every neural IR model, many of the methods for long contexts, and the majority of the important benchmarks, among many other things.
@StanfordAILab
Stanford AI Lab
2 months
Silicon Valley is pricing academics out of AI “ @drfeifei Li is at the forefront of a growing chorus who argue the sky-high cost of working with AI models is boxing researchers out of the field, compromising independent study of the burgeoning technology”
3
72
82
6
12
68
@ChrisGPotts
Christopher Potts
4 years
With @ebudur, Rıza Özçelik, and Tunga Güngör, I'm delighted to announce NLI-TR, automatic Turkish translations of SNLI and MultiNLI with extensive human validation (and experiments with different BERT embeddings and morphological parsers):
3
12
66
@ChrisGPotts
Christopher Potts
1 year
Here's a picture of Stanford NLP course enrollments 1999–2023, with estimates for the Spring 2023 courses. We'll likely hit 1400 students this year, topping the previous high of 1272. (The dip 2016–2022 is likely due to expanded AI course offerings at Stanford.)
@stanfordnlp
Stanford NLP Group
1 year
CS224N - Natural Language Processing with Deep Learning taught by @chrmanning is the Stanford class with the 2nd highest enrollment this quarter! Behind only frosh class COLLEGE 102 - Citizenship in the 21st Century. (Photo is course staff, not students!)
3
18
135
1
7
59
@ChrisGPotts
Christopher Potts
1 year
I have ventured a plagiarism policy for my upcoming course that relies entirely on existing Stanford legislation and embraces the fact that students can derive real benefits from AI writing assistants:
9
8
59
@ChrisGPotts
Christopher Potts
3 years
Congratulations to my colleague @thegricean on winning a Stanford H&S Distinguished Teaching Award this year, in the area of First Years of Teaching!
0
4
56
@ChrisGPotts
Christopher Potts
2 years
Slightly vintage but newly posted podcast episode with @adinamwilliams – Neuroscience and neural networks, being a linguist in NLP, fine-grained NLI questions, the pace of research, and the vexing fact that, on the internet, people = men:
0
11
57
@ChrisGPotts
Christopher Potts
3 years
And, I've just learned, congratulations also to @mcxfrank on winning a Stanford H&S Distinguished Teaching Award this year, in the area of Graduate Education!
3
3
55
@ChrisGPotts
Christopher Potts
1 year
More evidence that in-context learning is not the end of programming, but rather the start of a new era in programming, with new primitives and fundamentally new capabilities.
@lateinteraction
Omar Khattab
1 year
🚨Introducing the 𝗗𝗦𝗣 compiler (v0.1)🚨 Describe complex interactions between retrieval models & LMs at a high level. Let 𝗗𝗦𝗣 compile your program into a much *cheaper* version! e.g., powerful multi-hop search with ada (or T5) instead of davinci🧵
3
97
442
2
7
55
@ChrisGPotts
Christopher Potts
2 years
Today is @nandisims' first official day as Assistant Professor of Linguistics at Stanford! Welcome, Nandi!
1
4
55
@ChrisGPotts
Christopher Potts
2 years
Are there really research communities where this is the norm? Model selection based on the test set is always illegitimate, and doing it only for one's favored model is clearly intellectually dishonest. I've never been involved in a project where someone proposed to do this.
@fchollet
François Chollet
2 years
One of the biggest differences I've seen between research and applied ML: in research, most people tune their hyperparameters on the test set to achieve the highest possible score vs. other approaches in the paper's results table
23
82
864
4
4
53
@ChrisGPotts
Christopher Potts
9 months
To help with some work on English Preposing in PPs ("Happy though we were …"), I trained a classifier for detecting the construction. I am hopeful that it is a candidate for Most Obscure Model on the @huggingface Model Hub:
4
5
52
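The Hub link is not preserved above, so the model ID below is a hypothetical placeholder; the sketch just shows how such a classifier would typically be invoked through the transformers pipeline API:

```python
from transformers import pipeline

# Hypothetical model ID standing in for the tweet's actual Hub link.
detector = pipeline("text-classification", model="someuser/pipp-detector")

examples = [
    "Happy though we were, we left early.",  # Preposing in PP ("Happy though ...")
    "Though we were happy, we left early.",  # canonical word order
]
for sentence in examples:
    print(sentence, "->", detector(sentence))
```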
@ChrisGPotts
Christopher Potts
3 years
My view is that #FoundationModels is a better name than "large language models". The "language model" part is an over-reach given current evidence, and too restrictive, and the "large" part is certainly not definitional. The paper discusses the framing of "foundation" in detail.
4
5
53
@ChrisGPotts
Christopher Potts
28 days
@tallinzen My primary interest is in understanding understanding, and so a lot of my work is in the area of understanding what understanding understanding would mean. Can you help me?
2
2
50
@ChrisGPotts
Christopher Potts
3 years
The Stanford Department of Linguistics has an opening for a two-year Acting Assistant Professor specializing in phonetics/phonology (broadly construed to include interdisciplinary work):
2
26
47
@ChrisGPotts
Christopher Potts
1 year
As department chair, I felt a professional responsibility to study (binge watch) The Chair and now Lucky Hank. I aspire to be Ji-Yoon (desperately keeping everything together) and might occasionally daydream about being Hank (agent of chaos). My actual career goal is to be Pnin.
4
1
47
@ChrisGPotts
Christopher Potts
2 years
Aaron Chibb, a student in my @StanfordOnline course, has created a dataset of English/German false friends: Aaron has obtained illuminating results from Stable Diffusion, like this example, which means "Man going to work soon":
1
3
46
@ChrisGPotts
Christopher Potts
2 years
New podcast episode with @srush_nlp – Coding puzzles, practices, and education, structured prediction, the culture of Hugging Face, large models, and the energy of New York:
2
11
46
@ChrisGPotts
Christopher Potts
29 days
A striking analysis! A high-level takeaway: just as with essentially every other area of AI, optimizing prompts can create solutions that are highly effective and unlikely to be found with manual exploration.
@haizelabs
Haize Labs
29 days
🕊️red-teaming LLMs with DSPy🕊️ tldr; we use DSPy, a framework for structuring & optimizing language programs, to red-team LLMs 🥳this is the first attempt to use an auto-prompting framework for red-teaming, and one of the *deepest* language programs to date
9
44
263
1
7
46
@ChrisGPotts
Christopher Potts
1 year
Many of our best language models use subword tokens. Great for representing meaning and context, but pretty bad when it comes to character-level tasks (e.g., spelling, word games). Type-level interchange intervention training can help!
1
6
44
@ChrisGPotts
Christopher Potts
7 months
This was a really rewarding conversation. @life_of_ccb is an exceptional interviewer. And these snippets are great. The thumbnails make me look like I am experiencing various forms of distress, but I am merely thinking hard!
@life_of_ccb
Cristian Cibils Bernardes
7 months
Chris Potts (@ChrisGPotts): Existential Threats. Full conversation at:
0
4
3
4
3
43
@ChrisGPotts
Christopher Potts
1 year
I love Cunk on Earth. I feel strongly that Cunk should not be likened to Ali G. Ali G is merely confused about the world. Cunk is confused about the world and about pragmatic language use. I feel that the combination of these two things is much harder to pull off comedically.
2
1
42
@ChrisGPotts
Christopher Potts
1 year
@azpoliak @andriy_mulyar @srush_nlp @sleepinyourhat @chrmanning @mdredze For work related to linguistic theory, I feel that the current artifacts are more exciting by a wide margin than the sparse linear models of the previous era. Today's models give comprehensive linguistic representations and embed rich conceptual structure!
1
2
42
@ChrisGPotts
Christopher Potts
1 year
The incredible fluency of ChatGPT (and its regressions from davinci-002) makes it dramatically clear how important it is for these systems to track the provenance of the information they offer. Omar Khattab, @matei_zaharia, and I wrote about this a while back:
1
10
40
@ChrisGPotts
Christopher Potts
4 years
I really enjoyed my conversation with Ed Andrews for this accessible, informative @StanfordHAI piece on how adversarial testing regimes can encourage progress in NLP:
3
6
41
@ChrisGPotts
Christopher Potts
5 months
I assume the AI researchers in my network are panicking and/or deleting all pre-2019 entries as unnecessary and/or designing super-intelligent, hard-to-control LM-based retrieval pipelines to learn to fetch the desired bib entries. Another option is to split the file into two.
1
0
39
@ChrisGPotts
Christopher Potts
1 year
@sleepinyourhat @rgblong This only goes through after one declares either that all the foundational work on neural networks done by cognitive scientists (e.g., McClelland, Smolensky, Elman) was not cognitive science or that today's LLMs would have been developed without that work. I reject both claims.
5
5
40
@ChrisGPotts
Christopher Potts
1 year
I learned from the Stanford Linguistics majors that Jeopardy! is an excellent way to review course material in class. Here is the midterm game for my Natural Language Understanding Course:
1
4
40
@ChrisGPotts
Christopher Potts
6 years
Wow, in this comment on @_shrdlu_'s post – – I said it might be a long time before NLP could digest and use ideas from continuation-based models of semantics, and two days later @sleepinyourhat and WooJin Chung posted
0
10
39
@ChrisGPotts
Christopher Potts
2 years
New podcast episode with @roger_p_levy: From genes to memes, evidence in linguistics, central questions of computational psycholinguistics, academic publishing woes, and the benefits of urban density:
0
8
38
@ChrisGPotts
Christopher Potts
1 year
@nandisims For our department review this year, I studied lots of past admissions data, and had to confront the fact that we have rejected lots of now-very-prominent scholars! The best I can say is that you're not the only one in this group to eventually get hired by us in some capacity!
1
2
38
@ChrisGPotts
Christopher Potts
3 years
This project was begun years ago by Bob West (@cervisiarius), and it seemed to slip away from us all too soon, but then Bob resurrected it, and now its somewhat depressing message truly lives on! Joint work with @jure:
1
8
37
@ChrisGPotts
Christopher Potts
2 years
It's such a pleasure to teach this course – the students are accomplished, self-motivated people who are deeply curious about the field and often looking for creative ways to apply the ideas in their work. (Also, they are expert at catching and fixing bugs in my code!)
@StanfordOnline
Stanford Online
2 years
One week left to enroll in our next professional AI course, Natural Language Understanding! Build an original project as you learn to develop systems and algorithms for robust machine understanding of human language. Enroll now. @ChrisGPotts #NLU #NLP
0
6
27
0
3
34
@ChrisGPotts
Christopher Potts
4 years
The benchmarks for sentiment analysis have become easy, but the task is far from "solved". @douwekiela, @AtticusGeiger, and I have a Dynabench task based on a model that is impressive, but we want to find all its faults. Examples/insights most welcome!
0
5
35
@ChrisGPotts
Christopher Potts
3 years
I'm absurdly happy with myself for finishing Doom 2016 on its ultra-nightmare (one death and it's over) mode on PS4. It's the only video game I've ever played seriously. Huge thanks to @RedW4rr10r and Byte: I only got through this by stealing your techniques as best I could!
3
0
35
@ChrisGPotts
Christopher Potts
2 years
In present-day AI, valuable high-level insights from experience are often left by the wayside. Interchange Intervention Training is a flexible causal abstraction method that lets you bring those insights directly into data-driven learning:
2
9
34
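The core move in an interchange intervention is to run the model on a base input while splicing in hidden activations computed from a source input; IIT then trains the network so that the intervened output matches the output the high-level causal model assigns under the corresponding intervention. A minimal PyTorch sketch under toy assumptions (random inputs, and `counterfactual_label` standing in for the causal model's prediction, which the real method computes from an explicit high-level model):

```python
import torch
import torch.nn as nn

class TwoLayerNet(nn.Module):
    """Toy network; the hidden layer is the intervention site."""
    def __init__(self, d_in=8, d_hidden=16, d_out=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.dec = nn.Linear(d_hidden, d_out)

    def forward(self, x, swap_from=None, idx=slice(0, 4)):
        h = self.enc(x)
        if swap_from is not None:          # interchange intervention:
            h_src = self.enc(swap_from)    # run the source input and splice
            h = h.clone()                  # its activations into the base run
            h[:, idx] = h_src[:, idx]
        return self.dec(h)

model = TwoLayerNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

base = torch.randn(32, 8)
source = torch.randn(32, 8)
counterfactual_label = torch.randint(0, 2, (32,))  # stand-in for the causal model

# The IIT loss pushes the intervened output toward the causal model's
# counterfactual prediction, aligning the chosen neurons with a causal variable.
logits = model(base, swap_from=source)
loss = loss_fn(logits, counterfactual_label)
opt.zero_grad(); loss.backward(); opt.step()
```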
@ChrisGPotts
Christopher Potts
2 years
@ethanCaballero Important reminder that Hinton won the top prize from the Cognitive Science Society in 2001: More generally, if you are looking for the next big ideas in AI, you'd do well to look to cognitive science!
2
1
33
@ChrisGPotts
Christopher Potts
4 months
This is an incredible resource for people seeking to navigate the space of tools for creating LLM-based systems right now. @heathercmiller and the team really did a deep dive here!
@heathercmiller
Heather Miller
4 months
A new thing I’ve been up to lately, along with Peter Zhong, Haoze He, @lateinteraction , @ChrisGPotts , & @matei_zaharia … A Guide to LLM Abstractions it’s one thing to call the OpenAI APIs from a webapp… it’s entirely another to build crazy rich…
5
71
334
0
8
33
@ChrisGPotts
Christopher Potts
1 year
Thank you! I do make predictions, but I also announce that I am getting out of the prediction game. Half the predictions I made for 10 years out came true in 2, and my secret pessimistic predictions turned out to be false in even less time. My remaining predictions are all scary.
@adnanmasood
Adnan Masood
1 year
🎧 @Stanford webinar GPT-3 & Beyond - with Christopher Potts @ChrisGPotts - Great explanation and history of the LLMs to date w/ predictions. Potts breaks down the impact of recent #NLP advancements incl. #GPT3 👉 #AI @stanfordnlp @StanfordAILab
1
11
30
4
6
33
@ChrisGPotts
Christopher Potts
1 year
This seems inspired to me! I suppose it acknowledges that we are moving into an era in which editing and fact-checking are the crucial skills, rather than prose production from scratch.
@ndyjroo
Andrew Garrett
1 year
@JoFrhwld I thought about having 1-2 exam questions that say "here's the answer this AI gave to this question, explain what's wrong here".
3
2
34
1
4
31
@ChrisGPotts
Christopher Potts
2 years
This is a powerful explanation method and also a technique for creating emergent symbolic structure in neural networks.
@KarelDoostrlnck
Karel D’Oosterlinck
2 years
🚨Preprint🚨 Interpretable explanations of NLP models are a prerequisite for numerous goals (e.g. safety, trust). We introduce Causal Proxy Models, which provide rich concept-level explanations and can even entirely replace the models they explain. 1/7
2
21
132
0
5
32
@ChrisGPotts
Christopher Potts
9 months
In celebration of #InternationalCatDay, my wife ordered us these shirts with our cat's face printed on them, from:
0
0
31
@ChrisGPotts
Christopher Potts
1 year
This was a wonderful event for me! I may acquire more regrets if I rewatch the talk, but my only regret now is that there wasn't time for more of the insightful and diverse audience questions!
@StanfordOnline
Stanford Online
1 year
Our latest AI webinar is now available to stream. Listen in as Prof @ChrisGPotts discusses the significance and implications of recent NLU developments including GPT-3. Click below to watch. #AI #NLU #NLP #GPT3 @stanfordNLP @stanfordAIlab
2
5
33
0
4
31
@ChrisGPotts
Christopher Potts
6 years
@jacobandreas @_shrdlu_ @emilymbender Introducing ML into semantics leads to a reassessment of distinctions like sentence/speaker meaning, and I think this is very healthy. (These issues have been on my mind; I drafted a commentary on @Joe_Pater's Language target article: )
1
12
31
@ChrisGPotts
Christopher Potts
4 months
For anyone interested in the coming human and societal impacts of AI, I wholeheartedly recommend the Borgesian short story Lena () and the films After Yang and Robot & Frank.
1
7
30
@ChrisGPotts
Christopher Potts
4 years
@sh_reya This is very thoughtful, thank you @sh_reya! I should say that I learned a lot about how best to approach this from the students in my courses this winter and spring, and from the many sympathetic and insightful ideas exchanged by colleagues on the CS faculty mailing list.
1
0
30
@ChrisGPotts
Christopher Potts
2 years
This means the 2021 course is fully open (except for my secret test sets!). It's almost entirely new content, including a wonderful new intro to neural IR methods from Omar Khattab.
1
0
28
@ChrisGPotts
Christopher Potts
1 year
A post inspired by conversations with students in my semprag course and in CS182. I am against university-wide policies concerning AI writing assistants on the grounds that such policies will inevitably be unjust and arbitrary. Informal rules are better.
2
7
29
@ChrisGPotts
Christopher Potts
1 year
I am delighted by this question because it is literally the most maximally vexing question one could ask Herb Clark, so I cannot wait to ask him. He may just hassle me about not having studied this, though:
@yoavgo
(((ل()(ل() 'yoav))))👾
1 year
It seems plausible that (a subset of) pragmatics will be much easier for an LM to pick up than (a subset of) semantics. so here's a question: what are the consequences (for communication) of fully modeling pragmatics, but without capturing semantics?
8
4
55
4
0
28
@ChrisGPotts
Christopher Potts
2 years
These really are wonderful resources – thanks! Also, Omar would like everyone to know that "ColBERT" stands for "Contextualized late(-night) interaction over BERT", and is thus a fittingly multi-layered reference. User's choice on how to pronounce the "BERT" part.
@jobergum
Jo Kristian Bergum
2 years
Very nice podcast episode on ColBERT V1/V2, also see our blog post on ColBERT V1
1
7
68
1
4
27
@ChrisGPotts
Christopher Potts
1 year
I have a LingBuzz paper with Erika Petersen on LLMs that, I assure you all, is just as much a provocative hot take as these. "Lexical Semantics with Large Language Models: A Case Study of English 'break'". Recursive refutation attempts most welcome.
@patrickdelliott
patrick elliott
1 year
go off
2
5
29
0
1
27
@ChrisGPotts
Christopher Potts
3 years
In addition to being a brilliant scholar, Martin was incredibly funny (and was in a Monty Python-esque comedy troupe as a student). Geoff Pullum told me that Martin described Unix manual pages as being "very good at explaining your question to you provided you know the answer".
0
3
27
@ChrisGPotts
Christopher Potts
3 years
Omar Khattab's entire series on the exciting and fast-moving area of combining NLU and IR:
@nandan__thakur
Nandan Thakur
3 years
Extremely glad to see 🍻 BEIR, one of the few preprints included in Stanford's CS224U course on NLU and IR 🔥🔥 Refer to slides 14, 15 & 16 here: #NLProc #IR #NLU
0
4
39
0
5
27
@ChrisGPotts
Christopher Potts
2 years
It is going to be impossible to stop me from telling people that ColBERT is the top-performing OpenQA system in Space (... mission design):
3
3
27
@ChrisGPotts
Christopher Potts
3 years
@arkosiorek If you're doing deep learning, you don't have to be awake to be working, because any long-running training jobs count as work. I worked 5,000 hours last week this way!
1
2
27
@ChrisGPotts
Christopher Potts
2 years
@ykilcher I signed the letter. Your video was well-done and reports a valuable result about TruthfulQA, but the deceptive deployment was indefensible, so I would be reluctant to link to it. Our statement might increase viewership, but I hope it reduces this kind of deception in our field.
3
0
26