"Are Emergent Abilities of Large Language Models a Mirage?" is a NeurIPS outstanding paper!🙌🏿
Congrats especially to the students @RylanSchaeffer @BrandoHablando & other awardees.
If you want to learn more, check out the oral & poster 👇🏿 this afternoon (Dec 14)
1/2
**Test of Time**
Distributed Representations of Words and Phrases and their Compositionality
**Outstanding Main Track Papers**
Privacy Auditing with One (1) Training Run
Are Emergent Abilities of Large Language Models a Mirage?
My first tweet!
I'm excited to share my recent interview on Metric Elicitation and Robust Distributed Learning with @samcharrington for the @twimlai podcast. Check it out! via @twimlai
#NeurIPS2020 will be holding a symposium on the COVID-19 response in the @NeurIPSConf community. We ask that you do not submit workshop/symposium proposals that are entirely on the same topic. We are happy to consider workshops with additional and/or complementary themes.
We had meant to keep this under wraps for a few weeks, but it seems that the cat is out of the bag. Excited to announce our newest preprint!!
**Are Emergent Abilities of Large Language Models a Mirage?**
Joint w/ @sanmikoyejo & @BrandoHablando
1/12
Location & time for our paper: "Are Emergent Abilities of Large Language Models a Mirage?"
#NeurIPS2023
Presentation: 3:20pm CST, Hall C2 (level 1, gate 9, south of food court)
Poster: #1108, 5pm CST, Great Hall & Hall B1+B2 (level 1)
Paper link:
2/2
Are you interested in human or algorithmic challenges when learning from human feedback? Check out the @StanfordHAI Postdoc with @msbernst and me starting Fall 2023. Information here:
Postdoc position: How should people and communities articulate how AIs should navigate difficult tradeoffs? Prof. @sanmikoyejo and I have a jointly mentored postdoctoral scholar position open at @Stanford CS starting in the fall. Information here:
The @NeurIPSConf #NeurIPS2020 workshop proposal deadline has been extended by one week. The new deadline is 3 July 2020. We will update the other due dates soon, as we complete the planning of the virtual workshops.
I gave an RLHF lecture at Stanford today; here are the slides. Also included are newer figures from other talks I've given:
* visuals on history of RLHF / related fields
* figures on advanced RL methods (CAI / DPO / rejection sampling)
@russpoldrack @tallinzen @glupyan @RylanSchaeffer Some have argued that certain improvements in model capabilities are unpredictable (along with giving a semi-precise definition of emergence). We argue that many claimed emergent capabilities are predictable, either using better statistics or alternative metrics. See thread for more.
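The "alternative metrics" point can be sketched with a toy example (hypothetical numbers, not from the paper): if per-token accuracy improves smoothly with scale, a nonlinear metric such as exact match over a long sequence still shows a sharp, "emergent"-looking jump.

```python
# Toy illustration (hypothetical numbers): a smooth underlying quantity
# (per-token accuracy) can look "emergent" under a nonlinear metric
# (exact string match over a whole sequence).

def exact_match_prob(per_token_acc: float, seq_len: int) -> float:
    """Probability of getting all seq_len tokens right,
    assuming independent per-token errors."""
    return per_token_acc ** seq_len

# Per-token accuracy improving smoothly as model scale grows:
for acc in [0.80, 0.90, 0.95, 0.99]:
    em = exact_match_prob(acc, seq_len=20)
    print(f"per-token acc {acc:.2f} -> exact match {em:.3f}")
```

Under this sketch, per-token accuracy rises gradually from 0.80 to 0.99, yet 20-token exact match climbs from roughly 0.01 to roughly 0.82, so the harsher metric alone manufactures an apparent discontinuity.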
FedAvg / fine-tuning will fail in federated domain adaptation when the domain shift is large. To address this, we propose FedGP, an effective aggregation rule, and a theoretical framework showing why it works. Exciting work with @enyij2 and @sanmikoyejo.
@autreche @NeurIPSConf From your title, the workshop proposal sounds like it's broader than COVID-19 only and should be fine. Feel free to contact us directly if you need more details. We will be happy to answer.
Generative AI adoption is growing fast, but computational resources are not keeping up. Can adaptive pricing help, and how does one implement auctions for Generative AI? See some of our early work on this (led by Zachary Robertson).
🚀 Thrilled to share some work out of our lab researching how to better price AI content using auction design theory! We consider both consumer and data worker payment in this work. Paper:
#OpenAI #AI #Stanford
Thread 🧵