We support our @StanfordAILab alum @timnitGebru, her critical research on fairness & ethics in AI, and her tireless work in organizing for a more ethical & inclusive AI community. It’s urgent that voices like hers be heard and that we prioritize work on inclusion & ethics in AI.
We are proud to offer many of our best AI courses free on YouTube. Want to study with a cohort and course assistants? You can do that in the AI Professional Program via @StanfordOnline.
New this year: Deep Multi-Task and Meta Learning with @chelseabfinn
Congratulations to @StanfordAILab Director @chrmanning, awarded the 2024 IEEE John von Neumann Medal, one of @IEEEAwards’s top awards “for outstanding achievements in computer-related science and technology”, for his advances in #NLProc.
The Stanford Artificial Intelligence community is sustained and built in a major way by our Asian-American and Asian members. ‘Them’ is ‘us’. Love not hate. 💔
Wouldn't it be great if AI could debug our programs for us?
Check out our latest blog post on using ML to repair programs from error messages, with a promising approach that leverages program-feedback graphs and self-supervised learning.
The @Stanford AI Lab is excited to share that Chris Manning (@chrmanning) has become the Director of SAIL. And many thanks to Fei-Fei Li (@drfeifei) for doing so much to grow SAIL & its community. She is now Co-Director of the new Human-Centered AI Initiative
Adapting when your data distribution changes after you train your model remains one of the big challenges of Machine Learning. Check out our latest blog post on work addressing this challenge -
Curious about why self-training with unlabeled data can magically improve a classifier’s performance? Check out the theoretical explanation in this blog post from @ColinWei11, @jhaochenz, and @tengyuma
The Stanford AI Lab supports and deeply appreciates the talented Iranian members of our community. We strive for equitable treatment of people from any nation of origin in both admissions and the operation of the lab. Discrimination has no place in our community. 💚
Progress in Foundation Models needs not just GPUs but great ideas! Here are some of our impactful ones from 2023:
• Flash Attention—and FA 2—much faster transformers by careful rethinking of GPU data paths
• Stanford Alpaca—How to build small, good instruction-tuned LLMs 1/2
Artificial intelligence isn’t just language models! We’re hard at work on robotic systems that understand and work in the physical 3D world. Recent highlights include:
• @leto__jean et al. TidyBot: How can household robots learn your preferences from just a few examples?
Silicon Valley is pricing academics out of AI
“@drfeifei Li is at the forefront of a growing chorus who argue the sky-high cost of working with AI models is boxing researchers out of the field, compromising independent study of the burgeoning technology”
AI courses at @Stanford: “It’s all student demand-driven, which just reflects the huge breakthroughs that have been made in AI recently and the huge enthusiasm among students to learn this,” said Christopher Manning Ph.D. ’94 (@chrmanning), director of SAIL.
Congratulations to @StanfordAILab faculty Dorsa Sadigh on receiving an MIT Tech Review TR-35 award for her work on teaching robots to be better collaborators with people
Now on the SAIL blog: @hazyresearch and @krandiash write a retrospective on the Hazy lab's journey in data-centric AI, and a new effort to build a community resource for data-centric AI on GitHub ().
Check it out!
AI postdocs available! The Stanford AI Lab is trying to help in the current #COVID19 pandemic. Some of that is via research but another need is jobs for great young people. We’re opening positions for 2 years of innovative research with Stanford AI Faculty
Wouldn't it be great if AI could reason with commonsense knowledge?
Check out our latest blog post on a new question answering model, QA-GNN, that jointly reasons with language models and knowledge graphs, by @michiyasunaga!
Congrats to Ron Dror, @raphaeljlt, Stephan Eismann, and Masha Karelian at @DrorLab and coauthors at @RDasLab for the front page of @ScienceMagazine: Geometric deep learning of RNA structure
AI postdocs available! Stanford AI Lab is delighted to offer postdocs to some exciting young AI researchers in these difficult times. Positions for 2 years working with SAIL faculty. If you’ve procrastinated, this is the week to get your application in!
Read our newest blog post!
On using weak supervision, or high-level noisy sources of labels, to efficiently label training data.
Courtesy of @paroma_varma, @ajratner, @bradenjhancock, and others at Professor Chris Ré's @HazyResearch lab.
What if we could teach robots to do a new task just by showing them one demonstration?
In our newest blog post, @deanh_tw and @danfei_xu show us three approaches that leverage compositionality to solve long-horizon one-shot imitation learning problems.
How contextual are contextualized word representations?
In our new SAIL blog post, @ethayarajh discusses how ELMo, BERT, and GPT-2 contextualize words — how they’re alike, and how they’re different.
Language is a powerful mechanism for people to communicate goals, beliefs and concepts. Can we use language to train machine learning models?
Read our new blog post on Learning from Language Explanations:
A major long-term goal of computational neuroscience is to identify the brain's learning algorithms. Can we use artificial neural networks to guide this discovery?
Check out this blog post by @aran_nayebi about recent work accepted at #NeurIPS2020:
Large language models can learn in-context even with random labels. This post from @sangmichaelxie and @sewon__min explains this with a Bayesian inference framework for in-context learning, connecting and .
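As a hedged sketch of that framework (notation here is the generic Bayesian formulation, not necessarily the post's exact symbols): the LM is viewed as implicitly marginalizing over a latent concept $\theta$ shared by the prompt examples,

```latex
p(\text{output} \mid \text{prompt})
  = \int p(\text{output} \mid \text{prompt}, \theta)\,
         p(\theta \mid \text{prompt})\, d\theta
```

If the prompt examples concentrate the posterior $p(\theta \mid \text{prompt})$ on the right task concept, the prediction can be correct even with random example labels, because the concept is inferred largely from the inputs and format of the demonstrations rather than from the labels themselves.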
The Stanford AI Lab has joined twitter—eventually! Some of our 2018 news: We welcome star new faculty member @tengyuma; @ermonste won the 2018 IJCAI Computers and Thought Award; and Oussama Khatib and Nils Nilsson were elected to the National Academy of Engineering (@theNAEng).
ICLR 2020 is being hosted virtually right now -- we’re excited to share all the work from SAIL that’s being presented with this blog post in which you’ll find links to papers, videos and blogs posts for all our papers:
LLMs have achieved remarkable success in code generation, but even 99% accuracy in small-scale generation can accumulate errors when scaling up. Curious about how to address the issue? Check out Clover: from @chuyue99, @ying11231, @oded, @ClarkBarrett7.
SAIL Postdoc #3: Ruohan Gao (@RuohanGao1) obtained his Ph.D. from The University of Texas at Austin and won its best doctoral dissertation award for 2021. Ruohan is currently working on teaching machines to see, hear, touch, and interact with the world through multisensory data.
Ever wondered just how we could ever really trust our notoriously unexplainable neural nets with anything substantial? Check out our latest blog post! Courtesy of @StanfordHAI Director of Research Steve Eglash.
How can a robot solve complex sequential problems?
In our newest blog post, @KuanFang introduces CAVIN, an algorithm that hierarchically generates plans in learned latent spaces.
GPT-3 is a powerful in-context learner, but what does that mean? Learn about some of the quirks behind this little-understood phenomenon at our latest SAIL blog post by @frieda_rong:
Check out our latest SAIL Blog Post!
Courtesy of @qi2peng2 and @danqi_chen from @stanfordnlp, an overview of two new datasets meant to enable more conversational, explainable, and capable question answering systems.
SAIL is delighted to welcome two new Asst Profs this year!
@Diyi_Yang works on Computational Social Science & socially aware #NLProc
@sanmikoyejo works on trustworthy machine learning, including federated ML & metric elicitation
Congratulations to @YSongStanford, @StefanoErmon, and colleagues (at Google, some ex-Stanford!) on their ICLR 2021 Outstanding Paper Award for Score-Based Generative Modeling through Stochastic Differential Equations
Where do the rewards for robotic reinforcement learning come from? In this blog post we explore how using crowdsourced language annotations and videos of humans, we can learn reward functions and enable them to generalize more broadly.
Hop, hop! In new work published at #ACL2022, @michiyasunaga, @jure, and @percyliang demonstrate how explicitly modeling document relations during LM pretraining significantly improves multi-hop reasoning across documents. Check it out!
#NeurIPS2021 is happening this week and we are super excited!
Check out all the amazing talks and papers from the Stanford AI Lab that will be presented in our new blog post.
Our video with @StanfordVideo, AI at Stanford 1962–2022, last night won Gold at the 52nd Northern California Area Emmy Awards 2023 (@EmmySFTV) in the Science and Technology Category, following on from earlier Gold and Silver Telly Awards.
Full video (13m):
Check out our post on WILDS, a benchmark of in-the-wild distribution shifts! Distribution shifts are ubiquitous, but real-world shifts are underrepresented in the literature. WILDS presents 10 datasets with real-world shifts to help close this gap!
Popular @StanfordAILab ML teacher @AndrewYNg tells @_KarenHao lessons learned doing AI in companies: “Forget about building an AI-first business—start with a mission”. Focusing on technology is great for a research lab but almost never for a business.
“There’s nothing artificial about AI. It’s inspired by people, it’s created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility.” —@drfeifei in Wired.
Now live, a new blog post by @ebiyik94!
"Have you ever felt you are losing too much time in traffic? In this post, we present a mathematical model of traffic congestion and show that autonomous cars have the potential of significantly reducing it!"
The International Conference on Robotics and Automation (ICRA) 2020 is being hosted virtually from May 31 – Jun 4.
We’re excited to share all the work from SAIL and Stanford that’s being presented in our new blog post:
Sometimes, machine learning models focus on irrelevant patterns in data; yet, removing those patterns can hurt performance, and this issue can disproportionately affect certain groups. Why is that?
Read our latest post to find out more! -
While humans can seamlessly combine our sensory inputs, can we teach robots to do the same?
@michellearning writes about how we can use self-supervision to learn a representation that combines vision and touch.
Large language models are inefficient, opaque, and static—but *retrieval* can help. New post highlights how retrieval-based systems ColBERT-QA and Baleen advance the state of the art for answering open-domain questions and verifying complex claims.
Our latest blog post is out!
Check it out for a recording and summary of highlights from the event "AI and the Future of Work" featuring Dr. Kai-Fu Lee (@kaifulee), professor @Susan_Athey, and professor Erik Brynjolfsson (@erikbryn).
We do a lot of cutting edge research at the Stanford AI Lab, but really our main job is training people.
Here is a list of the great SAIL Graduates of 2024, who are variously looking for academic and industry jobs. Compiled by @judyhshen and @johnhewtt.
Want something better than k-means? Great work from @mo_tiwari & co on BanditPAM, a SOTA k-medoids algorithm. `pip3 install banditpam` and you're good to go! Check out the blog post:
ChatGPT is 11 months old today!
It’s still hard to grasp that it’s been less than a year. 😲
Interest in Artificial Intelligence has always been much higher than for Data Science, and it increased in the 2020s, but it exploded in the wake of the release of ChatGPT & other GenAI.
Did you know chatbots can be used in self-paced learning to significantly improve student recall and retention over flashcards?
Check out an exciting new post on this on our blog!
By @brycebowl13 @sherrysruan, with leadership from @AIforHI @landay
Doug Lenat (1950–2023) passed away yesterday. Doug got a Stanford CS PhD in 1976 for his work at SAIL on AM (Automated Mathematician) and was Stanford faculty 1977–1982.
Thereafter, Doug worked until the end on Cyc, his vision of building computers with common sense knowledge.
How can we enable autonomous vehicles to naturally navigate social scenarios, such as merging into traffic?
@iamborisi tackles this question in our latest blog post, diving into recent trajectory forecasting methods for autonomous driving.
Congratulations to the @StanfordAILab Chirpy Cardinal team, led by Ryan Chi and mentored by @chrmanning, which has won first place for Scientific Invention and Innovation in the Alexa Prize Science SocialBot Grand Challenge 5! Full team details are here:
Announcing the winners of the #AlexaPrize SocialBot Grand Challenge 5: Team GauchoChat of @UCSB for the overall competition, and team Chirpy Cardinal of @Stanford for the scientific innovation category. Congrats, all! 🎉
Hot off the press: an interactive dashboard showing the predicted effects of COVID-19 restrictions that won KDD Best Paper in the Applied Data Science Track! A must-read post. Congrats to @serinachang5 et al on their awesome work.
New blog post on AI for education: how can meta-learning help teachers give feedback to student work at scale?
Spoiler: the model was deployed to 16,000 student solutions in a large online course for code education.
NYT Coverage:
How can we design collaborative AI-agents that can adapt to the conventions you prefer? We explore the use of modular policies for transferring and learning new conventions!
Check out our latest blog post by @RickardGabriels!
Topology is a combinatorial property that is tricky to utilize in gradient-based methods, and so is underexploited in Machine Learning - TopologyLayer might change that.
The International Conference on Machine Learning (ICML) 2020 is being hosted virtually this week.
We’re excited to share all the work from SAIL that’s being presented, and you’ll find links to papers, videos and more in our latest blog post:
A full house at @StanfordEng for today’s @StanfordAILab workshop on Generative AI and Foundation Models organized by Chris Ré (@HazyResearch). (Want to be at next year’s workshop? Get in touch!)
The NLP community has made great progress on open-domain question answering, but our systems still struggle to answer complex questions using lots of text. Read our latest blog post, by @qi2peng2, about enabling multi-step reasoning in these systems!
Get a look at the whole trajectory of artificial intelligence research at Stanford in our AI at Stanford 1962-2022 video.
Released in 2023, it received an Emmy Award!
(Northern California area, Science and Technology category)
How heavy is an elephant? In our latest blog post, @xikun_zhang_ explains that a weakness in current language models is that they don't capture scales well, and proposes a new model, NumBERT, that does. Length of reading time? 2 minutes to 2 years.
EMNLP 2020 is being hosted virtually this week, and we're looking forward to seeing everyone there!
We’re excited to share all the work from SAIL that’s being presented, and you’ll find links to papers, videos and more in our latest blog post:
Congratulations to SAIL PhD student Anna Goldie on receiving an MIT Tech Review TR-35 award for her work on AI layout of computer chips with deep reinforcement learning
Reinforcement Learning has been making great progress, but is still hard to apply to settings where collecting a lot of experience is difficult -- such as robotics.
Check out our latest blog post on a method for speeding things up, with teachers!
Congratulations to @chelseabfinn for her ONR Young Investigator Award for 2021 on Flexible Vision-Based Robotic Manipulation via Meta-Learning and Deep Reinforcement Learning!
What's one key to help take the next leap in AI?
In this blog post, @HazyResearch, @tri_dao, @realDanFu, and @krandiash explain why their answer is *longer sequences*, and highlight recent work in this direction (FlashAttention, S4). Check it out!
Read the latest SAIL blog post, out now!
SAIL PhDs Shushman Choudhury, Michelle Lee (@michellearning), and @andrey_kurenkov review answers to "what are best practices AI researchers can follow to avoid unintended consequences of their research?"
Causal abstraction provides a powerful set of tools for accurate, human-interpretable explanations of AI models. Check out our latest from Atticus Geiger, @ZhengxuanZenWu, @KarelDoostrlnck, @ElisaKreiss, Noah Goodman, Thomas Icard, and @ChrisGPotts:
The @StanfordAILab & @StanfordOnline pioneered the AI Professional Program to offer up-to-date, deep, technical #AI content—the material we teach at @Stanford—to everyone, either as free YouTube playlists or in a cohort-based class with TAs for a fee. 👇
#ACL2021NLP is happening this week and we couldn't be more excited!
Check out all the great work from the Stanford AI Lab being presented in our latest blog post.
Congratulations to @DorsaSadigh, @chelseabfinn, and students Annie, Dylan, and Ryan on winning the Best Paper Award at the 4th Conference on Robot Learning (CoRL) for: Learning Latent Representations to Influence Multi-Agent Interaction.
SAIL Postdoc #2: Esin Durmus (@esindurmusnlp) is working on building more faithful text generation systems and improving metrics for evaluation of generation systems. She is also exploring social applications of #NLProc such as detection of racism, colorblindness on social media.
Congratulations to former @StanfordAILab postdoc Ginger Smith on receiving an MIT Tech Review TR-35 award for her work on federated learning preserving fairness and privacy
Looking for a good read for the holidays?
Check out our new blog post by @fereshte_khani, with a simple example explaining when machine learning models are incentivized to discriminate and why they devalue a good candidate from a lower-performing group.
Do neural networks trained with SGD contain hidden geometries? By combining symmetry and differential equations, @KuninDaniel and @Hidenori8Tanaka uncover broken conservation laws, which they verify empirically!
Find out more in our latest post -
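As a hedged sketch of the kind of argument involved (shown here for a translation symmetry under idealized gradient flow; the post covers several symmetry families and the realistic dynamics that break these laws): if the loss is unchanged by shifting the parameters along a direction $v$, then the component of $\theta$ along $v$ cannot move under gradient flow,

```latex
L(\theta + \alpha v) = L(\theta)\ \ \forall \alpha
  \;\Longrightarrow\; v^\top \nabla L(\theta) = 0,
\qquad
\dot{\theta} = -\nabla L(\theta)
  \;\Longrightarrow\; \frac{d}{dt}\bigl(v^\top \theta\bigr)
    = -\,v^\top \nabla L(\theta) = 0 .
```

So $v^\top \theta$ is exactly conserved under pure gradient flow; finite step sizes, weight decay, and momentum perturb this conservation, which is what makes the laws "broken" in practice.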
How can we combine data from different sources of human feedback to better learn their reward functions? Read about it in our new blog post from Andy Palan, where inverse reinforcement learning meets preference-based learning
If you would like to keep up with Stanford #AI Lab researchers and affiliates, we now have a Twitter list for you!
Subscribe to follow our MS, PhD, postdocs, PIs, and labs:
• DPO—Direct Preference Optimization: A better way to instruction-tune models than RLHF
• H3—Language Modeling with State Space Models and (almost) no attention
Refs:
•
•
•
•
Interested in automatically fixing your code, or in breaking other people's code realistically? In our latest post, @michiyasunaga describes BIFI, an unsupervised approach that learns how to do both at the same time to repair code better. Check it out!
Check out the latest post on our blog!
Does adding a theorem to any paper increase its chance of acceptance? On algorithms and a python package for measuring the importance of n-gram features on an outcome, while controlling for confounds.