Mitchell Gordon

@mitchellgordon

1,271
Followers
394
Following
1
Media
94
Statuses

Incoming assistant prof @MITEECS, postdoc @uwcse. PhD @StanfordHCI. Former intern @Apple @Google @cmuhcii. HCI, human-centered AI, social computing.

Stanford, CA
Joined September 2007
@mitchellgordon
Mitchell Gordon
1 year
Beyond excited to join @MITEECS @MIT_CSAIL as an assistant professor, starting fall 2024! I’m so lucky to have had @msbernst @landay @tatsu_hashimoto @foil as my advisors and mentors at Stanford. I can’t thank you all enough!
@landay
James Landay
1 year
So proud of @mitchellgordon and his brilliant PhD defense talk on “Human-AI Interaction Under Societal Disagreement”. Great lead advising by @msbernst! Excellent example of human-centered AI research for @StanfordHAI. On to his faculty position @MITEECS! Congrats!
17
11
236
32
12
348
@mitchellgordon
Mitchell Gordon
5 months
I’m recruiting PhD students and there are still a few days left to apply! If you’re excited about working at the intersection of HCI and AI, come join my new group @MITEECS. Please submit by 12/15!
4
118
265
@mitchellgordon
Mitchell Gordon
2 years
1/n What should ML models do when a dataset’s annotators — the people that models are trying to emulate — disagree? In today’s typical supervised learning pipeline, we model an aggregate pseudo-human, predicting the majority vote label while ignoring annotators who disagree.
7
35
139
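The aggregation step described in the tweet above can be sketched as follows. This is a minimal illustration of majority-vote label aggregation, not code from the paper; the label names are hypothetical.

```python
from collections import Counter

def majority_vote(annotations):
    """Collapse several annotators' labels into one training label.

    The dissenting annotators' votes are simply discarded. Ties are
    broken by Counter's insertion order; real pipelines often drop or
    adjudicate tied examples instead.
    """
    return Counter(annotations).most_common(1)[0][0]

# Three annotators look at the same comment; two-vs-one disagreement.
labels = ["toxic", "toxic", "not_toxic"]
print(majority_vote(labels))  # -> "toxic"
```

The model trained on these aggregated labels never sees that a third of its annotators disagreed.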
@mitchellgordon
Mitchell Gordon
2 years
Hi, I’ll be at #CHI2022! So excited to present jury learning; please say hi if you’re interested in talking about human-centered AI, rethinking today’s ML pipeline, evaluation metrics, or anything else!
1
1
56
@mitchellgordon
Mitchell Gordon
4 years
I’m so excited and honored to be included among Apple’s first class of PhD fellows in AI/ML — and grateful for the opportunity to work with incredible mentors like @foil.
1
1
30
@mitchellgordon
Mitchell Gordon
4 years
We’re presenting “HYPE: A Benchmark for Human eYe Perceptual Evaluation of Generative Models” as an oral at #NeurIPS2019 at 4:50 today in West Exhibition Hall C + B3!
0
1
13
@mitchellgordon
Mitchell Gordon
2 years
4/n We introduce jury learning, a new supervised learning architecture — a technical and normative approach — that models the individual voices in your training dataset...
1
1
11
@mitchellgordon
Mitchell Gordon
2 years
3/n There’s not really a good answer, so we tried asking a different question: whose voices is our model emulating? Datasets are ultimately made up of individual people. So when annotators disagree, instead of modeling some aggregate pseudo-human, let’s model individual people.
1
0
11
@mitchellgordon
Mitchell Gordon
2 years
2/n Or, maybe we do a bit better, and have our AI predict a distribution (e.g. 40% of annotators would think A, 60% would think B). But if you’re a practitioner who’s then got to make a single decision (e.g., do I remove this comment or not), what do you DO with that information?
1
1
10
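The dilemma in the tweet above can be made concrete with a short sketch: even if the model faithfully reports the 40/60 split, the practitioner still has to pick a threshold to act on it. The numbers, labels, and threshold here are all hypothetical, not from the paper.

```python
# A hypothetical distributional classifier output for one comment:
# the model reports annotator disagreement rather than a single label.
predicted = {"remove": 0.40, "keep": 0.60}

def decide(dist, threshold=0.5):
    """Collapse a predicted annotator distribution into one action.

    The threshold choice — i.e., how many dissenting annotators it
    takes to act — is left entirely to the practitioner.
    """
    return "remove" if dist["remove"] >= threshold else "keep"

print(decide(predicted))       # keeps the comment at a 0.5 threshold
print(decide(predicted, 0.3))  # a stricter threshold removes it
```

The distribution is more honest than a majority-vote label, but it pushes the normative decision downstream without guidance.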
@mitchellgordon
Mitchell Gordon
10 months
So excited for this workshop, come join us!
@msbernst
Michael Bernstein
10 months
Join us at the upcoming #UIST2023 workshop, Architecting Novel Interactions with Generative AI Models. Featuring a keynote by Will Wright (creator of The Sims and SimCity) and Lauren Elliot (Where in the World Is Carmen Sandiego)!
2
18
65
0
0
9
@mitchellgordon
Mitchell Gordon
2 years
5/n enabling us to then design a system for interactively exploring, tuning, and shifting the behavior of the classifier, by explicitly choosing which annotators our classifier will emulate, in what proportion.
0
0
9
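The jury idea in the tweet above — explicitly choosing which annotators the classifier emulates, and in what proportion — can be sketched as a weighted combination of per-annotator predictions. This is a toy illustration under assumed names and weights, not the paper's architecture.

```python
def jury_predict(annotator_preds, jury_weights):
    """Combine per-annotator predictions according to an explicitly
    chosen jury composition (weights summing to 1)."""
    return sum(jury_weights[a] * p for a, p in annotator_preds.items())

# Hypothetical per-annotator predicted probabilities that a comment
# is toxic; ann_3 disagrees with the other two.
preds = {"ann_1": 0.9, "ann_2": 0.8, "ann_3": 0.1}

# The practitioner chooses whose voices count, and in what proportion.
equal_jury    = {"ann_1": 1/3, "ann_2": 1/3, "ann_3": 1/3}
weighted_jury = {"ann_1": 0.5, "ann_2": 0.3, "ann_3": 0.2}

print(round(jury_predict(preds, equal_jury), 2))     # 0.6
print(round(jury_predict(preds, weighted_jury), 2))  # 0.71
```

Shifting the jury's composition shifts the classifier's behavior in an interpretable, explicitly chosen way, rather than burying the disagreement in an aggregate label.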
@mitchellgordon
Mitchell Gordon
2 years
Insightful and important work from a great researcher and person!
@harmankkaur
Harmanpreet Kaur
2 years
Beyond excited to share our #FAccT2022 paper "Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory" Building on incredible recent work in this space, this paper is about *who* interpretability and XAI are intended for. 🧵
9
35
175
1
0
7
@mitchellgordon
Mitchell Gordon
3 years
@ani_nenkova @msbernst @tatsu_hashimoto For instance, while the Kaggle competition’s most popular model achieved a precision of .527 and a recall of .827 over standard aggregated labels, our approach adjusted that down to a precision of .514 and a recall of .499.
1
0
4
@mitchellgordon
Mitchell Gordon
5 years
@jeffbigham Feel free to just tweet me your feedback
1
0
3
@mitchellgordon
Mitchell Gordon
3 years
@ani_nenkova @msbernst @tatsu_hashimoto Agreed that Jigsaw's class imbalance makes AUC a pretty bad metric for that task, even ignoring the issue of disagreement. Though I'd suggest that precision and recall still aren't doing enough to encode the task's subjectivity.
2
0
3
@mitchellgordon
Mitchell Gordon
2 years
@SkotBotCambo @JessicaHullman Thanks @JessicaHullman! Just checked out @SkotBotCambo's paper and definitely agreed.
0
0
2
@mitchellgordon
Mitchell Gordon
2 years
@sal_giorgi Hoping to release code and a demo in the near future!
0
0
2
@mitchellgordon
Mitchell Gordon
5 years
@landay @jeffbigham @ProfJayFo Okay actually that’s leftover from my lunch practice talk in case you were there :)
1
0
1
@mitchellgordon
Mitchell Gordon
1 year
@arvindsatya1 @MITEECS @MIT_CSAIL Thank you Arvind!! So excited to grow @mithci with you.
0
0
1
@mitchellgordon
Mitchell Gordon
2 years
@michelle123lam thanks michelle, very grateful to have you on the team :)
1
0
1