Dylan Slack

@dylanslack20

815
Followers
568
Following
50
Media
382
Statuses

Research Scientist at Google. Ph.D. @UCIbrenICS. Prev @awscloud and @googleAI. I tweet about misc findings + plug my papers

San Francisco
Joined March 2019
Pinned Tweet
@dylanslack20
Dylan Slack
1 year
🚨Instead of collecting costly datasets for tabular prediction, could we use natural language instructions?💡 In our paper "TABLET: Learning From Instructions For Tabular Data" with @sameer_ we evaluate how close we are to this goal and the key limitations of current LLMs
Tweet media one
Tweet media two
2
13
58
@dylanslack20
Dylan Slack
13 days
Career update: pleased to share I’ve joined Google Gemini as a research scientist! I’m excited to continue my work on LLMs at Google 🎉
14
0
278
@dylanslack20
Dylan Slack
2 years
Imagine having conversations🗣️ with ML models like you would with a colleague💡 In our paper "Rethinking Explainability as a Dialogue: A Practitioner's Perspective" with @hima_lakkaraju @sameer_ Yuxin Chen and @ChenhaoTan , we evaluate this possibility!🧵
Tweet media one
Tweet media two
4
47
224
@dylanslack20
Dylan Slack
2 years
Explaining ML models can be a difficult process. What if anyone could understand ML models using accessible conversations? We built a system, TalkToModel, to do this!🎉 Paper: Code & Demo: 1/9
6
51
223
@dylanslack20
Dylan Slack
2 years
@nabeelqu Ghibli style dog images via stable diffusion are my favorite
Tweet media one
Tweet media two
Tweet media three
1
1
58
@dylanslack20
Dylan Slack
10 months
🚨🚨🚨I'm pleased to share that this work with @SatyaIsIntoLLMs, @hima_lakkaraju, and @sameer_ is out at Nature MI🚨🚨🚨
@dylanslack20
Dylan Slack
2 years
Explaining ML models can be a difficult process. What if anyone could understand ML models using accessible conversations? We built a system, TalkToModel, to do this!🎉 Paper: Code & Demo: 1/9
6
51
223
1
13
50
@dylanslack20
Dylan Slack
2 years
Interested in our group’s recent work in #MachineLearning explainability and where we think things are heading? I gave a seminar @UCIbrenICS yesterday about this. Check out the recording!
0
3
40
@dylanslack20
Dylan Slack
1 year
Happy to share this work won honorable mention for best paper at the TSRML workshop at NeurIPS 😀🎉 @hima_lakkaraju @sameer_ @SatyaXploringAI
@dylanslack20
Dylan Slack
2 years
Explaining ML models can be a difficult process. What if anyone could understand ML models using accessible conversations? We built a system, TalkToModel, to do this!🎉 Paper: Code & Demo: 1/9
6
51
223
3
7
38
@dylanslack20
Dylan Slack
4 years
In non-COVID news, I'm going to start posting about a paper every few weeks. I read an interesting paper about efficient approximations of Shapley values recently () that provides two neat methods to do this based on data structure. 1/n
1
6
36
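For context on what such approximations speed up, here is a minimal sketch of the standard Monte Carlo permutation estimator for Shapley values (this is the generic baseline, not the paper's data-structure-based methods; `model_fn` and the background point are placeholders):

```python
import numpy as np

def shapley_mc(model_fn, x, background, n_perm=100, seed=0):
    """Monte Carlo permutation estimate of Shapley values for one input.
    model_fn: maps an (n, d) array to (n,) predictions.
    x: instance to explain, shape (d,).
    background: reference point used to "remove" features, shape (d,).
    """
    rng = np.random.default_rng(seed)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = background.astype(float).copy()
        prev = model_fn(z[None, :])[0]
        for j in order:
            z[j] = x[j]                      # add feature j to the coalition
            cur = model_fn(z[None, :])[0]
            phi[j] += cur - prev             # j's marginal contribution
            prev = cur
    return phi / n_perm                      # average over sampled orderings

# e.g., phi = shapley_mc(model.predict, X_test[0], X_train.mean(axis=0))
```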
@dylanslack20
Dylan Slack
2 years
📢Come check out our NeurIPS paper later today🗓️ "Reliable Post hoc Explanations: Modeling Uncertainty in Explainability" We introduce a method to generate more robust local explanations using uncertainty Poster: Date: 9 Dec 4:30pm PST — 6pm PST🧵👇
1
5
34
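The paper models explanation uncertainty directly; as a crude stand-in for readers, rerunning a local explainer under different seeds and reporting the spread already shows how unstable explanations can be (a rough proxy, not the paper's method; `predict_proba` and the data are placeholders):

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

def lime_spread(X_train, predict_proba, x, n_runs=20, num_features=5):
    """Rerun LIME under different seeds and summarize each feature's
    attribution spread -- a crude proxy for explanation uncertainty."""
    runs = []
    for seed in range(n_runs):
        explainer = LimeTabularExplainer(X_train, random_state=seed)
        exp = explainer.explain_instance(x, predict_proba,
                                         num_features=num_features)
        runs.append(dict(exp.as_map()[1]))   # {feature_id: weight}
    for f in sorted({f for r in runs for f in r}):
        vals = np.array([r.get(f, 0.0) for r in runs])
        print(f"feature {f}: {vals.mean():+.3f} +/- {vals.std():.3f}")
```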
@dylanslack20
Dylan Slack
2 years
it's absolutely WILD that my grandmother's book Everywhere Babies is getting banned from some Florida libraries
@AlexisCoe
Alexis Coe
2 years
No book should be on this list, but EVERYWHERE BABIES??? It is a board book celebrating babies being babies while adults love, feed, and care for them.
1
6
21
2
5
32
@dylanslack20
Dylan Slack
2 years
Some could argue being a good fig 1 artist is the most valuable skill as a researcher😂
@kweku_ag
Kweku Kwegyir-Aggrey
2 years
the way they never told me that i'd have to become a part time illustrator to get papers accepted
3
5
61
0
1
28
@dylanslack20
Dylan Slack
2 years
Presenting two papers this year @NeurIPSConf ! The first paper, "Reliable Post hoc Explanations: Modeling Uncertainty in Explainability" concerns generating robust local explanations Paper: Poster: Date: Dec 9, 4:30pm — 6pm PST
2
5
27
@dylanslack20
Dylan Slack
4 years
Ever wondered when you shouldn't use a fair ML model? Pleased to share our new paper in #FAT2020 "Fairness Warnings & Fair-MAML: Learning Fairly from Minimal Data" (w/ @kdphd, twitterless Emile) where we investigate such questions.
1
3
25
@dylanslack20
Dylan Slack
3 years
How reliable are fairness metrics with limited data? *They're not* but we can do better using unlabeled data + Bayesian inference. Highlighting work from colleagues @ji_disi, Padhraic Smyth, and Mark Steyvers from NeurIPS. How to assess? [1/10]👇
Tweet media one
Tweet media two
Tweet media three
Tweet media four
1
4
25
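For a flavor of why a Bayesian treatment helps here, a minimal beta-binomial sketch of the posterior over a demographic parity gap (illustrative only, not the colleagues' actual method):

```python
import numpy as np

def parity_gap_posterior(preds_a, preds_b, n_samples=10_000, seed=0):
    """Posterior over P(yhat=1 | group A) - P(yhat=1 | group B) with
    independent Beta(1, 1) priors on each group's positive rate."""
    rng = np.random.default_rng(seed)
    ka, na = preds_a.sum(), len(preds_a)
    kb, nb = preds_b.sum(), len(preds_b)
    pa = rng.beta(1 + ka, 1 + na - ka, n_samples)
    pb = rng.beta(1 + kb, 1 + nb - kb, n_samples)
    gap = pa - pb
    lo, hi = np.percentile(gap, [2.5, 97.5])
    return gap.mean(), (lo, hi)

# With only a handful of labeled points per group, the credible interval
# is wide -- exactly the unreliability the thread is pointing at.
```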
@dylanslack20
Dylan Slack
2 years
Often, the choice of superpixel generating function & hyperparameters is ignored for local image explanations This choice has significant effects on explanation faithfulness Let's sweep hyp. params for different superpixel algs & run LIME explanations for 50 imagenet images
Tweet media one
Tweet media two
Tweet media three
1
3
21
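A minimal sketch of what such a sweep can look like, with scikit-image segmenters plugged into LIME (the hyperparameter grids are made up, the classifier and image are stand-ins, and the faithfulness scoring is left as a stub):

```python
import numpy as np
from functools import partial
from lime import lime_image
from skimage.segmentation import felzenszwalb, quickshift, slic

def predict_fn(images):
    """Stand-in classifier: two-class probabilities per image."""
    score = images.mean(axis=(1, 2, 3))
    return np.stack([1 - score, score], axis=1)

image = np.random.rand(64, 64, 3)        # stand-in for an ImageNet image

# Illustrative hyperparameter grids -- not the ones from the experiment.
segmenters = [
    *(partial(slic, n_segments=n) for n in (25, 50, 100)),
    *(partial(felzenszwalb, scale=s) for s in (50, 100, 250)),
    *(partial(quickshift, kernel_size=k) for k in (2, 4, 8)),
]

explainer = lime_image.LimeImageExplainer()
for seg_fn in segmenters:
    exp = explainer.explain_instance(image, predict_fn,
                                     segmentation_fn=seg_fn,
                                     num_samples=200)
    # ...score each explanation's faithfulness (e.g., deletion metrics)
```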
@dylanslack20
Dylan Slack
4 years
Exciting next couple weeks presenting work! I'll be at FAT* this week to talk about our recent paper "Fairness Warnings & Fair-MAML: Learning Fairly from Minimal Data" ()
1
4
19
@dylanslack20
Dylan Slack
5 years
Wondering if you can game explainability methods (e.g. LIME/SHAP) to say whatever you want to? Our recent research suggests this is possible.
@hima_lakkaraju
𝙷𝚒𝚖𝚊 𝙻𝚊𝚔𝚔𝚊𝚛𝚊𝚓𝚞
5 years
Want to know how adversaries can game explainability techniques? Our latest research - "How can we fool LIME and SHAP? Adversarial Attacks on Explanation Methods" has answers: . Joint work with the awesome team: @dylanslack20 , Sophie, Emily, @sameer_
6
74
225
0
6
19
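The core trick in the paper is roughly a scaffolding classifier: it behaves in a biased way on inputs that look like real data, but detects the off-manifold perturbations LIME/SHAP query and answers those innocuously. A minimal sketch, with all three component models as placeholders:

```python
import numpy as np

class Scaffold:
    """Rough sketch of the attack's scaffolding: act biased on real-looking
    inputs, but route the off-manifold perturbations that LIME/SHAP
    generate to an innocuous model, hiding the bias from the explainer."""

    def __init__(self, biased_fn, innocuous_fn, ood_clf):
        self.biased_fn = biased_fn        # e.g., depends on a protected feature
        self.innocuous_fn = innocuous_fn  # e.g., depends on an unrelated feature
        self.ood_clf = ood_clf            # trained: real rows vs. perturbations

    def predict(self, X):
        real = self.ood_clf.predict(X) == 1   # 1 = looks like real data
        return np.where(real, self.biased_fn(X), self.innocuous_fn(X))
```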
@dylanslack20
Dylan Slack
4 years
Best advisor!
@srush_nlp
Sasha Rush
4 years
#acl2020nlp If you are going to grad school, apply to Sameer Singh's group ( @sameer_ ). It is hard to find problems to work on, and his work consistently targets new and impactful areas. We are always kicking ourselves two years later rereading his papers.
1
8
156
0
0
18
@dylanslack20
Dylan Slack
2 years
I'll be @naacl next week! reach out if you'll be there and want to chat about research :)
2
0
16
@dylanslack20
Dylan Slack
4 years
Check out @CharlieMarx9 and @rlanasphillips at NeurIPS poster #107 ! Feature correlation is confusing in influence methods and they have answers!
Tweet media one
0
5
14
@dylanslack20
Dylan Slack
3 years
How to find model errors beyond those available in the data? We propose methods to automatically generate high-level "model-bugs" in image classifiers. This preprint includes some of my summer work @awscloud with @kkenthapadi and Nathalie Rauschmayr.
@Arxiv_Daily
arXiv Daily
3 years
Defuse: Harnessing Unrestricted Adversarial Examples for Debugging Models Beyond Test Accuracy by Dylan Slack et al. #MachineLearning #Classifier
0
4
9
0
2
14
@dylanslack20
Dylan Slack
2 years
anyways, if you're looking to read other ~edgy~ titles like "sweet stories for babies", "bear in the air", or "puppies, puppies, puppies" check out her stuff at
1
2
12
@dylanslack20
Dylan Slack
1 year
Let me just plug this incredibly easy to use latex scrubbing tool for all my fellow researchers out there:
You might know that MSFT has released a 154-page paper () on #OpenAI #GPT4 , but do you know they also commented out many parts from the original version? 🧵: A thread of hidden information from their latex source code [1/n]
Tweet media one
27
314
1K
0
2
12
@dylanslack20
Dylan Slack
2 years
🌇Overall, we hope our work serves as a good starting place for engineers & researchers to design interactive, natural language dialogue systems for explainability that better serve users’ needs. 📑
1
0
12
@dylanslack20
Dylan Slack
2 years
We uploaded this talk to youtube, in case you weren't at NeurIPS this year🙂
@dylanslack20
Dylan Slack
2 years
📢Come check out our NeurIPS paper later today🗓️ "Reliable Post hoc Explanations: Modeling Uncertainty in Explainability" We introduce a method to generate more robust local explanations using uncertainty Poster: Date: 9 Dec 4:30pm PST — 6pm PST🧵👇
1
5
34
0
0
10
@dylanslack20
Dylan Slack
2 years
It's been pretty fun to explore the sample space of #stablediffusion & really impressed by how expressive it is, especially considering the *relatively* smaller model size This one is from the prompt "a lithograph of a butterfly"
1
0
9
@dylanslack20
Dylan Slack
2 years
The second paper, "Counterfactual Explanations Can Be Manipulated" addresses how adversaries might manipulate gradient-based counterfactual explanations and how this can be prevented. Paper: Poster:
1
0
8
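For readers unfamiliar with the object being attacked, a minimal Wachter-style gradient counterfactual looks like the sketch below (a generic recipe, not the paper's manipulation or defense; `model` is a placeholder scoring function):

```python
import torch

def counterfactual(model, x, target=1.0, lam=0.1, steps=500, lr=0.01):
    """Gradient-based (Wachter-style) counterfactual: find x' near x whose
    prediction moves to `target`. `model` maps a feature vector to a
    scalar score in [0, 1]."""
    x_cf = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # prediction loss plus an L1 penalty keeping x' close to x
        loss = (model(x_cf) - target) ** 2 + lam * (x_cf - x).abs().sum()
        loss.backward()
        opt.step()
    return x_cf.detach()
```

The tweet's paper studies how an adversary who controls training can shape the gradients this optimization follows, and how that can be prevented.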
@dylanslack20
Dylan Slack
4 years
My awesome coauthor Sophie Hilgard (twitterless) talking about our paper on fooling post hoc explanation methods at the SafeAI workshop @RealAAAI #AAAI2020
Tweet media one
1
0
8
@dylanslack20
Dylan Slack
2 years
Link to talk:
@MedaiStanford
MedAI Group
2 years
This Thursday, @dylanslack20 from UC Irvine will be talking to us about exposing shortcomings & improving the reliability of machine learning explanations. Catch it at 1-2pm PT this Thursday on Zoom! Subscribe to #ML #AI #medicine #healthcare
Tweet media one
0
2
15
0
1
7
@dylanslack20
Dylan Slack
2 years
pretty crazy to me my grandmother, a person who writes cute and loving children's books about bunny rabbits, dogs, and cats, now has a banned book in the US
1
1
7
@dylanslack20
Dylan Slack
2 years
Our system helps address a few critical difficulties with explanations 1/ Deciding which explanations to use 2/ Computing the explanations 3/ Interpreting the results 4/ Engaging with further questions beyond the original explanation 2/9
Tweet media one
1
1
7
@dylanslack20
Dylan Slack
4 years
Really excited to share this work at @AIESConf ! 😀
@hima_lakkaraju
𝙷𝚒𝚖𝚊 𝙻𝚊𝚔𝚔𝚊𝚛𝚊𝚓𝚞
4 years
Two of our papers just got accepted for oral presentation at AAAI Conference on AI and Ethics (AIES): 1. Designing adversarial attacks on explanation techniques () 2. How misleading explanations can be used to game user trust? ()
5
22
137
1
1
7
@dylanslack20
Dylan Slack
2 years
Now the #1 best seller on Amazon children’s books 😂
Tweet media one
@dylanslack20
Dylan Slack
2 years
it's absolutely WILD that my grandmother's book Everywhere Babies is getting banned from some Florida libraries
2
5
32
0
0
6
@dylanslack20
Dylan Slack
2 years
Both these papers are with Sophie Hilgard, @hima_lakkaraju , and @sameer_ . If you're interested in chatting about either of the papers, please reach out😀
0
0
6
@dylanslack20
Dylan Slack
4 years
26 people across 10 institutions 🤯🤯🤯 NLP models can learn artifacts in data instead of solving the actual task, leading to good performance #-wise but not really language understanding. Check out this impressive & compelling project @nlpmattg led!
@nlpmattg
Matt Gardner
4 years
Evaluating NLP Models via Contrast Sets New work that is a collaboration between 26 people at 10 institutions (!) Trying to tag everyone at the top of the thread, here it goes:
10
81
344
1
0
6
@dylanslack20
Dylan Slack
2 years
@tallinzen How will other people know I workout then?
0
0
5
@dylanslack20
Dylan Slack
3 years
Congrats guys!
@rloganiv
Robert L. Logan IV
3 years
I'm proud to announce that our submission to the #EMNLP2020 #NLPCOVID19 workshop received the best paper award! Massive congratulations to my co-authors @thossainkay , Arjuna Ugarte, @yoshitomo_cs , @SeanYoungPhD , and @sameer_ .
1
14
51
0
0
5
@dylanslack20
Dylan Slack
6 months
Very excited that Summer is joining us to lead this effort! Please consider applying ‼️
@summeryue0
Summer Yue
6 months
Here’s the job link for joining SEAL: If you have questions about the role feel free to DM me. I might not be able to get through all the pings but I’ll start reviewing all the applications next Friday.
1
4
21
0
1
5
@dylanslack20
Dylan Slack
2 years
e.g., an amazon reviewer calls this image of *two guys sitting on a bench together*.... subversive messaging??
Tweet media one
1
0
5
@dylanslack20
Dylan Slack
1 year
I'll be at NeurIPS this week until Saturday, presenting two of our recent works on explaining ML models with conversations! Reach out if you want to chat😀
@hima_lakkaraju
𝙷𝚒𝚖𝚊 𝙻𝚊𝚔𝚔𝚊𝚛𝚊𝚓𝚞
1 year
[Workshop Paper] TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations (joint w/ @dylanslack20 @SatyaXploringAI @sameer_ ) at TSRML workshop -- . More details in this thread [9/N]
1
5
13
0
0
5
@dylanslack20
Dylan Slack
2 years
We release the code for TalkToModel and link to a demo on a diabetes prediction task: TalkToModel can be extended, modified, and adapted to your own models and datasets. We provide tutorials here: 8/9
Tweet media one
1
1
5
@dylanslack20
Dylan Slack
2 years
i'm guessing the reason is that it includes images of what may be same-sex couples
1
0
5
@dylanslack20
Dylan Slack
4 years
We'll also be presenting a version of this paper at the SafeAI workshop at AAAI. If you're at any of these events and want to chat, please reach out!
0
0
4
@dylanslack20
Dylan Slack
2 years
exploring the space of paintings of different animals with #stablediffusion
0
0
4
@dylanslack20
Dylan Slack
2 years
this book is literally just about loving all sorts of babies, celebrating them, and doing fun things with them
1
0
4
@dylanslack20
Dylan Slack
5 years
@arrayslayer @hima_lakkaraju @sameer_ We look at model agnostic explanation methods -- those that don't exploit the specific structure of a model to perform explanations. TreeSHAP has access to full tree structure of the model so would be outside the scope of this attack.
0
0
3
@dylanslack20
Dylan Slack
2 years
First, we interviewed doctors👩‍⚕️, healthcare professionals⚕️, and policymakers👨‍💼 about their 1/ experiences with existing explainability methods 2/ needs & desires for future techniques We learned A LOT about what practitioners want out of explainability in these conversations!
1
0
3
@dylanslack20
Dylan Slack
2 years
Agree with this sentiment. Cool new stuff coming out every week.
@DimitrisPapail
Dimitris Papailiopoulos
2 years
In spite of all its hype and the vitriol against it, the bad reviews and unreasonable rejects, despite the uncharitable takes by everyone on everything, this is SUCH an exciting time to do research in ML. So much fun seeing all the new work. Please DO tweet about it. Seriously.
1
5
78
0
0
3
@dylanslack20
Dylan Slack
2 years
🗣️ "I don't know anything about how correct the explanation is! How do you expect me to use it meaningfully? I constantly struggle with worrying about using an incorrect explanation and missing out on not using a correct explanation that is giving me more insights."
1
0
3
@dylanslack20
Dylan Slack
1 year
🚨However, we also observed several critical limitations of LLMs as well on these tasks☝️ 1/ LLMs predict the same thing on logically inverted instructions, indicating unfaithfulness 2/ LLMs misclassify instances across many possible few-shot examples, indicating biases
Tweet media one
Tweet media two
1
0
3
@dylanslack20
Dylan Slack
4 years
Also, GitHub plug: . We tried to make these models pretty user friendly. It's fun to play around with data sets to see what you can get LIME/SHAP to explain!
0
0
3
@dylanslack20
Dylan Slack
2 years
@bernhardsson Slurm is pretty widely used, no?
1
0
3
@dylanslack20
Dylan Slack
2 years
@arankomatsuzaki it's all we need
0
0
3
@dylanslack20
Dylan Slack
2 years
Beyond taking over my Twitter timeline, wordle is also taking over information theorists' blogs? Really interesting post by Richard
@rljfutrell
Richard Futrell
2 years
An information-theoretic analysis of Wordle
3
48
238
0
0
3
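The gist of such an analysis in a few lines: score a guess by the entropy of the feedback-pattern distribution it induces over the remaining candidate words (simplified sketch; the real game's duplicate-letter rules are ignored, and the word list is illustrative):

```python
from collections import Counter
from math import log2

def feedback(guess, answer):
    """Simplified Wordle tiles: 2=green, 1=yellow, 0=gray."""
    return tuple(2 if g == a else (1 if g in answer else 0)
                 for g, a in zip(guess, answer))

def guess_bits(guess, candidates):
    """Expected information from a guess = entropy (in bits) of the
    feedback-pattern distribution it induces over the candidates."""
    counts = Counter(feedback(guess, ans) for ans in candidates)
    n = len(candidates)
    return -sum(c / n * log2(c / n) for c in counts.values())

words = ["crane", "slate", "pride", "grate", "blimp"]
best = max(words, key=lambda w: guess_bits(w, words))
```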
@dylanslack20
Dylan Slack
2 years
Overall, we found: 1⃣ experts aren't satisfied w/ current explainability methods 2⃣ they want increased interaction to understand model behavior over one-off explanations
1
0
3
@dylanslack20
Dylan Slack
4 years
@kdphd Congratulations!!! Thank you for being such an amazing mentor!
1
0
3
@dylanslack20
Dylan Slack
1 year
In our evaluation of TABLET, we found natural language instructions were quite helpful for LLMs in solving tasks solely from instructions (i.e., the zero-shot setting)😀
Tweet media one
1
0
3
@dylanslack20
Dylan Slack
1 year
Interestingly, we observed significant benefits of instructions within the few-shot setting as well on the tasks within TABLET🧐
Tweet media one
1
0
3
@dylanslack20
Dylan Slack
4 years
Overall, enjoyed reading this paper and recommend checking it out. I know there are some larger concerns floating around about the use of Shapley values to assign feature importance () and recommend reading these as well. 11/n
1
0
3
@dylanslack20
Dylan Slack
4 years
!!
@hima_lakkaraju
𝙷𝚒𝚖𝚊 𝙻𝚊𝚔𝚔𝚊𝚛𝚊𝚓𝚞
4 years
Super excited to share that I recently received an NSF grant to work on exposing vulnerabilities of post hoc explanation methods and enhancing their robustness. More details at
12
4
205
1
0
3
@dylanslack20
Dylan Slack
2 years
TalkToModel learns to parse user inputs for a new model and dataset into a programming language designed for model understanding. It then executes these instructions, potentially comparing many explanations to ensure accuracy, and composes the results into a response. 3/9
Tweet media one
1
0
3
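A toy illustration of that parse-then-execute flow (the real system learns the mapping from utterances with a language model; the mini-grammar below is invented for this sketch):

```python
def execute(parse, rows, model):
    """Run a parse such as 'filter age 40 | predict' left to right.
    `rows` is a list of dict records; `model` maps a record to a label."""
    for step in parse.split(" | "):
        op, *args = step.split()
        if op == "filter":                   # keep rows where col == val
            col, val = args[0], int(args[1])
            rows = [r for r in rows if r[col] == val]
        elif op == "predict":                # run the model on what's left
            return [model(r) for r in rows]

# utterance: "what does the model predict for 40-year-olds?"
# learned parse (illustrative): "filter age 40 | predict"
rows = [{"age": 40, "bmi": 31.0}, {"age": 25, "bmi": 22.5}]
preds = execute("filter age 40 | predict", rows, lambda r: r["bmi"] > 30)
```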
@dylanslack20
Dylan Slack
4 years
@willieboag Glad you thought it was interesting! Also shouts out to team Sophie, Emily, @hima_lakkaraju , and @sameer_ . Code not linked to in this version of the paper but if you're interested:
1
0
3
@dylanslack20
Dylan Slack
2 years
🗣️ "I can see myself using explainable tools a ton more if only it were like a free-flowing dialogue... I can't wait for that day."
1
0
2
@dylanslack20
Dylan Slack
3 years
0
0
2
@dylanslack20
Dylan Slack
4 years
@scheidegger Congrats!!
0
0
2
@dylanslack20
Dylan Slack
4 years
@vivwylai @HanLiuAI @ChenhaoTan This looks really cool! Excited to read this paper.
0
0
2
@dylanslack20
Dylan Slack
1 year
📦We compile the tasks in TABLET from different sources: 1/ UCI datasets, such as credit, adult, churn 2/ Differential diagnosis (ddx) tasks, such as identifying whooping cough from patient symptoms
Tweet media one
1
0
2
@dylanslack20
Dylan Slack
2 years
📢📢Want to demo our 🗣️conversational system for XAI? Check out the demo linked here: Code: Paper: #NoCodeXAI #ConversationalXAI #XAI
@dylanslack20
Dylan Slack
2 years
Explaining ML models can be a difficult process. What if anyone could understand ML models using accessible conversations? We built a system, TalkToModel, to do this!🎉 Paper: Code & Demo: 1/9
6
51
223
0
0
2
@dylanslack20
Dylan Slack
4 years
I'm interested to hear any thoughts on evaluation of interpretability methods, what such a common set of expectations could be, SHAP etc! n/n
0
0
2
@dylanslack20
Dylan Slack
2 years
@KarenLMasters @haverfordedu maybe I should have taken intro to physics... looks cool!
0
0
2
@dylanslack20
Dylan Slack
2 years
tried this optimizer on a synthetic polynomial factorization task with an encoder/decoder transformer (e.g., 2*s*(26-7*s) ==> -14*s**2+52*s) It's lagging a bit behind Adam/AdamW w.r.t. validation loss, but maybe I should be looking at perplexity/accuracy considering authors...
Tweet media one
@davisblalock
Davis Blalock
2 years
"Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models" Another optimizer paper attempting to descend through a crowded valley to beat Adam. But...maybe this one actually does? [1/11]
Tweet media one
4
83
399
1
0
2
@dylanslack20
Dylan Slack
11 months
0
0
2
@dylanslack20
Dylan Slack
5 years
Really interesting work making the environmental impact of programs easily accessible!
0
0
2
@dylanslack20
Dylan Slack
6 months
0
0
2
@dylanslack20
Dylan Slack
6 months
Also, we’re hiring @scale_AI , reach out or visit us at our booth if you want to learn more about us or our new SEAL lab!
1
0
2
@dylanslack20
Dylan Slack
2 years
@tallinzen I've had some good interactions on gather town—that said there have been several frustrating technical issues (posters won't load on gathertown, login issues with underline)
0
0
2
@dylanslack20
Dylan Slack
2 years
it's a pretty well-loved book that is so incredibly inoffensive
1
0
2
@dylanslack20
Dylan Slack
2 years
Finally, we evaluate TalkToModel in human trials. We compare health care worker and ML practitioner performance on several model understanding tasks using TalkToModel and a popular point-and-click dashboard as a baseline. 6/9
1
0
2
@dylanslack20
Dylan Slack
10 months
@s_mandt congrats!
0
0
1
@dylanslack20
Dylan Slack
1 year
👉TABLET consists of a benchmark of tabular prediction tasks annotated with several different natural language instructions that vary in **complexity, phrasing, and source** We evaluate how well models use natural language instructions for tabular prediction with TABLET!
Tweet media one
Tweet media two
1
0
2
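Concretely, instruction-annotated tabular prediction can be posed as a prompt like the sketch below (the serialization format and the example instruction are illustrative, not necessarily TABLET's exact layout):

```python
def make_prompt(instruction, row, labels):
    """Compose a zero-shot prompt pairing a task instruction with a
    serialized feature row."""
    features = "\n".join(f"- {k}: {v}" for k, v in row.items())
    return (f"{instruction}\n\n"
            f"Features:\n{features}\n\n"
            f"Answer with one of: {', '.join(labels)}.\nAnswer:")

prompt = make_prompt(
    "Whooping cough often presents as weeks of severe coughing fits, "
    "sometimes followed by vomiting.",      # illustrative instruction
    {"age": 6, "cough duration (weeks)": 3, "post-cough vomiting": "yes"},
    ["whooping cough", "other"],
)
```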
@dylanslack20
Dylan Slack
2 years
@ChrisJBakke Tough to beat majority class as a baseline here
0
0
2
@dylanslack20
Dylan Slack
1 year
📜We collect the natural language instructions from high-quality sources such as NIH, Merck Manual, and the National Library of Medicine. 💻 We introduce scalable and controllable methods for generating natural language instructions for robustness evaluation in different settings.
1
0
2
@dylanslack20
Dylan Slack
2 years
👉Based on these responses, we think there are ~very exciting research opportunities~ in the space of interactive explanations!👈 We propose a set of 5 principles such systems should follow...
1
0
2
@dylanslack20
Dylan Slack
1 year
**outstanding paper
0
0
2
@dylanslack20
Dylan Slack
3 years
There's definitely truth to this...
@boazbaraktcs
Boaz Barak
3 years
@ben_golub This is a graph I showed admitted grad students in the last visit day. The point was that they will never have as much confidence as they do right now, but with time they will regain ~75% of it back.
Tweet media one
2
17
104
0
0
2
@dylanslack20
Dylan Slack
2 years
0
0
2
@dylanslack20
Dylan Slack
3 years
Neat read!
@shengjia_zhao
Shengjia Zhao
3 years
How can you guarantee the correctness of each individual prediction? New work with @StefanoErmon (AISTATS'21 oral) provides a new perspective on this age-old dilemma based on ideas like insurance and game theory. Blog: Arxiv:
Tweet media one
0
10
51
0
0
2
@dylanslack20
Dylan Slack
2 years
Considering these principles, we suggest *natural language dialogues* as a highly promising way to achieve interactive explanations! 🤔What could such a system look like? ❓How could you achieve it?
1
0
2
@dylanslack20
Dylan Slack
2 years
looks like you can compare pay grades with Hooli and Pied Piper today @Levelsfyi 😂
Tweet media one
0
1
2
@dylanslack20
Dylan Slack
5 years
@AndrewM_Webb @sirajraval wow this is pretty damning
0
0
1
@dylanslack20
Dylan Slack
4 years
@willieboag @hima_lakkaraju @sameer_ These slides are pretty awesome too 😁
0
0
2