I am joining
@NYUDataScience
as an Assistant Professor/Faculty Fellow this fall!
I look forward to continuing research in trustworthy
#machinelearning
and building
#AI
systems that augment and complement human decision-makers 🚀
Meet CDS Assistant Professor/Faculty Fellow Umang Bhatt (
@umangsbhatt
), who will join CDS this fall. Umang will pursue research in trustworthy
#machinelearning
,
#ResponsibleAI
, and human-machine collaboration at NYU. More on today's blog!
I successfully defended my PhD thesis today 🎉
Many thanks to my examiners,
@erichorvitz
and
@lawrennd
, for their thoughtful comments and a very fun viva!
Announcing the 2022
@icmlconf
Workshop on Human-Machine Collaboration and Teaming!
We are interested in the algorithmic and socio-computational paradigms needed to design and support human-machine collaboration. Do consider submitting!
Deadline: May 16
Excited for our
@ELLISforEurope
Workshop on Human-Centric Machine Learning on Monday, May 10th from 3pm to 9pm UK Time (7am to 1pm Pacific Time)
Register to attend at
Spending the next few months in the *other* Cambridge at
@HCRCS
and
@HSEAS
, visiting
@hima_lakkaraju
and
@MilindTambe_AI
Happy to chat if you’re in the Boston area! I’m looking for nearby coffee recs too ☕️
My collaborator, Botty Dimanov, presenting recent work on concealing model unfairness from explanation methods
#AAAI20
TLDR: Don’t use feature importance to justify the discriminatory nature of a model.
Only *two* weeks left to submit to the
@icmlconf
Workshop on Human Interpretability. Submit your latest work on interpretability, explainability, and transparency
#ICML2020
In non-COVID news, the CFP for the 5th annual
@icmlconf
Workshop on Human Interpretability in Machine Learning (
#WHI2020
) is now live 🙃 Consider submitting your work on interpretability and XAI; this year’s focus will be “Interpretability in Practice”
Excited to share a preprint of our forthcoming
#IJCAI20
paper: "Evaluating and Aggregating Feature-based Model Explanations"
Joint work with
@adrian_weller
and
@josemfmoura
(1/3)
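For a flavor of what “aggregating” means here, a deliberately naive sketch (illustrative only — the attribution values are made up, and the paper’s aggregation and evaluation criteria are more principled than this): rescale each method’s attribution vector, then average feature-wise.

```python
import numpy as np

def aggregate_attributions(attributions):
    """attributions: list of 1-D arrays, one per explanation method,
    each of length n_features. Normalize each to unit L1 mass so no
    method dominates by scale, then average feature-wise."""
    normalized = [a / (np.abs(a).sum() + 1e-12) for a in attributions]
    return np.mean(normalized, axis=0)

# Hypothetical attributions for three features from two methods
# (e.g., SHAP values and Integrated Gradients):
shap_vals = np.array([0.40, -0.10, 0.05])
ig_vals = np.array([0.35, -0.20, 0.10])
print(aggregate_attributions([shap_vals, ig_vals]))
```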
.@hannawallach
vehemently urging us not to forget ML is full of value-laden decisions, just like other disciplines — remember this while trying to build human-centric, “intelligible” systems 🔥
#NeurIPS2019
Sadly not in Barcelona for
#ICASSP2020
. I will be presenting our paper "On Network Science and Mutual Information for Explaining Neural Networks" remotely this Friday!
w/ Brian Davis, Kartikeya Bhardwaj,
@rmarculescu
@utexasece
,
@josemfmoura
@CMU_ECE
Nice to attend
#EAAMO23
where we proposed "FeedbackLogs" led by Matthew Barker and
@EmmaKallina
We provide a new perspective on documentation that holds practitioners accountable for acting on stakeholder feedback when building AI systems
@ACMEAAMO
Sharing an RA opportunity to collaborate on explainability w/ Frens Kroger, Beate Grawemeyer,
@jeffhancock
, and me. We're looking to study trust formation via transparency and its relation to online harms.
Excited about our working paper on human-centered, *interactive* evaluation of LLMs as assistants for mathematicians.
Room for future work on how beginners and experts alike may start to use LLMs appropriately and selectively in their existing workflows!
Evaluating large language models is hard, particularly for mathematics. To better understand LLMs, it makes sense to harness *interactivity* - beyond static benchmarks.
Excited to share a new working paper! 1/
Love-hate relationship with Barcelona.
PROs: experiencing a mix of urban grunge and rich Spanish history was refreshing; plus
@FAccTConference
was lovely
CONs: my laptop bag (and its contents) found a new home, thanks to thieves (ladrones). I hope the new owners enjoy my (ex-)machine 😭
We have extended the submission deadline for our ICML Workshop on Human-Machine Collaboration and Teaming to May 26!
Consider submitting any recent or in-progress human-AI partnership work, including your
@NeurIPSConf
submissions or your
@acm_chi
/
@FAccTConference
papers :)
@umangsbhatt
spearheads our work in AI
#explainability
- including quantitative evaluation criteria for feature-level explanations and understanding how information flows through deep neural networks.
@PartnershipAI
is hiring for a Research Fellow in Explainable ML! Our goal is to study how XAI can better enable transparency, taking into account the nuances of context and the needs of different stakeholders.
Perspectives on incorporating expert feedback into model updates
@valeriechen_
and co-authors review in detail methods for updating
#machinelearning
&
#AI
models with expert knowledge, and propose a taxonomy for categorizing different feedback types
Mozilla has been a champion in making systems open and accessible. As they begin establishing their trustworthy AI agenda, algorithmic transparency (explainability, uncertainty, etc.) is a key piece of the puzzle. Looking forward to the new collaborations this brings!
Entertaining to hear
@zacharylipton
talk to a room of artists and creative ML folks at
@mldcmu
-- "interpretability is the catch-all word for all things humans want from ML models": it's basically an antelope, your local waste-basket taxon (aka the growing dumpster fire out back)
The
#ICML2020
Workshop on Human Interpretability in ML has begun!
Check out the papers here: Huge thanks to
@hen_str
for helping put this virtual experience together!
@trustworthy_ml
Cambridge has been a dream 🙌🏾 Can’t wait for my final months as a student
@CambridgeMLG
@LeverhulmeCFI
, but I’m stoked to move back to 🇺🇸 and live in NYC 🗽
Many thanks to my advisor
@adrian_weller
! I look forward to more collaboration, adventure, and learning for years to come
While there are countless articles on methods and techniques that could be used to enable
#explainability
, how are these techniques actually used in deployed systems? Join PAI Research Fellow
@umangsbhatt
this Wednesday as he presents his latest research:
PhD positions in Machine Learning at Cambridge. Please apply by Dec 3! Opportunities to work with me on trustworthy ML including fairness, interpretability and robustness. Feel free to get in touch if you have questions.
Life update: Excited to have joined
@PartnershipAI
as a Postgraduate Research Fellow today! Looking forward to a summer full of explainability and responsible AI.
Breakthroughs in
#AI
come at an ever more rapid pace. We are building a bridge between the newest research and its relevance for businesses, product leaders and data science teams across the world in this new report➡️
Apply for a PhD in machine learning at
@CambridgeMLG
to work with me or wonderful colleagues. I have broad interests, mostly centred on
@trustworthy_ml
that can be deployed at scale.
*US citizens apply by Oct 12 to be eligible for a Gates scholarship*
What a surreal experience meeting so many interdisciplinary researchers at
#FAT2018
- everyone here is radiating a deep desire to understand algorithmic fairness, tackle model interpretability, and make a damn difference in this world
Nice to see LinkedIn using fair rank aggregation in search and deploying feature importance explanations for engineers during model selection; h/t
@kkenthapadi
in the
@fiddlerlabs
tutorial
#FAT2020
Interesting thoughts from
@harari_yuval
I read on the way to
#AAAI19
and
#AIES
: “The real problem with robots [and models] is not their own artificial intelligence but the natural stupidity and cruelty of their human masters...”
Wonderful day at the
@Affectiva
Emotion AI Summit on Trust in AI. Thanks for putting together an amazing lineup
@kaliouby
and thanks for the intro
@navrina_singh
- looking forward to building a human-machine social contract together!
The hardest part was unifying the seemingly disparate literature and piecing together the varied language in each discipline. We wanted to understand how to effectively design and deploy uncertainty-aware ML models to maximally benefit practitioners and end-users.
Have you had a chance to check out all the exciting Practicals that will be happening at the Indaba? A huge shoutout goes out to all the teams that have been developing exciting code and content for all of us to learn from. Which one are you looking forward to the most?…
Maybe every preprint should come with a video "to catapult immature work into the public view" -- can someone at
@wef
and/or
@KayFButterfield
please read
@zacharylipton
's sobering piece on OpenAI instead of stringing together apocalyptic phrases to cue Doomsday
Our workshop touches upon two themes:
1. The differential treatment by algorithms of historically under-served and disadvantaged communities
2. The development of machine learning systems to assist humans for better performance, rather than replace them
A few months later,
@viegasf
pointed us to the work of
@jessicahullman
-- what a gold mine! So now we were in an abyss between the Bayesian approaches to ML and absolutely beautiful depictions of uncertainty (seriously just check out anything from
@adamrpearce
).
When visiting
@AmiBhattMD
and
@neilmaniar
recently, I learned about Interface Age (a "computer" magazine running from 1976 to 1984). What a hidden gem!
There is a bunch of great existing experimental and theoretical work in this direction and even a few discipline-specific review papers. Our current review, while likely imperfect, helped us organize our thoughts, and we hope others find it helpful too!
@JessicaHullman
By any chance, do you know of work that explicitly considers separately visualizing epistemic and aleatoric uncertainty?
@javiac7
made the attached for however, we didn't come across user studies that consider displaying these two types of uncertainty
In the paper, we start by explaining the different types of uncertainty that can arise when working with ML models. We discuss how to estimate and quantify these uncertainties and how to evaluate the quality of our estimates.
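A minimal sketch of one such estimate (my illustrative setup, not code from the paper): Monte Carlo dropout, where total predictive entropy decomposes into an epistemic part (the mutual information) and an aleatoric part (the expected entropy).

```python
import torch
import torch.nn as nn

# Hypothetical classifier with dropout; any dropout-bearing model works here.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(),
                      nn.Dropout(0.2), nn.Linear(32, 3))

def mc_dropout_uncertainty(model, x, n_samples=50):
    model.train()  # keep dropout stochastic at prediction time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    # Total uncertainty: entropy of the averaged predictive distribution.
    total = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    # Aleatoric part: average entropy of the individual sampled predictions.
    aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean(dim=0)
    # Epistemic part: what's left over (the mutual information / BALD score).
    return total - aleatoric, aleatoric

epistemic, aleatoric = mc_dropout_uncertainty(model, torch.randn(4, 10))
print(epistemic, aleatoric)
```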
After moving to the Cathedral of Bayes (
@CambridgeMLG
), I realized that I grossly undervalued the importance of uncertainty. Simultaneously in work with
@PartnershipAI
on explainability, we had multiple stakeholders mention their desire for well-calibrated uncertainty estimates.
It’s nerve-racking to see people from different parts of your life meet: you’re unsure if their interactions will be like napalm or like fireworks. But with proper priming and timing, the latter can be so heartwarming to see
We then discuss leveraging uncertainty for (1) fairness, specifically measurement and sampling bias, (2) decision making, with special care for failure modes, and (3) trust formation in human-AI teams.
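As a toy illustration of (2), in my own framing rather than the paper’s: a model can defer to a human whenever the entropy of its predictive distribution crosses a chosen threshold.

```python
import numpy as np

def predict_or_defer(probs, entropy_threshold=0.5):
    """probs: (n, k) predictive probabilities per example.
    Returns the argmax class, or 'defer' when entropy is too high."""
    entropy = -(probs * np.log(np.clip(probs, 1e-12, 1.0))).sum(axis=1)
    labels = probs.argmax(axis=1).astype(object)
    labels[entropy > entropy_threshold] = "defer"
    return labels

# Hypothetical two-class predictions: one confident, one ambiguous.
print(predict_or_defer(np.array([[0.95, 0.05], [0.55, 0.45]])))
# -> [0 'defer']: the confident case is predicted, the uncertain one deferred.
```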
Uncertainty quantification adds much-needed intellectual humility to AI. Visit the
@AIESConf
poster "Uncertainty as a Form of Transparency: Measuring, Communicating and Using Uncertainty" by
@QVeraLiao
@prasatti
@umangsbhatt
and others today to learn more.
I'm also at
#NeurIPS2019
@NeurIPSConf
in Vancouver this week if you want to discuss explainability, fairness, and human-AI teams. Feel free to reach out! I'll be presenting a subset of this as a poster at the
#HCML2019
workshop on Friday (Dec. 13) in West Level 2, Room 223-224
I hope uncertainty becomes a keystone in the FAccT community: specifically, uncertainty ought to be as important as explainability in the world of ML transparency
@trustworthy_ml
#XAI
@Aaroth
We ran a survey around how people use different explainability techniques in practice. Feature importance literature seems most “mature” — explanations aren’t exposed to end users but are used by ML engineers to sanity check. Peep
Super excited to be at
#SOCML2018
at Google Toronto - I’d love to talk to you if you’re here and learn from you! Thanks for organizing it
@goodfellow_ian
:)
So last June, we set out to wrap our heads around the uncertainty literature from various disciplines: Bayesian Stats (Prior Lovers 😊), Frequentist Stats (Glorified Counters 😳), Viz, HCI, etc. With folks from each area, we put together a review, but we struggled… a lot.
"Predictive risk assessments are a new form of algorithmic violence" - what a powerful and salient call to arms for the use of causal inference
#FAT2018
First
#steepedmoment
on a cool evening in Pittsburgh. Super easy to make and definitely beats all other to-go single serve coffees I've had. Can't wait to use it when I travel. Thanks
@steepedcoffee
@peterbhase
@__Owen___
Awesome resource! You might be interested in our ICLR 2021 paper on explaining uncertainty estimates or our IJCAI 2020 paper on evaluating feature importance explanations
Excited to be visiting
@adrian_weller
and the rest of the
@Cambridge_Uni
MLG later this week. Will be giving a short talk () on Thursday on some recent work with José and Pradeep regarding how to aggregate local feature-level explanations.
So in
@sirajraval
's livestream yesterday he mentioned his 'recent neural qubit paper'. I've found that huge chunks of it are plagiarised from a paper by Nathan Killoran, Seth Lloyd, and co-authors. E.g., in the attached images, red is Siraj, green is original