Just arrived in DC for #AAAI23 — excited to present on Predictive Multiplicity in Probabilistic Classification (work with @berkustun and David Parkes)
Oral presentation: Feb 11 at 9:30am ET (ML Bias and Fairness session 1)
Poster: Saturday Feb 11 6:15pm ET
We ask:
If there are multiple equally good models, are predictions similar across those models?
If not, how do we report when risk estimates change significantly over near-optimal models?
And for a dataset, why are certain individuals’ predictions impacted more than others?
Why you should care:
When models are used to predict the risk of default on a loan, the risk of patient illness, or the risk of failure to appear in court… they inform real-world decisions. But what if there are many equally good models with different predictions? Does that really happen?
Yes!
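A minimal sketch of the phenomenon, using toy data and hand-picked weights (not from the paper): two logistic models fit the same data equally well, yet assign sharply different risk estimates to the same individual.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Toy training set: the label is 1 exactly when the single feature is positive.
X = [-3.0, -1.5, -0.5, 0.5, 1.5, 3.0]
y = [0, 0, 0, 1, 1, 1]

def accuracy(w):
    # Fraction of training points where the thresholded prediction matches the label.
    return sum((sigmoid(w * x) >= 0.5) == bool(label)
               for x, label in zip(X, y)) / len(X)

# Two hypothetical models that differ only in slope. Both place the decision
# boundary at x = 0, so both classify every training point correctly.
w1, w2 = 1.0, 5.0
print(accuracy(w1), accuracy(w2))  # 1.0 1.0 -> "equally good"

# Yet their risk estimates for the same individual disagree substantially:
x_new = 0.5
print(round(sigmoid(w1 * x_new), 2))  # 0.62
print(round(sigmoid(w2 * x_new), 2))  # 0.92
```

Both models are indistinguishable by accuracy, but if 0.62 vs. 0.92 is a default risk or a failure-to-appear risk, the choice between them changes the decision made about that person — that gap over near-optimal models is the multiplicity being measured.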