Associate Professor of Machine Learning, University of Oxford
@OATML_Oxford
Group Leader
Director of Research at AISI (formerly UK Taskforce on Frontier AI)
Really excited to release Bayesian Deep Learning Benchmarks - please share with others who you think might like this, and have a look at the blog/repo/colab:
This work was done over a period of a year and a half by many collaborators
@OATML_Oxford
I'm hiring!
I'm building 4 research groups under me at AISI (formerly the UK's Taskforce on Frontier AI) to work on foundational AI safety research.
[1/5]
All the slides from my Bayesian Deep Learning tutorial at MLSS 2019 Moscow, including a practical in Active Learning with jupyter notebooks (practical credit: Ivan Nazarov), are now online
I really like blog posts which try to teach the reader old ideas. This one is about ways to visualise concepts in information theory, mentioned (cautiously) in solution 8.8 in MacKay's book (remember, positive areas can be negative quantities!)
By
@BlackHC
Did you know that you can beat Deep Ensemble Uncertainty with a single deterministic net?
We prove that softmax nets can't normally capture epistemic uncertainty, but with an appropriate inductive bias any pre-trained net can implicitly capture uncertainty
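The failure mode above can be illustrated with a minimal stdlib sketch (a toy linear classifier, not the paper's construction): scaling an input arbitrarily far from the training data makes the softmax *more* confident, not less, so softmax confidence alone cannot signal epistemic uncertainty.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy two-class linear net: logits = W x (weights are made up).
W = [[1.0, 0.0], [0.0, 1.0]]

def predict(x):
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    return softmax(logits)

# Same direction, 100x further from any training data: the softmax
# saturates towards full confidence instead of flagging uncertainty.
near = predict([1.0, 0.5])
far = predict([100.0, 50.0])
print(max(near), max(far))
```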
This year's
#BDL
workshop at
@NeurIPSConf
will focus on the reliability of BDL in downstream tasks, with invited talks from practitioners and the two NeurIPS BDL challenges
Please consider submitting extended abstracts by Oct 1, or posters by Dec 1
Our car below never saw roundabouts at training time. But using dropout ensembles' epistemic uncertainty we can choose the best worst-case plan to follow at deployment
We put code online to make it as easy as MNIST to plug & play your own BDL tools:
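The plan-selection idea can be sketched in a few lines (a hedged toy, with made-up plan names and costs, and a random mask standing in for a dropout network): sample several stochastic forward passes per candidate plan, then pick the plan whose *worst* sampled cost is smallest.

```python
import random

random.seed(0)

def dropout_cost(plan, p_drop=0.5):
    """One stochastic forward pass: a toy stand-in for a dropout
    network scoring a candidate plan (lower cost is better)."""
    # Hypothetical per-plan feature costs; dropout masks the features.
    features = {"swerve": [1.0, 4.0], "brake": [2.0, 2.5]}[plan]
    return sum(f for f in features if random.random() > p_drop)

def worst_case(plan, n_samples=100):
    # The spread of sampled costs is the ensemble's epistemic
    # uncertainty; we plan against the worst sampled outcome.
    return max(dropout_cost(plan) for _ in range(n_samples))

best = min(["swerve", "brake"], key=worst_case)
print(best)
```

Here "brake" wins: its worst sampled cost (~4.5) beats the riskier "swerve" (~5.0), even though swerve can be cheaper on average.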
Can autonomous 🚘 identify, recover from, and adapt to distribution shifts? We play with BDL and robust control to get cars to recover&adapt when they don't know what to do
At ICML with
@filangelos
@ptigas
@rowantmc
@nick_rhinehart
@svlevine
📄🎞️🕸️💻:
Is there a "reproducible research" website?
Which aggregates all open source reproductions of arXiv papers and people get credit for reproducing results?
Might lead to proper incentive for others to spend time recreating exps.
Useful for criticism, discussions, and follow up exps
Curious to see what a rejected
#ICLR2021
paper with scores 8,6,6,4 looks like? (might as well get some PR)
Presenting our work on tractable objectives for information bottlenecks! We propose IB bounds which scale to imagenet & are really easy to implement
Excited to announce our new research group at
@CompSciOxford
:
Oxford Applied and Theoretical Machine Learning Group (OATML)
Have a look at the website and ping me at NIPS if you'd like to join us!
Awesome line-up for this year's Bayesian deep learning workshop
@NipsConference
, with this year's theme "deep learning uncertainty in real-world applications"
I'm testing llama 2:
Bob and Alice went to the pub. Bob forgot his keys and went back to the car to get them. Alice waited for Bob for a long time and in the end decided to go home. When Bob got back and looked for Alice he couldn't find her. Where is Bob?
Entertaining responses
This year the
#BDL
workshop will take a new form, and will be organised as a
@NeurIPSConf
European event together with
@ELLISforEurope
We invite researchers to submit posters for presentation at the event (**deadline: December 1, 2020**)
Yay, we're the 6th most cited ICML paper of the past 5 years. Many thanks to everyone using these tools and to all my awesome collaborators over the years!
Big announcement: we classify ~72k protein variants, previously of unknown significance, using unsupervised ML trained on evolutionary and human sequences. We perform on par with lab experiments on the variants that have been studied experimentally. Result of a year-long collaboration with
@deboramarks
lab
@harvardmed
A new class of attention-based models which learn _datasets_ instead of datapoints, and are able to solve tasks that traditional supervised neural nets cannot.
Great work by
@OATML_Oxford
graduate students
@janundnik
* &
@neilbband
*,
@clarelyle
,
@AidanNGomez
Nice metaphor from Summers-Stay: “GPT is like an improv actor who has never left home and only read about the world in books. Like such an actor, when it doesn’t know something, it will just fake it. You wouldn’t trust an improv actor playing a doctor to give you medical advice.”
GPT-3 is a better bullshit artist than its predecessor, but it's still a bullshit artist.
an investigation,
@techreview
, co-authored with Ernest Davis.
Join us Thursday next week to hear
@DavidDuvenaud
talk about Infinitely Deep Bayesian Neural Networks with Stochastic Differential Equations!
Also if you want to advertise your research during the BDL socials, send your poster here by 6/12:
@OATML_Oxford
student Lewis Smith wrote a really interesting blogpost exploring his experience working with capsule networks -- explaining how to formulate a generative version of the model and how this revealed conceptual issues with capsules as a whole
We have a fully funded PhD studentship to join
@OATML_Oxford
to work on systematic generalisation in ML, co-supervised between me and
@egrefen
. You'll get industry salary, spend 50% of your time at
@UniofOxford
and 50% at
@facebookai
(FAIR), with access to lots of compute etc ⬇️
⚠️ APPLICATION PROCESS ⚠️
Apply by emailing a CV, personal statement, and research proposal to oxford-fair-generalization-2022@googlegroups.com by 📅 Nov 30 📅 (any time). Indicate if you would like Prof Foerster or Prof Gal as your primary supervisor. 5/9
I often get questions about what it’s like to do an undergrad at Oxford
@CompSciOxford
. I didn’t do my own undergrad here, but I do find it a lot of fun teaching here (we have great students!)
This thread is to give you some inside info on
@UniofOxford
if you plan to apply [1/n]
#BDL
Schedule, Accepted Papers, Contributed Talks, and Awards are updated online:
Congrats to everyone who will be presenting at the workshop (136 abstracts accepted!). I'm looking forward to it
A brilliant way to introduce ML to the general public (and to decision makers specifically) by Google Comics Factory. A good step towards addressing a gap in education we discussed recently at
@ESA_EO
I collected some stats from my
@NeurIPSConf
AC stack:
* 66 reviewers
* 46 engaged in discussion
* 54 updated their review
* 17 changed their score (highest delta was 3->6)
* 119 discussion threads
Curious to hear stats from other ACs /
@NeurIPSConf
itself
Accelerating RL from rich observations, such as images, without relying on either domain knowledge or pixel-reconstruction:
w/ Amy Zhang,
@rowantmc
,
@RCalandra
,
@svlevine
Can we learn dynamics from images w/o reconstruction? By learning a latent space where distances obey a bisimulation metric, we get latent states that group semantically similar but visually distinct states!
w/ Amy Zhang,
@rowantmc
,
@RCalandra
,
@yaringal
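The training signal behind this can be sketched with a toy, deterministic simplification (the actual objective compares *distributions* over next latents; all numbers here are made up): latent L1 distance is regressed onto reward difference plus discounted next-latent distance.

```python
GAMMA = 0.99  # discount factor

def bisim_loss(z_i, z_j, r_i, r_j, z_next_i, z_next_j):
    """Toy bisimulation-metric objective: latent distance should
    match reward difference plus discounted distance between the
    (here deterministic) next latent states."""
    l1 = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
    target = abs(r_i - r_j) + GAMMA * l1(z_next_i, z_next_j)
    return (l1(z_i, z_j) - target) ** 2

# Two visually distinct but behaviourally identical states (same
# reward, same next latent) minimise the loss at latent distance 0,
# so the encoder is pushed to group them together.
print(bisim_loss([0.0, 0.0], [0.0, 0.0], 1.0, 1.0, [0.5, 0.5], [0.5, 0.5]))
```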
Happy to share that I'll be helping the UK taskforce as director of research (together with
@DavidSKrueger
). We're heavily recruiting - if you have technical expertise and want to work on Frontier Models (LLMs, generative AI), please read here
@geoffreyhinton
@ylecun
@ericschmidt
@sama
15/
@yaringal
will join as Research Director of the Taskforce from Oxford where he is head of the Oxford Applied and Theoretical Machine Learning Group. Yarin is a globally recognised leader in Machine Learning.
Postbox is such a brilliant idea! Instead of continuously being bombarded by notifications, the app collects them and delivers them all together 3 times a day. Why isn't this an Android default?
This year
@NeurIPSConf
AC guidelines mention that the AC can "email the authors in the exceptional situation in which the [reviewers] discussion brings up new elements that would need to be clarified with the authors".
This is very interesting - I wonder how many ACs will do this
Long awaited work together with student
@Adam_D_Cobb
and Steve Roberts, on improving the safety of BDL models used in self-driving cars and in medical applications:
"Loss-Calibrated Approximate Inference in Bayesian Neural Networks"
Twitter just told me that they've literally shadow-banned me (reducing exposure of my posts) as punishment for not engaging enough with the platform
I don't expect many people to see this...
I really like the example distinguishing aleatoric from epistemic uncertainty "a lottery ticket (where the future, random outcome depends on chance) and a scratch card (where the outcome is already decided, but you don't know what it is)"
'Although numbers are often treated as cold, hard facts, we should be willing to acknowledge how uncertain they can be'. In my blog for Scientific American
A simple and robust tool for effective network pruning (few lines of code!). Just to make clear that this is work done by the great student
@AidanNGomez
(
@Deep__AI
please at least cite the first author of the paper...)
Deep Kernel Learning combines GPs and deep learning in a principled way, keeping GPs' awesome properties
But sometimes its uncertainty fails.. We investigated why - turns out it's Feature Collapse!
Using recent tools we fix it to get principled Single Forward Pass Uncertainty
Excited to share our work on single forward pass uncertainty for classification *and* regression!
"On Feature Collapse and Deep Kernel Learning for Single Forward Pass Uncertainty"
👉
Simple & extensible implementation:
Summary:👇
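Why feature collapse breaks distance-based uncertainty can be shown with a tiny stdlib sketch (toy feature maps, not the paper's architecture): an unconstrained extractor can map everything to the same point, making OOD inputs invisible, while a distance-preserving (bi-Lipschitz-style) map keeps them detectable.

```python
def collapsing_features(x):
    # Unconstrained feature extractor that has collapsed: every input
    # lands on the same point, so OOD looks identical to training data.
    return [0.0 for _ in x]

def bilipschitz_features(x):
    # Distance-preserving map (identity, as a stand-in for the
    # bi-Lipschitz behaviour that e.g. spectral norm encourages).
    return list(x)

def distance_uncertainty(feat, x, train_x):
    # Distance to the closest training feature: a crude stand-in for
    # kernel/GP uncertainty computed in a single forward pass.
    fx = feat(x)
    return min(sum(abs(a - b) for a, b in zip(fx, feat(t))) for t in train_x)

train = [[0.0, 0.0], [1.0, 1.0]]
ood = [10.0, -10.0]
print(distance_uncertainty(collapsing_features, ood, train))   # OOD invisible
print(distance_uncertainty(bilipschitz_features, ood, train))  # OOD far away
```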
"There comes a time where you'll know so many things that as you forget them, you can reconstruct them from the pieces that you can still remember. It is therefore of first-rate importance that you know how to triangulate - figure something out from what you already know"
Feynman
PhD opportunity with us at
@OATML_Oxford
funded by
@facebookai
, co-supervised between me and
@egrefen
to work on systematic generalisation. Take a look at the thread if interested / share to others who might be interested
🧵THREAD 🧵
Are you looking to do a 4 year Industry/Academia PhD? I am looking for 1 student to pioneer our new FAIR-Oxford PhD programme, spending 50% of their time at
@UniofOxford
, and 50% at
@facebookai
(FAIR) while completing a DPhil (Oxford PhD). Interested? Read on… 1/9
Did you like
@dustinvtran
's
@NeurIPSConf
tutorial on Uncertainty in Deep Learning? join us *tomorrow at 11am GMT* for the NeurIPS Europe meetup on Bayesian Deep Learning, co-organised with ELLIS, to find out about the latest research in the field!
Snippet 26: Open challenge of benchmarks. Announcing Uncertainty Baselines lead by
@zacharynado
to easily build upon well-tuned methods! Joint effort with
@OATML_Oxford
Excited to be speaking at the UN's "AI for Good" Global Summit this Wednesday! I'll be talking about our work with
@NASA
@nasa_fdl
and
@esa
.
Also looking forward to meeting the formidable collection of speakers at the summit:
@OATML_Oxford
@UniofOxford
AI Safety Gridworlds, from DeepMind
"We present a suite of RL envs illustrating safety properties of intelligent agents [..] We evaluate A2C and Rainbow on our envs and show that they are not able to solve them satisfactorily"
We need more papers like this
Work by many collaborators (including three
@OATML_Oxford
members) modelling the effect of different non-pharmaceutical interventions against
#COVID19
transmission, with extensive empirical validation. Great job Jan Brauner,
@sorenmind
, and everyone!
I disagree. It is our responsibility as ACs to ensure reviewers read the rebuttal and defend their decision. A reviewer ignores author rebuttal? AC should report them so they are not invited to review again. But many ACs don't engage.. We need mechanisms to highlight that to SACs
Christ Church college, Oxford, is advertising a Junior Research Fellowship (JRF) in Computer Science, tenable from 1 Oct 2021:
JRFs are offered by Oxford colleges to early career researchers to develop their independent research
Deadline 20 November 2020
I remember thinking "what can I do to help". Then I realised that I'm lucky to actually be in a very powerful position--doing admissions at
@UniofOxford
. I used to complain that there's not enough diversity. Now I work to _create_ diversity. Guys, instead of complaining, act.
@fhuszar
Your wealthy tech/liberal bubble. In which how many people are black? (Or even women?) In which how many of us do more than tweet about these issues?
We want to believe that people can look to us for social progress. But they can’t. Change starts with ourselves.
We have a fully funded PhD studentship with the Satellite Application Catapult and Deimos Space UK, to work on AI for Good & Earth Observations (EO), or to develop new ML methodology towards EO.
Deimos has some amazing challenges in AI for Good & EO
[1/2]
This is not a normal
#neurips
year.
We've had disruptions on a global scale and many have been disadvantaged by these. I'll email the PC advocating for
@roydanroy
's suggestion to offer extensions to anyone who's been personally affected and willing to write to them
CC
@kat_heller
Have you been wondering what
@dpkingma
has been up to over the past couple of years? The BDL schedule is now online:
Registration for the event is free, but we can only fit limited numbers so make sure to register early:
It's been online for a few days, but still:
Continual learning is important for medical applications, yet evals in the field completely ignore the point of learning continually! Worse, with more sensible evals, existing algos fail.. a gap in research for you :)
PhD applications are now open for the EPSRC CDT in Modern Statistics and Statistical Machine Learning at Imperial and Oxford for October 2020 starts. Apply now!
Very well put
@andrewgwils
.
"BDL is gaining visibility because we are making progress. We shouldn’t discourage these efforts. If we are shying away from an approximate Bayesian approach because of some challenge or imperfection, we should always ask, “what’s the alternative”?"
Bayesian methods are *especially* compelling for deep neural networks. The key distinguishing property of a Bayesian approach is marginalization instead of optimization, not the prior, or Bayes rule. This difference will be greatest for underspecified models like DNNs. 1/18
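The marginalization-vs-optimization distinction in the quote can be sketched with a hedged stdlib toy (hypothetical posterior samples over one weight of an underspecified model; nothing here is from the thread): averaging *predictions* over the posterior gives calibrated doubt, while one optimised weight gives a single confident answer.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical posterior samples over a single weight: the data
# underdetermine it, so plausible weights disagree in sign.
posterior_w = [-2.0, 0.5, 3.0]
map_w = 3.0  # the single "optimised" point estimate

x = 1.0
# Marginalisation: average the predictions over the posterior...
bayes_pred = sum(sigmoid(w * x) for w in posterior_w) / len(posterior_w)
# ...vs optimisation: one confident prediction from one weight.
map_pred = sigmoid(map_w * x)
print(round(bayes_pred, 3), round(map_pred, 3))
```

The averaged prediction sits near 0.5 (the samples disagree, so the model hedges), while the point estimate is confidently near 1.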
Ping me if you want to join us for a PhD at
@OATML_Oxford
under the AIMS programme (fully funded PhD). Unrelated, also opportunities to work on ML projects with
@esa
AIMS is accepting applications for fully-funded
@UniofOxford
PhD places to work on machine learning, vision, robotics, sensor networks and more.
The deadline is **25 Jan**. More details and the application site:
Please spread the word!
Thank you all for coming to the
#BDL
workshop
@NipsConference
2018, and thank you so much to my wonderful co-organisers Christos Louizos, Miguel Hernández-Lobato, Andrew G. Wilson, Zoubin Ghahramani, Kevin Murphy, and Max Welling
@wellingmax
@andrewgwils
@ChrLouizos
Congratulations to
@OATML_Oxford
graduate students
@filangelos
and
@timrudner
for receiving a 2021 J.P. Morgan PhD Fellowship and a 2021 Qualcomm Innovation Fellowship!
We've added a summary of reviewers' points and rebuttals for our ICLR rejected paper, for the convenience of new readers:
The paper sets the scene for future research into robustness to adv examples in BNNs, and gives lots of insights into what's going on