We have a paper in ICLR! The title is “Learned Coarse Models for Efficient Turbulence Simulation.” We wanted to see if we could train general-purpose ML models to predict turbulent dynamics accurately at low spatial and temporal resolutions (1/n).
Slide deck from recent
@CompCogNeuro
Tutorial with
@behrenstimb
on "Representing States and Spaces." 4.5 hours, ~200 slides and we covered almost 2/3 of them! A public-speaking Everest :) 1/3
Cognitive processes like exploration, memory consolidation & planning use sequences from a mental model. New work in
@NatureNeuro
shows how hippocampal-entorhinal neurons can be easily modulated to generate sequences optimised for different cognitive goals
Applications open for student researchers (formerly interns)!
There will likely be projects with me+colleagues (including
@kevinjmiller10
!) on comp neuro, and ML directions on memory+retrieval. Locations in NYC/London.
Reach out if any questions!
Applications are open for the
@GoogleDeepMind
Student Researcher Program!
There will likely be projects available to work with me and with others in computational neuroscience. If interested, please feel free to get in touch!
Learn more and apply here:
i've been looking for reviews + textbooks in neuroscience that emphasize historical context/the evolution of ideas in neuro fields. do people have any favorites to recommend?
Tremendously excited to be giving an RL tutorial at
#cosyne2023
!
Materials for the tutorial can be found here.
slides
coding colab (File > Make a copy; Runtime > Change runtime type > GPU)
The (brand new!) AI Institute ARNI (ARtificial + Natural Intelligence) is hiring postdocs to work at the intersection of AI+neuro, at Columbia U. Please share!
Info:
Apply: (reviewed on rolling basis)
We had a paper accepted to ICLR 2022! I'm very excited about this, and my next post will be about that. But first, I feel I need to post a tribute to the recently deceased musical artist Meatloaf. (1/n)
Very excited to share this new work with
@ZePoLiTaT
and
@KelseyRAllen
. Motivation: we want AI that can invent new things! In this work, we use fancy new learned simulators (made of Graph Neural Networks) to optimize designs in varied settings with complex dynamics.
Could machines design new tools for us? With my co-first authors
@neuro_kim
and
@ZePoLiTaT
, we are excited to announce "Physical Design using Differentiable Learned Simulators" where we solve complex, physical design problems by differentiating through GNN-based simulators. 1/9
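The core idea (gradient descent on design parameters through a differentiable simulator) can be sketched in a few lines of numpy. Everything here is a hypothetical stand-in: `simulate` is a toy placeholder for a learned GNN simulator, and finite differences stand in for autodiff:

```python
import numpy as np

def simulate(design):
    # toy placeholder for a differentiable learned simulator
    return np.sin(design).sum()

def loss(design, target=2.0):
    # how far the simulated outcome is from the design goal
    return (simulate(design) - target) ** 2

def grad_loss(design, eps=1e-6):
    # finite differences as a stand-in for autodiff through the simulator
    g = np.zeros_like(design)
    for i in range(design.size):
        d = design.copy()
        d[i] += eps
        g[i] = (loss(d) - loss(design)) / eps
    return g

design = np.zeros(3)
for _ in range(200):
    design -= 0.1 * grad_loss(design)   # gradient descent on the design itself
```

The point is just that once the simulator is differentiable, "design" becomes an optimization variable like any other.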
Announcing the new class of
#35InnovatorsUnder35
. Past honorees include Mark Zuckerberg, Sergey Brin, Larry Page, and Jonny Ive. This crew should be just as successful. Here's your chance to meet them first.
This was a fantastically fun conversation with (fellow neuroscientist!!)
@JonKrohnLearns
about neuroscience, memory, and the possibilities for ruminative AI that is as anxious and neurotic as us! Should be available end of October on the SuperDataScience podcast.
Next week, I'm interviewing Dr.
@neuro_kim
— Research Scientist at Google DeepMind and Affiliate Professor of Theoretical Neuroscience at Columbia University — for a podcast episode.
Got questions for her?
In addition to the above, Kim:
• Is an exceptional and fun explainer
Everyone,
@neuro_kim
is a doc now & she has this hippocampus poster signed by all PNI to show for it! She's that friend I share science thoughts, conference rooms, & life stories with - for 5ish years!! Here's to 50 more years of science & friendship!
#successorrepresentation
Organizing a workshop on "Model-Based Cognition: Hierarchical Reasoning and Sequential Planning" for
#cosyne2018
with
@roozbeh_kiani
@basvanopheusden
@kevinjmiller1
. We have gathered some great speakers and you should come hang out with them! See more:
The evidence is growing that the brain’s grid cells, used to identify our position in physical space, may also be keys to organizing and navigating more abstract concepts, like time and memory.
I was on a PODCAST, how cool is that? Hear me wax lyrical on the neuroscience + AI of learning and prediction. Find fresh takes, like maybe LLMs are kind of a big deal (...but there are still some open problems for the rest of us). Thanks
@superdatasci
for the interview!
Today's episode is one of my favorite conversations ever. In it, the hilarious and fascinating Dr.
@neuro_kim
(of both DeepMind and Columbia) blows my mind by detailing relationships between human neuroscience and A.I.
Watch here:
More on Kim:
•
I've been working with Jraph internally for a while now and totally love it. Makes it super easy to put together new graph NN models. Congrats to colleagues on its release!
Very excited about the release of Jraph. Finally an easy-to-use, extensive and fast Graph Neural Network library in JAX! Fully compatible with NN libraries such as Flax and Haiku:
Come see
@eckstein_maria
and
@kevinjmiller10
talk about classical cognitive modeling, data-driven modeling with neuroscience, and what lies in between!
Then catch up with all three of us later for a hands-on tutorial where we learn to fit some models 😎
Join us now for Keynote+Tutorials (8:30-10:15)!
(1) A High-dimensional View of Computational Neuroscience
(2) Cognitive models of Behavior: Classical and Deep Learning Approaches
(3) Behind the Eyes: Insights into Cognition from Naturalistic Gaze Behavior in VR
Here's where we are:
1. Local elections officials are taking the time necessary to make sure every eligible vote is counted.
2. Trump is losing and trying to crown himself the winner.
3. Voters choose the future. We fought for fairer elections and it’s working.
We’re looking for a Research Scientist on our Structured Intelligence team at DeepMind - especially with experience or interest in GNNs. See the link below to apply, or feel free to reach out to me directly.
But -- his over-the-top, theatrical, maximalist, recklessly passionate music and his performing persona are just my absolute favorite. Rest in peace, Meatloaf.
@CompCogNeuro
@behrenstimb
If you want to explore some of these ideas with code, check out this jupyter notebook from the Dartmouth MIND 2019 Summer School. This github page contains a lot of great resources (slides+code) for thinking about cognitive maps. 2/3
Very excited for
@neuro_kim
today at 4PM EST on
#LearningSalon
. While reading I was reminded of one of my favorite hippocampal papers
, specifically, the Bayesian model with continuity constraints.
if you're at
#cosyne2020
, come check out poster I-20 in tonight's session by
@jesse_geerts
+
@NeilBurgess10
+ me!
Keywords: hippocampus, probabilistic, prediction, structure learning, replay, and (of course) flexibility :)
To summarize our approach, we…
1. Use classical physics solvers to produce high-res trajectories
2. Downsample in space and time to produce training data
3. Train a neural network to do next-step prediction on low-res frames
4. Run it iteratively to generate long trajectories
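The four steps above, as a minimal numpy sketch (shapes, downsampling factors, and the identity "model" are all hypothetical placeholders, not the paper's actual settings):

```python
import numpy as np

rng = np.random.default_rng(0)
traj = rng.standard_normal((100, 64, 64))   # (time, x, y) high-res trajectory

# Steps 1-2: downsample the high-res trajectory in time and space
space_f, time_f = 4, 2
coarse = traj[::time_f]                     # subsample in time
coarse = coarse.reshape(coarse.shape[0], 16, space_f, 16, space_f).mean(axis=(2, 4))

# Step 3: build next-step prediction pairs for training
inputs, targets = coarse[:-1], coarse[1:]

# Step 4: run a one-step model iteratively to generate long trajectories
def rollout(model, x0, n_steps):
    frames = [x0]
    for _ in range(n_steps):
        frames.append(model(frames[-1]))
    return np.stack(frames)
```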
It is a weird feeling to be sitting at home the night before a talk without having to give a talk the next day (videos were submitted in advance). Without slides to present tomorrow, I'm not sure what I should be making unnecessary changes to. Maybe I should cut my hair?
This was SO much fun. I really missed the part of research where you have fun after hours conversations about science. This was a perfect antidote to lockdown drear 🥰
It was SUCH an incredible honor to give a cosyne tutorial. Thanks so much for everyone who attended.
In the spirit of learning from reinforcement, we'd love to get some feedback on what worked and what could be improved -- see this *very* short survey:
CCN paper on probabilistic Successor Representation with
@jesse_geerts
and
@NeilBurgess10
! Think SR + uncertainty + flexibility through non-local updates.
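For anyone new to the SR: a minimal numpy sketch of the vanilla Successor Representation on a toy ring world (illustrative only, not the probabilistic model in the paper):

```python
import numpy as np

# SR: expected discounted future state occupancy under transition matrix T
n, gamma = 5, 0.9
T = np.roll(np.eye(n), 1, axis=1)          # deterministic clockwise transitions
M = np.linalg.inv(np.eye(n) - gamma * T)   # closed form: M = (I - gamma*T)^(-1)

# With a reward vector r, state values follow directly
r = np.zeros(n)
r[0] = 1.0
V = M @ r
```

The paper's twist is to put uncertainty over these quantities and update them non-locally; this is just the deterministic baseline.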
"Now girls, tonight you are going to hear some very bad words. And I want you both to promise me, right now, that you are NEVER going to use these words at home."
The audience laughs. Eyes wide, we nod dutifully to Meatloaf.
"But... in school? It's perfectly okay."
Turbulence simulation is vital to many areas of forecasting, engineering, and science. Simulators require high spatial and temporal resolution in order to make accurate predictions, as running at lower resolutions can lead to large errors because of chaotic dynamics.
Tune in on or after June 26 to hear Andrew Saxe, Kristin Branson, Leila Wehbe, and me say things about ml+neuro at this virtual SfN conference!
Register:
Info:
What is the future for the convergence of
#machinelearning
and
#neuroscience
? The June 26
#SfNvirtual
conference with Kristin Branson, Andrew Saxe,
@neuro_kim
, Leila Wehbe offers insights. Registration lets you watch any time for up to 6 months afterwards:
I thought this work by esteemed colleagues
@jhamrick
and
@shakir_za
translating Marr's levels into ML was really interesting. Curious what other neuroscientists think.
You may remember a while back that I asked folks if they knew about Marr's Levels of Analysis. Many didn't, and I promised an explanation. At long last,
@shakir_za
and I are excited to share one, which will be presented at
#BAICS2020
!
AI storytime with
@FeryalMP
! particularly looking forward to checking out chelsea finn's metalearning lectures and david mackay's information theory lectures.
@behrenstimb
A British colleague and I once wondered why a group of ~20 unfamiliar people in our building were wearing matching raincoats and rainboots. I decided to go over and ask.
He was horrified.
I love being an American in England.
#cosyne2022
come check out the insanely talented
@NotGeorgeTom
present our poster I-050 8pm tonight (17th) on learning predictive representations using STDP and phase precession! Work done with the awesome
@neuro_kim
&
@caswellcaswell
Neat work unpacking transitive inference mechanism in hippocampus, using fancy schmancy fmri methods, from deepmind neuro + u magdeburg peops! Congrats :)
Our latest
@NeuronCellPress
paper, in collaboration with
@etude_yi
&
@david_berron
from
@DuzelLab
, describes how our brains derive new insights by drawing on multiple related experiences connected by dynamic links within a network of hippocampal memories
We use a “domain-general” architecture composed of CNN modules that are not specific to turbulence simulation. It consists of a linear CNN Encoder and Decoder, plus processors built from a U-shaped stack of dilated CNNs with residual skip connections.
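A toy numpy sketch of the processor idea: a U-shaped schedule of dilation rates with a residual skip around each layer. (1-D, single-channel, fixed kernel, periodic boundaries; the real model uses learned multi-channel convolutions.)

```python
import numpy as np

def dilated_conv(x, kernel, dilation):
    # periodic "same" 1-D convolution at the given dilation rate
    k = len(kernel)
    half = (k - 1) // 2
    out = np.zeros_like(x, dtype=float)
    for j in range(k):
        shift = (j - half) * dilation
        out += kernel[j] * np.roll(x, -shift)
    return out

def dilated_residual_stack(x, kernel, dilations=(1, 2, 4, 2, 1)):
    # U-shaped dilation schedule: receptive field grows, then shrinks;
    # residual skips keep each layer a small correction to its input
    for d in dilations:
        x = x + dilated_conv(x, kernel, d)
    return x
```

The growing-then-shrinking dilation schedule is what buys a large receptive field at low cost, which matters for long-range turbulent correlations.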
Come check out this work on graph-structured agents that build things, and see some cool videos of agents playing with blocks! 👷♀️🤖🏗️
Also just now realizing we may have missed an opportunity to name this something with the acronym BOB (as in, the builder). Alas.
How do you get an agent to build complex structures out of blocks? Come see our answer on Wednesday at
#ICML2019
from 11:40am-12pm in Hall B & during posters! We use 3 key ingredients: structured policies, object-centric actions, & model-based planning.
This is going to be awesome! Looking forward to participating. If you're a grad student interested in learning neuro methods related to cognitive maps, apply!
We are now accepting applications for the 3rd annual Methods in Neuroscience at Dartmouth (MIND) Computational Summer School (Due 4/15). This year we will be focusing on cognitive maps from the perspectives of systems, cognitive, & social neuroscience
We also find that the learned model outperforms a state-of-the-art classical turbulence simulator, Athena++, at a comparably low resolution, in terms of pixel-wise loss (blue v. orange). An intermediate-resolution Athena++ (green) beats both.
Stability can be hard for long rollouts: small errors accumulate, which can eventually lead to domain shift. Adding noise to the inputs during training yields dramatically better stability (although this is less of a problem for large timesteps).
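The training-noise trick, sketched in numpy (`sigma` is a hypothetical hyperparameter; the idea is that input perturbations during training mimic the model's own rollout errors, so it learns to correct them):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.01  # hypothetical noise scale

def corrupt(inputs):
    # perturb training inputs with small Gaussian noise
    return inputs + sigma * rng.standard_normal(inputs.shape)

x = rng.standard_normal((8, 16, 16))   # a batch of coarse frames
x_noisy = corrupt(x)                   # train on (x_noisy -> next frame) pairs
```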
However, where the learned model really shines is preserving high-frequency information: spectral error is lower than Athena++ at the low and intermediate resolution, as the learned model maintains the sharpness of certain features Athena++ smooths away.
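One common way to measure this kind of spectral error, sketched in numpy (illustrative, not necessarily the paper's exact metric): bin the 2-D Fourier power by wavenumber magnitude and compare prediction against ground truth.

```python
import numpy as np

def radial_energy_spectrum(field):
    # bin |FFT|^2 by integer wavenumber magnitude (radial average)
    f = np.fft.fftshift(np.fft.fft2(field))
    power = np.abs(f) ** 2
    n = field.shape[0]
    ky, kx = np.indices(field.shape)
    k = np.hypot(kx - n // 2, ky - n // 2).astype(int)
    return np.bincount(k.ravel(), weights=power.ravel())

rng = np.random.default_rng(0)
truth = rng.standard_normal((32, 32))
pred = rng.standard_normal((32, 32))
spec_err = np.abs(radial_energy_spectrum(pred) - radial_energy_spectrum(truth)).mean()
```

High wavenumbers in this spectrum correspond to exactly the sharp features that over-smoothed simulators wash out.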
Our simple general-purpose learned models quantitatively outperform more specialized, parameterized models. However, most learned models do qualitatively pretty well, suggesting learned models are generally a good fit for this type of problem.
Thanks to the other panelists and Hannah for the wonderful discussion and to the audience at
@CheltSciFest
for the brilliant questions! What a delightful experience!
it is weird to circle back to memorable events that coincided with the start of the pandemic. for me that is cosyne and asparagus being back in season.
For me
#cosyne21
marks the one year anniversary of COVID. When I left for
#cosyne20
the world was normal. During the conference covid changed from being a news story to being all around me. By the time I returned to Seattle, the city was on lockdown and work was remote. 1/3
Accelerating turbulence simulation is of particular interest to astrophysics, which requires very large, very high-resolution grids to simulate turbulent phenomena, such as the interstellar medium.
Generalizing to different box sizes for “Mixing Layer” turbulence is also tricky. Training on a range of box sizes improves predictions on an even larger range. But artifacts persist, and quantitatively, the predictions of both models remain outside of the expected error.
@behrenstimb
my dad used to offer awards for outrunning him. he was not very fast but he would tell good stories, so when i was thoroughly absorbed he would shove me into shrubbery and sprint off. i imagine this strategy could generalize to mario kart.
If you are at
#cosyne2022
, collaborator Gabriela Michel (not on Twitter) is presenting work tonight from her previous lab modeling cortical development across species. Check it out, it's super cool!
Understanding properties of learning in an artificial context can translate to insights about learning in the biological context; they discuss how machine learning problems and unexplained neuroscience data can mutually inform each other... (
@neuro_kim
and Andrew Saxe at
#SfN19
)
Excited to share some new work on ArXiv today: "Fast Task Inference with Variational Intrinsic Successor Features" Done with
@wwdabney
, Andre Barreto, Tom Van de Wiele,
@dwf
, and
@VladMnih
TL;DR: Unsupervised pre-training for efficient RL
Meatloaf died, probably of covid, possibly unvaccinated, within a year of his musical partner, Jim Steinman, who wrote Meatloaf's hits (also others like "Total Eclipse of the Heart"). I didn't have a lot of ideological overlap with Meatloaf (as "possibly unvaccinated" suggests).
This work was an interdisciplinary collaboration between researchers at
@deepmind
,
@FlatironCCA
,
@googleresearch
and Princeton University. This was a wonderful experience, and involved fascinating conversations about the fluid dynamics of outer space.
Since I was little, my family has been obsessed with Meatloaf. My sister and I would bounce around the house, enthusiastically belting out the lyrics to Paradise by the Dashboard Light, understanding about 8% of it (why won't she just let him sleep?) (2/n)
When we were young (maybe 8 and 10?) my parents took us to a Meatloaf concert. This was a huge deal. It was at 10pm on a school night, it was in New York City, and we had seats pretty close to the stage. And it was Meatloaf!
Amazing tour de force of a talk on graphnet agents that build,
@jhamrick
powered through a briefly broken projector like a badass without missing a step, and by faaaar the most fashionable headwear of ICML 👷♀️ check out poster
#36
tonight for more amazingness
#ICML2019
Excited to share our newest work (& my first final-author paper!) led by Victor Bapst and Alvaro Sanchez-Gonzalez (w/
@CarlDoersch
@neuro_kim
@pushmeet
@PeterWBattaglia
). We investigate structured, object-centric agents that can build things!👷♀️🏗
One time the only doctor on an Antarctic expedition removed his own appendix when he came down with appendicitis.
Now these scientists produce a paper on long covid while actively having long covid! Kudos.
And here is the preprint: "Characterizing Long COVID in an International Cohort: 7 Months of Symptoms and Their Impact"
Special thanks to thousands of you who participated in our study on
#LongCovid
.
a thread 🧵 0/
@AthenaAkrami
@criticalneuro
I've recently been taken with how cognitive immune systems are... Even without the neuro prefix, they have memory, recognition, learning, search. Cryptographically secure protocols! They have biases and misunderstandings that seem almost like mental illness.