I wrote up a post on a wide range of resources and advice that have helped me as an academic who has delved into climate action and work; please share widely!
It's happening! My book, "Models of the Mind: How physics, engineering and mathematics have shaped our understanding of the brain", is coming out March 4th! More information on the book and how to get it worldwide here 👉
Have you heard the word "attention" thrown around in both neuroscience & machine learning? Have you wondered if/how its different uses relate to each other? My new review aims to summarize how this giant topic is studied & modeled across different domains!
🚨News!🚨 This September I will be joining @nyuniversity as an Assistant Professor of Psychology and Data Science! The position is split evenly between @NYUPsych and @NYUDataScience and is part of the Minds, Brains, & Machines Initiative ().
As I'm now on maternity leave and charged with looking after, IMHO, the best baby ever, I probably won't be Twittering much for a bit. But I couldn't go without sharing this...THE COVER OF MY BOOK!!
Models of the Mind is coming Spring 2021!
My blogpost on how & why we use convolutional neural networks as a model of the visual system is probably the most read thing I've ever written and it's now been expanded & updated into a proper review article, complete with 136 references & 5 new figures!
Guys...BIG news. I'm writing a book!!!!! It's called "Models of the Mind" & will be published by @sigmascience. The plan is for each chapter to explain (in an abundantly clear & entertaining way ofc) a fascinating example of how mathematics has helped us understand the brain.
@itsdylanasher It's computational neuroscience. Specifically, "Modeling the impact of internal state on sensory processing." I actually blogged the intro to it here:
This article is funny to me because it assumes Silicon Valley people have special knowledge about the effect of screen time, rather than viewing them as people who commonly take extreme actions in response to little or no scientific evidence.
Python notebooks are great for pedagogy but I can't for the life of me figure out why anyone would use them for normal coding; yet, I see people do it. Am I missing something?
If you are thinking about gifts (for yourself or others) today, I know of a certain paperback book coming out next week --- and it's on sale 😉
"Models of the Mind: How physics, engineering, and mathematics have shaped our understanding of the brain"
Classic von Neumann: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!"
New paper alert⚡️In this perspective, I provide an in-depth argument for the claim that we should test tools of neural analysis on artificial neural networks. This will help us be confident they can lead to progress on understanding the brain!
"In 1998, a neuroscientist and a philosopher bet on whether we would find the neural basis of consciousness within 25 years. On June 23 at NYU’s Skirball Theater, that bet will be resolved."
New paper "Grounding neuroscience in behavioral changes using artificial neural networks"
This opinion piece shows how focusing on changes in brain state that cause behavior changes helps pinpoint neural mechanisms. Specifically, I show how ANN models in neuro & AI help with this
I can never get over how the popular press words ML findings. It's like they think there is a single unified AI that the company DeepMind is building and that that AI has now learned how to control fusion.
No wonder people get scared of super powerful AI!
Applying to grad school? Want to join my lab?? I'm accepting students through Psychology () or Data Science ()! For more info and possible project areas, see below or check out my website 👉
🚨New paper on arXiv! In this work, we wanted to know how the visual system of a fully-embodied reinforcement learning agent compares to a network with the same architecture embedded in systems trained through supervised or unsupervised learning. More 👇🧵
Starting grad school ~10 years ago, I blogged to work through the overflow of ideas & questions I had. Now at the start of my professorship I find myself in a similar position. So here is my first lab blog post --- on why my research includes climate
I'm posting the lecture slides for my ongoing course on Machine Learning for Climate Change here <>, if you want to follow along or incorporate any material into your own teaching.
Very much a work-in-progress though, so don't hold me liable for mistakes!
I'm hiring a postdoc! If you'd like to work on a research project that fits into either of these two research areas () then send a CV, a half-page project proposal & contact info for 3 references to grace.lindsay@nyu.edu with subject "Postdoc Application"
Are you a computational neuroscientist? Do you have friends/family who despite their best efforts still have no idea what you do? This gifting season (which starts early this year due to global supply chain issues), consider giving Models of the Mind 😉
If you're starting to think about grad school, reminder that I will be looking for PhD students through the Cognition & Perception () & Data Science () programs. Research areas listed below and available here:
I believe that we are unlikely to understand the brain by simply looking directly at neural data, yet at the same time the more analysis steps that come between neural activity and a paper's conclusion, the less I believe it. Is this contradictory?
David Van Essen has a tie with his famous wiring diagram of the visual system on it and now *I* want a tie with his famous wiring diagram of the visual system on it. (I will start wearing ties for this)
Who would've guessed that the next advance in neural recording technology would be announced via livestream by a black-masked billionaire with a pig pen in the background. I'm old enough to remember when we would just publish these things in Science.
The emerging findings that untrained neural networks with the right architecture can perform fairly well (on tasks and on predicting brain data) are making me more interested in the details of the brain's structural connectivity.
Intrinsic architectural properties (like size and directionality) in some models already yield representational spaces that - without any training - reliably predict brain activity. These untrained scores predict scores after training. 11/
My paper on testing the tools of systems neuroscience on artificial neural networks has been plagiarized basically in full (after being run back and forth through Google Translate a few times, it seems 🤣) 🙄
NEW EPISODE! We delve into a modern form of the nature vs. nurture debate! Current AI approaches rely heavily on training from data, but many animals function well innately, with little exposure to the world. So how important is learning for intelligence?
Give me your best tips/resources for code management in a research lab (with the typical challenges: people w/o proper CS training working largely individually and frequently coming and going)
Is there a review article summarizing all the forms of dimensionality reduction designed for/by neuroscientists? I've certainly seen a lot thrown out there, but I'd like them all gathered up.
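(For anyone outside the field wondering what those methods actually do: here's a toy sketch of the most familiar one, PCA, applied to simulated neural data. Everything below — the neuron counts, latent structure, and noise level — is made up for illustration, not taken from any particular paper.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 100 neurons whose activity over 500 time points is driven
# by just 3 shared latent signals plus a little independent noise.
latents = rng.standard_normal((500, 3))
mixing = rng.standard_normal((3, 100))
activity = latents @ mixing + 0.1 * rng.standard_normal((500, 100))

# PCA via SVD of the mean-centered data matrix.
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The first 3 components capture nearly all the variance, recovering
# the low-dimensional structure hidden in the 100-neuron recording.
print(explained[:3].sum())
```

Methods like dPCA, GPFA, or t-SNE (the ones a neuroscience-specific review would presumably cover) vary the objective and assumptions, but the basic move is the same: summarize many neurons with a few latent dimensions.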
@PatrickTBrown31 @Richvn I'm sorry, reviewer 1 very clearly raises the issue. While it was possible to get published without including these factors, it seems likely the reviewers would be *more* supportive of work that had in fact included them.
"An Annotated Journey through Modern Visual Neuroscience" by Stuart Trenholm and Arjun Krishnaswamy goes through 25 landmark papers in the history of vision neuroscience, starting in the early 20th century
🚨 Paper Alert! Time for a proper #tweeprint of my recent biorxiv paper with Tom Mrsic-Flogel & Maneesh Sahani. We wanted to know: how do recurrent connections help the visual system process degraded images? 1/n
Just watched the 1990 Total Recall, which is a movie that features self-driving cars, a Mars colony, neural implants, and a giant boring machine---and now I know where @elonmusk got all of his ideas.
Every bit of progress I've made in modeling has immediately felt obvious and simple after the fact. Even if I just spent weeks confused and struggling over it.
I have a new paper with Dan Rubin & @kendmil on biorxiv! "A simple circuit model of visual cortex explains neural and behavioral aspects of attention"
We replicated findings (and figures) from several experimental papers, all using the same basic model!
I was surprised by all the confusion/pushback around this idea that data bias is causing a photo-enhancing algorithm to make people look more white. I think it was the result of people confusing an explanation of a specific type of bias for an explanation of bias in general. 1/n
There seems to be a lot of misunderstanding about this recent tweet of mine () where I try to explain the cause of the bias seen in this work on face super-resolution:
Here is a long explanatory thread responding to @timnitGebru
I've recently decided to transform my twitter feed---through unfollows and muting of words and accounts---to be mostly politics- and cultural commentary-free. It already feels lighter and more engaging. It's something I'd recommend others consider if it sounds appealing. 1/n
I like the Rao/Ballard predictive coding model as a theory because it makes clear experimental predictions about how the visual system should work. The flip side of that is we can find that those predictions aren't borne out, as shown here:
Write your papers as though a sleep-deprived mother of two children under two is trying to read them---because I am.
And then they will be extra clear for everyone else 😁
Reading this paper by @KriegeskorteLab and @weixx2 for lab meeting & I wanna recommend it to anyone curious about the relationships between common words like tuning, representational geometry, manifold, information, decoding, etc. Very clearly written!
Does anyone know of a good argument against the claim "If we can't understand a neural net, we don't have much hope for understanding the brain"? i.e. the idea that if tools of systems neurosci are applied to ANNs & don't work well, then they're probably not working well on the brain
I shared my hardback cover just after having my first kid, so only fitting for the paperback to be revealed while I start maternity leave with the second!
"Models of the Mind", now in blue, coming fall 2022. Pre-order now at your fav bookseller!
This was a way of visualizing the activity of deep nets in the 80s and I think I now know how it feels to be an archaeologist trying to interpret cave paintings.
Presenting a poster at #CCN2019 this afternoon on how to incorporate biological details into deep nets. Certainly on the prettier side of posters I normally make...
Vision transformers appear to rely less on high frequency info and can have higher robustness to adversarial attacks than CNNs for the same clean performance.
Has anyone done explicit shape vs. texture-based processing work on transformers?
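(A quick toy illustration of what "high frequency info" means in these robustness tests --- just a sketch: the image here is random noise, the cutoff radius is arbitrary, and no actual transformer or CNN is involved.)

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))  # stand-in for an input image

# Low-pass filter: zero out Fourier coefficients beyond a cutoff radius,
# which removes the fine-grained (high spatial frequency) content.
f = np.fft.fftshift(np.fft.fft2(image))
yy, xx = np.mgrid[-32:32, -32:32]
mask = (xx**2 + yy**2) <= 8**2  # keep only low spatial frequencies
low_pass = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
```

Comparing a model's accuracy on `image` vs. `low_pass` versions of its inputs is one common way these studies probe how much a network leans on high-frequency content.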
The amount of excitement I feel as I embark on reading a well-written and relevant review article is almost embarrassing. It's like I'm all revved up to mainline some pure INFORMATION.
Does anyone else feel like when they scroll through titles in a table of contents or conference schedule that they're not really *reading* them? It's more like a semi-conscious bag-of-words model where if enough words are associated with "interesting" in my mind, I pause.
Well this is super helpful:
"Dynamical systems, attractors, and neural circuits" - a review of how dynamical systems theory is applied in neuroscience by Paul Miller
For those interested in history of neuroscience, Charles Gross (who died last year) was a neuroscientist who wrote a lot of history. His work has been very helpful in my book research & I just came across his book of essays, which may be of interest:
NEW EPISODE! We talk about the (ideally) synergistic relationship between deep learning & neuroscience! How has the infiltration of deep nets reframed old questions in neuroscience? What tools can be used to understand both? What has been learned already?
If you consider yourself a scientist of some stature, please write an autobiography. And write it towards the end of your career, when your inhibitions are down. And fill it with gossip & funny tales & revealing quotes.
Sincerely, someone trying to spice up a book about science
Guys, I know we like to joke around on this site and all, but I just have to say, in all seriousness...
....moving a picture in Microsoft Word is really hard.
There's a sense that peer reviewers have an antagonistic relationship with authors, relentlessly trying to prove a paper is no good. I don't feel this way. As a reviewer, I love when people write good papers; makes my job easier. Please, write good papers---and I'll say you did!
If you completed the @neuromatch academy computational neuroscience course (as a student or TA), send me a screenshot of the title slide of your project presentation and I'll send you a 20% discount code for my book "Models of the Mind"
I realized that it was very timely to start reading this wonderful book by @neurograce during the @neuromatch CN course. It provides a historical overview of many, if not all, of the concepts covered in the CN course and thus it allowed me…
Maybe I'm missing something, but it is hard for me to not see the decision to call certain forms of neural correlations "connectivity" as a mistake that impacts both communication and thinking.