Surya Ganguli

@SuryaGanguli

14,976 Followers · 457 Following · 195 Media · 2,100 Statuses

Associate Prof of Applied Physics @Stanford , and departments of Computer Science, Electrical Engineering and Neurobiology. Venture Partner @a16z

Stanford, CA
Joined December 2013
@SuryaGanguli
Surya Ganguli
5 years
My new post for the @Stanford Human Centered AI Initiative: a personal vision of how #neuroscience #psychology #ai #physics #mathematics and other fields can work together to both understand biological intelligence and create artificial intelligence!
Tweet media one
19
473
1K
@SuryaGanguli
Surya Ganguli
2 years
Science twitter please don’t leave. We have great community here. I know no other place to keep up and interact with ideas from neuroscience, psychology, physics, mathematics, machine learning, philosophy, economics & linguistics. No reason to stop; just block others if needed
31
116
1K
@SuryaGanguli
Surya Ganguli
9 months
1/Our paper @NeuroCellPress "Interpreting the retinal code for natural scenes" develops explainable AI ( #XAI ) to derive a SOTA deep network model of the retina and *understand* how this net captures natural scenes plus 8 seminal experiments over >2 decades
Tweet media one
13
258
929
@SuryaGanguli
Surya Ganguli
5 years
Officially got tenure in Applied Physics @Stanford ! This feat is really due to a brilliant, creative, and fun group of students and postdocs I have had the great fortune of working with over the years. And thanks to amazing colleagues and mentors at Stanford and beyond!
Tweet media one
Tweet media two
Tweet media three
63
11
917
@SuryaGanguli
Surya Ganguli
2 years
1/ Is scale all you need for AGI? (Unlikely.) But our new paper "Beyond neural scaling laws: beating power law scaling via data pruning" shows how to achieve much superior exponential decay of error with dataset size, rather than slow power-law neural scaling
Tweet media one
9
159
880
@SuryaGanguli
Surya Ganguli
5 years
#KatieBouman speaking @Stanford on black holes! Amazing: in 100 years humanity derived black holes from pure mathematics, then turned the whole earth into a telescope to see one 53 million light years away, and figured out it weighs 6.5 billion suns! brings a tear to my eye...
Tweet media one
Tweet media two
6
108
693
@SuryaGanguli
Surya Ganguli
2 years
This might be the closest thing to an adversarial example for the human visual system that I have ever seen.
@Rainmaker1973
Massimo
2 years
This deceiving grid tricks you into thinking there's a curved line somewhere, but you can't find it. The purposefully placed gray lines of squares in a curved formation will induce your peripheral vision to interpolate curved lines [read more: ]
Tweet media one
131
4K
16K
9
71
640
@SuryaGanguli
Surya Ganguli
2 years
Anica, Keyan and I are so happy to welcome our new daughter Ishaani into this world!
Tweet media one
Tweet media two
29
1
562
@SuryaGanguli
Surya Ganguli
1 year
Our new review chapter in honor of Nobelist @giorgioparisi on the history of physics approaches to neural networks: replica theory, error landscape geometry, generalization theory, learning dynamics, scaling laws, role of data, diffusion models & more fun!
5
118
484
@SuryaGanguli
Surya Ganguli
6 years
1/ New #deeplearning paper at the intersection of #AI #mathematics #psychology and #neuroscience : A mathematical theory of semantic development in deep neural networks: Thanks to awesome collaborators Andrew Saxe and Jay McClelland!
Tweet media one
5
160
462
@SuryaGanguli
Surya Ganguli
2 months
:) discuss!
Tweet media one
50
77
461
@SuryaGanguli
Surya Ganguli
4 years
1/ Our new paper in @AnnualReviews of Condensed Matter Physics on “Statistical Mechanics of #DeepLearning ” with awesome collaborators @Stanford and @GoogleAI : @yasamanbb @kadmonj Jeff Pennington @sschoenholz @jaschasd web: free:
Tweet media one
2
125
431
@SuryaGanguli
Surya Ganguli
4 years
@SussilloDavid Oh you mean a no hidden layer deep neural network with only 1 sigmoidal output neuron? Why didn’t you just say so - instead of using obscure statistical jargon like “logistic regression?” :)
7
25
413
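The joke is technically accurate: logistic regression and a zero-hidden-layer network with a single sigmoidal output neuron trained on cross-entropy are the same model. A minimal numpy sketch on invented toy data (not from any paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linearly separable data: label = 1 iff x0 + x1 > 0
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# A "deep network" with zero hidden layers and one sigmoidal output
# neuron, trained by gradient descent on cross-entropy loss:
# this is exactly logistic regression.
w, b = np.zeros(2), 0.0
lr = 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)       # forward pass
    grad_logits = p - y          # dL/dz for the cross-entropy loss
    w -= lr * (X.T @ grad_logits) / len(y)
    b -= lr * grad_logits.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
```

The "obscure statistical jargon" and the neural-network description pick out one and the same estimator.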
@SuryaGanguli
Surya Ganguli
3 years
Immigrants... they get the job done!
3
41
400
@SuryaGanguli
Surya Ganguli
2 years
Our new paper in @PNASNews : "Neural representation geometry underlies few shot concept learning" led by Ben Sorscher and @HSompolinsky : a quantitative theory of neural geometry & few shot learning, tested in both deep networks & monkey IT
Tweet media one
4
72
389
@SuryaGanguli
Surya Ganguli
4 months
NIH budget: $51B. NSF budget: $9.8B. Amazon’s R&D budget alone: $73B. Top 10 Nasdaq companies R&D: $222B. We can better prioritize American public investment in research, which despite its meager inputs has delivered outsized public returns in science, technology and society
@emollick
Ethan Mollick
4 months
It is startling to see how much of the world's R&D spending comes from (mostly American) tech giants. The R&D spending of Amazon is greater than the R&D spending of all companies and government in France. Alphabet beats Italy. Pepsi's R&D spending beats all sources in Nigeria.
Tweet media one
Tweet media two
127
667
3K
11
59
300
@SuryaGanguli
Surya Ganguli
3 years
1/ Our new work: "How many degrees of freedom do we need to train deep networks: a loss landscape perspective." We present a geometric theory that connects to lottery tickets and a new method: lottery subspaces. w/ @_BrettLarsen @caenopy @stanislavfort
Tweet media one
5
71
302
@SuryaGanguli
Surya Ganguli
2 years
1/ Our new @Nature paper "Emergent reliability in sensory cortical coding and inter-area communication" led by talented @EbrahimiSadegh in another fun collab w/ Mark Schnitzer & team Paper: Free: News: 🧵->
Tweet media one
3
73
300
@SuryaGanguli
Surya Ganguli
3 years
1/ New preprint: "Understanding self-supervised learning without contrastive pairs" w/ awesome collaborators @tydsh and @endernewton @facebookai We develop a theory for how self-supervised learning without negative pairs (i.e. BYOL/SimSiam) can work...
Tweet media one
2
66
296
@SuryaGanguli
Surya Ganguli
2 years
Our new paper @NeuroCellPress "A unified theory for the computational and mechanistic origins of grid cells" led by Ben Sorscher & @meldefon w/ @SamOcko & @lisa_giocomo explains when & why grid cells appear (or don't) in trained neural path-integrators and
Tweet media one
5
49
288
@SuryaGanguli
Surya Ganguli
3 months
Agreed! if you are interested in the retina, here is a good neural network model + XAI techniques to explain how it already builds a predictive world model that can detect violations of: (1) Newton's first law of motion, (2) periodicity (3) global motion
@docmilanfar
Peyman Milanfar
3 months
The retina is arguably the most impressive part of the brain. It's the only part of the brain that faces the world directly: it's a sensor and processor in one. It consumes 50% more energy per gram than the rest of the brain. 1000:1 compression from retina to optic nerve…
Tweet media one
41
360
2K
4
39
276
@SuryaGanguli
Surya Ganguli
5 years
Our paper on "A mathematical theory of semantic development in deep neural networks" with Andrew Saxe and Jay McClelland is now out in @PNASNews : (arxiv version here: ) And old tweetstorm here:
@SuryaGanguli
Surya Ganguli
6 years
1/ New #deeplearning paper at the intersection of #AI #mathematics #psychology and #neuroscience : A mathematical theory of semantic development in deep neural networks: Thanks to awesome collaborators Andrew Saxe and Jay McClelland!
Tweet media one
5
160
462
2
84
274
@SuryaGanguli
Surya Ganguli
4 years
A new algorithm, SynFlow, for finding winning lottery tickets in deep neural networks without even looking at the data! Yields a highly sparse trainable init. w/ great collaborators @Hidenori8Tanaka Daniel Kunin, @dyamins thread->
@Hidenori8Tanaka
Hidenori Tanaka
4 years
Q. Can we find winning lottery tickets, or sparse trainable deep networks at initialization without ever looking at data? A. Yes, by conserving "Synaptic Flow" via our new SynFlow algorithm. co-led with Daniel Kunin & @dyamins , @SuryaGanguli paper: 1/
6
157
561
7
56
274
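The core of SynFlow is a data-free saliency score: push an all-ones input through the network with absolute-value weights, and score every parameter by |θ ⊙ ∂R/∂θ|, where R is the summed output. A rough two-layer numpy illustration with toy sizes and a one-shot global threshold of my own (the actual algorithm prunes iteratively):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(16, 8))    # layer 1: 8 -> 16
W2 = rng.normal(size=(4, 16))    # layer 2: 16 -> 4

# Data-free objective: R = 1^T |W2| |W1| 1  (all-ones input, abs weights)
A1, A2 = np.abs(W1), np.abs(W2)
ones_in = np.ones(8)

# Gradients of R w.r.t. |W1| and |W2|, worked out by hand for 2 layers:
# dR/d|W1|[j,k] = sum_i |W2|[i,j],  dR/d|W2|[i,j] = (|W1| @ 1)[j]
grad_A1 = np.outer(A2.sum(axis=0), ones_in)     # shape (16, 8)
grad_A2 = np.outer(np.ones(4), A1 @ ones_in)    # shape (4, 16)

# Synaptic saliency scores: |theta * dR/dtheta|
score1 = A1 * grad_A1
score2 = A2 * grad_A2

# Keep the top 20% of weights globally, without ever touching data
scores = np.concatenate([score1.ravel(), score2.ravel()])
thresh = np.quantile(scores, 0.8)
mask1, mask2 = score1 >= thresh, score2 >= thresh
sparsity = 1 - (mask1.sum() + mask2.sum()) / scores.size
```

Conserving this "synaptic flow" while pruning is what avoids layer collapse; the paper's iterative schedule matters in practice.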
@SuryaGanguli
Surya Ganguli
1 year
A gentle giant has left this earth. But his spirit will live on in all the lives he has touched through his wonderful mentorship, friendship, and imparted knowledge. You always will be a beacon to so many of us @shenoystanford . I am sure you will give the angels their BMI’s!
7
16
272
@SuryaGanguli
Surya Ganguli
3 years
1/ Our new preprint led by Ben Sorscher @meldefon & @SamOcko w/ @lisa_giocomo : “A unified theory for the computational and mechanistic origins of grid cells.” Shown are emergent hexagonal patterns extracted from a neural net trained to path integrate
3
67
262
@SuryaGanguli
Surya Ganguli
3 years
I dunno - looks pretty suspicious to me.
@packetlevel
PacketLevel infosec.exchange/@packetlevel
3 years
Tweet media one
126
4K
20K
2
11
263
@SuryaGanguli
Surya Ganguli
5 years
1/ New in @sciencemagazine w/ @KarlDeisseroth lab: new opsin + multi-photon holography to image ~4000 cells in 3D volumes over 5 cortical layers while also stimulating ~50 neurons to directly drive visual percepts; data analysis and theory reveal…
Tweet media one
Tweet media two
5
125
258
@SuryaGanguli
Surya Ganguli
2 years
I want to learn about useful applications of philosophy to science. Other than the scientific method itself, what are concrete examples of philosophy done by “card carrying” academic philosophers that lead, thru a direct/transparent chain of causation, to new successful science?
83
36
243
@SuryaGanguli
Surya Ganguli
3 years
Yes, yes, a thousand times yes....
@TristanNaumann
Tristan Naumann
3 years
@zacharylipton @overleaf Imagine @overleaf with a \cite that can search Google Scholar/Semantic Scholar/Microsoft Academic and add the entry to your .bib as you cite it
7
10
236
5
7
245
@SuryaGanguli
Surya Ganguli
4 years
1/ New paper in @Nature : “Fundamental bounds on the fidelity of sensory cortical coding” with amazing colleagues: Oleg Rumyantsev, Jérôme Lecoq, Oscar Hernandez, Yanping Zhang, Joan Savall, Radosław Chrapkiewicz, Jane Li, Hongkui Zheng, Mark Schnitzer:
Tweet media one
4
97
241
@SuryaGanguli
Surya Ganguli
1 year
Our "Beyond Neural Scaling laws" paper got a #NeurIPS22 outstanding paper award! Congrats Ben Sorscher, Robert Geirhos, @sshkhr16 & @arimorcos awards: paper: 🧵
@SuryaGanguli
Surya Ganguli
2 years
1/ Is scale all you need for AGI? (Unlikely.) But our new paper "Beyond neural scaling laws: beating power law scaling via data pruning" shows how to achieve much superior exponential decay of error with dataset size, rather than slow power-law neural scaling
Tweet media one
9
159
880
8
31
239
@SuryaGanguli
Surya Ganguli
8 months
Our new paper led by @johnhwen1 & Ben Sorscher w/ @lisa_giocomo revealing how grid cells fuse landmarks & self-motion to create maps of new environments in a *single* exposure; our model predicts the detailed map of a new env *before* the mouse enters it!
Tweet media one
2
42
231
@SuryaGanguli
Surya Ganguli
3 years
1/ Super excited to share our work with @drfeifei and @silviocinguetta , led by the mastermind @agrimgupta92 on Deep Evolutionary Reinforcement Learning (DERL): which leverages large scale simulations of evolution and learning to...
@agrimgupta92
Agrim Gupta
3 years
Excited to share our work on understanding the relationship between environmental complexity, evolved morphology, and the learnability of intelligent control. Paper: Video: w/ @silviocinguetta @SuryaGanguli @drfeifei
8
44
208
7
48
228
@SuryaGanguli
Surya Ganguli
2 months
1/ Our new paper "Geometric Dynamics of Signal Propagation Predict Trainability of Transformers" led by Aditya Cowsik, Tamra Nebabu w/ Xiaoliang Qi yields a theory-experiment match for how token representation geometry evolves through transformers, revealing two phase transitions and four phases and..
Tweet media one
3
48
225
@SuryaGanguli
Surya Ganguli
1 year
I concur. I have not yet used any AI to help me write and I don’t plan to in the foreseeable future. The act of starting from a blank page and writing a coherent argument is crucial for our thought processes. If we cede our writing to AI we cede our capacity to think also.
@russpoldrack
Russ Poldrack
1 year
If you want your writing to actually have your voice, don’t let AI write for you.
15
11
102
10
27
207
@SuryaGanguli
Surya Ganguli
2 years
Our recent work led by Omer Hazon and Pablo Jercog reveals that, due to noise correlations, mouse hippocampus only encodes space with a limited resolution of 10cm (about the size of the mouse) & only ~1000 neurons are needed to decode space to this limit!
6
34
205
@SuryaGanguli
Surya Ganguli
3 years
So wonderful to see Giorgio receive the Nobel! His discovery of hidden order in apparent disorder has had a profound impact on multiple fields from physics all the way to machine learning and neuroscience, resulting in some of the most beautiful papers I have ever read!!
@NobelPrize
The Nobel Prize
3 years
Giorgio Parisi – awarded this year’s #NobelPrize in Physics – discovered hidden patterns in disordered complex materials. His discoveries are among the most important contributions to the theory of complex systems.
Tweet media one
31
1K
4K
2
31
202
@SuryaGanguli
Surya Ganguli
4 years
Our paper led by @ItsNeuronal on "Discovering Precise Temporal Patterns in Large-Scale Neural Recordings" now published @NeuroCellPress : learn how to turn the spike trains on the left into those on the right just by reordering the rows.
Tweet media one
2
42
195
@SuryaGanguli
Surya Ganguli
7 months
1/ Our new paper w/ tour de force analysis led by @atsushi_y1230 w/ @hmabuchi uses replica theory, supersymmetry, the Kac-Rice formula, Dyson's Brownian motion & high-dim geometry to show how an Ising machine of coupled photons finds spin glass ground states
Tweet media one
2
31
190
@SuryaGanguli
Surya Ganguli
2 years
Experimentalists looking at theorists. Engineers looking at theorists. Theorists looking at theorists.
Tweet media one
3
8
190
@SuryaGanguli
Surya Ganguli
4 years
1/ I have followed with great heartbreak the events surrounding #NeuromatchAcademy , immoral US sanctions, the distressing exclusion of many scientific colleagues far and wide, and some reactions to all of this. But I want to call for unity amongst our scientific community
5
36
191
@SuryaGanguli
Surya Ganguli
4 years
Our new paper on a theory of self-supervised learning with dual pairs of deep networks, provides insights into how methods like SimCLR and BYOL extract hierarchical features from data: Also my first collab w/ @facebookai & @tydsh @yulantao1996 Xinlei Chen
Tweet media one
0
42
186
@SuryaGanguli
Surya Ganguli
4 years
While I usually only use twitter for work, just thought I would share photos of the newest member of our family! His full legal name is Albert Einstein John Ganguli. But we call him Albie for short :)
Tweet media one
Tweet media two
Tweet media three
Tweet media four
3
0
179
@SuryaGanguli
Surya Ganguli
5 years
1/ New #neuroscience paper @NeuroCellPress : Accurately estimating neural population dynamics without spike sorting Congrats @EricMTrautmann , @sergeydoestweet and @Deephype and many members of @shenoystanford lab! Also first @neuropixels data in primates!
Tweet media one
2
65
176
@SuryaGanguli
Surya Ganguli
10 months
I am surya_ganguli on the th***ds app - you know the one EM is suing :) I am hoping to resurrect there the intellectual paradise of ML/AI, neuro, physics, math, stats, economics, philosophy & more that my twitter feed once was. If working on these topics reply w/your profile!
52
13
175
@SuryaGanguli
Surya Ganguli
4 months
Many researchers I know in big tech are working on AI alignment to human values. But at a gut level, shouldn’t such alignment entail compensating humans for providing training data thru their original creative, copyrighted output? (This is a values question, not a legal one.)
@CeciliaZin
Cecilia Ziniti
4 months
🧵 The historic NYT v. @OpenAI lawsuit filed this morning, as broken down by me, an IP and AI lawyer, general counsel, and longtime tech person and enthusiast. Tl;dr - It's the best case yet alleging that generative AI is copyright infringement. Thread. 👇
Tweet media one
341
5K
18K
36
14
171
@SuryaGanguli
Surya Ganguli
10 months
1/ Our new paper led by @AllanRaventos , @mansiege , @FCHEN_AI asks when in-context learning of regression can solve fundamentally *new* problems *not* seen during pre-training, and reveals it as an emergent capability arising from a phase transition...
4
37
172
@SuryaGanguli
Surya Ganguli
3 years
1/ Our new paper in @nature led by @mannktwo & @StephaneDeny in a collab w/ Tom Clandinin's lab @StanfordBrain on causal coupling of neural activity, energy metabolism and behavior across the entire Drosophila brain: Free link:
Tweet media one
4
43
165
@SuryaGanguli
Surya Ganguli
2 years
Bill Bialek recently wrote a very interesting article in @PNASNews on the dimensionality of behavior: Here is my (positive!) commentary on it:
2
21
164
@SuryaGanguli
Surya Ganguli
4 years
1/ New paper at #NeurIPS2019 : A unified theory for the origin of grid cells through the lens of pattern formation: lead by Ben Sorscher, Gabriel Mel and Sam Ocko. Spotlight: Thu Dec 12th 10:35 -- 10:40 AM @ West Exhibition Hall C + B3
Tweet media one
6
45
161
@SuryaGanguli
Surya Ganguli
4 years
Our review article on Coherent Ising machines, a quantum computer built out of photons whose classical limit is a neural network, is now published in Applied Physics Letters: has connections to chaos, spin glasses, message passing, and neural computation
Tweet media one
6
33
160
@SuryaGanguli
Surya Ganguli
3 years
Our theory/exp collab w/Liqun Luo's lab @StanfordBrain now in @NeuroCellPress . Synapse formation interacts with dendritic growth; inhibit the first and beautiful square dendritic trees of Purkinje cells become triangles; @linniejiang 's model explains why!
Tweet media one
Tweet media two
2
31
158
@SuryaGanguli
Surya Ganguli
3 years
1/5 4 papers at #NeurIPS2020 this year on diverse topics including deep learning and kernels, pruning neural networks at initialization, noise chaos and delays in predictive coding, and identifying neural network learning rules; links and tweetprints and awesome collaborators ->
2
29
156
@SuryaGanguli
Surya Ganguli
7 months
I am excited to join @a16z 's @a16zBioHealth team as a Venture Partner to work with their investment team and portfolio companies, advising on AI and its enormous potential to advance science and human health. Looking forward to interacting with the vibrant startup ecosystem here!
@a16zBioHealth
a16z Bio + Health
7 months
We're excited to welcome @SuryaGanguli as our newest Bio + Health Venture Partner! A pioneering leader in AI & neuroscience, Surya brings invaluable expertise in leveraging tech to advance healthcare. Read more from @vijaypande here:
0
3
14
10
8
156
@SuryaGanguli
Surya Ganguli
2 years
@Neuro_Skeptic As I have said once before - I think there is no definition of the word computer for which the truth value of the proposition “the brain is a computer” is at all interesting. Depending on the definition - this statement is either trivially true or trivially wrong
13
11
153
@SuryaGanguli
Surya Ganguli
2 years
I picked up my son from school today and held him close and gave him an extra big extra long hug. My heart goes out to those Texas parents who will never be able to do that again. But no thoughts and prayers. Gun control!!
1
5
155
@SuryaGanguli
Surya Ganguli
6 months
Extremely grateful to @SchmidtFutures for a Schmidt Science Polymath award that generously supports our research spanning biological and artificial intelligence. This recognizes an amazing set of students & collaborators I am so lucky to work with! & thank you @StanfordHAI !
@StanfordHAI
Stanford HAI
6 months
Kudos to HAI's @SuryaGanguli for joining the latest cohort of the @SchmidtFutures Science Polymath Program! The program supports faculty with remarkable track records and the interest to explore game-changing research. Learn more about Surya’s work:
Tweet media one
0
6
20
21
4
151
@SuryaGanguli
Surya Ganguli
3 years
@yoavgo An Indian mother would say why only 10 :)
3
1
153
@SuryaGanguli
Surya Ganguli
2 years
Need a “science of deep learning” topic area @icmlconf @NeurIPSConf . Engineering w/o science is not sustainable. E.g. the interaction between engineering, theoretical physics, and experimental physics is so successful. By analogy, CS confs act like experimental physics does not exist
@PreetumNakkiran
Preetum Nakkiran
2 years
Honest ICML question: What's the best subject-area/reviewer-pool for scientific work in DL that (1) doesn't prove a theorem, and (2) doesn't improve SOTA?
18
3
104
0
26
148
@SuryaGanguli
Surya Ganguli
2 years
An incredible example of adaptive co-evolution involving at least 3 species! This is a video you have to watch till the end and then watch again to appreciate
@BSL_MDX
Behavioural Science Lab Middlesex
2 years
Sometimes one wonders about adaptationism.
66
1K
3K
7
18
146
@SuryaGanguli
Surya Ganguli
9 months
Anisotropic diffusion is fun! It can also cause SGD in deep learning to get attracted to *higher* training error saddle points that nevertheless have *lower* test error than local minima with lower training error. Details here:
@gabrielpeyre
Gabriel Peyré
9 months
Anisotropic diffusion uses space-varying conductivity to slow down diffusion near edges, resulting in non-isotropic diffusion behavior.
7
116
736
3
20
143
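Peyré's one-liner is easy to see in one dimension. A minimal Perona-Malik sketch with toy parameters of my own: the conductivity collapses where the gradient is large, so noise on flat regions diffuses away while the step edge survives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy 1-D step edge
u = np.concatenate([np.zeros(100), np.ones(100)])
u += 0.02 * rng.normal(size=200)
noise_before = u[:90].std()          # roughness of the flat region

K, dt = 0.1, 0.2
for _ in range(200):
    g = np.diff(u)                   # forward differences
    c = 1.0 / (1.0 + (g / K) ** 2)   # Perona-Malik conductivity
    flux = c * g                     # large gradients carry tiny flux
    u[1:-1] += dt * np.diff(flux)    # explicit diffusion step

noise_after = u[:90].std()
edge_height = np.max(np.abs(np.diff(u)))
```

The flat region smooths out while the unit step keeps its height, which is the "slow down diffusion near edges" behavior in the quoted tweet.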
@SuryaGanguli
Surya Ganguli
3 years
The importance of studying naturalistic behaviors. Scroll to the end of the video for extremely impressive Puffer Fish art!
@ErikSolheim
Erik Solheim
3 years
The Japanese 🇯🇵 Puffer Fish is probably nature's greatest artist 🐟 To grab a female’s attention he creates something that defies belief 😲
2K
23K
86K
6
21
139
@SuryaGanguli
Surya Ganguli
2 months
Had fun visiting the MIT math department yesterday. A highlight was seeing fluid mechanics demos of hydrodynamic analogs of quantum and gravitational dynamics from John Bush’s lab (check out the video which is an analog of wave-particle duality)
3
15
133
@SuryaGanguli
Surya Ganguli
6 years
New paper on resurrecting sigmoids in #deeplearning : We use free probability theory to find initializations that make very deep sigmoidal networks learn orders of magnitude faster than ReLUs. Fun collab w/ Jeffrey Pennington & Sam Schoenholz @GoogleBrain
1
48
137
@SuryaGanguli
Surya Ganguli
4 years
Our new preprint on associative memory storage and recall using cavity quantum electrodynamics. Here atoms and photons serve as the neurons and synapses of a neural network, and their combined dynamics leads to enhanced memory
Tweet media one
3
34
135
@SuryaGanguli
Surya Ganguli
3 years
The only subjects we truly, deeply understand “in our bones” so to speak are those for which we have read the books/papers several times and taught a course on (and ideally done research)
2
6
133
@SuryaGanguli
Surya Ganguli
2 years
My thesis title: “The holographic emergence of spacetime in string theory” yields this picture in . Try typing your thesis title in and see what you get!
Tweet media one
6
8
129
@SuryaGanguli
Surya Ganguli
4 years
Our new #neurips2020 paper combines geometry and dynamics to reveal a rapid universal chaotic to stable transition in deep learning dynamics where in a few epochs the final loss basin is determined and the neural tangent kernel rapidly evolves and improves
@stanislavfort
Stanislav Fort ✨🧠🤖📈✨
4 years
Excited to share our new #neurips2020 paper /Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel/ () with @KDziugaite , Mansheej, @SKharaghani , @roydanroy , @SuryaGanguli 1/6
Tweet media one
2
77
403
1
17
126
@SuryaGanguli
Surya Ganguli
2 years
Looking forward to speaking in person @Harvard 's Center for Mathematical Sciences and Applications (CMSA) on deep learning theory tomorrow, Sept. 28th at 2:00pm EST. Anyone welcome to join on zoom - link here:
Tweet media one
2
17
128
@SuryaGanguli
Surya Ganguli
2 months
Looking forward to giving the Harvard Physics Colloquium this Monday and the MIT Physical Math Seminar this Tuesday, and to catching up with colleagues in Cambridge!
Tweet media one
Tweet media two
2
15
125
@SuryaGanguli
Surya Ganguli
5 years
1/ Our new #neuroscience paper, "Emergent elasticity in the neural code for space" just appeared in @PNASNews : Awesome work lead by @SamOcko , with @kiahhardcastle and @lisa_giocomo . Take home messages...
Tweet media one
1
51
124
@SuryaGanguli
Surya Ganguli
1 year
System identification may in some cases now be a better paradigm for scientific discovery than hypothesis testing. Especially in biology, rarely is the whole truth one of the few predetermined hypotheses chosen by the scientist.
@dgmacarthur
Daniel MacArthur
1 year
Yep. We’re witnessing the slow death of hypothesis-driven science, and (contrary to reports) this is mostly positive. It doesn’t mean ignoring biology; it means letting the data tell you what’s interesting, then designing experiments to validate and explore new patterns.
78
119
811
5
25
124
@SuryaGanguli
Surya Ganguli
6 years
New paper on single trial dimensionality reduction and demixing in #neuroscience through tensor decompositions now published in @NeuroCellPress Congrats to @ItsNeuronal for leading this and thanks to @shenoystanford and Schnitzer lab for collaborating!
0
45
123
@SuryaGanguli
Surya Ganguli
1 year
1/ @sussillodavid & I are looking for an excellent computational postdoc to work with us & collaborate w/ @KarlDeisseroth @lisa_giocomo & Eyal Seidemann on an NIH project to explore how internal states like spontaneous activity, attention, motivation, etc.. interact with...
2
56
121
@SuryaGanguli
Surya Ganguli
3 years
1/ New preprint led by Ben Sorscher and @HSompolinsky on a mathematical theory of a biologically plausible, computationally powerful neural mechanism for few shot learning that links neural manifold geometry to behavior with tests in monkeys and deep nets
Tweet media one
2
28
120
@SuryaGanguli
Surya Ganguli
5 years
There is only one thing that can sustain you in this job: love. Childlike love, wonder and curiosity about the subject you chose to study in your idealistic youth. If you can retain this, it helps you get through all the committee meetings in order to pursue your labor of love.
@AcademicsSay
Shit Academics Say
5 years
Why Associate Professors are some of the unhappiest people in academia | @chronicle
18
76
195
4
15
118
@SuryaGanguli
Surya Ganguli
2 years
Our new paper shows how two biologically plausible properties (non-negativity & energy efficiency) provably lead to highly desired disentangled neural representations in both brains and machines, including VAEs in machine learning and spatial reps in mice
@jcrwhittington
James Whittington
2 years
If everything's a big manifold, why do neurons often code for human-interpretable factors? In we show the most efficient biological representation puts different factors in different neurons. This lets us build machines that disentangle too! 1/9
5
143
643
2
14
116
@SuryaGanguli
Surya Ganguli
2 years
After our white paper , some questioned if Neuro has/can help AI. My @StanfordHAI blog post has 13 concrete/seminal past examples and suggests several future ones. To go fast go alone (AI). To go further go together (w/Neuro/Psych).
3
20
117
@SuryaGanguli
Surya Ganguli
2 years
An insightful post: the prevalence of emergent properties, or phase transitions in the performance of neural networks, with increasing size and data, suggests machine learning has a lot to learn from the analysis of complex physical and biological systems: “More is different!”
@JacobSteinhardt
Jacob Steinhardt
2 years
On my blog, I've recently been discussing emergent behavior and in particular the idea that "More is Different". As part of this, I've compiled a list of examples across a variety of domains:
3
27
199
2
11
112
@SuryaGanguli
Surya Ganguli
4 years
@behrenstimb I really hope the Gov understands the nuances of stochastic optimal control theory of unstable dynamical systems with high levels of measurement uncertainty and measurement delays, combined with the lack of a good dynamics model of both the system and the impact of control on it
3
15
111
@SuryaGanguli
Surya Ganguli
6 years
New preprint on state of the art #Deeplearning models of the retinal response to natural scenes; The model's interior functionally matches that of the retina and it generalizes to capture decades of #neuroscience experiments on artificial stimuli
Tweet media one
2
58
108
@SuryaGanguli
Surya Ganguli
2 years
@deliprao These calculations were all done by hand by Ben Sorscher, and are common in statistical mechanics. Here is a screen shot of my own closely related calculations to Ben's (only of the first ~5 pages; total calc was 23 hand-written pages). I will teach a class on this next winter...
Tweet media one
5
5
111
@SuryaGanguli
Surya Ganguli
9 months
Yep. But not limited to tSNE/UMAP. Even k-means clustering can hallucinate clusters in high dimensional noise. Here is an example with k=2. See e.g. Fig 5 in our review article "Statistical mechanics of complex neural systems and high dimensional data"
Tweet media one
@slavov_n
Prof. Nikolai Slavov
10 months
This toy example may change how you look at tSNE & UMAP. They find clusters in the most uniform data possible.
Tweet media one
46
177
1K
1
13
111
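The pitfall is easy to reproduce with plain Lloyd's algorithm, no tSNE/UMAP needed. A small numpy sketch on toy data of my own: k-means with k=2 on perfectly uniform high-dimensional noise still returns two "clusters" that pass a naive separation check.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 50))      # uniform noise: no real clusters

# Plain Lloyd's algorithm with k = 2
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(50):
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)  # (400, 2)
    labels = d.argmin(axis=1)
    centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])

# The hallucinated clusters look "real" by a naive criterion:
# each point is closer to its own centroid than to the other one,
# and the two centroids are well separated.
gap = np.linalg.norm(centers[0] - centers[1])
own = np.linalg.norm(X - centers[labels], axis=1)
other = np.linalg.norm(X - centers[1 - labels], axis=1)
```

The separation is an artifact of the objective, not structure in the data; a null model (rerun on shuffled or synthetic uniform data) is the cheap antidote.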
@SuryaGanguli
Surya Ganguli
2 years
6/ Overall this work suggests that our current ML practice of collecting large amounts of random data is highly inefficient, leading to huge redundancy in the data, which we show mathematically is the origin of very slow, unsustainable power law scaling of error with dataset size
Tweet media one
2
25
110
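Schematically, pruning redundant data amounts to ranking examples by a difficulty score and, when data is abundant, keeping only the hardest fraction. A toy sketch with an invented margin score (not the paper's actual metric):

```python
import numpy as np

def prune_easy(X, y, scores, keep_frac):
    """Keep the hardest `keep_frac` of examples.

    `scores` is any per-example difficulty measure, with higher
    meaning harder (e.g. negative distance to a probe classifier's
    decision boundary: small margin = hard, so score = -margin).
    """
    k = int(len(X) * keep_frac)
    idx = np.argsort(scores)[-k:]    # indices of the hardest examples
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] > 0).astype(int)
margin = np.abs(X[:, 0])             # toy probe: |x0| is the margin
Xp, yp = prune_easy(X, y, -margin, keep_frac=0.3)
```

Easy examples near-duplicate what the model already knows; discarding them is what converts slow power-law scaling into something much faster in the paper's analysis.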
@SuryaGanguli
Surya Ganguli
4 years
To all on the front lines, we are thankful and humbled by your service. I hope the full might of the American nation soon provides you the support you so sorely need in your fight against #COVID19 . And I hope too that world nations cooperate to fight for our common humanity
Tweet media one
2
17
106
@SuryaGanguli
Surya Ganguli
2 years
Come April, I have decided that theoretical understanding is not important in machine learning. I think we should all stop worrying and learn to love scale, and ride it all the way to AGI.
Tweet media one
5
3
107
@SuryaGanguli
Surya Ganguli
2 years
Thank you @StanfordBrain for the kind gift of an institute onesie!
Tweet media one
4
0
106
@SuryaGanguli
Surya Ganguli
2 years
@thrasherxy Please don’t lead with prob of being vaxed given death. This is misleading (if all are vaxed this will be trivially 1 even if vaccines work really well). In communicating, please lead with comparing prob of death given vax to prob of death given no vax. What are these now?
6
1
103
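The arithmetic behind this point, with invented illustrative numbers: even a vaccine that cuts death risk tenfold can leave nearly half of all deaths among the vaccinated once coverage is high, which is why leading with P(vaxed | death) misleads.

```python
# Toy numbers, purely illustrative
p_vax = 0.9                  # vaccination coverage
p_death_unvax = 1e-3         # P(death | unvaccinated)
p_death_vax = 1e-4           # P(death | vaccinated): 10x lower risk

# Bayes' rule: fraction of deaths occurring among the vaccinated
p_death = p_vax * p_death_vax + (1 - p_vax) * p_death_unvax
p_vax_given_death = p_vax * p_death_vax / p_death   # ~0.47

risk_ratio = p_death_unvax / p_death_vax            # the useful number
```

Here almost half the deaths are vaccinated purely because 90% of people are, while the per-person death risk comparison (10x) is what actually measures the vaccine.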
@SuryaGanguli
Surya Ganguli
5 years
New from lab! Unsupervised algorithm to reveal spike-time patterns buried in data. i.e. it can take the left rasters and discover a hidden oscillation by just reordering them. code available! paper: thread: Congrats @ItsNeuronal !
Tweet media one
0
19
99
@SuryaGanguli
Surya Ganguli
3 years
Looking forward to speaking at the Harvard Machine Learning Theory group this Friday at 1pm EDT! The talk is open to those on the mailing list, and you can join the list here:
@boazbaraktcs
Boaz Barak
3 years
Excited for @SuryaGanguli 's Friday 1pm EDT talk, which promises to be a whirlwind tour of machine learning, theoretical physics, and neuroscience - from quantum many-body systems to the human retina! As usual, join mailing list at for zoom link.
Tweet media one
5
21
102
1
18
100
@SuryaGanguli
Surya Ganguli
3 months
An absolute failure of public funding mechanisms for research. Wasting the grant writing time of 162 labs just to fund *1* grant. Science would be better off *without* this particular call as it induces negative progress by slowing everyone down & 1 “winner” is effectively random
@Eran_MSU
Eran Andrechek
3 months
DoD Breast Cancer scores are out... good score and checked payline for why we were rejected. LESS than 1% funded. Not a typo! 1 of 163 funded. That is 162 labs that wasted significant time.
Tweet media one
9
14
92
1
15
99
@SuryaGanguli
Surya Ganguli
1 year
A remarkable ability of #ChatGPT to solve a simple geometric reasoning problem, reveal hidden assumptions when asked, and change its answer if hidden assumptions were violated. Would Chomsky ever change his answer or detect violations of his assumptions? :)
Tweet media one
Tweet media two
Tweet media three
5
13
96
@SuryaGanguli
Surya Ganguli
3 years
Happy to get an #icml2021 outstanding paper award honorable mention with awesome collaborators @tydsh and @endernewton for our work @facebookai on “Understanding self supervised learning dynamics without contrastive pairs”
@icmlconf
ICML Conference
3 years
ICML 2021 Outstanding Paper Award Honorable Mentions: 3/4 Yuandong Tian, and Xinlei Chen, and Surya Ganguli 📜Understanding self-supervised learning dynamics without contrastive pairs (Wed 8pm US Eastern)
1
21
176
6
9
95
@SuryaGanguli
Surya Ganguli
5 years
@KordingLab I stayed in a masters program to delay my PhD; I switched from string theory to neuro in post doc to explore; I turned down a faculty position to prolong my post doc - always with the goal of learning. And those decisions, which slowed me down, I would repeat if given the chance!
3
7
93
@SuryaGanguli
Surya Ganguli
6 months
Yep - theoretical physicists knew about double descent around 30 years ago…
@davidobarber
David Barber
6 months
In the 1990s "double descent" was a well known property for even linear nets. As the number of train points = dim of the model (alpha=1) and noisy data the gen error spikes before decreasing (see dashed line). See for a full analysis in the linear case.
Tweet media one
10
79
409
2
7
89
@SuryaGanguli
Surya Ganguli
5 years
The experimentalist's view of the role of a theorist :)
@OdedRechavi
Oded Rechavi
5 years
The co-author who didn’t do anything
230
5K
24K
3
10
91