@micahgoldblum
Micah Goldblum
10 months
We show that neural networks have a remarkable preference for low complexity which overlaps strongly with real-world data across modalities. PAC-Bayes proves that such models generalize, explaining why NNs are almost universally effective.
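For context, a minimal sketch of the kind of PAC-Bayes bound being invoked (a standard McAllester-style form; the exact bound in the paper may differ):

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% for any posterior Q over hypotheses and fixed prior P:
\mathbb{E}_{h \sim Q}\big[\mathrm{err}(h)\big]
\;\le\;
\mathbb{E}_{h \sim Q}\big[\widehat{\mathrm{err}}(h)\big]
+ \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```

The intuition behind the tweet: if the prior P places most of its mass on low-complexity functions, and the learned posterior Q also concentrates there (as real-world data permits), then the KL term is small and the bound is tight, i.e. the model provably generalizes.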
@DrJimFan
Jim Fan
10 months
There are few people who can deliver both great AI research and charismatic talks. OpenAI Chief Scientist @ilyasut is one of them. I watched Ilya's lecture at Simons Institute, where he delved into why unsupervised learning works through the lens of compression. Sharing my notes: -…

Replies

@phc27x
Peter Conlon
10 months
@micahgoldblum Sounds right, isn't this what @guillefix and Ard Louis proposed here?
@micahgoldblum
Micah Goldblum
10 months
@phc27x @guillefix Yeah, this field of research has a long history, and we actually cited one of Ard's related papers. Interestingly, we found that neural networks (e.g. CNNs) can compress data from domains they were never designed for (e.g. tabular data).