Berivan Isik
@BerivanISIK
2 years
@miniapeur There is also work by @utkuevci that empirically compares different sparsity distributions.
Replies
Mathieu Alain
@miniapeur
2 years
How should sparsity be promoted in a neural network? First layers, last layers, uniformly? Any theoretical results about this?
Berivan Isik
@BerivanISIK
2 years
@miniapeur There is a (not very tight) upper bound on the output distortion from pruning a single connection, which helps with adjusting layer-wise sparsity in a greedy manner:
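The greedy scheme described above (score each connection by a bound on the output distortion its removal causes, then prune the lowest-scoring connections globally so that per-layer sparsity falls out automatically) can be sketched as follows. This is a minimal illustration, not the bound from the referenced paper: the distortion proxy used here, |w_ij| times the norm of the j-th input activation, is an assumption standing in for the actual bound, and `greedy_layer_sparsity` is a hypothetical helper name.

```python
import numpy as np

def greedy_layer_sparsity(weights, act_norms, target_sparsity):
    """Greedily prune connections with the smallest distortion proxy.

    weights:    list of (out, in) weight matrices, one per layer
    act_norms:  list of (in,) arrays, the norm of each input activation
                (stand-in for whatever the real distortion bound uses)
    target_sparsity: fraction of all connections to prune globally
    Returns per-layer binary masks and the resulting per-layer sparsities.
    """
    # Score every connection: proxy distortion = |w_ij| * ||a_j||.
    scored = []
    for layer, (W, a) in enumerate(zip(weights, act_norms)):
        proxy = np.abs(W) * a[None, :]        # broadcast over output rows
        for idx, val in np.ndenumerate(proxy):
            scored.append((val, layer, idx))

    # Prune the globally smallest scores; layer-wise sparsity is emergent.
    scored.sort(key=lambda t: t[0])
    n_total = sum(W.size for W in weights)
    n_prune = int(target_sparsity * n_total)

    masks = [np.ones_like(W) for W in weights]
    for _, layer, idx in scored[:n_prune]:
        masks[layer][idx] = 0.0

    per_layer_sparsity = [1.0 - m.mean() for m in masks]
    return masks, per_layer_sparsity
```

Because the budget is allocated globally, layers whose connections contribute less to the output (under the proxy) end up sparser than others, which is exactly the non-uniform allocation the question asks about.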
Berivan Isik
@BerivanISIK
2 years
@miniapeur We extend this to a more general case beyond single-connection pruning in Theorem 1 here:
arXiv abstract
@arxivabs
2 years
@BerivanISIK @miniapeur @utkuevci Check out the abstract!