amelie_schreiber

@amelie_iska

782 Followers · 323 Following · 16 Media · 350 Statuses

I ❤️ proteins! Researching protein language models, equivariant transformers, LoRA, QLoRA, DDPMs, flow matching, etc. intersex=awesome😎✡️🏳️‍🌈🏳️‍⚧️💻🧬❤️🇮🇱

California
Joined May 2023
@amelie_iska
amelie_schreiber
2 months
Top 10 ❤️ tools rn, in no particular order: 1. ProteinDT 2. MoleculeSTM 3. RFDiffusion-AA 4. RoseTTAFold-AA 5. LigandMPNN 6. Distributional Graphormer (DiG) 7. DNA-Diffusion 8. OAReactDiff 9. RFDiffusion (original) 10. EvoDiff ❤️ Evo ❤️ Flow matching ❤️ Boltzmann generators
2
25
182
@amelie_iska
amelie_schreiber
2 months
A protein binding a small molecule, designed with RFDiffusion-AA yesterday. I'm such a huge fangirl for these all-atom models. Baker Lab is awesome!
[image attached]
2
17
171
@amelie_iska
amelie_schreiber
2 months
These two together make a really good pair: from them you get conformational ensembles and binding affinities for protein-protein, protein-small molecule, and protein-nucleic acid complexes, reducing the need for expensive MD sims.
0
24
139
@amelie_iska
amelie_schreiber
1 month
Found out yesterday that some of my @huggingface blogs inspired some undergrads to start studying AI applied to proteins, and that someone applied for and received an internship based on their interest in replicating and extending some of them. 😎 Feeling very inspired and grateful now. ❤️
4
8
132
@amelie_iska
amelie_schreiber
27 days
In case it is helpful:
1
13
94
@amelie_iska
amelie_schreiber
7 months
Just thought I would share this new Hugging Face community blog post I wrote as a follow-up to the ESMBind post. It explains how to build an ensemble of Low-Rank Adaptations (LoRAs) after you have finetuned multiple ESMBind LoRA models:
1
13
56
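For readers who want the gist without the blog: below is a minimal sketch of one way to ensemble finetuned LoRA adapters, by loading each adapter onto a fresh copy of the base ESM-2 model and averaging per-residue logits. The base checkpoint, adapter paths, and two-label setup are placeholders for illustration, not necessarily the blog's exact recipe.

```python
# A minimal LoRA-ensemble sketch: load each adapter onto a copy of the base
# model and average the per-residue logits. Paths and checkpoint are
# placeholders; 2 labels (binding / non-binding) is an assumption.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel

base_ckpt = "facebook/esm2_t12_35M_UR50D"                   # stand-in base model
adapter_paths = ["lora_run_1", "lora_run_2", "lora_run_3"]  # hypothetical paths

tokenizer = AutoTokenizer.from_pretrained(base_ckpt)
models = []
for path in adapter_paths:
    base = AutoModelForTokenClassification.from_pretrained(base_ckpt, num_labels=2)
    models.append(PeftModel.from_pretrained(base, path).eval())

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
inputs = tokenizer(seq, return_tensors="pt")
with torch.no_grad():
    # Stack the (1, seq_len, 2) logits and average over the ensemble members.
    logits = torch.stack([m(**inputs).logits for m in models]).mean(dim=0)
preds = logits.argmax(dim=-1)   # 1 = predicted binding residue (assumption)
```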
@amelie_iska
amelie_schreiber
1 month
An interesting and novel approach to applying transformers to graph-structured data. It never got the attention it deserved and is likely an approach lost to time. It may be “old”, but it’s worth investigating further, especially for biochem/molecules:
3
8
55
@amelie_iska
amelie_schreiber
1 month
Damn, another E(3)-equivariant model that should have been SE(3)-equivariant. Molecules have chirality! Still exciting that it works for small molecules AND proteins:
0
7
55
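The chirality point fits in a few lines of numpy: proper rotations (SE(3)) preserve the sign of the signed volume spanned by three bond vectors, while reflections (allowed in E(3)) flip it, i.e. they swap enantiomers. A toy demo:

```python
# Rotations (SE(3)) keep the signed volume of three bond vectors; reflections
# (in E(3) but not SE(3)) flip its sign, i.e. they swap enantiomers.
import numpy as np

def signed_volume(a, b, c):
    return float(np.dot(a, np.cross(b, c)))

v1, v2, v3 = np.eye(3)                                      # three "bond vectors"
R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])   # rotation, det = +1
M = np.diag([-1., 1., 1.])                                  # mirror, det = -1

print(signed_volume(v1, v2, v3))              # +1.0, original handedness
print(signed_volume(R @ v1, R @ v2, R @ v3))  # +1.0, rotation preserves it
print(signed_volume(M @ v1, M @ v2, M @ v3))  # -1.0, reflection flips it
```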
@amelie_iska
amelie_schreiber
2 months
Has anyone else tried grafting two proteins together by first placing the proteins into AlphaFold-Multimer, then linking the proteins together with something like RFDiffusion motif scaffolding (treating the two proteins as though they are in the same chain)?
[image attached]
3
4
51
@amelie_iska
amelie_schreiber
1 month
Equivariant Spatio-Temporal Attentive Graph Networks to Simulate Physical Dynamics: A Replacement for MD? TBD. More comments to come. OpenReview: GitHub:
4
9
46
@amelie_iska
amelie_schreiber
5 months
Working on a new method to cluster protein-protein complexes so I can finetune ESM-2 on them for predicting PPIs and for generating binders 😊. Also may try to finetune EvoDiff this way for generating binders. I ❤️ proteins so much.
2
1
41
@amelie_iska
amelie_schreiber
6 months
Recently wrote a new blog post on intrinsic dimension of protein language model embeddings and curriculum learning:
1
7
41
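As a rough illustration of the kind of estimator discussed in the post, here is a minimal TwoNN intrinsic-dimension sketch (Facco et al.) in its MLE form; the embedding matrix below is a random stand-in for per-protein language model embeddings.

```python
# Minimal TwoNN intrinsic-dimension estimator (Facco et al., MLE form):
# mu = r2 / r1 follows Pareto(1, d), so d_hat = N / sum(log mu).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def two_nn_id(X):
    dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    mu = dists[:, 2] / dists[:, 1]      # column 0 is each point itself
    return len(X) / np.log(mu).sum()

X = np.random.randn(1000, 480)          # stand-in embedding matrix
print(two_nn_id(X))
```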
@amelie_iska
amelie_schreiber
1 month
Here’s a new method for sampling the equilibrium Boltzmann distribution for proteins using GFlowNets: If you aren’t familiar with GFlowNets, head over to @edwardjhu’s Twitter and watch his video. I’ll also post a link to a related lecture soon.
3
4
40
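For context on what training such a sampler looks like: many GFlowNet trainers minimize the trajectory balance objective. A minimal sketch, with placeholder tensors standing in for one sampled trajectory's forward/backward log-probabilities and reward:

```python
# One step of the trajectory balance (TB) objective used in many GFlowNet
# trainers. Real code would get these from a policy network and an energy
# function; here they are placeholder tensors for a single trajectory.
import torch

log_Z = torch.zeros(1, requires_grad=True)     # learned log partition function
log_pf = torch.randn(10, requires_grad=True)   # forward log-probs per step
log_pb = torch.randn(10)                       # backward log-probs per step
log_reward = torch.tensor(-3.0)                # e.g. -energy / kT of final state

# TB: (log Z + sum log P_F - log R(x) - sum log P_B)^2 -> 0 at optimum
tb_loss = (log_Z + log_pf.sum() - log_reward - log_pb.sum()) ** 2
tb_loss.backward()
```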
@amelie_iska
amelie_schreiber
6 months
Just cooked up a new tokenization method for protein language models and large language models. I can't wait to share :)
1
1
40
@amelie_iska
amelie_schreiber
20 days
Not specifically for proteins or other molecules, but this is a nice intro to flow matching. Thanks for the video, @ykilcher. Any chance you’d ever do something on this applied to proteins?
0
8
38
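For anyone who wants to see how small the core training loop is: a minimal conditional flow matching / rectified-flow sketch, interpolating between noise and data and regressing a stand-in MLP onto the constant velocity target. The model and all shapes are illustrative.

```python
# One conditional flow matching / rectified-flow training step: interpolate
# x_t = (1 - t) * x0 + t * x1 and regress the model onto the constant target
# velocity x1 - x0.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3 + 1, 128), nn.SiLU(), nn.Linear(128, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x1 = torch.randn(256, 3)            # "data" batch (e.g. 3D coordinates)
x0 = torch.randn_like(x1)           # noise sample
t = torch.rand(256, 1)              # time, uniform in [0, 1]

xt = (1 - t) * x0 + t * x1
v_target = x1 - x0
v_pred = model(torch.cat([xt, t], dim=-1))
loss = ((v_pred - v_target) ** 2).mean()

opt.zero_grad()
loss.backward()
opt.step()
```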
@amelie_iska
amelie_schreiber
4 months
Shouldn't we be able to do something similar to this with LoRA? LoRA and SVD are conceptually very similar. If so, that would likely explain the results in this paper, where LoRA turns out to be better than full finetuning. Thoughts?
0
1
18
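The conceptual link is easy to make concrete: the best rank-r approximation of a dense weight update (via SVD, by Eckart-Young) factors exactly like LoRA's B @ A. A sketch with a hypothetical update matrix:

```python
# Making the LoRA <-> SVD analogy concrete: the optimal rank-r approximation
# of a dense weight update factors exactly like LoRA's delta_W = B @ A.
import numpy as np

d, r = 512, 8
delta_W = np.random.randn(d, d)          # hypothetical full finetuning update
U, S, Vt = np.linalg.svd(delta_W, full_matrices=False)
B = U[:, :r] * S[:r]                     # (d, r), plays the role of LoRA's B
A = Vt[:r]                               # (r, d), plays the role of LoRA's A
low_rank = B @ A                         # best rank-r approx (Eckart-Young)

print(np.linalg.norm(delta_W - low_rank) / np.linalg.norm(delta_W))
```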
@amelie_iska
amelie_schreiber
2 months
Apparently you can in fact do flow matching on discrete data. For those interested in diffusion applied to discrete data like language and NLP, this is a good reference for how to do it with the more general flow matching models:
@json_yim
Jason Yim
2 months
Combining discrete and continuous data is an important capability for generative models. To address this for protein design, we introduce Multiflow, a generative model for structure and sequence generation. Preprint: Code: 1/8
2
92
443
0
1
17
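As a toy illustration of one discrete probability path (a masking-style corruption similar in spirit to what sequence models like Multiflow use, though greatly simplified): each token is its data value with probability t and a mask token otherwise. The mask id and shapes are assumptions.

```python
# Toy masking-style probability path for discrete data: at time t, each token
# is its data value with probability t and a mask token otherwise. A model
# trained to denoise x_t learns the generative flow.
import torch

MASK = 20                                   # mask token id (assumption)
x1 = torch.randint(0, 20, (8, 64))          # data tokens (e.g. amino acids)
t = torch.rand(8, 1)                        # per-sequence time in [0, 1]

keep = torch.rand(8, 64) < t                # keep data token with prob t
xt = torch.where(keep, x1, torch.full_like(x1, MASK))
```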
@amelie_iska
amelie_schreiber
21 days
C_4-symmetric motif scaffolding with RFDiffusion.
[image attached]
0
0
14
@amelie_iska
amelie_schreiber
1 month
Another E(3)-equivariant model that should be SE(3)-equivariant. E(3) doesn’t preserve chirality of molecules. GitHub:
@ML_Chem
Machine Learning in Chemistry
1 month
Transferable Water Potentials Using Equivariant Neural Networks #machinelearning #compchem
0
3
35
0
1
14
@amelie_iska
amelie_schreiber
8 days
This looks pretty amazing:
0
3
13
@amelie_iska
amelie_schreiber
5 months
So nervous about this one.
0
2
13
@amelie_iska
amelie_schreiber
1 month
Interestingly, quantizing state space models like Mamba doesn't seem to work very well, whereas we are now in the era of 1-bit quantization for transformers ~without~ performance degradation; it also isn't clear if Mamba is as expressive as Transformers.
2
2
11
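For reference, the 1-bit weight quantization in question is roughly BitNet-style: weights collapse to a scaled, centered sign pattern. A toy sketch; the exact centering and scaling details vary across papers.

```python
# Toy BitNet-style 1-bit weight quantization: weights collapse to a scaled,
# centered sign pattern (+/- alpha per tensor).
import torch

def binarize(W):
    alpha = W.abs().mean()                   # per-tensor scale
    return alpha * torch.sign(W - W.mean())  # centered sign, +/- alpha

W = torch.randn(256, 256)
Wq = binarize(W)
print((W - Wq).norm() / W.norm())            # relative quantization error
```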
@amelie_iska
amelie_schreiber
6 months
If you have opportunities to work at the intersection of AI and proteins, DM me. I have ideas and I like implementing them :)
2
5
11
@amelie_iska
amelie_schreiber
2 months
Okay, serious question. If you can accomplish the same thing with more general proteins, why restrict yourself to antibodies? Also, what are some problems that really truly require antibodies specifically and that can’t be done with more general proteins?
5
0
10
@amelie_iska
amelie_schreiber
1 month
@TonyTheLion2500 I highly recommend this reference along with his “smooth manifolds” book: Introduction to Riemannian Manifolds (Graduate Texts in Mathematics)
0
0
10
@amelie_iska
amelie_schreiber
1 month
Seems like a promising method. Notably, it works better (SOTA?) when you give it conformational ensembles to work with. It would be worth seeing whether conformational sampling, Distributional Graphormer, or AlphaFlow might yield even better results.
@JavierUtges
Javier Sánchez Utgés
1 month
Having a lot of fun visualising the ligand binding site predictions of #IFSitePred with #PyMol ! A new ligand binding site prediction method that uses #ESMIF1 learnt representations to predict where ligands bind! Check it out here: #Q96BI1
[image attached]
0
2
20
2
1
10
@amelie_iska
amelie_schreiber
1 month
(1/n) Even if Sora isn't currently capable of accurately generating simulations of small molecules or proteins, open sourcing it or giving select researchers access to it would allow us to add in equivariance or use components of it, such as those that maintain temporal coherence.
4
0
10
@amelie_iska
amelie_schreiber
1 month
Having solid temporal coherence, or modifying the architecture to be SE(3)-equivariant, would allow us to create better versions of things like this: we might actually be able to replace MD with AI, speeding up drug discovery and solving major problems.
0
0
6
@amelie_iska
amelie_schreiber
1 month
To all those just getting into this stuff: You’re entering one of the most interesting and impactful areas at the most exciting time. Don’t give up, even when it feels impossible. Stay close to the open source biochem AI community. They’re a great crowd. Good luck and have fun!
1
0
7
@amelie_iska
amelie_schreiber
1 month
Selectively modulating PPI networks by designing high-affinity, high-specificity binders with RFDiffusion and checking them with the AF-Multimer LIS score seems like low-hanging fruit to me. What reasons might there be for this not being more actively worked on?
2
0
6
@amelie_iska
amelie_schreiber
1 month
Computational efficiency in equivariant models is often a concern. This model addresses that and creates fast SE(n)-equivariant models for tasks involving molecules:
0
1
6
@amelie_iska
amelie_schreiber
5 months
Eeep! It's wooorkiiing! So excited! 😊 I'll write a hf blog post on it once it's all done.
0
1
6
@amelie_iska
amelie_schreiber
2 months
Crowdsourcing suggestions… if you could selectively disrupt or augment a pathway or PPI network, where would you start? Assume you can block any PPI, or augment the PPI network by designing proteins that create intermediary interactions (e.g., proteins that bind/link two others).
3
0
5
@amelie_iska
amelie_schreiber
2 months
@alexrives I have a method for detecting AI-generated proteins that I would like to open source at some point if people are interested. It seems to work on proteins generated by most models out right now, although there are a couple of models it does not work for; I'm hesitant to say which ones.
2
0
5
@amelie_iska
amelie_schreiber
2 months
@maurice_weiler @erikverlinde @wellingmax Could someone recommend a similar resource for other architectures like equivariant transformers or equivariance in geometric GNN models? Just curious what the go to resources are for people for other architectures.
2
0
4
@amelie_iska
amelie_schreiber
5 months
Now, using persistence landscapes, we can cut clustering time for 1,000 proteins from a full day to less than 30 minutes!
0
2
5
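A hedged sketch of the general pattern (not necessarily the exact pipeline above): vectorize each protein's H1 persistence diagram as a landscape, a fixed-length vector, then cluster the vectors with plain k-means instead of comparing diagrams pairwise. The Rips-on-coordinates setup and all parameters are placeholders.

```python
# Cluster proteins via persistence landscapes: turn each H1 persistence
# diagram into a fixed-length vector with gudhi, then run ordinary k-means.
import numpy as np
import gudhi
from gudhi.representations import Landscape
from sklearn.cluster import KMeans

def landscape_vector(coords, num_landscapes=3, resolution=50):
    st = gudhi.RipsComplex(points=coords, max_edge_length=10.0) \
             .create_simplex_tree(max_dimension=2)
    st.persistence()
    diag = st.persistence_intervals_in_dimension(1)
    # Drop infinite bars; return a zero vector if no 1-cycles were found.
    diag = diag[np.isfinite(diag).all(axis=1)] if len(diag) else diag
    if len(diag) == 0:
        return np.zeros(num_landscapes * resolution)
    return Landscape(num_landscapes=num_landscapes,
                     resolution=resolution).fit_transform([diag])[0]

proteins = [np.random.rand(80, 3) * 30 for _ in range(20)]  # stand-in coords
X = np.vstack([landscape_vector(p) for p in proteins])
labels = KMeans(n_clusters=4, n_init=10).fit_predict(X)
```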
@amelie_iska
amelie_schreiber
4 months
@pratyusha_PS This is awesome. When will the code be available? I would love to try this with a protein language model like ESM-2 and see if it improves performance.
2
0
5
@amelie_iska
amelie_schreiber
2 months
@samswoora You should also check out flow matching models. Flow matching generalizes diffusion (diffusion is a special case of flow matching). They're doing a lot with proteins and flow matching, but there's less buzz about it in vision and language domains.
2
0
5
@amelie_iska
amelie_schreiber
1 month
Attempting to raise my signal-to-noise ratio today by making some quality posts about AI and biochemistry. 😊
0
0
4
@amelie_iska
amelie_schreiber
8 days
@310ai__ It might also be good to look into computing the LIS score based on the PAE output of RoseTTAFold All-Atom, similar to what was done with AlphaFold-Multimer here. This is a new approach for protein-small molecule complexes.
1
1
4
@amelie_iska
amelie_schreiber
2 months
@biorxiv_bioinfo Cool idea, but how was the dataset split into train, test, and validation? Was sequence similarity/homology used to split the protein dataset? If not, this paper's results are unreliable. You have to split your data based on sequence similarity; 30% similarity is pretty standard.
0
0
3
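A minimal sketch of what a homology-aware split looks like in practice, assuming clusters come from something like `mmseqs easy-cluster seqs.fasta out tmp --min-seq-id 0.3` and its two-column representative/member TSV (the file name and layout are assumptions):

```python
# Homology-aware splitting: cluster sequences first, then assign whole
# clusters to train/val/test so splits share no close homologs.
import random
from collections import defaultdict

clusters = defaultdict(list)
with open("out_cluster.tsv") as f:       # lines: representative_id \t member_id
    for line in f:
        rep, member = line.split()
        clusters[rep].append(member)

reps = list(clusters)
random.seed(0)
random.shuffle(reps)
n = len(reps)
train = [s for c in reps[: int(0.8 * n)] for s in clusters[c]]
val   = [s for c in reps[int(0.8 * n): int(0.9 * n)] for s in clusters[c]]
test  = [s for c in reps[int(0.9 * n):] for s in clusters[c]]
```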
@amelie_iska
amelie_schreiber
5 months
Anyone have any idea why in silico directed evolution might increase perplexity and intrinsic dimension of a protein? Are more fit proteins generally more complicated?
3
0
4
@amelie_iska
amelie_schreiber
3 months
@GabGarrett CLIP but for proteins and small molecules...
1
0
4
@amelie_iska
amelie_schreiber
2 months
@HannesStaerk Still REALLY want to see this done with AlphaFold-Multimer. Maybe there’s a dynamic model of PAE and LIS that comes out of this that helps determine how strong or transient a PPI is.
1
2
4
@amelie_iska
amelie_schreiber
5 months
@andrewwhite01 You can also learn equivariance. I think equivariance is an overrated mathematical concept tbh. It's fancy and neat from a mathematical perspective, but otherwise I think you could have your network learn it and get just as far if not further.
0
0
4
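A minimal sketch of the "learn it instead" approach: augment each batch with random proper rotations (det = +1, so chirality is preserved) rather than baking SE(3) equivariance into the architecture.

```python
# "Learning" rotational symmetry via augmentation: apply a random proper
# rotation to the coordinates each step. QR of a Gaussian matrix gives an
# approximately uniform orthogonal matrix; flipping one column where needed
# forces det = +1 (a proper, chirality-preserving rotation).
import torch

def random_rotations(n):
    Q, _ = torch.linalg.qr(torch.randn(n, 3, 3))
    det = torch.linalg.det(Q)
    Q[:, :, 0] *= det.sign().unsqueeze(-1)   # make every det exactly +1
    return Q

coords = torch.randn(32, 100, 3)             # batch of protein coordinates
R = random_rotations(32)
augmented = torch.einsum("bij,bnj->bni", R, coords)
```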
@amelie_iska
amelie_schreiber
1 month
AlphaFlow-Multimer with the appropriate generalization of the LIS score would more or less solve PPI prediction. LIS alone already mostly solves it. Then the only bottleneck for giant detailed PPI networks is compute. This is a big deal. Explain to me why I might be wrong.
0
0
4
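For readers unfamiliar with LIS: below is a rough, simplified sketch of an LIS-style score computed from a predicted aligned error (PAE) matrix, keeping interchain PAE values under a cutoff (12 per the LIS preprint, as I understand it) and rescaling so lower error scores higher. Consult the paper for the exact recipe.

```python
# Rough LIS-style score from a PAE matrix for a two-chain complex: average
# the rescaled interchain PAE values that fall under the cutoff.
import numpy as np

def lis_like_score(pae, len_a, cutoff=12.0):
    inter = np.concatenate([pae[:len_a, len_a:].ravel(),
                            pae[len_a:, :len_a].ravel()])
    local = inter[inter < cutoff]
    return float(((cutoff - local) / cutoff).mean()) if local.size else 0.0

pae = np.random.uniform(0, 30, size=(300, 300))   # stand-in PAE, chains A+B
print(lis_like_score(pae, len_a=120))
```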
@amelie_iska
amelie_schreiber
1 month
Hot take for some, obvious to others: GPUs and LLM oriented ASICs along with AI operating systems will make CPUs mostly obsolete. Anyone out there capable of writing CUDA kernels who can explain why this might be an erroneous prediction?
2
0
4
@amelie_iska
amelie_schreiber
23 days
Really cool channel. Maybe we’ll get a video on SE(3)-equivariant neural networks one day🤞This would be great for folks trying to understand new SOTA models for proteins and small molecules. I would totally be down to collaborate @mathemaniacyt 🧬
@mathemaniacyt
Mathemaniac
24 days
Why do we require Jacobi identity to be satisfied for a Lie bracket? In the process, we also understand intuitively why tr(AB) = tr(BA) without matrix components. Watch now:
[image attached]
2
104
613
0
0
3
@amelie_iska
amelie_schreiber
1 month
@MIT_CSAIL Using random train/test splits when the data should be split based on some similarity metric, especially for proteins/small molecules, to determine if the model generalizes well to unseen data. Also using E(3)-equivariance instead of SE(3) for small molecules/proteins.
0
0
3
@amelie_iska
amelie_schreiber
1 month
It would be very interesting and useful to see how this could be used in tandem with the following method for detecting binding sites of conformational ensembles of proteins using ESM-IF1:
0
0
3
@amelie_iska
amelie_schreiber
26 days
@nomad421 Is this at all related to evolutionary velocity of proteins as described in the paper for this model?
0
0
3
@amelie_iska
amelie_schreiber
2 months
@lexfridman @sama Can something like Sora one day be used for molecular dynamics simulation, perhaps along with Gaussian splatting?
0
0
3
@amelie_iska
amelie_schreiber
6 months
@biorxivpreprint I'm so fascinated by how geometric compression, information theoretic compression, and LoRA or QLoRA all seem to be closely related. Should we be choosing our ranks based on perplexity or intrinsic dimension? Also, LoRA and QLoRA end up regularizing models! How neat!
0
0
2
@amelie_iska
amelie_schreiber
1 month
@naterbennett0 Will this be attempted with all atom models, or would that not make much difference? Also, what pain points are blocking progress to better performance? Architecture? Data? Is more physics needed? Something else? Maybe there’s some hairy math in the way I could grapple with?
0
0
3
@amelie_iska
amelie_schreiber
3 months
2
0
2
@amelie_iska
amelie_schreiber
2 months
@gallabytes @samswoora Try out these: Frank Noé's work is pretty cool in general. Let me know if you find others related to proteins, small molecules, DNA, or RNA.
0
0
3
@amelie_iska
amelie_schreiber
1 month
@FrankNoeBerlin @CecClementi The turtles are adorable as are the others. Kinda makes me wanna play Subnautica 😆
0
0
2
@amelie_iska
amelie_schreiber
1 month
Claims of superiority of the model don't appear until late in the paper and are completely absent from the abstract and the first part of the paper, which gives off confidence vibes. We're all tired of the SOTA claims appearing in every abstract these days.
0
1
1
@amelie_iska
amelie_schreiber
1 month
@mmbronstein @BlumLenore 😂nice…I’ve actually read a lot of this…can confirm it is a good read. I need to reread the sections on equivariant GNNs and attention. It’s been a while.
1
0
2
@amelie_iska
amelie_schreiber
5 months
@KevinKaichuang Can you say more about the "better protein training" one?
0
0
2
@amelie_iska
amelie_schreiber
1 month
Mamba trained on zeros and ones without tokenization, when?! Someone REALLY needs to do this. Could be a game changer, and the long context is perfect for such an experiment.
2
0
2
@amelie_iska
amelie_schreiber
2 months
@kharlikesticker @xennygrimmato_ People always say the benchmark must be bogus once it is solved. People did the same thing with Hinton and his group when AlexNet did so well on image classification. "Oh, well the benchmark is clearly flawed then if a neural network solved it." In hindsight it looks too easy.
1
0
1
@amelie_iska
amelie_schreiber
6 months
@jinyuansun39143 You should include information about EvoDiff in its knowledge base.
1
0
2
@amelie_iska
amelie_schreiber
3 months
@BoWang87 For function prediction, this looks quite good: Similarly for small molecules: I'm waiting for the big splash this will inevitably make. Thoughts? They use a CLIP-based approach and get SOTA (but actually).
0
0
2
@amelie_iska
amelie_schreiber
2 months
Could be extended to multiple protein fragments too I suppose. I'm curious to know how others are scaffolding various things.
0
0
2
@amelie_iska
amelie_schreiber
22 days
@Lauren_L_Porter @tiwarylab Have you looked into things like Distributional Graphormer, Timewarp, Boltzmann generators, GFlowNets, AlphaFlow, or other methods based on flow matching for sampling the Boltzmann distribution?
1
0
2
@amelie_iska
amelie_schreiber
1 month
@mmbronstein @BlumLenore This looks really interesting.
0
0
2
@amelie_iska
amelie_schreiber
1 month
I have come to realize every tool added to Copilot makes it better and more useful. This platform/product (not a model) is leading the way in natural language interfaces to advanced biochem AI and computational biochemistry.
1
1
2
@amelie_iska
amelie_schreiber
5 months
@EvaSmorodina How do I try it?
1
0
2
@amelie_iska
amelie_schreiber
1 month
Friendly reminder: You can wish those celebrating it a happy Easter AND support transgender people. Just sayin’. I did it, and I’m not even a Christian. Here’s to elevating the conversation, raising our signal-to-noise ratio, and being a little more chill and supportive.
0
0
2
@amelie_iska
amelie_schreiber
2 months
@amyxlu I want to see this applied to MD simulation so bad. Perhaps coupled with image-to-3D. I'm curious to know what approach people would take.
1
0
2
@amelie_iska
amelie_schreiber
5 months
@KevinKaichuang You could try ordering the data such that the intrinsic dimension of the embeddings associated to the data gradually increases. This might smooth things out some. With so few examples it may not help much, but it's worth a try.
1
0
2
@amelie_iska
amelie_schreiber
6 months
Is there noise added to the embeddings by QLoRA due to fitting everything into bins (quantization)? If so, I think there is a connection to NEFTune which might explain the improved performance of QLoRA over full finetuning. Thoughts?
1
0
2
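For reference, NEFTune's trick is tiny: add uniform noise scaled by alpha / sqrt(L * d) to the input embeddings during finetuning. If QLoRA's quantization bins inject comparable embedding-level noise, the analogy above is plausible. A sketch; alpha and all shapes are illustrative.

```python
# NEFTune in one function: add uniform noise scaled by alpha / sqrt(L * d)
# to the token embeddings during finetuning.
import torch

def neftune_noise(emb, alpha=5.0):
    L, d = emb.shape[-2], emb.shape[-1]
    scale = alpha / (L * d) ** 0.5
    return emb + torch.empty_like(emb).uniform_(-scale, scale)

emb = torch.randn(4, 128, 640)     # stand-in batch of token embeddings
noised = neftune_noise(emb)
```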
@amelie_iska
amelie_schreiber
1 month
@mmbronstein @BlumLenore Would love an updated expanded section on equivariant GNN and transformers. Just putting that out there 😎
1
0
2