Agrim Gupta Profile
Agrim Gupta

@agrimgupta92

2,310
Followers
306
Following
34
Media
307
Statuses

Simulating reality @stanford

Joined January 2017
Pinned Tweet
@agrimgupta92
Agrim Gupta
5 months
We introduce W.A.L.T, a diffusion model for photorealistic video generation. Our model is a transformer trained on image and video generation in a shared latent space. 🧵👇
55
267
1K
@agrimgupta92
Agrim Gupta
1 year
How should we leverage internet videos for learning visual correspondence? In our latest work we introduce SiamMAE: Siamese Masked Autoencoders for self-supervised representation learning from videos. web: paper: 👇🧵
16
122
484
@agrimgupta92
Agrim Gupta
5 years
We have released the LVIS v0.5 dataset for long tail object detection with 1200+ categories and 700k+ high quality instance segmentation masks. Paper: Website: API: With Ross Girshick and Piotr Dollar @facebookai
2
91
287
@agrimgupta92
Agrim Gupta
2 years
1/ Can we replicate the success of large scale pre-training --> task specific fine tuning for robotics? This is hard as robots have different act/obs space, morphology and learning speed! We introduce MetaMorph🧵👇 Paper: Code:
3
46
262
@agrimgupta92
Agrim Gupta
3 years
Excited to share our work on understanding the relationship between environmental complexity, evolved morphology, and the learnability of intelligent control. Paper: Video: w/ @silviocinguetta @SuryaGanguli @drfeifei
8
44
208
@agrimgupta92
Agrim Gupta
2 years
1/ Can we build video prediction models by masked visual pretraining via Transformer? We present MaskViT: a simple & parameter efficient method to generate high res. videos in real time. Paper: Web: 🧵👇
7
34
171
@agrimgupta92
Agrim Gupta
3 years
1/ Excited to share that our work on Deep Evolutionary Reinforcement Learning (DERL): a framework for large scale evolution of embodied agents in physically realistic environments is now published in @NatureComms. Paper: Video:
3
25
109
@agrimgupta92
Agrim Gupta
5 months
6/ Finally, our model can be used to generate videos with consistent 3D camera motion.
3
16
96
@agrimgupta92
Agrim Gupta
5 months
2/ website: Our approach has two key design decisions. First, we use a causal encoder to compress images and videos in a shared latent space.
2
14
72
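The "causal encoder" above compresses both a single image and a video clip with the same weights. Here is a minimal sketch of what makes that possible, assuming a plain 3D convolution with hypothetical channel sizes (illustrative only, not the paper's actual architecture): padding the time axis only on the past side means the first frame's latent never depends on later frames, so an image is just a 1-frame video.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv3d(nn.Module):
    """3D conv that is causal in time: pad only the past side of the time axis,
    so the latent for frame t never depends on frames after t."""
    def __init__(self, cin, cout, k=3):
        super().__init__()
        self.pad_t = k - 1                                 # past-only temporal padding
        self.conv = nn.Conv3d(cin, cout, kernel_size=k, padding=(0, k // 2, k // 2))

    def forward(self, x):                                  # x: (B, C, T, H, W)
        x = F.pad(x, (0, 0, 0, 0, self.pad_t, 0))          # pad time on the left only
        return self.conv(x)

enc = CausalConv3d(3, 8)
image = torch.randn(1, 3, 1, 32, 32)                       # a single image as a 1-frame video
video = torch.randn(1, 3, 8, 32, 32)                       # an 8-frame clip
print(enc(image).shape, enc(video).shape)                  # the same encoder handles both
```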
@agrimgupta92
Agrim Gupta
5 months
5/ We can also use our model to animate any image.
1
5
47
@agrimgupta92
Agrim Gupta
5 months
4/ Our model can generate photorealistic, temporally consistent motion from natural language prompts.
1
4
46
@agrimgupta92
Agrim Gupta
11 months
Foundation models can dexterously manipulate the world of bits but what about the world of atoms? Excited to introduce 🤖RoboCat🐈, the first foundation agent: ✅ multi-embodiment ✅ self-improves ✅ vision to action ✅ dexterous & generalist: 100s of tasks + objects How? 👇🧵
@GoogleDeepMind
Google DeepMind
11 months
Introducing RoboCat, a new AI model designed to operate multiple robots. 🤖 It learns to solve new tasks on different robotic arms with as few as 100 demonstrations - and improves skills from self-generated training data. Find out more:
45
281
1K
2
12
42
@agrimgupta92
Agrim Gupta
5 years
Datasets are extremely hard to get right and often underappreciated. I can only try to imagine the tremendous foresight and hard work which went into making ImageNet. It's mind-boggling that @drfeifei was able to envision where the community should go as early as 2006-07! #CVPR19
1
2
41
@agrimgupta92
Agrim Gupta
4 months
A great thread summarizing the state of video generation in 2023. 2024 looks promising!
@venturetwins
Justine Moore
4 months
2023 was a breakout year for AI video. In January, there were no public text-to-video models. Now, there are dozens of video gen products and millions of users. A recap of the biggest developments + companies to watch 👇
25
173
647
0
6
39
@agrimgupta92
Agrim Gupta
5 months
3/ Second, for memory and training efficiency, we use a window attention based transformer architecture for joint spatial and temporal generative modeling in latent space.
2
5
32
@agrimgupta92
Agrim Gupta
5 months
Thanks @_akhaliq for sharing!
@_akhaliq
AK
5 months
Photorealistic Video Generation with Diffusion Models paper: present W.A.L.T, a transformer-based approach for photorealistic video generation via diffusion modeling. Our approach has two key design decisions. First, we use a causal encoder to jointly…
11
186
776
1
1
27
@agrimgupta92
Agrim Gupta
3 years
We are excited to announce the LVIS 2021 challenge @ICCV_2021. This year we introduce new metrics to better measure progress made by our algorithms in the challenging regime of long tail object recognition. Check out the challenge hosted @eval_ai
@AIatMeta
AI at Meta
3 years
The LVIS 2021 challenge is live! It uses our #dataset that contains 1203 object categories, 160k images, and 2M instance annotations. The deadline to submit your challenge entry is September 27. Learn more about LVIS and the challenge here:
3
34
142
0
3
27
@agrimgupta92
Agrim Gupta
10 months
LLMs are extremely powerful and are very good at writing code. However, they lack visual grounding. Exciting work led by @wenlong_huang shows how we can combine LLMs + VLMs for robotic manipulation.
@wenlong_huang
Wenlong Huang
10 months
How to harness foundation models for *generalization in the wild* in robot manipulation? Introducing VoxPoser: use LLM+VLM to label affordances and constraints directly in 3D perceptual space for zero-shot robot manipulation in the real world! 🌐 🧵👇
10
141
582
0
10
25
@agrimgupta92
Agrim Gupta
1 year
Latest project from friends at @NVIDIAAI: Voyager. An AI agent based on GPT-4 that plays Minecraft and keeps learning new skills. Congrats @guanzhi_wang, @DrJimFan and the whole team!
@DrJimFan
Jim Fan
1 year
What if we set GPT-4 free in Minecraft? ⛏️ I’m excited to announce Voyager, the first lifelong learning agent that plays Minecraft purely in-context. Voyager continuously improves itself by writing, refining, committing, and retrieving *code* from a skill library. GPT-4 unlocks…
365
2K
9K
2
4
22
@agrimgupta92
Agrim Gupta
3 years
Great use of AI for video editing. Dynamic ad creation using @Rephrase_AI + hyper-local targeting based on pincodes of users = having one of the most popular Bollywood stars @iamsrk as a brand ambassador for your local business! Video:
0
6
18
@agrimgupta92
Agrim Gupta
5 years
Congratulations @ai_habitat for the best paper nomination @ICCV19. Work led by the FAIR habitat team @facebookai
0
0
18
@agrimgupta92
Agrim Gupta
2 years
Thanks a lot @techreview and @strwbilly for inviting me to share our latest work on evolving agent morphologies and learning universal controllers! Joint work with my wonderful collaborators @drfeifei @SuryaGanguli @silviocinguetta @DrJimFan
@techreview
MIT Technology Review
2 years
First up: @agrimgupta92 from Stanford is interested in how the form of a machine changes its ability to learn, shifting the focus away from learning algorithms operating by themselves and onto learning combined with a kind of bodily evolution. #EmTechDigital
1
2
9
0
6
13
@agrimgupta92
Agrim Gupta
5 years
ICCV 2019 by numbers: 10k authors, 59 countries, 7.5k attendees and 1k accepted papers. @ICCV19
0
0
14
@agrimgupta92
Agrim Gupta
2 years
2/ MetaMorph is based on the insight that robot morphology is just another modality on which we can condition the output of a Transformer. We process an arbitrary robot by creating a 1D sequence of tokens corresponding to a depth-first traversal of its kinematic tree.
1
1
11
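A toy sketch of that tokenization, with hypothetical limb names and features (not the released MetaMorph code): each node of the kinematic tree contributes one token, and a depth-first traversal flattens the tree into the 1D sequence the Transformer consumes.

```python
class Limb:
    """One node of a robot's kinematic tree (simplified)."""
    def __init__(self, features, children=None):
        self.features = features           # e.g. joint type, limb geometry, motor params
        self.children = children or []

def tokenize(limb):
    """Depth-first traversal of the kinematic tree -> 1D sequence of limb tokens."""
    tokens = [limb.features]
    for child in limb.children:
        tokens.extend(tokenize(child))
    return tokens

# Toy robot: a torso with two legs, one of which has a lower segment.
robot = Limb("torso", [Limb("left-leg", [Limb("left-shin")]), Limb("right-leg")])
print(tokenize(robot))                     # ['torso', 'left-leg', 'left-shin', 'right-leg']
```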
@agrimgupta92
Agrim Gupta
5 years
People say datasets are opium for AI researchers. I like the original analogy better. Data is Oil. Both are unsustainable in the long run but nothing else works right now.
1
0
12
@agrimgupta92
Agrim Gupta
5 years
Came across this discussion from 1984: More than 35 years have passed but it still reads the same if you just replace expert systems with deep learning.
0
0
11
@agrimgupta92
Agrim Gupta
5 years
VQA (V+L) was the subject of much debate after Alyosha's talk at #CVPR2019. Even though I agree with the sentiment that vision is just not there yet, I don't think discarding an entire field is wise. The progress the VQA benchmark has enabled is undeniable.
@deviparikh
Devi Parikh
5 years
VQA performance on a standard benchmark (VQA v2 dataset) has gone up 20% (absolute) in the last ~4 years. You can really tell the difference when interacting with these models! Check the VQA demo out here:
2
16
49
0
1
11
@agrimgupta92
Agrim Gupta
1 year
5/ By predicting a majority fraction of the future frame, SiamMAE learns the notion of object boundaries. This emergent ability is unique and surprising as no loss function operates on the [CLS] token in SiamMAE. We're excited to explore this further!
1
1
11
@agrimgupta92
Agrim Gupta
3 years
3/ Large scale simulations allow us to ask interesting scientific questions like what is the relationship between morphological intelligence and environmental complexity. We find that agents evolved in more complex environments are able to learn new tasks faster and better.
1
1
11
@agrimgupta92
Agrim Gupta
1 year
2/ First, we note that images are (approximately) isotropic. However, the temporal dimension is special and not all spatio-temporal orientations are equally likely. Hence, symmetric masking across the temporal dimension might be sub-optimal!
1
1
10
@agrimgupta92
Agrim Gupta
5 years
pycls is a high-quality, high-performance codebase for image classification research. It can also serve as a great starting point for projects not necessarily on image classification. Code: by @facebookai
0
2
10
@agrimgupta92
Agrim Gupta
3 years
Very accessible and fun conversation about our work on evolving embodied intelligence. Thanks @a16z @lr_bio @vijaypande for hosting. Joint work with @silviocinguetta @SuryaGanguli @drfeifei
@SuryaGanguli
Surya Ganguli
3 years
Was fun doing a podcast at @a16z with @drfeifei @vijaypande and @lr_bio on evolving embodied intelligence!
1
22
75
0
2
10
@agrimgupta92
Agrim Gupta
4 years
1/2 We released the LVIS v1.0 dataset for long tail object detection with 1200+ categories and 2M+ high quality instance seg masks on 160k images. Paper: API: Website: With Ross and Piotr #CVPR2020 @facebookai
1
0
10
@agrimgupta92
Agrim Gupta
2 years
Task specification is a challenging problem in robotics. We introduce VIMA: A transformer based model which can perform *any* task as specified by multimodal prompts. Intuitive & multimodal task interface is going to be essential for useful embodied agents.
@DrJimFan
Jim Fan
2 years
We trained a transformer called VIMA that ingests *multimodal* prompt and outputs controls for a robot arm. A single agent is able to solve visual goal, one-shot imitation from video, novel concept grounding, visual constraint, etc. Strong scaling with model capacity and data!🧵
18
147
870
0
3
10
@agrimgupta92
Agrim Gupta
8 months
Great work showcasing the power of sim2real and dexterous manipulation! Congrats @chenwang_j and team.
@chenwang_j
Chen Wang
8 months
How to chain multiple dexterous skills to tackle complex long-horizon manipulation tasks? Imagine retrieving a LEGO block from a pile, rotating it in-hand, and inserting it at the desired location to build a structure. Introducing our new work - Sequential Dexterity 🧵👇
27
92
473
1
1
9
@agrimgupta92
Agrim Gupta
2 years
@ilyasut “It is slothful not to compress your thoughts.” Winston Churchill
0
1
8
@agrimgupta92
Agrim Gupta
6 months
Evaluation of modern generative models is challenging. Check out HEIM: amazing work led by @tonyh_lee @michiyasunaga @chenlin_meng. A new benchmark for evaluating text to image generation models 🧵👇
@michiyasunaga
Michi Yasunaga
6 months
Text-to-image models like DALL-E create stunning images. Their widespread use urges transparent evaluation of their capabilities and risks. 📣 We introduce HEIM: a benchmark for holistic evaluation of text-to-image models (in #NeurIPS2023 Datasets) [1/n]
3
57
177
0
2
9
@agrimgupta92
Agrim Gupta
3 years
6/ ecological niche but also enable efficient multi task learning. This is joint work with @drfeifei @SuryaGanguli @silviocinguetta at @StanfordBrain, @StanfordSVL, @StanfordHAI and published in @NaturePortfolio, @NatureComms
1
1
7
@agrimgupta92
Agrim Gupta
1 year
3/ We randomly select a pair of frames from videos and use an asymmetric masking strategy: mask a high portion of the future frame (95%) and keep the past frame intact (0%). Frames are processed independently via an encoder and future masked patches are predicted via a decoder.
1
1
7
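A minimal sketch of that sampling-and-masking step, assuming 196 patches per frame (a 14x14 grid, as in a ViT at 224px with 16px patches); the function name is hypothetical and this is illustrative, not the released SiamMAE code.

```python
import numpy as np

def sample_asymmetric_pair(video, mask_ratio=0.95, num_patches=196):
    """Pick a random (past, future) frame pair; mask 95% of the future frame's
    patches and leave the past frame fully visible."""
    t1 = np.random.randint(0, len(video) - 1)              # past frame index
    t2 = np.random.randint(t1 + 1, len(video))             # a strictly later frame
    num_masked = int(mask_ratio * num_patches)
    masked_ids = np.random.choice(num_patches, num_masked, replace=False)
    return video[t1], video[t2], masked_ids                # decoder predicts the masked ids

video = [np.zeros((224, 224, 3)) for _ in range(16)]       # dummy 16-frame clip
past, future, masked_ids = sample_asymmetric_pair(video)
print(f"{len(masked_ids)} of 196 future patches masked")   # 186 of 196
```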
@agrimgupta92
Agrim Gupta
3 years
4/ Moreover, we observe a morphological Baldwin effect, where morphologies rapidly evolve over a few generations to reduce the sample complexity of reinforcement learning, cutting learning times in half in just 10 generations!
1
2
7
@agrimgupta92
Agrim Gupta
5 months
Amazing results on real world humanoid locomotion using RL! Congrats @ir413
@ir413
Ilija Radosavovic
5 months
we have trained a humanoid transformer with large-scale reinforcement learning in simulation and deployed it to the real world zero-shot
95
259
2K
0
0
6
@agrimgupta92
Agrim Gupta
3 years
5/ We find a mechanistic underpinning for both the morphological Baldwin effect and the emergence of embodied intelligence. DERL finds morphologies which are energy efficient and highly stable which affords the agents the ability to not only survive in their...
1
1
6
@agrimgupta92
Agrim Gupta
5 years
People often want to work with the best people in a field. I think if life is like a race then it makes sense to work with people who are just ahead of you; the slipstream only happens when you are just behind.
0
1
6
@agrimgupta92
Agrim Gupta
1 year
6/6 This is joint work with @jiajunwu_cs, @jiadeng and @drfeifei. Work done at @StanfordAILab, @StanfordSVL and supported by @StanfordHAI.
1
1
6
@agrimgupta92
Agrim Gupta
1 year
4/ MAE features generally require finetuning and perform poorly in zero-shot settings. Asymmetric masking, a siamese encoder and our decoder design fix this. SiamMAE features can be used zero-shot and outperform state-of-the-art self-supervised methods on multiple tasks.
1
1
6
@agrimgupta92
Agrim Gupta
2 years
4/ Thanks to iterative decoding, we can now use MaskViT for planning on real robots. In fact our video prediction is up to 512x faster than autoregressive video prediction.
1
0
6
@agrimgupta92
Agrim Gupta
5 years
Came across the YouTube channel of @Lux_Capital. Some really cool startups are being backed by them. A key difference I noticed in their portfolio was the absence of Uber-for-X or Amazon-for-Y type startups. Really refreshing.
1
0
6
@agrimgupta92
Agrim Gupta
5 years
This is great! Almost all computer vision datasets depend on Flickr. This trend, which started with PASCAL, was later followed by ImageNet, COCO, etc. Finally we will have something which is not only a different type of image distribution but hopefully not too North America focused
@MGreenePhD
Michelle Greene
5 years
Big news! With help from the @NSF, some terrific colleagues (@neuroMDL @bjbalas and Paul MacNeilage) and I are about to start a journey creating the Visual Experience Database: a first-person video database that will characterize how the world actually looks. 1/
21
39
223
1
0
6
@agrimgupta92
Agrim Gupta
3 years
1/ How do current advances in transformer architectures and representation learning transfer to the challenging setting of long tail instance segmentation? Are we close to detecting 1200+ categories? Check out the LVIS Challenge Workshop video:
1
1
6
@agrimgupta92
Agrim Gupta
4 years
My prior on what AGI stands for has been updated from Artificial General Intelligence --> Adjusted Gross Income. #tax2020
0
0
4
@agrimgupta92
Agrim Gupta
6 months
Amazing work! Congratulations.
@sumith1896
Sumith Kulal
6 months
Super excited to announce the release of Stable Video Diffusion (SVD) -- the first set of video models in the Stable Diffusion series. To start with, we release 14-frame (SVD) and 25-frame image-to-video (SVD-XT) models. The code/weights are already out! SVD:…
7
43
307
1
0
5
@agrimgupta92
Agrim Gupta
2 years
2/ Our approach is based on two simple design decisions. First, for memory and training efficiency, we use two types of window attention: spatial and spatiotemporal. We find that this simple de-coupling greatly improves the training speed without sacrificing quality.
1
0
5
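A rough illustration of the two window types, with made-up window sizes (a sketch of the idea, not the paper's exact configuration): a spatial window attends within one frame, while a spatiotemporal window attends through time inside a small spatial tube, so neither attention pattern ever covers the full token grid.

```python
import numpy as np

def spatial_windows(tokens):                   # attend within each frame
    T, H, W, C = tokens.shape
    return tokens.reshape(T, H * W, C)         # one window per frame

def spatiotemporal_windows(tokens, h=4, w=4):  # attend through time in small tubes
    T, H, W, C = tokens.shape
    x = tokens.reshape(T, H // h, h, W // w, w, C)
    x = x.transpose(1, 3, 0, 2, 4, 5)          # (H/h, W/w, T, h, w, C)
    return x.reshape(-1, T * h * w, C)         # one window per spatial tube

tokens = np.zeros((8, 16, 16, 64))             # (frames, height, width, channels)
print(spatial_windows(tokens).shape)           # (8, 256, 64)
print(spatiotemporal_windows(tokens).shape)    # (16, 128, 64)
```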
@agrimgupta92
Agrim Gupta
1 year
@johnowhitaker Non-autoregressive decoding has shown great results with mask-predict in NLP, MaskGIT in images, and MaskViT in videos. The key idea is iterative refinement!
1
0
5
@agrimgupta92
Agrim Gupta
3 years
2/ DERL closely mimics the intertwined processes of evolution and learning and creates embodied agents that exploit the passive physical dynamics of agent-environment interactions to survive in their evolutionary environment.
1
1
5
@agrimgupta92
Agrim Gupta
2 years
5/ Finally, we provide a mechanistic explanation of how MetaMorph is able to control 1000s of morphologies. MetaMorph simplifies the control problem by learning to activate different motor synergies depending on the input morphology!
1
1
4
@agrimgupta92
Agrim Gupta
2 years
4/ The pre-trained controller can zero-shot generalize to novel task and morphology combinations. Fine-tuning our pre-trained controller is up to 3x more sample efficient than training from scratch on novel tasks.
1
0
4
@agrimgupta92
Agrim Gupta
3 months
compute + data + transformer + details = magic! congratulations @_tim_brooks & @billpeeb on sora!
1
0
6
@agrimgupta92
Agrim Gupta
3 years
Are you looking for more realistic and challenging environments to train your robots? Would you one day like a robot which can free you from household chores? Wondering how current RL algorithms perform in this challenging setting? Come join us at #ICCV2021
@RobobertoMM
Roberto @ICRA24
3 years
#ICCV2021 Join us this Sunday Oct 17 13-18 EDT @ BEHAVIOR Workshop: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments. We feature 7 world-renowned speakers in CV, embodied AI, and robotics: @leto__jean @chelseabfinn @hyogweon & more
1
14
36
1
1
5
@agrimgupta92
Agrim Gupta
5 years
1/ You don't need one if you are interested in ML engineering roles. If you care about doing independent research it is almost impossible now to get that freedom at a company without a PhD. @chipro
@chipro
Chip Huyen
5 years
Can we say once and for all that you DON'T need MS/PhD to do machine learning? If you're interested in a company, build up your portfolio and apply (or get people to refer you)! No tech company would pass up someone who has won Kaggle competitions or amazing GitHub repos.
20
49
255
1
0
5
@agrimgupta92
Agrim Gupta
4 years
I have a joke about human behaviour prediction but it's anti-social.
0
0
4
@agrimgupta92
Agrim Gupta
1 year
What should we learn from videos to accelerate robot learning? Key idea: learn high-level planning from in-domain human videos and low-level skills from robot demonstrations. Really impressive results. Congrats @chenwang_j and team!
@chenwang_j
Chen Wang
1 year
How to teach robots to perform long-horizon tasks efficiently and robustly🦾? Introducing MimicPlay - an imitation learning algorithm that uses "cheap human play data". Our approach unlocks both real-time planning through raw perception and strong robustness to disturbances!🧵👇
20
144
742
0
0
4
@agrimgupta92
Agrim Gupta
2 years
3/ Second, during training, we mask a variable percentage of tokens instead of a fixed mask ratio. For inference, MaskViT generates all tokens via iterative refinement where we incrementally decrease the masking ratio following a mask scheduling function.
1
0
4
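A schematic of that inference loop, with a cosine schedule as one common choice of mask scheduling function; the confidence scores here are random stand-ins for a Transformer forward pass, so this is a shape-of-the-algorithm sketch, not the released MaskViT code.

```python
import math
import random

def mask_schedule(step, total_steps):
    """Cosine schedule: fraction of tokens still masked after `step` refinement steps."""
    return math.cos(0.5 * math.pi * step / total_steps)

def iterative_decode(num_tokens=256, total_steps=12):
    tokens = [None] * num_tokens                           # None == [MASK]
    for step in range(1, total_steps + 1):
        # 1) "predict" (confidence, token id) for every still-masked position
        proposals = {i: (random.random(), random.randint(0, 1023))
                     for i, t in enumerate(tokens) if t is None}
        # 2) commit the most confident predictions, re-mask the rest
        ranked = sorted(proposals, key=lambda i: proposals[i][0], reverse=True)
        num_remasked = int(mask_schedule(step, total_steps) * num_tokens)
        for i in ranked[:max(len(ranked) - num_remasked, 0)]:
            tokens[i] = proposals[i][1]
    return tokens

frame = iterative_decode()
print(sum(t is not None for t in frame), "of 256 tokens decoded in 12 passes")
```

Because every pass commits many tokens at once, a full frame takes `total_steps` forward passes instead of one pass per token, which is where the large speedup over autoregressive decoding comes from.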
@agrimgupta92
Agrim Gupta
5 years
2/ It was a bit better in the early days when you had fewer people and the companies were still trying to figure out the right structure of the research labs.
0
1
4
@agrimgupta92
Agrim Gupta
5 years
When we were working on modelling human behavior for social navigation in early 2017 we were not satisfied with the available datasets. Check out the new dataset from @StanfordSVL
@silviocinguetta
Silvio Savarese
5 years
New benchmark for social robot navigation in crowded environments is available at . This is the largest dataset to date!
0
5
12
0
1
3
@agrimgupta92
Agrim Gupta
5 years
Of course it would not have been possible without the amazing group of people who shared the vision and made it possible: Jia Deng, @RichardSocher, @lijiali_vision, and Kai Li.
0
0
3
@agrimgupta92
Agrim Gupta
2 years
Every day we have a new LLM paper but web search still sucks! Having trouble answering simple questions in 2022. Interestingly, the 4th link does have the correct age highlighted 🤷‍♂️. Also checked @YouSearchEngine, same failure mode :/ Hopefully LLMs disrupt search soon!
0
0
3
@agrimgupta92
Agrim Gupta
3 months
@jeffclune Flipbook animation of images on the bottom right?
1
0
2
@agrimgupta92
Agrim Gupta
2 years
3/ Our pre-trained policy zero-shot generalizes to 1000s of variations in dynamics and kinematic parameters and even completely unseen morphologies. The graph below shows zero-shot performance 👇
1
0
2
@agrimgupta92
Agrim Gupta
2 years
🤯🤯
@hojonathanho
Jonathan Ho
2 years
Excited to announce Imagen Video, our new text-conditioned video diffusion model that generates 1280x768 24fps HD videos! #ImagenVideo Work w/ @wchan212 @Chitwan_Saharia @jaywhang_ @RuiqiGao @agritsenko @dpkingma @poolio @mo_norouzi @fleet_dj @TimSalimans
58
734
3K
0
0
3
@agrimgupta92
Agrim Gupta
6 years
@dog_feelings Would be great if, when you retweet something, Twitter could show: Thoughts of Dog woofed :)
0
0
2
@agrimgupta92
Agrim Gupta
3 years
0
0
2
@agrimgupta92
Agrim Gupta
5 years
So true!
@legogradstudent
Lego Grad Student
5 years
Any time I feel smart, I remember that I am hesitant to watch new movies because they take too much time yet I gleefully chain-watch videos on YouTube.
9
114
1K
1
0
2
@agrimgupta92
Agrim Gupta
2 years
Amazing results!
@poolio
Ben Poole
2 years
Happy to announce DreamFusion, our new method for Text-to-3D! We optimize a NeRF from scratch using a pretrained text-to-image diffusion model. No 3D data needed! Joint work w/ the incredible team of @BenMildenhall @ajayj_ @jon_barron #dreamfusion
136
1K
6K
0
0
2
@agrimgupta92
Agrim Gupta
3 years
Congratulations @DeepMind @demishassabis. In 2014 people thought the mission statement of DeepMind was absolutely crazy: "solving intelligence" and using it to solve other challenges. It's amazing what we can achieve with current capabilities. Future looks promising!
@demishassabis
Demis Hassabis
3 years
The #AlphaFold 2 papers on the methods and human proteome predictions are out today in hard copy in @Nature! A really proud moment to see our work featured with a fantastic image on the front cover of the issue:
35
534
3K
0
0
2
@agrimgupta92
Agrim Gupta
4 months
Sim to SF
@ir413
Ilija Radosavovic
4 months
hello san francisco
47
42
496
0
0
3
@agrimgupta92
Agrim Gupta
3 years
Woke up to two exciting pieces of news! Thanks @HennessyReports for the amazing article featuring our work in @TheEconomist, which was also featured in @NatureComms editor highlights.
0
0
2
@agrimgupta92
Agrim Gupta
6 years
Today we are open-sourcing code and pretrained models for our paper Image Generation from Scene Graphs. #CVPR2018
@jcjohnss
Justin Johnson
6 years
Code and pretrained models for our #CVPR2018 paper on generating images from scene graphs is now available! A step toward creating images with fine-grained control over visual content. With @agrimgupta92 and @drfeifei
1
101
316
0
0
2
@agrimgupta92
Agrim Gupta
4 years
2/2 LVIS v1.0 will be used for the Joint COCO and LVIS Workshop in ECCV 2020. Please see the section about best practices if you use LVIS in your research.
0
0
1
@agrimgupta92
Agrim Gupta
3 years
@maxjaderberg Congratulations on the release! Both the environment and the learnt behaviors are fascinating. Really like the procedural generation of environments which encompasses both cooperative and competitive games!
0
0
0
@agrimgupta92
Agrim Gupta
3 years
"Biology is far too complex and messy to ever be encapsulated as a simple set of neat mathematical equations. But just as mathematics turned out to be the right description language for physics, biology may turn out to be the perfect type of regime for the application of AI"
@demishassabis
Demis Hassabis
3 years
Thrilled to announce the launch of a new Alphabet company @IsomorphicLabs . Our mission is to reimagine the drug discovery process from first principles with an AI-first approach, to accelerate biomedical breakthroughs and find cures for diseases. Details:
69
618
3K
0
0
0
@agrimgupta92
Agrim Gupta
6 months
1
0
1
@agrimgupta92
Agrim Gupta
3 months
1
0
0
@agrimgupta92
Agrim Gupta
1 year
Feature request for prompt: Interesting follow-up ideas on X. The model should do more than extract future work snippets. Can potentially be really helpful with the paper as context combined with a huge database of papers --> brainstorming partner!
@paperswithcode
Papers with Code
1 year
🪐 Introducing Galactica. A large language model for science. Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more. Explore and get weights:
283
2K
8K
0
0
1
@agrimgupta92
Agrim Gupta
5 months
0
0
1
@agrimgupta92
Agrim Gupta
3 months
1
0
1
@agrimgupta92
Agrim Gupta
3 years
Check out the video and the blog post describing our work.
@Stanford
Stanford University
3 years
. @StanfordHAI researchers created a computer-simulated playground where arthropod-like agents dubbed "unimals" (short for universal animals) learn and are subjected to mutations and natural selection.
3
21
59
0
0
1
@agrimgupta92
Agrim Gupta
6 months
@DrJimFan @sama @gdb What if they join Anthropic?
2
0
1