The Path to Autonomy is paved with Simulation!
The focus on building the best data engines for Embodied AI & Robotics has been a core mission at
@NVIDIARobotics
I am so excited to share that the simulation effort we have been working on for the last 2+ years will be part of…
As
@NeurIPSConf
#NeurIPS2021
deadline approaches, I've consolidated some technical writing tips.
Succinct answers to each of the questions, and voilà, you have a crisp Introduction!
Expand on prior work, intuition/tech details, & expts for the rest, ensuring the why is always clear
We organized a seminar-style course
@UofTCompSci
on 3D and Geometric Deep Learning.
Here is the reading list (Videos + Slides):
Each paper comes with a 10-min tutorial:
Hope it helps folks looking to get up to speed on the topic!
The standards of what
@Twitter
thinks is machine learning have dropped somewhat.
Yet for some reason 1k+ people like a thread on the definition of log.
...and here I am tweeting "excited to share our latest paper on....."
I prolly need education on HowTo Twitter
Kinda definitive guide to FAQs for students evaluating Ph.D. offers!
Always ask: Are there snacks in the lab?
Advisors: need to invest in munchies!
thx to
@igilitschenski
for sharing this!
Original:
Nvidia Robotics Research internships for summer 2024 are now open
If you are interested in learning for control, RL, and especially foundation models: VLMs and LLMs for decision making.
DM me for more and apply here
BTW, these are located in both Santa…
"Physically Embedded Planning Problems: New Challenges for Reinforcement Learning"
the lack of reference to robotics folks who have worked on this for decades, and the reinvention of the problem as your own!
gotta do better
@DeepMind
How "New" is this?
Isaac Gym -
@NVIDIAAI
physics simulation environment for reinforcement learning research (preview Release)
- End-to-End GPU accelerated
- Isaac Gym tensor-based APIs for massively parallel sim
Also get in touch for potential internships to flex in Gym!
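The core idea behind tensor-based APIs for massively parallel sim can be sketched outside of Isaac Gym itself. The toy below (my own illustration, not Isaac Gym's actual API; `step_parallel` and the point-mass dynamics are invented for the example) keeps all N environment states in one batched array so a single vectorized op advances every env at once, which is what makes GPU-resident simulation pay off:

```python
import numpy as np

# Hypothetical sketch of the batched-state idea: instead of stepping
# N environments one-by-one in a Python loop, all states live in a
# single (N, D) array and one vectorized op advances them together.
def step_parallel(pos, vel, actions, dt=0.01):
    """Advance N point-mass environments in one batched update."""
    vel = vel + actions * dt   # (N, D) batched velocity update
    pos = pos + vel * dt       # (N, D) batched position update
    return pos, vel

N, D = 4096, 3                 # 4096 envs, 3-D point-mass state
pos = np.zeros((N, D))
vel = np.zeros((N, D))
actions = np.ones((N, D))
pos, vel = step_parallel(pos, vel, actions)
```

On a GPU the same pattern runs as tensor ops on device memory, so observations and rewards never round-trip through the CPU between steps.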
Reviewers in ML continue the trend!
S4RL is shot down for being too simple and obvious!
All Reviewers "great work, strong empirically in many domains, but there is little fancy-pants theory!"
reject🚫
they missed the memo: Surprisingly Simple!
Hot take: deep RL research has stagnated because conferences have created bad incentives, rewarding researchers for vacuous claims of novelty, tenuous-at-best theoretical connections, or SOTA, while punishing boring analysis of the empirical tricks that actually make things work.
Semi-supervised learning to generate high-resolution images with disentangled latent codes using <2% labelled data
Paper:
w/
@wn8_nie
, T.Karras,
@shoubhikdn
, A.Patney, A.Patel,
@AnimaAnandkumar
Real win: can make your advisor happier (
@drfeifei
)
Super cool new paper from the
@NvidiaAI
Robotics group in Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience [Paper , video: ]
Object-oriented world models are *the* key for reasoning.
But, unsupervised task-agnostic methods are hard!
SlotFormer, at ICLR2023, is an unsupervised video prediction model that also works for tasks: VQA and model-based planning
Read on for more!
We are co-organizing
#NeurIPS2021
workshop on Differential Equations and PDEs.
Promises to be a very exciting agenda on a topic of growing popularity among ML folks.
Sept 17 deadline for contributed papers
stay tuned for more!
Our 'The Symbiosis of Deep Learning and Differential Equations' workshop has been accepted for
#NeurIPS2021
!
Send us your work on data-driven dynamical systems, neural differential equations, solving PDEs with deep learning etc.
Tentative submission deadline Sept. 17.
Very honored that DiSECt won a Best Student Paper Award at
#RSS2021
! Congrats to my co-authors
@milesmacklin
, Yashraj Narang, Dieter Fox,
@animesh_garg
and Fabio Ramos, and thanks for this great collaboration
@NVIDIAAI
! 🎉
Deep RL is not really using deep networks!
We found that dense connections and deeper networks help improve learning performance. Results hold across various manipulation and locomotion tasks, for both proprioceptive and image observations!
Deep learning has seen huge gains when you increase the number of layers, but what about Deep RL?
Introducing D2RL! Changing how you parameterize your policy + Q function boosts performance
Co-led with
@mangahomanga
.
@AravSrinivas
@animesh_garg
Link:
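The dense-connection idea described above can be sketched in a few lines (my own minimal example, not the D2RL code; the function and variable names are invented): the raw state input is re-concatenated into every hidden layer of the policy/Q network, so added depth no longer starves later layers of the input signal.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Sketch of a densely-connected MLP: the input x is concatenated
# back into each hidden activation before the next layer.
def dense_mlp(x, weights):
    h = x
    for W in weights[:-1]:
        h = relu(h @ W)
        h = np.concatenate([h, x], axis=-1)  # dense skip from input
    return h @ weights[-1]                    # linear output head

rng = np.random.default_rng(0)
state_dim, hidden, out_dim = 8, 32, 2
W1 = rng.normal(size=(state_dim, hidden))
W2 = rng.normal(size=(hidden + state_dim, hidden))  # widened by skip
W3 = rng.normal(size=(hidden + state_dim, out_dim))
y = dense_mlp(rng.normal(size=(5, state_dim)), [W1, W2, W3])
```

Note the later weight matrices are widened by `state_dim` to accept the skip connection; that is the only architectural change relative to a plain MLP.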
Sometimes you gotta give it to TAs.
The guy below has a plan for running a discussion section in case of a nuclear attack; COVID-19 seems less harrowing!
"No matter what happens, I will not give up on you" 🤯
This needs to be said out loud, particularly by gatekeepers of AI/ML academia (Profs, advisors, mentors, reviewers, SACs, ACs)
Novelty for the sake of it is not a virtue!
Many students try to be different for the sake of novelty, often sacrificing utility in the process!
I still remember I marveled at how simple and elegant the Lucas-Kanade tracker is when I first learned about it as a grad student. Here is a fun story about it.
"Newness itself is not a virtue, usefulness is." - Takeo Kanade
Interested in how "Deep Learning meets Differential equations"?
Join us for
@iclr_conf
workshop on Apr 26.
Streaming and discussion platform details to be posted on workshop website.
@AnimaAnandkumar
@TanNguyen689
Excited to be at
#ICRA2019
Best Paper Award talk
Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks
Paper:
Video:
Every once in a while authors are bold enough to break the mold and surprise reviewers.
a 4 page paper on the effectiveness of patches in deep learning!
tl;dr: we had an idea and it works damn well!
grabbing🍿to see reviews!!
Exciting week in robot manipulation!
After the Berkshire Grey IPO, Google deems that its effort is ready to move out of X.
Intrinsic, a new alphabet entity to focus on industrial robotics.
Rather exciting time to be working in robot manipulation.
🦾
Learning causal graphs that capture physical systems has high potential yet remains challenging!
Check out End-to-End Causal Discovery from videos
Site:
Paper:
w/
@YunzhuLiYZ
@AnimaAnandkumar
, A.Torralba, D. Fox
Robot learning from BC based models has really come a long way, and continues to surprise me.
The complexity of real world tasks achievable with BC based methods on real data sidesteps the challenges in RL, particularly the sim2real aspects!
The new release of ALOHA Unleashed…
Perks of working at
@NvidiaAI
;) arguably the most kick-ass simulator is in-house and now for everyone else as well. Going to be at
#NeurIPS2018
in case you are interested in chatting about opportunities in ML, Vision, and Robotics.
At
#NeurIPS2018
we announced that PhysX, the world’s most popular physics simulation engine, is now open source. Robotics researchers can easily train machines in realistic environments.
CNNs are biased towards high-frequency textural information.
New work on fixing CNN's over-reliance on texture through a curriculum that exposes texture slowly.
Results in better features that generalize both to new datasets and to new tasks.
Paper:
If you’re looking for a paper for your Friday readings, maybe check out our new work!
Joint work with
@hugo_larochelle
&
@animesh_garg
!
If you want to improve your CNN but don’t want to add trainable parameters or add any regularization loss, give our paper a try! 🙂
A leading car company makes a humanoid robot that appears to be ahead of its time, and hopefully rallies many researchers to work on the problem. This is groundbreaking
...only that this tweet is a few years too late
I met Jensen in my first week at Stanford, and admittedly, did not realize the long game he was driving Nvidia towards!
Indeed the world has come a long way since but Jensen maintains his conviction to drive AI at
@nvidia
and broadly.
I am beyond excited to be building the PAIR group at
@ICatGT
working on ML and Robotics.
@mlatgt
&
@GTrobotics
form a nexus that will help us create the next wave in AI Robotics.
Thanks Nathan Deen for the kind praise.
The next 10 to 12 years will see a boom for advancement in robot manipulation, says
@animesh_garg
. The new assistant professor has chosen
@GeorgiaTech
as the place he wants to be during this crucial time.
@GTrobotics
@mlatgt
I talked about Structure in Reinforcement Learning for Robotics at
#ICRA2022
workshop on Behavior Priors
tl;dr:
Structured Biases in DL improve both efficiency & generalization. Robot Learning / RL needs new ones!
Slides:
Video:
We have been running an AI in Robotics (AIR) Reading Group.
PIs get so many speaking opportunities, while students remain in the background.
AIR is a platform for students to network beyond their immediate circle and gain speaking experience.
For the students, by the students
ARR is now accepting submissions! Please see for an overview of the submission form and link to the submission site. Submit by 5/15 to be eligible for
@emnlpmeeting
!
#NLProc
We have been working on conditional video generation for a while and even have a paper in the upcoming CVPR.
However, the results in this paper are just amazing!
Now I can go revive my dream of being a TikTok celeb!
I am very excited about the potential of high perf simulation in enabling AI Robotics.
Looking forward to providing an overview of things we have been developing in the last couple of years, and a preview of things to come!
Come check it out: Nov 8 2:30pm EST.
Need a helping hand in the lab? Tired of manually doing tedious chemistry experiments over and over?
Meet ORGANA 🤖 🧪 – a modular and user-friendly robotic lab assistant that can interact, plan and automate a number of chemistry experiments!
This work is accepted at
@NeurIPSConf
#NeurIPS2020
Come talk to us:
Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3
#874
Paper and Talk at
Code is now out as well
Are you interested in emerging questions in perception for manipulation? Consider our workshop on "Visual Learning and Reasoning for Robotic Manipulation" at
#RSS2020
@RoboticsSciSys
.
Work-in-progress ideas equally welcome as are polished ones!
Interested in Exploration in RL with image based inputs?
Self-supervised goal reaching requires committed exploration for long-horizon tasks.
LEAF: Latent Exploration Along the Frontier with reachability estimation
Paper:
@florian_shkurti
@therealhomanga
Grasping with multi-finger hands is so hard!
Sadly, sampling-based data scaling doesn't work 😭
Our
@eccvconf
paper presents Grasp'D!
Diff Sim for exploring in full grasp space with surface contacts.
It works for both robot and human hands without imitation data
Thread 👇
This is mind-blowing that this patent application is even possible and passed internal checks at
@GoogleAI
. This is neither truly novel nor unique. And yet, just because someone else did not patent this, someone tries to patent not an idea but an entire field 🤣 🤦🏻‍♂️
Long-term reasoning needs an understanding of continuous changes in the world.
"What happens if I open the door"
Action Concept Grounding Networks learn these semantics
@GnosisYu
W.Chen
@SMEasterbrook
@UofTRobotics
@VectorInst
Must feel like end of an era at FAIR with Ross (
@inkynumbers
) going to AI2 and Kaiming going to MIT
Both Ross and Kaiming have taught the community the need for good systems and simplicity in research, especially at scale.
Incredibly excited to announce that Ross Girshick (
@inkynumbers
) will be joining the PRIOR team
@allen_ai
!
Ross is one of the most influential and impactful researchers in AI. I'm so honored that he is joining us, and I'm really looking forward to working with him.
Using LLMs for code generation allows a very intuitive and effective way to perform feedback guided multistage planning for robotics.
Come chat with me and
@Ishika_S_
about ProgPrompt on Thurs Jun 1st - Poster Hall, 3-4.40pm, Pod 10 at
#ICRA2023
Super excited to share our work ProgPrompt! We show how LLMs can be used for situated robot planning by prompting them with pythonic code.
abs:
project page:
[1/9]
Great research, great food, and you get to stay after graduation! Most tech jobs that one may want exist in Toronto. (3rd after SF and NYC)
students shopping for grad school and postdocs should take note 😉
@CadeMetz
does a profile in
@nytimes
Isaac Gym blog
@NVIDIAAI
Using just one A100 GPU, Isaac Gym achieves the same performance in ~10 hours, compared to 30 hours on 6000+ CPUs.
A single GPU outperforming an entire cluster by a factor of 3x
Reel of current envs in IG:
Isaac Gym -
@NVIDIAAI
physics simulation environment for reinforcement learning research (preview Release)
- End-to-End GPU accelerated
- Isaac Gym tensor-based APIs for massively parallel sim
Also get in touch for potential internships to flex in Gym!
Passionate about Math 🧮, CS 💻, AI 🧠, and Robots 🦾? This season, I am looking for my first cohort of graduate students 🎓 at the fantastic
@UofTCompSci
🇨🇦 ❤️💻. Find out more at and apply by Dec 1st.
Researchers at NVIDIA have built a robot that automatically adapts to different terrains, helping delivery robots and other autonomous machines function more effectively in their environments. See this technology in action.
#GTC20
Many schools have dropped (or made optional) the GRE requirement -- a step in the right direction.
btw GRE is optional at
@UofTCompSci
as well!
It is an undue burden for non-native speakers to memorize words far too recondite for prosaic usage!
UC Berkeley
MIT
Stanford
CMU
UIUC
U. of Washington
Cornell
Georgia Tech
Princeton
UT Austin
Michigan
Wisconsin
UCSD
Harvard
UMD
UPenn
Purdue
UMass Amherst
NYU
NEU
UChicago
And I hear more are coming. I'll add more if people reply.
Interested in Robotics Engineering in research?
We are hiring an Engineering Tech in Robotics to help with robotics education content + open-sourcing robotics frameworks + robot learning algorithms!
DM for info & retweet🙏
@UofTCompSci
@UofTRobotics
@UTM
Simulation is the data factory for robotics.
Yet, we seem to only use it for scale!
Scale is not all you need, or at least not the only ingredient. Algorithmic innovation matters🛠️
So what is beyond vectorized physics?
I provide a perspective on using additional information…
A fresh look at low-cost mobile manipulation which simplifies the mechanism yet seems to achieve a broad set of tasks
Ex-Googler's Startup Comes Out of Stealth With Beautifully Simple, Clever Robot Design.
Looking forward to interacting with friends from MILA, and talking about recent work on my series on Generalizable Autonomy
Part I was last year at the MIT Deep Learning Class.
Hopefully this would be a worthy part-II
same old problem, all new methods!
Come talk to us about SlotDiffusion - an object-centric Latent Diffusion Model (LDM) designed for both image and video data
🗓️ Wed, Dec 13, 10:45
📌 Poster
#611
Hall B1+B2 (level 1)
The students couldn't be here but the advisors (
@igilitschenski
) are equally fun to chat with!
Join us on Nov 8 for the UofT free 1-day Robotics Institute AI virtual symposium. Strategize with academic leaders closing the gap between shop floor reality & practical robot & data platforms in retail & manufacturing. Details & Registration:
Unsupervised disentanglement is very difficult to learn. However, with ~1% labels, semantic alignment can be achieved.
Check out our paper on semi-supervised disentanglement tomorrow at
#ICML2020
are you a student from Ukraine facing program interruption, apply to
@UofTCompSci
Summer Program for Students from Ukraine. (
@VectorInst
,
@UofT
)
All expenses covered and quick turnaround.
📅 Apply by Apr 8, a decision by Apr 13.
🙏spread the word🙏
The intricacy, image quality, and spatiotemporal consistency of the Sora-generated videos are mesmerizing.
The problem of conditional video generation has been around, but this feels like a leap!
The video below is generated with a prompt
"Reflections in…
This is so reminiscent of India winning cricket WC in 2011 for
@sachin_rt
, so it goes for Messi in football (
@TeamMessi
)
The little master could not have retired without a WC title
NVidia Robotics Research lab is now open for business. Talented team of folks having fun with all kinds of robots. Nothing more a roboticist could ask for! Thx Jensen for the new venture in
@NvidiaAI
If you are interested in joining my group as a Masters or PhD student, apply here . Deadline December 1st. I recommend you mention this fact in your research statement. Join the
#selfdriving
revolution!
#gradschool
#AI
This is a very exciting direction of my research: injecting latent inductive biases for better meta-learning. Go check out the poster, and if interested in working on it, apply to work with me at
@UofTCompSci
,
@VectorInst
or
@NvidiaAI
Telling a good story is as important as doing the research!
🎥🎙️🎬
Research communication requires effort and planning.
It is hard to expect students to also learn all the media-editing tools.
An amazing HowTo make a Research Video by
@martoskreto
👏
New paper on Goal-Based Imitation from Third Person Videos.
Motion reasoning that combines task & motion planning to resolve semantic ambiguity in demonstrator intent, outputting symbolic goal representations from video
Paper:
When you try to open a new door, do you try to yank it up? Likely, no.
Why should your robot continue to do so!
Check out our
#ICRA2021
paper on learning action spaces for efficient contact-rich manipulation.
paper:
Google just released Gemma, a new set of open LLMs (2B & 7B)
Reported perf tops Llama2 (both at 7B & 13B scales) and Mistral-7B! 🎉
(Comparisons with Mixtral 8x7B would be valuable both with and w/o MoE for Gemma)
Importantly, they are free to use…
It’s always great to visit MIT
This time for an intro-to-robot-learning lecture in a deep learning short course
Thanks to Alexandre Amini and
@apsoleimany
for inviting me
Super excited to share our work on video completion at
#ECCV2020
!🤩Our method seamlessly removes objects, watermarks, or expands field-of-view from casually captured videos.
Paper:
Project:
With
@gaochen315
, Ayush, and
@JPKopf
Latest paper with collaborators at
@NvidiaAI
on High-quality, long-range video interpolation, and extrapolation through unsupervised latent structure inference followed by a temporal prediction.
Paper:
K. Shih,
@aysegl_dndr
, R. Pottorff, A. Tao,
@ctnzr
@PrasoonPratham
@Twitter
I did not tag you because I did not want to poke fun at you, and my own reach is smaller and perhaps disjoint from yours.
Broadly I appreciate you being vocal about learning ML early on and you being an example.
although basic math should not be equated with ML
I am deeply saddened by the fact that so many students from
@UofT
and other Canadian institutions were on the ill-fated flight.
I am honored that the University has come out publicly in support of these tragic times.
Many dynamic processes, including many in robotics & RL, involve a set of interacting subprocesses, which can be decomposed into locally independent causal mechanisms.
Solution: Counterfactual Data Augmentation (CoDA)
Paper:
w/
@SilviuPitis
E. Creager
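The CoDA idea above admits a tiny illustration (my own toy example, not the paper's code; `coda_swap` and the state layout are invented): when two subprocesses do not interact during a pair of transitions, their transition factors are locally independent, so we can stitch a new, valid "counterfactual" transition from two observed ones by swapping the independent components.

```python
import numpy as np

# Swap the masked (locally independent) state dims of transition t2
# into transition t1, producing an augmented counterfactual transition.
def coda_swap(t1, t2, mask):
    s, s_next = t1
    s2, s2_next = t2
    new_s = np.where(mask, s2, s)
    new_s_next = np.where(mask, s2_next, s_next)
    return new_s, new_s_next

# Two transitions where dims 0-1 (agent) and dims 2-3 (a far-away
# object) evolve independently; swapping the object dims yields a
# transition the agent never actually experienced but that is still
# consistent with the environment's causal mechanisms.
t1 = (np.array([0., 0., 5., 5.]), np.array([1., 1., 5., 5.]))
t2 = (np.array([9., 9., 7., 7.]), np.array([9., 9., 8., 8.]))
mask = np.array([False, False, True, True])
aug = coda_swap(t1, t2, mask)
```

The hard part in practice, which this sketch assumes away, is discovering the independence mask from data rather than hand-specifying it.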
Want to build the next generation in general-purpose object manipulation to help with lab automation?
@A_Aspuru_Guzik
and I are hiring a joint postdoc in robotic manipulation and 3D object perception
@acceleration_c
@VectorInst
@UofTRobotics
+ email us