To be useful, Optimus has to be not only accurate but also safe, reliable, and fast. Excited to share some of the work our team has been pushing, including early testing in one of the Tesla factories!
Bring it on 2024 💪
Optimus is by far the most exciting and challenging project I’ve worked on! Here’s to unlocking new capabilities and pushing boundaries in robotics and AI.
2023 has been awesome for Optimus.
We’ve moved from an exploratory prototype (Bumblebee/cee) to a more stable, Tesla-designed platform (Optimus Gen-1).
We’ve improved our locomotion stack, frequently walking off-gantry without falls and with a faster, increasingly more…
@DrJimFan
IMO biology would be one of the last disciplines to fall. Collecting data, reproducing results, and testing new algorithms are way **easier** in robotics than in bio/genetics.
We introduce PatchGame, where two agents learn to communicate via discrete compositional symbols in a referential game. Visit our poster tomorrow at #NeurIPS2021 to learn more!
When 🗓: Tue Dec 07, 11:30 AM - 1:00 PM (EST)
Where📍:
#DeepLearning #NLProc
(1/2)
Tiny life update: I'm excited to join the @Tesla_Optimus team. I'm "optimustic" about the future we are building with #AI in #robotics 🤖
If you want to work on teaching humanoid robots to do anything, we are hiring: .
It’s amusing that even today ML courses don’t have a single lecture on “looking at the data,” which is perhaps the single most important thing to do as a researcher/engineer.
@karpathy’s blog to the rescue -
Optimus has a new avatar! Gen 2 is leaner, faster, and more dexterous than Gen 1. What’s even better is the contagious speed and passion of the team building it ❤️. Hope you enjoy the video.
P.S. Don’t forget to watch the stinger at the end!
P.P.S.
📢Check out our #CVPR2021 paper "The Lottery Ticket Hypothesis for Object Recognition" in Session 1 on Monday, June 21, 2021, 11:00 AM - 1:30 PM EDT.
✨We show how to perform object detection/segmentation/pose estimation with 10% of the weights and no drop in performance!
My deep learning journey started with word2vec. My very first DL project was writing a Theano version of word2vec to learn graph embeddings for financial data back in 2014. We have come a long way!
Congrats to all the authors for producing a gem of a work!
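For anyone curious what that first project involved, here is a minimal sketch of the skip-gram-with-negative-sampling idea at the heart of word2vec, in NumPy (a toy illustration with an invented corpus and made-up hyperparameters, not the original Theano code or Google's implementation):

```python
import numpy as np

# Toy skip-gram with negative sampling; corpus and hyperparameters are invented.
rng = np.random.default_rng(0)
corpus = "we have come a long way since word2vec started the journey".split()
vocab = {w: i for i, w in enumerate(dict.fromkeys(corpus))}
V, D = len(vocab), 16
W_in = rng.normal(0, 0.1, (V, D))   # center-word embeddings
W_out = rng.normal(0, 0.1, (V, D))  # context-word embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, k = 0.05, 2, 3  # learning rate, context window, negatives per pair
for _ in range(50):
    for i, w in enumerate(corpus):
        center = vocab[w]
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j == i:
                continue
            # One positive (center, context) pair plus k random negatives
            # (negatives may collide with the true context; fine for a toy).
            pairs = [(vocab[corpus[j]], 1.0)] + [(n, 0.0) for n in rng.integers(0, V, k)]
            for c, label in pairs:
                grad = sigmoid(W_in[center] @ W_out[c]) - label
                d_in = grad * W_out[c]
                W_out[c] -= lr * grad * W_in[center]
                W_in[center] -= lr * d_in
```

The negative-sampling shortcut over a full softmax is exactly what made word2vec scale to huge corpora.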
On behalf of our co-authors Tomáš Mikolov, @ilyasut and Kai Chen, @greg_corrado and I were delighted to accept the #NeurIPS2023 Test of Time Award for the "word2vec" paper (). Thanks to the @NeurIPSConf test of time committee for honoring us with this…
We introduce LilNetX, a framework to train large neural networks that take up a fraction of the disk space and are much faster at inference!
Visit our poster @iclr_conf #iclr2023 in Rwanda to learn more!
When 🗓: May 02, 11:30 AM (Local Time)
Code & Video:
A #CVPR2022 reviewer starts with "Very innovative work showing ..." and follows up with a strong reject coz image/video compression on a real-world dataset is a toy application?🤦♂️ Thx, we'll take the innovation elsewhere 😅
#gradlife #rant #computervision
📢 Join us as we present ASIC at #ICCV2023 on Wed! We propose a method for dense correspondence that DOES NOT need tons of data/3D priors/manual annotations! How do we do it? Check out the 🧶 and visit
Oral: Wed 4:30-6:00 PM
Poster: Wed 2:30-4:30 PM
Web:
#NeRF models that use feature grids, such as #InstantNGP (by @mmalex and colleagues), are amazing since they can be trained in minutes and allow sampling in seconds! But these feature grids can be expensive to store!
SHACIRA can make your NeRFs 40x smaller. Check out our work at #ICCV2023
We introduce SHACIRA, a compression approach for Implicit Neural Representations. We reduce the size of NeRFs by 35x and can also compress images/videos.
Visit our poster @ICCVConference #iccv2023 in Paris on Oct 5th, 10:30am (Local)! Project page with code:
@tim_zaman
I often think it is insane how short my ramp-up time was at Tesla (I could train a model on the cluster on day 1; it took me > 2 weeks as a Google intern and > 1 week as an NVIDIA intern). Thanks for your contributions. You'll surely be missed!
Doing a Ph.D. is like being in New York City:
- Spend the first year re-learning all the basics
- Spend the rest of your life claiming how much better it is
#phdlife
Optimus can now sort objects autonomously 🤖
Its neural network is trained fully end-to-end: video in, controls out.
Come join to help develop Optimus (& improve its yoga routine 🧘)
→
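For a structural sense of what "video in, controls out" means, here is a toy end-to-end policy (the architecture, frame size, and joint count below are all invented for illustration and bear no relation to Tesla's actual stack):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Tiny MLP policy: flattened camera frame in, joint commands out.
FRAME = 32 * 32   # hypothetical grayscale frame, flattened
HIDDEN = 64
N_JOINTS = 12     # hypothetical actuator count

W1 = rng.normal(0, 0.05, (FRAME, HIDDEN))
W2 = rng.normal(0, 0.05, (HIDDEN, N_JOINTS))

def policy(frame: np.ndarray) -> np.ndarray:
    """Map one observation directly to joint commands in [-1, 1]."""
    return np.tanh(relu(frame @ W1) @ W2)

controls = policy(rng.random(FRAME))
```

Training such a policy "fully end-to-end" typically means behavior cloning: regress the outputs against recorded controls, with gradients flowing all the way back to the pixels.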
Even after doing #computervision for so long, I often forget that RGB is just a high-dimensional representation of a *mostly* scalar signal (wavelength).
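To make the point concrete: a single wavelength of monochromatic light already determines an (R, G, B) triple. A rough piecewise-linear sketch (illustrative only; real colorimetry uses the CIE color-matching functions, and the breakpoints below are approximate):

```python
# Approximate RGB for monochromatic light (380-780 nm).
# Piecewise-linear for illustration, not a colorimetric standard.
def wavelength_to_rgb(nm: float) -> tuple[float, float, float]:
    if 380 <= nm < 440:
        r, g, b = (440 - nm) / 60, 0.0, 1.0
    elif 440 <= nm < 490:
        r, g, b = 0.0, (nm - 440) / 50, 1.0
    elif 490 <= nm < 510:
        r, g, b = 0.0, 1.0, (510 - nm) / 20
    elif 510 <= nm < 580:
        r, g, b = (nm - 510) / 70, 1.0, 0.0
    elif 580 <= nm < 645:
        r, g, b = 1.0, (645 - nm) / 65, 0.0
    elif 645 <= nm <= 780:
        r, g, b = 1.0, 0.0, 0.0
    else:
        r, g, b = 0.0, 0.0, 0.0  # outside the visible range
    return (r, g, b)

print(wavelength_to_rgb(650))  # deep red → (1.0, 0.0, 0.0)
```

The "mostly" matters: real scenes mix wavelengths, so the mapping is many-to-one, which is exactly why three channels suffice in the first place.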
Compositional generalization is going to be the next big milestone in #GenAI, and we need better datasets and benchmarks to make progress.
Check out our work Chop-N-Learn at #ICCV2023!
Researchers @umiacs are teaching computers how to recognize and imagine fruits and vegetables in various forms, even as they're being peeled, sliced, or chopped into pieces.
Their work to advance generative AI will be presented at #ICCV2023.
Learn more:
@DhruvBatraDB
Nice benchmark, although I am not sure how you conclude that VLMs are nearly blind! GPT-4V is clearly far better than GPT-4 in all categories.
While we missed #ECCV2022 in person, do check out our work on finding a context-based raster scan order for images or a GIF.
Top: Context-based Neural Space-filling Curve
Bottom: Peano-Hilbert Curve
We present Neural Space-filling Curves, a data-driven approach to infer a context-based scan order for a set of images. Thank you to all my great collaborators: @kamalgupta09, Larry Davis, and @abhi2610.
#ECCV2022
Paper:
Project:
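For reference, the fixed Peano-Hilbert baseline mentioned above can be generated with the classic distance-to-coordinate conversion (this is the standard textbook curve, not the paper's learned, context-based ordering):

```python
def hilbert_d2xy(n: int, d: int) -> tuple[int, int]:
    """Convert distance d along the Hilbert curve to (x, y) on an n x n grid (n a power of 2)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Scan order for a 4x4 image: visits every pixel once, each step moves to a 4-neighbour.
order = [hilbert_d2xy(4, d) for d in range(16)]
```

The locality of this traversal (neighbouring pixels stay close in the 1D order) is what makes it a strong default scan order, and the baseline a learned curve has to beat.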
@taiyasaki
@CVPR
There is a clause in the reviewer guidelines that says "reviewers should not request substantial additional experiments," but @CVPR probably needs to expand on the definition of "substantial".
Excited to be a grad student volunteer for this one-of-a-kind initiative! There is an exciting speaker lineup and posters in Room 202 on Monday morning!
Feel free to DM if you want to chat. 😃
#CVPR2022
@CVPR
First time attending CVPR? Join the CVPR Academy on Monday morning!
#CVPR2022
Program:
Time: June 20, 9:30am - 1:30pm (CDT)
Location: Room 202 & Virtual
@childejc
An under-appreciated fact about factories (not just Tesla's) is that a majority of the tasks are already automated by machines/robotic arms. For the remaining tasks, it is just too inefficient to design/manufacture special-purpose robots.
As ML roles grow, we need scalable ways to test candidates' practical ML skills even before interviews. (CS coding tests don't correlate well with ML skills.)
Introducing — create challenges, invite candidates, see how they do!
Would this be of interest?
Some highlights/notes from @icvss #icvss2022
Day 1 - We started off with a talk from Prof. Andrea Vedaldi on unsupervised learning and how to extract useful representations for various tasks from self-supervised models
I've loved #ICCV since my first one in 1990. In this blog post, I reflect on the last 31 years of ICCV and the field of computer vision. Hopefully you enjoy this on the last day of #ICCV2021. See you in Paris in 2023!
@CSProfKGD
One of our borderline reviews was in all likelihood written by an LLM 🤦♂️. The summary is an almost verbatim copy of the abstract. Strengths/Weaknesses don't have even one specific detail from the paper, only statements such as "analysis is interesting and thought-provoking" or "language is unclear".
Since 1969, Strassen’s algorithm has famously stood as the fastest way to multiply 2 matrices - but with #AlphaTensor we’ve found a new algorithm that’s faster, with the potential to improve efficiency by 10-20% across trillions of calculations per day!
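For 2x2 blocks, Strassen's trick computes the product with 7 multiplications instead of 8; AlphaTensor searches for decompositions of this kind automatically. The textbook version:

```python
# Strassen's 2x2 matrix multiply: 7 scalar multiplications instead of 8.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Applied recursively to block matrices, this gives O(n^2.807) instead of O(n^3); the savings AlphaTensor found come from discovering lower-rank decompositions for specific matrix sizes.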
OpenAI’s chief scientist: expresses curiosity/openness about a mysterious idea, caveats with “may”.
Meta’s chief AI scientist: the certainty of "nope".
Probably explains a lot of the past 5 years.
Dear Meta AI researchers: My email address is sama@openai.com. We are hiring!
While I couldn't attend #ICCV2023 in person (visa woes: 1, sanity: 0 🥲), do visit our posters and say hi to my friends and colleagues presenting these awesome works!
Hit me up if you want to chat about #robotics, #generativemodels, #compression, or just about anything #AI
@keenanisalive
I also love @AaronHertzmann's TED Talk in this context. To paraphrase: technology has transformed the way humans create art multiple times in history, and AI software is yet another way for humans to create art.
#dalle2
If you're interested in Vision Transformers (ViTs) and attending #CVPR2023, check out our work on understanding the role of supervision in ViTs and its impact on various downstream tasks.
tl;dr: check out the 🧵
We’re looking forward to presenting our work “Teaching Matters: Investigating the Role of Supervision in Vision Transformers” next week at #CVPR2023! We’ll be in the Tues-PM poster session at board 321.
Links and some key results below.
@_sakshams_ @kamalgupta09 @abhi2610
[1/5]
Introducing 𝗥𝗧-𝗫: a generalist AI model to help advance how robots can learn new skills. 🤖
To train it, we partnered with 33 academic labs across the world to build a new dataset with experiences gained from 22 different robot types.
Find out more:
📢Interested in applying for a PhD in Computer Graphics? An opportunity to get mentorship from current grad students and postdocs for free! Apply ASAP to get a head start 💯
#graphics #deeplearning #phdchat
🧑🎓📚🏛️🙋
Grad school application season is upon us! If you want to apply to a PhD or Masters program in Graphics (especially if you are from an underrepresented group), this is for you!
Fill out our form to be paired with a mentor who will help you apply:
The future of storytelling is here! A glimpse into @ILMxLAB's approach to the transition from storytelling to "StoryLiving," and a look at one of the greatest disruptors of entertainment in our generation.
I was part of the undergraduate mentorship sub-committee with some amazing peers, and we had a lot of fun chatting, organizing, and mentoring undergrad students.
Please consider joining 🙏 (there are opportunities at all levels and it is not a lot of workload 😅).
#graphics #mentorship
📢📢 Attention, graphics research community! The @siggraph Research Career Development Committee (RC⚡DC) has released its annual call for new members. Please fill out this form if you are interested in joining:
ICVSS - Numbers
4 days to the opening
631 applications from more than 40 countries in 2022
180 selected attendees
15 Keynote Speakers
1 Reading Group
1 Essay Competition
2 Poster sessions
1 Industry meets Students Session
4 Social Events 😉
Cannot wait! ;-)
Next in the #AI essentials learning series, the lead architect of CUDA, Stephen Jones, shows how the #GPU works and connects the dots between physical hardware and parallel computing.