Ashish Profile
Ashish

@ashishkr9311

2,780
Followers
11
Following
9
Media
49
Statuses

Tesla | Ph.D. @ UC Berkeley

Joined April 2016
@ashishkr9311
Ashish
5 months
@Tesla_Optimus team behind optimus (figuratively and literally)!
285
418
4K
@ashishkr9311
Ashish
9 months
Just graduated from @UCBerkeley, now at @Tesla! In my PhD, I showed the effectiveness of pure learning for: -Locomotion () -Dexterous manipulation () -Universal drone flight () Next up: humanoids! (@Teslasbot)
37
41
835
@ashishkr9311
Ashish
5 months
Easily the most beautiful and elegant hardware I have seen. By a margin. Check out our new humanoid build at @Tesla and what it's capable of!
15
26
414
@ashishkr9311
Ashish
8 months
A sneak peek of what we have been up to at @Tesla with the humanoid robot -- @Tesla_Optimus ! A robust controller is critical to real-world deployment -- we get it as a natural consequence of using end-to-end learning with large scale data!
@Tesla_Optimus
Tesla Optimus
8 months
Optimus can now sort objects autonomously 🤖 Its neural network is trained fully end-to-end: video in, controls out. Come join to help develop Optimus (& improve its yoga routine 🧘) →
3K
8K
35K
15
31
335
@ashishkr9311
Ashish
4 months
Such elegance! Gen 2 @Tesla_Optimus
9
15
330
@ashishkr9311
Ashish
3 months
Looking more human by the day! @Tesla_Optimus, @Tesla
18
10
251
@ashishkr9311
Ashish
2 years
Check out our robot dog walking on stepping stones in my living room! Can also do stairs, construction sites, slippery slopes, etc -- all with just a single onboard RGBD camera, onboard compute, and no maps of the environment!
@JitendraMalikCV
Jitendra MALIK
2 years
Our robot dog can go up and down stairs, walk on stepping stones where even a single bad foot placement would lead to a disastrous fall, and rough terrain. All with just a single onboard RGBD camera & no maps.
23
66
620
7
13
112
@ashishkr9311
Ashish
8 months
This is one of the highest compliments you can get on a real robot video -- too good to be true!!
@bgrahamdisciple
B Graham Disciple
8 months
Here is clear and convincing evidence that Tesla faked this robot video. I wonder if any reporters or prosecutors will notice.
160
45
383
5
5
86
@ashishkr9311
Ashish
6 months
Beautiful! Nice work @UnitreeRobotics ! They are on track to become the closest (and potentially the only) competitors to the Spot robot dog from @BostonDynamics !
2
6
63
@ashishkr9311
Ashish
4 months
For those who expect a BostonDynamics comparison: we have had jumping robots for years, and yet we don't see them around. Robustness is much harder. The % of people who have seen this robot live is much higher than for BD, and that is true progress!
1
3
62
@ashishkr9311
Ashish
5 months
Amazing results!! First evidence of real-world walking using sim-to-real RL on humanoids!
@ir413
Ilija Radosavovic
5 months
we have trained a humanoid transformer with large-scale reinforcement learning in simulation and deployed it to the real world zero-shot
95
259
2K
1
6
58
@ashishkr9311
Ashish
1 year
Going beyond locomotion with only legs! Joint work with @xuxin_cheng and @pathak2206
@pathak2206
Deepak Pathak
1 year
While we have made progress towards replicating the agility of animal mobility in robots, legs aren't just for walking, they are extended arms! Our #ICRA 2023 paper enables legs to act as manipulators for agile tasks: climbing walls, pressing button etc.
3
73
358
0
0
27
@ashishkr9311
Ashish
2 years
Excited to present this real-world learning result where we start with a blind walking policy, and with just 30 minutes of real-world experience, learn to use vision to walk on complex terrains!
@antoniloq
Antonio Loquercio
2 years
We train a robot 🐕 to traverse complex terrains with a monocular RGB camera from its own real-world experience! To do so we propose Cross-Modal Supervision (CMS), an algorithm to supervise vision using proprioception. Project Page: 1/5
21
58
331
0
0
27
@ashishkr9311
Ashish
3 months
Wow! Looks super cool!
@xiaolonw
Xiaolong Wang
3 months
Let's think about humanoid robots outside carrying the box. How about having the humanoid come out the door, interact with humans, and even dance? Introducing Expressive Whole-Body Control for Humanoid Robots: See how our robot performs rich, diverse,…
94
213
1K
1
3
22
@ashishkr9311
Ashish
2 years
Excited to share our follow-up work on RMA which achieves agile locomotion behaviors without any motion or imitation priors! Check out our high-speed gait with an emergent flight phase!
@pathak2206
Deepak Pathak
2 years
Excited to report our progress on agile locomotion! In CoRL'21 paper, we simplify RMA rewards with just an energy term motivated by biomechanics. Optimal gaits *emerge* across speeds w/o *any* priors like high-speed galloping with emergent flight phase!!
4
35
247
0
2
20
@ashishkr9311
Ashish
4 months
!
@ir413
Ilija Radosavovic
4 months
hello san francisco
47
42
496
2
0
15
@ashishkr9311
Ashish
1 year
Code released for: [1] RMA: Rapid Motor Adaptation for legged robots: [2] Learning Visual Locomotion with Cross-Modal Supervision: [1] trains an adaptive blind policy, [2] continually improves its visual system in the real world
@antoniloq
Antonio Loquercio
1 year
Just released the code for my legged locomotion project: Excited to see what others can do with it! #robotics #opensource
2
21
124
0
0
15
@ashishkr9311
Ashish
9 months
@TeslabotOTA @UCBerkeley @Tesla @Teslasbot Pure learning informally implies using large-scale data to learn controllers end-to-end. These controllers (neural nets in my case) map directly from sensors to motor positions/torques. My work uses trial-and-error search (reinforcement learning) in sim to learn these controllers.
1
1
11
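The "pure learning" recipe described above -- a neural-net controller mapping sensors directly to motor targets, improved by trial-and-error search in simulation -- can be sketched minimally as follows. This is an illustrative toy, not the actual Tesla/Berkeley pipeline: the environment (tracking a fixed joint posture), the network sizes, and the simple hill-climbing search are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM, HIDDEN = 8, 4, 16  # illustrative sensor/motor dimensions

def init_params():
    # Small two-layer MLP controller.
    return {
        "w1": rng.normal(0, 0.5, (OBS_DIM, HIDDEN)),
        "w2": rng.normal(0, 0.5, (HIDDEN, ACT_DIM)),
    }

def policy(params, obs):
    # End-to-end: sensor readings in, motor position targets out.
    h = np.tanh(obs @ params["w1"])
    return np.tanh(h @ params["w2"])

def rollout(params, steps=50):
    # Toy "simulator": reward the controller for tracking a target posture
    # under drifting sensor noise. Returns average reward per step.
    target = np.full(ACT_DIM, 0.5)
    obs = rng.normal(0, 1, OBS_DIM)
    total = 0.0
    for _ in range(steps):
        act = policy(params, obs)
        total += -np.sum((act - target) ** 2)  # negative tracking error
        obs = 0.9 * obs + 0.1 * rng.normal(0, 1, OBS_DIM)
    return total / steps

def train(iters=200, sigma=0.1):
    # Trial-and-error search in sim: perturb the weights, keep what helps.
    # (A stand-in for a real RL algorithm like PPO.)
    params = init_params()
    best = rollout(params)
    for _ in range(iters):
        cand = {k: v + sigma * rng.normal(0, 1, v.shape)
                for k, v in params.items()}
        score = rollout(cand)
        if score > best:
            params, best = cand, score
    return params, best

if __name__ == "__main__":
    params, score = train()
    print(f"avg reward after search: {score:.3f}")
```

In the real systems referenced in the thread, the simulator is a physics engine, the search is large-scale reinforcement learning, and the trained policy is deployed zero-shot on hardware; the structure of the loop is the same.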
@ashishkr9311
Ashish
2 years
We go against our A1 robot's manufacturer's recommendation to make it walk on stairs! It can be seen crossing bar stools in my living room and climbing steps almost as high as the robot's shoulder -- see thread for details. We will demo this at CoRL'22 in New Zealand!
@pathak2206
Deepak Pathak
2 years
After 3yrs of locomotion research, we report a major update in our #CoRL2022 (Oral) paper: vision-based locomotion. Our small, safe, low-cost robot can walk almost any terrain: high stairs, stepping stones, gaps, rocks. Stair for this robot is like climbing walls for humans.
34
159
897
0
1
8
@ashishkr9311
Ashish
9 months
@seabassreyes @UCBerkeley @Tesla @Teslasbot I think it's a great testbed to understand motor control and coordination with multimodal inputs!
0
0
7
@ashishkr9311
Ashish
2 years
Join us at #CVPR2022 for the demo!
@pathak2206
Deepak Pathak
2 years
Attending first in-person conf since the pandemic at #CVPR2022. We gave live demos of our robots during my talk at Open-World Vision workshop. The convention center mostly had dull flat ground, so we had to find scraps and be creative with them to build "difficult" terrains! 😅
2
17
215
0
0
6
@ashishkr9311
Ashish
9 months
@FahadAlkhater9 @UCBerkeley @Tesla @Teslasbot Thanks! That work operationalized the RMA algorithm for in-hand rotation and was led by @HaozhiQ. He has more interesting stuff coming soon I believe!
0
1
6
@ashishkr9311
Ashish
3 months
One of the fastest moving startups I have witnessed -- in both tech velocity and product reach!
@AravSrinivas
Aravind Srinivas
3 months
Excited to share an announcement from MWC: SK Telecom, as our first mobile carrier partnership at Perplexity and our expansion in South Korea. All SK Telecom users will soon get access to Perplexity Pro, and SKT will work with Perplexity on many other applications of online LLMs.
36
17
608
0
0
5
@ashishkr9311
Ashish
1 year
@chr1sa @ieee_ras_icra My work uses pure learning in sim with successful real-world deployment on extremely challenging real-world tasks (quadrupeds, bipeds, drones and multi-fingered hands). And I'm in this photo!! My work:
1
0
4
@ashishkr9311
Ashish
1 year
@_jameshatfield_ we recently released a paper on walking with a monocular RGB camera. It learns to use vision directly in the real world without the use of external supervision. No simulation is used for vision. Work with @JitendraMalikCV and @antoniloq :
@antoniloq
Antonio Loquercio
2 years
We train a robot 🐕 to traverse complex terrains with a monocular RGB camera from its own real-world experience! To do so we propose Cross-Modal Supervision (CMS), an algorithm to supervise vision using proprioception. Project Page: 1/5
21
58
331
0
0
2
@ashishkr9311
Ashish
8 months
@mmitchell_ai Reinforcement Learning from Human Feedback (RLHF) -- in this case from expert therapists. Curious to understand your perspective on why you think it's unlikely to work.
1
0
2
@ashishkr9311
Ashish
9 months
@KyeGomezB @UCBerkeley @Tesla @Teslasbot Will post this soon on my webpage!
1
0
1
@ashishkr9311
Ashish
9 months
@romeo_sierra0 @UCBerkeley @Tesla @Teslasbot My work primarily uses reinforcement learning in simulation. For general manipulation tasks, I'd say the next step is to try imitation learning and we will definitely see a wave of imitation learning results in the coming years. That said, I suspect that RL will make a comeback.
0
0
1