Just graduated from @UCBerkeley, now at @Tesla! In my PhD, I showed the effectiveness of pure learning for:
-Locomotion ()
-Dexterous manipulation ()
-Universal drone flight ()
Next up: humanoids! (@Teslasbot)
A sneak peek of what we have been up to at @Tesla with the humanoid robot -- @Tesla_Optimus!
A robust controller is critical to real-world deployment -- we get it as a natural consequence of using end-to-end learning with large-scale data!
Optimus can now sort objects autonomously 🤖
Its neural network is trained fully end-to-end: video in, controls out.
Come join us to help develop Optimus (& improve its yoga routine 🧘)
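For the curious, here is a minimal sketch of what a "video in, controls out" policy can look like in PyTorch. The layer sizes, frame count, and joint count are illustrative assumptions, not details of the actual Optimus network.

```python
import torch
import torch.nn as nn

class VideoToControls(nn.Module):
    """Toy end-to-end policy: a stack of camera frames in, joint commands out.

    Purely illustrative -- the architecture and dimensions are assumptions,
    not the actual Optimus network.
    """

    def __init__(self, num_joints: int = 28, frames: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(                  # per-clip visual encoder
            nn.Conv3d(3, 32, kernel_size=(frames, 8, 8), stride=(1, 4, 4)),
            nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=(1, 4, 4), stride=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(64, num_joints)          # joint position targets

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, 3, frames, height, width) -> (batch, num_joints)
        return self.head(self.encoder(video))

policy = VideoToControls()
controls = policy(torch.randn(1, 3, 4, 96, 96))        # one 4-frame RGB clip
```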
Check out our robot dog walking on stepping stones in my living room! Can also do stairs, construction sites, slippery slopes, etc. -- all with just a single onboard RGBD camera, onboard compute, and no maps of the environment!
Our robot dog can go up and down stairs, walk on stepping stones where even a single bad foot placement would lead to a disastrous fall, and traverse rough terrain. All with just a single onboard RGBD camera & no maps.
Beautiful!
Nice work @UnitreeRobotics!
They are on track to become the closest (and potentially the only) competitor to the Spot robot dog from @BostonDynamics!
For those who expect a Boston Dynamics comparison: we have had jumping robots for years, and yet we don't see them around. Robustness is a much harder problem.
The % of people who have seen this robot live is much higher than for BD, and that is true progress!
While we have made progress towards replicating animal agility in robots, legs aren't just for walking; they are extended arms!
Our #ICRA 2023 paper enables legs to act as manipulators for agile tasks: climbing walls, pressing buttons, etc.
Excited to present this real-world learning result where we start with a blind walking policy, and with just 30 minutes of real-world experience, learn to use vision to walk on complex terrains!
We train a robot 🤖 to traverse complex terrains with a monocular RGB camera from its own real-world experience!
To do so we propose Cross-Modal Supervision (CMS), an algorithm to supervise vision using proprioception.
Project Page:
1/5
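To make the CMS idea concrete, here is a minimal sketch under assumptions: a small vision network is regressed onto labels derived after the fact from proprioception, so no human annotation or simulation is needed for vision. The architecture and the 8-dim terrain-descriptor target are hypothetical, not the paper's exact setup.

```python
import torch
import torch.nn as nn

# Hypothetical monocular RGB encoder predicting a terrain descriptor.
vision_net = nn.Sequential(
    nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 6 * 6, 8),   # 64x64 input -> 8-dim descriptor
)
optimizer = torch.optim.Adam(vision_net.parameters(), lr=1e-4)

def cms_update(image: torch.Tensor, proprio_label: torch.Tensor) -> float:
    """One supervised step: vision prediction vs. proprioception-derived label."""
    loss = nn.functional.mse_loss(vision_net(image), proprio_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. images logged while walking, labeled later from joint/IMU history
loss = cms_update(torch.randn(16, 3, 64, 64), torch.randn(16, 8))
```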
Let's think about humanoid robots beyond carrying boxes. How about having the humanoid come out the door, interact with humans, and even dance?
Introducing Expressive Whole-Body Control for Humanoid Robots:
See how our robot performs rich, diverse,…
Excited to share our follow-up work on RMA, which achieves agile locomotion behaviors without any motion or imitation priors!
Check out our high-speed gait with an emergent flight phase!
Excited to report our progress on agile locomotion!
In our CoRL'21 paper, we simplify RMA rewards to just an energy term motivated by biomechanics. Optimal gaits *emerge* across speeds w/o *any* priors, like high-speed galloping with an emergent flight phase!!
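A rough sketch of what an energy-motivated reward can look like; the coefficient and the |torque * joint velocity| power proxy are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def locomotion_reward(forward_vel, target_vel, torques, joint_vels,
                      energy_weight=0.04):
    """Velocity tracking minus a biomechanics-style energy penalty.

    |torque * joint velocity| approximates instantaneous mechanical power;
    the weight here is illustrative, not the paper's value.
    """
    tracking = -abs(forward_vel - target_vel)
    energy = np.sum(np.abs(torques * joint_vels))   # total mechanical power
    return tracking - energy_weight * energy
```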
Code released for:
[1] RMA: Rapid Motor Adaptation for legged robots:
[2] Learning Visual Locomotion with Cross-Modal Supervision:
[1] trains an adaptive blind policy, [2] continually improves its visual system in the real world
@TeslabotOTA @UCBerkeley @Tesla @Teslasbot
Pure learning informally implies using large-scale data to learn controllers end-to-end. These controllers (neural nets in my case) go directly from sensors to motor positions/torques. My work uses trial-and-error search (reinforcement learning) in sim to learn these controllers.
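As a toy illustration of that recipe (not the actual training stack), here is a trial-and-error loop that trains a sensor-to-motor-target network in a hypothetical simulator `SimEnv`, using a simple REINFORCE-style update; the observation/action sizes are assumptions.

```python
import torch
import torch.nn as nn

# Controller mapping raw proprioceptive readings to motor position targets.
policy = nn.Sequential(
    nn.Linear(48, 128), nn.ELU(),    # 48-dim observation (assumed)
    nn.Linear(128, 12),              # 12 motor targets (assumed)
)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def rollout_and_update(env, steps=1000):
    """Collect one rollout in sim and take a REINFORCE-style gradient step."""
    obs = env.reset()                # env is a hypothetical SimEnv instance
    log_probs, rewards = [], []
    for _ in range(steps):
        mean = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Normal(mean, 0.1)   # fixed exploration noise
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        obs, reward, done = env.step(action.numpy())   # assumed step() signature
        rewards.append(reward)
        if done:
            break
    ret = sum(rewards)                                 # undiscounted return
    loss = -ret * torch.stack(log_probs).sum()         # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```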
We go against the A1 manufacturer's recommendation and make the robot walk on stairs! It can be seen crossing bar stools in my living room and climbing steps almost as high as its shoulder -- see thread for details.
We will demo this at CoRL'22 in New Zealand!
After 3 yrs of locomotion research, we report a major update in our #CoRL2022 (Oral) paper: vision-based locomotion.
Our small, safe, low-cost robot can walk on almost any terrain: high stairs, stepping stones, gaps, rocks.
Stairs for this robot are like climbing walls for humans.
Attending my first in-person conference since the pandemic at #CVPR2022. We gave live demos of our robots during my talk at the Open-World Vision workshop.
The convention center mostly had dull flat ground, so we had to find scraps and be creative with them to build "difficult" terrains!
Excited to share an announcement from MWC: SK Telecom becomes our first mobile carrier partner at Perplexity, marking our expansion into South Korea. All SK Telecom users will soon get access to Perplexity Pro, and SKT will work with Perplexity on many other applications of online LLMs.
@chr1sa @ieee_ras_icra My work uses pure learning in sim with successful real-world deployment on extremely challenging real-world tasks (quadrupeds, bipeds, drones, and multi-fingered hands). And I'm in this photo!!
My work:
@_jameshatfield_ We recently released a paper on walking with a monocular RGB camera. It learns to use vision directly in the real world without external supervision. No simulation is used for vision. Work with @JitendraMalikCV and @antoniloq:
We train a robot 🤖 to traverse complex terrains with a monocular RGB camera from its own real-world experience!
To do so we propose Cross-Modal Supervision (CMS), an algorithm to supervise vision using proprioception.
Project Page:
1/5
@mmitchell_ai Reinforcement Learning from Human Feedback (RLHF), in this case from expert therapists.
Curious to understand your perspective on why you think it's unlikely to work.
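For readers unfamiliar with the mechanics, a toy sketch of the reward-model half of RLHF, under assumptions: a Bradley-Terry-style loss pushes preferred responses above rejected ones. The dimensions and the therapist-preference framing are illustrative.

```python
import torch
import torch.nn as nn

# Reward model scoring response embeddings; trained on human preference pairs.
reward_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

def preference_step(preferred: torch.Tensor, rejected: torch.Tensor) -> float:
    """Push the reward model to score the preferred response higher."""
    margin = reward_model(preferred) - reward_model(rejected)
    loss = -nn.functional.logsigmoid(margin).mean()   # Bradley-Terry-style loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# e.g. batches of (preferred, rejected) response embeddings from raters
loss = preference_step(torch.randn(8, 128), torch.randn(8, 128))
```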
@romeo_sierra0 @UCBerkeley @Tesla @Teslasbot
My work primarily uses reinforcement learning in simulation. For general manipulation tasks, I'd say the next step is to try imitation learning and we will definitely see a wave of imitation learning results in the coming years. That said, I suspect that RL will make a comeback.
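For context, imitation learning in its simplest form is behavior cloning: regress demonstrated actions from observations. A toy sketch under assumed dimensions:

```python
import torch
import torch.nn as nn

# Minimal behavior-cloning policy; observation/action sizes are assumptions.
policy = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 16))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def bc_step(obs: torch.Tensor, expert_actions: torch.Tensor) -> float:
    """One gradient step matching the policy to expert demonstrations."""
    loss = nn.functional.mse_loss(policy(obs), expert_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# e.g. a batch of (observation, expert action) pairs from teleoperation
loss = bc_step(torch.randn(32, 64), torch.randn(32, 16))
```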