Otmar Hilliges Profile
Otmar Hilliges
@OHilliges

2,122 Followers · 186 Following · 61 Media · 143 Statuses

Full Professor of Computer Science @ETH Zurich. Working on human-centric #ComputerVision.

Zürich, Switzerland
Joined March 2022
@OHilliges
Otmar Hilliges
5 months
In 2022, I finished 2 half marathons, 2 MTB races, and my computer vision research group was firing on all cylinders. In 2023 our lives were turned upside down by severe #LongCovid and #MECFS . 1/
155
308
1K
@OHilliges
Otmar Hilliges
11 months
Some of you may have seen this already 😀 I'm still very excited about Vid2Avatar, which we will present at #CVPR2023 this week. We propose a method to reconstruct detailed 3D avatars from monocular in-the-wild videos via self-supervised scene decomposition. 🧵👇 1/
21
249
1K
@OHilliges
Otmar Hilliges
1 year
Warning: personal and emotional thread. Today is #MEAwarenessDay and we are heartbroken 💔. Our smart, athletic, ball-of-energy 10 year old is now bedridden and unable to stand or walk since January due to #LongCOVID and #MECFS . #LongCovidKids 👇🧵 1/
34
332
900
@OHilliges
Otmar Hilliges
5 months
I am painfully aware that we are only two of 60+ million patients with #LongCovid ( #pwLC ) and tens of millions with #MECFS ( #pwME ). Yet despite these numbers, there has been zero public research funding and hence no cure, no therapy, and no quality-of-life care. 4/
9
55
376
@OHilliges
Otmar Hilliges
5 months
To make matters worse, our younger son, who also has #LongCovid , has been unable to stand or walk since January, and has been too sick to attend school in over a year. #LongCovidKids 3/
8
63
343
@OHilliges
Otmar Hilliges
5 months
If you want to give some hope to people suffering from these conditions, please consider donating to any of the research foundations below. They do amazing work. You can also help by spreading the word. @polybioRF @OpenMedF @MECFSResearch #chronicillness #nichtgenesen
9
51
327
@OHilliges
Otmar Hilliges
5 months
Today I spend nearly 24/7 lying in bed in a quiet, darkened room. Even the smallest activity, such as eating, can cause me to crash (a rapid worsening of symptoms). 2/
8
36
287
@OHilliges
Otmar Hilliges
1 year
Sad and infuriating reality: 3 years in, and most medical staff remain ill-informed or even unwilling to acknowledge #LongCOVID . Grateful for the few doctors and therapists like @SusScro58355800 who are trying to help. It's time for a change. 4/
4
23
164
@OHilliges
Otmar Hilliges
1 year
Last summer, months after Covid, he repeatedly developed headaches and felt ill, then suddenly got better. Specialist after specialist dismissed his symptoms, urging him to return to school and exercise. Little did we know, this triggered #PEM , worsening his condition. 2/
2
29
160
@OHilliges
Otmar Hilliges
1 year
In our latest TPAMI paper we introduce FastSNARF! An efficient deformer for non-rigid shapes, represented as neural fields (SDF, NeRF, etc.). It's a 1:1 replacement for our previous work, SNARF, but 150x faster! 🚀⚡ #FastSNARF #NeuralFields #NeuralAvatars 🧵👇 1/6
@ait_eth
AIT Lab
3 years
#ICCV21 Session 9 “SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes” by @XuChen71058062 , Yufeng, @Michael_J_Black , Otmar, Andreas @AutoVisionGroup homepage: code: paper:
1
5
15
1
30
152
@OHilliges
Otmar Hilliges
1 year
It saddens and embarrasses me how comparatively easy it is to secure funding for my own research in computer vision, while in #MECFS researchers struggle to obtain support. May is #MEAwarenessMonth - let’s push for equitable funding and resources. 6/
1
25
151
@OHilliges
Otmar Hilliges
1 year
Awareness for #PEM , #LongCOVID , and #MECFS is crucial to prevent such devastating outcomes. 3/
1
16
149
@OHilliges
Otmar Hilliges
2 years
One step closer to realistic virtual humans: with gDNA we propose a method that generates diverse 3D virtual humans appearing in varied clothing and under full pose control! #CVPR2022 Paper: Video:
2
19
150
@OHilliges
Otmar Hilliges
2 years
Our latest toy is finally online 🎉🚀 Super excited about the upcoming computer vision and graphics research on neural avatars, human state estimation and much more leveraging one of the most advanced volumetric video capture studios in academia! @ait_eth , @mapo1 , @MarkusGross63
@CSatETH
ETH CS Department
2 years
Great opening of the new state-of-the-art volumetric video capture facility to create #4Davatars with exciting fields of application, e.g. in transportation, aging society or health care. @mapo1 @OHilliges @MarkusGross63 @ETH_en
2
15
86
1
23
147
@OHilliges
Otmar Hilliges
2 years
Need to scan and animate yourself? @YufengZzzz will present I M Avatar at #CVPR2022 (oral) - which learns implicit head avatars from monocular videos via correspondence search and differentiable ray-marching. Paper: Project:
2
19
146
@OHilliges
Otmar Hilliges
1 year
Even worse: #MECFS , known since the 50s, yet still no substantial funding or approved medications. With an estimated 17.5 million sufferers pre-pandemic, this is an urgent crisis. It's time to invest real money into basic and translational research. Lives are at stake. 5/
2
22
142
@OHilliges
Otmar Hilliges
1 year
Please help to spread understanding and support for millions - many of them kids and young adults - suffering from these debilitating conditions. Raising awareness is a first step towards better care and research. 💙. #LongCovidKids #LongCovid #MECFS 7/7
9
25
136
@OHilliges
Otmar Hilliges
2 years
Yufeng ( @YufengZzzz ) is about to present our paper on learning implicit morphable head avatars from videos at #CVPR2022 (oral: Image & Video Synthesis Session, 1:30pm). Sadly I can't be there in person but don't fret, Yufeng can still make my Avatar say cheeky things 😆
5
23
137
@OHilliges
Otmar Hilliges
11 months
Vid2Avatar #CVPR2023 : @ChenGuo96 , Tianjian Jiang, @XuChen71058062 , Jie Song and me. More cool demos: Code: Paper: 7/7
3
17
121
@OHilliges
Otmar Hilliges
2 years
For AIs to reason about human interaction with the world, we need generative models that can imagine a plausible future. In our upcoming #CVPR2022 paper, we introduce D-Grasp, an RL-based method that generates physically plausible grasp sequences.
2
16
112
@OHilliges
Otmar Hilliges
11 months
Implicit surfaces are great to reconstruct 3D humans. However, editing is hard because the geometry is represented by a single continuous function. In our upcoming #CVPR2023 paper, we overcome this by combining the advantages of explicit and implicit representations. 🧵👇
1
11
82
@OHilliges
Otmar Hilliges
2 years
The application window for the ETH-CLS PhD program is now open: This is a great opportunity to do cutting-edge research in 3D computer vision and be co-supervised by fantastic advisors at ETH and the MPI ( @Michael_J_Black , @JustusThies ).
1
23
80
@OHilliges
Otmar Hilliges
11 months
Just in time for #CVPR2023 we release Hi4D, a dataset of closely interacting humans in 4D, including 4D textured geometry, multi-view RGB images, registered parametric models, instance segmentation masks in 2D & 3D, and vertex-level contact annotations. 🧵👇
2
9
73
@OHilliges
Otmar Hilliges
7 months
Excited to share our latest collaboration between @ait_eth and @GoogleARVR on very high-resolution face synthesis. @mc_buehler will present this work at #ICCV2023 next week. More info 👇
@mc_buehler
Marcel Bühler
7 months
Introducing "Preface: A Data-driven Volumetric Prior for Few-shot Ultra High-resolution Face Synthesis". TL;DR: Novel views of faces at ultra-high 4K resolution from very few input images. @GoogleARVR @ETH_en @ait_eth #ICCV2023 . See thread below.
8
56
195
0
4
67
@OHilliges
Otmar Hilliges
11 months
For digital humans to come alive they need to be expressive! In our #CVPR2023 paper, X-Avatar, we propose an implicit human avatar model capable of capturing human body poses, hand gestures, facial expressions, and appearance 🕺🏻 1/ 🧵👇
2
14
63
@OHilliges
Otmar Hilliges
11 months
We are excited to announce the HANDS'23 workshop challenge () with AssemblyHands and ARCTIC at #ICCV2023 ! The challenge focuses on hand pose estimation and articulated hand-object reconstruction (Deadline: September 15). See 🧵 for more details.
1
19
54
@OHilliges
Otmar Hilliges
11 months
Building on Fast-SNARF, we take another important step towards real-time neural avatars. Our latest #CVPR paper InstantAvatar proposes a method to reconstruct animatable full-body avatars from a monocular video in less than 60 seconds. ⚡️ #NeuralAvatars #DigitalHumans 🧵👇 1/
2
13
51
@OHilliges
Otmar Hilliges
7 months
AG3D is an important step on our quest towards fully generative models of realistic 3D humans. It is learned entirely from 2D image collections and requires no 3D supervision. To be presented at #ICCV2023 .
@dong_zijian
Zijian Dong
7 months
**Introducing AG3D**: Learning to Generate 3D Avatars from 2D Image Collections. 🔗Full Info: Catch our presentation at @ICCVConference in Paris! 🗓️Date: Thursday 🕰️Time: 14:30 - 16:30 📌Paper ID: 1836 📍Location: Room "Foyer Sud" 088 🔽Details Below:
2
44
146
1
6
51
@OHilliges
Otmar Hilliges
1 year
Last night @emreaksan was awarded the Fritz-Kutter award for his outstanding PhD thesis. Congratulations! Very well deserved, Emre! Proud advisor moment.
7
2
51
@OHilliges
Otmar Hilliges
8 months
Check out our new dataset EMDB for 3D human pose estimation in uncontrolled outdoor environments 👇. To appear at #ICCV2023
@ait_eth
AIT Lab
8 months
We are excited to share EMDB, a novel dataset of 3D human poses for in-the-wild monocular videos, including global trajectories. Data and toolkit code are now available. More details in the thread below. Project Page:
1
19
51
0
8
50
@OHilliges
Otmar Hilliges
2 years
I'm honored to have been awarded an ERC consolidator grant! Looking forward to working with my superstar students @ait_eth on next gen computer vision for collaborative AIs.
2
5
50
@OHilliges
Otmar Hilliges
2 years
@ait_eth The implication: papers are for reading, discussing, ideating - not for counting. To all the junior folks: focus on doing good work, the rest will follow.
0
7
48
@OHilliges
Otmar Hilliges
1 year
Excited to share our upcoming #CVPR2023 paper #PointAvatar that leverages learned, deformable point clouds to create high fidelity 3D facial avatars efficiently from video. 👇
@Michael_J_Black
Michael Black
1 year
Efficiently create accurate and realistic 3D facial avatars that can be animated and lit in new environments. Recent implicit shape models look good but are slow to learn and render. Our #PointAvatar method is high quality and more efficient. Appearing at #CVPR2023 . (1/9)
1
37
178
0
4
47
@OHilliges
Otmar Hilliges
2 years
Excited to be in Prague for #3dv2022 . First in-person conference in close to three years.
1
3
41
@OHilliges
Otmar Hilliges
11 months
Our method generalizes to diverse human shapes, garment styles, and facial features even under challenging poses and complicated environments without requiring any external segmentation. 5/
1
4
41
@OHilliges
Otmar Hilliges
2 years
Congratulations to all #CVPR2022 authors! I'm happy to announce that we also got X/Y papers accepted. Fantastic work by my very talented students at the @ait_eth lab and great collaborators. Stay tuned.
1
2
41
@OHilliges
Otmar Hilliges
2 years
We're looking for ELLIS PhD candidates at ETH Zurich @ait_eth . Topics in human-centric 3D computer vision include 3D pose, shape and appearance estimation and underlying methods such as neural fields, 3D generative models and more.
0
6
40
@OHilliges
Otmar Hilliges
11 months
I would have loved to go to #CVPR2023 this year. Alas, our family’s health situation does not allow for that. For all of you attending: enjoy, stay healthy, and go chat with my students and postdocs (even if fewer than 50% of the authors from @ait_eth got a visa).
2
4
39
@OHilliges
Otmar Hilliges
2 years
On the way to the Swiss computer vision faculty retreat. First physical event since ICCV ‘19? Exciting! #Gstaad
2
0
39
@OHilliges
Otmar Hilliges
2 years
Last year we introduced a principled method to learn articulated neural surfaces from scans (). This year at #CVPR2022 we show how to learn personalized avatars from a single RGB-D sequence: . Great work by @dong_zijian & @ChenGuo96 !
1
9
37
@OHilliges
Otmar Hilliges
2 years
Hand pose estimation often ignores temporal information. In TempCLR, we introduce a time-contrastive learning objective that significantly improves hand pose reconstruction from in-the-wild videos and that improves cross-dataset generalization. #3dv2022 #handposeestimation
1
11
34
@OHilliges
Otmar Hilliges
11 months
Thursday poster presentations from @ait_eth and friends. 👇 AM - X-Avatar & instant avatar:
2
0
24
@OHilliges
Otmar Hilliges
11 months
Once reconstructed, the avatars can be animated using arbitrary input pose sequences, including pose dependent deformations. 6/
1
1
23
@OHilliges
Otmar Hilliges
11 months
In the Schweizer Ärzteblatt, Dr. Strasser calls for better care for #LongCovid and #MECFS patients, as well as for more and better research! 👏🙌🙌. The paradigm shift is urgently needed. @SusScro58355800
1
8
24
@OHilliges
Otmar Hilliges
11 months
We show that solving the problem entirely in 3D - and forgoing the use of 2D segmentation methods - leads to better results overall. We model both the human and the background in the scene jointly, parameterized via two layered neural fields. 3/
1
0
25
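The layered scene decomposition this thread describes can be illustrated with a small numpy sketch (my own toy, not the Vid2Avatar code; all names are hypothetical): two fields share each ray, the total density is their sum, and the human field's share of the sample weights yields a soft human mask without any 2D segmentation.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Front-to-back compositing of density/color samples along one ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                         # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

def render_layered(sigma_human, sigma_bg, colors, deltas):
    """Two layered fields on one ray: densities add, and the human field's
    share of each sample weight gives a soft human mask for free."""
    sigma = sigma_human + sigma_bg
    rgb, weights = volume_render(sigma, colors, deltas)
    human_share = sigma_human / np.maximum(sigma, 1e-8)
    human_alpha = float((weights * human_share).sum())
    return rgb, human_alpha
```

With all density in the human field, `human_alpha` approaches the ray's total opacity; with all density in the background field it is zero - a mask-like signal a separation objective can supervise directly in 3D.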
@OHilliges
Otmar Hilliges
2 years
Unfortunately, Xu ( @XuChen71058062 ) couldn't obtain a visa. However, his method can generate detailed and varied 3D virtual humans. How? Find out at: Poster 167b #CVPR22 ! There will be real humans - I promise 😀. @jinlongyang , @Michael_J_Black , @AutoVisionGroup
1
4
24
@OHilliges
Otmar Hilliges
1 year
I’m proud that two members of the AIT lab @ait_eth , Marcel Bühler @mc_buehler and Xu Chen @XuChen71058062 , have been recognized as outstanding reviewers for #CVPR2023 ! I always say that if you want to receive good reviews, write good reviews.
@CVPR
#CVPR2024
1 year
We sincerely thank all reviewers, area chairs and senior area chairs who contributed their time to #CVPR2023 ! Reviewers who did an outstanding job are recognized here:
3
9
95
1
1
23
@OHilliges
Otmar Hilliges
2 years
I really was super excited about finally seeing everyone in person at #CVPR2022 but in the end I decided not to go this year. Why? Some of my rationale below (1/6)
1
0
20
@OHilliges
Otmar Hilliges
11 months
PhD students at @ait_eth making the best out of the visa debacle
@XuChen71058062
Xu Chen
11 months
Friends and I finally made it to @CVPR without visa!🎉 Just kidding😄We made a fun "group photo" of @ait_eth with generative fill @Adobe .Thanks @OHilliges for shooting😉Truly astonished by #GenerativeAI ! More motivation to research generative 3D human: maybe 3D selfie one day😉
1
12
70
0
0
16
@OHilliges
Otmar Hilliges
11 months
We formulate a global optimization over the background and canonical human model. A coarse-to-fine sampling strategy for volume rendering and novel objectives that cleanly separate the human from the static background yield detailed and robust 3D human geometry reconstructions. 4/
1
0
18
@OHilliges
Otmar Hilliges
11 months
Most existing methods rely on background segmentation in 2D. This can upper-bound the final reconstruction quality due to mislabeled pixels. 2/
1
1
17
@OHilliges
Otmar Hilliges
11 months
Coming up at #CVPR - hierarchical graph neural networks and physical self-supervision lead to neural garment simulation that generalizes across garments, handles varying topologies and models the dynamics of free flowing clothing. 👇
@ArturGrigorev57
Artur Grigorev
1 year
📢📢 Have you been waiting for a garment modeling method that 👕👚👖 needs just one model for all types of garments, 🥼 handles changing topology (e.g. buttons), and 👗 realistically models loose garments? Happy to present our #CVPR2023 paper HOOD. Project:
4
21
133
0
2
15
@OHilliges
Otmar Hilliges
11 months
❄️ ARCTIC Challenge: focused on consistent motion reconstruction. The aim is to reconstruct the 3D surfaces of two hands and of an articulated object in each video frame. Crucially, the hand-object contact must be consistent to explain the object articulation.
1
9
15
@OHilliges
Otmar Hilliges
11 months
@ugoerra Currently, processing takes several hours, but keep in mind this is non-optimized research code. For quasi-real-time, check out #InstantAvatar (also at #CVPR2023 ).
1
1
15
@OHilliges
Otmar Hilliges
2 years
This is big news for @INSAITinstitute , Europe and Computer Vision research in the area. 👏
@INSAITinstitute
INSAIT Institute
2 years
Prof. Luc Van Gool, one of the world’s top AI and Computer Vision scientists, is joining @INSAITinstitute in Sofia. His arrival is made possible by the 6M EUR financial support provided by @SiteGround @CVPR
0
5
23
0
1
12
@OHilliges
Otmar Hilliges
11 months
Today at #CVPR contributions from the AIT lab and collaborators. 👇 AM:
1
1
12
@OHilliges
Otmar Hilliges
11 months
We're also excited to share a novel 3D human dataset - CustomHumans. Our dataset contains over 600 high-quality scans of humans alongside accurately registered SMPL-X parameters. 5/
1
1
11
@OHilliges
Otmar Hilliges
1 year
Code, paper and more is available here: Github: Paper: 3D viewer: 5/6
2
1
12
@OHilliges
Otmar Hilliges
2 years
Luckily, a lot of my students will be there to present their awesome work. So go see their talks and posters and discuss with them. Already looking forward to a virtual #CVPR2022 and an in-person #CVPR2023 . (6/6)
0
0
11
@OHilliges
Otmar Hilliges
1 year
FastSNARF enables efficient training and inference of digital 3D humans. Powered by the latest release of aitviewer @ait_eth , we stream live network outputs in quasi-realtime - and so can you! 4/6
1
3
10
@OHilliges
Otmar Hilliges
1 year
@cjmaddison Thanks, Chris. I was very saddened when I found out you're suffering from post-COVID. Your outspokenness on the issue helped strengthen my resolve to share our story.
0
0
10
@OHilliges
Otmar Hilliges
2 years
Excited to be part of this initiative to build a new world-class AI institute in Europe. I'm happy to hire up to two doctoral students (PhDs) at @INSAITInstitute who will work closely with me and the AIT lab @ait_eth at ETH Zurich.
@INSAITinstitute
INSAIT Institute
2 years
We are excited to launch our world-class AI/CS PhD program (), the first of its kind in Eastern Europe, with @DeepMind PhD fellowships, please share :)
2
24
90
0
4
10
@OHilliges
Otmar Hilliges
2 years
PINA is a tribute to Pina Bausch; it's also the name of our method that creates personalised avatars from RGB-D videos - and makes them dance 🕺💃. Today at #CVPR2022 , Session: 4.2, Poster: 171b @dong_zijian , @ChenGuo96 , Jie Song, @AutoVisionGroup , me
1
2
10
@OHilliges
Otmar Hilliges
11 months
Tomorrow @sammy_j_c will present his work on vision-based handover of objects from humans to robots at #CVPR2023 . Go see the talk and poster. 🦿 More info in the thread 👇
@sammy_j_c
Sammy Joe Christen
11 months
In our upcoming #CVPR2023 highlight paper, we propose the first framework to learn vision-based human-to-robot handovers. This task is challenging because it requires an accurate simulation of humans and a robot that can react to dynamic human movements. 🧵👇
1
4
16
0
1
8
@OHilliges
Otmar Hilliges
11 months
The reconstructed X-Avatars can be driven by motion that has been extracted from online RGB videos. 5/
1
1
7
@OHilliges
Otmar Hilliges
1 year
A fantastic collaboration between ETH Zurich @ait_eth , MPI-IS @MPI_IS , the University of Tuebingen @uni_tue , and NVIDIA @NVIDIA . Team: Xu @XuChen71058062 , Tianjian Jiang, Jie Song, Max Rietmann @why_maxim , Andreas Geiger @AutoVisionGroup , @Michael_J_Black , and myself! 6/6
0
0
7
@OHilliges
Otmar Hilliges
2 years
@ait_eth @mapo1 @MarkusGross63 Also: we're hiring post-docs and PhD students at ETH @ait_eth and INSAIT @INSAITinstitute . Come work with us on the future of human-centric computer vision.
0
1
7
@OHilliges
Otmar Hilliges
11 months
Code, paper, and dataset are now publicly accessible for research purposes! 👨‍💻Code: 📄Paper: 📊Dataset: 🎥Video: 🖼️ Poster: Come chat with us at #CVPR2023
0
2
7
@OHilliges
Otmar Hilliges
2 years
Fantastic work by Andrea Ziani, @zc_alexfan , @mkocab_ and @sammy_j_c ! (first two authors contributed equally) More info and materials:
0
2
7
@OHilliges
Otmar Hilliges
11 months
Code, paper, video, and data at: Joint work by Kaiyue Shen, @ChenGuo96 , @ManuelKaufmann1 , @JuanJos , @JPCValentin , Jie Song, and myself.
0
2
5
@OHilliges
Otmar Hilliges
1 year
How? We replace the MLP and leverage a compact voxel grid to represent the skinning weight field, thanks to its inherent smoothness. Plus, we exploit the linearity of LBS to streamline computations, slashing time without compromising accuracy. 2/6
1
0
6
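As a rough sketch of the recipe in this thread (a hypothetical numpy toy, not the paper's optimized CUDA kernels; names are made up): store the skinning weight field in a dense voxel grid so a query reduces to one trilinear interpolation, and exploit the linearity of LBS by blending per-bone transforms before applying them to points.

```python
import numpy as np

def trilinear_skinning_weights(grid, pts):
    """Look up LBS skinning weights for 3D points by trilinearly
    interpolating a dense voxel grid of shape (X, Y, Z, n_bones).
    `pts` are given in grid coordinates, i.e. in [0, X-1] etc."""
    X, Y, Z, B = grid.shape
    p0 = np.clip(np.floor(pts).astype(int), 0, [X - 2, Y - 2, Z - 2])
    f = pts - p0                                     # fractional offsets
    w = np.zeros((len(pts), B))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                corner = grid[p0[:, 0] + dx, p0[:, 1] + dy, p0[:, 2] + dz]
                coef = (np.where(dx, f[:, 0], 1 - f[:, 0])
                        * np.where(dy, f[:, 1], 1 - f[:, 1])
                        * np.where(dz, f[:, 2], 1 - f[:, 2]))
                w += coef[:, None] * corner          # accumulate 8 corners
    return w

def lbs_transform(weights, bone_mats, pts_h):
    """Linear blend skinning: blend per-bone 4x4 transforms linearly
    (the linearity mentioned above) and apply to homogeneous points."""
    blended = np.einsum('nb,bij->nij', weights, bone_mats)
    return np.einsum('nij,nj->ni', blended, pts_h)
```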
@OHilliges
Otmar Hilliges
2 years
Our method can also be used to correct imperfect labels (e.g., from existing datasets) or predictions from static grasp synthesis methods, and even image-based pose estimates.
1
2
6
@OHilliges
Otmar Hilliges
2 years
Wanting the pandemic to be over and it actually being over, sadly, is not the same thing. My social media stream was full of folks coming back with Covid from #CHI2022 (also in NOLA), some getting stuck in quarantine hotels for significant amounts of time. (2/6)
1
0
6
@OHilliges
Otmar Hilliges
2 years
Today Sammy will present our paper on learning natural and physically plausible human object interaction sequences at #CVPR22 (Session: Faces and Gestures, Poster: 181b). Do stop by. @sammy_j_c , @mkocab_ , @emreaksan , @HwangboJemin Project
0
5
6
@OHilliges
Otmar Hilliges
2 years
Another awesome collaboration with @XuChen71058062 , @Michael_J_Black , @AutoVisionGroup
0
0
5
@OHilliges
Otmar Hilliges
11 months
Explore the InstantAvatar project in depth: Project page: Github: Read our paper: Joint work by Tianjian Jiang, @XuChen71058062 , Jie Song, and myself @ait_eth . 5/5
2
3
4
@OHilliges
Otmar Hilliges
2 years
Also don't forget to stop by during the poster session. More info: @YufengZzzz , @mc_buehler , @vfabrevaya , @Michael_J_Black
1
2
5
@OHilliges
Otmar Hilliges
2 years
@3DVconf @SattlerTorsten Thanks for inviting me! Really enjoyed giving the talk and the discussions.
0
0
5
@OHilliges
Otmar Hilliges
2 years
With the diminished value of social interaction and a #ClimateEmergency going on, I cannot justify this additional long-haul flight (already committed to another US trip this year). (5/6)
1
0
5
@OHilliges
Otmar Hilliges
1 year
In FastSNARF, costly MLP evaluations and LBS calculations are replaced by a single tri-linear interpolation step—lightweight and super fast (18x faster). A custom CUDA implementation provides an additional speed-up factor of 8x! 3/6
1
0
5
@OHilliges
Otmar Hilliges
11 months
And we're not stopping here either! Because Tianjian is awesome, he improved the method since acceptance and integrated it with SAM for accurate in-the-wild segmentation. Now InstantAvatar can reconstruct 3D avatars from monocular in-the-wild videos in just minutes! 4/5
1
0
5
@OHilliges
Otmar Hilliges
2 years
We propose a hierarchical RL-based method that decomposes the task into low-level grasping control and high-level motion synthesis. This method can generate novel hand sequences that approach, grasp, and move an object to a desired location, while retaining human-likeness.
1
1
5
@OHilliges
Otmar Hilliges
11 months
PM - part I:
1
1
2
@OHilliges
Otmar Hilliges
2 years
Anecdotally many of these cases came from social events. Despite being fully vaxxed I would probably stay away from many of these. (3/6)
1
0
4
@OHilliges
Otmar Hilliges
2 years
We supervise via unlabelled in-the-wild videos with a time-contrastive learning objective. We show that this 1) improves hand reconstruction and yields smoother estimates; 2) significantly improves cross-dataset generalization; 3) brings similar hand poses closer together in feature space.
1
2
4
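A minimal numpy sketch of a time-contrastive (InfoNCE-style) objective as the tweet describes it - an illustrative assumption, not the TempCLR training code: frames close in time are treated as positives, all other frames as negatives.

```python
import numpy as np

def time_contrastive_loss(feats, tau=0.1, window=1):
    """InfoNCE-style loss over per-frame features (T, D): frames within
    `window` time steps are positives, all other frames negatives."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # L2-normalize
    sim = f @ f.T / tau                                       # scaled cosine sims
    T = len(f)
    losses = []
    for i in range(T):
        pos = [j for j in range(T) if j != i and abs(j - i) <= window]
        logits = np.delete(sim[i], i)                # drop self-similarity
        log_z = np.log(np.exp(logits).sum())
        for j in pos:
            jj = j if j < i else j - 1               # index after deletion
            losses.append(log_z - logits[jj])        # -log softmax(positive)
    return float(np.mean(losses))
```

Features that keep temporally adjacent frames close yield a lower loss than temporally scrambled ones, which is the signal that smooths estimates over time.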
@OHilliges
Otmar Hilliges
11 months
PM - part II:
0
0
3
@OHilliges
Otmar Hilliges
2 years
This is hard since it requires reasoning about the complex articulation of the human hand and the physical interactions with the object (e.g., collisions, friction, gravity). Even ground-truth labels from existing grasping datasets do not lead to stable grasps.
1
1
4
@OHilliges
Otmar Hilliges
11 months
The workshop is organized by: Hyung Jin Chang, @zc_alexfan , @OHilliges , @tkhkaeio , Yoichi Sato, @mu4yang , @angelayao101 . Winners and prizes will be announced and awarded during the #ICCV2023 HANDS workshop. Come join us!
0
1
4
@OHilliges
Otmar Hilliges
11 months
🚗 AssemblyHands Challenge: the AssemblyHands dataset includes third-person and egocentric images of toy assembly and disassembly, along with 3D hand pose annotations. Participants must estimate 3D hand joints from an egocentric view.
1
1
4
@OHilliges
Otmar Hilliges
11 months
We propose a hybrid representation that incorporates the advantages of parametric meshes and neural fields. A skinned, animatable mesh is used to store local features at each vertex. A global decoder generates high frequency details from these features. 2/
1
0
3
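The hybrid representation could be sketched like this (illustrative only; the actual method presumably interpolates features over mesh faces and uses an MLP decoder, here replaced by a nearest-vertex lookup and a linear map):

```python
import numpy as np

def query_vertex_features(verts, vert_feats, pts):
    """Fetch the local feature stored at the nearest mesh vertex for each
    query point (a crude stand-in for barycentric interpolation on faces)."""
    d2 = ((pts[:, None, :] - verts[None, :, :]) ** 2).sum(-1)  # (N, V) sq. dists
    return vert_feats[d2.argmin(axis=1)]                       # (N, F)

def global_decoder(local_feats, W, b):
    """One shared linear map standing in for the global decoder that turns
    local features into an output value (e.g. high-frequency detail)."""
    return local_feats @ W + b
```

The point of the split: edits move the mesh (and its vertex features) explicitly, while the shared decoder keeps producing detail from whatever features it is handed.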
@OHilliges
Otmar Hilliges
1 year
@ait_eth @mc_buehler @XuChen71058062 and not to forget former lab members Xucong Zhang @xucong_zhang and Wookie Park @swookpark !
2
1
3
@OHilliges
Otmar Hilliges
2 years
We introduce the novel task of dynamic grasp synthesis: given an initial object pose and a static grasp reference, the goal is to move the object to an arbitrary goal position in a human-like, physically plausible way.
1
1
3
@OHilliges
Otmar Hilliges
11 months
To facilitate future research on expressive avatars, we contribute the X-Humans dataset, containing 233 sequences (20 participants), a total of 35,500 frames. It includes high-quality textured scans of expressive human motions and the corresponding SMPL[-X] registrations. 4/
1
0
3
@OHilliges
Otmar Hilliges
2 years
@NicoChauvin74 @YufengZzzz That should be a matter of days once everyone is back from CVPR; stay tuned
0
0
2
@OHilliges
Otmar Hilliges
11 months
AM - part II - HOOD & HI4D:
1
0
2
@OHilliges
Otmar Hilliges
11 months
Our method reconstructs individual actors in dynamic interaction with complete geometry & detailed contact info. Thus we attain 3D/2D instance segmentation masks, body model registrations, and vertex-level contact labels. 4/
1
0
2