Siyu Tang @VLG-ETHZ Profile
Siyu Tang @VLG-ETHZ

@SiyuTang3

7,044 Followers · 493 Following · 20 Media · 162 Statuses

Assistant Professor at ETH Zurich. Working on Computer Vision, Machine Learning, and Digital Humans.

Switzerland
Joined June 2020
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
We actually won the 3DV Best Paper Award for the Grasping Field paper!!! Super proud of my first-year PhD student Korrawe Karunratanakul! Paper:
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
I'm excited to share that our new generative human motion model GAMMA ( #CVPR2022 , with @cnsdqzyz ) has come to life in the exhibition "Motion. Autos, Art, Architecture" at the Guggenheim Museum Bilbao, curated by the Norman Foster Foundation! (1/3)
@SiyuTang3
Siyu Tang @VLG-ETHZ
11 months
Flight canceled. First time unable to attend CVPR due to visa issues since 2012. Being part of the organization team, I've seen the effort made by the organizers to assist with visa processes. The frustration is shared by everyone involved. IRCC is not doing Canada any favors.
@CVPR
#CVPR2024
11 months
For several months, the organizers have actively raised concerns with Canadian immigration authorities (IRCC), government agencies, and politicians. In some cases, we have been successful in helping people obtain visas, but in many cases, our efforts were unsuccessful. (2/4)
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
5 papers accepted at #CVPR2021 . Congrats to my students and all the collaborators. Topics are 1. Learning neural articulated occupancy of people; 2. Piecewise transformation field for 3D human body fitting; 3. Marker-based human motion synthesis;
@SiyuTang3
Siyu Tang @VLG-ETHZ
4 months
I'm very happy to share another recent work: Diffusion Noise Optimization (DNO). We demonstrate how to optimize diffusion latent noise using criterion functions defined in motion space; this serves as a universal motion prior for a wide range of motion-related tasks.
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
Super excited to share MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images! We build human avatars from monocular depths or a single scan efficiently, using meta-learned generalizable and controllable neural SDFs. #NeurIPS2021 Code:
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
SCALE models 3D clothed humans with hundreds of articulated surface elements, resulting in avatars with realistic clothing that deforms naturally even in the presence of topological change! #cvpr2021 Awesome work from @qianli_m !
@_akhaliq
AK
3 years
SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements pdf: abs: project page:
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
We are excited to announce the EgoBody challenge at #ECCV2012 ! The EgoBody benchmark provides pseudo-ground-truth body meshes for natural human-human interaction sequences captured in the egocentric view. Details about the dataset and the challenge:
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
#3DV2021 Oral session 1. Excited to share HALO: a neural implicit representation of human hands that is fully driven by 3D keypoints. A new hand model tailored for interaction modelling. Joint work with Korrawe, @zc_alexfan and @ait_eth . Code is here:
@SiyuTang3
Siyu Tang @VLG-ETHZ
1 year
We are excited to announce the upcoming 3DV 2024 will take place in Davos, Switzerland. Submission: July 31, 2023 Conference: March 18-21, 2024 Location: Davos Congress Centre (the same venue as the annual World Economic Forum) Mark your calendars and join us in Davos! ⛷️🏂
@3DVconf
International Conference on 3D Vision
1 year
📣 Exciting news! 📣 The call for papers is now open for 3DV2024, taking place in the stunning Davos, Switzerland! 📝 Submission: by July 31. 🔗 Details: 🌟 Don't miss the chance to share your research at #3DV2024 🎉 #CallForPapers #Davos #Switzerland
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Thanks for sharing! Our new #iccv oral paper, LEMO, leverages large-scale Mocap datasets to learn powerful motion priors, and enables accurate human motion capture in 3D scenes with a monocular Kinect. Great collaboration between #Microsoft and @CSatETH !
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
#ECCV2022 . Inhabiting the Virtual. We made another step toward synthesizing virtual humans interacting with 3D scenes. The key idea is to learn a joint representation that effectively captures human body articulation, 3D object geometry, and interaction semantics. (1/2)
@_akhaliq
AK
2 years
Compositional Human-Scene Interaction Synthesis with Semantic Control abs: project page: github:
@SiyuTang3
Siyu Tang @VLG-ETHZ
1 year
🥳SDF Studio is online now: a common framework and an open-source repo for implicit surface reconstruction!
@AutoVisionGroup
Autonomous Vision Group
1 year
🎅 Today, we have an early Christmas present for you: SDF Studio. Building on the fantastic @nerfstudioteam code, we have integrated various implicit surface reconstruction techniques in one common framework! More algorithms and results coming soon..
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
Another #3DV2021 paper: 4D Human Body Capture from Egocentric Video via 3D Scene Grounding. Capturing humans from egocentric videos is a key building block for the embodied and immersive future. Code: . Joint work with Miao, @cnsdqzyz and @RehgJim .
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
I will be looking for PhD students working on neural body modeling for 3D human reconstruction and synthesis. If you are interested in the topic, please consider applying for the PhD fellowship @ETH_AI_Center @CSatETH
@arkrause
Andreas Krause
3 years
Calls are out for Ph.D. and postdoc fellowships in #AI at the @ETH_AI_Center at @ETH_en ! Applications close November 30, 2021. Please spread the word! More information at
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
Neural implicit human body representations provide an efficient way to resolve collisions with objects and 3D scenes. For us, COAP ( #CVPR 2022) is an important step forward, an essential tool to capture and synthesize humans moving in and interacting with complex 3D scenes.
@marko_mih
Marko Mihajlovic
2 years
📢 Our generalizable neural implicit body leverages a localized encoder-decoder to model volumetric humans #CVPR2022 COAP is useful for resolving self-intersections and collisions with other objects w/ @psyth91 @aayushbansal @MZollhoefer @SiyuTang3
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
Great opportunities for exciting AI research! I will be looking for Ph.D. students who are interested in the intersection of Computer Vision, Digital Humans, and Egocentric perception.
@ETH_AI_Center
ETH AI Center
2 years
Apply now and start your PhD or Post-Doc at the ETH AI Center to shape the future of #AI and #ML with interdisciplinary AI research and open your career path to academia, industry and start-ups! Apply before Nov 30: @CSatETH @ETH
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
@cnsdqzyz Successfully inhabiting this six-hundred-meter-high vertical city is a paradigm shift in creating autonomous virtual humans.  Flight Assembled City: Human motion model (GAMMA @CVPR22 ): Human body model: SMPL-X from @PerceivingSys
@SiyuTang3
Siyu Tang @VLG-ETHZ
1 month
3DV keynote by @BenMildenhall ; the live stream is here:
@SiyuTang3
Siyu Tang @VLG-ETHZ
5 months
Very excited to share this work. We achieved a high-quality reconstruction of clothed human avatars with disentangled geometry, albedo, material, and environmental lighting from only a monocular video. Excellent and solid work by @sfwang0928 and the team! #IntrinsicAvatar
@AutoVisionGroup
Autonomous Vision Group
5 months
Given a monocular video, IntrinsicAvatar learns animatable clothed human avatars with decomposed intrinsic properties including albedo, material, and geometry. It was a great @ELLISforEurope collaboration with @SiyuTang3 , @sfwang0928 and @anticboz !
@SiyuTang3
Siyu Tang @VLG-ETHZ
1 month
Couldn’t agree more!
@taiyasaki
Andrea Tagliasacchi 🇨🇦🏔️
1 month
The @3DVconf PC. Couldn’t have asked for a better team, really 🥰
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
PTF provides a robust and fully automatic way to fit parametric human models to sparse point clouds, dense scans, and monocular depth frames! Paper ID: 2691. Q&A session: 12:00–14:00 (CET) on June 23. #CVPR2021 . With Shaofei from @CSatETH and Andreas @AutoVisionGroup
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
The power of points!
@_akhaliq
AK
3 years
The Power of Points for Modeling Humans in Clothing pdf: abs: project page:
@SiyuTang3
Siyu Tang @VLG-ETHZ
1 year
Excited to share Mask3D - a new state-of-the-art for 3D instance segmentation on Point Clouds on ScanNet. #ICRA2023 Online demo: . You can test your own point clouds and download the results!
@FrancisEngelman
Francis Engelmann
1 year
Mask3D 🎭 is now at #ICRA2023 , great work @JonasSchultCV ! We use Mask Transformers for 3D Instance Segmentation on Point Clouds ~ 🥇 on ScanNet 📰Paper: 🛠️Project: 👨‍💻Code: @Pandoro89 @orlitany @SiyuTang3
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Check out our new #3dv2020 oral paper: Grasping Field: Learning Implicit Representations for Human Grasps. Time (CET): Friday, 6:00–6:30 and 16:30–17:00. Paper: Video: Code:
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
It was super fun! Glad to be there! I talked about our recent work on capturing and modeling 3D humans. Here are the slides (including an overview of our research):
@Joel_Mesot
Joël Mesot
3 years
Yesterday the @ETH_en session on augmented reality glasses with @SiyuTang3 & @ait_eth was well attended at the Digital Festival @DiFe_Zurich #DiFe21 , after a fascinating address by Professor Ash @ellliottt on #AIsystems earlier in the day #AR #AI #AIglasses
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Going to give a talk on neural bodies and hands for interaction capture and synthesis at the ICCV workshop SoMoF, hope to see you there! 😀 Papers to be covered:
@SiyuTang3
Siyu Tang @VLG-ETHZ
11 months
Excited to attend and present at ICRA for the very first time! Dora’s paper, Interactive Object Segmentation in 3D Point Clouds, is one of the outstanding paper finalists for physical human-robot interaction at ICRA 2023! 🥳
@DoraKontog
TheodoraKontogianni
11 months
Happy to be at #ICRA2023 ! Please come and see our work ( @SiyuTang3 , Konrad Schindler, @ekincelli ) during the poster session (30th May, Pod 12, 15:00-16:40) and the oral presentation (31st May, ICC Auditorium, 10:20-10:30) 📜 👩‍💻
@SiyuTang3
Siyu Tang @VLG-ETHZ
1 year
Want to give a comprehensive overview of a specific topic related to CV at @CVPR 2023? The deadline for submitting a tutorial proposal is fast approaching! (Dec 9, 2022) Details: For any questions, please contact me and Jianxin Wu (tutorial chairs).
@SiyuTang3
Siyu Tang @VLG-ETHZ
1 year
It was a great honor to present our research from VLG at BMVC 2022. Thanks for the invitation!
@C_ReyesAldasoro
Constantino Carlos Reyes-Aldasoro
1 year
Very engaging keynote presentation on virtual humans by Prof Siyu Tang @SiyuTang3 #bmvc2022 @TheBMVA
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
LEAP (4356) provides neural occupancy representations for SMPL bodies. Paper Session 8 (Wed, 10 PM EDT / Thur, 4 AM CET). Project & code: with @marko_mih @cnsdqzyz from @CSatETH and @Michael_J_Black !
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
If you are attending #3dv2020 , please stop by our poster: PLACE: Proximity Learning of Articulation and Contact in 3D Environments, with @Michael_J_Black @qianli_m Yan Zhang and Siwei. Time: 5:30-7:00 pm (CET) Code:
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
This video summarizes the #ECCV2022 papers we are going to present in the next three days. If you are interested in egocentric human body estimation, scalable human avatar creation, and human motion and interaction synthesis, don't miss it. @eccvconf
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
I will give a spotlight presentation about the latest work from my group, including neural body representation, human motion modelling, human body estimation from sparse point clouds, and human grasp synthesis. Hope to see you at the event!
@CSatETH
ETH CS Department
3 years
🎉 6 Months ETH AI Center! April 15, 2021, 17:00 (CET): Celebration of Six Months ETH AI Center and AI+X Kick-off, registration + link to livestream here @ETH_en @ryandcotterell @SiyuTang3 @SwissCognitive #AI #Foresight
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
In MOJO, we propose a novel representation of the body in motion, a novel motion generative network, and a novel scheme for 3D body mesh recovery. #cvpr2021 Great work from @cnsdqzyz !
@Michael_J_Black
Michael Black
3 years
MOJO predicts 3D human movement. Previous work predicts 3D joints, which are just sparse point clouds. Instead, we predict a point cloud of body surface points and ensure that they correspond to a true 3D body. @CVPR ( #CVPR2021 ).
@SiyuTang3
Siyu Tang @VLG-ETHZ
1 year
Thank you very much for the invitation!! The view from Cabot Tower was fantastic🥳🥳 I learned so much about egocentric vision from your group!😃
@dimadamen
Dima Damen
1 year
"Inhibiting the Virtual"... @SiyuTang3 thanks for your visit to give a fantastic talk at MaVi seminar @BristolUniEng @bristolcs and spending the day hearing about our research. The weather didn't disappoint - typical gloomy but we managed to get up #cabotTower #Bristol
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
GAMMA ( #CVPR2022 ) is now deployed for HoloLens. Autonomous virtual humans are coming for AR applications.
@cnsdqzyz
Yan Zhang
2 years
Can you imagine how virtual humans wander in the ETH main building? Based on SMPL-X, Hololens, and our method GAMMA(), now we have it! Stay tuned...
@SiyuTang3
Siyu Tang @VLG-ETHZ
10 months
Congratulations to @SiweiZhang13 🥳!! Well deserved! #VLG is very proud of you!
@QCOMResearch
Qualcomm Research & Technologies
10 months
QIF Europe is an excellence award through which @Qualcomm rewards and mentors the most innovative PhD students in Europe working on breakthrough #AI and #cybersecurity solutions. Congratulations @tychovdo @confusezius @SiweiZhang13 and Attri Bhattacharyya
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
LEAP: Learning Articulated Occupancy of People #CVPR2021 . It is a neural network architecture for representing volumetric animatable human bodies. Project page: Great work from @marko_mih and congrats to the team!! @cnsdqzyz @Michael_J_Black
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
Capturing human motion and activities from egocentric videos is a key building block for our embodied and immersive future. Come join us tomorrow at 2 pm. The HBHA workshop features three benchmarks (EgoBody, H2O, and Assembly101) and has 5 amazing speakers. @eccvconf
@SiweiZhang13
Siwei Zhang
2 years
Our HBHA workshop on human body, hands and activities from egocentric and multi-view cameras will take place at #ECCV2022 on Monday (Oct. 24), 2pm-6pm (GMT+3)! Don't miss the invited talks from 5 wonderful speakers @dimadamen @gulvarol @RehgJim @kkitani and Vincent Lepetit!
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
I'm very excited to work on this project with @CSatETH , @Microsoft , @derbalgrist , and the team!
@mapo1
Marc Pollefeys
2 years
Excited to participate in this project with the Microsoft Mixed Reality and AI Zurich Lab and collaborate closely with Philipp Fürnstahl, @SiyuTang3 , Grabner Helmut and teams. #microsoft #ai #mixedreality #mesh #azure #ethzurich #balgrist #zhaw
@SiyuTang3
Siyu Tang @VLG-ETHZ
1 year
Great opportunity! Andreas is a brilliant supervisor; I cannot recommend him highly enough.
@AutoVisionGroup
Autonomous Vision Group
1 year
I am hiring PhD students and PostDocs! If you'd like to join a great team conducting curiosity-driven research on implicit neural 3D representations, join us now! Flyer:
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Our human motion priors for 4D human motion and interaction capture in 3D scenes will be presented today. #ICCV2021 oral. Session 9. Code is available: Video:
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Thanks for sharing! Our new #iccv oral paper, LEMO, leverages large-scale Mocap datasets to learn powerful motion priors, and enables accurate human motion capture in 3D scenes with a monocular Kinect. Great collaboration between #Microsoft and @CSatETH !
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Happy to share PTFs: Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration. #cvpr2021 . PTFs simultaneously learn to predict shape and per-point correspondences for sparse point clouds of humans. Video:
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Self-contact is ubiquitous in human behavior. In TUCH, we introduce the first human pose and shape regressor for self-contact poses, along with a novel dataset of 3D human meshes with realistic contact. #cvpr2021 oral. Great work from @LeaMue27 !!!
@Michael_J_Black
Michael Black
3 years
TUCH: training human pose and shape regression with novel *self-contact* losses improves accuracy even on poses without self-contact. Mimic-The-Pose (MTP): novel crowd-sourced, high-quality, 3D reference data with self-contact. @CVPR ( #CVPR2021 oral).
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
KeypointNerf ( #ECCV2022 ) enables novel view synthesis for human faces and bodies using only keypoints, without mesh-based parametric models. Marko talked about this work in great detail in this podcast.
@talking_papers
Talking Papers Podcast
2 years
@marko_mih is a 2nd year PhD student at ETH, supervised by @SiyuTang3 . His research focuses on photorealistic reconstruction of static and dynamic scenes and also modeling of parametric human bodies. This work was done mainly during his internship at Meta Reality Labs.
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
I will present this work today at #CVPR2022 Session 4.2, Poster 172, 14:30–17:00. Happy to chat in person during the session for more details! Code is available:
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
I'm excited to share that our new generative human motion model GAMMA ( #CVPR2022 , with @cnsdqzyz ) has come to life in the exhibition "Motion. Autos, Art, Architecture" at the Guggenheim Museum Bilbao, curated by the Norman Foster Foundation! (1/3)
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Wonderful news! Congrats!!!
@ICCV_2021
ICCV2021
3 years
Best Student Paper @ICCV_2021 Pixel-Perfect Structure-from-Motion with Featuremetric Refinement Philipp Lindenberger (ETH Zurich), Paul-Edouard Sarlin (ETH Zurich), Viktor Larsson, Marc Pollefeys [Session 5 A/B]
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
If you missed it, here is the recording of the VLG part (Capture and Synthesis of 3D Humans):
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
I will give a spotlight presentation about the latest work from my group, including neural body representation, human motion modelling, human body estimation from sparse point clouds, and human grasp synthesis. Hope to see you at the event!
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
Joint work with Shaofei, @marko_mih @qianli_m and @AutoVisionGroup 🥳🥳🥳
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
Super excited to share MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images! We build human avatars from monocular depths or a single scan efficiently, using meta-learned generalizable and controllable neural SDFs. #NeurIPS2021 Code:
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Congratulations!!!
@AutoVisionGroup
Autonomous Vision Group
3 years
This is unbelievable: Michael Niemeyer @Mi_Niemeyer won the CVPR 2021 best paper award!! Thanks to Michael and all my fantastic students and postdocs for their extraordinary work! You are just amazing..
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
Thrilled that our interdisciplinary collaboration with Gramazio Kohler Research on inhabiting a vertical city with autonomous virtual humans is featured on the ETH front page! Joint work with @cnsdqzyz and Jonathan
@ETH_en
ETH Zurich
2 years
What would happen if architects let their buildings be tested by virtual inhabitants before actually building them? A museum project by Matthias Kohler, @SiyuTang3 and Fabio Gramazio could serve as a blueprint: @CSatETH #Architecture #ComputerVision
@SiyuTang3
Siyu Tang @VLG-ETHZ
11 months
A fantastic list of speakers covering super exciting topics in the incredible city of Paris. Don't miss out 🥳
@paschalidoud_1
Despoina Paschalidou
11 months
📢Our #ICCV2023 workshop on AI for 3D Content Creation organized with @geopavlakos @amlankar95 , @KaichunMo and @davrempe from, Paul Guerrero, @SiyuTang3 and Leo Guibas has a fantastic list of speakers! Workshop Website: Paper Submission Deadline: July 17
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
MOJO is a surface-marker-based human motion prediction model. Our recursive projection scheme, supported by the marker-based representation, directly yields body model parameters and hence realistic body meshes. The paper [2253] will be presented today, 12:00–14:30 (CET).
@cnsdqzyz
Yan Zhang
3 years
#CVPR21 #CVPR2021 #CVPR please come to our paper MOJO [2253] with @Michael_J_Black and @SiyuTang3 at (CET) 12:00– 14:30 on June 22, or (EDT) 6:00– 8:30 on June 22. 👇CODE IS RELEASED. Please go to or scan it for everything.👇
@SiyuTang3
Siyu Tang @VLG-ETHZ
11 months
What a wonderful retreat! 🥳 Absolutely loved the hike. Thanks so much for the invitation!
@ImagineEnpc
Imagine-ENPC
11 months
Lucky to have two great invited talks from @chriswolfvision and @SiyuTang3 !
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
I think the articulated local elements are powerful representations for building clothed human avatars. Paper session: 12:00–14:00 (CET) on June 25. #CVPR2021
@qianli_m
Qianli Ma
3 years
@cnsdqzyz Thanks! A point is not guaranteed to always be the same physical (semantic) point on the surface across different frames, as we allow the patches to freely move. But as the network also predicts point colors (the one in the middle), the results still look coherent across frames.
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
By leveraging the attention mechanism and transformer, our method can synthesize compositional human-scene interactions without requiring composite interaction data. (2/2) Joint work with Kaifeng, @sfwang0928 @cnsdqzyz , and Thabo. @CSatETH Code:
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
#ECCV2022 . Inhabiting the Virtual. We made another step toward synthesizing virtual humans interacting with 3D scenes. The key idea is to learn a joint representation that effectively captures human body articulation, 3D object geometry, and interaction semantics. (1/2)
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
KeypointICON 🥳
@yuliangxiu
Yuliang Xiu
2 years
KeypointNeRF's "relative spatial keypoint encoder" is a general plug-n-play module for different downstream tasks. I have integrated it with ICON, which achieves comparable performance, compared with expensive body SDF. More details at:
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Very happy to hear your comments! It was super fun and we had an exciting panel discussion with @SattlerTorsten @ait_eth @GerardPonsMoll1 @angelaqdai
@Michael_J_Black
Michael Black
3 years
Folks, this was a great talk -- inspiring and clear. If you missed it, you can still watch it on YouTube
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
This is a great opportunity! Dimitris is a great person to work with!
@dimtzionas
Dimitris Tzionas
2 years
Motivated MSc/BSc students & prospective PhD candidates can always reach out to me -- plz see the contact instructions on my website. Plz help me spread the word: 🆘 🆘 🆘 We are actively hiring 1 PhD candidate together with @theogevers ! Separate tweet coming soon 📢 📢 📢 (8/8)
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
@cnsdqzyz Together with Gramazio Kohler Research, we developed a visionary project in which an entirely pedestrianized city is populated with many autonomous virtual humans who possess diverse body shapes and move perpetually in an automatic, scalable, and controllable manner. (2/3)
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
4. Clothed humans with a Surface Codec of Articulated Local Elements; and 5. Self-contact and human pose!
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
Looking forward to this #CVPR2022 tutorial! A great opportunity to learn more about biological vision.
@Li_Zhaoping
Li Zhaoping
2 years
@CVPR #CVPR2022 I am giving a CVPR2022 tutorial "A post-Marrian computational overview of how biological (human) vision works", see could adjust (a bit) some contents according to participants' interest, let me know.
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
🥳🥳🥳
@Michael_J_Black
Michael Black
3 years
Neural people are coming! SCANimate, SCALE, and now LEAP - all at #CVPR2021 . We are exploring how to learn 3D humans that go beyond models like SMPL. Great collaboration between ETH and MPI with @marko_mih , @cnsdqzyz and @SiyuTang3 . (1/4)
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
I'm looking forward to this colloquium. It gives us an opportunity to discuss with AEC researchers how our human motion and interaction models could improve the sustainability of the built environment. Virtual humans may help us build a more sustainable real world.
@ir0armeni
Iro Armeni
3 years
It will be held as a series of online sessions: the 1st one will focus on building reuse & will happen on November 23rd 4-7 PM CET. We are excited to have @billmcdonough give a keynote talk, as well as Ruchi Choudhary, Clara Olóriz Sanjuán, & @SiyuTang3 present their perspective.
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Congrats to the team @SiweiZhang13 , @cnsdqzyz , and the amazing collaborators @FedericaBogo , @mapo1
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
Looking forward to this workshop. @SiweiZhang13 will also present our EgoBody dataset there!
@ego4_d
Ego4D
2 years
Challenges! We concluded 16 different Ego4D challenges earlier this month and @_rohitgirdhar_ will share for the first time the winners and provide a synthesis on what methods were effective. 🤓🧐
@SiyuTang3
Siyu Tang @VLG-ETHZ
4 months
It is so simple that it can be used directly, without per-task training or finetuning, and in this paper we show how to use it for motion editing, motion in-betweening, motion denoising, and motion blending with exactly the same algorithm!
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
The egocentric human capture and reconstruction weren't easy! Great work by @SiweiZhang13 and the collaborators @mapo1 @FedericaBogo @qianli_m @cnsdqzyz @big_stamp @CSatETH @ETH_AI_Center .
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Great collaboration between VLG and AVG 🥳🥳🥳
@AutoVisionGroup
Autonomous Vision Group
3 years
In our upcoming CVPR'21 paper (joint work with Shaofei and @SiyuTang3 ), we propose Piecewise Transformation Fields (PTF) to simultaneously learn to predict human shape and per-point correspondences.
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
More information and baseline results can be found in our ECCV paper: If you are interested in egocentric human motion capture and understanding, download the data and give it a try — the submission deadline is October 1st.
@SiyuTang3
Siyu Tang @VLG-ETHZ
4 months
Project page: Paper: Excellent work by Korrawe and the team @phizaz @emreaksan , Thabo, and Supasorn
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Code is released on GitHub:
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
There is a typo. The EgoBody paper and challenge will be presented at #ECCV2022
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Consequently, per-bone rigid transformations and joint rotations can be obtained efficiently via least-squares fitting given the estimated point correspondences, circumventing the challenging task of directly regressing joint rotations with a neural network.
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
@Michael_J_Black Thanks, Michael!!
@SiyuTang3
Siyu Tang @VLG-ETHZ
2 years
@mohomran Congratulations!!! 🥳🥳🥳
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
project page:
@SiyuTang3
Siyu Tang @VLG-ETHZ
1 year
@M_E_Hassan Congratulations!!!
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
Our key insight is that the translation vector for each query point can be effectively estimated using the point-aligned local features.
@SiyuTang3
Siyu Tang @VLG-ETHZ
3 years
@GerardPonsMoll1 looking forward to reading these papers!