Yuliang Xiu

@yuliangxiu

5,064
Followers
3,696
Following
41
Media
3,470
Statuses

Ph.D. in computer vision & graphics @MPI_IS, previously @USC_ICT. Focusing on democratizing human-centric digitization. Intern at @RealityLabs @Ubisoft

Tübingen, Germany
Joined August 2014
Pinned Tweet
@yuliangxiu
Yuliang Xiu
7 months
The story of the painter and the architect.
@yangyi_huang_cn
Yangyi Huang
7 months
Given a single blueprint (image), TeCH involves the collaboration between an “architect (reconstruct w/ image)” and a “painter (imagine w/ the image descriptions)”. We specifically illustrate the correlation between generation and reconstruction w.r.t. the input views. (3/10)
Tweet media one
1
0
8
0
0
13
@yuliangxiu
Yuliang Xiu
1 year
ECON got accepted by #CVPR2023! Detailed clothed human recovery from a single image via normal integration. Is an implicit MLP a must? NO. Is data-driven learning a must? NO. How to keep pose robustness w/o sacrificing topological flexibility? See
16
90
540
@yuliangxiu
Yuliang Xiu
6 months
We need a thorough investigation to appease her spirit, ensuring that the real perpetrator receives the punishment they deserve. @ETH
@CVL_ETH
Computer Vision Lab Zurich
6 months
Tweet media one
183
236
1K
5
28
306
@yuliangxiu
Yuliang Xiu
1 year
ECON's @huggingface demo is ready to play with! Besides human digitization from a single unconstrained image, it supports pose+prompt-guided image generation (ControlNet) as well.
3
44
216
@yuliangxiu
Yuliang Xiu
10 months
Shout out to Lee Kwan Joong () for developing an "all-in-one" Blender Add-on, which includes an image-based clothed human reconstructor, an avatarizer for animation, and a texture generator. Tutorial:
5
54
209
@yuliangxiu
Yuliang Xiu
2 years
I moved all the lewd-content accounts over to my alt long ago to keep this main account a purely academic feed, but I still can't escape my followers' enthusiastic likes and retweets. So with this main account, I still can't scroll openly in the lab: the previous post is arXiv, scroll one screen, and out of nowhere a picture or two pops up. It's really... how should I put it... a balance of work and rest.
10
6
166
@yuliangxiu
Yuliang Xiu
9 months
ECON ( #CVPR2023 ) reconstructs high-fidelity 3D humans, even those wearing 𝗹𝗼𝗼𝘀𝗲 𝗰𝗹𝗼𝘁𝗵𝗶𝗻𝗴 or in 𝗰𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗶𝗻𝗴 𝗽𝗼𝘀𝗲𝘀, from a single image. The reconstructions can be animated with SMPL-X poses. Here we demo the Rasputin dance using ECON+HybrIK-X. (1/n)
6
39
176
@yuliangxiu
Yuliang Xiu
5 months
Foundation models (LLMs, Diffusion, SAM) are like IBM's mainframe computers, while finetuning acts as the PC for personalization and customization. LoRA adjusts weights through addition, while BOFT uses matrix multiplication to rotate them (a toy sketch of both update rules follows below the quoted post). Project:
@_akhaliq
AK
5 months
Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization paper page: Large foundation models are becoming ubiquitous, but training them from scratch is prohibitively expensive. Thus, efficiently adapting these powerful models to downstream…
Tweet media one
2
59
245
2
32
127
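A minimal, illustrative sketch of the two update rules contrasted above, not the official LoRA or BOFT code: LoRA adds a trainable low-rank term to a frozen weight, while the orthogonal variant multiplies the frozen weight by a learned rotation. BOFT actually factorizes that rotation into butterfly-structured blocks; here a single dense Cayley-parameterized rotation stands in for it, and the class names `LoRALinear`/`OrthLinear` are made up for illustration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Additive update: W_eff = W + scale * (B @ A), with the pretrained W frozen."""
    def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                       # freeze pretrained weights
        out_f, in_f = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))   # zero init -> no change at start
        self.scale = scale

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

class OrthLinear(nn.Module):
    """Multiplicative update: W_eff = R @ W, with R orthogonal (single dense Cayley map,
    a stand-in for BOFT's butterfly-factorized orthogonal matrices)."""
    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        out_f = base.weight.shape[0]
        self.S = nn.Parameter(torch.zeros(out_f, out_f))  # seed for a skew-symmetric matrix

    def forward(self, x):
        skew = self.S - self.S.T                          # skew-symmetric by construction
        eye = torch.eye(skew.shape[0], device=x.device, dtype=x.dtype)
        R = torch.linalg.solve(eye + skew, eye - skew)    # Cayley transform -> orthogonal R
        return nn.functional.linear(x, R @ self.base.weight, self.base.bias)

# usage sketch on a toy frozen layer; both start as the identity adaptation
layer = nn.Linear(64, 64)
x = torch.randn(8, 64)
print(LoRALinear(layer)(x).shape, OrthLinear(layer)(x).shape)   # both (8, 64)
```

The additive path changes the span of the weight directly, whereas the multiplicative path only re-orients it; that is the intuition behind orthogonal finetuning being gentler on the pretrained knowledge.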
@yuliangxiu
Yuliang Xiu
1 year
Excited to share our #NeurIPS2022 (Dataset and Benchmark Track) work, DART, which extends MANO, the widely used hand model, with Diverse 3D Accessories and Rich Textures, to synthesize more realistic hand data. Project: Code:
1
26
122
@yuliangxiu
Yuliang Xiu
1 year
ECON was selected as a highlight at CVPR!
@yuliangxiu
Yuliang Xiu
1 year
ECON got accepted by #CVPR2023! Detailed clothed human recovery from a single image via normal integration. Is an implicit MLP a must? NO. Is data-driven learning a must? NO. How to keep pose robustness w/o sacrificing topological flexibility? See
16
90
540
2
12
119
@yuliangxiu
Yuliang Xiu
1 year
For those watching #WorldCup2022, @nytimes built a very fancy, immersive 3D scene to cover the games with the help of ICON (). A very interesting application from @NYTimesRD and @nytgraphics!
@nytgraphics
NYT Graphics
1 year
See how the U.S. goal against Wales unfolded. From the lenses of @atmccann and @saget to a team from @nytgraphics and @NYTimesRD
8
18
64
4
17
110
@yuliangxiu
Yuliang Xiu
2 years
Why not replace Google Street View with Block-NeRF? The interaction design of Google Street View is exactly the same as NeRF's.
@_akhaliq
AK
2 years
Block-NeRF: Scalable Large Scene Neural View Synthesis abs: project page:
30
556
2K
3
11
106
@yuliangxiu
Yuliang Xiu
2 years
Since I've already died of embarrassment in public, I might as well let loose. Let me make a grand vow here: if one day I get a SIGGRAPH paper accepted, I will pay out of my own pocket to ask @SpicygumL to record the voiceover for my demo video.
3
2
102
@yuliangxiu
Yuliang Xiu
2 years
Finally, ICON joined the big family of @huggingface and @Gradio. Upload or generate a human image, select a method (PIFu/PaMIR/ICON), and you will get 1) body and clothed human meshes, 2) SMPL parameters, and 3) a rendered video. Special thanks @NimaBoscarino @_akhaliq @fffiloni
@fffiloni
Sylvain Filoni
2 years
And it's alive! There it is, working, the @Gradio demo for ICON: Implicit Clothed humans Obtained from Normals on @huggingface. Demo: Congrats @yuliangxiu 😌👏
Tweet media one
4
15
78
4
24
100
@yuliangxiu
Yuliang Xiu
1 year
Finally, it's safe to break the #CVPR2023 social media silence. ECON got in! Here are some test results generated with its Blender add-on.
@carlosedubarret
Carlos Barreto
1 year
ECON Tests
Tweet media one
Tweet media two
Tweet media three
8
65
640
3
13
99
@yuliangxiu
Yuliang Xiu
7 months
Thanks @_akhaliq for sharing our new work TeCH. Reconstruction is a form of conditional generation, especially in one-shot and few-shot settings. Reconstruct the visible like an architect, imagine the invisible like a painter. Project:
@_akhaliq
AK
7 months
TeCH: Text-guided Reconstruction of Lifelike Clothed Humans abs: paper page: Despite recent research advancements in reconstructing clothed humans from a single image, accurately restoring the "unseen regions" with high-level…
8
168
631
2
22
101
@yuliangxiu
Yuliang Xiu
1 year
Happy 2023!
Tweet media one
Tweet media two
4
0
87
@yuliangxiu
Yuliang Xiu
5 months
TeCH got into @3DVconf. Honestly, I am a bit depressed that this paper did not receive an oral acceptance. However, I strongly believe that TeCH showcases a paradigm shift in avatar creation, bridging reconstruction and generation. Code:
@yangyi_huang_cn
Yangyi Huang
7 months
We see Reconstruction as a form of conditional Generation. Conditioned on a single image, and the descriptive prompts derived from it, TeCH could reconstruct a “Lifelike” clothed human. “Lifelike” refers to detailed shape and high-fidelity texture, even on BACKSIDE. (1/10)
2
18
127
2
13
90
@yuliangxiu
Yuliang Xiu
1 year
@yuliangxiu
Yuliang Xiu
1 year
ECON got accepted by #CVPR2023! Detailed clothed human recovery from a single image via normal integration. Is an implicit MLP a must? NO. Is data-driven learning a must? NO. How to keep pose robustness w/o sacrificing topological flexibility? See
16
90
540
0
9
64
@yuliangxiu
Yuliang Xiu
3 months
Tweet media one
5
2
66
@yuliangxiu
Yuliang Xiu
3 months
When I wake up, the first thing I check is Scholar Inbox rather than WeChat. Then I can't fall back asleep because of the anxiety.
@AutoVisionGroup
Awesome Vision Group
3 months
After 2 years of hard work by the team, we are thrilled to release today! Scholar Inbox is a personal paper recommender which enables you to stay up-to-date with the most relevant progress by delivering personal suggestions directly to your inbox.🧵
Tweet media one
13
127
661
7
4
65
@yuliangxiu
Yuliang Xiu
5 months
G-Shell is an explicit representation that can effectively model both watertight and non-watertight shapes. It is compatible with rasterization-based rendering, and it is fast and flexible across tasks such as reconstruction and generation (a toy open-surface extraction sketch follows below the quoted post).
@dihuang52453419
D. Huang
5 months
Ghost on the Shell: 'Open surfaces' is like a floating ghost on the template watertight mesh. I like this paper. The idea is clean, sounds reasonable, and offers strong modelling of non-watertight surfaces. Combining SDF and mSDF, G-shell integrates…
Tweet media one
3
23
105
0
9
56
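A toy reading of the "open surface as a ghost on a watertight template" idea above, not the official G-Shell code (which operates on SDF/mSDF fields rather than discrete meshes): keep only the faces of a watertight template whose vertices have a positive "manifold" value, which yields a non-watertight sub-surface. The helper `extract_open_surface` and the use of `trimesh` are illustrative assumptions.

```python
import numpy as np
import trimesh  # assumed available; used only for mesh bookkeeping

def extract_open_surface(mesh: trimesh.Trimesh, m_field: np.ndarray) -> trimesh.Trimesh:
    """Keep the faces of a watertight template whose three vertices all have a
    positive per-vertex 'manifold' value, producing an open (non-watertight) surface."""
    keep = m_field[mesh.faces].min(axis=1) > 0.0          # (F,) boolean face mask
    return trimesh.Trimesh(vertices=mesh.vertices,
                           faces=mesh.faces[keep],
                           process=False)

# usage sketch: carve an open hemisphere out of a watertight sphere
sphere = trimesh.creation.icosphere(subdivisions=3)
m = np.asarray(sphere.vertices)[:, 2]                     # toy stand-in field: keep z > 0
cap = extract_open_surface(sphere, m)                     # open, non-watertight mesh
```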
@yuliangxiu
Yuliang Xiu
10 months
ECON + HybrIK-X
3
8
52
@yuliangxiu
Yuliang Xiu
1 year
If these 3D models were automatically generated by ANY algorithm, whether data-driven or optimization-based, that algorithm itself should get the best paper award, and all researchers working on monocular 3D reconstruction should switch their focus.
5
6
52
@yuliangxiu
Yuliang Xiu
7 months
Yes, we're also developing Text2Avatar, but TADA has distinct advantages: 1. Simplicity: TADA uses SMPL-X + a displacement layer, no NeRF/NeuS needed. 2. Alignment (geometry & texture): TADA ensures semantic alignment on the face and pattern alignment on the clothing. (A toy displacement-layer sketch follows below the quoted post.)
@HongweiYi2
Hongwei Yi
7 months
We present “3D magician”: TADA! Text to Animatable Digital Avatars. Given a textual description as input only, our method TADA generates expressive animatable 3D avatars with high-quality geometry and lifelike textures. (1/10)
1
42
161
0
12
52
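A minimal sketch of the "SMPL-X + displacement layer" idea from the post above, under the assumption of a generic template mesh with per-vertex normals; `DisplacedBody` is illustrative and not the TADA implementation (which also optimizes SMPL-X shape/expression parameters and textures).

```python
import torch
import torch.nn as nn

class DisplacedBody(nn.Module):
    """Template body vertices (e.g. SMPL-X) plus a learnable per-vertex
    displacement along the vertex normal; placeholder shapes, not the TADA code."""
    def __init__(self, template_verts: torch.Tensor, vert_normals: torch.Tensor):
        super().__init__()
        # template_verts, vert_normals: (V, 3) from the parametric body model
        self.register_buffer("template", template_verts)
        self.register_buffer("normals", vert_normals)
        # one scalar offset per vertex keeps the deformed surface well-behaved
        self.disp = nn.Parameter(torch.zeros(template_verts.shape[0], 1))

    def forward(self) -> torch.Tensor:
        # displaced surface captures clothing/hair geometry on top of the body
        return self.template + self.disp * self.normals

# usage sketch with random stand-in geometry (real code would load SMPL-X)
verts = torch.randn(10475, 3)                              # 10475 = SMPL-X vertex count
normals = nn.functional.normalize(torch.randn(10475, 3), dim=-1)
body = DisplacedBody(verts, normals)
new_verts = body()                                         # (10475, 3), differentiable w.r.t. disp
```

Because the offsets live on the SMPL-X topology, the displaced surface inherits the body's skinning and stays animatable, which is the "simplicity" point above.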
@yuliangxiu
Yuliang Xiu
1 year
Google Colab for ECON is back to normal! It took me quite a lot of time to sort out the PyTorch3D compatibility issue. 😢
Tweet media one
2
11
49
@yuliangxiu
Yuliang Xiu
4 years
@tinyfool That's quite a layman's take. "Take-or-pay" doesn't mean the price is fixed; it means the oil and gas purchase volume is fixed. Take-or-pay is the standard rule in international commodity supply. The China-Russia agreement is priced at market rates, pegged to North Sea Brent crude futures, and usually about one dollar below the market price.
6
5
46
@yuliangxiu
Yuliang Xiu
1 year
Bookmarking this for when I get married in the prime of my youth.
@laanlabs
Laan Labs
1 year
Used NeRF to make a "Bullet Time" effect for a friend's wedding. We set up 15 iPhones to capture slow-motion video, then used @NVIDIAAIDev Instant-NGP to train a bunch of NeRFs on the frames. ..need to work on improving the quality / resolution a bit more.
45
397
3K
1
2
48
@yuliangxiu
Yuliang Xiu
2 years
A new #ICON add-on that extracts 3D garments from fashion images, by Daniel Gudmundsson, Marion Barrau-Joyeaux, Arthur Collette, and Amalie Kjaer from ETH Zürich. Check for the details. Special thanks to @songyoupeng for his mentorship in the 3DV course😉
Tweet media one
0
8
47
@yuliangxiu
Yuliang Xiu
7 months
"The number of papers does not matter." - professors who always tweet "N (N>10) papers got into CVPR/ICCV/SIGGRAPH/NeurIPS..." and then receive dozens of "congrats" afterwards
@tunguz
Bojan Tunguz
7 months
It’s always that way.
Tweet media one
113
766
7K
1
2
45
@yuliangxiu
Yuliang Xiu
7 months
I shared DELTA () yesterday, focusing on Video2Avatar, a traditional reconstruction task. What if we apply the hybrid representation to the Text2Avatar generation task? See @YaoFeng1995's new work, TECA ()
@_akhaliq
AK
7 months
Text-Guided Generation and Editing of Compositional 3D Avatars paper page: Our goal is to create a realistic 3D facial avatar with hair and accessories using only a text description. While this challenge has attracted significant recent interest,…
Tweet media one
1
23
120
0
8
43
@yuliangxiu
Yuliang Xiu
2 years
#CVPR2022 Come meet up in person!
Tweet media one
2
0
42
@yuliangxiu
Yuliang Xiu
2 years
Thanks to @_akhaliq and the strong, supportive @Gradio team for helping me set up the Space. Here are a few PERSONAL wishes for @huggingface Spaces: terminal support, an anonymous & unlisted mode (for the double-blind review process), and behavior & traffic analytics.
@_akhaliq
AK
2 years
A @Gradio Demo for ICON: Implicit Clothed humans Obtained from Normals on @huggingface Spaces by @yuliangxiu demo: code: get started with Gradio:
0
24
160
0
7
41
@yuliangxiu
Yuliang Xiu
4 months
When you see a paper with @YaoFeng1995 as the first author, never put it into your read-it-later collection; READ it RIGHT NOW.
@Michael_J_Black
Michael Black
4 months
Multi-modal #LLMs understand a lot about humans. But do they understand our 3D pose? We train #PoseGPT to estimate, generate, and reason about 3D human pose ( #SMPL ) in images and text. This is the first true foundation model for understanding 3D humans.
9
77
459
1
4
43
@yuliangxiu
Yuliang Xiu
1 year
Since the underlying parametric SMPL body is already there, it's time to support automatic FBX export for ICON. I'll let you know once it's available.
@saraimartenj
sarai
1 year
Day 260 of #Blender3d | #b3d I generated an image using #stablediffusion and #DALLE 2 I put the image into #ICON to turn it into a 3D model. Then I used #mixamo to animate the model. (Then did a little extra thing with Blender) Details below!
Tweet media one
26
143
953
7
3
42
@yuliangxiu
Yuliang Xiu
1 year
The first time the institute's main account has retweeted me. I'm touched.
@MPI_IS
Intelligent Systems
1 year
. @nytimes and @nytgraphics are using our @PerceivingSys Department's code to create 3D players for their #WorldcupQatar2022 coverage - how cool!
0
1
6
2
1
38
@yuliangxiu
Yuliang Xiu
4 years
@qingchn @williamlong Be more confident and drop the "seemingly". These wretches have been snapping at Zhao Lijian for a whole week without even looking at what Pompeo had said beforehand. Someone kicked down the "No Chinese or dogs allowed" sign; they went over, picked it up, barked a couple of times, handed the leash around their necks back to the person who hung the sign, then turned around and spat: "Bah, what a nuisance, vandalizing public property."
0
0
40
@yuliangxiu
Yuliang Xiu
1 year
Imagine using hundreds of @huggingface Spaces as @ChatGPTBot plugins.
1
7
40
@yuliangxiu
Yuliang Xiu
2 years
Making yourself an ICON.
@Michael_J_Black
Michael Black
2 years
Can we construct avatars from pixels? If we can take an image or video and get a detailed 3D likeness of a person that can be animated and inserted into games, it would open up many applications. ICON ( #CVPR2022 ) takes a step in this direction. (1/9)
1
27
104
2
4
38
@yuliangxiu
Yuliang Xiu
1 year
Compared with us born in the 90s, those born in the 2000s may have a thousand advantages, but they have one misfortune: our stupid pasts could be sealed away once and for all by locking our QQ Zone and Renren accounts, whereas their stupid present will be permanently preserved in the cloud and in the collective memory of internet users, on endless repeat, never to be taken down.
@Lslymlwxc
谁将十万横扫三江
1 year
Ten years studying by a cold window with no one asking after you; half a lifetime of furrowed brows on a 997 schedule.
179
71
385
4
0
40
@yuliangxiu
Yuliang Xiu
1 year
Fucking hell, how can this be so cool.
@LumaLabsAI
Luma AI
1 year
Introducing the Luma✨Unreal Engine alpha! Fully volumetric Luma NeRFs running realtime on Windows in UE 5 for incredible cinematic shots and experiences, starting today! Try now:
86
526
3K
1
5
39
@yuliangxiu
Yuliang Xiu
9 months
This seems more like "conditional generation" than "pixel-aligned reconstruction". But still, very impressive results! Please correct me if I am wrong.🥸
@CSM_ai
Common Sense Machines
9 months
We're thrilled to announce a breakthrough in 3D world generation. Now, transform ANY image - AI-generated, concept art or real world shots - into high-resolution game-engine ready 3D asset. Check it out: 🎈 Public Showcase on Discord: 🤖 Generate your own…
8
80
344
2
4
37
@yuliangxiu
Yuliang Xiu
11 months
AFAIK, more and more Canadian visitor visa applications are SERIOUSLY delayed, including mine from March 11th. With @CVPR only a month away, is it possible to expedite this process via an exemption letter, as @eccvconf did? @CitImmCanada @ctocevents
10
1
38
@yuliangxiu
Yuliang Xiu
9 months
Again, would it be possible for @ICCVConference to allocate space for @CVPR posters upon request? This could attract more attendees and compensate those who were unable to attend due to visa issues.
@pesarlin
Paul-Edouard Sarlin
9 months
Seeing so many empty posters & missing authors at #CVPR2023 is heartbreaking - how many? 20%? Many PhD students worked hard but this absurd visa system jeopardized their chance to proudly present their work. I know that PCs @CVPR took action but this was largely insufficient… 1/
2
18
184
1
8
37
@yuliangxiu
Yuliang Xiu
9 months
#CVPR2023 Come to our poster, TUE-AM-049, if you want to discuss image-based human digitization with @YaoFeng1995 @Michael_J_Black @dimtzionas. I will be online as well!
Tweet media one
Tweet media two
Tweet media three
0
2
36
@yuliangxiu
Yuliang Xiu
1 year
Apart from the design of per-point displacement, my favorite part of S3F is the supmat, where the authors explored a range of ideas that ended up degrading or not affecting performance. I learned a lot from it!
@enric_corona
Enric Corona
1 year
📢📢 Our paper "Structured 3D Features (S3F) for Reconstructing Relightable and Animatable Avatars" was accepted at #CVPR2023 ! S3Fs take an input image and generate a 3D human reconstruction that can be animated, relighted or edited (eg. change clothes) without post-processing!
9
38
268
1
8
36
@yuliangxiu
Yuliang Xiu
9 months
I can watch the workshops online and have classmates present the poster for me, but there's no way for me to grab the vendor t-shirts. This matters to me: some of the t-shirts I picked up at the booths last year have already shrunk in the wash (especially the TikTok one) or faded (like the Scale AI one). You all go there for the academic baptism; I go there to restock. With my supply cut off, what am I supposed to wear this summer? #CVPR2023
4
0
37
@yuliangxiu
Yuliang Xiu
1 year
I need this, and I need it NOW
@kdqg1
Siddharth Mishra-Sharma
1 year
tinkering
Tweet media one
45
196
2K
0
3
33
@yuliangxiu
Yuliang Xiu
7 months
The "Explicit" vs "Implicit" debate parallels the UI design dispute between "Skeuomorphism" and "Flat". Representations vary in suitability! @YaoFeng1995 's DELTA combines mesh for body+face, NeRF for clothing+haircut, and renders them in a unified way. Don't mark, READ RIGHT NOW.
@_akhaliq
AK
7 months
Learning Disentangled Avatars with Hybrid 3D Representations paper page: Tremendous efforts have been made to learn animatable and photorealistic human avatars. Towards this end, both explicit and implicit 3D representations are heavily studied for a…
Tweet media one
3
30
123
0
8
35
@yuliangxiu
Yuliang Xiu
2 years
I lost my passport in early May. A replacement would take two months, too late for CVPR, so I applied for a travel document instead, which was issued in a week. Chronopost then delayed the delivery by a week with no phone call or email; even waiting downstairs, the delivery somehow still failed, and I had to trek to a far-away depot to pick it up. Right after that, the U.S. embassy in Germany closed its interview calendar, and their email reply said academic conferences don't count as expedited cases. My first in-person conference since starting the PhD is completely dead, from every possible angle.
4
1
33
@yuliangxiu
Yuliang Xiu
9 months
If you're going to marry anyone, marry @songyoupeng
@GMFarinella
Giovanni M Farinella
9 months
Best Presentation Prize @ ICVSS - Second Place - Songyou Peng
Tweet media one
1
0
37
0
1
32
@yuliangxiu
Yuliang Xiu
10 months
I'm just a child, searching for a loving embrace. This is the feeling of flight, this is the feeling of freedom, dancing with the wind in a star-strewn sky, with a brave heart that never cries.
@Dali_Yang
Dali L. Yang
10 months
Tweet media one
0
6
13
1
1
31
@yuliangxiu
Yuliang Xiu
1 year
Besides, AlphaPose supports mainstream DL frameworks (@PyTorch, @JittorHub, @ApacheMXNet) as well as SMPL body estimation. It has been a pioneer among top-down multi-person keypoint estimation approaches. Easy to set up and use!
@jiefengli_jeff
Jeff Li
1 year
Excited to announce that AlphaPose got accepted at TPAMI🎉 AlphaPose now supports 136 whole-body keypoints estimation and tracking in real-time. arXiv: Code:
2
25
145
0
4
30
@yuliangxiu
Yuliang Xiu
2 months
Happy Chinese New Year!
@elonmusk
Elon Musk
2 months
The Year of the Dragon
Tweet media one
15K
33K
277K
1
0
31
@yuliangxiu
Yuliang Xiu
1 year
Me in the deadline month
Tweet media one
1
2
30
@yuliangxiu
Yuliang Xiu
1 year
Amid the chorus of "replace the painters, replace the assembly-line workers, replace the translators, replace the customer service agents," programmers have finally, through their own tireless efforts, won the favor of AI's iron fist.
@GoogleDeepMind
Google DeepMind
1 year
In @ScienceMagazine , we present #AlphaCode - the first AI system to write computer programs at a human level in competitions. It placed in the top 54% of participants in coding contests by solving new and complex problems. How does it work? 🧵
95
760
3K
1
3
31
@yuliangxiu
Yuliang Xiu
1 year
Makes sense
@_akhaliq
AK
1 year
Synthetic Data from Diffusion Models Improves ImageNet Classification abs:
Tweet media one
19
204
962
0
2
31
@yuliangxiu
Yuliang Xiu
1 year
Let me speak up on Lidang's behalf: Wang Xinran, with a million followers, expert in immigration, studying abroad, switching careers into coding, and Zhongnanhai bedroom intrigue, who takes delivering overseas Chinese as his personal mission and never stops playing the lecturing master while he still breathes, not giving him an EB-1 for extraordinary talent would be America's loss. That God-chosen land is worthy of Wang Xinran's virtue.
3
2
27
@yuliangxiu
Yuliang Xiu
10 months
A new era for GAN manipulation, love it so much!
@XingangP
Xingang Pan
10 months
Have you thought about interactively 'dragging' objects in the image? Our #SIGGRAPH2023 work #DragGAN makes this come true!🥳 Paper: Project page:
44
273
1K
0
10
27
@yuliangxiu
Yuliang Xiu
2 years
I am happy to announce that the 1/1 paper I rejected with high confidence finally got rejected, and the 2/2 papers I voted to accept finally got into #ECCV2022
1
0
30
@yuliangxiu
Yuliang Xiu
9 months
See u guys in Paris. 🤩
@lixinyang__
Lixin YANG
9 months
I have one paper accepted by #ICCV2023 🎉 good luck. See you in Paris.
0
0
21
1
0
30
@yuliangxiu
Yuliang Xiu
1 year
Game changer
@LumaLabsAI
Luma AI
1 year
✨ Today we are launching NeRF Reshoot on iOS! Capture lifelike 3D and then create incredible shots all day using AI and the most intuitive 3D editor ever, right on your iPhone. Available on the AppStore, today! #3d #ai #nerf #lumaai
62
383
2K
0
3
27
@yuliangxiu
Yuliang Xiu
4 months
I gave it a try. Six hours in, I'm still having a great time, no feeling of disgust at all.
@mranti
Michael Anti
4 months
The way I managed to quit short videos was to first binge them completely, five hours at a stretch, skipping sleep, until I finally felt sick of them. Then, gradually, the moral pressure of realizing "I'm being a complete idiot" built up in my mind. After I uninstalled the short-video apps, it's been almost a year, and I have never relapsed.
194
47
584
3
0
28
@yuliangxiu
Yuliang Xiu
1 year
CLIP is so powerful! OpenScene is a great example of how to extend CLIP to 3D data, yielding a quite scalable paradigm for long-tail cases. Don't miss this awesome work from @songyoupeng
@ducha_aiki
Dmytro Mishkin 🇺🇦
1 year
OpenScene: 3D Scene Understanding with Open Vocabularies @songyoupeng , Kyle Genova, Chiyu "Max" Jiang, @taiyasaki , @mapo1 , Thomas Funkhouser tl;dr: CLIP meets point cloud.
Tweet media one
Tweet media two
Tweet media three
Tweet media four
0
11
73
0
1
27
@yuliangxiu
Yuliang Xiu
2 years
@Michael_J_Black recently wrote a blog post titled Novelty of Science that has received a lot of attention. I found it very inspiring, so I translated a Chinese version.
1
3
27
@yuliangxiu
Yuliang Xiu
11 months
Who is this Gao Xiaosong?
@Michael_J_Black
Michael Black
11 months
PointAvatar represents avatars as rigged and animated point clouds, learned from monocular video. PointAvatar jointly optimizes the point geometry, texture and deformations. It disentangles the observed color into albedo and shading values, allowing basic relighting. (2/9)
2
2
12
0
1
25
@yuliangxiu
Yuliang Xiu
10 months
What an amazing team lineup! Unfortunately, I won't be able to attend CVPR due to the vexing visa issue. For Paris, though, there is no visa to worry about!
@paschalidoud_1
Despoina Paschalidou
10 months
📢Our #ICCV2023 workshop on AI for 3D Content Creation organized with @geopavlakos @amlankar95 , @KaichunMo and @davrempe from, Paul Guerrero, @SiyuTang3 and Leo Guibas has a fantastic list of speakers! Workshop Website: Paper Submission Deadline: July 17
Tweet media one
3
24
147
3
1
25
@yuliangxiu
Yuliang Xiu
6 months
"Please select your hero."
@drfeifei
Fei-Fei Li
6 months
. @Stanford @UCBerkeley & @Caltech computer vision faculty & their students meet today to exchange research ideas, topics include 3D vision, language-visual models, robotic learning, computational photography, vision foundation models, etc. At the EOD, AI is truly fun Science! 1/
Tweet media one
9
26
397
1
0
26
@yuliangxiu
Yuliang Xiu
2 years
We may differ in political stances or leanings and understand certain international events differently, but I believe that when it comes to "hoping that Chinese people at home and abroad can live with more dignity," we are all the same. This link describes the joint-signature process; every additional signature pushes the resolution of this discrimination incident a little further. Please, everyone.
7
2
25
@yuliangxiu
Yuliang Xiu
1 year
Now the hands and fingers look much more promising than before.
@DiffusionPics
Stable Diffusion 🎨 AI Art
1 year
Tweet media one
1
3
74
4
2
25
@yuliangxiu
Yuliang Xiu
2 years
1k followers milestone. 🤘
2
0
25
@yuliangxiu
Yuliang Xiu
10 months
Would it be possible for @ICCVConference to allocate space for @CVPR posters upon request? This could attract more attendees and compensate those who were unable to attend either conference due to visa and COVID-related issues. @CSProfKGD
@yuming_du
Yuming DU
10 months
Been in this community for more than 4 years, got 2 CVPRs and 1 ICCV paper accepted, but have NEVER been to a computer vision conference in-person even for one single time…this is the last chance during my PhD but still didn’t make it. I don’t think I’m the only one🤔.
0
1
52
1
2
24
@yuliangxiu
Yuliang Xiu
2 years
ICON appears on GitHub Trending for Python @gh_trending_py. Fight on!
Tweet media one
0
1
25
@yuliangxiu
Yuliang Xiu
4 months
LLMs really do connect isolated islands, e.g., 2D landmarks, 3D pose space, pixels, and language, under a unified knowledge space. Various "projectors/adaptors" will emerge for different output formats in downstream tasks.
@Michael_J_Black
Michael Black
4 months
I think about the field of 3D human pose, shape, and motion estimation as having three phases. 1: Optimization. 2: Regression. 3: Reasoning. With #PoseGPT , we are just entering phase 3. I summarize the coming paradigm shift in this blog post:
Tweet media one
2
55
276
0
2
24
@yuliangxiu
Yuliang Xiu
10 months
Why does @github take up so much space?
@elonmusk
Elon Musk
10 months
Sorry this app takes up so much space
Tweet media one
47K
60K
803K
2
5
24
@yuliangxiu
Yuliang Xiu
4 months
@3DVconf COLMAP
0
0
22
@yuliangxiu
Yuliang Xiu
9 months
Really love this demo showcasing the stability of the pose estimated via IPMAN!
@Michael_J_Black
Michael Black
9 months
To evaluate the stability of poses estimated by IPMAN, we place them in a Bullet physics simulation. We find that IPMAN produces 14.8% more stable bodies than a baseline method. Example IPMAN poses are in blue, the baseline in orange. 8/9
1
5
23
0
1
22
@yuliangxiu
Yuliang Xiu
7 months
This thread thoroughly summarizes WHAT TeCH is, WHY TeCH is needed, and HOW TeCH works. Good job @yangyi_huang_cn
@yangyi_huang_cn
Yangyi Huang
7 months
We see Reconstruction as a form of conditional Generation. Conditioned on a single image, and the descriptive prompts derived from it, TeCH could reconstruct a “Lifelike” clothed human. “Lifelike” refers to detailed shape and high-fidelity texture, even on BACKSIDE. (1/10)
2
18
127
0
1
22
@yuliangxiu
Yuliang Xiu
9 months
Come on, deceive me! Come on, ambush me!
@ChenGuo96
Chen Guo
9 months
First time to advertise Vid2Avatar personally! A surprising demo here that might be interesting to the young (Chinese) students/researchers. Though I cannot be in Vancouver due to the visa issue, do come by our poster session WED-PM-048 and talk to other co-authors!
17
199
1K
3
0
22
@yuliangxiu
Yuliang Xiu
5 months
I REALLY love the person behind this account.
@3DVconf
International Conference on 3D Vision
5 months
What makes you an average vs. a great 3D vision researcher? 🤔
7
0
41
1
0
22
@yuliangxiu
Yuliang Xiu
2 years
This is totally unacceptable, STOP ASIAN HATE.
@ShengyHuang
Shengyu Huang
2 years
It's so disappointing to see an ETH professor defend his inappropriate slide like this. I am also attaching his reply to the original post: This is NOT acceptable and we need a statement. @ETH_en @Joel_Mesot @G_Dissertori @springman_sarah
Tweet media one
4
19
54
3
1
22
@yuliangxiu
Yuliang Xiu
1 year
I'm in tears.
@not_2b_or_2b
To be, or not to be
1 year
Of all the AI-generated photos I've seen so far, this set of selfies of departed celebrities is the one I find most amazing. Nothing shows the beauty of technology better than this.
Tweet media one
Tweet media two
Tweet media three
Tweet media four
39
481
2K
1
0
22
@yuliangxiu
Yuliang Xiu
1 year
KeypointNeRF's "relative spatial keypoint encoder" is a general plug-n-play module for different downstream tasks. I have integrated it with ICON, which achieves comparable performance, compared with expensive body SDF. More details at:
@talking_papers
Talking Papers Podcast
1 year
The episode + all relevant links and resources are available on: On my blog: On YouTube: On the Podcast: (Available on Spotify, Apple Podcasts, Google Podcasts and more! )
0
1
5
1
2
22
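A simplified stand-in for the kind of relative spatial keypoint encoding mentioned above, not the official KeypointNeRF module: each query point is described by its offsets to a sparse set of 3D body keypoints plus a Gaussian of the distances. The function name, the feature layout, and `sigma` are assumptions for illustration.

```python
import torch

def relative_keypoint_encoding(query_pts: torch.Tensor,
                               keypoints: torch.Tensor,
                               sigma: float = 0.05) -> torch.Tensor:
    """Encode query points by their spatial relation to sparse 3D keypoints.

    query_pts: (N, 3), keypoints: (K, 3) -> feature of shape (N, K * 4):
    per keypoint, a Gaussian of the distance plus the raw 3D offset.
    """
    offsets = query_pts[:, None, :] - keypoints[None, :, :]       # (N, K, 3)
    dists = offsets.norm(dim=-1, keepdim=True)                    # (N, K, 1)
    weights = torch.exp(-0.5 * (dists / sigma) ** 2)              # (N, K, 1)
    feat = torch.cat([weights, offsets], dim=-1)                  # (N, K, 4)
    return feat.flatten(1)                                        # (N, 4K)

# usage sketch: 25 body keypoints, 4096 query samples
kpts = torch.randn(25, 3)
queries = torch.randn(4096, 3)
feats = relative_keypoint_encoding(queries, kpts)                 # (4096, 100)
```

Because the feature depends only on relative geometry, it transfers across subjects and can be concatenated with pixel-aligned features in a reconstructor such as ICON.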
@yuliangxiu
Yuliang Xiu
3 months
When four crosstalk comedians are trading insults, outlive the other three and you are the artist. (Guo Degang)
@SchmidhuberAI
Jürgen Schmidhuber
3 months
The GOAT of tennis @DjokerNole said: "35 is the new 25.” I say: “60 is the new 35.” AI research has kept me strong and healthy. AI could work wonders for you, too!
Tweet media one
170
148
2K
0
0
20
@yuliangxiu
Yuliang Xiu
9 months
Fight on!
@Michael_J_Black
Michael Black
9 months
As an advisor, there is nothing better than seeing your students and post docs succeed, grow, and become part of the community. This group is so impressive. I love how they support each other and I love their intellectual curiosity. I’m only sad for the ones who couldn’t come.
Tweet media one
14
9
340
1
0
21
@yuliangxiu
Yuliang Xiu
1 year
Imagine if 'GPT' could rapidly become an expert in ANY subject, autonomously gathering information from around the world, retaining it indefinitely, and continuously learning 24/7 for centuries on end... We should definitely fire the PhDs and purchase more GPUs
@Michael_J_Black
Michael Black
1 year
PhD students, don't worry. Technologies, trends, and even whole fields come and go. A PhD makes you an expert in a field but, more importantly, teaches you how to become an expert. Once you know that you can learn anything, you can adapt to major disruptions in your field.
65
455
3K
5
0
21
@yuliangxiu
Yuliang Xiu
2 years
Billie Jean is not my lover~ #ICON
@thereisnomouse
Driss Hamadaine ドルス
2 years
1
1
6
0
3
20
@yuliangxiu
Yuliang Xiu
1 year
Manually diffing is too tiring. Are there any friends familiar with front-end and web scraping who would be willing to do this: monitor the edit history of every one of Old Hu's Weibo posts in real time (it's public), automatically generate a Weibo post or tweet for each edit, and publish them all under one Weibo account, say, "Old Hu vs. Old Hu". I guarantee this account could build up a following very quickly and could even take sponsorships.
1
4
19
@yuliangxiu
Yuliang Xiu
10 months
@songyoupeng @toomanyyifans How is this fair to us? XXX, give us our money back!
Tweet media one
1
0
20
@yuliangxiu
Yuliang Xiu
2 years
Very impressive demo! I haven't seen any RGB-based reconstructor get such well-aligned and detailed avatars!
@_akhaliq
AK
2 years
SelfRecon: Self Reconstruction Your Digital Avatar from Monocular Video abs: project page:
2
85
414
0
2
20
@yuliangxiu
Yuliang Xiu
2 years
I have tried several SMPL-based pose estimators on very challenging images, and PyMAF performs the most robustly. Hence I finally set PyMAF as the default HPS for ICON. Can't wait to replace PyMAF with PyMAF-X!
@YebinLiu
Yebin Liu
2 years
PyMAF-X: Towards Well-aligned Full-body Model Regression from Monocular Images arxiv: project:
0
19
115
1
2
19
@yuliangxiu
Yuliang Xiu
2 years
Well-trained RL agent
1
0
20
@yuliangxiu
Yuliang Xiu
8 months
Hahahahahahahaha
@orwell_benjamin
Benjamin Orwell🇦🇷🏆🗝️📦
8 months
Only after watching this did I realize I'm too young to understand how impressive Cai Xukun is.
121
323
2K
2
2
19
@yuliangxiu
Yuliang Xiu
1 year
A textbook example of involution-style strikebreaker thinking.
@PeterLin0732
Peter-Lin
1 year
I am a poor person, and doing a PhD is my only path to emigrate through studying abroad. I don't want PhD salaries to rise significantly, because every additional PhD position could mean one more poor person like me getting out; I can't burn the bridge after crossing the river myself. Universities should cut unnecessary spending (especially things like LGBT programs) and admit a few more people instead; that's the right way.
1
9
96
2
0
17
@yuliangxiu
Yuliang Xiu
2 years
Looks pretty nice. Let me think about whether I should also list my pornhub account on my profile page.
@yyw2000
vickieGPT
2 years
My website now has perfect support for high concurrency 😂
2
0
12
4
0
15
@yuliangxiu
Yuliang Xiu
7 months
SMPL-X + displacement layer effectively models diverse shapes and clothing, even including dresses and shirts.
@HongweiYi2
Hongwei Yi
7 months
Compared with other methods, TADA excels in generating high-fidelity results on different avatars, with various shapes and clothes. TADA enables real-world applications, such as virtual try-on, texture editing, and geometrical editing between two avatars. (9/10)
1
0
3
0
2
17
@yuliangxiu
Yuliang Xiu
1 year
Either pursue a master's/PhD or go into vocational education; the middle path is narrowing. I'm afraid this is the direction of educational change in China's future, and perhaps the whole world's.
@infoxiao
Xiao Ma
1 year
Reading "GPTs are GPTs" paper. It's super interesting that those with bachelor's degree seems to be considered most exposed to LLMs for labor markets. You'd be less exposed with LESS OR MORE education.
Tweet media one
0
0
6
0
3
18
@yuliangxiu
Yuliang Xiu
2 years
STOP!!!!
@DiffusionPics
Stable Diffusion 🎨 AI Art
2 years
Tweet media one
373
1K
12K
1
0
18