We will be at
@siggraph
2023 with "3D Gaussian Splatting for Real-Time Radiance Field Rendering", have you ever seen radiance fields with 100+ FPS and MipNeRF360 quality?
Check out our website here:
Finally, we are releasing the code for "3D Gaussian Splatting for Real-Time Radiance Field Rendering", which won the
#SIGGRAPH2023
best paper award. This is a huge milestone, and we put a lot of effort into providing clean code and reproducible results.
The most important factor that determines the quality of a reconstruction is not the actual NeRF variant you are using but rather where you place the cameras.
We give insights on practical ways to solve this problem in realistic environments.
Between thesis writing and exciting research projects, I managed to prepare the code and the website of my recent paper: "Improving NeRF Quality by Progressive Camera Placement for Free-Viewpoint Navigation"
Check it out here:
Big news - we shipped Gaussian Splatting 🎉
Capture the impossible at: 🔥
Processing is free, and splats render fast in the browser. You can also export the .ply file. Enjoy 🚀
Yup, this week was my first week outside academia in a long time. I decided to join forces with
@jon_barron
and all the other great folks on his team. Hopefully, together we will manage to find the optimal way to represent 3D scenes. I will be in London for the foreseeable future!
This week
@GKopanas
(of 3DGS fame) joined our team at Google! Everyone is thrilled to have him here, and I'm very excited to see what we can do together.
Pictured: Bard's rendition of "a Neural Radiance Field and 3D Gaussians giving each other a high-five."
I am happy to announce that our paper "Neural Point Catacaustics" got accepted into SIGGRAPH Asia 2022. On our website, you will find all our material, including a full video and a pre-print.
1/4
If you find dynamic NeRFs cool, check out this work, where we exploit some of the very interesting properties of 3D Gaussian Splatting for persistent tracking through time.
We learned a lot working with
@JonathonLuiten
that helped us better understand the power of 3DGS.
Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis
We model the world as a set of 3D Gaussians that move & rotate over time. This extends Gaussian Splatting to dynamic scenes, with accurate novel-view synthesis and dense 3D trajectories.
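A minimal sketch of that parameterization, assuming (per the description above) that only centers and rotations vary per timestep while scale, opacity, and color persist; all names and sizes are illustrative, not the paper's actual code:

```python
import torch
import torch.nn.functional as F

T, N = 150, 100_000  # timesteps and number of Gaussians (illustrative sizes)

# Per-timestep parameters: each Gaussian gets a center and rotation per frame.
centers = torch.nn.Parameter(torch.zeros(T, N, 3))    # positions over time
rotations = torch.nn.Parameter(torch.randn(T, N, 4))  # quaternions, normalized on use

# Persistent parameters shared across all timesteps (this is what gives tracking).
log_scales = torch.nn.Parameter(torch.zeros(N, 3))    # anisotropic extents
opacity_logits = torch.nn.Parameter(torch.zeros(N))
colors = torch.nn.Parameter(torch.zeros(N, 3))

def scene_at(t: int):
    """Assemble the Gaussian set at timestep t for a differentiable rasterizer."""
    q = F.normalize(rotations[t], dim=-1)
    return centers[t], q, log_scales.exp(), opacity_logits.sigmoid(), colors
```

A dense 3D trajectory for Gaussian i is then simply `centers[:, i]` read across timesteps.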
Another experiment with Gaussian Painters ✨🎨
By optimizing 3D Gaussian splats on separate images at several viewpoints, it is possible to get a steganography effect! Three paintings are hidden in these Gaussian splats.
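A rough sketch of how such an effect could be optimized, assuming a differentiable 3DGS rasterizer; `render`, `gaussians`, `cameras`, and `paintings` are hypothetical stand-ins, not the actual Gaussian Painters code:

```python
import torch

def hide_paintings(gaussians, cameras, paintings, render, num_steps=3000):
    """Optimize one set of Gaussians so each camera sees its own target painting.

    gaussians is assumed to be an nn.Module holding the splat parameters, and
    render(gaussians, camera) a differentiable rasterizer returning an HxWx3
    image; all names here are illustrative.
    """
    optimizer = torch.optim.Adam(gaussians.parameters(), lr=1e-3)
    for _ in range(num_steps):
        optimizer.zero_grad()
        loss = 0.0
        # One painting per viewpoint: the splats only resolve into a given
        # image when viewed from its associated camera.
        for camera, target in zip(cameras, paintings):
            image = render(gaussians, camera)
            loss = loss + (image - target).abs().mean()  # simple L1 photometric loss
        loss.backward()
        optimizer.step()
```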
📢 My lab, GraphDeco, is hiring a software engineer to work on 3D Gaussian Splatting. We want to improve our method and incorporate ideas that span research and engineering. We need experience with graphics APIs and good programming skills. Send me a DM/drop me an email.
Introducing our paper "VET: Visual Error Tomography for Point Cloud Completion and High-Quality Neural Rendering" (
@SIGGRAPHAsia
2023).
Completing point clouds by lifting 2D visual error maps for photorealistic rendering.
📖:
📜:
Meet our in-house tech doggo "Rolo". Cinematic RGB lighting.
"More Doggo than Doggo"
Since July we've been redesigning our scanning pipeline to work with the amazing 3D Gaussian Splatting for Real-Time Radiance Field Rendering method from Inria.
IR's…
I finally managed to run some of my own datasets with '3D Gaussian Splatting for Real-Time Radiance Field Rendering'. Here is the first one, rendered in real-time. The output is incredible! More to come.
#AI
#NeRF
#gaussiansplatting
#radiancefield
#Computervision
Getting a
#SIGGRAPH2023
paper and getting memed? Best day of my life.
Come tomorrow morning to our talk at 10:45 in room 502AB.
We have special real-time demos prepared for you.
Introducing SMERF: a streamable, memory-efficient method for real-time exploration of large, multi-room scenes on everyday devices. Our method brings the realism of Zip-NeRF to your phone or laptop!
Project page:
ArXiv:
(1/n)
One of the fair criticisms regarding 3DGS was the size of the representation.
@PapantonakisP
did a great job dealing with it. Great pleasure to work with him on this!
With our work "Reducing the Memory Footprint of 3D Gaussian Splatting," a method that reduces the size of 3DGS from several hundred MB to just a few tens of MB, you now have more space available for additional scenes! For more, check out our project page
The light show at Notre-Dame de Reims is spectacularly beautiful.
Reims Cathedral is where kings of France once went to be crowned - the cathedral has hosted 33 coronations in its history.
In 498, Clovis King of the Franks was baptized at Reims, making him the West's first…
My supervisor, G. Drettakis, is looking for Master's students to work on topics that span graphics, 3D reconstruction, and of course Gaussian Splatting.
Feel free to apply if you are an MSc student looking for a thesis topic.
We completely shift the paradigm from voxel-based representations to a point-cloud representation and achieve top quality with extremely fast rendering and training.
Thrilled to share Mosaic-SDF (M-SDF), a simple 3D representation suitable for 3D generative models!
Check out more results of text-to-3D generations here -
A very nice take on replacing the heuristics of 3DGS with something more principled + a huge boost in numbers. Way to go
@Shakiba_kh
, Daniel Rebain and all the rest!
📢📢📢 3D Gaussian Splatting brought you real-time rendering, but at slightly lower PSNR compared to mipNeRF360... 𝐚𝐬 𝐨𝐟 𝐭𝐨𝐝𝐚𝐲, 𝐭𝐡𝐚𝐭 𝐢𝐬 𝐧𝐨 𝐥𝐨𝐧𝐠𝐞𝐫 𝐭𝐫𝐮𝐞.
Introducing "3D Gaussian Splatting as
Markov Chain Monte Carlo"
What a cool result. Many projects have tried this autoregressive idea of generating and inpainting, but the execution here is superb. Amazing job from the whole team!
Check out RealmDreamer ()--our new 3D scene generation method! No multiview data required :)
One of my favorites is this: "Fantasy lighthouse in the Arctic, surrounded by a world of ice and snow, shining with a mystical light under the aurora borealis."
Great work from the original NeRF team. So happy to see radiance fields become more and more robust in diverse scales and environments!
@jon_barron
how many input views are we talking about here?
Our freshly minted ICCV2023 paper: The nice anti-aliasing of mip-NeRF 360, but with most of the speed of Instant NGP. Error rate reductions of 8%-77% compared to either prior technique, and 24x faster than the most accurate NeRF baseline we tried.
Making NeRFs easy to edit is one step towards getting them widely adopted.
Proud to work with the very talented
@clementjbn
for this.
Also the code is out and is built on top of
@NVIDIAAI
instantNGP. The main motivation is to make it easy for casual users, so give it a try!
New 3D Gaussian Splatting recording! Those metallic reflections and leather were captured REALLY well!
When looking closer, you can also see how the watch hands are modeled with just a couple of elongated gaussians.
#GaussianSplatting
There are so many cool papers through the whole history of computer vision and graphics. I am happy to give shoutouts to the ones that could not make it to our related work section!
The recent 3D Gaussian Splatting paper triggered a bout of nostalgia. In the 1990s, inspired by an earlier
@szeliski
paper, I had played with the idea of using "particles". They were similar to splats and worked quite well ... without deep learning ;-)
Testing out
@siggraph
2023 Best paper :
3D Gaussian Splatting for Real-time Radiance Field Rendering at Infinite-Realities with a head scan from our datasets by
@triplegangers
It's rendering at over 120fps!
#SIGGRAPH2023
@8Infinite8
@jon_barron
I am getting a bit sad seeing people get polarized and try to pick sides. Thank you for not doing this. I wish that, as a community, we discussed graphics representations more, in a world where inverse rendering is a primary way to create 3D assets.
@janusch_patas
Well, I would consider this plain plagiarism of the original codebase. It is not very academic to take a codebase, copy-paste half of it, add a couple of new features, and slap a new license on it.
@pfau
Btw, I would not consider our method contradictory to NeRFs; the term is heavily ambiguous nowadays, and we heavily reuse the image formation model of NeRFs. So even if our method is not neural, it is pretty much an approximation of volumetric rendering through rasterization.
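Concretely, both share the same front-to-back alpha-blending rule; NeRF evaluates it over ray samples, 3DGS over depth-sorted projected Gaussians:

```latex
C = \sum_{i=1}^{N} c_i\, \alpha_i \prod_{j=1}^{i-1} \left(1 - \alpha_j\right)
```

with $\alpha_i = 1 - \exp(-\sigma_i \delta_i)$ from sampled densities in NeRF, versus a per-Gaussian opacity modulated by the projected 2D Gaussian falloff at the pixel in 3DGS.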
@Rafael_L_Spring
@siggraph
A) We explicitly model view-dependent effects using spherical harmonics; it's a common trick used in other works too (check Plenoxels, very inspiring work).
B) This is on point; one would need to maintain meaningful gradients, but that could possibly scale the method even more.
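To illustrate point A: a minimal sketch of evaluating a degree-1 spherical-harmonics color from the viewing direction (the method goes up to degree 3; the coefficient layout and the 0.5 offset here follow common implementations and are illustrative):

```python
import numpy as np

SH_C0 = 0.28209479177387814  # degree-0 real SH basis constant
SH_C1 = 0.4886025119029199   # degree-1 real SH basis constant

def sh_to_color(coeffs, view_dir):
    """Evaluate a degree-1 spherical-harmonics color for one Gaussian.

    coeffs:   (4, 3) array, one row of RGB coefficients per SH basis function.
    view_dir: unit vector from the camera toward the Gaussian.
    """
    x, y, z = view_dir
    color = (SH_C0 * coeffs[0]
             - SH_C1 * y * coeffs[1]
             + SH_C1 * z * coeffs[2]
             - SH_C1 * x * coeffs[3])
    # A 0.5 offset before clamping is a common implementation detail.
    return np.clip(color + 0.5, 0.0, 1.0)
```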
@docmilanfar
@siggraph
@Snosixtytwo
It is embarrassingly beautiful to read such nice words about your work from researchers you look up to. This is what academia and conferences are all about; it's worth all the hardships of doing a PhD.
Thank you for taking the time to publicly discuss our research.
"Neural Point Catacaustics" code is out:
In other news, next week, I am heading to
#SIGGRAPHAsia2022
and I am going to present "Neural Point Catacaustics". Feel free to come talk to me; I look forward to my first in-person conference as a Ph.D. student.
If you want to capture an object-centric scene, the solution is simple: just place cameras uniformly around the object.
We generalize this empirical observation to complicated scenarios. This is a first step on a problem we should focus more energy on if we want better NeRFs.
Our paper "Shrink & Morph: 3D-printed self-shaping shells actuated by a shape memory effect" was accepted to SIGGRAPH Asia!
We print flat plates with controlled trajectories so that, when heated, they morph into pre-programmed shapes
If you are attending
#I3D
today, don't miss the first presentation of the day, it's NeRFShop!
It's the outcome of
@clementjbn
's research internship in GraphDeco - Inria.
@taiyasaki
In 2023, if the code is not out, it is because the authors believe it is hard to reproduce. Hard enough that not releasing it gives them the opportunity to monetize it.
I respect that, but it also means that in most cases reproducing the work is so hard that it is not a reasonable request.
We model this phenomenon using a Neural Warp Field and point clouds. We outperform all state-of-the-art methods like MipNeRF, InstantNGP, and others. Our method can also render at ~5 FPS with OpenGL.
4/4
Shout-out to the organizers for sharing the talks through the event website. It's very valuable, and more workshops and conferences should get into the habit of doing that. Kudos!
Unfortunately, we cannot release the scenes we used because of their license terms. In retrospect, I should have considered using less photorealistic scenes with better license terms to accommodate the research community.
@Rafael_L_Spring
One of the main contributions of the paper is the fact that we render with rasterization.
But 3D Gaussians as a representation of the volume are not tightly coupled with rasterization. One could use ray casting if needed 😅
I would like to thank everyone who showed interest in reading our paper and sent us questions and extremely insightful comments. I would also like to express my gratitude to the
@siggraph
committee that honored us with the best paper award among such great papers.
@taiyasaki
Thank you, it means a lot! Congratulations to everyone involved
@Snosixtytwo
, Thomas Leimkuehler, and George Drettakis. It's humbling to be awarded among such incredible research work featured in this year's
@siggraph
!
Gaussian Splatting in VR, all code written by me from scratch on DX12 + OpenVR.
2x2016x2240 res, no clever multi-view stuff yet so just running the whole render pipeline once per eye. Left the frame timer on so you can see I've been truthful about my perf :D
View-dependent effects pose a substantial challenge for image-based and neural rendering algorithms. We introduce a new point-based representation to allow novel-view synthesis with curved reflectors.
2/4
For a planar reflector, a reflected point P results in a static virtual point P', independent of the camera. For curved reflectors, camera motion leads to P' tracing the Catacaustic Surface (CS). The envelope of virtual reflected rays defines the CS for a single point P.
3/4
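In symbols, a minimal formalization of the statement above (notation mine, not the paper's): for a reflector surface $x(u,v)$ with unit normal $n(u,v)$, the incident and reflected directions for a point $P$ are

```latex
d(u,v) = \frac{x(u,v) - P}{\lVert x(u,v) - P \rVert}, \qquad
r(u,v) = d - 2\,(d \cdot n)\,n,
```

and the catacaustic surface of $P$ is the envelope of the reflected line family $L(u,v,t) = x(u,v) + t\,r(u,v)$. For a planar mirror, all reflected lines pass through the single virtual point $P'$.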
@aras_p
@MartinNebelong
@edankwan
I am not a lawyer either, but rather one of the main authors of 3DGS. I think viewers written from scratch would not have any problems. The asterisk concerns how the .ply files were originally created: if the original training code was used, the non-commercial clause applies.
You can now watch the talk Peter Hedman gave on the first day of I3D this year:
Scaling NeRF Up and Down: Big Scenes and Real-Time View Synthesis
#I3D2023
We are really proud to present at
@EGSympRendering
our Neural Rendering paper. This was a joint effort and I would like to praise my collaborators for all the great work. Watch us today at:
Project Webpage:
@keenanisalive
@dJourdan_
The discussion around these papers - including our work - is very interesting. To summarize my insight on NeRFs vs. points: the latter can give you fast and sharp results, but compared to volumetric methods they can't come up with geometry in arbitrary places if not initialized properly.
@jon_barron
@nobbis
That's really not the best depth map you can get. I think this is a hard depth extraction; the expected ray termination should look much better.
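For reference, "expected ray termination" here means the depth expectation under the standard volume-rendering weights, rather than e.g. thresholding accumulated opacity:

```latex
\hat{d} = \sum_i w_i\, t_i, \qquad w_i = \alpha_i \prod_{j<i} \left(1 - \alpha_j\right),
```

where $t_i$ is the depth of sample (or sorted Gaussian) $i$ along the ray.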
@docmilanfar
@jon_barron
@siggraph
Wow! I remember reading the SIGGRAPH paper right at the beginning of my PhD, but then I forgot about it and never made the connection. It might have influenced me unconsciously 🤗
I think it is a nice connection to our work. Thanks for bringing this up.
@alexandertmai
@siggraph
It's not an easy question.
This is the holy grail of this research field, if you ask me.
Before shading, you first have to disentangle materials and light from the current representation.
@gravicle
@jon_barron
They are confusing the medium with the art. The fact that I can put a few words in order doesn't make me a poet, and Genie won't make me a 3D artist either.
@docmilanfar
@marcosalvi
@pfau
I agree there are trade-offs; there is no free lunch. But I can tell you this much: during development we didn't consider memory at all, since we are spoiled by beefy GPUs. There are a few ways memory consumption could go down significantly, at least at inference.
@nobbis
@jon_barron
That looks satisfying now 😅 I don't know how you render, but I am curious why it's not almost free, since it shares so much computation with color.
@jbhuang0604
@docmilanfar
@Michael_J_Black
I was randomly thinking yesterday that, given that papers in CV are becoming mainly posters, the fact that the majority of talks are invited is very much against peer review. The visibility between accepted papers and invited talks is disproportionate.
@drsrinathsridha
Super annoying. On top of that, I and others have been contacted by these accounts asking us to create a Hugging Face demo. I can't help but assume they are on their payroll.
@whoshooz
@siggraph
Hi, great question. Light and material are currently baked into the representation, similar to other NeRF-type methods. So there is no way to manipulate materials and light, but it is a great and very promising research direction.
Data is captured with commodity cameras.
Moving GANs to 3D! Compositing a generated image from StyleGAN into a 3D scene just happened. Kudos to Thomas Leimkuehler and
@GDrettakis
for making this happen at this year's SIGGRAPH Asia! The code is out
@skynetislov3
@siggraph
You are right, I share the same feelings on the topic. But nowadays the amount of work and material needed for a paper release is huge.
Research groups are often overwhelmed during release periods.
I hope you will try our code when we manage to release it.
@makeshifted
@siggraph
That is a very interesting article,
@iquilezles
did it again.
But at the scale of our work, rasterizing the ellipsoids is not the bottleneck; the alpha blending is. Maybe
@Snosixtytwo
has some input here...
@aman_gif
@benito_link
The quality is pretty nuts for not doing any cool tricks. Wondering how it compares against a standard NeRF instead of 3DGS. Hell, even add some score distillation loss.