Associate professor leading EPFL's Realistic Graphics Lab. My research involves inverse graphics, material appearance modeling, and physically based rendering.
Differentiable rendering of meshes tends to produce horrible, tangled geometry 😬. We propose a simple and efficient way to fix this (with @b_nicolet and @_AlecJacobson, to appear at SIGGRAPH Asia '21). 1/7
We're excited to present a new method to render Signed Distance Functions (SDFs) in a differentiable manner, enabling high-fidelity image-based shape reconstruction. This is joint work with @DelioVicini and @seb_spe and will be presented at SIGGRAPH'22. (1/8)
Today, I am releasing *nanobind*, a new tool for generating bindings between C++ and Python code. If you use pybind11 or Boost.Python, then this will likely be of interest to you. For historical context: pybind11 started out as a side project of mine back in 2015. (1/7)
*Mitsuba 3* is now available! It's a major redesign of the lab's infrastructure for differentiable rendering, building on the Dr.Jit just-in-time compiler announced yesterday. Full video link: 1/10
After over a year of development, I am excited to release the first stable version of *nanobind*, a tiny library for efficient C++/Python bindings. Much has happened since the first announcement: CPU/GPU data exchange with tensor frameworks, Eigen dense/sparse matrices, .. 1/3
@lufthansa @djthomashome @Apple Makes me wonder how pure the stated motive is… More likely, it is difficult to deal with customers who know exactly where their lost luggage is located.
I am excited to announce *Dr.Jit*, a just-in-time compiler for differentiable rendering. Dr.Jit is the foundation of the differentiable rendering stack at EPFL and powers the upcoming Mitsuba 3. The project is joint work with @seb_spe, Nicolas Roussel, and @DelioVicini. 1/8
During the last year, my group has been thinking a great deal about differentiable rendering, aiming to understand it theoretically and improve its efficiency substantially. I'm excited to share several major discoveries of this work. Full presentations will take place at @siggraph. (1/7)
Did you know that inverse rendering can suffer from severe bias when the images are noisy (e.g. made using Monte Carlo methods)? Our SIGGRAPH'23 paper dives into this overlooked issue (w/ @b_nicolet, Fabrice Rousselle, @_jannovak, Alexander Keller, and Thomas Müller).
If you try to optimize geometry using a differentiable renderer, there is an elephant in the room: geometry causes discontinuous visibility changes, which mess up the derivatives. To use indirect cues like shadows in geometric reconstructions, this issue must be fixed. (1/7)
The Realistic Graphics Lab is looking to recruit a *PhD student* and a *Research Engineer*. We develop algorithms and systems that invert the process of rendering to reconstruct realistic 3D worlds from images. A bit like "TensorFlow", but for physical simulations of light.
Did you know that differentiating a volume renderer will produce biased and noisy derivatives? Our new sampling technique fixes this, improving reconstruction of editable & relightable volumes. Joint work with @merlin_ND, Thomas Müller, and Alex Keller at SIGGRAPH'22. (1/8)
I'm thrilled and incredibly grateful to receive an ERC Starting Grant (1.5M€) that will enable my team to pursue an ambitious research agenda targeting differentiable & inverse rendering. Many thanks to my group and colleagues at @ICepfl / @EPFL_en for their support. @ERC_Research
I’ve just released Enoki, which is the vector math/autodiff/GPU library that underlies our upcoming differentiable renderer Mitsuba 2. Feel free to upvote on Hacker News :).
Our latest SIGGRAPH Asia'20 paper (w/ @gllmLoubet, @tizianzeltner, and @nholzschuch) is now available! It proposes a new analytic scheme to compute the contribution due to glints and caustics via a single rough (GGX) reflection or refraction. Link:
We can also jointly optimize geometry and appearance. In the example below, we determine the albedo and roughness of a Disney BSDF (shown under new viewing/illumination conditions). (6/8)
Hot take: we need a non-neural track for the remaining few papers of this category at conferences ;). Let's also give them a badge. Here is my humble proposal.
Our SIGGRAPH project on efficient sampling of caustics and glints (w/ @tizianzeltner and @iliyang) was featured by @twominutepapers! The video talks a lot about my PhD work—I want to point out that Tizian is the genius behind this paper, so he deserves all of the kudos!
I'm really excited to share three #SIGGRAPH2020 papers. The first (w/ @MerlinND, @seb_spe & Benoît Ruiz) fixes the "memory explosion" problem of differentiable rendering, i.e. that the computation graph becomes so large that one runs out of memory after seconds of computation. 1/3
EPFL had a nice turnout at SIGGRAPH'21: 8/8 submissions accepted 🥳 (5 involving the geometric computing lab, 3 involving the realistic graphics lab). Looking forward to being able to share more details soon.
Merlin Nimier-David (@merlin_ND) and Delio Vicini (@DelioVicini) had their public defenses and graduated recently. They are the last two of the first generation of students at RGL. I am incredibly proud of the many amazing things they accomplished while here.
The second project "Path Replay Backpropagation: Differentiating Light Paths using Constant Memory and Linear Time" (with @DelioVicini and @seb_spe) fixes a fundamental scalability bottleneck shared by all physically based differentiable rendering done so far. (1/6)
A beautiful explainer video covering ray tracing, adjoint transport, Monte Carlo methods, reservoir sampling, and more. And all of this in 30 mins, fully rendered, with a captivating "puppet theater" explaining the ideas intuitively. How cool is that? 🤯
Alright computer graphics twitter, here's likely the best full-course, intuitive introductory explainer on the core of modern ray-traced rendering techniques I've ever seen, and it has pitifully few views for its quality.
The realtime viewer of this project uses Dr.Jit to compile fused neural inference kernels. All in Python, and running at hundreds of FPS. So cool! I look forward to learning more about the nitty-gritty details.
🚀 Introducing our #SIGGRAPHAsia work "Adaptive Shells", a novel #NeRF formulation that yields high visual fidelity and greatly accelerates rendering. TL;DR: auto-derived bounding shells result in up to 10x faster inference than InstantNGP! [1/n]
Do you enjoy building graphics software? My lab is looking to recruit a research engineer to help develop the next generation of Mitsuba, a physically-based renderer for solving inverse problems. A job ad with more details is posted here:
We have 2 papers at #siggraphasia that together make up #mitsuba2, RGL's new fully differentiable, vectorized, spectral, and polarized renderer. The first (w/ @merlin_ND, Delio Vicini, @tizianzeltner) explains the system, which looks a lot like a compiler:
Combining these gradients with an optimizer produces a method for image-based geometry reconstruction. Unlike prior work, this does not require silhouette or mask losses, explicit meshing or complex regularization. (5/8)
The overhead of doing this is tiny compared to the rest of the differentiable rendering pipeline. Hooray! The paper and video are available here: (6/7)
Usually, finding a bug in your paper's experimental evaluation is just bad news! Not so this time: thanks to a bug found by Heloïse (@DinechinHeloise), Specular Manifold Sampling improves across the board. Tizian (@tizianzeltner) posted an explanation and re-generated the paper.
Thanks to @DinechinHeloise, we were able to fix a subtle bug in our reference implementation of the "specular manifold sampling" (SIGGRAPH 2020) paper. Convergence is now considerably improved in some cases. See here for the updated paper and explanation:
I've added new abstractions to nanobind to easily exchange CPU/GPU/.. tensors with modern array programming tools including Numpy, PyTorch, TensorFlow, and JAX. The library takes care of all the nitty-gritty details of this process. Details:
Our method naturally handles secondary effects like shadows and indirect illumination, which can disambiguate the tricky solution space of single-view optimizations. This is done without any priors, neural networks, etc. — just good old physics and derivatives. (7/8)
Matt, Greg, and I are looking for a scene to put on the front cover of the 4th edition of the Physically Based Rendering book. Would you be willing to help? If so, please get in touch with us!
The second (w/ @tizianzeltner, @iliyang) is an efficient and surprisingly simple approach for sampling specular paths to render things like caustics and glints in standard path tracers. The main contribution is a way of performing manifold walks on complex and messy geometry. 1/3
Anyways, if you have encountered similar issues in the past, then this is for you: 🎁 (includes detailed explanations of those plots). It's still a work in progress; let me know what you think. (7/7)
In the meantime, we've been working hard on getting Mitsuba 2 in shape for a release. This has been taking much longer than anticipated, and I apologize to those waiting—it's going to take at least another month. This is a massive effort and we want to make sure to get it right!
I missed these toys during the lockdown! This is an ellipsometer (a device for measuring how surface reflection changes the polarization state of light) built in the last few weeks—will post some more details soon.
What's going on with LLVM's libc++ project? I noticed yesterday that the header file containing std::vector<> expands to 2 to 2.4 megabytes of pre-processed header code! (of which >99% is overhead) This ...
Very impressed with @wkjarosz's HDRView image viewer, whose user interface just became a lot fancier. It also supports HDR displays on macOS—if you have a recent M1 or M2 Apple laptop, those fireflies in EXRs will truly radiate.
The third is a collaboration with KAIST & MSRA involving Seung-Hwan Baek, @tizianzeltner, Hyun Jin Ku, Inseung Hwang, Xin Tong, and Min Kim. We've put together the first comprehensive database of BRDF measurements that captures changes in the polarization state of light. 1/3
The second (with @gllmLoubet and @nholzschuch) introduces a new way of differentiating those pesky visibility-induced discontinuities by performing a change of variables that freezes them in place. This lets one do cool stuff :).
Merlin (@Merlin_ND) and Delio also recorded their SIGGRAPH Asia presentation of the paper "Mitsuba 2: A Retargetable Forward and Inverse Renderer" for those who missed it -- check it out here:
Many congratulations to Dr. Tizian Zeltner (wearing some nifty polarization optics on his graduation hat :-)). @tizianzeltner, it has been such a privilege to work with you! I can't wait to see what you will do next!
I'm actively looking for students, both at the PhD and postdoc level. If you're interested in inverse and differentiable rendering involving realistic light transport, please reach out! Some information is also available here:
Tizian Zeltner (@tizianzeltner) wrote a thorough animated introduction to polarized light, how it's handled in the renderer, and the many things one must watch out for to prevent sign errors in the result. (We even built a machine in the lab at some point as an extra "test case")
I am planning to recruit a PhD student in the upcoming admissions cycle (deadline: Dec. 15). Are you interested in differentiable rendering, rendering systems, and material appearance? Do you like hacking on Mitsuba? Please get in touch/spread the word!
Glint rendering suddenly becomes very simple (basically just do a few Newton steps on the normal map to find a connection). We also propose biased variants of everything that converge even faster, while remaining temporally coherent. 3/3
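To make "a few Newton steps" concrete, here is the textbook 1-D version of such an iteration — as a hedged sketch for intuition only; the paper's actual walk operates on a multi-dimensional specular constraint over the normal map:

```python
def newton(f, df, x0, tol=1e-8, max_steps=8):
    """Drive f(x) to zero with a few Newton steps (df is the derivative of f)."""
    x = x0
    for _ in range(max_steps):
        fx = f(x)
        if abs(fx) < tol:
            return x            # constraint satisfied: connection found
        x = x - fx / df(x)      # standard Newton update
    return None                 # give up after a small step budget

# Example: solve x^2 = 2, i.e. find sqrt(2)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```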
There is one big catch: visibility changes at silhouette boundaries introduce discontinuities that break the differentiation process within the renderer. The optimization generally diverges towards bizarre solutions unless extra steps are taken. (3/8)
Guillaume (@gllmLoubet) made a recording of his SIGGRAPH Asia presentation "Reparameterizing discontinuous integrands for differentiable rendering" for those who couldn't make it to Australia. Check it out here:
Our method collects a small amount of extra information during ray intersection (aka. sphere tracing) to construct a *reparameterization* that makes the discontinuities mathematically benign. This enables accurate gradient estimates. (4/8)
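For readers unfamiliar with the term: sphere tracing marches along a ray using the SDF value as a guaranteed-safe step size. A minimal Python sketch of the plain version (the paper's extra bookkeeping for the reparameterization is omitted, and `sdf` is a hypothetical callable):

```python
import numpy as np

def sphere_trace(sdf, o, d, t_max=100.0, eps=1e-5, max_steps=256):
    """Find the first intersection of the ray o + t*d with the SDF zero set."""
    t = 0.0
    for _ in range(max_steps):
        dist = sdf(o + t * d)   # distance bound to the nearest surface
        if dist < eps:
            return t            # converged onto the surface
        t += dist               # safe step: cannot overshoot the surface
        if t > t_max:
            break
    return None                 # no intersection within [0, t_max]

# Example: unit sphere at the origin, hit from z = -3 at t ≈ 2
t = sphere_trace(lambda p: np.linalg.norm(p) - 1.0,
                 np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```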
Disclaimer: Sobolev preconditioned gradient descent involving mesh energies has been used by others, in some cases decades ago. The contribution of this paper is to realize how useful such techniques can be for differentiable rendering, and to show how it all fits together. (7/7)
A feature request to the Apple GPU/Metal team (@graphicsguyale, @MGDev91, @gavkar, ..?). Many exciting graphics/compute applications these days involve dynamic compilation, and for this they require some way of piping their JITted code into the GPU. 1/4
nanobind addresses all three: compilation time improves by a factor of 2-3x, binary size by a factor of 3x, and runtime overhead by up to a whopping 8x compared to pybind11! This was possible thanks to technological improvements (C++17, PEP 590) and a philosophical shift. (4/7)
The paper and supplemental material are available here: . Enjoy!
Also, stay tuned: we will release another differentiable rendering project tomorrow… (7/7)
Baptiste Nicolet has put together a really nice Blender plugin over the course of the last few months. It imports Mitsuba's python extension module into the Blender process and then efficiently fetches mesh data from Blender's in-memory representation!
Over the last few months, I've been working with @wenzeljakob's team to develop an exporter addon for Mitsuba 2 in Blender. Today I'm happy to release its first version!
Go check it out here:
More features coming soon!
SDFs are a neat representation because they can easily adapt to objects of arbitrary topology. To use them for such reconstruction tasks, the rendered image must be differentiated with respect to the SDF parameters. (2/8)
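As a rough sketch, an image-based optimization loop in Mitsuba 3 looks like the snippet below. This follows the public Mitsuba 3/Dr.Jit API but optimizes an ordinary scene parameter rather than an SDF; the SDF-specific machinery is what the paper adds. Treat the parameter key, learning rate, and sample counts as placeholder assumptions:

```python
import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')             # differentiable backend ('llvm_ad_rgb' on CPU)

scene = mi.load_dict(mi.cornell_box())    # bundled example scene
ref = mi.render(scene, spp=512)           # reference image

params = mi.traverse(scene)
key = 'red.reflectance.value'             # placeholder parameter (not an SDF)
opt = mi.ad.Adam(lr=0.05)
opt[key] = mi.Color3f(0.01, 0.2, 0.9)     # deliberately wrong starting value

for it in range(100):
    params.update(opt)                    # push current values into the scene
    img = mi.render(scene, params, spp=16)
    loss = dr.mean((img - ref) ** 2)      # L2 image loss
    dr.backward(loss)                     # reverse-mode AD through the render
    opt.step()                            # gradient step on the parameter
```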
After a period of low activity, pybind11 has been completely revitalized thanks to several top contributors joining the team (Eric Cousineau, Ralf Grosse-Kunstleve, Axel Huebl, Henry Schreiner, Boris Staletic, Yannick Jadoul). Largest update in >3 years:
pybind11 development is now done by a team, with significant growth since those humble beginnings: it constitutes a core component of software across the world, including flagship projects like PyTorch and TensorFlow. The GitHub repository is cloned >100'000 times *per day*. (2/7)
Delio just released the reference implementation of his SIGGRAPH paper on differentiable SDF rendering. This was used to reconstruct a chair starting from a sphere in the Mitsuba 3 teaser video posted a few days ago.
We've just released the implementation of our SIGGRAPH 2022 paper on "Differentiable Signed Distance Function Rendering" on GitHub: . The code makes it possible to optimize SDFs from (synthetic) reference images and is based on Mitsuba 3/Dr.Jit! (1/3)
I'll soon have my last tenure defense talk and wanted to get a fancy shirt from @HUGOBOSS for the occasion. Then I found out that their business in Russia has actually grown (!) since the invasion of Ukraine (). How shameful. I guess I'll go somewhere else.
Mitsuba 3 is easy to install and use from Python, which simplifies many things. Materials, textures, and even full rendering algorithms can be developed in Python, which the system JIT-compiles into efficient megakernels for the GPU (via OptiX) or CPU (via LLVM). 2/10
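The "hello world" is correspondingly short — a minimal sketch; 'cuda_ad_rgb' assumes an NVIDIA GPU, substitute 'llvm_ad_rgb' for the CPU backend:

```python
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')            # pick the JIT backend at runtime
scene = mi.load_dict(mi.cornell_box())   # bundled Cornell box test scene
img = mi.render(scene, spp=64)           # executes as one fused megakernel
mi.util.write_bitmap('cbox.exr', img)    # save the result as an EXR
```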
Three EPFL students planning to attend @siggraph '22 are facing concerning delays in their Canada visa applications. They submitted biometric data and traveled in person to the nearest consulate with visa service (Lyon, France). They've been waiting 2 to 3 (!) months .. 1/2
It enables a multitude of new applications building on its support for inverse and differentiable rendering, GPU ray tracing, and spectro-polarimetric simulation.
Differentiable rendering can be surprisingly fragile whenever meshes are involved. A noisy gradient descent step is all it takes to turn the current reconstruction inside-out. The standard countermeasure for those kinds of problems is called Laplacian regularization. (2/7)
Why spend time on such a weird/niche thing? (Python bindings, really Wenzel? - it's not even related to your research..) The reason is that pybind11 is such a core component of all software that my lab develops that the overheads of the binding layer have become untenable. (6/7)
To learn more about nanobind, check out its page on readthedocs (). Benefits compared to pybind11 and other tools are explained here: . The nanobind logo was drawn by AndoTwin studios.
Beautiful work by Rohan Sawhney and Keenan Crane (@rohansawhney1, @keenanisalive) that shows how Monte Carlo techniques (widely used in the rendering community) can be used to solve extremely challenging geometric problems.
Very excited to share #SIGGRAPH2020 paper w/ @rohansawhney1 on "Monte Carlo Geometry Processing"
We reimagine geometric algorithms without mesh generation or linear solves. Basically "ray tracing for geometry"—and that analogy goes pretty deep (1/n)
The core idea is to go to second order (think: Newton's method) in the smoothness term, within an overall first-order optimization. The result can be interpreted as Sobolev preconditioned descent, or as a mesh reparameterization via @OlgaSorkineH's differential coordinates. (5/7)
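In NumPy/SciPy terms, one step of this idea could look as follows — a hedged sketch, not the paper's reference implementation; `L` is an n×n mesh Laplacian, `V` the n×3 vertex array, and `lam` an assumed smoothing strength:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def sobolev_step(V, grad, L, lam=19.0, lr=1e-2):
    """Precondition the vertex gradient by solving (I + lam*L) u = grad."""
    A = (sp.identity(V.shape[0], format='csc') + lam * L).tocsc()
    solve = spla.factorized(A)           # sparse factorization, reused per column
    u = np.column_stack([solve(grad[:, k]) for k in range(grad.shape[1])])
    return V - lr * u                    # descend along the smoothed direction
```

Solving against (I + λL) diffuses each vertex's gradient over its neighborhood, which decouples the variables that plain gradient descent struggles with.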
For those of you visiting the EPFL Portes Ouvertes this weekend: I am running a demo session where you can see my group's optical measurement lab in real life and watch me struggle to explain it all in French 😅. Sign-up:
I just released a new version of pybind11 that adds support for Python 3.8. It also fixes annoying crashes when importing multiple libraries that use pybind11 (@PyTorch et al.) produced by ABI-incompatible compilers (e.g. GCC/libstdc++ and Clang/libc++).
@samdutter @b_nicolet @_AlecJacobson Between 13 and 25 images with fixed viewpoints (→ Table 1). Figure 8 in the paper shows what quality you can expect with a really low number of views (say, 1 to 4). Also worth noting: absolutely no neural networks are involved here.
Roughly speaking, a Laplacian regularizer wants each vertex to be at the center of its neighbors. This tightly couples the optimization variables, which is something that first-order methods like gradient descent really struggle with. (3/7)
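Concretely, the (uniform) Laplacian encoding that statement can be assembled like this — a small sketch, assuming each undirected edge appears exactly once in `edges`:

```python
import numpy as np
import scipy.sparse as sp

def uniform_laplacian(n_verts, edges):
    """L = D - A, so (L @ V)[i] = deg(i) * (V[i] - mean of i's neighbors)."""
    i, j = edges[:, 0], edges[:, 1]
    w = np.ones(len(edges))
    A = sp.coo_matrix((w, (i, j)), shape=(n_verts, n_verts))
    A = A + A.T                                      # undirected adjacency
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())  # vertex degrees
    return (D - A).tocsr()
```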
@warrenm It sounds to me like you had a bad experience with CMake in the past (and it did have lots of problems some years back). But CMake has changed *significantly* over time — many points that people object to really aren't valid complaints anymore. So this tweet seems a bit harsh.
Baptiste has created an efficient self-contained Python package that simplifies solving linear systems using a sparse Cholesky factorization. It has CPU and CUDA backends and can exchange data with array programming frameworks like PyTorch/TensorFlow/JAX. Check it out!
I'm happy to share 'cholespy', a self-contained Cholesky solver for the CPU and GPU that is easily integrable with most tensor frameworks! You can find the code on GitHub or install it via PyPI: [1/4]
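From memory of the project README, usage looks roughly like the sketch below; treat the exact class and argument names as assumptions and double-check against the repository:

```python
import torch
from cholespy import CholeskySolverF, MatrixType

# SPD tridiagonal test matrix in COO form: 2 on the diagonal, -1 off-diagonal
ii = torch.tensor([0, 0, 1, 1, 1, 2, 2], device='cuda')
jj = torch.tensor([0, 1, 0, 1, 2, 1, 2], device='cuda')
vv = torch.tensor([2., -1., -1., 2., -1., -1., 2.], device='cuda')

solver = CholeskySolverF(3, ii, jj, vv, MatrixType.COO)  # factorize once

b = torch.ones(3, device='cuda')
x = torch.zeros_like(b)
solver.solve(b, x)     # solve A x = b; the result is written into x
```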
.. slows down every compilation step of essentially every C++ project. The issue () is apparently well-known but not a big priority. The developers say that they are too busy adding new features to dedicate serious resources to it.
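For the curious, the number is easy to reproduce — a small Python sketch, assuming `clang++` with libc++ is installed and on your PATH:

```python
import subprocess

# Preprocess a translation unit that only includes <vector> and count the bytes.
src = b'#include <vector>\n'
out = subprocess.run(
    ['clang++', '-x', 'c++', '-std=c++17', '-stdlib=libc++', '-E', '-P', '-'],
    input=src, capture_output=True, check=True)
print(f'{len(out.stdout) / 1e6:.2f} MB of preprocessed code')
```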
While much has been said and written about this problem, current methods often perform poorly. We present *projective sampling*, a new method to accelerate derivative evaluation by orders of magnitude. This is joint work with Ziyi Zhang (@Ziyi_Zh) and Nicolas Roussel (@njroussel). 2/7
Regularization is also always a compromise: we must give up on finding the best solution in exchange for one that is reasonably smooth. Our method addresses both of these limitations. (4/7)
We’ve also made a video tutorial that will guide you through your first steps when using this method in Mitsuba. It showcases the reconstruction of a geometric object using only its shadows. Link: 7/7
Sebastien Speierer (@seb_spe) has been instrumental in building RGL's differentiable rendering infrastructure (Mitsuba 3, Dr.Jit) and is on the job market this Fall. I can't recommend him enough!
After working for 3+ years on Mitsuba 3, I am now ready to move forward in my career and bring physically-based inverse rendering to the industry. So if you and your team are interested in such topics, feel free to reach out. Or let's chat in person at SIGGRAPH next week! 🚀
The paper and a video talk are available here:
The official implementation is already fully integrated into Mitsuba 3, if you wish to try it out for yourself! 6/7