My theory is that the hexagonal storm at the pole of planet Saturn is due to the UV interpolation around an extraordinary vertex in a Catmull-Clark subdivision surface...
Spot area lights can be tricky to implement because they don't have a simple physical meaning (condensers, reflectors, shutters, barndoors... they all act in very different ways). However, decent basic controls are fundamental to giving artists control over lighting.
Some renderers take a shortcut and compute Fresnel on the surface normal instead of the microgeometry. Done wrong, that makes high roughness look velvety, requiring some funky f0-f90 controls. This is what that looks like.
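For readers curious what the difference is in code, here is a minimal sketch of my own (not any specific renderer's implementation): Schlick's approximation, with the cosine taken against the half-vector H versus the macroscopic normal N.

```cpp
// Schlick's Fresnel approximation.
float schlick(float f0, float cosTheta)
{
    float m = 1.0f - cosTheta;
    return f0 + (1.0f - f0) * m * m * m * m * m;
}

// In a microfacet BRDF the Fresnel cosine should be dot(V, H), with H the
// sampled microfacet normal (half-vector). Evaluating it at dot(N, V)
// instead applies the grazing-angle boost to every microfacet whenever the
// *macroscopic* view direction is grazing, which on rough surfaces shows up
// as the velvety rim brightness mentioned above.
//
//   float F_micro  = schlick(f0, dot(V, H));  // per-microfacet
//   float F_approx = schlick(f0, dot(N, V));  // the shortcut in question
```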
I am expanding what the pixel inspector can do, now capturing and visualizing light paths contributing to the pixel. The scattered paths are in yellow, NEE in red.
Today I finally plugged the denoiser into the path tracer pipeline. It is expensive to run, around 15ms/frame, so I have work to do!! I feel the result is cool nonetheless! Denoising indirect illumination, caustics, and all ☺️
I replaced a complex data structure with a flat vector, and changed the algorithm to work with it. The result is twice as fast and uses about 1/3 of the memory.
Never underestimate the effectiveness of simplicity.
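For illustration (hypothetical types, not the actual change), the pattern is replacing a pointer-chasing structure with one contiguous array plus indices:

```cpp
#include <cstdint>
#include <vector>

// Before (schematic): a linked structure, one heap allocation per node,
// poor cache locality, pointer overhead on every element.
struct TreeNode {
    int       value;
    TreeNode* left;
    TreeNode* right;
};

// After (schematic): the same topology flattened into one std::vector,
// children addressed by index. One allocation, contiguous traversal,
// and 32-bit indices instead of 64-bit pointers.
struct FlatNode {
    int     value;
    int32_t left  = -1;   // index into the vector, -1 = none
    int32_t right = -1;
};
using FlatTree = std::vector<FlatNode>;
```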
The denoiser work is done! It works better than I hoped for. It is impossible to tell the difference between fp32 and fp16. It looks like nothing changes in the image, except that one takes half the time of the other, and at 7.1ms I can have the denoiser on by default.
This is not true. Raytracing was introduced in RenderMan 11 in 2003; here are the release notes:
Personally, I used raytracing rather extensively starting with Happy Feet, in 2005-2006.
No raytracing was used in renderman until 2013.
Some movies used shader techniques in specific spots similar to raytracing.
Quote from Pixar technical director Chris Horne from 2013
Can you spot the undefined behavior?
Back to doing some C++ with Eigen. Just wanted to transform some points, and this one had me scratching my head for a while. Solution in next tweet (thanks to my colleague
@w1th0utnam3
for pointing out the problem!)
🧵
1/4
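The original screenshot isn't reproduced here, but for anyone wondering what Eigen undefined behavior of this flavor looks like, here is one classic (documented in Eigen's own pitfalls page on `auto`), used purely as an illustration:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::Matrix3f A = Eigen::Matrix3f::Random();
    Eigen::Matrix3f B = Eigen::Matrix3f::Random();

    // UB: eval() produces a temporary; transpose() is a lazy expression
    // holding a reference to it. The temporary dies at the semicolon, so
    // C is a dangling expression template.
    auto C = (A + B).eval().transpose();
    std::cout << C << "\n";            // reads destroyed storage

    // Fix: name a concrete matrix type so the result is materialized.
    Eigen::Matrix3f D = (A + B).transpose();
    std::cout << D << "\n";
}
```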
I posted about this a while ago: sometimes you want to scale a light and adjust its luminosity, bigger light means brighter illumination; sometimes you like the luminosity, but you want to change the softness of the shadows. You need two ways to scale a light.
Shades of gray in the shadow penumbra aren't always appealing: a tinge of red or blue can help express warmth or cold and help convey the emotion in a shot. A light control to produce color fringes without having to paint emission textures can be useful.
I spent a few hours working on memory pools and now I can handle a hefty amount of unique geometries. Here are 1 million unique cubes, each with jittered points and colors.
The biggest bottleneck is building 1 million tiny BLASs.
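Schematically, the pool idea looks something like this (a toy fixed-size block pool, not Workbench's actual allocator): one big allocation carved into per-geometry slots, so a million tiny meshes don't mean a million trips to the system allocator.

```cpp
#include <cstddef>
#include <vector>

class Pool {
public:
    Pool(size_t blockSize, size_t blockCount)
        : blockSize_(blockSize), storage_(blockSize * blockCount) {
        freeList_.reserve(blockCount);
        for (size_t i = blockCount; i-- > 0;)          // reverse, so alloc()
            freeList_.push_back(storage_.data() + i * blockSize_); // hands out
    }                                                   // block 0 first
    void* alloc() {
        if (freeList_.empty()) return nullptr;          // a real pool would grow
        void* p = freeList_.back();
        freeList_.pop_back();
        return p;
    }
    void free(void* p) { freeList_.push_back(static_cast<std::byte*>(p)); }
private:
    size_t blockSize_;
    std::vector<std::byte> storage_;                    // the single big buffer
    std::vector<std::byte*> freeList_;
};
```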
When I was a lighter I rarely wanted to deal with radiometric quantities: set some value representing the radiant intensity, set the size... Often you change the size of a light to control the softness of its shadows, not its power; so you need two ways to scale a light.
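In code, the two modes boil down to whether the emitted radiance is renormalized by the light's area (a sketch with made-up names):

```cpp
// Two ways to scale an area light (illustrative): keep radiance fixed and
// let a bigger light emit more total power, or renormalize by area so that
// growing the light only softens shadows while illumination stays put.
float emittedRadiance(float intensity, float area, bool normalizeByArea)
{
    // false: brightness grows with size ("physical" scaling)
    // true : total power stays constant ("artistic" scaling)
    return normalizeByArea ? intensity / area : intensity;
}
```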
Ok, enough for today, the weekend is over. I have a basic top-down 2-way binned SAH BVH builder. It'll get more complex when I parallelize it and add support for higher branching factors. For now it's 176 lines of code (including comments and debug draw).
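For flavor, here is roughly what the core of a binned SAH split evaluation looks like (a self-contained sketch with toy types, assuming a valid centroid range; the real 176 lines obviously differ):

```cpp
#include <algorithm>
#include <cfloat>
#include <limits>
#include <vector>

struct Vec3 { float v[3]; float operator[](int i) const { return v[i]; } };

struct Aabb {
    Vec3 lo {{ FLT_MAX,  FLT_MAX,  FLT_MAX}};
    Vec3 hi {{-FLT_MAX, -FLT_MAX, -FLT_MAX}};
    void grow(const Aabb& b) {
        for (int i = 0; i < 3; ++i) {
            lo.v[i] = std::min(lo.v[i], b.lo.v[i]);
            hi.v[i] = std::max(hi.v[i], b.hi.v[i]);
        }
    }
    float area() const {
        float dx = hi.v[0]-lo.v[0], dy = hi.v[1]-lo.v[1], dz = hi.v[2]-lo.v[2];
        return (dx < 0.f) ? 0.f : 2.f * (dx*dy + dy*dz + dz*dx); // empty -> 0
    }
};

struct Prim { Aabb bounds; Vec3 centroid; };

constexpr int kBins = 16;

// SAH cost of the best of the kBins-1 candidate planes along one axis
// (lo/hi: centroid bounds on that axis, assumed hi > lo).
float bestSplitSAH(const std::vector<Prim>& prims, int axis,
                   float lo, float hi, int& bestBin)
{
    struct Bin { Aabb bounds; int count = 0; } bins[kBins];
    float scale = kBins / (hi - lo);
    for (const Prim& p : prims) {
        int b = std::min(kBins - 1, int((p.centroid[axis] - lo) * scale));
        bins[b].bounds.grow(p.bounds);
        bins[b].count++;
    }
    // Suffix sweep: area/count of everything right of each candidate plane.
    float rightArea[kBins]; int rightCount[kBins];
    Aabb acc; int cnt = 0;
    for (int i = kBins - 1; i > 0; --i) {
        acc.grow(bins[i].bounds); cnt += bins[i].count;
        rightArea[i] = acc.area(); rightCount[i] = cnt;
    }
    // Prefix sweep: evaluate cost = Nl*Al + Nr*Ar at each plane.
    Aabb left; int leftCount = 0;
    float best = std::numeric_limits<float>::max();
    for (int i = 0; i < kBins - 1; ++i) {
        left.grow(bins[i].bounds); leftCount += bins[i].count;
        float cost = leftCount * left.area() + rightCount[i+1] * rightArea[i+1];
        if (cost < best) { best = cost; bestBin = i; }
    }
    return best;
}
```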
Fixed some fireflies, improved numerical robustness. Using the correct Fresnel equation, one can set IOR=1 to disable the dielectric interface, and only render the substratum. Perhaps trickier to texture, but physically grounded, instead of some arbitrary "specular weight".
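The point about IOR=1: with the exact dielectric Fresnel, eta = 1 makes reflectance vanish identically, something a Schlick-style f0 control only achieves at exactly f0 = 0. A minimal sketch:

```cpp
#include <cmath>

// Exact unpolarized dielectric Fresnel. cosThetaI in [0,1];
// eta = IOR of the transmitted medium / IOR of the incident medium.
float fresnelDielectric(float cosThetaI, float eta)
{
    float sin2ThetaT = (1.f - cosThetaI * cosThetaI) / (eta * eta);
    if (sin2ThetaT >= 1.f) return 1.f;               // total internal reflection
    float cosThetaT = std::sqrt(1.f - sin2ThetaT);
    float rs = (cosThetaI - eta * cosThetaT) / (cosThetaI + eta * cosThetaT);
    float rp = (eta * cosThetaI - cosThetaT) / (eta * cosThetaI + cosThetaT);
    // With eta == 1: cosThetaT == cosThetaI, so rs == rp == 0. The interface
    // vanishes and all light reaches the substrate, no "specular weight" knob
    // required.
    return 0.5f * (rs * rs + rp * rp);
}
```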
I am currently mentoring no one. If you have questions or need some support in learning, hit me up. I mentor for free. For tech people: C++, raytracing, fundamentals of light transport... For artists: how to approach lighting, tips to develop your skills, how to interview...
I have been buried in tensors for the last couple of weekends and I am coming out for air... Still lots to do, learning a ton in the process! This is what I have been working on:
Last day of winter break. I added basic support for elliptical lights. Also added single-/double-sided emission, and an exposure control for when the light is visible to camera (setting it to zero makes the light invisible to primary rays).
Second iteration on path visualization. Added a filter to exclude non-contributing paths, plus an interactive selection to isolate a single path and see more details about it.
Rotation manipulators are tricky. I have chosen a less conventional approach for this one: instead of making you circle the mouse around the gizmo, which isn't natural, manipulation is constrained along the tangent at the point where you click (check the line). I find this more intuitive. What do you think?
I made some improvements in how I present the subdivision-surface wireframe. I am still tinkering with how this should be polished, but I feel this is a step in the right direction.
Sooner or later, I'll have to go back and write myself a few modeling operators, and their manipulators.
A first-draft adoption of imgui docking is working. My local edits made it tricky to get "captionless" windows and the imgui viewport to coexist. There is still a lot I'd need to polish.
Docking is black magic, so much code in the backend! Kudos to
@ocornut
for pulling that off!
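For reference, the entry points on the docking branch are small (the flags and calls below are real Dear ImGui API; the "Inspector" window is just an example of a captionless window):

```cpp
#include "imgui.h"

void setupDocking()
{
    // Once at init: enable docking (and, optionally, multi-viewport).
    ImGuiIO& io = ImGui::GetIO();
    io.ConfigFlags |= ImGuiConfigFlags_DockingEnable;
    io.ConfigFlags |= ImGuiConfigFlags_ViewportsEnable;
}

void drawUI()
{
    // Each frame: turn the main viewport into a dockspace, then submit a
    // "captionless" window that can dock into it.
    ImGui::DockSpaceOverViewport();
    ImGui::Begin("Inspector", nullptr, ImGuiWindowFlags_NoTitleBar);
    // ... widgets ...
    ImGui::End();
}
```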
Some rough surfaces with a matte finish exhibit sharp reflections at glancing angles. Not something current microfacet models do well. I am toying with a "polish" parameter to give control over that. Here is an experiment.
To me, native visualization tools are key for a speedy development cycle! By native I mean "inside the application". For example, this is the importance sampling visualization for the light controls I posted yesterday:
After nearly two years, I implemented the sphere in my project! Backward way of working in graphics... Typically, in a raytracer the sphere is the first thing you ever render.
Don't worry about the fireflies: I still need to implement path regularization.
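Since the sphere finally made it in, here is the textbook ray/sphere test for anyone following along (a generic sketch, not my project's code):

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// Classic ray/sphere intersection: returns the nearest positive hit
// distance along the normalized direction d, or nothing on a miss.
std::optional<float> intersectSphere(Vec3 o, Vec3 d, Vec3 center, float radius)
{
    Vec3 oc = sub(o, center);
    float b = dot(oc, d);                    // half of the quadratic's b term
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.f) return std::nullopt;     // ray misses the sphere
    float s = std::sqrt(disc);
    float t = -b - s;                        // near root first
    if (t < 0.f) t = -b + s;                 // origin inside the sphere
    if (t < 0.f) return std::nullopt;        // sphere entirely behind the ray
    return t;
}
```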
This experiment is a little nuts: non-linear undo.
Basically, if you undo some stuff and then make some changes, that state forks into another "parallel reality". You can then jump between any step across the forking paths to continue where you left off, or fork further.
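Structurally it's just a tree instead of a stack. A minimal sketch with hypothetical types: undo walks to the parent, a new edit after an undo forks a branch instead of discarding the redo history, and any node can be revisited later.

```cpp
#include <memory>
#include <string>
#include <vector>

struct EditState { std::string description; /* snapshot or delta */ };

struct UndoNode {
    EditState state;
    UndoNode* parent = nullptr;
    std::vector<std::unique_ptr<UndoNode>> children;  // the "parallel realities"
};

class UndoTree {
public:
    UndoTree() : root_(std::make_unique<UndoNode>()), current_(root_.get()) {}

    void commit(EditState s) {                 // new edit: fork from current
        auto node = std::make_unique<UndoNode>();
        node->state = std::move(s);
        node->parent = current_;
        current_->children.push_back(std::move(node));
        current_ = current_->children.back().get();
    }
    bool undo() {                              // step back toward the root
        if (!current_->parent) return false;
        current_ = current_->parent;
        return true;
    }
    UndoNode* current() { return current_; }   // jump target: any node
private:
    std::unique_ptr<UndoNode> root_;
    UndoNode* current_;
};
```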
This is the last post in a series about scene graph optimizations. In the video, each of the cubes of cubes is a little over 1/4 million transforms. Duplicating and manipulating that mass of nodes doesn't seem possible in most DCCs. 🧵1/4
Alright, alright. I took a quick redemption pass after posting the wrong polycount yesterday. Here are your complimentary 15,641,810,018,304 triangles.
Case in point: instancing alone doesn't make a good demo, not unless you want to make dragons' dust.
I parallelized the startup sequence; now it takes about 800ms for Workbench to boot up and be ready to use. cuDNN takes longer than that, so the denoiser won't kick in until a couple of seconds later. You can see it in this video. The operation is seamless.
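The gist of the approach, as a sketch (the functions are hypothetical stand-ins for the real subsystems): kick off slow, independent init tasks concurrently, and let the one nobody blocks on (the cuDNN-backed denoiser) finish in the background after the app is already interactive.

```cpp
#include <chrono>
#include <future>
#include <thread>

// Stand-ins for the real subsystems.
void initDenoiser()     { std::this_thread::sleep_for(std::chrono::seconds(2)); }
void loadDefaultScene() { std::this_thread::sleep_for(std::chrono::milliseconds(300)); }
void initWindowAndUI()  {}

int main()
{
    auto denoiser = std::async(std::launch::async, initDenoiser);
    auto scene    = std::async(std::launch::async, loadDefaultScene);
    initWindowAndUI();   // main thread: get pixels on screen ASAP
    scene.get();         // required before the first frame
    // ... main loop; enable the denoiser once its future is ready ...
    denoiser.get();
}
```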
This "thing" is beginning to look like an actual 3d editor.
I need a file format... I need to implement a file format / scene IO... it's going to be a lot of work...
New blog post! DNND 3: the U-Net architecture.
In the post I deconstruct the DNN architecture used in the OIDN denoiser to clean up path-traced renders.
#MachineLearning
#AI
With so many graphics people leaving the platform, I wonder how many are still here.
I try to compartmentalize, and not let one thing poison another. I’ll keep on posting in a positive spirit.
So… here is the problem with occasional gaming: by the time the forced update completes, my free time for gaming is gone and I've moved on to other stuff…
First screen recording of the year! Let's see if I can get back in the rhythm...😁 This is a weird one! (just to test rebuilding the dome light CDF per frame)
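For context, the per-frame rebuild amounts to recomputing a discrete CDF over the environment-map texel luminances and sampling it by binary search. A toy sketch (assumes at least one non-black texel; the real thing is surely fancier):

```cpp
#include <algorithm>
#include <vector>

struct DomeCdf {
    std::vector<float> cdf;   // cdf[i] = normalized sum of weights [0..i]

    void build(const std::vector<float>& luminance) {
        cdf.resize(luminance.size());
        float sum = 0.f;
        for (size_t i = 0; i < luminance.size(); ++i)
            cdf[i] = (sum += luminance[i]);     // running prefix sum
        for (float& c : cdf) c /= sum;          // normalize to [0,1]
    }
    size_t sample(float u) const {              // u uniform in [0,1)
        return std::lower_bound(cdf.begin(), cdf.end(), u) - cdf.begin();
    }
};
```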
Ok, last offer... slightly faster than before, with twice as much stuff. 20 million cubes... last offer!
PS: if you are curious about the scene hierarchy, check the last few seconds of the video.
It wasn't planned, but I gave a shot at improving the selection wireframe in the path tracer. I feel it turned out pretty sleek, and it was simpler to implement than I anticipated.
At what point do you call it a path tracer? Is indirect diffuse enough? I am counting on Twitter's denoiser to make it look good... 😊
Anyway, time is up for today. I didn't originally plan to work on this on my day off.
There we go, I have a super naïve raytracer running on a single thread on the CPU. Not a bad start, but clearly a monumental amount of work left to do :D
In the video I start with the rasterizer view and switch to raytracing (looks the same for now, modulo aliasing and the grid)
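At this stage the whole renderer is schematically just this (illustrative only, with a gradient standing in for actual ray casting):

```cpp
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

// Stand-in for camera-ray generation + scene intersection.
Color trace(int x, int y, int w, int h)
{
    return { float(x) / w, float(y) / h, 0.25f };
}

// One thread, one ray per pixel, no acceleration structure: super naive.
std::vector<Color> render(int w, int h)
{
    std::vector<Color> image(size_t(w) * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            image[size_t(y) * w + x] = trace(x, y, w, h);
    return image;
}
```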
Improved panel alignment. Now, inactive resize bars and the frame around the render viewport are consistent. I believe it gives a small touch of elegance and solidity. Everything can be hidden, leaving just a massive viewport.
Scene graph update. 100,000 objects grouped as 100 top-level groups containing 1000 cubes each. A procedural animation moves 15 of those top-level groups, which propagates xform updates to 15,000 objects. The scene graph updates per frame only what changed.
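One way this kind of "only what changed" update can work, as a sketch (hypothetical layout, not my actual code): nodes stored in a flat array with parents before children, so a single forward pass propagates dirty flags and only touches subtrees whose ancestors changed.

```cpp
#include <vector>

struct Node {
    int   parent = -1;     // index into the same array, -1 for roots
    bool  dirty  = false;  // set when the local transform is edited
    float local[16];       // column-major 4x4, details elided
    float world[16];
};

void updateTransforms(std::vector<Node>& nodes)
{
    for (Node& n : nodes) {
        // Inherit dirtiness from the parent: editing 15 top-level groups
        // marks exactly their 15,000 descendants, nothing else.
        if (n.parent >= 0 && nodes[n.parent].dirty) n.dirty = true;
        if (!n.dirty) continue;
        // world = parent.world * local (4x4 multiply elided in this sketch)
    }
    for (Node& n : nodes) n.dirty = false;  // clear for the next frame
}
```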
@Nothke
@cmuratori
Beginner programmer: flat array using std::vector
Intermediate programmer: complex data structure using custom templated containers and allocators
Expert programmer: flat array using malloc.