Saturday, September 29, 2012

Parsing shaders

I've done some more work on the Maya translator to support the new concept of shaders. 

A "Shader" in Aurora is responsible for prelighting only. They're somewhat atomic pieces of logic that given shading geometry and some parameters produce either a color or float (for now. I'll introduce manifolds, prim vars and other fun concepts later) which forms a virtual node graph that eventually feeds into the material properties of the "Material". The material interfaces with the integrators, and is responsible for handing over a singe BxDF during light transport. So far I've only written a texture2d shader with exr support and a noise3d shader with support for various transforms and spaces, but now that I have a generalized framework for shading they're pretty straight forward/quick to write, so I'll add a bunch more as I get time.

The Python and Maya material interface is monolithic/ubershader-style for now - I just piggyback on the existing Maya material node, throw a bunch of my own attributes on there and parse them during scene translation. Internally it's all very modular, but this will do while I figure out how to parse and manage that more elegantly. I suspect I can leverage the Hypershade in Maya, but I'll need to flesh out how best to handle it during parsing, and also how to write custom nodes in Maya. Testing it out on a bigger scene it's behaving fine, but it has a pretty big impact on render times. It's nice to be able to control things interactively in Maya, though (and although I haven't implemented any IPR-style rendering yet, rendering with only a single bounce is pretty fast).

The framework puts a clean line between prelighting and lighting, allowing me to adaptively cache the former and stay unbiased. There's no caching mechanism implemented yet, though, so shaders are currently being re-executed upwards of thousands of times more than needed.
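
For the curious, the kind of thing I have in mind is a simple memoization keyed on the shading point - loosely like the sketch below, using the Shader interface from the sketch above. Nothing like this exists in the code yet, and the real version will need to be adaptive and thread safe:

    // Loose sketch of a prelighting cache: shader results are memoized per
    // quantized shading position, so repeated hits in the same cache cell
    // reuse the first evaluation instead of re-running the node graph.
    #include <array>
    #include <cmath>
    #include <map>

    typedef std::array<long, 3> CellKey;

    CellKey makeKey(const float P[3], float cellSize) {
        CellKey k = { { long(std::floor(P[0] / cellSize)),
                        long(std::floor(P[1] / cellSize)),
                        long(std::floor(P[2] / cellSize)) } };
        return k;
    }

    std::map<CellKey, std::array<float, 3> > gShaderCache;

    void cachedEvaluate(const Shader& shader, const ShadingGeometry& geo,
                        float cellSize, float result[3]) {
        const CellKey key = makeKey(geo.P, cellSize);
        std::map<CellKey, std::array<float, 3> >::const_iterator it =
            gShaderCache.find(key);
        if (it == gShaderCache.end()) {
            std::array<float, 3> v;
            shader.evaluateColor(geo, v.data());  // cache miss: run the node graph
            it = gShaderCache.insert(std::make_pair(key, v)).first;
        }
        result[0] = it->second[0];
        result[1] = it->second[1];
        result[2] = it->second[2];
    }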

A quick test on a medium-sized scene:
1.5 million polys, 9 light sources, multiple different materials, some with a procedural noise shader. 2048x1024 res, 8k samples per pixel and 16 light bounces. Render time was around 12h on my laptop, but it should drop considerably once I'm done with the shader cache. The lighting and material settings are pretty arbitrary, but I'm thinking I'll try to use this scene as a testing ground for my engine going forward, polishing it up a bit with textures and some proper lookdev and lighting. (Although the thought of UV mapping this beast isn't exactly appealing...)



-Espen

Tuesday, September 25, 2012

Bring the noise



A turbulence-style, world space Perlin noise implementation. I still need to add the interface for this to the Python API and Maya, but for now I'm having too much fun with this shading stuff to worry about pesky details like UI control.
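
For reference, the turbulence construction itself is just summed octaves of the absolute value of signed noise - something like this, where perlin3d stands in for whatever underlying noise function you have:

    #include <cmath>

    // Signed Perlin noise in roughly [-1, 1]; stand-in for the actual
    // implementation, which is too long to paste here.
    float perlin3d(float x, float y, float z);

    // Turbulence: sum octaves of |noise|, halving the amplitude and doubling
    // the frequency each octave. Evaluated directly on world space positions.
    float turbulence(float x, float y, float z, int octaves) {
        float sum = 0.0f;
        float amplitude = 1.0f;
        float frequency = 1.0f;
        for (int i = 0; i < octaves; ++i) {
            sum += amplitude * std::fabs(perlin3d(x * frequency,
                                                  y * frequency,
                                                  z * frequency));
            amplitude *= 0.5f;
            frequency *= 2.0f;
        }
        return sum;
    }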

-Espen

Monday, September 24, 2012

squares



That's right. Gone are the days of constant-colored Stanford models. Textures are the new black, green, pink and orange.

I added a shading engine responsible for feeding BxDFs their coefficients, so I finally have a proper environment for things like texture mapping. There are a lot of features still to be written, but the framework is there now, so this is where the fun begins. I kinda broke my OBJ parser in the process of adding prim var support, so normals are back to faceted, but that's all temporary. Be prepared for a bunch of updates on the shading side.
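
The gist of it, in sketch form (names illustrative, filtering and wrap modes omitted): instead of a BxDF owning a constant color, the shading engine evaluates an input per hit point - for texture mapping, a lookup at the interpolated prim var coordinates.

    #include <algorithm>
    #include <vector>

    struct Texture {
        int width, height;
        std::vector<float> rgb;  // width * height * 3 floats
    };

    // Nearest-neighbour texture lookup at prim var (s, t) coordinates.
    void sampleTexture(const Texture& tex, float s, float t, float out[3]) {
        int x = std::min(std::max(int(s * tex.width),  0), tex.width  - 1);
        int y = std::min(std::max(int(t * tex.height), 0), tex.height - 1);
        const float* px = &tex.rgb[3 * (y * tex.width + x)];
        out[0] = px[0];
        out[1] = px[1];
        out[2] = px[2];
    }

    // The shading engine calls something like this per hit to feed the BxDF
    // its coefficients, rather than the material storing a fixed value.
    void shadeLambert(const Texture& tex, const float st[2], float albedo[3]) {
        sampleTexture(tex, st[0], st[1], albedo);
    }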

-Espen

Thursday, September 20, 2012

No news is no news

No shiny new features this week, as I'm still fumbling around under the hood. Exciting times are close, though.


Here are some renders I ran as a sanity check for my Kelemen material and infinite area light. 1024x1024, single environment light source, 8 light bounces, 1 million polys - and I forgot to check the pixel samples. Render times were 15-20 min.






-Espen

Monday, September 17, 2012

Micro facets

In this week's episode I've been cleaning up my material code. I've changed the microfacet distribution of my specular model to the modified Beckmann distribution suggested by Kelemen et al., and made the interface a bit more generic than before, so I can extend it to a BSDF next when I add subsurface scattering - and later plug in a shading engine to support varying parameters through texturing, procedural patterns etc.
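
For reference, the textbook Beckmann distribution looks like the snippet below. The Kelemen et al. paper modifies this form, so treat it as the baseline rather than exactly what's in my code:

    #include <cmath>

    // Standard Beckmann microfacet distribution:
    //   D(h) = exp(-tan^2(theta_h) / m^2) / (pi * m^2 * cos^4(theta_h))
    // where theta_h is the angle between the half vector and the normal,
    // and m is the RMS microfacet slope (roughness).
    float beckmannD(float cosThetaH, float m) {
        const float kPi = 3.14159265358979f;
        float c2 = cosThetaH * cosThetaH;
        float tan2 = (1.0f - c2) / c2;
        return std::exp(-tan2 / (m * m)) / (kPi * m * m * c2 * c2);
    }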

For now, here's a pink Buddha. 2048x2048, 4k samples per pixel, 10 light bounces.


-Espen

Monday, September 10, 2012

Embree

I'm doing some work on the back end of Aurora to make things go faster. Since so many parts of the engine are a first pass, there's a ton of room for improvement. First up was comparing my core to that of a tested production engine. Now, it's hard - if not impossible - to get a good one-to-one comparison with any complete engine, and a lot of the open source material out there is geared more towards research than production, but there are a few packages that have what I'm after - in particular, Embree seems to be a good one.

"Embree is a collection of high-performance ray tracing kernels, developed at Intel Labs. The kernels are optimized for photo-realistic rendering on the latest Intel® processors with support for the SSE and AVX instruction sets. In addition to the ray tracing kernels, Embree provides an example photo-realistic rendering engine. Embree is designed for Monte Carlo ray tracing algorithms, where the vast majority of rays are incoherent. The specific single-ray traversal kernels in Embree provide the best performance in this scenario and they are very easy to integrate into existing applications."

And they weren't joking about the last part. With no external dependencies and a straightforward interface, it only took about a morning to replace my kd-tree and triangle intersection code with the BVH and intersection kernels in Embree and compare some render times. I feared the worst, but it wasn't all bad news.
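
Part of why the swap was this painless is that everything upstream of the intersection code only talks to a small interface, roughly like the sketch below (illustrative names and signatures, not my actual classes), so an Embree-backed implementation could slot in next to my own kd-tree:

    // Everything above the acceleration structure sees only this interface,
    // so swapping the kd-tree for Embree's BVH and intersection kernels is
    // a local change.
    class Ray;
    class Intersection;

    class Accelerator {
    public:
        virtual ~Accelerator() {}
        // Full intersection: fills isect with the nearest hit, if any.
        virtual bool intersect(Ray& ray, Intersection& isect) const = 0;
        // Occlusion-only query for shadow rays; no hit information needed.
        virtual bool intersectBinary(const Ray& ray) const = 0;
    };

    // class KdTreeAccelerator : public Accelerator { ... };  // my original code
    // class EmbreeAccelerator : public Accelerator { ... };  // wraps Embree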

The acceleration structure build times went from a few seconds for medium-sized scenes (a few hundred thousand polys) and over a minute for huge ones (several million) to less than a second for every case I could throw at it. I believe that's mostly down to the fact that my build isn't multithreaded and has a pretty steep algorithmic complexity that doesn't do well at high tree depths. I have a couple of papers on faster kd-tree build algorithms that I'm keen to try out.

Overall render speed improved by about 2-3x for smaller scenes and up to 4-5x for bigger ones. While a lot of that comes from the lack of SSE in my own code, it also suggests either bad memory layout or room for improvement on the tree traversal side of things. The plan is to eventually get my own code up to speed with the SSE and compiler trickery going on in Embree, but I'm more keen on getting on with other features at the moment, so I'm leaving the Embree kernels in there and will come back to this later.

For now, here are some renders I ran to see what I'm looking at in terms of render times and convergence points for medium sized scenes with different material types.

1024x1024 pixels, 250k polys, 3 area light sources, 10 light bounces, 8k samples per pixel, and Stanford Lucy with a lambert, a glossy Kelemen material and the new and improved glass material for speed comparison. The lambert material is converging at a pretty reasonable rate, but the caustics from the glossy/mirror lobes need a lot more samples, so I definitely need a smarter algorithm for path sampling/integration.

Before heading down that path, though, I'm seeing some valleys in the CPU load that seem to correlate with my display driver blocking the main thread and making everyone wait. I'll make sure it plays nice with others and works in parallel like everything else, which should hopefully yield some more speed improvements.






-Espen

Thursday, September 6, 2012

Refracting bunny

Hopefully bug free this time. I fixed the error where paths including a diffuse or glossy material and ending with Transmit->Light (or "direct caustics", I guess) were not contributing energy, and things are looking a lot better.

Next up is performance improvements, and running some contrived tests to ensure things are still unbiased and otherwise visually behaving like expected.

1024x1024, two area lights, a glass material with an IOR of 1.55, and a whole lot of pixel samples.


-Espen

Monday, September 3, 2012

Bending rays

What's a ray tracer without some good old caustics? With smooth shading normals in place, the next logical step was to get specular transmission in there and refract some rays. Compared to the microfacet stuff I've been digging into for specular models, implementing a perfect mirror model was reassuringly straightforward. I added a reflective mirror BRDF for good measure and wrapped it all into a glass material. Tinting is currently done at the interface only - so no fancy volumetric absorption along the ray until I have a proper volume pipeline - but it does the trick for now.
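
The heart of the transmission BTDF is just Snell's law; the standard construction goes something like this (not a paste from Aurora - directions assumed normalized, with the normal on the incident side):

    #include <cmath>

    // Refract incident direction wi about normal n. eta is the ratio of
    // incident to transmitted indices of refraction (e.g. 1.0 / 1.5 when
    // entering glass from air). Returns false on total internal reflection.
    bool refract(const float wi[3], const float n[3], float eta, float wt[3]) {
        float cosI = -(wi[0] * n[0] + wi[1] * n[1] + wi[2] * n[2]);
        float sin2T = eta * eta * (1.0f - cosI * cosI);  // Snell's law, squared
        if (sin2T > 1.0f)
            return false;  // total internal reflection: no transmitted ray
        float cosT = std::sqrt(1.0f - sin2T);
        for (int i = 0; i < 3; ++i)
            wt[i] = eta * wi[i] + (eta * cosI - cosT) * n[i];
        return true;
    }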


EDIT2: I was using this scene as a performance test, but figured I could throw a sphere in there to show off the caustics, since my other render was broken.
2048x2048, 16 bounces, 1 million polys, 15k samples per pixel (naive forward path tracing does not converge caustics particularly fast...):



Here's our hero with an index of refraction of 1.5:

EDIT: This one actually has a pretty hilarious - and, now that I've found it, rather obvious - bug. There are no caustics from direct light sources here (i.e. Eye -> Diffuse -> Transmit -> Transmit -> Light paths), only from indirect lighting (Eye -> Diffuse -> Transmit -> Transmit -> Diffuse/Glossy -> Light), so only the bounce off the floor is contributing to the caustics, not the light source itself. I'll leave the render up regardless, and post a correct one once I've wrapped up what I'm currently working on.


-Espen