Cars 3 and deferred texturing

I haven't spoken much about Cars 3 in the last few years. This time I will, because I think the film deserves attention: it looks incredible, and it forced us to solve a lot of interesting problems. One of them is deferred texturing.



I am going to go back to the initial issue on Finding Dory, the show on which we switched renderers to RIS. That gave us fantastic new potential for render quality, but it also brought the end of our classic Renderman Shading Language, along with all our traditional shading tools, including one we called Picasso.


Using math figured out, and later patented, by my friend David Ryu in 2008, Picasso allowed a user in Maya to build a set of poses (e.g. T-pose, exploded prims, or open mouth) so that hard-to-reach parts could be painted with ease. Each pose could be rigid, meaning transform matrices were the only data stored per pose, or deformable, in which case an additional 3D primvar was stored with each point.
Each pose had an associated rig of cameras, and each camera (usually orthographic) could project paint onto a subset of objects, on one or several "channels", labeled arbitrarily or following specific conventions such as Albedo, ShinyDull, SmallDisp, etc. Our shading networks would have strategically placed special "receiver" nodes with matching channel names.
The order of the cameras determined the composition order of the paint, which would alpha-composite like Photoshop layers (in "normal" mode). The math is quite specific about how much of each layer to show, based on camera depth, clipping planes, incident normals and a number of other parameters. At render time, when the objects deformed, the paint and its layering were computed using reference points and normals, so the paint wouldn't "swim".
The matching of receiver nodes with projected paint was resolved dynamically at render time.
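To make the layering concrete, here is a minimal sketch of that kind of weighting and compositing. All names are hypothetical and the weighting is deliberately simplified; the actual (patented) math accounts for more parameters than this.

```python
# A simplified sketch of projection-paint layering (hypothetical names,
# not the actual Picasso math). Each camera contributes one RGBA layer,
# weighted by how well it "sees" the surface point.

def layer_weight(depth, near, far, facing_ratio):
    """How much to trust one camera's projection at a surface point.

    depth:        point depth in the projecting camera's space
    near, far:    that camera's clipping planes
    facing_ratio: dot(normal, direction toward the camera), in [-1, 1]
    """
    if depth < near or depth > far or facing_ratio <= 0.0:
        return 0.0  # clipped out, or the surface faces away from the camera
    return facing_ratio  # fade as the surface turns away

def composite(layers, default_rgba):
    """Alpha-composite layers in camera order, like "normal" Photoshop layers.

    layers: (rgba, weight) tuples, bottom layer first; starts from the default.
    """
    out = list(default_rgba)
    for (r, g, b, a), w in layers:
        a = a * w
        out[0] = r * a + out[0] * (1.0 - a)
        out[1] = g * a + out[1] * (1.0 - a)
        out[2] = b * a + out[2] * (1.0 - a)
        out[3] = a + out[3] * (1.0 - a)
    return tuple(out)
```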


Now, was this the most efficient possible way to texture an asset? Probably not, and certainly not at render time. But in the Reyes algorithm the relative cost of this texturing was fairly minor, and it let artists do complex texturing on assets that are traditionally hard to UV. Perhaps also because of Picasso, Pixar has always been resistant to adopting UVs.


The origin of the name is no longer clear, but I believe Picasso drew it from the Spanish painter who, for some time in his life, adopted cubist techniques, in which paint on a surface appears to come from projections other than the point of view itself.

Picasso was originally built on hierarchical Renderman attributes; the later incarnations I wrote on Monsters University relied progressively more on dynamically generated, or code-gen'ed, RSL co-shaders that looked at the camera setups and layered the paint appropriately.

Why did we have to lose all this? It's not just about the disappearance of RSL. In Pixar's RIS, we run shaders two orders of magnitude more times for a final render than we did in Reyes. The whole concept of co-shaders (hierarchically bound objects with an API that could be fetched dynamically by name) went away to help improve the new Renderman's performance. While those changes were needed, they had serious consequences for our traditional workflows: if we wanted to apply a single layer of paint on top of 100 objects with 30 different bound shaders, we would have to go into each one of those objects and splice texture and projection nodes into their shading networks, and hope we didn't change our minds or make mistakes, or else do it all over again.
 

Fortunately, we didn't just switch to RIS; we also switched to Foundry's Katana, away from our traditional, declarative lighting suite called Lumos. In Katana, deferred but inspectable proceduralism is a solved problem, at the very core of the tool. In our shading networks we started adding nodes, named "Receivers", which only have a channel name (to tune in to a paint broadcaster) and a default RGBA for when no paint is found at all, or when there are alpha holes in the paint.
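Conceptually, a Receiver evaluates to something like the sketch below (hypothetical names; the real node lives in a shading network, not in Python):

```python
# What a Receiver boils down to, conceptually (hypothetical names).
# The default RGBA fills in wherever no broadcaster matched the channel,
# or wherever the matched paint has alpha holes.

def receiver_eval(resolved_paint, default_rgba):
    if resolved_paint is None:  # no broadcaster tuned to this channel
        return default_rgba
    r, g, b, a = resolved_paint
    dr, dg, db, da = default_rgba
    # composite the paint over the default, so alpha holes reveal the default
    return (r * a + dr * (1.0 - a),
            g * a + dg * (1.0 - a),
            b * a + db * (1.0 - a),
            a + da * (1.0 - a))
```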

I wrote a Katana "SuperTool" that looks somewhat analogous to the built-in "Gaffer". That is no accident: we had modified Katana's Gaffer quite heavily for our short The Blue Umbrella (by Saschka Unseld), and had gained a lot of insight into its workings. Instead of creating lights, this SuperTool creates cameras and settings; based on that setup, at render (or rib generation) time, it generates and wires in the appropriate texture, projection, and blending nodes, purely based on the channel names and their composition order and style.
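At rib generation, the resolution step amounts to something like the following sketch. All names here are hypothetical stand-ins; the real implementation operates on Katana's scene graph rather than on Python objects.

```python
from dataclasses import dataclass

# A sketch of the deferred wiring step (hypothetical data model, not the
# actual SuperTool code). Each projection camera broadcasts on a channel;
# each Receiver in a shading network tunes in to one channel.

@dataclass(frozen=True)
class Projection:
    camera: str
    channel: str   # e.g. "Albedo", "GraphicMask"
    order: int     # composition order within the channel

@dataclass(frozen=True)
class Receiver:
    node: str
    channel: str
    default_rgba: tuple

def wire_receivers(receivers, projections):
    """Decide, per receiver, which projection stack feeds it."""
    wiring = {}
    for recv in receivers:
        stack = sorted((p for p in projections if p.channel == recv.channel),
                       key=lambda p: p.order)
        # an empty stack means the receiver falls back to its default RGBA
        wiring[recv.node] = [p.camera for p in stack] or recv.default_rgba
    return wiring
```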

The nice thing about this is that the authored shaders didn't have to be set up knowing what paint would become available in the future; that part would be figured out later, even on a shot-by-shot basis. Instead of a texture node, we just put in a Receiver with channel "GraphicMask", or "MetalColor", or "Displacement". This is especially important for the Cars movies, because we use a ton of graphics, which are often finalized and approved long after the assets are shaded and painted. So there, deferred projections solved.

In fact, why even stop at projection painting? Why not make it able to read paint no matter where it came from? That's why we picked a multi-talented scientist and artist to name this new tool: Leonardo.

Since the introduction of Foundry's Mari into our pipeline on Monsters University, UDIM-based painting has taken a growing role in our workflow, especially on characters. This is notably later than much of the VFX community, in part a consequence of how much Pixar cherishes the ability to make creative choices live, late in the pipeline, while UV painting sort of bakes them in. Leonardo allows us to switch techniques at any point, to cater to that need. So we added support for UV and UDIM textures, and extended it with support for Disney's UV-less texturing format, Ptex, which came in handy for complex meshes that are typically difficult to UV correctly.
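For those unfamiliar with the convention, a UDIM tile number encodes which unit square of UV space a texture covers, and the mapping is standard:

```python
# The standard UDIM tile convention: tiles 1001-1010 cover U in [0, 10),
# and each unit of V steps the tile number by 10.

def udim_tile(u, v):
    """Return the UDIM tile number containing UV coordinate (u, v)."""
    return 1001 + int(u) + 10 * int(v)

assert udim_tile(0.5, 0.5) == 1001  # the first tile
assert udim_tile(1.2, 0.0) == 1002  # one tile over in U
assert udim_tile(0.3, 2.7) == 1021  # two rows up in V
```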

We didn't end up stopping there. By the time the show was wrapping, we were able to read point clouds, primitive variables (or primvars), OpenVDB, and even lights and interactive space-shapers in Katana, which we call Rods. That came in very useful when we had to drive complex, shared paint, such as animated streaks of dirt and damage shared between characters and environments. Eventually, we created a "Broadcaster" setup that allowed deferred splicing of arbitrary networks with matching channel names, for those special cases that were not covered by our built-in Leonardo layers.
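The generalization is the same channel-matching idea, just with pluggable sources; roughly (hypothetical names again, not the actual Broadcaster code):

```python
# A sketch of the generalized Broadcaster idea (hypothetical names): the
# channel registry no longer knows or cares whether a source is a camera
# projection, a UDIM/Ptex read, a point cloud, or an arbitrary network.

BROADCASTERS = {
    "Albedo":      {"kind": "projection", "cameras": ["frontCam", "topCam"]},
    "GraphicMask": {"kind": "udim",       "path": "graphics.<UDIM>.tex"},
    "SmallDisp":   {"kind": "ptex",       "path": "disp.ptx"},
    "DirtStreaks": {"kind": "network",    "nodes": ["rodFalloff", "noise"]},
}

def splice_for(channel, default_rgba):
    """Pick what gets spliced in for a receiver on this channel."""
    return BROADCASTERS.get(channel, {"kind": "constant", "rgba": default_rgba})
```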



I hope you get to see Cars 3. A lot of hard work and love went into it, and what came out is a surprisingly involving emotional piece, as well as real eye candy.
