The last Reyes project
In my engineering position in Tools at Pixar, I am fortunate to get to work on pretty much all of Pixar's theatrical projects. My department tends to start as early as four years before delivery, and we help along the way until a year or so before the release date. I was actually done with my work on The Good Dinosaur a long time ago, and seeing the trailer almost caught me by surprise (a great one, as I hadn't seen recent imagery in months).
This show marks a special milestone at Pixar, though, for us who work on the technical aspects of our films. It is the last show we will release with our Reyes implementation of Renderman. It still uses a lot of the work we did on Monsters University, where we implemented a physically based engine using a hybrid raytracer on top of Renderman's Reyes. The quality of the environments in the huge world portrayed in The Good Dinosaur, however, raised the bar again and is a true achievement in itself.
Here is the latest trailer. I think it is a true beauty, and it shows how far we were able to push photorealism before taking a leap into the new renderer: RIS, Renderman's path tracer.
In the last couple of years, I had the chance to give some talks around the world about the differences between Reyes, raytracing and path tracing, and how we use them, and I have learned to appreciate their strengths and weaknesses. We are still hard at work making the new renderer work well for our artists on a platform like Katana.
So why did we change? I mean, other than the fact that most of the VFX world is working with path tracers by now - be it Manuka, Glimpse, Arnold or you name it.
While the selling point for us was the quality of water renders - they tend to matter when making a sequel to Finding Nemo - the main appeal to many of us was the speed at which we can get feedback on complex illumination phenomena. It was a bit later, though, that I started seeing the real difference in the quality of the renders: in the same render times we got so much more detail that my eyes hurt. While in Reyes we were shading 4 to 10 samples per pixel, we are now doing thousands. All the pain we went through in the transition was quite frankly worth it.
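To see why thousands of samples per pixel make such a visible difference, here is a minimal, hypothetical Monte Carlo sketch (none of this is Pixar code; `shade_sample` is just a noisy stand-in for tracing one path): the error of an averaged pixel estimate shrinks roughly with the square root of the sample count.

```python
import random
import statistics

def shade_sample():
    # Hypothetical shading function: one noisy radiance estimate,
    # standing in for a single path traced through a pixel.
    return 0.5 + random.uniform(-0.5, 0.5)

def render_pixel(num_samples):
    # Average independent samples; Monte Carlo error falls
    # roughly as 1/sqrt(num_samples).
    return sum(shade_sample() for _ in range(num_samples)) / num_samples

def noise(num_samples, trials=200):
    # Standard deviation of the pixel estimate across repeated renders,
    # i.e. how much the pixel "flickers" between runs.
    return statistics.stdev(render_pixel(num_samples) for _ in range(trials))

random.seed(0)
few = noise(8)       # Reyes-like budget: a handful of samples per pixel
many = noise(1024)   # path-tracer-like budget: thousands of samples
print(few, many)     # the second figure is roughly 10x smaller
```

Going from 8 to 1024 samples is a 128x increase in work per pixel, but only about an 11x reduction in noise, which is why a path tracer needs both the raw sample counts and good importance sampling to look clean in comparable render times.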
The challenge I have seen in the new architecture is that we lost our classic bag of tricks and had to dive in with a fresh mind, but without much time to do so. Things were weird and unintuitive. Bump (at least the way we formulate it) is now much more expensive than displacement. The cost of primvars is, comparatively, much higher. We lost a whole shading language, RSL, as well as some good old tools we were used to. More importantly, we lost the flexibility that dynamic networks of coshaders and lights provided us, and have to make up for it with a whole new toolset for connecting bxdfs and patterns in complex ways.
We are now ramping up the next two shows to use the new toolset, and it is just now starting to look more appealing than daunting. We're over the hump.