Portable Material Descriptions

Let me share my thoughts about a problem I've had to deal with for a few years, since before Finding Dory.

The creative vision behind Coco, paired with the time constraints we had, forced us to get creative with the technology. In a previous post I talked about how we wrote a new surfacing tool, called Flow. One of the immediate problems we had to address was the exchange of materials: how do we make sure the appearance in our interactive renderer is compatible with our rendering in Katana and RenderMan? And even more now, after Coco: how do we round-trip these materials across these different suites?

Screenshot from the Coco VR experience.

In our industry, portable material description is not a new problem. In fact, it is many problems in one:
  • VFX studios have to work on the same show and share props, characters or whole environments, but for technical, competitive or legal reasons they cannot share the technology used to render them.
  • A high quality material needs to look fairly accurate in a fast preview, so animators can use it to make creative decisions in real time.
  • A studio may need to export assets so they can be adapted for video games, toy makers, or VR.
  • An asset that renders today may not work in a future pipeline or renderer 10 years from now, so materials need to work not only across studios and tools, but also across time. Whole archives of assets stored this way are often referred to as "digital backlots" (at least ours is), and I'll talk more about them in a future post.

There have been several remarkable attempts at describing looks for exchange. Let's examine a few:


Alembic

Alembic is the most widespread pose-caching (post-animation) format in the film industry. Sony Imageworks and ILM worked together on its development and its Open Source release, and version 1.1 introduced lights and materials. The internal implementation includes a material resolver, so hierarchies can be described with inheritance and sparse overrides, a pretty powerful feature, similar to how Katana resolves materials.
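To make the idea concrete, here is a toy Python sketch of resolving a material through a hierarchy with inheritance and sparse overrides. This is not the Alembic API; every name here is hypothetical, and it only illustrates the resolution rule.

```python
# Toy sketch of material resolution with inheritance and sparse overrides,
# in the spirit of Alembic 1.1's material resolver. NOT the Alembic API;
# all names are hypothetical.

def resolve_material(hierarchy):
    """Walk the hierarchy root-to-leaf; each level sparsely overrides its parent."""
    resolved = {}
    for assignment in hierarchy:      # ordered root first, leaf last
        resolved.update(assignment)   # child opinions win over inherited ones
    return resolved

# A base metal inherited down the hierarchy, with one sparse override at the leaf.
base_metal = {"shader": "standard", "roughness": 0.4, "metallic": 1.0}
rusty_leaf = {"roughness": 0.9}       # only the changed parameter is authored

material = resolve_material([base_metal, rusty_leaf])
# material keeps the inherited shader and metallic, with roughness overridden
```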

However, while Alembic's geometry description is used all over the place, this rather stable format has seen few updates recently, and industry adoption of its material description is extremely sparse. That is true even among some of Alembic's own developers, who have moved on to newer technologies. In fact, major third-party Alembic plugins don't even support the exchange of materials.

Pros:
  • Tiny dependencies
  • Widespread abc format
  • Stable library
  • Well compressed files with de-duplication
  • Open Source
  • Includes a resolver for bindings, hierarchies, overrides
Cons:
  • Material description is hardly used
  • No enforced standardization


MaterialX

While Alembic can, in theory, carry material descriptions, studios have recognized a need for content standardization and compatibility. ILM took this to heart, with Doug Smythe and Jonathan Stone spearheading the development. The first conversations go all the way back to Siggraph 2012, and early adopters included Pixar's tools-shading and USD teams, who got to contribute to its shaping, as well as The Foundry. This resulted in the 2017 Open Source release of MaterialX.

At this point in time, Autodesk is the primary facility other than ILM that has put real engineering muscle behind MaterialX, writing a shader generator called ShaderX.

MaterialX exchange files are implemented as simple XML, but the real strength is in the schemas. It is very rigorous in its pattern description, somewhat inspired by compositing systems and by Mari's internal node graph, with canonical nodes shipped with OpenGL and OSL implementations to clarify behavior and maximize compatibility. It has well-defined rules for targeting different platforms, and for describing complex color spaces. It supports node grouping and encapsulation, as well as some basic referencing. Because it only supports materials and no geometry, MaterialX specifies rules, but no coded resolvers, for procedural material bindings and references.
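To give a feel for the format's shape, here is a sketch that emits a minimal MaterialX-style document using only Python's standard library, rather than the MaterialX API. The element and attribute names loosely follow the published spec (a nodegraph of typed nodes feeding an output); treat them as illustrative, not canonical.

```python
# Build a minimal MaterialX-flavored XML document with the standard library.
# Loosely mirrors the spec's shape; names here are illustrative only.
import xml.etree.ElementTree as ET

root = ET.Element("materialx", version="1.36")
graph = ET.SubElement(root, "nodegraph", name="NG_example")
# A typed pattern node, and an output that points back at it by name.
ET.SubElement(graph, "image", name="diffuse_tex", type="color3")
ET.SubElement(graph, "output", name="out", type="color3",
              nodename="diffuse_tex")

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```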

At ILM, MaterialX has been used on a few projects, starting with "Star Wars: The Force Awakens" and "Rogue One", where some shots were rendered fully in Unreal Engine, using MaterialX as a common description. The VR project "Trials on Tatooine" is also a great example.

As of today, a handful of other studios are testing MaterialX, and DCCs are working on robust implementations that will enable wider adoption.

Pros:
  • Simple API
  • Small dependencies
  • Rigorous pattern standardization
  • Includes implementations
  • Self contained
  • Growing interest in the industry
  • Open Source
Cons:
  • Separate from geometry
  • Some rules expressed, but resolution is left to each client to implement
  • BxDF standardization not fully fleshed out (yet)

Universal Scene Description

After the Open Source release in 2016, Pixar has been hard at work on USD's shading description. It comes out of the box with all the features of USD's core, such as references, variant sets, layering and composition, and a very optimized file format. While the shading description is now getting more mature, lots of work is still going into Hydra (USD's render service protocol), so that its output to each internally supported renderer (GL, OptiX, or CPU/offline) will be accurate.

To some extent, USD looks like Alembic on steroids, in the sense that it describes geometry, shading, dressing, lighting and animation in one composed layer stack. As such, it resolves references and layering, and provides facilities to bind materials to geometry, as well as schemas to resolve bindings and collections.
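As a mental model, composition can be pictured as "strongest opinion wins" across an ordered layer stack. Here is a toy Python sketch of that rule; it is not the pxr API, and the layer and attribute names are made up for illustration.

```python
# Toy model of USD-style composition: a layer stack is ordered strongest to
# weakest, and for each attribute the strongest authored opinion wins.
# NOT the pxr API; purely illustrative.

def compose(layer_stack):
    """layer_stack is ordered strongest first, as in a USD layer stack."""
    composed = {}
    for layer in layer_stack:
        for attr, value in layer.items():
            composed.setdefault(attr, value)  # first (strongest) opinion wins
    return composed

shot_layer  = {"diffuseColor": (0.2, 0.1, 0.1)}    # shot-specific override
look_layer  = {"roughness": 0.3}                   # sparse shading override
asset_layer = {"diffuseColor": (0.8, 0.8, 0.8), "roughness": 0.5}

final = compose([shot_layer, look_layer, asset_layer])
```

Each layer only authors the opinions it cares about, which is what makes sparse overrides composable: weaker layers shine through wherever a stronger layer stays silent.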

Working closely with ILM, my team aligned itself closely with the concepts and language of MaterialX, ensuring compatibility and, in the near future, full support, meaning that you will be able to exchange MaterialX data directly in USD.

At Pixar, Flow of course uses USD: it has to fit in our pipeline, which is almost all USD. Besides, we actually use a lot of USD's advanced shading feature set.

Flow's native source data is USD itself, as opposed to having its own file format and then "exporting" to USD. It references pre-authored library materials, applies overrides, and exports only the sparse overrides, as well as whole new materials, shader bindings and public interfaces.
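The "export only the sparse overrides" step can be sketched as a diff against the referenced library material. This is a hypothetical illustration, not Flow's actual implementation.

```python
# Sketch of sparse-override export: compare the edited material against the
# referenced library version and author only what actually changed.
# Hypothetical illustration; not Flow's code.

def sparse_overrides(library, edited):
    return {name: value for name, value in edited.items()
            if library.get(name) != value}

library_metal = {"roughness": 0.4, "metallic": 1.0, "specular": 0.5}
edited_metal  = {"roughness": 0.9, "metallic": 1.0, "specular": 0.5}

overrides = sparse_overrides(library_metal, edited_metal)
# Only the changed roughness would be written to the exported layer.
```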

We can create derived materials (similar to Katana's child materials) at any depth of hierarchy. We can import and visualize alternate modeling variants, and export shading variants (currently in Katana, coming soon to Flow too), each with arbitrary complexity.

Finally, we export all this back to Katana, which in turn applies more sparse overrides, including collections and new materials, and exports another USD layer that will be composed on top. Assets shaded this way are then USD-referenced into shots during layout or set dressing, ready for more shot-specific overrides. Soon, we'll even be able to render directly from USD in RenderMan (R22 and later).

Did we use MaterialX too? I claim we did, at least in spirit, in the sense that MaterialX's design has influenced USD shading a lot. What we don't use is the exhaustive list of nodes provided with MaterialX. They don't match closely how we set up our networks: we use fewer, larger nodes with multiple outputs, while MaterialX defaults to many smaller nodes with a single output. See my older posts if you are curious about how we keep our renderers matching.

Other than Pixar, Animal Logic, Luma and SideFX have been actively using and developing USD and its plugins.

Pros:
  • Very powerful feature set
  • Composition resolved; clients don't need to implement it
  • Can read Alembic
  • MaterialX compatibility
  • Wide DCC support
  • Growing industry adoption
  • Great compression and fast file I/O
  • Open Source
  • Easy setup for the public interface of networks
Cons:
  • Moderate build dependencies, more if Hydra and Python bindings are needed
  • No rigorous standardization of node types (unless using MaterialX schemas)


Multiverse

One of the advantages of the systems above is that they are open source. Partly because of that, some DCCs have additionally been relying on Multiverse for exchanging material data. Multiverse, distributed by Foundry, is not itself a file format; it relies on the existing ones, currently supporting, to varying degrees, USD, Alembic and MaterialX, and providing plugins for loading and storing such data. We haven't tried it here, but I love the spirit of it.

Material Description Language

Independently from these efforts, NVIDIA came up with a different solution that spans their own and some external renderers: MDL, the Material Description Language.

This product is significantly different from the ones above: it's not open source, and it's less a node graph description than a shading language (although one can often be mapped into the other).
MDL tries to be a standard representation for multiple targets, from real-time engines, to interactive ones (Iray), all the way to photoreal offline renderers like Mental Ray or V-Ray. It is the responsibility of each engine to provide full or partial implementations for each call of the material.

As the science of physically based illumination has settled over the years, MDL has put a lot of effort into defining standards in its description, so BxDFs and lights are enforced to be quite portable. Pattern description is simpler and, while more portable, it looks a bit like a more declarative version of the industry's favorite Open Shading Language (OSL) from Sony Imageworks.

With the exception of V-Ray and Substance from Allegorithmic, the penetration of MDL in the film industry is not clear to me at this point. I'd love to hear from you readers.

Pros:
  • Standard BxDFs and lights
  • Large user base in other industries
  • Large corporate backing in development and maintenance
Cons:
  • Not Open Source
  • Low film industry usage

What to do, what to use?

All the notes about pros and cons are, of course, quite personal, so this is just my view. If you have read this far, chances are you may be looking for a solution for exchanging materials. Unfortunately, there isn't one solution that fits all, and the specific needs of a studio should shape its choice of technology. I am partial to USD because of all the things we can do with it, but there are valid arguments for each one.

One consideration is that these products have been developed for years and have had to solve problems that most people don't even know they will run into. Before moving ahead and writing a new one, make sure you have tried these. And if you are missing features, well, many of them are open source: why not contribute?
