Deferring topology in Substance 3D Modeler
During this long hiatus, with my brain in the trenches at Adobe and my free time well spent parenting our lovely little one, I have missed writing. Now that my 2 years at Allegorithmic and 5 years at Adobe have earned me a short sabbatical, I have been thinking about how modeling was the first thing I took up when approaching computer graphics, and how it still influences what I do so many years later. So here are some reflections on the discipline, and on my time as a tech lead in the Substance teams, especially on Substance 3D Modeler.
Modeling is a craft more than an art form
| Fan art project I did in college, 15 years ago |
While my early career steered me away from modeling and into shading pretty quickly, it also became apparent that in larger productions a big portion of the creative work is done outside the modeling realm, and often by someone else: it happens at the concept stage, usually in 2D renditions or clay sculpts, where characters and environments are designed without the limitations and technical knowledge required by existing 3D software. In other words, the creative step felt like it was owned by talented traditional, 2D and clay concept artists, for whom modeling tools were holding them back rather than empowering them.
On the other side of the hand-off we have highly skilled modelers. While it does require good artistry and a keen eye for detail and aesthetics, traditional polygon modeling in VFX and animation productions is in many ways a technical craft. Positioning edge loops, getting the poly count right, preparing for rigging and negotiating with animators can take more time than getting the shapes right. And that's before we even mention UVs, whose ownership sometimes feels like a hot potato, to be passed to someone else if possible.
Sculpting: deferring the topology problem
| Dragon sculpt by Damien Guimoneau (ZBrush/Substance 3D Painter) |
The final result of this process is a very dense mesh. However, if we temporarily disregard downstream technology like Epic's Nanite or NVIDIA's Mega Geometry, what is usually needed in production is quite different: a mesh with much lower density, nice topology properties, well-distributed UVs and a baked-out normal and/or displacement map that makes up for the loss in detail. The mesh should also preferably be quad-dominant and free of the spiraling edge loops that make rigging so unwieldy.
Creating shapes while abandoning polygons
There is another class of sculpting tools that treats topology as even more volatile, to borrow a C++ term, and uses a different type of data as its "ground truth": Signed Distance Fields, or SDFs.
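To make that concrete, here is a minimal sketch (my own toy code, not anything shipped in a Substance product) of what "shapes as SDFs" means: a shape is just a function returning a signed distance, and boolean operations reduce to min/max, with no polygons involved at any point.

```cpp
// Minimal sketch of analytical SDFs: a shape is a function from a point to a
// signed distance, negative inside, positive outside. Booleans become min/max.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Distance to a sphere of radius r centered at the origin.
float sdSphere(const Vec3& p, float r) { return length(p) - r; }

// Distance to an axis-aligned box with half-extents b.
float sdBox(const Vec3& p, const Vec3& b) {
    Vec3 q{ std::abs(p.x) - b.x, std::abs(p.y) - b.y, std::abs(p.z) - b.z };
    Vec3 qPos{ std::max(q.x, 0.0f), std::max(q.y, 0.0f), std::max(q.z, 0.0f) };
    return length(qPos) + std::min(std::max(q.x, std::max(q.y, q.z)), 0.0f);
}

// CSG on SDFs: union is min, subtraction is max against a negated operand.
float opUnion(float a, float b)    { return std::min(a, b); }
float opSubtract(float a, float b) { return std::max(a, -b); }

int main() {
    // A box with a sphere carved out of one corner -- no polygons anywhere.
    Vec3 p{ 0.9f, 0.9f, 0.9f };
    float d = opSubtract(sdBox(p, { 1.0f, 1.0f, 1.0f }),
                         sdSphere({ p.x - 1.0f, p.y - 1.0f, p.z - 1.0f }, 0.5f));
    std::printf("signed distance at (0.9, 0.9, 0.9): %f\n", d);
    return 0;
}
```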
Deferring topology in Substance 3D Modeler
There are other tools, like Oculus Medium and its successor Substance 3D Modeler, that live in between these worlds. In addition to using analytical SDFs, they store a discretized version of these SDFs in voxel grids and treat the grids as ground truth. This technique is also used (at much lower resolutions) by video games for terrain sculpting, especially when the terrain can be destroyed and needs to be recomputed. Polygons are then computed from the 3D grids.
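A rough sketch of that discretization step, assuming a toy regular grid rather than the real data layout: sample an analytical SDF onto the grid, and let a surface extractor such as Marching Cubes produce polygons from it afterwards.

```cpp
// Toy discretization of an SDF into a voxel grid (illustration only).
// A surface extractor (e.g. Marching Cubes) would then turn this grid into
// polygons; the grid, not the polygons, remains the ground truth.
#include <cmath>
#include <cstddef>
#include <vector>

struct GridSDF {
    int   res;                 // voxels per axis
    float voxelSize;           // world-space size of one voxel
    std::vector<float> dist;   // res^3 signed distances

    GridSDF(int r, float vs) : res(r), voxelSize(vs), dist(size_t(r) * r * r) {}

    float& at(int x, int y, int z) { return dist[(size_t(z) * res + y) * res + x]; }
};

// Sample any callable SDF onto the grid, centered on the origin.
template <typename SDF>
GridSDF discretize(const SDF& sdf, int res, float voxelSize) {
    GridSDF grid(res, voxelSize);
    const float half = 0.5f * res * voxelSize;
    for (int z = 0; z < res; ++z)
        for (int y = 0; y < res; ++y)
            for (int x = 0; x < res; ++x)
                grid.at(x, y, z) = sdf(x * voxelSize - half,
                                       y * voxelSize - half,
                                       z * voxelSize - half);
    return grid;
}

int main() {
    // Discretize a unit sphere at 64^3; polygons would be extracted from this.
    auto sphere = [](float x, float y, float z) {
        return std::sqrt(x * x + y * y + z * z) - 1.0f;
    };
    GridSDF grid = discretize(sphere, 64, 2.2f / 64.0f);
    (void)grid;
    return 0;
}
```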

| Teahouse, created by Joshua Eiten in Substance 3D Modeler |
Clay
As a tribute to traditional sculpting (a hobby for some on the team), the main data type in Modeler is internally called Clay. Clay layers are voxel grids where each voxel stores its distance to the closest surface as well as, optionally, a color. The distances are clamped between -2 and +2 (just enough for the clay tools to work well), meaning we only store a thin band of data around the surface, and the rest is compressed away. Clay layers have a defined density and, unlike in Medium, a virtually infinite extent: they are not limited by a fixed-size bounding box.
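As a simplified mental model (this is my own illustrative layout, not the shipped data structure), a clay layer can be pictured as a sparse map of fixed-size voxel blocks: each voxel holds a distance clamped to the narrow band plus an optional color, and blocks far from the surface are never allocated, which is what makes the extent virtually unbounded.

```cpp
// Simplified picture of a "clay"-style layer (not the shipped data structure):
// a sparse map of fixed-size blocks, each voxel storing a signed distance
// clamped to a narrow band around the surface, plus an optional color.
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <unordered_map>

constexpr int   kBlockSize = 16;     // voxels per block edge
constexpr float kBandWidth = 2.0f;   // distances clamped to [-2, +2] voxels

struct ClayVoxel {
    float    distance = kBandWidth;  // clamped signed distance, in voxels
    uint32_t color    = 0;           // optional packed RGBA, 0 = unset
};

struct ClayBlock {
    std::array<ClayVoxel, kBlockSize * kBlockSize * kBlockSize> voxels;
};

struct BlockKey {
    int x, y, z;
    bool operator==(const BlockKey& o) const { return x == o.x && y == o.y && z == o.z; }
};

struct BlockKeyHash {
    size_t operator()(const BlockKey& k) const {
        return (size_t(k.x) * 73856093u) ^ (size_t(k.y) * 19349663u) ^ (size_t(k.z) * 83492791u);
    }
};

// Floor division/modulo so negative voxel coordinates map to the right block.
int floorDiv(int a, int b) { return (a >= 0) ? a / b : -((-a + b - 1) / b); }
int posMod(int a, int b)   { int m = a % b; return (m < 0) ? m + b : m; }

struct ClayLayer {
    float voxelSize;  // the "density" of the layer
    std::unordered_map<BlockKey, ClayBlock, BlockKeyHash> blocks;  // sparse: only near the surface

    explicit ClayLayer(float vs) : voxelSize(vs) {}

    // Write a distance sample, clamped into the narrow band; blocks are
    // allocated lazily, so the layer has no fixed bounding box.
    void setDistance(int x, int y, int z, float d) {
        BlockKey key{ floorDiv(x, kBlockSize), floorDiv(y, kBlockSize), floorDiv(z, kBlockSize) };
        ClayBlock& block = blocks[key];
        int lx = posMod(x, kBlockSize), ly = posMod(y, kBlockSize), lz = posMod(z, kBlockSize);
        block.voxels[(size_t(lz) * kBlockSize + ly) * kBlockSize + lx].distance =
            std::clamp(d, -kBandWidth, kBandWidth);
    }
};

int main() {
    ClayLayer layer(0.5f);
    layer.setDistance(3, 4, 5, -0.25f);  // a voxel just inside the surface
    return 0;
}
```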
Tools that work on clay volumes have an immediacy that is delightful and very tactile. The speed at which they can create and carve complex primary and secondary shapes feels quite amazing, especially in VR. Initially most sculpting tools worked directly on clay volumes, but with time we found that introducing tools (such as build-up and smooth) that work directly on the mesh representation of those volumes gave better results on secondary shapes and surface detailing. That does not change the fact that all the polygons are volatile and ephemeral: after each operation on the mesh, we recompute the affected voxel grid and then extract a new polygon mesh from it, keeping the voxel grid as ground truth. This effectively produces a freshly retopologized mesh fragment many, many times per second.
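In pseudocode terms (the names and stages here are illustrative placeholders, not the actual Modeler pipeline), every mesh-space edit funnels back through the voxel grid before a fresh mesh is shown:

```cpp
// Sketch of the round-trip described above: mesh tools never own the data;
// after each edit, the touched region is re-voxelized and a fresh mesh
// fragment is extracted from the grid.
struct Box3 { float min[3], max[3]; };   // axis-aligned region touched by a tool
struct MeshFragment { };                 // vertices, indices would live here
struct VoxelGrid    { };                 // narrow-band distances would live here

// Placeholder stages, named for illustration only.
MeshFragment extractSurface(const VoxelGrid&, const Box3&) { return {}; }  // e.g. Marching Cubes
void applyMeshTool(MeshFragment&, const Box3&) {}                          // build-up, smooth, ...
void voxelizeInto(VoxelGrid&, const MeshFragment&, const Box3&) {}

// One interactive edit: the grid stays the ground truth, and the polygons
// are recomputed, so every stroke effectively comes with a local retopo.
MeshFragment applyEdit(VoxelGrid& grid, const Box3& touchedRegion) {
    MeshFragment working = extractSurface(grid, touchedRegion);  // grid -> temporary polygons
    applyMeshTool(working, touchedRegion);                       // edit in mesh space
    voxelizeInto(grid, working, touchedRegion);                  // mesh -> grid (ground truth updated)
    return extractSurface(grid, touchedRegion);                  // grid -> fresh, retopologized mesh
}

int main() {
    VoxelGrid grid;
    Box3 brushRegion{ { -1.f, -1.f, -1.f }, { 1.f, 1.f, 1.f } };
    MeshFragment updated = applyEdit(grid, brushRegion);
    (void)updated;
    return 0;
}
```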
In fact we don't only compute the mesh that represents the grid. In addition to the full-resolution mesh, we also continuously generate its first three LODs, sampling the grid at sparser intervals. This makes it possible to use algorithms akin to Transvoxel to transition smoothly between them and render large scenes in real time. How we got billions of volatile polygons to render smoothly in VR is worthy of a separate post.
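A naive sketch of how such coarser levels could be derived (the real system does more filtering and uses Transvoxel-style stitching to avoid cracks between levels): each LOD simply samples the distance grid at every other voxel of the previous one.

```cpp
// Naive LOD chain for a distance grid (illustration only): each level samples
// the previous one at every other voxel, so a mesh extracted from level N has
// roughly 1/8^N the polygon budget of the full-resolution mesh.
#include <cstddef>
#include <vector>

struct DistanceGrid {
    int res;                   // voxels per axis (assumed divisible by 2)
    std::vector<float> dist;   // res^3 signed distances

    float at(int x, int y, int z) const { return dist[(size_t(z) * res + y) * res + x]; }
};

// Build one coarser level by point-sampling every other voxel.
DistanceGrid downsample(const DistanceGrid& fine) {
    const int r = fine.res / 2;
    DistanceGrid coarse{ r, std::vector<float>(size_t(r) * r * r) };
    for (int z = 0; z < r; ++z)
        for (int y = 0; y < r; ++y)
            for (int x = 0; x < r; ++x)
                coarse.dist[(size_t(z) * r + y) * r + x] = fine.at(2 * x, 2 * y, 2 * z);
    return coarse;
}

int main() {
    DistanceGrid lod0{ 64, std::vector<float>(64 * 64 * 64, 2.0f) };
    DistanceGrid lod1 = downsample(lod0);  // first of the three coarser LODs
    DistanceGrid lod2 = downsample(lod1);
    DistanceGrid lod3 = downsample(lod2);
    (void)lod3;
    return 0;
}
```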
The price of deferral
If your goal is a pretty picture (and your renderer can handle high-res models), or guiding AI generative models, relinquishing control over the precise polygons is quite liberating. If, on the other hand, your end product is a production-quality mesh to be used in some CG pipeline, it's not quite a free lunch.
The problem of topology, and possibly UVs, is deferred until after the creative process is done. We move from a world where topology is the very medium the craft is applied to, to one where it is a post-process. The tension is this: how do you tell the computer what your idea of a perfect output is, while spending as little attention on it as possible, because it should just "do it right"? The process should ideally be entirely automated, and ideally very fast. At the time of writing, that ideal has not quite been reached by any tool on the market that I can see, but great strides are being made and papers are still being published. That too is worthy of a separate post.
That said, there are signs in the way AI is entering the industry that make me think maybe it won't always matter as much.