FMX ’11, Day One

Today was more or less routine. All in all, I have not spent more than about fifteen days of my life in Stuttgart, yet many things already feel boringly familiar to me: breakfast in the common room of the hotel, packing my bag for the conference, parking in the Most Expensive Parking Garage, obtaining my ticket from the front desk. Still, a few things have changed over the years…

The first row in many rooms is now reserved for lecturers, so I had to abandon one of my favorite seats of past years. The exit of the main auditorium has also been moved to its side, probably because people pushing their way out and people pushing their way in made getting into popular lectures a pain, literally. And FMX has matured and grown, hence offering even more parallel lectures in the opposite building.

Shortly before ten I had my ticket, had sent off my first tweet, gotten my welcome bag and had my ballpoint pen at hand, and took a seat in the second row: I was ready for this year’s conference! Since I was a bit early, I got to see the last of the screened animations of the Filmakademie Baden-Württemberg, a satirical take on the juicy topic of Austrofascism in the 1930s.

Going Mobile


Neil Trevett,
originally uploaded by Phil Strahl.

Then they rolled this year’s FMX trailer, in my opinion a fresh idea after last year’s rather featureless animation. Then the first speaker of the event was introduced: Neil Trevett from nVidia, nVidia incidentally being the event’s main sponsor. I braced myself for having some marketing mumbo-jumbo dropped onto me, but his lecture Movie Making and More, All in the Palm of Your Hand wasn’t that bad after all.

First he outlined how proud nVidia is of its Quadro series and CUDA, and that all of this year’s Oscar-nominated VFX films used nVidia technology somewhere in the course of production. Big whoop; Adobe can probably claim the same thing.

CUDA is the computing engine in nVidia graphics processing units (GPUs) that is accessible to software developers through variants of industry-standard programming languages.

Then followed some more nVidia-green PowerPoint slides depicting how the market for the “classic” stationary desktop computer is more or less saturated today, and even the growth of notebooks and netbooks has started to level out, yet the market for mobile devices such as smartphones and tablet computers is exploding exponentially, with the Android operating system already ahead of Apple’s iOS. And since nVidia wants a big slice of that cake as well, it started development of the Tegra series, high-performance graphics processors with as little power consumption as possible.

The ARM is a 32-bit reduced instruction set computer (RISC) instruction set architecture (ISA) developed by ARM Holdings. […] They were originally conceived as a processor for desktop personal computers by Acorn Computers. […] The relative simplicity of ARM processors made them suitable for low power applications.

But that’s not enough: the old x86 chip architecture is also in decline, since tablets and smartphones don’t come with a constant connection to the power grid, so the ARM architecture will become a big thing in the near future. For nVidia this is Project Denver, ARM cores with integrated GPUs, and Microsoft’s Windows 8 will also support this kind of hardware. nVidia is also working with Adobe to make the execution of Flash and AIR applications more economical on mobile devices. “Apart from playing back video, the main task of Flash seems to be blending and filtering. And we let the GPU do this.”

nVidia seems to be pretty tight with Motorola as well, because their cutting-edge processor chips are and will be inside cutting-edge Motorola hardware, as a chart showed. And I realized that I will be carrying a laptop with me for a couple of years longer, because what I want and need to do on the go can’t feasibly be done on a tablet anytime soon. But for the average person who wants to listen to music, surf the web, read eBooks and write emails, a tablet with a full-sized keyboard/docking-station/battery combo (like the ASUS Transformer) might be just what they need. USB input device support, anyone? Ice Cream Sandwich?

Autostereoscopy is any method of displaying stereoscopic images […] without the use of special headgear or glasses on the part of the viewer.

In the very near future mobile devices will have quad-core processors, so multitasking should become as common as it is necessary; furthermore, portable devices will provide autostereoscopic displays, like the Nintendo 3DS. Last but not least, the resolution will increase drastically to an estimated 2560 by 1600 pixels (nVidia boasts “Extreme HD”), which effectively results in a resolution of 300 dpi on a 10-inch tablet: print quality.

But what about power consumption? “As paradoxical as it may sound, more cores save you more power: the extra cores only turn on when an application needs them,” Neil explained.

Gee, thanks, Neil, that’s pretty interesting consumer-wise, but what about our industry? “Tablets in the movie industry can be and currently are used in three different ways,” he continued. One possibility is to port software to mobile devices. I snorted mentally: anybody who has tried the incredibly limited version of Photoshop for Android knows that a smartphone is still way too underpowered to get real work done. “By 2014 mobile devices will have 100 times the performance of today.” So will CUDA also work on phones? “No, CUDA’s power consumption is one of the biggest stumbling blocks in that respect,” Neil shrugged. But that didn’t matter to me, because “the next generation of tablets and smartphones will be aimed at the demands of artists and creatives, for example by additionally offering stylus operability and pressure sensitivity. The first devices will probably be released this year.” I smiled dumbly at the poor person sitting to my right. I am so buying into this!

The next useful integration of tablets is wireless tethering with desktop applications, because this “new” way of interacting with a user interface is so much more intuitive than having to nudge a mouse pointer across the screen. Adobe has released the Adobe Photoshop Touch SDK, which opens the door for developers to think of clever ways to integrate a tethered device. For example, it could serve as a color palette where swiping blends colors together; the resulting colors can then be returned to the host application. EDIT: Adobe recently sent me an announcement via email to update Photoshop and get companion software from the App Store. Pity I roll with Android.

The third possibility is using tablets as cloud clients. nVidia already has a Flash-based technology for collaboration with Tegra-powered tablets called “Studiopass”: it lets you upload, stream, annotate and comment on video files with others in real time. It is also possible to use the tablet as an intuitive virtual viewfinder that’s connected to a render farm, which returns the rendered image to the device within a few seconds. And, surprise, Studiopass is built on Flash.

Just as Neil got glowing eyes and popped up a slide in purest nVidia corporate design reading “Super” Computing, the time was up. Close call!

Baking Light


François takes a pic,
originally uploaded by Phil Strahl.

I was too lazy to cross the street for Approaching a CG Production, so I stayed in the König-Karl-Halle for Bakery Relight by Thomas Driemeyer, Emmanuel Turquin and François Belot from The Bakery, who talked about their company and their software, Relight. I was a bit distracted by the number of people swiping away on their iPhones, Joseph Olin among them, while François outlined the history of the company in southern France. I only caught the last few words, “Sand, Beaches and Girls!”, although what I understood as “Beaches” could have been some other word. Or maybe it was just the thick French accent and the tendency to pronounce some words the French way, or to switch to French vocabulary entirely.

Relight is essentially a tool for lighting and shading shots and for developing looks, realistic as well as artistic. “This tool was designed by artists, not by engineers,” they noted, and indeed: as Emmanuel showed a quick overview of the interface, it seemed quite straightforward, a bit like Katana. “You don’t need to take a break every time you want to render a change on your way to the final shot; you always work with the final image, and really fast.” Even little comfort features are implemented, like soloing certain light sources.

Reyes is an acronym for Renders Everything You Ever Saw [and is] a computer software architecture used in 3D computer graphics to render photo-realistic images, yet does not employ raytracing algorithms.

It plugs directly into Maya, 3ds Max and XSI, and it comes with its own hair and fur system that is created at render time by its production renderer, which produces really good and clean images. The renderer itself uses a Reyes-like rendering algorithm, iterative and optimized for fast feedback at production quality. But this heavy-duty caching takes its toll on memory consumption, I guess.

Once the scene’s visible geometry is cached by the renderer (many millions of polygons take a couple of minutes), Relight unfolds its performance. Lighting and rendering a forest scene with hundreds of trees and atmospherics took only a minute on an average notebook computer; impressive. And once the geometry and the first lighting pass are cached, the renderer gets even faster with every change. It keeps track of what has been touched or adjusted and only updates the dependencies, that is, what was affected by the change, and nothing more. “Depending on how many processors you use, the suite scales pretty well accordingly.” Even motion blur and DOF (depth of field) are fast.

Another big feature of Relight is point clouds, for quickly carrying out calculations that would bring a raytracer to its knees, memory-wise. Point clouds (or disk clouds, to be more precise) are independent of the underlying geometry and make it possible, for example, to render ambient occlusion in fur stunningly fast. But ambient occlusion is only one application: point clouds can also be used for glossy reflections, environment light (from HDRIs, for example), area lights or sub-surface scattering.
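The core idea of disk-based occlusion can be sketched in a few lines: instead of tracing rays, each nearby disk in the cloud contributes a form-factor-like amount of occlusion to a receiving point. This is a generic, Bunnell-style approximation with plain tuple math, not The Bakery’s actual algorithm:

```python
import math

def disk_occlusion(p, n, disks):
    """Approximate ambient occlusion at point p with normal n from a
    cloud of disks (each: center, normal, area). A simplified sketch
    of the disk-cloud idea, not Relight's actual implementation."""
    occlusion = 0.0
    for c, dn, area in disks:
        v = [c[i] - p[i] for i in range(3)]
        d2 = sum(x * x for x in v) + 1e-9
        d = math.sqrt(d2)
        v = [x / d for x in v]
        # How much the disk faces the receiver, and vice versa
        cos_emit = max(0.0, -sum(dn[i] * v[i] for i in range(3)))
        cos_recv = max(0.0, sum(n[i] * v[i] for i in range(3)))
        # Form-factor-like falloff of the disk's solid angle
        occlusion += (area * cos_emit * cos_recv) / (math.pi * d2 + area)
    return min(1.0, occlusion)
```

Because each disk is just a position, a normal and an area, the cost is independent of the underlying geometry’s complexity, which is exactly why fur stops being a problem.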

Towards the end, François added that since Relight provides feedback so fast, it is a great tool for budding lighting artists, and there are many collaborations with schools. Then he closed with a sentence in thick French accent that sounded to me like “Thank you very much for your invention!” You’re welcome!

Arnold, the Tracer

Whereas The Bakery’s Relight, much like Pixar’s RenderMan, tries to cheat around raytracing, Sony Pictures Imageworks takes the completely opposite approach with its proprietary renderer Arnold. The lecture Path Tracing and Unbiased Rendering by Marcos Fajardo and Larry Gritz started off by comparing rasterizer/Reyes renderers with raytracing.

Of course, Reyes depends on shadow maps and offers no raytracing but can be insanely fast, whereas a pure raytracer craps out completely with overly complex geometry (see the issues with Davy Jones as discussed at FMX ’08) and/or hair.

So what do you do? Integrate some limited raytracing into a Reyes renderer, or hack rasterizing and (deep) shadow functionality into a raytracer? Neither option is exactly irresistible. So the folks at the Spanish company Solid Angle, which I had never heard of before, developed Arnold.

The first feature film rendered with Arnold was the, in my opinion, terrible Monster House, and for a couple of years now it has been Sony Pictures Imageworks’ only renderer.

Monte Carlo methods […] are a class of computational algorithms that rely on repeated random sampling to compute their results [and] are especially useful for simulating systems with many coupled degrees of freedom, such as fluids.

Arnold is physically based on Monte Carlo methods; it uses no storage for lighting and global illumination information, has only one quality knob to tweak (namely sampling), and outputs the final image in a single pass. Yes, Arnold is a raytracer par excellence and traces millions of rays. “And when you trace so many rays, you need to make the rays really fast.” Further, Arnold sports networked, programmable shaders and subdivision surfaces, yet can handle “hundreds of millions of triangles and hair splines” and virtually “hundreds of gigabytes of texture maps”. Impressive!
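The Monte Carlo principle behind all of this fits in a few lines: estimate an integral (here, a toy one-dimensional one rather than the rendering equation) by averaging random samples. More samples mean less noise, which is exactly what a single “sampling” quality knob controls:

```python
import random

def mc_estimate(f, samples):
    """Monte Carlo estimate of the integral of f over [0, 1]:
    average many random evaluations. The error shrinks as
    O(1/sqrt(samples)) -- hence the single quality knob."""
    return sum(f(random.random()) for _ in range(samples)) / samples

# Toy example: the integral of x^2 over [0, 1] is exactly 1/3
random.seed(1)
approx = mc_estimate(lambda x: x * x, 100_000)
```

A path tracer does the same thing per pixel, except `f` evaluates light carried along a randomly sampled light path instead of `x * x`.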

Since everything (everything!) is raytraced, the final image is optically seamless. There’s no need for, say, a shadow pass or motion vectors. My worst fear with such a renderer was that it probably takes ages just to get feedback on lighting changes, but that is not the case: an example video showed a car in the Maya viewport with the camera being turned around it. As soon as the operator let go, Arnold kicked in and its interactive mode rendered the selected region in big blocks, then smaller blocks and even smaller blocks, until you could make out the raytraced details of car paint and reflections. This refinement process took no more than two seconds.

“In the end, what matters is not how long you wait for your beauty render but how long you wait for feedback on each iteration in the (re-)lighting of a shot. Since an artist costs you approximately $40 an hour, this way of saving time saves you a big amount of money, along with the artist’s downtime between iterations,” Marcos explained. I heard several mental ka-ching sounds in the audience.

Making raytraced motion blur efficient is also important for the final rendering: a fully lit and shaded scene with production-quality deformation motion blur rendered only 15% longer than without. Arnold is even proficient when it comes to volume renderings.

“The joy of Arnold is that everything just clicks together seamlessly when you raytrace everything in your image. If you use different renderers for different tasks, even simple things like shadows can be a problem, like shadows in volumes. With Arnold there’s no catch; you get the whole package in one go, with ambient occlusion, motion blur, soft shadows on motion blur, caustics, etc.,” Marcos summed up.

And just as joyful is instancing of geometry, which gets loaded into memory once for any number of instances. One shot of Alice in Wonderland had 61 million triangles and could be rendered in 15 hours on a single thread, which boils down to an hour or less on a farm.

“Still, there is lots of room for future improvements,” Marcos concluded before Larry took over. Larry talked in more detail about Sony’s approach to VFX and relighting, using the upcoming Smurfs movie as an example.

For every shot with CG integration, they shot an HDRI of the set with an incredibly expensive Spheron camera. In fact, not just one panorama but two at different heights, so that with the set survey data it is possible to roughly recreate the scene geometry in CG via triangulation. Then, for quality reasons, the light sources and instruments in the panorama are replaced by Arnold lights and erased from the HDRI panorama via Katana. The actual plate from the camera is projected onto the low-poly scene geometry, then the HDRI panoramas are also projected into the scene. “It’s kind of the same thing that was done for Benjamin Button,” Larry added. Now the CG characters are imported into the scene and get their lighting and bounces from the surfaces that the plate and panorama were projected onto.

As mentioned above, Arnold has an open shading language. “Traditionally, shaders are black boxes to the renderer that just return some values to it; their units are often sloppy, and their underlying C/C++ code can even crash the renderer altogether. With an open shading language, we don’t have these issues.” Instead of returning color values to the renderer, Arnold’s shaders compute so-called “closures”: descriptions of how the surface will react to light. They pass these on to the renderer, which can decide what to do with them. “This is also 20% faster!” Larry added happily.
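The closure idea can be illustrated with a toy sketch: the shader returns a symbolic description of light response (a weighted list of lobes) rather than a finished color, and the renderer decides how to evaluate it. This is a minimal illustration of the concept only; the function names are made up and this is not Open Shading Language itself:

```python
# A shader returns a "closure": a symbolic description of how the
# surface reacts to light, instead of a baked-in final color.
def glossy_paint_shader(base_color):
    # Weighted mix of lobes; the renderer decides how to sample them
    return [(0.8, "diffuse", base_color),
            (0.2, "specular", (1.0, 1.0, 1.0))]

def renderer_evaluate(closure, light_color):
    """One possible consumer: collapse the closure against a single
    light. A real renderer could instead importance-sample the lobes,
    which is what makes the separation valuable."""
    out = [0.0, 0.0, 0.0]
    for weight, _lobe, tint in closure:
        for i in range(3):
            out[i] += weight * tint[i] * light_color[i]
    return tuple(out)
```

Because the renderer sees the lobes rather than an opaque number, it can choose how many rays each lobe deserves, which is where the ray budgeting mentioned below comes in.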

What working with these “closures” exactly means, and what it has to do with ray budgeting and multiple importance sampling (MIS) for the Monte Carlo raytracing, I did not fully grasp, but Larry had some links (and his email address) on the subject, in case anyone was interested. I reproduce them here for the same purpose:

Some features of Arnold, like sub-surface scattering, are performed with point clouds on the fly, although the renderer still employs Monte Carlo methods when point clouds won’t suffice. Since Arnold is proprietary, Imageworks developed a plug-in for XSI themselves, and a very basic one for Maya.

But why did Sony Pictures Imageworks settle on just Arnold a few years back? “The pass management grew so complex and bloated, it was hacks upon hacks that overwhelmed the mental capacity of the TDs. Now it’s back to just having a single pass. Lighting got so much faster, and in the end, lighting and rendering a single pass is still faster than rendering lots of different passes and trying to get them to work together,” Larry explained. Marcos added: “When you reduce time somewhere in the process, however, somewhere somebody uses the new freedom to add more complexity.”

Currently Arnold uses only the CPU, despite the trend of making everything CUDA- and hence GPU-compatible. “Porting would be complex and time-consuming and in the end wouldn’t speed things up significantly, so we spend the effort optimizing the CPU code instead. Further, the terabytes of textures would need to be streamed to the GPUs as well…” Larry summed up the bottleneck situation.

“We still have a couple of minutes left so I’m gonna show you the Green Lantern trailer. Everything was rendered with Arnold.”

Fun fact: The plate at the end of the trailer with the title and release date also had a rather small line reading “Also playing in 2D theaters.” Like it or not: Stereo has finally arrived and it is here to stay. Deal with it!

And here’s a LaTeX-set presentation on Importance Sampling for Monte Carlo Ray Tracing from 2006 with lots of pretty pictures (and some equations).



Haus der Wirtschaft,
originally uploaded by Phil Strahl.

The few hours I was asleep took their toll, and I was wasted by the break. I schlepped my bag and camera to a long queue in front of Starbucks, where I got myself a motivational Caramel Macchiato so I had the energy to look for food. I found an Asian noodle stand in a subway station and made my way back to the Haus der Wirtschaft, munching on freshly cooked vegetables. For the remaining ten minutes I had an espresso in the FMX showroom. When I felt ready for some more lectures, I headed into the König-Karl-Halle and was surprised to see all the good seats taken. So I settled for a suboptimal seat next to a German student who was constantly eating or drinking something and merrily ignoring the ban on recording devices. The funny thing, though, was that he thought I didn’t speak German, and I was in no mood to shatter his belief. So I sat there with Ophelia, my notebook, in her bag on my lap and waited eagerly for the next lecture.

Render de Janeiro

Building, Lighting & Rendering “Rio” by Blue Sky’s Andrew Beddini was up next. Andrew somehow reminded me of a friendly Ben Stiller character, and his engaging lecture was a pleasure to listen to, without once feeling the need to close my eyes for just a couple of hours. Or maybe that was because I was sitting so close to the pumped-up loudspeakers that I feared my ears would pop as they rolled the FMX trailer.

Blue Sky, founded in 1987, is a veteran of the industry; its founders even took part in creating the CG sequences of Disney’s original TRON. The next milestone was the animated short Bunny in 1998, which I remember seeing at the Ars Electronica at the time. I even remember that it was the first animation to employ the nowadays obscurely named radiosity.

People really like to talk about their renderers, I realized, as Andrew started summarizing the features of their proprietary renderer with the featureless name CGI Studio, or just Studio. Using a rendering from 1993, Andrew demonstrated the features their raytracer could produce back then, which they still use today. Some of the features used for Rio were secondary rays, true radiosity, raytraced soft shadows, tessellation of Bézier patches, a procedural shading pipeline, and the possibility to use procedures for much more than just shaders. Rio is a feature of 1,800 shots, and 80% of it had to be done in just five months, so the focus was on creating a pipeline that was fast and efficient.

Character lighting was one of the foremost concerns of the production, and “good lighting needs good development.” The character of Susan, with fair skin, glasses and big eyes, was the first testing ground for plausible shading and lighting. First they applied the sub-surface scattering shader to her skin and realized that, oh boy, it didn’t look convincing at all. So the first step was to implement a density adjustment, then light transmittance through the skin, radiosity, and a subtle but necessary secondary transmittance model that made the skin softer still.

Eyes behind glasses were another challenge, because “eyes are critical for emotion. You really want to get the eyes right first. If they are off, if the audience doesn’t buy it, then all the other efforts are in vain, so get the eyes right!” First they raytraced the eyes and the glasses, but the physically realistic look did not work with the stylized character design. For example, the shadowing of the glasses under the rim was just too dark with the realistic refractive index of 1.5; only 1.1 was just about right. Can you do this? Should you do this? Andrew made himself clear: “Break reality to make it work for you!” What was behind the glasses was rendered separately with a plate of the background to yield realistic refractions of it; as for the reflections, the reverse-angle shot was used ever so subtly. But the eyes still didn’t look alive, so Blue Sky went the whole nine yards and rendered a UV pass, a reflection pass and an object pass for the eyes, so the highlights could be adjusted accordingly in Nuke. This finally gave the eyes that certain something that had been missing, and it was tweakable in every shot.

The set creation of the favelas was a difficult task as well. After the first color studies, the assets were created with low complexity in a modular fashion, and could be combined into a very organic whole according to a plan of the set’s final layout. “If you want to sell a set to your director, render it in gray with just the ambient occlusion and he’ll love it!” Andrew joked. “There were absolutely no texture maps involved; everything was done procedurally using world-space coordinates. Also, we don’t need a level-of-detail (LOD) system.”

With just one building block, the lighting department instanced the geometry for lighting and mood tests of what the location might look like in broad daylight, in the afternoon, at night, and with atmospheric effects.

Raytracing and motion blur: another dreaded combination of mine. But to tackle that problem, Blue Sky employed a lot of tricks that made their lives easier. One of them was modulating the render resolution accordingly via script: “At some point you can’t tell whether a 2K image was blurred or a 1K image was blurred, so if the renderer dials down the resolution you effectively are four times faster.” Another trick was using Nuke’s excellent vector blur: the camera and a Z-depth channel from Maya were imported into Nuke, and the blurring was done there in post.

A slide depicting the favela at night appeared on the screen. “In this particular scene we had about ten thousand light sources,” Andrew said, and paused for emphasis. Ten thousand light sources with raytraced shadows?! “The thing is,” he continued, “we use the same pool for all shadows; they get calculated at the same time. So whether we have one light source or ten thousand only adds 10 to 15% to the overall render time.”

Since the protagonists in Rio are birds, the sky is an important part of the film as well. “About 30% of the picture is sky!” So how do you get a beautiful, art-directed sky? The traditional answer is to have talented matte painters. “The problem is that we had four to five matte painters but hundreds of skies to paint. So this wasn’t feasible. Further, the feature is in stereo, so…” He switched to the next slide, depicting various clouds on a black background. “…we had our art director paint a style guide for clouds, which we then built in 3D with volumes. We rendered each cloud with surface normals from 38 different camera tilts, so you had almost any angle. Then we imported those onto planes in Nuke and populated the sky with the clouds there in 3D space.” The normals made it possible to vertex-light the cloud planes in Nuke. “This is really fast and you can churn out skies quickly. We did this for an entire sequence at once, not only for a single shot.”
But sometimes you need hero clouds, especially when they need to have volume and depth or some advanced lighting effects, like transmission on the edges from a key light. These clouds were individually modeled and rendered. The atmosphere gradient with the sun was also done as a dome in Nuke, and when you throw everything together, voilà, there’s your final sky! This sky was then rendered in Nuke and used on a plane in Maya for the reflections on the water.
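The trick of relighting a baked cloud sprite through its stored normals boils down to a Lambert term per sample. A minimal sketch of that idea, with a simple ambient floor so unlit parts don’t go black; Blue Sky’s exact setup isn’t public, so the parameters here are illustrative:

```python
def relight_sprite(normals, light_dir, light_color, ambient=0.15):
    """Relight a pre-rendered cloud sprite from its baked surface
    normals: a simple N-dot-L term per sample is how a normals pass
    lets a compositor re-light card-mounted clouds without
    re-rendering the volume. (A sketch of the idea only.)"""
    lit = []
    for n in normals:
        ndotl = max(0.0, sum(n[i] * light_dir[i] for i in range(3)))
        shade = ambient + (1.0 - ambient) * ndotl
        lit.append(tuple(shade * c for c in light_color))
    return lit
```

Since only a dot product per sample is needed, changing the sun direction over a whole sequence of skies is nearly free, which matches the “churn out skies quickly” claim.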

Generating a vista of Rio de Janeiro, especially a stylized version that still looks as photoreal as possible, was the next challenge on Rio. The foundation for the environment was survey data of the topography, which was artistically adjusted with some liberties. Still, it was built to scale so the renderer would deliver correct results for atmospheric effects, for example.

The vegetation was, like almost everything in Rio, procedural, with implicit surfaces. But here the procedural approach posed some difficulties: it is hard to get a procedure down to a single point. As for the shading, two world-space procedural textures were created, one for the granite and one for the plants. A simple rule depending on the steepness of the topology then blended between the two, so vertical walls would have no vegetation and soft slopes would be fully covered in trees. Finally, the city’s buildings were roughly modeled, but also designed for procedurally driven variety: they could have differently spaced windows, doors, floor segmentation, surfaces and rooftop structures.
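The steepness rule itself is a one-liner once you express slope as the up-component of the surface normal. Here is a minimal sketch; the threshold values are illustrative, not Blue Sky’s actual numbers:

```python
def vegetation_blend(normal_z, rock_color, plant_color,
                     bare_above=0.85, rock_below=0.35):
    """Blend two procedural looks by terrain steepness: normal_z is
    the up-component of the surface normal (1.0 = flat ground,
    0.0 = vertical wall). Thresholds are made-up example values."""
    if normal_z >= bare_above:          # gentle slope: fully vegetated
        t = 1.0
    elif normal_z <= rock_below:        # cliff face: bare granite
        t = 0.0
    else:                               # smooth transition in between
        t = (normal_z - rock_below) / (bare_above - rock_below)
    return tuple(r * (1 - t) + p * t for r, p in zip(rock_color, plant_color))
```

Because the rule only consumes the normal, it works on any topography without hand-painted masks, which is the whole point of keeping the shading procedural.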

What surprised me the most was the way the lighting was defined and carried out: it was script-driven, and those scripts were simple text files the production edited with NEdit and processed with Perl and Python. “This is great because the files are small and portable, you can send them securely via email, and you don’t need to open Maya just to twist a light a bit to one side.” Blue Sky even has an interactive front-end called “Quick Render” to render models and lighting situations without waiting for Maya to finally launch. And there was more: “Light sources are not defined in world space but in camera space.” Instead of XYZ, they are described by values for rotation, elevation and distance from the camera. Again, you don’t need Maya to adjust a light; creating a consistent rim light can easily be done with a few lines in a text editor.
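Turning such a camera-relative (rotation, elevation, distance) spec into an actual world position is simple spherical-to-Cartesian math. The exact parameterization in Blue Sky’s scripts isn’t public, so this sketch is one plausible reading of it:

```python
import math

def light_world_position(cam_pos, cam_yaw, rotation, elevation, distance):
    """Place a light defined relative to the camera (rotation around
    it, elevation angle, distance) into world space. Angles are in
    degrees; cam_yaw is the camera's heading in the XZ plane.
    An illustrative reading, not Blue Sky's actual convention."""
    az = math.radians(cam_yaw + rotation)
    el = math.radians(elevation)
    x = cam_pos[0] + distance * math.cos(el) * math.sin(az)
    y = cam_pos[1] + distance * math.sin(el)
    z = cam_pos[2] + distance * math.cos(el) * math.cos(az)
    return (x, y, z)
```

The payoff of this representation is that a rim light written as, say, rotation 160°, elevation 20°, stays a rim light in every shot, because it follows the camera automatically.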

Finally, even the atmospherics were rendered rather than faked with a Z-pass. “A Z-pass always has artifacts along the edges, so you need to render it really big.” And once you can afford that, instead of adding fog and haze in post you can render physically much more plausible atmospherics that are also reflected within the scene.

A Hairy Subject

We only had a few minutes to let what we had heard sink in before Mohit Kallianpur from Disney Animation Studios continued with his lecture Untangling “Tangled”; he was the look & lighting supervisor on the movie and described the history of its look.

In 2007 the movie’s concept art had a much darker, browner tone, until John Lasseter intervened and pointed the production in a more colorful and saturated direction. Still, that was relatively late in pre-production, and there was not much time left to get the movie done. So Mohit got down to the roots of the art direction and formulated three principles: stylized shapes, illustrative colors and believable textures.


Then the research began by watching and analyzing the shapes, colors and appeal of the old Disney classics Snow White, Pinocchio and Cinderella. Especially the latter had a certain shape language of flowing curves, visible in almost every shot, a graceful harmony. Moreover, a set of signature shapes from that movie was collected, consisting of various bell shapes, S-curves and leaves. Everything in the film would follow these shapes, even the canopies of the trees.

Based on a very impressionist, rough mood painting that followed the shape guide, the team produced a full CG version of it as reference. This made it obvious that the language of shapes worked well, but the painterly appeal of the surfaces was too stylized. So the world needed believable textures, not impressionist suggestions of detail.


The architecture of Tangled was influenced by European cities, Disneyland (!) and Pinocchio: everything should be small, friendly and approachable. The buildings are not tall and they flare out in a curve; they appear even chunky and beefy, with no sharp corners and a very organic, hand-built feel. This is how Mohit wanted the architecture of Tangled to look: old and used, but not decrepit or dirty.


But what exactly does “illustrative color” mean? The production settled on a lush, saturated palette and always a play of warm against cool: if the light is cool, then the shadows should be warm, and vice versa.

Legally Blonde

Yet the most intricate and critical task was to get the look of Rapunzel’s hair perfect. Much like the sky in Rio, the hair in Tangled is a character by itself. The first tests showed that the hair shaders the studio had developed so far didn’t do justice to blonde hair. At all. So research needed to be done: a hair model was found and her hair photographed in probably every state (even wet) and every possible lighting situation. Commercials for hair care products also served as valuable reference, “since they propagate the ideal on their packaging that we wanted to recreate in the film.” Even a PhD student did extensive research on the subject, and in the end a feasible shader was programmed:

First of all, hair comes in individual strands, and strands make up a volume of hair. Moreover, these strands have a top, a bottom and a body with an uneven surface, none of which is like the cylinder representation of hair we were all used to working with. So hair has a specular reflection; so far, so good. For black hair, that’s usually all you see. Light hair also has a sub-specular portion, a broader highlight in the color of the hair, whereas the specular highlight has the color of the light source. Then hair has a transmission value when lit from behind, and multiple scattering is important for light-colored hair. Still, that’s not enough, because a hair rarely comes singly: there’s also a volume diffuse portion, that is, backward scattering of the light, and volume transmission, which scatters light away from the light source through the volume.

Now if you add up those five components, it looks good but not perfect; a certain richness is missing. This “richness” comes from the light bounced back onto the hair, and from ambient occlusion. Then, and only then, are you rewarded with beautiful blonde hair. But for eyebrows, fur and eyelashes you still need a different shader, because the blonde shader needs a volume to work with, after all.
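The recipe described above, five lobes plus bounce and occlusion, can be sketched as a simple weighted sum. The weights and the 0.25 bounce factor here are purely illustrative, not Disney’s actual shader values:

```python
def blonde_hair_shade(spec, sub_spec, transmission,
                      volume_diffuse, volume_transmission,
                      bounce, occlusion):
    """Sum the lobes described in the talk: specular (light-colored),
    sub-specular (hair-colored), transmission, volume diffuse and
    volume transmission; then the 'richness' terms: bounced light,
    with everything attenuated by ambient occlusion.
    Weights are made-up example values."""
    weights = (1.0, 0.6, 0.4, 0.5, 0.3)
    lobes = (spec, sub_spec, transmission, volume_diffuse, volume_transmission)
    base = [sum(w * lobe[i] for w, lobe in zip(weights, lobes))
            for i in range(3)]
    # Richness: add bounce light, then darken everything by occlusion
    return tuple(occlusion * (b + 0.25 * bb) for b, bb in zip(base, bounce))
```

With the sub-specular and volume terms zeroed out, the same sum degenerates to the specular-only look described for black hair, which is a nice sanity check on the model.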


And another question: What constitutes appeal? What makes a person look appealing in a movie? How do you light a character to look appealing? Mohit dug deep once again and looked through lots and lots of glamor shots of actresses and actors of Hollywood’s golden age in search of unifying principles. And his answer is simple: “Cheat!”

The starlets in photos (and in movies) were always lit by a soft light, no matter what the rest of their environment looked like. Then there needed to be hue in the shadows: the dreaded “graying” that comes from multiplying occlusion into diffuse passes makes characters look sickly. Since the eyes are the window to the soul, they needed special attention and always a specular reflection, no matter what. And as painters suggest, the cavities of mouth and nostrils should not go completely black but into a warm darkness. Unappealing colors (usually green bounce from leaves and grass) also make a character look weak and sick, so they had to cheat there as well. Finally, a subtle bloom on the highlights never hurt anyone.
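To illustrate the graying problem with a toy sketch of my own (not Disney’s compositing code): a straight multiply by the occlusion pass darkens all channels equally, while biasing the occluded areas toward a warm tint keeps hue in the shadows:

```python
def gray_shadow(diffuse, occ):
    """Straight multiply: shadows fall toward neutral gray/black."""
    return tuple(d * occ for d in diffuse)

def tinted_shadow(diffuse, occ, tint=(0.45, 0.30, 0.25)):
    """Occluded areas fall toward a warm tint instead; occ is 1.0
    in the open and 0.0 where fully occluded."""
    return tuple(d * (occ + (1.0 - occ) * t)
                 for d, t in zip(diffuse, tint))
```

The tint values are made up; the point is only that the shadowed result keeps a warm red-to-blue ratio instead of collapsing to gray.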


On Tangled there were 19 look development artists who in the end produced over a thousand paintings of looks. And lighting was important to elevate the mood that was already there through narration, staging and framing. As Mohit showed some examples of lighting I could not help but remember the words of my mentor, Fraser Maclean, as Mohit uttered them as well: “Put light where you need it, not where it realistically would be.” Indeed, the progression of shots he showed lacked any continuity in the origin of the lights. “And here… well, I don’t know where the light is supposed to be coming from, there is no window there” …but still it works perfectly in each shot. It went so far that even the murals in the tower could be toned down or changed in opacity on a shot-by-shot basis. “Light shapes add to the drama.”

And so does saturation. In dramatic sequences the saturation got sucked out of the pictures, only to return almost fully later on.

Art-Directed Trees

Once you can art-direct everything, you have to art-direct everything. The R&D department provided a simple tool for Maya that allowed the modeling artist to draw a couple of curves, and the plug-in would make a tree with branches out of them. Moreover, they also had a tool that would grow leaves into a pre-defined canopy shape, which was really fun to watch.
And since there are so many trees in a forest, distant models were also switched to “brickmaps”, a RenderMan term for low-res voxel representations of a model, and put through automated “stochastic pruning”: depending on the distance to the camera, leaves that would not read on screen were automatically swapped for low-poly stand-ins or removed entirely.
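A toy version of stochastic pruning as I understood it might look like this (entirely my own sketch; the names, the threshold and the seed are made up):

```python
import random

def stochastic_prune(leaves, distance, full_detail_dist=10.0, seed=42):
    """Keep a random subset of leaves: the farther the tree is from
    the camera, the fewer leaves survive. In a real system the
    survivors would also be swapped for low-poly stand-ins or
    scaled up so the canopy keeps its overall coverage."""
    keep_prob = min(1.0, full_detail_dist / max(distance, 1e-6))
    rng = random.Random(seed)
    return [leaf for leaf in leaves if rng.random() < keep_prob]
```

Up close every leaf survives; at ten times the full-detail distance only roughly a tenth of them are left to render.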

“In the end we generated too many trees and had to hide them in haze or atmosphere in post,” Mohit admitted. That really was new to me: accidentally ending up with too many of something as complex as trees in the final picture!

Tangled was rendered with RenderMan (finally another Reyes renderer today, I was almost worried!), had 1380 shots and 55 lighting artists, which, in sum, resulted in 9.01 million hours on a single lighting thread. “That’s more than 1028 years.” Ah, statistics. It’s as if I told you now that this blog post is already some 6000-odd words long.

Physically Based Shading

see it at flickr
Ben Snow,
originally uploaded by Phil Strahl.

I used the short break to get back close to the front in row 2 and to empty my can of Starbucks “Doubleshot Espresso” to be all up and ready for Ben Snow’s and Christophe Héry’s Physically Based Shading at ILM1.

Ben showed some definitions at first, outlining GI, IBL and HDRI. “We’re not trying to reproduce reality,” Ben made clear, “but filmed reality,” and described a little of ILM’s history in its attempts to achieve this goal over the years.

The Cook-Torrance or Torrance-Sparrow model is a general model from 1976 representing surfaces as distributions of perfectly specular microfacets.

The practice of trying to capture as much information as possible from the set also grew in scope and professionalism over the years. In the 1990s they started filming and photographing light probes on set, 18% gray balls, to recreate the light later in the computer. But soon they realized that was not enough. Along came six photos, one in each direction, for cube maps, and then the familiar chrome ball, a technique I never quite got to work for my own projects. Indeed, the chrome sphere’s reflections were rather low-res and you needed to paint out the photographer every time as well.
So in the “early days” ILM made heavy use of texture maps with painted-in highlights and shadows; for shading they employed the Cook-Torrance specular model. Their light rigs were also pretty basic but could already handle real-world situations rather well. Occasionally shadows grew really dark, and ways to cheat were e.g. spot lights or turning down the overall shadow opacity2.

Then came Pearl Harbor, when they really needed to crank their depicted reality up a notch since ambient occlusion wasn’t enough. So they developed what they called Ambient Environment Lighting, which essentially means creating a pass for the ambient lighting via ambient occlusion: Rays are cast in a hemisphere around the surface normal, and the number of rays hitting other surfaces dictates the occlusion; a pre-pass is done to calculate the average direction of the light. At least that’s what I copied from Ben’s slides. The film was a milestone nevertheless because Michael Bay, the director, was no longer able to tell the difference between CG and live-action footage.
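From what I copied off the slides, the idea might be sketched like this in Python (my own toy version, not ILM’s implementation; hits_geometry stands in for the real ray tracer):

```python
import math
import random

def ambient_environment(point, normal, hits_geometry, n_rays=64, seed=1):
    """Cast rays over the hemisphere around `normal`; return the
    unoccluded fraction and the average open direction, which a
    pre-pass could use as the direction to fetch ambient light from."""
    rng = random.Random(seed)
    open_dirs = []
    for _ in range(n_rays):
        # crude uniform direction: normalized Gaussian vector,
        # flipped into the hemisphere facing the normal
        d = [rng.gauss(0.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d)) or 1.0
        d = [c / length for c in d]
        if sum(a * b for a, b in zip(d, normal)) < 0.0:
            d = [-c for c in d]
        if not hits_geometry(point, d):
            open_dirs.append(d)
    visibility = len(open_dirs) / n_rays
    if open_dirs:
        avg = [sum(c) / len(open_dirs) for c in zip(*open_dirs)]
    else:
        avg = list(normal)
    return visibility, avg
```

An unobstructed point reports full visibility; a fully boxed-in one reports zero occluded visibility and falls back to the surface normal as its average direction.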

What happened after Pearl Harbor was a consistent further boost in quality. 8-bit images had served their purpose well (including on Pearl Harbor!), but the times called for floating-point precision. Also, no mirrored balls would be photographed anymore, in favor of photographed panoramas.

So a couple of years afterward came Iron Man, which asked for the realistic depiction of, well, iron and metal that needed to match the practical suits on set. One thing that had been troubling the folks at ILM were the anisotropic highlights that appear on brushed metal.

Further, a new approach to taking HDRI panoramas was employed, which meant taking a series of photographs from a tripod in all directions, although (as it appeared to me) not in a truly high dynamic range but only covering two exposure brackets — too little in my experience, but still it worked for them.

New Frontiers

And just as they thought they had mastered metal, along came Terminator: Salvation where it was impossible to cheat any longer and they had to move to a new paradigm for lighting and shading in RenderMan. ILM’s goal was to get a simpler, more intuitive and physically based system of lighting and rendering:

BRDF, the bidirectional reflectance distribution function is a four-dimensional function that defines how light is reflected at an opaque surface (Wikipedia). In principle it’s any shader.

The quality of shading and lighting required a BRDF model that not only looked right but also behaved physically correctly in terms of energy conservation. In short this means that the rougher a material is, the weaker its highlight gets; otherwise the energy (= light) reflected would be greater than the light received — impossible3.

That also meant normalized specular highlights with a more physically plausible specular falloff: the further away a light source is, the weaker the specular highlight gets, depending on the roughness, or the “Normalized Importance Falloff”: The intensity of the highlight falls off on rougher surfaces. For tight specular highlights like chrome or mirrors the light source has to get a long way away before dimming; the broader the speculars are, the more quickly they dim. For this to work the light needs to have a physical size in the system, so no more point lights or directional lights at ILM.
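ILM’s actual model is more elaborate, but the textbook normalized Blinn-Phong lobe illustrates the energy-conservation part: the (n + 2) / (2π) factor dims the peak as the exponent drops, i.e. as the surface gets rougher, so the lobe always integrates to the same energy:

```python
import math

def normalized_blinn_phong(cos_nh, exponent):
    """Energy-conserving Blinn-Phong specular lobe. cos_nh is the
    cosine between normal and half-vector; the normalization factor
    keeps the hemisphere integral of lobe * cos(theta) at 1."""
    return (exponent + 2.0) / (2.0 * math.pi) * cos_nh ** exponent
```

A rough surface (low exponent) gets a broad, dim highlight; a mirror-like one (high exponent) gets a tight, bright peak, yet both reflect the same total energy.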

“This was hard for everyone to adjust to and Christophe and I remember some very passionately fought holy wars” Ben remembered.

On the set of Terminator: Salvation the chrome spheres were back, but differently. Now they were moved and shot in motion so their reflection could be used on moving models. “We still don’t have a great way of capturing HDR moving images,” Ben continued in front of a turntable of a T-800 in front of a steelworks plate, “instead we shoot HDRIs with stable lighting and we also shoot our chrome spheres so we still get our FX, strobes, sparks, etc. And on top of that we applied some pyro elements shot on film and used them as reflections or area-lights in the scene.”

Iron Man 2

“We now had our tools that worked; they weren’t really mature, but technically they were robust,” Ben noted. For image-based lighting ILM then used a graphical tool, the Environments Browser, to quickly define light sources within the image so they could be recreated with area lights. The match-move of a shot creates the environment, the HDRI panoramas get projected onto the geometry, and you basically end up with an HDRI-mapped recreation of the set environment in 3D. That enabled the artists to render dynamic HDRIs from any position on the set to be used in image-based lighting of the digital characters.


But image-based lighting also poses some risks, or at least things to watch out for. An important thing to be aware of is that IBL lights are treated as point lights infinitely far away from the scene, a fact that comes distractingly into play when CG lights with a discrete position within the scene are meant to work alongside an IBL dome. It also really takes an experienced photographer and visual effects team to produce good and usable HDRIs from the set, which is essential for this approach. Last but not least, the pipeline should support the floating-point nature of HDRIs, or else one runs the risk of losing dynamic range along the way when editing them.

On set ILM still photographs chrome and grey spheres, but only as references. This only works properly if the film crew is in the habit as well and does not treat those shots lightly or, worse, forget about them. Since the VFX team later needs to match the lighting of the take everybody liked on set, those references should be shot immediately after the director yells “cut!”. “You wanna make sure that your spheres are as big as possible in the frame and you wanna make sure they are in the right spot. And you don’t wanna shadow or be reflected in the sphere,” Ben reminded. Sometimes, when the spheres are moved like an object through the scene, it is advisable to take some static sphere footage as well.

“On the set, as soon as they say We’ve got it! then off you go and get the references.” These references, of course, only need to be shot for shots that will have CG portions, but unless a certain shot definitely will not require CG, one should still capture the references to be on the safe side.

“And be serious about it. If you don’t take this seriously nobody else on the crew will” Ben shared his experience with the audience and switched to his last presentation slide, titled “How we capture HDRIs” which I shall reproduce here:

  • Canon 1Ds Mk3 with Sigma 8mm fisheye lens
  • Nodal Ninja & Tripod
  • Remote shutter trigger
  • 0.6 ND (2-stop) filter (depending on how bright it is)
  • 7 exposures, 3 stops apart
  • Direct sun f/16, ISO 100, center exposure 1/32 sec
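Spelled out, seven exposures three stops apart span 18 stops between the fastest and the slowest shutter time. A quick sketch (the function name is mine) to list the bracket around that 1/32 s center exposure:

```python
def bracket_shutter_times(center=1.0 / 32, count=7, stops_apart=3):
    """Shutter times for an HDRI bracket: `count` exposures spaced
    `stops_apart` stops around the `center` exposure. Each stop
    doubles or halves the exposure time."""
    half = count // 2
    return [center * 2.0 ** (stops_apart * i)
            for i in range(-half, half + 1)]
```

For the direct-sun settings above, that runs from a 1/16384 s exposure all the way up to a 16 s one.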

After a quick slide of acknowledgements Ben handed over to Christophe who would once again talk in a little more detail about the math-laden background behind the presented concepts.

Since my senior-high maths teacher sucked out all the fun I ever had with mathematics, I wasn’t in the mood to try to follow Christophe’s every word, but I got the general idea:

Calculating reflections and IBL with the Monte Carlo approach requires a high number of samples for a clean picture. Since the distribution of the rays is random, you end up calculating a lot of stuff you will not really see in your final rendering. So MIS, Multiple Importance Sampling, was employed: it takes into account what the camera sees and what the lights illuminate, and identifies areas in the lighting dome where rays have a high probability of affecting the final rendering, such as bright portions in the HDRI used for IBL. The same number of samples, only weighted differently, in the end composes a much more pleasing result because you only calculate what you need.
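As far as I grasped it, the weighting part boils down to something like the balance heuristic below (a toy Python sketch of my own, not ILM’s or RenderMan’s code):

```python
def mis_weight(pdf_this, pdf_other):
    """Balance-heuristic weight for a sample drawn from one strategy
    (e.g. the bright dome regions) when another strategy (e.g. the
    BRDF lobe) could also have produced the same direction."""
    return pdf_this / (pdf_this + pdf_other)

def combine(sample_light, sample_brdf):
    """Each sample is (value, pdf_own, pdf_other). The weighted sum
    keeps the estimator unbiased while favoring whichever strategy
    was more likely to find that contribution."""
    total = 0.0
    for value, pdf_own, pdf_other in (sample_light, sample_brdf):
        total += mis_weight(pdf_own, pdf_other) * value / pdf_own
    return total
```

The two weights for any shared direction always sum to one, which is what keeps the combined estimate from counting the same light twice.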

To implement this in the renderer, the BRDFs and lights need to provide eval() and sample() methods: eval() returns a color and a pdf for a given input direction, and sample() returns an array of directions and pdfs. For instance, in a dome IBL situation, these will be the vectors to the bright spots in the image.
I just copied this from a slide of Christophe’s presentation, so don’t ask me what it all means. The only thing I am pretty sure of is that pdf in this context stands for probability density function and not for Adobe’s favorite way of storing their user guides.
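To make the slide concrete for myself, here is a toy Python rendition of that contract (entirely my own guess at the shape of the interface; the real thing lives inside ILM’s RenderMan shaders):

```python
import random

class DomeLight:
    """Toy stand-in for a light implementing the eval()/sample()
    contract. `bright_dirs` plays the role of the bright spots
    found in the HDRI used for IBL."""

    def __init__(self, bright_dirs, color=(1.0, 1.0, 1.0)):
        self.bright_dirs = bright_dirs
        self.color = color

    def eval(self, direction):
        """Return (color, pdf) for a direction chosen elsewhere."""
        pdf = (1.0 / len(self.bright_dirs)
               if direction in self.bright_dirs else 0.0)
        return self.color, pdf

    def sample(self, count, seed=0):
        """Return a list of (direction, pdf) pairs, here simply
        pointing at the bright spots in the image."""
        rng = random.Random(seed)
        pdf = 1.0 / len(self.bright_dirs)
        return [(rng.choice(self.bright_dirs), pdf) for _ in range(count)]
```

A BRDF would implement the same two methods, which is what lets the renderer weigh both strategies against each other.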

ILM’s BRDF specification has to follow a number of principles too: The shaders now must be normalized, i.e. energy conserving; they must be able to “substitute” for ILM’s trusted but old Cook-Torrance model; they should be anisotropy-aware and, as always, efficient to compute.

Their solution now is D-BRDF, based on a yet-unpublished paper by Michael Ashikhmin and Simon Premoze, the Beckmann distribution4 and no masking: the reciprocity term is simply 4.0 * V.H * max(L.N, V.N).

For the nerds among you I even noted the links Christophe’s presentation closed with:

You’re welcome. Now where is my mind?

After this heavily information-laden day I skipped nVidia’s panel discussion and went straight home to type up this blog post. Who would have thought that it would take me effectively over two weeks to finish it?

  1. Christophe is actually employed by Pixar, whereas Ben works for Industrial Light & Magic, but since they render with Pixar’s RenderMan the collaboration is fruitful to both parties.
  2. Ben’s slide also read “eek!” at this bullet.
  3. Boy, I used that “trick” so many times as well. Paradoxically it looked so often “righter” than one of the physically realistic Mental Ray shaders. At least after comp. Why am I telling this anyway?!
  4. D = exp(- (tan(H,N) / roughness)^2) / ( cos(H,N)^4 * roughness^2 * pi )
  5. Christophe’s link didn’t work, so I assume he wanted to link to this one instead.


Steve (May 04, 2011)

Was there ANY indication from Arnold as to a release date? Their “coming soon” is starting to become an industry joke.

Phil Strahl (May 04, 2011)

Of course there wasn’t. I would love to draw a connection to Duke Nukem Forever but after 12 years this running gag finally arrived at its punch line…