    Underworld: Evolution
    The technical evolution of Luma Pictures
    David Kintner is our fur guru and one of two lead lighters. On Underworld: Evolution he was the primary lighter and compositor for the William (the white werewolf) sequence, as well as a lighter for many werewolf and Marcus shots.

    Pavel Pranevski is a lead lighter and a pipeline TD. Pavel tackled the lighting on many of the more complicated shots on Underworld, such as the werewolf transformations. He will provide some good insight into the types of technical challenges that we encountered, as well as flesh out the mechanics of our pipeline.

    As Luma's lead TD, Chad Dombrova writes mental ray shaders, creates tools, and develops the overall pipeline. For Underworld, his primary task was lighting and rendering the fortress.



    The questions:

    Tell us a bit about your pipeline and how data is shared between the various departments.

    Pavel: We are fortunate enough to be working for a company that invests heavily in R&D between productions, so before tackling Underworld we were able to sit down and rework our entire pipeline to suit the needs of a more complex show. We had a solid foundation of tools and ideas from working on Lakeshore's The Cave, and were able to build on that framework. All the data sharing between departments was handled by custom scripts and applications that allowed us to automate a lot of mundane tasks and standardize the format in which data was passed down the pipeline. This approach eliminated a great many of the small snags and errors that normally slow production down to a crawl.

    Assets that were "heroed" were immunized, so to speak, before being released to the rest of the pipeline. For instance, we could always assume that our hero geometry was 100% clean without any errors, that hero textures were always the right size, bit-depth and resolution, and that hero rigs always had the latest updated geometry with the latest updated UVs. Animation import and export, shader import and export, enforcing strict naming conventions, everything down to playblasting and submitting things for review was handled by custom scripts. This level of automation and customization allowed us to spend more time focused on our shots, instead of running around troubleshooting assets.
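
    As a rough illustration of the kind of automated gate Pavel describes, the Python sketch below checks a delivery directory before an asset is "heroed". The naming pattern, the .IFF-only texture rule and the size ceiling are invented for this example and are not Luma's actual conventions; the real checks ran against Maya scenes and rigs directly.

    # Hypothetical hero-asset gate -- illustrative only, not Luma's tools.
    import os
    import re

    HERO_NAME = re.compile(r"^[a-z]+_[A-Za-z0-9]+_v\d{3}\.(ma|iff)$")  # e.g. chr_william_v012.ma
    ALLOWED_TEXTURE_EXT = {".iff"}        # textures are converted to .IFF before hero release
    MAX_TEXTURE_BYTES = 64 * 1024 * 1024  # arbitrary ceiling for the example

    def validate_hero_asset(asset_dir):
        """Return a list of problems; an empty list means the asset can be heroed."""
        problems = []
        for name in sorted(os.listdir(asset_dir)):
            path = os.path.join(asset_dir, name)
            ext = os.path.splitext(name)[1].lower()
            if not HERO_NAME.match(name):
                problems.append("bad name: %s" % name)
            if ext not in {".ma"} | ALLOWED_TEXTURE_EXT:
                problems.append("unexpected format: %s" % name)
            if ext in ALLOWED_TEXTURE_EXT and os.path.getsize(path) > MAX_TEXTURE_BYTES:
                problems.append("texture too large: %s" % name)
        return problems

    if __name__ == "__main__":
        import sys
        issues = validate_hero_asset(sys.argv[1])
        for issue in issues:
            print(issue)
        sys.exit(1 if issues else 0)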

    What file formats do you use?

    Pavel: Raw plates obviously arrive as Cineons, which are then sized down and proxied accordingly. Simple JPG proxies are used by animators for background image planes, and specifically color-graded JPG proxies are used by lighters for matching CG elements. We try to do everything else in .IFF format, which is a very friendly format for exchanging elements between Maya and Shake. All the textures are converted to .IFF before they are heroed and released for use down the pipeline. Our texture artists are given the flexibility to use whichever format they are most comfortable with while developing textures, which is essential since textures are constantly being swapped between ZBrush, BodyPaint and Photoshop.
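
    The conversion step Pavel mentions is straightforward to automate. The sketch below shows the general pattern only; "convert_to_iff" is a placeholder for whatever command-line image converter the pipeline actually invokes (Maya ships its own), and the directory layout is invented.

    # Hypothetical batch conversion to .IFF before hero release.
    # "convert_to_iff" is a placeholder command, not a real tool name.
    import os
    import subprocess

    SOURCE_EXTS = {".psd", ".tga", ".tif", ".tiff", ".jpg"}

    def hero_textures(src_dir, hero_dir):
        os.makedirs(hero_dir, exist_ok=True)
        for name in sorted(os.listdir(src_dir)):
            base, ext = os.path.splitext(name)
            if ext.lower() not in SOURCE_EXTS:
                continue
            src = os.path.join(src_dir, name)
            dst = os.path.join(hero_dir, base + ".iff")
            subprocess.check_call(["convert_to_iff", src, dst])  # placeholder converter
            print("heroed %s -> %s" % (src, dst))

    if __name__ == "__main__":
        import sys
        hero_textures(sys.argv[1], sys.argv[2])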

    Do you render in MentalRay Standalone or with MentalRay for Maya?

    Pavel: All final sequences were rendered with Mental Ray standalone, and this had to be the case for many reasons. First of all, it saved us from losing memory to the Maya GUI overhead. Furthermore, it allowed us to run all kinds of custom tweaks during the .mi generation process, which was executed when renders were sent to the farm.
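
    The interview does not spell out what those custom tweaks were, but the general shape of such a hook is easy to sketch: because .mi scene files are plain ASCII, a farm-side script can rewrite them between export and render and then hand them to the standalone renderer. The substitution in this Python example (repointing a texture path) is purely hypothetical.

    # Hypothetical farm-side hook: tweak an ASCII .mi file, then render it
    # with mental ray standalone ("ray <scene.mi>").
    import subprocess

    def tweak_mi(mi_path, replacements):
        """Apply simple text substitutions to an ASCII .mi scene file."""
        with open(mi_path) as f:
            text = f.read()
        for old, new in replacements.items():
            text = text.replace(old, new)
        with open(mi_path, "w") as f:
            f.write(text)

    if __name__ == "__main__":
        import sys
        scene = sys.argv[1]
        tweak_mi(scene, {"/local/textures": "/net/textures"})  # example tweak only
        subprocess.check_call(["ray", scene])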

    Chad: For example, for a large asset like the fortress, archiving geometry as .mi files was essential. We ran the archival process when fortress meshes were cleaned up and exported as “hero” Maya files for reference into other scenes. Then, when lighting scenes were submitted to render on the farm, our .mi generation script set up the .mi files to include the archived geometry instead of retranslating all of that vertex data. From a storage and network standpoint, the benefits were truly astronomical. The total number of frames for all the fortress shots, including multiple rerenders per shot, most likely exceeded one hundred thousand. Without archiving, each .mi file would have been over 300 MB per frame, but with archiving they were a mere 500 KB, with the 300 MB of vertex data archived in a single location on disk.
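
    The arithmetic behind that claim: at roughly 100,000 frames, inlining 300 MB of vertex data per frame would have meant on the order of 30 TB of .mi files, while 500 KB per frame comes to around 50 GB plus a single 300 MB geometry archive. The Python sketch below shows the idea in a hypothetical form: the archived geometry is written once, and each per-frame .mi merely pulls it in with an $include directive (the file paths and instance names here are invented).

    # Hypothetical per-frame .mi writer. The heavy fortress geometry lives
    # once on disk as an archived .mi; each frame's file just $include's it
    # plus the per-frame camera/light data, so it stays tiny.
    def write_frame_mi(out_path, archive, frame_data):
        lines = [
            '# per-frame scene file (illustrative layout)',
            '$include "%s"  # archived fortress geometry, written once' % archive,
            '$include "%s"  # camera, lights, shader overrides for this frame' % frame_data,
            '',
            'render "fortressRootGrp" "perspCamInst" "miDefaultOptions"',
        ]
        with open(out_path, "w") as f:
            f.write("\n".join(lines) + "\n")

    if __name__ == "__main__":
        write_frame_mi("fortress.0101.mi",
                       archive="/net/show/assets/fortress/fortress_geo.mi",
                       frame_data="fortress.0101.data.mi")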
