An early-1900s oil rig violently catches fire after striking oil. BURN is a fully CGI short created in Houdini, rendered with Mantra and Redshift, and composited in Nuke.
The incredible sound design was crafted by Copenhagen-based Human Robot Soul. Their uniquely dark, synthetic style was a welcome addition and took this project to the next level. The mix is full of subtle details, so I recommend headphones.
The project was inspired by the iconic scene in the movie There Will Be Blood. Everything was built in Houdini. The first effect created was the collapse of the wooden oil derrick (yep, that's what they're called), which started with a real-world-scale 3D model based on the few reference images I could find. The model was textured as moderately burnt wood with patches of reflective oil splashes, then prepared for simulation with a thorough rigid body constraint network in which key areas were deliberately weakened to direct the instability.

The fiery geyser started with a particle simulation of the escaping oil, whose velocities were sourced into a sparse pyro simulation. I built the entire effect twice: the first version used the fuel/combustion model, but when SideFX introduced the sparse pyro solver I decided to start over with the modern approach. The fire was a complex simulation that used multiple custom fields to break up the infamous "mushroom" look, and it was advected with a wind field rather than pushed with a wind force (an effective trick learned from Andrew Melnychuk). The main fire geyser went through 67 versions; the final simulation took over 38 hours to run, reached 450 million voxels, and filled 5 terabytes of hard drive space. There were three more simulations: the fire burning directly on the wood, the trailing smoke, and the burst of fire and smoke when the derrick falls. The fantastic lantern, oil can, and interior wooden boxes are courtesy of Poly Haven.
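The wind-field-versus-wind-force distinction can be sketched outside of Houdini. This is a plain-Python toy, not VEX or actual solver code, and the field and step functions are purely illustrative: a wind *force* keeps adding momentum to the sim, while advecting with a wind *field* carries the sim along at the wind's speed without ever injecting momentum.

```python
# Conceptual sketch (plain Python, NOT Houdini/VEX) of wind-as-force vs
# wind-as-advection-field. All names and numbers here are illustrative.

def wind_field(pos):
    """A spatially varying wind velocity: stronger higher up."""
    x, y = pos
    return (1.0 + 0.5 * y, 0.2)

def step_force(pos, vel, dt):
    """Wind as a force: the wind accelerates the point each step,
    so momentum accumulates and the motion keeps speeding up."""
    wx, wy = wind_field(pos)
    vel = (vel[0] + wx * dt, vel[1] + wy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

def step_advect(pos, vel, dt):
    """Wind as an advection field: the wind velocity is added directly
    into the transport step, shaping the motion without ever adding
    momentum to the simulation's own velocity."""
    wx, wy = wind_field(pos)
    pos = (pos[0] + (vel[0] + wx) * dt, pos[1] + (vel[1] + wy) * dt)
    return pos, vel
```

Stepping both versions from rest shows the difference: the force version's velocity grows without bound, while the advected version drifts at the wind speed and stays controllable, which is why the wind-field trick helps avoid ballooning, over-energetic fire.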
Volumetrics were rendered in Mantra; everything else was rendered in Redshift. Compositing was done in Nuke within an ACES color workflow. Render times were significant because I used every trick in the book: in-camera depth of field and motion blur, displacement, translucency, hair, and volumetrics. With the work split between the two renderers, I did my best to keep render times down by making strict use of packed geometry, delayed-load techniques, ultra-light IFDs, and proxies everywhere. Relying primarily on GPU rendering created memory challenges because of the huge amount of geometry, voxels, and displacement, so I was forced to use several further optimizations, including FOV-based culling, foreground/background separation, and dynamically queried proxy versions of almost everything based on the camera.
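The camera-driven culling and proxy-swapping idea can be sketched as a small standalone function. This is a hypothetical plain-Python illustration, not the pipeline actually used: assets outside the camera's field of view are culled, visible-but-distant assets swap to a lightweight proxy, and only near, visible assets load full-resolution geometry. The function name, thresholds, and asset names are all made up for the example.

```python
import math

# Conceptual sketch (plain Python, NOT Houdini) of camera-based culling
# and proxy selection. Assumes cam_dir is a normalized direction vector;
# the FOV is treated as a simple view cone for clarity.

def resolve_assets(cam_pos, cam_dir, fov_deg, proxy_dist, assets):
    half_fov = math.radians(fov_deg) / 2.0
    plan = {}
    for name, pos in assets.items():
        to_asset = tuple(p - c for p, c in zip(pos, cam_pos))
        dist = math.sqrt(sum(d * d for d in to_asset))
        if dist == 0.0:
            plan[name] = "full"
            continue
        # Angle between the view direction and the direction to the asset.
        cos_angle = sum(d * v for d, v in zip(to_asset, cam_dir)) / dist
        angle = math.acos(max(-1.0, min(1.0, cos_angle)))
        if angle > half_fov:
            plan[name] = "cull"    # outside the view cone: skip entirely
        elif dist > proxy_dist:
            plan[name] = "proxy"   # visible but far: lightweight stand-in
        else:
            plan[name] = "full"    # near and visible: full-res geometry
    return plan
```

For example, with a camera at the origin looking down +Z with a 60-degree FOV and a 50-unit proxy threshold, an asset two units ahead resolves to "full", one a hundred units ahead to "proxy", and one behind the camera to "cull". A real pipeline would test against the full frustum and re-query per frame as the camera moves, but the memory-saving principle is the same.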