"Good Food" - Gaussian Splat Commercial
Background
The Electric Lens Company (ELC) is a creative, technology-based studio in Sydney, Australia. We come from film and TV visual effects and, 10 years ago, started working on post-VFX activities such as virtual reality, mixed reality, virtual production and other R&D work that makes its way into myriad commercial contexts. While we have delivered many end-to-end projects, we also collaborate with other creatives to build larger capabilities or to hybridise traditional productions.
Opportunity
Early in our history we recognised the value of capturing humanity and the world around us as essential ingredients in the otherwise synthetic, all-digital environment we operate in. This has meant investing in motion capture, volumetric capture, scanners and cinema cameras. Several years ago we met The Splice Boys, the leaders in bullet time in Australasia. They possess hundreds of stills cameras and have been instrumental in several collaborations, including our Virtual Carl Cox and Ned Kelly projects. The former required the stills cameras to be formed into a scan volume to capture Carl for a metaverse project.
ELC has been using photogrammetry, the tried and tested method of recovering an asset or environment from many camera viewpoints, for over 20 years. Two years ago we started looking at Neural Radiance Fields (NeRF) as an extension of the same base technique, albeit with different pros and cons. More recently, 3D Gaussian Splatting (3DGS) has reduced some of the challenges of NeRF and presented the possibility of creating what we call a "Volumetric Photograph": something that can be moved through and will rasterise into a photo-realistic image from any reasonable viewpoint.
Collaborating with the State Library of Victoria on a Ned Kelly project in 2024 allowed us to develop the workflow for capturing and building the 3DGS asset, learning the capture methodology and its pitfalls along the way. Earlier in 2025, the Splice Boys and ELC collaborated on a spec project to develop their cameras into a portable rig we call a "Lightfield Capture Rig", a nod to LYTRO, who innovated in the area of light fields several years ago. A light field is a volume of information representing the light emitted in any direction at any point within it. Where photogrammetry stores triangles and a single texture map describing only the diffuse light at each triangle, light fields, NeRF and 3DGS can also represent the specular component that makes up the 'shiny' aspect of materials in the scene - highly important for representing real-world light transport.
The spec project - a kung-fu short film - showed us that the Splice Boys capture rig, the methodology and the 3DGS format could indeed create a "Volumetric Photograph" that would satisfy high-resolution commercial production expectations for image quality, colour space and dynamic range, as well as creative control over how the scene was captured.
An opportunity arose quickly after this which had us collaborating again, this time with The Producers, on a commercial production for The Good Food Guide. Storytelling in this context would justify the effort of proving out the workflow, so needless to say, we were excited to set off on this journey!
R&D
Several components of the workflow had yet to be established for meaningful control, beyond the base needs of the 3DGS technology. Where do we render the Splat asset for the best fidelity? How do we move virtual cameras through the Splat once it's created? How do we get VFX-level colour space control?
A short R&D episode began with ideating the ideal end-to-end workflow and identifying the options within each step, which let us coordinate the search for, or development of, a tool for each missing step. We'd noticed a new OFX plugin for Resolve + Nuke from developer Dmitriy Lyakshev, one of only a few ways to render a Splat outside of the native Splat training apps. This would allow us to create camera paths in a more controlled way and iterate through that process with familiar flexibility. Colour space was the next challenge. ELC standardised on an ACES-based colour pipeline in 2022 so we could work in an 'unlimited' colour volume and dynamic range before tone-mapping down to traditional SDR/Rec709 deliverables. Developing the RAW photos through Lightroom Classic couldn't produce a legitimate full-dynamic-range output to put through the Splat software - PostShot from Jascha Wetzel - so we found a hack to smuggle the data as Rec2020 PQ (HDR) in 16-bit PNGs, which would then be transformed into ACEScg in Nuke.
At the time of writing, PostShot wasn't ingesting either sRGB-gamut or ACEScg EXRs for training Splats, so we maintained this 'smuggled' HDR data in PNG-16 all the way through PostShot; on return to Nuke we could transform the Splat into ACEScg and begin colour work, using Resolve for grading. None of our bespoke projects are ever straightforward, so this peculiar method was published back to the respective developers, as we maintain relationships with the creators of the tools we depend on.
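For the curious, the maths of the 'smuggling' decode looks roughly like the sketch below. In production this transform ran inside Nuke via OCIO; here it's shown with the open-source Python colour-science library, and the filename and the 100-nit normalisation are our own assumptions for illustration.

```python
# Minimal sketch: decode a 16-bit PNG holding Rec.2020 PQ code values into
# scene-linear ACEScg. In production this ran in Nuke via OCIO; this just
# illustrates the maths with the open-source `colour` library.
import numpy as np
import imageio.v3 as iio
import colour

png = iio.imread("plate_0001.png")            # hypothetical file: uint16, Rec.2020 PQ
code = png.astype(np.float64) / 65535.0       # normalise code values to 0..1

nits = colour.models.eotf_ST2084(code)        # PQ EOTF -> absolute luminance (cd/m^2)
linear2020 = nits / 100.0                     # assumption: map 100 nits to 1.0 scene-linear

acescg = colour.RGB_to_RGB(
    linear2020,
    colour.RGB_COLOURSPACES["ITU-R BT.2020"],
    colour.RGB_COLOURSPACES["ACEScg"],
)
# `acescg` can now be written to EXR (e.g. via OpenImageIO) for the ACES pipeline.
```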
Production
The Splice Boys team assembled the rig at several locations and, having scouted the locations in prior weeks, were able to manoeuvre the rig into a position that best covered the volume most likely to be explored by the virtual camera in post-production. Australian commercial budgets limited any pre-visualisation process, though a round of pre-viz would have helped with character blocking and virtual camera paths to inform the required coverage. Suffice it to say, the coverage was usually about 120-140 degrees in a forward direction, which means peripheral detail is included in the captured images as well as a great amount of the subjects in frame.
Traditional lighting, food stylists, talent and direction remained just as important on this shoot; this article intentionally glosses over those critical components to keep the focus on the differences in the photographic method. The Splice Boys use Xangle Camera software by Eric Paré to synchronise the many shutters and batch-download the images from the cameras.
Challenges
A challenge on set was that, without a traditional hero camera, there's nothing meaningful to see from the camera array: 150 photographs looking in different directions! However, some key angles were monitored for the purpose of finding an appropriate composition and timing the shutter to capture details such as fire, sauce droplets and the talent's performances. In the future, a pre-calibrated process could rapidly develop a preview Splat that stakeholders could analyse for approvals or take selection.
Post-Production
Once selects were made, the process of coordinating the datasets from Lightroom into Reality Capture began. As mentioned above, the dynamic range was 'smuggled' through a novel workaround to some strange limitations in Lightroom and current limitations in PostShot. A humorous situation arose when the first attempt to take RAW files directly through Reality Capture and into PostShot resulted in 'volumetric chromatic aberration': rainbows were cooked into the model and we could move around them. Clearly we needed to take control of developing the RAWs to have authority over such RAW-interpretation issues!
Alignment in Reality Capture was a breeze as usual, owing to the relatively small camera count of 150, so this process was dependable. However, after a few early attempts at developing the Splat model, it made sense to align the world and origin within Reality Capture so that the extrinsic/intrinsic data files were sanitised and the Splat model would be in a known position for later cropping work, where we could take parts of the scene from different takes. Not aligning the Reality Capture scene made for too much manual alignment later, and the Nuke environment is extremely limited for viewing the Splat data outside of the hero camera. The burger scene, for example, had its origin placed in the centre of the basket of fries.
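When a scene hasn't been aligned at source, the same re-centring can also be done after the fact on the trained .ply. A minimal sketch, assuming the standard 3DGS PLY layout and a hypothetical filename and offset (a pure translation is safe because it leaves the Gaussians' rotations and scales untouched):

```python
# Minimal sketch: re-centre a trained 3DGS .ply so a chosen world point
# becomes the origin. Assumes the standard 3DGS layout (float properties
# x, y, z per Gaussian). Filename and offset are hypothetical.
import numpy as np
from plyfile import PlyData

splat = PlyData.read("burger_take03.ply")
vertex = splat["vertex"]

fries_centre = np.array([1.82, 0.41, -0.73])   # example point picked in Reality Capture

# A pure translation only touches positions; rotations/scales are unaffected.
for axis, offset in zip(("x", "y", "z"), fries_centre):
    vertex[axis] -= offset

splat.write("burger_take03_centred.ply")
```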
Pushing out to PostShot had us exploring the main factors in high-quality Splats (beyond camera density and sensor resolution): input image resolution, Splat count and training iterations. We found that high-resolution inputs were fine at the 150-camera count and 24MP resolution. Splat count went from 10 million to 30 million before the A6000 48GB GPU ran out of memory, and iteration counts were at least 90k steps. These parameters made ~7-9GB .ply files, which rendered in Nuke on the Mac M2 Ultra (128GB) reliably enough that we could render at 4K x 4K resolution at a workable pace. Proxies were made for working with the virtual camera. Although we could use Nuke for camera work, it was easier to make polygon proxies in Reality Capture and do the camera work in Houdini; these cameras were exported as Alembic back to Nuke.
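Those file sizes line up with a back-of-envelope check: the standard 3DGS .ply layout stores 62 float32 attributes per Gaussian (position 3, normal 3, SH DC 3, SH rest 45, opacity 1, scale 3, rotation 4), so size scales linearly with Splat count.

```python
# Rough .ply size check for the standard 3DGS layout (62 float32s per splat).
floats_per_splat = 3 + 3 + 3 + 45 + 1 + 3 + 4   # = 62
bytes_per_splat = floats_per_splat * 4          # float32

for count in (10_000_000, 30_000_000):
    print(f"{count / 1e6:.0f}M splats -> ~{count * bytes_per_splat / 1e9:.1f} GB")

# 10M splats -> ~2.5 GB
# 30M splats -> ~7.4 GB   (consistent with the ~7-9GB files above)
```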
A HUGE workflow discovery was made while cropping the Splats within the Irrealix plugin. We found we could re-import the same Splat and crop/inverse-crop the model so as to move parts of the set around or delete them entirely! A vertical girder was deleted to suddenly reveal people standing behind it, adjusting the composition for the better. In two other scenes we could bring in fire and smoke from other takes to enhance the image. The Gaussians composited correctly in depth with surprisingly perfect results, as the plugin supports up to 10 Splat models, rendered collectively for a deep-image-like result.
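The idea behind those crop boxes is simple enough to sketch outside the plugin. Assuming the standard 3DGS .ply layout and hypothetical filenames, an axis-aligned inverse-crop (deleting the girder, say) reduces to a boolean mask over the Gaussian centres:

```python
# Minimal sketch of crop / inverse-crop on a 3DGS .ply with an axis-aligned box.
# This only illustrates the concept; the Irrealix plugin does this (plus the
# depth-correct multi-splat rendering) internally. Filenames/box are examples.
import numpy as np
from plyfile import PlyData, PlyElement

splat = PlyData.read("diner_take02.ply")
v = splat["vertex"].data
pts = np.stack([v["x"], v["y"], v["z"]], axis=-1)

box_min = np.array([-0.3, 0.0, -0.3])            # crop box around the girder (metres)
box_max = np.array([0.3, 3.0, 0.3])
inside = np.all((pts >= box_min) & (pts <= box_max), axis=-1)

kept = v[~inside]                                # inverse-crop: delete what's inside
PlyData([PlyElement.describe(kept, "vertex")]).write("diner_take02_no_girder.ply")
```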
Separating out layers for grading became a simple process with the cropping boxes, though the transform/cropping operations needed careful coordination with the layering in mind - something that quickly became intuitive. In the future we may be able to operate on the Splat model using nodes, which would make separating out elements, combining them and exporting far easier.
Grading in Resolve Studio 19 was done in ACES, working in HDR before tone-mapping down for the SDR deliverables. (An HDR version will be available soon.) These graded elements were brought back into Nuke for final motion blur, lensing and timing into the Nuke Studio-based edit.
As there were myriad deliverables, we worked in a 4K x 4K 60p format and then retimed the virtual camera for the 25p outputs, with an appropriate shutter angle for motion blur at that frame rate. The square film back meant the vertical 9:16 cuts weren't such a compromise - similar to shooting open-gate. Working this way carried a large processing penalty: instead of making ~200 megapixels per second we needed to produce over 1 gigapixel per second. Working at 60fps really helps with the 'dream-like' quality of the piece and was a great achievement.
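The penalty is easy to quantify against a more typical UHD 25p pipeline:

```python
# Pixel throughput: a typical UHD 25p pipeline vs our square 4K 60p working format.
uhd_25p = 3840 * 2160 * 25       # ~207 megapixels per second
square_60p = 4096 * 4096 * 60    # ~1007 megapixels per second

print(f"UHD 25p:   {uhd_25p / 1e6:.0f} MP/s")
print(f"4Kx4K 60p: {square_60p / 1e6:.0f} MP/s  (~{square_60p / uhd_25p:.1f}x the work)")
```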
Outcomes
We'd built version 1 of an end-to-end volumetric capture and Gaussian Splat VFX workflow. High resolution, high frame rates and familiar VFX tooling allowed the images to be made and to tell the story of culinary moments with friends and family. The hurdles we faced were overcome with creative workarounds as well as great support from the developers behind some of the new tools for creating and manipulating Gaussian Splats. We marked this milestone as "Volumetric Photography" to highlight the still-life aspect of the technique.
The technique can be furthered by considering the time dimension via a similar density of video cameras. Shooting at 60fps or higher could allow a sequential Splat model to be explored in both time and space: effectively a basic 4D bullet time rig, albeit stretching the bullet time term quite a bit. Smart volumetric interpolation could help virtualise the shutter time so as to retime to frame rates higher than 60Hz.
A range of realtime viewers has become available, bringing these scenes to an immersive audience through VR headsets. Optimised resolutions are needed for local processing on mobile hardware such as Apple Vision Pro. The experience of walking around these volumetric memories is like no other - there's so much potential in this space, and we're keen to explore it with everyone.
Credits
Massive thanks to ECD Ryan Petie and the team at Publicis Worldwide Australia for not just saying yes to the tech but for backing it with smart, character-driven creative.
Director Oly Altavilla kept the story focused throughout, guiding the team through a lot of noise, never losing sight of the human side of the film.
To The Producers: you backed this thing at every step and made magic happen.
Huge thanks to the amazing crew and all the people not listed!
🧑‍🍳 Food Stylist: Caroline Velik
🎨 Art Director: Lucinda Thomson
💡 Lighting: Jon Webb
🎥 Director: Oly Altavilla
📋 EP: Noelle Jones
🎬 Producer: Georgia Rankin
🎥 DOP: Richard Kendall
📷 Camera: Richard Kendall, Tom Brandon, Alex Strati, Jason Gao
🎧 Sound: Mike Lange
📸 Stills: Jon Webb
🎭 HMU: Natalie Burley
👗 Wardrobe: Anita Fitzgerald
🛠️ Art Dept: Lucinda Thomson, Rob Molnar, Carla Smith, Kim Ritchie
🍝 Food: Sarah Watson, Haruka Kaneto
📍 Location: Paul DiCintio
🐶 Animals: Lauren Sellwood
🍴 Catering: Paul Le Noury
🎚️ Post Sup: Matt Hermans
🎥 Grip: Adam Vitolins
⚡ Gaffer: Dan Carr
#GaussianSplatting #3DGS #TVC #Volumetric #virtualproduction