Frantic Films Works Frantically on Journey to the Center of the Earth
September 13, 2008 4:13 pm
Frantic Films VFX, serving as the lead provider of visual effects for the stereoscopic summer blockbuster, "Journey to the Center of the Earth", used Autodesk's 3ds Max and in-house Flood software to astonishing effect.
Sergio Rosa caught up with Frantic Films' visual effects supervisor, Chris Harvey, to discuss the work that went into the film.
In the last few years, we’ve seen a rebirth of stereo 3D films that has spawned projects such as Spy Kids 3D. What do you think is the reason behind this?
Hmm . . . that’s a good question. I guess one of the big reasons is that technology has gotten to a place where there are no longer so many limitations on making a stereo film. For instance, Spy Kids 3D wasn’t even filmed in stereo to the extent that Journey was, or how Avatar will be shot. So technology is certainly a big factor. Stereo camera rigs are smaller, easier to manage and more consistent in what they capture. Digital cinema also plays a role. But from an audience perspective, I think it’s just a new—and when done correctly—very enjoyable experience. Personally, I love watching films in stereo and I hope that it isn't just a passing fad. I would love to see a drama or comedy in stereo. The whole experience is just so much more immersive.
What were the special considerations you had to keep in mind when you were working on this project, besides having to render two sets of images? What were the most difficult issues to resolve?
[Laughs] That’s a big question! There is a whole host of challenges when it comes to creating effects for a stereo film. First off, we had to stereo-proof the facility, which basically means creating an entirely new stereo pipeline, particularly in terms of the compositing workflow. It also means building a new viewing environment, which was essential at both our facilities [Vancouver and Winnipeg]. Basically, you have to be able to see stereo projected big! We also had challenges with data: there is twice as much as normal, twice as much source, twice as many renders, twice as many layers in the composite, and so on. Artistically, we had to learn new storytelling and composition methods for animation. And speaking of animation, having digital elements interact with and seamlessly integrate into stereo space is infinitely harder than on a mono show. We also had to develop a new technical eye for spotting stereo issues. Overall, it took a lot of research, development, plain trial by fire, and hard work from a lot of really talented people to figure this stuff out. But in the end, we developed a very strong stereo pipeline and are more than ready for the next show!
You mention that Frantic Films is more than ready for the next show. Do you already have a next project in 3D that we could know about?
We currently do not have a stereo film in the shop, but we are looking at different projects all the time. Unfortunately, I can’t comment on what those are.
You mention that, thanks to the technology available today, there are not as many limitations as there used to be for creating stereo movies, and I even see some new software supporting stereo out of the box (for example, the new versions of Autodesk products now offer stereo-image creation). Do you think this will spawn even more 3D films than before? Could this be what takes cinema to a new level, or could it mean that most, if not all, pictures will be made in stereo?
I really doubt that all films will become stereo, at least not right away. But I do think there is a lot of hype for a reason. I personally love the stereo experience, and I think it would work for any type of film: comedy, drama, VFX, action, whatever. And yes, with all the new tools currently available, and as experience grows, more and more movies will be made in stereo. As for taking cinema to the next level . . . well, that’s a pretty tough nut to crack. In my opinion it opens a lot of doors and adds a whole new level to the viewing experience, but for me the bottom line is the same as it’s always been: the story and the art! If you get both of those right, you can take cinema to the next level with a black-and-white, soundless movie.
Can you talk a little about your water simulation pipeline using Flood? From what I can see, it can handle very realistic water simulations. Is Flood a stand-alone tool or does it work on top of 3ds Max? Did you have any technical problems simulating and rendering such a massive amount of water?
Flood, at its core, is a stand-alone tool. However, there are many components to it and they’re all heavily integrated into 3ds Max. Altogether, in the Flood family we have Flood:Core, Flood:Surf and Flood:Spray, and some of these have further subsets. All of these different components can work independently or, in the case of something like the work we did in Journey, together. They also feed into other proprietary systems such as Krakatoa, which we use for a lot of our particle rendering.
As far as technical problems . . . well, when it comes to simulations there are always technical issues. For Journey, they included needing to artistically sculpt and drive the ocean surface simulation, since the ocean itself was a character of sorts: while giving its own performance, it had to be constrained in a natural way to follow the practical raft photography. There was also the challenge of massive data and network usage when simulating such huge data sets. The ocean surface simulation extended to the horizon, but to give you some idea of the interactive simulation sizes, one shot in particular was made up of three five-foot-deep, three-square-mile pools all stitched together. And that was just one level of simulation. On top of that, there were millions of spray particles on the ocean surface all the way to the horizon, which altogether resulted in over half a terabyte of information. So managing and controlling all of that becomes quite a challenge.
Another issue with our simulation for Journey was the sheer number of dependency loops: the practical raft, the ocean surface, the razorfish, the plesiosaur. Each one drives the others and is in turn driven by them.
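As a back-of-envelope illustration of how a spray cache can reach the half-terabyte scale Harvey describes, here is a hypothetical calculation. The particle count, per-particle size, and frame count are illustrative assumptions, not Frantic's actual figures.

```python
# Hypothetical figures, for illustration only: none of these counts
# come from Frantic Films, but they show how quickly particle caches grow.
particles = 10_000_000        # spray particles across the ocean surface
bytes_per_particle = 50       # position, velocity, age, ID, etc.
cached_frames = 1_000         # simulation frames cached to disk

total_bytes = particles * bytes_per_particle * cached_frames
print(total_bytes / 1e12, "TB")   # 0.5 TB
```

Even at modest per-particle sizes, horizon-spanning particle counts multiplied by a full shot's frame range land in this range quickly.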
Most of the shots feature a large number of fish flying around and jumping out of the water. Were those fish created with any crowd simulation system?
Initially, we had planned to create a crowd system for this, but in the end we didn't. The animators did it by hand . . . sort of. What we did create was a procedural swimming system that sat within the fish and plesiosaur rigs, allowing a base swimming motion that happened through a series of input controls that could be animated. Custom key frame animation could then be added on top of this system and it allowed the animators to focus on the details. Having this allowed the animators complete control over how the fish moved and where they were placed. With all the art direction and the constantly changing ocean surface, this turned out to be the best way to handle the large numbers of fish.
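The layered system Harvey describes, a procedural base with hand-keyed animation added on top, can be sketched as a simple additive scheme. This is a minimal hypothetical illustration, not Frantic's rig code; the function names and controls are invented.

```python
import math

def base_swim(t, frequency=2.0, amplitude=0.3, phase=0.0):
    """Procedural base layer: a simple sinusoidal swimming motion.
    The inputs stand in for the rig's animatable controls."""
    return amplitude * math.sin(frequency * t + phase)

def evaluate_swim(t, keyframe_layer=None, **controls):
    """Final value = procedural base plus any hand-keyed offset on top,
    so animators only key the details, not the whole swim cycle."""
    keyed = keyframe_layer(t) if keyframe_layer else 0.0
    return base_swim(t, **controls) + keyed

# A hand-keyed correction layered over the automatic swim:
hero_layer = lambda t: 0.1 * t   # hypothetical keyed drift over time
pose = evaluate_swim(1.0, keyframe_layer=hero_layer, frequency=3.0)
```

The point of the design is that the base cycle runs "for free" on every fish, while custom keys only override where a performance demands it.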
How was the crowd handled at render time? Did you group each of the fish render passes based on distance from the camera, or did you separate the fish on a crowd-fish and hero-fish basis?
This varied from shot to shot, but we did build an LOD system into our animation and lighting pipeline whereby characters could be flagged based on distance from camera and performance importance. This would in turn allow us to do a variety of things, such as splitting them out to different passes, assigning different shaders or model settings, etc.
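A distance-plus-importance flagging pass of the kind Harvey describes might look like the following sketch. The thresholds, tags, and data layout are hypothetical, not Frantic's pipeline code.

```python
import math

def assign_lod(characters, camera_pos, near=50.0, far=200.0):
    """Flag each character as 'hero', 'mid', or 'crowd' based on its
    distance from camera, with an override for performance-critical
    creatures that must stay hero-quality regardless of distance."""
    flags = {}
    for name, (pos, is_hero) in characters.items():
        dist = math.dist(pos, camera_pos)
        if is_hero or dist < near:
            flags[name] = "hero"    # full-res model, full shader set
        elif dist < far:
            flags[name] = "mid"     # reduced shader/model settings
        else:
            flags[name] = "crowd"   # split out to a cheaper render pass
    return flags

# Example: three background fish and one hero plesiosaur far from camera.
cast = {
    "fish_01": ((0, 0, 10), False),
    "fish_02": ((0, 0, 100), False),
    "fish_03": ((0, 0, 300), False),
    "plesiosaur": ((0, 0, 300), True),
}
lod = assign_lod(cast, (0, 0, 0))
```

Downstream tools can then key off the flag to choose passes, shaders, or model settings per character, as described above.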
What tools did you use for rendering? Did you rely mostly on mental ray for 3ds Max, RenderMan, or a combination of different render engines?
We actually used Gelato from NVIDIA. We wrote and controlled the 3ds Max front end for this software and it was an immense asset during the project. We were actually able to write a lot of special code to handle the unique challenges faced when making a stereo film. In fact, instead of taking the usual double hit to render time we were able to, through clever coding and the leveraging of Gelato's unique features, reduce that render hit to only about an extra 25%.
Were the plesiosaurs key frame animated, or did you use a small crowd system for them?
There was no crowd system used at all for the plesio. We did, however, have a procedural swimming system for them. What this allowed us to do is lay down the basic swim procedurally, which gave the animators the freedom to spend the most amount of time where it counts: in their performance.
Can you talk about the tools you used to rig and animate the fish and plesiosaurs? Did you use native Max tools on them, custom or third party tools, or a combination of those?
In order to answer this, I actually need to cover both modeling and the rigging process to a degree, as they became very closely entwined. We started with a low-resolution sculpt. We then moved it into Mudbox and ZBrush and started to detail the creatures out. Once the pristine high-resolution sculpts were complete, we dirtied them up, adding scars, asymmetrical details and variation for multiple creatures. Texturing was a hybrid of direct 3D painting in ZBrush and custom maps created in Photoshop. We also sculpted something I call Displacement Morphs: an entirely separate full-body sculpt that is essentially the body under complete compression or tension. The artists referred to them as the "wrinkly old man sculpts." These were then dynamically driven by tension and compression in the creature's surface, adding yet another level of extremely fine, animated detail.
As stated before, we built a non-linear pipeline, which meant we also built tools that let animators constantly update their animation rigs as needed without losing animation. This was key, because we could continue improving the rigging process while animation moved forward. The rigs also had a lot of procedural swimming motion built in, so the base animation tracks happened "automatically" and the animators layered the hero animation on top. For muscle, fat and soft-tissue effects, we used a custom, tricked-out version of SkinFX, which our lead TD, Kees Rijnen, wrote. Some of these sub-surface animation effects were applied as a post-process pass by creature TDs, but most of them ran in near real time directly in the animators' viewports. Once animation was complete, we ran custom caching tools that generated clean files for lighting and rendering, containing only the cached geometry and the all-important metadata that needed to travel with the individual creatures.
So, all in all, it was a whole whack of out-of-the-box custom scripts and custom plugins that we used to create the rigs and animation for our creatures. In the end, however, it all comes down to the talented team of people that executed the work!
Did you face any specific challenges when changing the rigs for the creatures or were these changes more related to adding new features to the existing rigs?
Hmmm . . . interesting question. Nothing out of the ordinary, specifically. Most of the time, the rig changes were strictly new features or tweaks to existing ones. However, there were two occasions late in the game (once with the fish, and once with the plesiosaur) where more significant changes were made. With the fish, it involved a redesign in the way they wanted it to move and jump, which required reworking the deformations as well as modifying some internal structures and joint placement. This caused challenges when we wanted to preserve animation, but we wrote tools to accommodate it, and in the end it held up very well. For the plesiosaur, it came when the need arose to redesign the creature, which caused a lot of changes in the facial regions. Beyond that, we faced the same challenges you always face with creatures.
Are there any final words you want to say?
Sure, I would love to say thanks to all the Frantic crew...I have talked a lot about the tools and things we did to create this show but when it comes right down to it, all that would be pointless without our talented artists cranking away with the passion they always have!
All supporting images are copyright. Images cannot be copied, printed, or reproduced in any manner without written permission from Frantic Films VFX.
Animation Alley is a regular featured column with Renderosity Staff Columnist Sergio Rosa [nemirc]. Sergio discusses computer graphics software, animation techniques, and technology. He also hosts interviews with professionals in the animation and cinematography fields.
September 15, 2008