I have always been a big fan of Hollywood movies, and being a filmmaker/editor, I always explore the filmmaking techniques used to make them.
James Cameron is one of my favorite filmmakers. If you have watched his movie ‘Avatar’, you must have been amazed at how they made it. Being a technical geek, I couldn't stop myself: I researched a lot, and here I am with some important information. It will give great clues to aspiring filmmakers and VFX students, which is why I thought I'd share it with you.
Avatar has captured the imagination of millions of people around the globe. Creating a completely new world from scratch – one with hundreds of species of flora and fauna and breathtaking landscapes – is a daunting task that Cameron completed with flying colors.
Below, I have listed the software and tools used in post-production. I also want to go into more detail and show some of the technical challenges and innovations that helped shape the world of Pandora and its inhabitants.
Some juicy details
While people think of the visuals as being Avatar's biggest achievement, it would be more appropriate to call it immersion – the feeling of being there, without any distracting clues that the world and its characters are computer-generated.

Bringing characters to life in a convincing manner is a daunting task. Final Fantasy: The Spirits Within was the first to try, and it failed spectacularly. The reason is psychological: while the brain can interpret a simple stick figure as a human, as the complexity of the model increases, perceived realism improves up to a point where the character is almost – but not quite – alive. Such a character looks real but dead. This dreaded area is called the Uncanny Valley. After the Final Fantasy flop, few have attempted to cross the Uncanny Valley; although LoTR’s Gollum is often cited as a realistic digital character, he was supposed to look repulsive, so the challenge wasn't as big.
Performance capture
James Cameron wanted to create beautiful, sexy characters you could fall in love with (and judging by various forum discussions, Neytiri had exactly that effect), and for this, the rough motion capture that is usually employed was not enough.
Cameron and his team coined the term performance capture – capturing all the nuances, body language and feelings of an actor and translating them onto the digital counterpart. For this they used innovative solutions from Giant Studios (the company responsible for all the motion capture), adding a helmet-mounted camera attached to a rod that sat in front of each actor's face; this way, facial expressions could be captured along with the whole-body motion. The helmets also featured ears and braids, helping the actors stay aware of them. In addition, they used up to 12 HD cameras to capture the actors from different angles, in wide shots and closeups, to preserve all the fine details.
This approach let the animators at Weta concentrate on reproducing every nuance of the original performance – small winks, the sway of the hair, the movement of the ears. It was also a boon for the actors: freed of tedious details like markers, lighting and costumes, they could concentrate on what they do best – conveying emotions.
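To give a sense of what capturing face and body together means in data terms, here is a toy sketch that pairs a body-mocap stream with a helmet-cam facial stream by timestamp. It is purely illustrative – the frame structures and names are my assumptions, not Giant Studios' or Weta's actual formats.

```python
from dataclasses import dataclass

@dataclass
class BodyFrame:
    time: float                # seconds
    joint_rotations: dict      # joint name -> (x, y, z) Euler angles

@dataclass
class FaceFrame:
    time: float                # seconds
    blendshape_weights: dict   # expression name -> weight in 0..1

def merge_streams(body, face):
    """Pair each body frame with the nearest facial frame in time,
    yielding one combined record per frame for the character rig."""
    merged = []
    for b in body:
        nearest = min(face, key=lambda f: abs(f.time - b.time))
        merged.append({"time": b.time,
                       "skeleton": b.joint_rotations,
                       "face": nearest.blendshape_weights})
    return merged

# toy usage at 24 fps
body = [BodyFrame(0.0, {"neck": (0, 5, 0)}), BodyFrame(1 / 24, {"neck": (0, 6, 0)})]
face = [FaceFrame(0.0, {"jawOpen": 0.1}), FaceFrame(1 / 24, {"jawOpen": 0.4})]
print(merge_streams(body, face)[1]["face"])  # -> {'jawOpen': 0.4}
```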
They also used motion capture for the creatures (with actors as stand-ins), and even for helicopters and banshees. This allowed for better interaction with the actors and more natural motion paths (especially tricky for helicopters, which sway along their path in a way only a helicopter pilot can describe). The Weta animators still had to refine the motion and add all the necessary details, but the general motion path was already given to them the way the filmmakers wanted it.
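As a simple illustration of "refine the motion but keep the path", here is a toy moving-average smoother for a captured position track; the animators' actual tools are of course far more sophisticated, and this sketch only shows the basic idea.

```python
def smooth_path(points, window=5):
    """Moving-average smoother: points is a list of (x, y, z) samples;
    returns a smoothed copy that stays close to the original trajectory."""
    half = window // 2
    smoothed = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        n = hi - lo
        smoothed.append(tuple(sum(p[axis] for p in points[lo:hi]) / n
                              for axis in range(3)))
    return smoothed

# a jittery captured track becomes a gentler, helicopter-like sway
path = [(0, 0.0, 0), (1, 0.5, 0), (2, -0.4, 0), (3, 0.6, 0), (4, 0.0, 0)]
print(smooth_path(path)[2])  # middle sample, averaged over its neighbors
```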
Real-time feedback
Another problem with CGI-heavy pictures is that the actors play in an empty space, and it's difficult for everyone standing on that bare floor (actors, cameramen, director) to imagine what the end result will look like. So James Cameron and his team devised a “virtual camera” hooked into MotionBuilder, allowing them to see the virtual environments in real time.

“Everybody has their own camera eye and sensibility and to translate that through an intermediary is very difficult”, said Rob Legato, referring to the traditional workflow that required the animators to handle camera movement, with variations taking weeks to complete.
For Avatar, they would move the camera around the empty set, and the virtual camera would do the same inside the simulated environment, all while the actors' movements were being captured and applied to their digital models. This way, instead of shooting a scene with the actors, having the motion interpreted and receiving an animation weeks later, the production team could see the results instantly – the essential feedback needed to refine the details. This approach allowed them to drop storyboards and simply experiment. For the camera work alone, they could try 15-20 variations – wider, lower – to get the best look.
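To make that feedback loop concrete, here is a minimal, self-contained sketch: a tracked physical rig drives a CG camera, and a low-res preview renders every frame. This is not the actual MotionBuilder integration – the classes below are stand-ins I made up for illustration.

```python
import time

class RigTracker:
    """Stand-in for the mocap system tracking the physical camera rig."""
    def read_pose(self):
        return {"pos": (0.0, 1.7, 0.0), "rot": (0.0, 15.0, 0.0)}  # dummy pose

class VirtualScene:
    """Stand-in for the low-res real-time environment."""
    def __init__(self):
        self.camera_pose = None
    def set_camera(self, pose):
        self.camera_pose = pose          # mirror the rig onto the CG camera
    def render_preview(self):
        return f"preview frame from camera at {self.camera_pose['pos']}"

def virtual_camera_loop(tracker, scene, frames=3, fps=24):
    """Each tick: read the rig's pose, move the CG camera, show a preview."""
    for _ in range(frames):
        scene.set_camera(tracker.read_pose())
        print(scene.render_preview())    # instant feedback for the director
        time.sleep(1.0 / fps)

virtual_camera_loop(RigTracker(), VirtualScene())
```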
When a scene was ready, it was handed to Weta, who replaced the low-res models with high-res ones, did the tweaks and small details, and rendered it.

Another clever device was the Simulcam – a rig that allowed real-time compositing over green screen. They would film actors in areas that required digital extensions (for example, the hangar bay), and the image in the viewfinder would show the composited result, with digital backgrounds in place of the green screen. Needless to say, this is a huge help for the people behind the camera, who can now see how the scene is going to look and how to frame it.
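The core of a viewfinder composite like Simulcam's is a chroma key: wherever the live frame is "green enough", show the CG background instead. Here is a bare-bones version with NumPy – real keyers handle spill, soft edges and lighting far more carefully, and this sketch is my illustration, not Cameron's actual system.

```python
import numpy as np

def green_screen_composite(live, background, threshold=60):
    """live, background: HxWx3 uint8 frames of equal shape."""
    live16 = live.astype(np.int16)       # avoid uint8 wrap-around
    r, g, b = live16[..., 0], live16[..., 1], live16[..., 2]
    # a pixel counts as "green screen" if green clearly dominates red and blue
    mask = (g - np.maximum(r, b)) > threshold
    out = live.copy()
    out[mask] = background[mask]         # swap keyed pixels for the CG set
    return out

# toy usage: a solid green frame composited over a flat grey "digital set"
live = np.zeros((4, 4, 3), dtype=np.uint8); live[..., 1] = 255
cg_set = np.full((4, 4, 3), 128, dtype=np.uint8)
print(green_screen_composite(live, cg_set)[0, 0])  # -> [128 128 128]
```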
Attention to detail
The creatures started as sketches on paper and were then digitally sculpted in ZBrush. Vehicles were modeled in Maya, XSI and 3ds Max.

For the avatars, Stan Winston Studios built full-size models of generic Na’vi male and female characters that were laser-scanned and tweaked. Incorporating features from the human characters was a challenge in itself, as the avatars had to bear some resemblance to their ‘operators’. Moreover, a face conformation too different from the actor's own did not translate well during facial performance capture, which is why the Na’vi ended up looking quite similar to the actors portraying them.
Speaking of performance capture, to fully retain all the nuances of the performance, the Weta artists built all the facial muscles, fat and tissue, and used state-of-the-art shaders and lighting models to convey a sense of depth beneath the skin, replacing the usual sub-surface scattering model with a sub-surface absorption one.
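For a feel of what "absorption" means here: the standard textbook model is the Beer-Lambert law, where light decays exponentially with the distance it travels through tissue. The sketch below uses made-up coefficients, not Weta's shader values; it only illustrates why red light survives deeper in skin than blue, which is what gives skin its warm translucency.

```python
import math

def transmittance(absorption_coeff, depth_mm):
    """Beer-Lambert law: fraction of light surviving depth_mm of tissue."""
    return math.exp(-absorption_coeff * depth_mm)

# hypothetical per-channel absorption coefficients (per mm): red penetrates
# skin deeper than blue, which is why backlit skin glows warm, not cold
for channel, sigma_a in [("red", 0.15), ("green", 0.45), ("blue", 0.90)]:
    print(channel, round(transmittance(sigma_a, depth_mm=2.0), 3))
# -> red 0.741, green 0.407, blue 0.165
```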
The modeling and texturing of the characters went through several update cycles: as one character model was made more realistic, all the other CGI near it started looking obviously fake and had to be improved as well.
For the vegetation, Weta created a library of almost 3,000 separate plants and trees, which enabled them to “decorate” the Pandora jungle in a realistic fashion. By the way, the vegetation was initially supposed to be cyan, but the jungle looked too alien, so they brought back some green for a more familiar look.
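As a rough illustration of what "decorating" a jungle with a plant library might look like in a pipeline tool, here is a toy scatterer; the asset naming, area and variation ranges are all invented for the example and say nothing about Weta's actual tooling.

```python
import random

PLANT_LIBRARY = [f"plant_{i:04d}" for i in range(3000)]  # ~3000 assets

def scatter_jungle(count, area=(500.0, 500.0), seed=42):
    """Place plant instances at random positions, rotations and scales."""
    rng = random.Random(seed)        # deterministic: the set can be rebuilt
    return [{
        "asset": rng.choice(PLANT_LIBRARY),
        "position": (rng.uniform(0, area[0]), rng.uniform(0, area[1])),
        "rotation_deg": rng.uniform(0, 360),   # random yaw
        "scale": rng.uniform(0.8, 1.3),        # natural size variation
    } for _ in range(count)]

print(scatter_jungle(3)[0])  # one of three scattered instances
```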
And here is the main software used in post-production:
- Autodesk Maya (most shots)
- Pixar Renderman for Maya
- Autodesk SoftImage XSI
- Luxology Modo (model design, e.g. the Scorpion)
- Lightwave (low-res realtime environments)
- Houdini (Hell’s Gate scenes, interiors)
- ZBrush (creature design)
- Autodesk 3ds Max (space shots, control room screens and HUD renderings)
- Autodesk MotionBuilder (for real-time 3d visualisations)
- Eyeon Fusion (image compositing)
- The Foundry Nuke Compositor (previz image compositing)
- Autodesk Smoke (color correction)
- Autodesk Combustion (compositing)
- Massive (vegetation simulation)
- Mudbox (floating mountains)
- Avid (video editing)
- Adobe After Effects (compositing, real-time visualizations)
- PF Track (motion tracking, background replacement)
- Adobe Illustrator (HUD and screens layout)
- Adobe Photoshop (concept art, textures)
- Adobe Premiere (proofing, rough compositing with AE)
- many tools developed in-house
- countless plugins for each platform, including Ocula for Nuke, Krakatoa for 3ds Max, and Sapphire for Combustion/AE