[July 24th, 2007]
Posted by Michael
It might be the back-door entrance into the world of Hollywood feature film production, or just one more use of machinima in the commercial world: pre-visualization. Replacing expensive and specialized 3D packages with easy-to-access live 3D environments sounds like a marriage made in heaven. But what are the real problems and opportunities here?
At times it can become a bit difficult to trace the real state of the art in terms of machinima and pre-vis because the glitter of Hollywood is shining all over it. One rather solid piece of information comes from Scott Lehane as he reports on the use of Unreal in the pre-visualization of Spielberg’s A.I.
Dennis Muren himself talks about it, but the man at the helm of that project seemed to be Scott Rosenthal of ILM.
“The whole goal of the exercise was to expose aesthetic choices or opportunities to Steven Spielberg. So it was a director’s tool not a postproduction tool. We weren’t particularly concerned about using that data later for motion control or for match moving in CG,” said Rosenthal.
But he added that “while the system really was not a postproduction process or tool, it was a way to push postproduction decision making back into production where it belongs.”
(the article is offline, but I put a copy of it here)
What stands out is the idea of a truly ‘creative tool’ and not a fast reproduction engine. In other words: we are not talking about a faster version of Maya, but something more creative. And if Lucas is correct in mentioning the sum of $10 million for the pre-vis work on Star Wars Episode III … then there is a lot of room for creativity from the machinima side. Plus there seems to be a need for it.
Lucas himself spoke about new interactive features at SIGGRAPH 2005. The result, as most might know (see Paul’s post), is that ILM has developed its own pre-vis tool, Zviz, which seems to have some roots in machinima. Again, the interesting thing is that Zviz is currently at work on the Clone Wars series. Only the ILM wizards could really tell us where pre-visualization ends and production starts. Once you are in this creative process, the opportunities seem to thrive. If anybody has actually seen it at work: I would love to hear more about it.
But ILM is not the mother of invention here. The whole idea has been around a lot longer. In 1989 David Smith, developer of the early 3D game The Colony, managed to get through to James Cameron, who at that time was shooting The Abyss. The tale goes that, back then, he created some virtual walkthroughs for Cameron associate and visual effects person Michael Backes and continued to work on virtual walkthroughs. All this more than a decade before A.I.
But it does not always need to be a new engine. You can even use out-of-the-box engines. It seems that the pre-vis work on Stealth was done in flight-simulator game engines, if I understand this must-read article correctly. Note that Digital Domain, the effects company behind Stealth, has recently announced that it will get more involved in interactive endeavors. Borderlines get blurry. Pre-vis companies like the Pixel Liberation Front work for EA as well as for Superman Returns (as well as for the less savvy Stay Alive, where the game sequences were all pre-rendered).
No doubt: the pre-visualization industry has evolved since then, and so has the machinima community. With iClone and Moviestorm we are looking at the next generation of tools on the borderline between machinima and pre-vis, maybe more accessible than ever. Even Zviz is supposed to run on laptops. So what can we offer that pre-vis needs?
Down here at Georgia Tech we started our own humble experiments with pre-visualization: Some time ago Matthias Shapiro did a project called NUCCI (New User Camera Control Interface). The project was basically a pre-vis test and recreated a scene from Fincher’s Panic Room (itself a heavily pre-visualized blockbuster). More recently we have been testing whether we could work with the 3D department at Turner Broadcasting and did this little test. The short answer is: yes, it does make a lot of sense and offers direct access to virtual sets.
One obvious issue remains the interface. Playing a game feels different from operating a camera, and a couple of standard features of even the most mundane prosumer DV cameras still remain largely inaccessible in machinima today. Camera operators do not want to type variables into the console or press a button combination to change the shutter speed or pull focus.
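To make the gap concrete, here is a toy sketch (not tied to any real engine’s API; all names are hypothetical) of what a camera-operator-friendly control layer might expose: shutter angle and focus distance as first-class, animatable parameters rather than console variables.

```python
# Hypothetical sketch of operator-style camera controls for a game engine.
# Shutter angle and focus distance are modeled the way a film camera
# exposes them; a real engine binding would map these to render settings.

class VirtualCamera:
    def __init__(self, shutter_angle=180.0, focus_m=3.0):
        self.shutter_angle = shutter_angle  # degrees, as on a film camera
        self.focus_m = focus_m              # focus distance in meters

    def motion_blur_fraction(self):
        # A 180-degree shutter exposes each frame for half the frame
        # interval, i.e. motion blur over half the per-frame movement.
        return self.shutter_angle / 360.0

    def pull_focus(self, target_m, seconds, fps=24.0):
        # Linearly rack focus toward a target over N frames, yielding
        # the per-frame focus distance a renderer could consume.
        frames = max(1, int(seconds * fps))
        step = (target_m - self.focus_m) / frames
        for _ in range(frames):
            self.focus_m += step
            yield self.focus_m

cam = VirtualCamera(shutter_angle=180.0, focus_m=2.0)
print(round(cam.motion_blur_fraction(), 2))          # 0.5
plan = list(cam.pull_focus(target_m=6.0, seconds=1.0))
print(len(plan), round(plan[-1], 2))                 # 24 6.0
```

A one-second focus pull becomes 24 per-frame values an operator could scrub or drive from a physical dial, instead of a console command.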
When Peter Jackson says “We want Legolas to run up the chain to the Cave Troll” … how fast can games really react to that? The Cave Troll scene in The Lord of the Rings might be a good example. It uses numerous technical elements, from mo-cap to compositing. Among them: Jackson designed the camera work for that scene using a virtual camera that simulated a handheld effect. That is a truly creative use of real-time. But who in the machinima community can play with that level of hardware investment?
You still need a lot of processing firepower and expensive software to claim a real-time production pipeline the way The Virtual Director does. Machinima is different: machinima is fast, cheaper, and right at hand wherever you need it. So it needs more affordable solutions like the one by Crack Creative and, even more machinima-specific, the one by GameCaster. Fiezi himself has played around with that.
I could not agree more. We have to make the camera more accessible (and I wish we could get some students interested in such a project) … but while such an interface might be appealing to producers, is it *really* the problem we face when doing pre-vis machinima?
In our own test project we did not have much time to play with fancy interfaces. We just hooked up a Wii controller to UT. Our biggest problem was not driving the camera, and it was not the poly count. It was the import of the 3D data. Real-time modeling and Maya modeling are still worlds apart. 3D modelers in the “real” world have no idea how careful one has to be to get a model into Unreal. We (in the person of now-alumnus Nick Bowman) spent more time cleaning up the model than actually modifying any Unreal bits. Not much creativity there.
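A toy illustration of the kind of cleanup involved (this is not Unreal’s actual import pipeline; the function and budget are invented for the sketch): quads triangulated, degenerate faces dropped, and a poly budget enforced before the engine will take the mesh.

```python
# Hypothetical pre-import sanity pass over a mesh, standing in for the
# manual cleanup a high-end model needs before a real-time engine accepts it.

def prepare_for_engine(faces, max_tris=10000):
    """faces: list of vertex-index tuples (triangles or quads)."""
    tris = []
    for f in faces:
        if len(set(f)) < 3:
            continue                      # degenerate face: repeated vertices
        if len(f) == 3:
            tris.append(tuple(f))
        elif len(f) == 4:                 # split a quad into two triangles
            a, b, c, d = f
            tris.extend([(a, b, c), (a, c, d)])
        # n-gons, flipped normals, open edges etc. need real handling here
    if len(tris) > max_tris:
        raise ValueError(f"{len(tris)} tris exceeds budget of {max_tris}")
    return tris

mesh = [(0, 1, 2), (2, 3, 4, 5), (6, 6, 7)]   # a tri, a quad, a degenerate
print(prepare_for_engine(mesh))                # [(0, 1, 2), (2, 3, 4), (2, 4, 5)]
```

Even this trivial pass shows why the work is tedious rather than creative: it is all bookkeeping that the modeling package happily ignored.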
Strangely enough, one of those persistent myths is that the assets of Hollywood production houses will somehow transfer into games. The way things look now, I doubt that it works “just like that.” Steven Katz, one Hollywood writer who actually knows a thing or two about real-time and even video games, writes:
‘For more years than I can remember, production software and hardware companies have been touting the workflow concept of games and movies being derived from the same assets — literally the same models and scene files. This is engineer speak, and there are so many reasons why this will not happen quickly that it’s not worth the space to go into it.
(…) there is a very real convergence between movies and games. But it’s not the assets that are coming together; it’s the rendering technology.’
The looks might be comparable, but what good is that if I have to model the creature twice because the Maya-exclusive ueber-poly 3D model simply will not go into a real-time engine?
Unreal 3 might change some of that, but what is needed is a really, really, really clever and super-safe exporter when it comes to the poly-baking. I have never played with CryTek‘s engine or Doom III, so maybe there are better options out there? In any case, the exporter looked more in need of an overhaul to me than the fancy interface.
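For a sense of what such “poly-baking” involves at its crudest, here is a minimal vertex-clustering decimation sketch, the simplest member of the family of techniques an exporter might use to collapse a high-poly model into something a real-time engine can swallow. Real exporters are far more careful about normals, UVs, and silhouettes; everything here is an illustrative assumption.

```python
# Minimal vertex-clustering decimation: snap vertices to a grid, merge
# everything that lands in the same cell, drop triangles that collapse.

def decimate(vertices, triangles, cell=1.0):
    cell_of, reps, remap = {}, [], {}
    for i, (x, y, z) in enumerate(vertices):
        key = (int(x // cell), int(y // cell), int(z // cell))
        if key not in cell_of:
            cell_of[key] = len(reps)
            reps.append((x, y, z))        # first vertex represents the cell
        remap[i] = cell_of[key]
    tris = []
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if len({a, b, c}) == 3:          # keep only non-degenerate triangles
            tris.append((a, b, c))
    return reps, tris

verts = [(0.0, 0, 0), (0.1, 0, 0), (2.0, 0, 0), (2.0, 2.0, 0)]
tris = [(0, 1, 2), (1, 2, 3)]
new_v, new_t = decimate(verts, tris, cell=1.0)
print(len(new_v), new_t)   # 3 [(0, 1, 2)]
```

The two near-coincident vertices merge, one triangle degenerates and is discarded, and the model gets lighter, which is exactly where a careless exporter also destroys the silhouette the modeler cared about.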
And then we can tackle the real question: the graphics in those next-gen gems are getting ridiculously pretty, and at the same time the fast render in Maya will at some point reach real-time levels. How do we distinguish between real-time Maya and real-time machinima?
The answer should be: a higher level of creativity. Here I liked Paul’s point about the multi-player capabilities of machinima. Live performance by multiple artists – that is a difference. Live puppeteering – that is a difference (let’s see for how much longer). Combining sound and sight, improvisation in the virtual space … these are creative areas that Maya-style pre-vis can hardly reach but that are the norm in video games. So maybe machinima should worry less about adjusting itself to traditional pre-vis and instead offer a new form of pre-visualization as such? Make the pre-visualization process more like a play session? Like virtual theater?
Let’s see whether the Hollywood gurus will listen to a bunch of gamers. And I would love to hear from Matt Kelland and the iClone guys about their experience with machinima and pre-visualization.