[August 14th, 2010]
Apple going back to some machinima roots?
Posted by Michael
Systems and methods are provided that record data in a videogame, such as a user’s character and performance in the videogame, and generate a book, e-book, or comic book based on the recorded data. A narrative data structure generated from the recorded data may include pregenerated text and images, and may provide for insertion of the recorded data into the narrative data structure. The recorded data may be converted into natural-language text for insertion into the narrative data structure. In some embodiments, the system may record screenshots of the videogame and insert the screenshots into the narrative data structure as illustrations. The narrative data structure may be provided to a location for printing as a book or other publication or may be electronically formatted and provided as an e-book.
Does that ring a bell? Recording data from a game play session and creating a narrative from that? This is the beginning of Patent No 20100203970 filed by Apple in February 2009 (you can search for that kind of stuff here). Their idea was to automatically generate comics or novels from game play – e.g. from Mass Effect – as patentlyapple.com reports.
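To make the patent's core idea concrete, here is a minimal sketch of what such a "narrative data structure" could look like: pregenerated text with slots into which the recorded game data gets inserted. All the names, fields, and template wording below are my own assumptions, not anything from the actual filing.

```python
# Hypothetical sketch of the patent's "narrative data structure":
# pregenerated text plus recorded game data inserted into its slots.

RECORDED_DATA = {
    "character_name": "Shepard",
    "enemy_count": 12,
    "location": "the Citadel",
}

NARRATIVE_TEMPLATE = (
    "{character_name} arrived at {location} and, after a long fight, "
    "defeated {enemy_count} enemies."
)

def generate_passage(template: str, data: dict) -> str:
    """Insert recorded gameplay data into a pregenerated text template."""
    return template.format(**data)

print(generate_passage(NARRATIVE_TEMPLATE, RECORDED_DATA))
# → Shepard arrived at the Citadel and, after a long fight, defeated 12 enemies.
```

Mad-Libs with gameplay data, basically – which already hints at how much of the "narrative" would have to be pre-written by hand.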
Once upon a time there was a comparable Nintendo patent, but the Apple one is obviously more geared toward the iPad and ebooks in general. The jump from there to a cinematic interpretation is more than a hop and a skip, but it is easy to imagine. And after all, the screenshot feature of The Sims was the source of countless Sims comics, which Will Wright noted as a great result of the community of gamers:
We have this feature in the game where the fans can tell a story as they’re playing the game. They can take screenshots and annotate and make a story, kind of like a comic book. And it’s a very simple process, like one or two button clicks, and you have it published on our Web site.
We have fans uploading these stories at the rate of about 400 a day. We have about 18,000 stories that the fans had written for the games. And they’re fascinating to read. The effort people put into these stories is amazing. And it gives us a sense of where the people want to go with the game by just looking at the stories that they’re telling currently.
And the development from that to the machinima community of The Sims 2 was just another step up the ladder. Indeed, the Apple patent stays a bit vague as it speaks of "a book, comic book, or any other publication may be created from the generated narrative data structure." So did Apple read Wright's interviews? And do they believe they can fight the holders of the IP (like EA for The Sims or Microsoft for Mass Effect) when they turn those games into comics and sell that service to their Apple-ites? It cannot be, can it? Most stories that involve a fight with a EULA are sad stories, very sad stories.
Let’s instead look at this from a more technical point of view. Because what is interesting in the Apple patent is their approach to narrative.
First, the system is server-based. Users have to send their game data to the server where it is analyzed and a narrative is fabricated from that. Interestingly enough, it seems Apple also wants to cover DVD players and the like – not sure how that might combine with video games.
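The architecture, as I read it: the client records locally and ships a session summary to the server, which does the narrative work. The payload shape below is my guess at what such an upload could contain, not anything from the filing – the field names and games are invented.

```python
# Hypothetical client-side session record, serialized for upload
# to the narrative-generation server. Everything here is an assumption.
import json

session = {
    "game_id": "hypothetical-rpg",
    "player": {"name": "Alex", "class": "soldier"},
    "events": [
        {"t": 12.5, "type": "dialogue", "choice": "spare the prisoner"},
        {"t": 47.0, "type": "combat", "outcome": "victory"},
    ],
    "screenshots": ["shot_0012.png"],
}

payload = json.dumps(session)   # what the client would upload
restored = json.loads(payload)  # what the server would parse and analyze
```

Which immediately raises the next question: every game would have to agree on what goes into that payload.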
Patentlyapple did so many beautiful screens that I will use them here to re-tell the logic as I understand it. First, the overview of the system:
The first question one would ask: how do you standardize the recorded data across different games? Here comes the second step, as Apple suggests the following conceptual diagram as an indication of what they want to record:
… which leaves me scratching my head. Can I record my Final Fantasy session with that, as well as my Sims or my BioShock one? The whole system seems to be streamlined for a Mass Effect type of game, from character creation to dialogue trees.
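One way to imagine the standardization problem: map game-specific events onto a shared vocabulary before the server ever sees them. The mapping tables below are invented for illustration – and they show the weakness immediately, since anything a game does that the common vocabulary does not anticipate simply falls through the cracks.

```python
# Sketch: normalize game-specific events into a common vocabulary.
# Both the vocabulary and the per-game mappings are my assumptions.

GAME_EVENT_MAP = {
    "mass_effect": {"renegade_choice": "dialogue", "firefight": "combat"},
    "the_sims":    {"chat": "dialogue", "redecorate": "customization"},
}

def normalize(game: str, raw_events: list[str]) -> list[str]:
    """Translate a game's raw event names into the common vocabulary,
    silently dropping anything the mapping does not know about."""
    mapping = GAME_EVENT_MAP.get(game, {})
    return [mapping[e] for e in raw_events if e in mapping]

print(normalize("the_sims", ["chat", "nap", "redecorate"]))
# → ['dialogue', 'customization']  -- the nap never happened, narratively
```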
One thing is for sure: Apple did read the interviews with the BioWare folks.
As I remain interested in questions of camera control and visualization, another thing I do not understand is the screenshot recording. Taking screenshots is not in itself a problem – the problem is knowing which screenshots to take during play. How could the system know when to take a screenshot if it has no prior knowledge of the story it will be telling?
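The patent does not say how the system would decide that a moment is worth illustrating. One naive heuristic – entirely my own assumption, not Apple's – would be to score each game tick by how much "happened" and shoot whenever the score spikes above the running average:

```python
# Naive screenshot trigger (my assumption): shoot when the per-tick
# activity score spikes well above the running average so far.

def salient_ticks(scores: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of ticks whose activity score exceeds `threshold`
    times the running average -- candidate screenshot moments."""
    shots, total = [], 0.0
    for i, score in enumerate(scores):
        avg = total / i if i else score
        if avg and score > threshold * avg:
            shots.append(i)
        total += score
    return shots

print(salient_ticks([1, 1, 1, 10, 1, 1]))
# → [3]
```

Of course this only captures "a lot is happening", not "this matters to the story" – a quiet dialogue choice that decides the whole plot would score zero.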
Third, the patent claims that this data is enough for story generation on an independent system (the server). Even if the server had more detailed information, such as specifics about the individual game and the conditions of this play segment, many questions remain open. How would that system keep older play instances logically connected to the new narrative? Any kind of adaptation has to add to or re-interpret the underlying source. I would assume such a re-interpretation should be modeled on the player – I can play a session of Half-Life very aggressively or rather cautiously. The choices remain the same, as the path is set in HL2, but the comic's pacing should be completely different. How would that be accomplished? And how would it deal with the eternal re-spawning I needed in my Call of Duty session?
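To sketch what I mean: the retelling could derive a pacing label from simple session statistics, and fold repeated death/respawn cycles into a single narrative beat instead of retelling the same failed attempt a dozen times. The thresholds and labels below are pure invention on my part.

```python
# Sketch: adapt the retelling to *how* the session was played.
# Statistics, thresholds, and labels are all assumptions for illustration.

def infer_pacing(shots_per_minute: float, deaths: int) -> str:
    """Map raw play statistics to a storytelling pace."""
    if shots_per_minute > 30:
        return "frantic"
    if deaths > 5:
        return "desperate"
    return "measured"

def collapse_respawns(events: list[str]) -> list[str]:
    """Replace a run of death/respawn cycles with one beat noting
    how many attempts the player needed."""
    out, retries = [], 0
    for e in events:
        if e in ("death", "respawn"):
            retries += (e == "death")
        else:
            if retries:
                out.append(f"after {retries} attempts")
                retries = 0
            out.append(e)
    if retries:
        out.append(f"after {retries} attempts")
    return out

print(collapse_respawns(
    ["fight", "death", "respawn", "death", "respawn", "boss_down"]))
# → ['fight', 'after 2 attempts', 'boss_down']
```

Even this toy version shows the problem: the *data* for an aggressive and a cautious playthrough of HL2 can look nearly identical, while the story they deserve does not.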
I admit, what I would really love to see is the early mishaps during the debugging phase of such a system. It would be great if the system were imaginative enough to misinterpret my Gears of War game and tell it back to me as a romantic comedy! The disasters of early Interactive Fiction generators are at times hilarious.
Finally, I had one simple question: why do they re-tell my events exactly as they happened in the game? Why not use all that magic to tell me a story that did not happen, or the story that happened but told from a different point of view? For example, tell me the story of Mass Effect from the perspective of one of my teammates, or Half-Life from the view of a scientist, with the actions of my play shaping their stories. Now that could be fun.