OptiTrack is a motion capture system used in the fields of 3D animation, biomechanics, virtual reality and, of course, video games. The technology has been used by indie and AAA game developers alike to bring production-capable systems in-house and on-site rather than renting time at a motion capture service provider. AAA developers who have used OptiTrack include Ubisoft, Remedy Entertainment, Crytek, DICE, CD Projekt RED, Sony Entertainment Europe and Activision.
GamingBolt caught up with Seth Steiling, Marketing Manager at OptiTrack, to learn more about the technology. Check out his responses below.
Rashid Sayed: To begin with, can you please tell us about OptiTrack?
Seth Steiling: As the largest motion capture provider in the industry, OptiTrack has enabled thousands of studios and research facilities to integrate mocap into their pipeline. While our systems are capable of delivering the world’s largest capture volumes and best-in-class 3D precision, our technology is also the simplest to use, our architecture is totally open, and our prices are the lowest on the market.
Armed with open, affordable motion capture, our customers are accomplishing the unthinkable—including helping quadriplegics drive race cars, programming flying robots to play live instruments, measuring the swings of PGA pros, and breaking new ground in performance capture.
Rashid Sayed: With regards to motion capture, what is the number one demand these days from game developers?
Seth Steiling: Each game’s mocap pipeline is uniquely informed by a host of factors, including the fidelity of character modeling and animation, the genre, the art style, and even the gaming platform—so there really isn’t a standout feature that all devs require. However, all developers look for a mocap system that produces clean data that’s true to the actor’s performance, along with a processing pipeline that fits seamlessly into their existing animation workflow.
Rashid Sayed: Can you tell us the differences between motion capture for games and the process followed in movies using OptiTrack?
Seth Steiling: Differences in mocap for film and games stem from differences in real-time versus offline processing. Real-time, interactive applications like gaming typically utilize simpler bone-based rigs and lower-resolution models and textures—all of which allow artists to create faster, more efficient pipelines that are ideal for churning out thousands of short animations. In short, because the final application is usually bottlenecked by the gaming rig’s hardware, the game engine and CG pipeline can be leaner than what is seen in film, without limiting the visual fidelity that’s otherwise achievable.
In non-real-time applications like film, processing power is much less of a bottleneck, allowing for higher-fidelity assets and advanced solvers to be incorporated into the production pipeline. In order to take advantage of these advanced models, rigs, and solvers, film mocap typically captures much more data than what can be used in games.
For example, a face mocap pipeline for games might use markers to drive a simple bone-based rig, because in the end, the engine will be oriented around driving bones for animation. A film pipeline will potentially run mocap through a FACS system and a blend-shape-based solve, followed by advanced models for soft tissue or blood flow, etc.—much of which benefits from a more verbose motion capture dataset. Some film pipelines are further complicated by the need for a virtual camera, full performance capture, or even on-set mocap blended with live action footage.
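The "markers driving a simple bone-based rig" idea Steiling describes can be sketched in a few lines. The example below is purely illustrative (the marker name, neutral pose, and mapping are assumptions, not OptiTrack's actual solver): a chin marker's vertical drop from its neutral position is mapped linearly onto a single jaw-bone angle, which is the kind of lightweight channel a game engine animates directly.

```python
# Illustrative sketch of a game-style face pipeline: one chin marker's
# displacement from its neutral pose drives a single "jaw" bone angle.
# Real pipelines track many markers and use richer solves; the principle
# (marker displacement mapped onto a bone channel) is the same.

def jaw_angle(chin_y, neutral_y, max_drop=0.03, max_angle=25.0):
    """Map chin-marker drop (metres) to a jaw-open angle in degrees.

    Linear mapping, clamped to the range [0, max_angle]. The marker
    positions and limits here are hypothetical example values.
    """
    drop = neutral_y - chin_y            # how far the chin fell below neutral
    t = max(0.0, min(drop / max_drop, 1.0))  # normalise and clamp to [0, 1]
    return t * max_angle

# Chin 3 cm below neutral corresponds to a fully open jaw:
print(jaw_angle(0.97, 1.00))  # -> 25.0
```

A real game rig would feed dozens of such channels per frame, but each one stays this cheap, which is why bone-driven faces suit real-time engines.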
Rashid Sayed: Do you think the new consoles, PS4 and Xbox One, are capable enough to produce movie-quality facial capture?
Seth Steiling: While the gap in fidelity is narrowing, film remains a much more flexible platform for photo-real CG than games. A single, film-quality frame can take from minutes to hours or even days to render, so the dream of Avatar in real-time will require hardware that is a couple of orders of magnitude beyond what’s currently top of the line.
Major advancements in scaled GPU rendering are making final-frame post-visualization possible, but only with dozens of top-shelf GPUs pushing the render. Furthermore, film-quality facial animation is the byproduct of not only super powerful processing and advanced rigs and solvers, but huge teams of animators perfecting motions in post on a scale that most game developers just cannot afford to invest in.
Rashid Sayed: How much effort does a developer need to put in to integrate OptiTrack into their games? Is it a time- and resource-intensive process?
Seth Steiling: Within the motion capture landscape, OptiTrack systems are very simple to use. Processes that have traditionally taken hours or even days—like system setup and performer calibration—can be accomplished in minutes or even seconds with OptiTrack. This means that a single operator can configure an entire mocap stage without a background in engineering.
Having said that, integrating motion capture into an existing animation pipeline, regardless of the technology, requires a skill set that is distinct from standard keyframing. Retargeting—the process of mapping source mocap onto often differently proportioned character rigs in a way that respects the original performance—is a crucial component of mocap processing that is often new to keyframe animators.
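The retargeting step Steiling mentions can be illustrated with a minimal sketch. Everything here is an assumption for illustration (the frame layout, the leg-length heuristic, the function names): joint rotations are copied across unchanged because orientation does not depend on limb length, while root translation is rescaled so the target character's stride matches its own proportions rather than the performer's.

```python
# Minimal retargeting sketch (hypothetical data layout, not OptiTrack's
# API). Joint rotations transfer directly; root translation is scaled by
# the ratio of the target rig's leg length to the source performer's, so
# a tall character takes proportionally longer strides.

def retarget(source_frames, source_leg_len, target_leg_len):
    """Map source mocap frames onto a differently proportioned rig.

    Each frame is a dict with 'root_pos' (an (x, y, z) tuple in metres)
    and 'rotations' (joint name -> Euler-angle tuple). Rotations are
    assumed to share a skeleton convention, so they copy unchanged.
    """
    scale = target_leg_len / source_leg_len
    out = []
    for frame in source_frames:
        out.append({
            "root_pos": tuple(c * scale for c in frame["root_pos"]),
            "rotations": dict(frame["rotations"]),  # proportion-independent
        })
    return out

# A performer with 0.9 m legs driving a character with 1.8 m legs:
frames = [{"root_pos": (0.0, 0.9, 0.5), "rotations": {"hip": (10.0, 0.0, 0.0)}}]
result = retarget(frames, source_leg_len=0.9, target_leg_len=1.8)
# Root translation doubles; the hip rotation is untouched.
```

Production retargeting solvers also handle foot contacts, differing joint counts, and rig conventions, which is exactly why the skill set is new to keyframe animators.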