OptiTrack Interview: Delivering Affordable Motion Capture Solutions For The Video Games Industry

GamingBolt speaks to Seth Steiling to learn more about OptiTrack.

Posted on 16th July, 2014 under Article, Interviews

OptiTrack is a motion capture system used in 3D animation, biomechanics, virtual reality, and of course video games. The technology has let indie and AAA game developers bring production-capable systems in-house and on-site rather than renting time at a motion capture service provider. AAA developers who have used OptiTrack include Ubisoft, Remedy Entertainment, Crytek, DICE, CD Projekt RED, Sony Entertainment Europe and Activision.

GamingBolt caught up with Seth Steiling, Marketing Manager at OptiTrack, to learn more about the technology. Check out his responses below.

Rashid Sayed: To begin with, can you please tell us about OptiTrack?

Seth Steiling: As the largest motion capture provider in the industry, OptiTrack has enabled thousands of studios and research facilities to integrate mocap into their pipeline. While our systems are capable of delivering the world’s largest capture volumes and best-in-class 3D precision, our technology is also the simplest to use, our architecture is totally open, and our prices are the lowest on the market.

Armed with open, affordable motion capture, our customers are accomplishing the unthinkable—including helping quadriplegics drive race cars, programming flying robots to play live instruments, measuring the swings of PGA pros, and breaking new ground in performance capture.


"In non-real-time applications like film, processing power is much less of a bottleneck, allowing for higher-fidelity assets and advanced solvers to be incorporated into the production pipeline."

Rashid Sayed: With regards to motion capture, what is the number one demand these days from game developers?

Seth Steiling: Each game’s mocap pipeline is uniquely informed by a host of factors, including the fidelity of character modeling and animation, the genre, the art style, and even the gaming platform—so there really isn’t a standout feature that all devs require. However, all developers look for a mocap system that produces clean data that’s true to the actor’s performance, along with a processing pipeline that fits seamlessly into their existing animation workflow.

Rashid Sayed: Can you tell us the differences between motion capture for games and the process followed in movies using OptiTrack?

Seth Steiling: Differences in mocap for film and games stem from differences in real-time versus offline processing. Real-time, interactive applications like gaming typically utilize simpler bone-based rigs and lower-resolution models and textures—all of which allow artists to create faster, more efficient pipelines that are ideal for churning out thousands of short animations. In short, because the final application is usually bottlenecked by the gaming rig’s hardware, the game engine and CG pipeline can be leaner than what is seen in film, without limiting the visual fidelity that’s otherwise achievable.

In non-real-time applications like film, processing power is much less of a bottleneck, allowing for higher-fidelity assets and advanced solvers to be incorporated into the production pipeline. In order to take advantage of these advanced models, rigs, and solvers, film mocap typically captures much more data than what can be used in games.

For example, a face mocap pipeline for games might use markers to drive a simple bone-based rig, because in the end, the engine will be oriented around driving bones for animation. A film pipeline will potentially run mocap through a FACS system and a blendshape-based solve, followed by advanced models for soft tissue or blood flow, etc.—much of which benefits from a more verbose motion capture dataset. Some film pipelines are further complicated by the need for a virtual camera, full performance capture, or even on-set mocap blended with live action footage.
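As a toy sketch of the bone-driven game approach described above (all names, bone lengths, and numbers here are hypothetical illustrations, not OptiTrack's pipeline), a single chin marker's vertical drop might be reduced to one jaw-bone rotation:

```python
import math

# Hypothetical game-style face setup: one chin marker drives one jaw bone.
JAW_LENGTH_CM = 8.0  # assumed pivot-to-chin distance on this toy rig

def jaw_angle_from_marker(chin_drop_cm):
    """Convert a chin marker's vertical drop (cm) into a jaw-open angle (degrees).

    The drop is treated as the arc-end height of a rigid jaw bone:
    drop = JAW_LENGTH_CM * sin(angle), so angle = asin(drop / length).
    """
    ratio = min(chin_drop_cm / JAW_LENGTH_CM, 1.0)  # clamp noisy markers
    return math.degrees(math.asin(ratio))

# A 2 cm drop opens the jaw roughly 14.5 degrees on this toy rig.
angle = jaw_angle_from_marker(2.0)
```

A film pipeline, by contrast, would feed dozens of such markers into a FACS-style blendshape solve rather than collapsing them into a handful of bone rotations.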


"Major advancements in scaled GPU rendering are making final-frame post-visualization possible, but only with dozens of top-shelf GPUs pushing the render."

Rashid Sayed: Do you think the new consoles PS4 and Xbox One are capable enough to produce movie like quality facial capture?

Seth Steiling: While the gap in fidelity is narrowing, film remains a much more flexible platform for photo-real CG than games. A single, film-quality frame can take from minutes to hours or even days to render, so the dream of Avatar in real-time will require hardware that is a couple of orders of magnitude beyond what’s currently top of the line.

Major advancements in scaled GPU rendering are making final-frame post-visualization possible, but only with dozens of top-shelf GPUs pushing the render. Furthermore, film-quality facial animation is the byproduct of not only super powerful processing and advanced rigs and solvers, but huge teams of animators perfecting motions in post on a scale that most game developers just cannot afford to invest in.

Rashid Sayed: How much effort does a developer need to put in to integrate OptiTrack into their games? Is it a time- and resource-consuming process?

Seth Steiling: Within the motion capture landscape, OptiTrack systems are very simple to use. Processes that have traditionally taken hours or even days—like system setup and performer calibration—can be accomplished in minutes or even seconds with OptiTrack. This means that a single operator can configure an entire mocap stage without a background in engineering.

Having said that, integrating motion capture into an existing animation pipeline, regardless of the technology, requires a skill set distinct from standard keyframing. Retargeting—solving to map source mocap onto often differently proportioned character rigs in a way that respects the original performance—is a crucial component of mocap processing that is often new to keyframe animators.
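A minimal sketch of the retargeting idea (the data layout and bone lengths below are invented for illustration and are not an OptiTrack API): joint rotations transfer unchanged, while translations are rescaled by the ratio of target to source bone lengths so the character's proportions are respected:

```python
# Hypothetical retargeting sketch: copy rotations from performer to rig,
# but scale translations so a short performer's walk doesn't make a
# tall character shuffle.

def retarget_translation(src_translation, src_bone_length, dst_bone_length):
    """Scale a source-space translation to target-rig proportions."""
    scale = dst_bone_length / src_bone_length
    return [c * scale for c in src_translation]

def retarget_frame(frame, src_lengths, dst_lengths):
    """Retarget one frame of animation.

    frame: {joint_name: {"rotation": quaternion, "translation": [x, y, z]}}
    Rotations transfer unchanged (orientation is proportion-independent);
    translations are rescaled per joint.
    """
    out = {}
    for joint, channels in frame.items():
        out[joint] = {
            "rotation": channels["rotation"],
            "translation": retarget_translation(
                channels["translation"],
                src_lengths[joint],
                dst_lengths[joint],
            ),
        }
    return out

# A 0.95 m hip-height performer driving a 1.23 m hip-height character:
# the hips must sit higher and travel further per step.
frame = {"hips": {"rotation": (1.0, 0.0, 0.0, 0.0), "translation": [0.0, 0.95, 0.3]}}
result = retarget_frame(frame, {"hips": 0.95}, {"hips": 1.23})
```

Production retargeters also handle foot planting, joint limits, and differing skeleton hierarchies, which is exactly why the skill set is new to keyframe animators.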



  • Guest

DUH! Only Sony fanboys believe that because they believe in the lies and overhype. The PS4 isn’t even powerful enough for a proper VR experience and therefore Morpheus will fail.

  • Mark

    The question should be, “By the end of this cycle, will PS4 and Xbox push out Avatar like graphics”? I’m gonna say, if not, then real close. I mean look at Star Wars from E3, if that is to be believed real time. I’m optimistic.

  • Rooster41

    Games like Ryse Son of Rome and The Order 1886 already have Pixar quality graphics, so yeah the Xbox One and PS4 games are pretty close to having Avatar’s graphics

  • Dirkster_Dude

If it still takes hours or days for much of the motion capture technology to be rendered, it is unlikely any console not specifically designed for it can be made to do it in the near future, because console hardware specifications don’t change over time. Think about $6 grand PCs that are designed for clustered computing to help the rendering process—in other words, multiple computers working together and still taking hours or days to make a useful frame. This guy isn’t talking about just a single PC that an enthusiast or rich person could buy. They are talking about a setup like Bill Gates was once rumored to have to control his entire mansion. You’re talking 20-30 high-end, out-of-this-world PCs. There could be some breakthrough that the next generation of consoles can take advantage of, but right now this looks like it will take another 2 or 3 console generations at least.

    • Guest

      And now you know why cloud computing has such a bright future ahead.

    • Kamille

      lol what?

  • Kamille

CGI films are rendered with insane detail on hundreds of high-end PCs at the same time, and they often take petabytes to store and terabytes of RAM. Not even in the next 50 years will any PC or game console be able to render all that detail in real-time, let alone Avatar’s kind of detail where they are simulating every aspect of real life to the maximum detail possible.

  • Bliss Seeker

    Didn’t read the article because of the title…
    It would require hundreds of top of the range graphic rendering workstations to even render a scene in lesser time; and just one could cost thousands of dollars! It would take hours or even days for one top of the range PC to even render one frame of an object in a scene – let alone a whole scene!

    • Bourbon Tango

      Title is pandering and sensationalized—but the article itself is less about the obvious (derp, can’t solve and render Avatar in real-time, derp) and more about the differences in game vs. film mocap pipelines.

  • Kumomeme

Of course it is… not only Avatar… current gaming technology is still a far cry from what CGI is capable of, on either console or PC… not to mention the rendering process for CGI takes hours to days, maybe even weeks, using lots of high-end PCs with awesome cooling… the tech used in Avatar is also state of the art, from facial to motion capture… I’m not a CGI guy… but this stuff can be known from various sources.
    Only idiots here suddenly start fanboy debates… PC or console fanboys… stop arguing over stupid debates.

  • Dangerousjo 1985

As long as the new consoles look better than the last gen, which they easily do, I’m very happy.
