How An Ex-Naughty Dog Dev Is Taking Facial Capture To The Next Level By Crossing The Uncanny Valley
John Hable on faking triggers to fool our FFA and getting past the uncanny valley.
The games industry is an exciting place to work. The amount of creativity on display can be astonishing at times, and there is a continuous cycle of learning and a passionate drive to take creative technology to the next level. One such game developer pushing to make things easier in the facial capture department is John Hable. Hable has a fascinating portfolio. He has been a video game programmer since 2005, worked with EA on the Tiger Woods 2007 Ucap project and the cancelled Spielberg project (LMNO), and ended up working at one of the most renowned game studios in the world: Naughty Dog.
He was the tech lead behind the lighting technology used in Uncharted 2: Among Thieves and also contributed to Uncharted 3: Drake’s Deception and The Last of Us. After moving on from Naughty Dog, he founded Filmic Worlds, a new company that focuses on scanning actors for video games. During his new venture, he learned that converting raw scan data into blend shapes requires long hours of manual work. After months of research, Hable has finally found a way to build a customized pipeline that allows an artist to develop a face rig from facial scans. This essentially means that a development team does not need a large team of artists to process the captured data, resulting in savings of both cost and time.
So how does this work?
“To give your readers the short version, my theory is that the Uncanny Valley is triggered by the Fusiform Face Area (FFA) rejecting the face. Basically, the area of your brain that processes faces is completely different from the area of your brain that processes everything else,” Hable said to GamingBolt.
But what exactly is the Uncanny Valley? It describes how viewers respond to increasingly realistic characters: as characters begin to look almost, but not quite, human, they become creepy. “Something about them looks wrong which gets in the way of the viewer developing an emotional attachment to the CG character,” Hable tells us.
“This effect has been verified by studying people with brain damage. Some people have had brain damage to their FFA and they cannot recognize faces. This condition is called Prosopagnosia. They can recognize cars, handwriting, clothes, etc. but they cannot distinguish their family members from a random person off the street.”

“Other people have the reverse problem, called Visual Agnosia. They have had brain damage to part of their brain but their FFA is still intact. These people can only recognize faces. They can see colors and basic shapes but they cannot recognize objects, read handwriting, etc.”
“So my theory is that the Uncanny Valley is due to the FFA rejecting the image. If we can figure out what those triggers are and put them in our faces then we should be able to cross the Uncanny Valley even if they are not photoreal.”
He further explains how this process will actually work. “We can solve this problem by capturing an animated color map of the human face. The most well known game that used this technique is LA Noire, although we did a similar thing for Tiger Woods 2007. If you capture those color changes and then play them back in real time then for some reason the face crosses the uncanny valley. The face is not photoreal. You would never mistake it for a photograph of a person. But that animated color information is enough to make our FFA accept the face and thus cross the uncanny valley.”
“Unfortunately, capturing the color changes during a mocap shoot is prohibitive for many, many reasons. Even though the quality is great, the technology has never really taken off because it does not fit with the production logistics of most games.”
“The solution that I’ve settled on is to capture some set of poses with color changes, and then blend between those color maps. You would capture a color map for your model in a neutral pose, a color map for jaw open, a color map for eyebrows up, etc. Then you can animate your face using standard motion capture solutions and blend between those color maps automatically. For example, as the character’s rig opens his or her mouth, we can blend in the color changes from skin stretching in addition to moving the geometry.”
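The idea Hable describes above can be sketched in a few lines of code. This is a minimal illustration, not his actual pipeline: it assumes each captured pose (neutral, jaw open, eyebrows up, and so on) has a color map already stabilized into a shared UV space, and that the `weights` dictionary holds the same per-frame blendshape weights the mocap solve produces for the geometry. The function name and arguments are hypothetical.

```python
import numpy as np

def blend_color_maps(neutral_map, pose_maps, weights):
    """Blend per-pose color maps over the neutral color map.

    neutral_map : (H, W, 3) float array, color map of the neutral pose
    pose_maps   : dict mapping pose name -> (H, W, 3) float array
    weights     : dict mapping pose name -> blendshape weight in [0, 1],
                  driven each frame by the motion-capture rig
    """
    result = neutral_map.copy()
    for name, pose_map in pose_maps.items():
        w = weights.get(name, 0.0)
        # Add the color delta between this pose and neutral, scaled by the
        # same weight that drives the corresponding geometric blendshape.
        result += w * (pose_map - neutral_map)
    # Keep the blended texture in a valid color range.
    return np.clip(result, 0.0, 1.0)
```

So as the rig opens the character's mouth, the `jaw_open` weight rises from 0 toward 1, and the skin-stretch color changes captured for that pose fade in alongside the geometry, with no per-frame color capture needed.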
Hable acknowledges that he is not the first person to work with such a concept. In fact, he got his inspiration from an ICT paper on Polynomial Displacement Maps.
“Of course, I’m not the first person to experiment with this concept. The inspiration for this approach was the ICT paper on Polynomial Displacement Maps. ICT later created the data for Digital Ira, which was then used in the Activision and NVIDIA demos. Those were great research projects but they do not necessarily scale to the giant volume of data that video games need. That’s why I founded Filmic Worlds, LLC.”
“In the Filmic Worlds capture pipeline the talent gets solved into a set of blendshapes including both the geometry and the stabilized color map. In other words, it’s a production-ready solution to blend the color maps along with the geometry. If we can do that correctly, we should be able to fake those triggers to fool our FFA and get past the uncanny valley.”
Stay tuned for more coverage from our exclusive interview with John Hable in the coming days.