FORTUNE — Nothing says “the future” like a disembodied head. As developers and designers begin churning out the next generation of games and entertainment, the pace of technology demos showing what types of computer-generated graphics will soon be possible has picked up. And that means one thing: more creepy-yet-astonishing 3D-generated heads.
Activision (ATVI) is showing off new technology at the annual Game Developers Conference, taking place in San Francisco this week. The gaming giant's research and development division unveiled the rendering techniques and code behind the lifelike animation yesterday. The animated character shown here is being rendered in real time on current video card hardware, suggesting innovations like these could show up in commercial products sooner rather than later.
“We will show how each detail is the secret for achieving reality,” wrote researcher Jorge Jimenez on his blog before the presentation. “For us, the challenge goes beyond entertaining; it’s more about creating a medium for better expressing emotions and reaching the feelings of the players. We believe this technology will bring current generation characters into next generation life.”
Activision isn’t alone. Chipmaker NVIDIA (NVDA) recently touted real-time face rendering at its GPU Technology Conference in California. The program, dubbed FaceWorks, employs face- and motion-capture technology developed at the University of Southern California’s Institute for Creative Technologies. The center’s Light Stage process records data to within a tenth of a millimeter using photography that captures the geometry of an actor’s face. It can also recreate reflections and the transmission of light through skin, the key to rendering subtle emotional cues like blushing.
At Sony’s (SNE) PlayStation 4 launch event earlier this year, actor Max von Sydow made a brief appearance on stage as an interactive 3D model. David Cage, founder of innovative studio Quantic Dream, demoed what kinds of graphics would be possible on the console maker’s next hardware release. (Why so many old men? It’s not clear, but it may have something to do with the complexity of rendering wrinkles that move and bend.)
All of this is likely to kickstart another round of debate about the so-called “uncanny valley.” That concept suggests that when human replicas, whether robots or computer renderings, begin to look almost but not perfectly human, they can make real-life observers feel queasy or revolted. (The “valley” in question is the dip in a graph of the comfort level of humans presented with a rendered human likeness.) So far, that hasn’t stopped engineers from pushing the boundaries of what’s technologically possible, perhaps in hopes of leapfrogging the problem entirely.