This animated character is being rendered in real time on current video card hardware, using standard bone animation. The rendering techniques, as well as the animation pipeline, are being presented at GDC 2013 in “Next Generation Character Rendering” on March 27.
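The post doesn’t dig into the math, but “standard bone animation” here almost certainly means linear blend skinning: each vertex stores a few bone indices and weights, and its final position is the weighted blend of its rest position transformed by each bone’s skinning matrix. A minimal CPU-side sketch of the idea (the types and names are ours, not Activision’s; real engines do this on the GPU):

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 {
    float m[16]; // column-major 4x4 matrix
    Vec3 transformPoint(const Vec3& p) const {
        return {
            m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12],
            m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13],
            m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14],
        };
    }
};

// Each vertex is influenced by up to 4 bones, a common real-time budget.
struct SkinnedVertex {
    Vec3 restPosition;
    std::array<int, 4>   boneIndex;
    std::array<float, 4> boneWeight; // weights sum to 1
};

// Linear blend skinning: the deformed position is the weighted sum of the
// rest position transformed by each influencing bone's skinning matrix
// (bone world transform * inverse bind pose).
Vec3 skinVertex(const SkinnedVertex& v, const std::vector<Mat4>& skinningMatrices) {
    Vec3 out{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i) {
        Vec3 p = skinningMatrices[v.boneIndex[i]].transformPoint(v.restPosition);
        out.x += v.boneWeight[i] * p.x;
        out.y += v.boneWeight[i] * p.y;
        out.z += v.boneWeight[i] * p.z;
    }
    return out;
}
```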

The original high-resolution data was acquired through Light Stage facial scanning and performance capture by the USC Institute for Creative Technologies, then converted to a 70-bone rig while preserving the high-frequency detail in diffuse, normal, and displacement composite maps.
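A 70-bone rig can only carry coarse deformation, so the pore-level detail from the scan survives in the texture maps instead. A displacement map, for one, nudges each surface point along its normal at render time. A simplified sketch of that operation (CPU-side loop and nearest-neighbor sampling purely for brevity; all names are hypothetical):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Vertex {
    Vec3 position;
    Vec3 normal;   // unit length
    float u, v;    // texture coordinates in [0, 1]
};

// Grayscale displacement map: 0.5 means no offset; values above/below
// push the surface out/in along the vertex normal.
struct DisplacementMap {
    int width, height;
    std::vector<float> texels; // row-major, values in [0, 1]
    float sample(float u, float v) const {
        int x = static_cast<int>(u * (width - 1));
        int y = static_cast<int>(v * (height - 1));
        return texels[static_cast<std::size_t>(y) * width + x];
    }
};

// Reapply the baked high-frequency detail that the coarse rig cannot carry.
void applyDisplacement(std::vector<Vertex>& mesh,
                       const DisplacementMap& map,
                       float scale) {
    for (Vertex& vtx : mesh) {
        float d = (map.sample(vtx.u, vtx.v) - 0.5f) * scale;
        vtx.position.x += vtx.normal.x * d;
        vtx.position.y += vtx.normal.y * d;
        vtx.position.z += vtx.normal.z * d;
    }
}
```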

It is rendered in a DirectX 11 environment, using advanced techniques to faithfully represent the character’s skin and eyes.
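The description doesn’t name those techniques, but Jimenez’s published skin work centers on screen-space subsurface scattering: blurring the diffuse lighting with a different kernel width per color channel, since red light diffuses much farther through skin than green or blue. A heavily simplified one-dimensional sketch of a single blur pass (real implementations run two separable passes over the rendered frame and scale the kernel by depth; the widths below are illustrative, not measured skin profiles):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Color { float r, g, b; };

// One horizontal pass of a separable per-channel blur over diffuse lighting.
// Red uses the widest kernel because red light scatters farthest through
// skin, which is what gives rendered skin its soft, translucent look.
std::vector<Color> scatterPassHorizontal(const std::vector<Color>& diffuse,
                                         int radius) {
    const float sigma[3] = {4.0f, 2.0f, 1.2f}; // per-channel falloff (r, g, b)
    const int n = static_cast<int>(diffuse.size());
    std::vector<Color> out(diffuse.size());
    for (int i = 0; i < n; ++i) {
        float sum[3] = {0, 0, 0}, weight[3] = {0, 0, 0};
        for (int o = -radius; o <= radius; ++o) {
            int j = std::clamp(i + o, 0, n - 1); // clamp at the edges
            const float src[3] = {diffuse[j].r, diffuse[j].g, diffuse[j].b};
            for (int c = 0; c < 3; ++c) {
                float w = std::exp(-(o * o) / (2.0f * sigma[c] * sigma[c]));
                sum[c]    += w * src[c];
                weight[c] += w;
            }
        }
        out[i] = {sum[0] / weight[0], sum[1] / weight[1], sum[2] / weight[2]};
    }
    return out;
}
```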

Jorge Jimenez from Activision Research and Development has revealed some impressive real-time facial rendering results.

Although we appear to be getting closer each day to creating lifelike virtual facsimiles of ourselves, we invariably come up short. Whether it’s the creepy sheen of the uncanny valley, the vacant, slightly dead eyes, or the stiff mouth articulation that makes these models look as if they’d been injected with BOTOX®, there’s always something a tad off that keeps these digital avatars from passing as real. Tricking the human eye is nearly impossible; the rendering has to be utterly immaculate.

Activision R&D hasn’t mastered it either, but they seem to be getting close.

@teemunny

via Laughing Squid, Mashable (hat tip to Darren Gruber)

[Image gallery: still renders of the “jonas” and “lauren” character demos]
