Invention Title:

PUPPETEERING A REMOTE AVATAR BY FACIAL EXPRESSIONS

Publication number:

US20250342639

Publication date:

Section:

Physics

Class:

G06T13/40

Inventors:

Assignee:

Applicant:

Smart overview of the Invention

The patent application describes a method for controlling a remote avatar with facial expressions. The process begins by capturing a first image of a user's face and creating a corresponding facial framework, a mesh of facial information. The captured image is then projected onto the framework to determine a facial texture. As the user's expression changes over time, additional facial frameworks are received and the facial texture is updated accordingly, so that the displayed three-dimensional avatar mirrors the user's expressions in real time.
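The projection step can be illustrated with a minimal sketch. It assumes the facial framework is available as mesh vertices with known pixel positions in the captured image and fixed texture (UV) coordinates; the function name and array layout below are illustrative, not taken from the patent.

```python
import numpy as np

def build_face_texture(image, mesh_uv, mesh_xy, tex_size=256):
    """Project a captured face image onto a facial framework to obtain a texture.

    image    : H x W x 3 RGB capture of the user's face
    mesh_uv  : N x 2 texture-space coordinates (0..1) of the framework vertices
    mesh_xy  : N x 2 pixel positions of the same vertices in the captured image
    Returns a tex_size x tex_size x 3 facial texture sampled at the vertices.
    """
    texture = np.zeros((tex_size, tex_size, 3), dtype=image.dtype)
    h, w = image.shape[:2]
    for (u, v), (x, y) in zip(mesh_uv, mesh_xy):
        # Sample the captured image where this framework vertex projects to,
        # and store that colour at the vertex's texture coordinate.
        xi = int(np.clip(x, 0, w - 1))
        yi = int(np.clip(y, 0, h - 1))
        ti = int(np.clip(v * (tex_size - 1), 0, tex_size - 1))
        tj = int(np.clip(u * (tex_size - 1), 0, tex_size - 1))
        texture[ti, tj] = image[yi, xi]
    return texture
```

A production renderer would rasterize whole mesh triangles rather than sampling individual vertices; the sketch only shows how a captured image maps onto framework coordinates to yield a texture.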

Technical Details

The method involves data processing hardware that receives sequential images of the user's face, each capturing a different expression, such as smiling or raising the eyebrows. For each captured image, a corresponding facial expression texture is determined. These textures are blended with the updated facial texture from subsequent frameworks to produce a coherent representation of the user's current expression. The blending relies on calculating texture vectors and assigning rendering weights based on the differences between those vectors, as sketched below.
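Under the same assumptions as the previous sketch, the texture-vector and weighting step might look like the following. The patent does not disclose the exact weighting function, so inverse distance between texture vectors is used here purely for illustration.

```python
import numpy as np

def texture_vector(expression_texture, neutral_texture):
    """Texture vector: the per-pixel difference of an expression texture
    from the neutral-expression texture, flattened into one vector."""
    return (expression_texture.astype(np.float32)
            - neutral_texture.astype(np.float32)).ravel()

def rendering_weights(current_vector, expression_vectors):
    """Assign one rendering weight per stored expression from the difference
    between the current frame's texture vector and each expression's vector.
    Closer expressions receive larger weights; the weights are normalized so
    they sum to one. (Inverse distance is an illustrative choice only.)"""
    distances = np.array([np.linalg.norm(current_vector - v)
                          for v in expression_vectors])
    inverse = 1.0 / (distances + 1e-6)   # avoid division by zero
    return inverse / inverse.sum()       # normalize: weights sum to one
```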

Rendering Techniques

To achieve realistic rendering, the method calculates texture vectors that represent differences from a neutral expression. Rendering weights are assigned to these vectors and normalized so that they sum to one. The avatar's final appearance is determined by blending the weighted textures, which allows dynamic, responsive avatar animation that accurately reflects subtle changes in the user's facial expression.
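Because the weights sum to one, the final blend is a convex combination of the expression textures. A minimal sketch, assuming the textures are NumPy arrays of equal shape:

```python
import numpy as np

def blend_textures(expression_textures, weights):
    """Blend the weighted expression textures into the avatar's final texture.
    Because the rendering weights sum to one, the result is a convex
    combination and overall brightness is preserved."""
    weights = np.asarray(weights, dtype=np.float32)
    assert abs(weights.sum() - 1.0) < 1e-6, "rendering weights must sum to one"
    blended = np.zeros_like(expression_textures[0], dtype=np.float32)
    for texture, weight in zip(expression_textures, weights):
        blended += weight * texture.astype(np.float32)
    return blended.astype(expression_textures[0].dtype)
```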

Advanced Features

The system can handle situations in which parts of the user's face are obstructed by filling in the missing details with textures from prior frames. It can also generate detailed renditions of specific facial features, such as the eyes and mouth, by detecting their edges, approximating their positions, and rendering them with fill to enhance realism. The system accepts RGB images captured by mobile devices as input and can display the avatar on augmented reality devices.
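The obstruction handling can be sketched as a simple mask-based fill. The sketch assumes an occlusion mask is available from the capture pipeline; how that mask is obtained is not specified here.

```python
import numpy as np

def fill_occluded_regions(current_texture, prior_texture, occlusion_mask):
    """Fill texture regions hidden in the current capture with the most recent
    unoccluded texture from a prior frame.

    occlusion_mask : H x W boolean array, True where the face was obstructed.
    """
    filled = current_texture.copy()
    filled[occlusion_mask] = prior_texture[occlusion_mask]
    return filled
```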

System Implementation

The described system comprises data processing hardware and memory hardware storing instructions that execute the method's operations: receiving initial and subsequent facial frameworks, projecting captured images onto those frameworks, updating the facial texture, and rendering the three-dimensional avatar. The system thereby captures and displays a virtual representation of the user's face, enhancing remote communication by conveying emotional nuance through the avatar.
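Tying the operations together, a hypothetical session loop might look like the following. The camera, framework_stream, and renderer objects are assumed interfaces rather than part of the disclosure, and build_face_texture refers to the sketch given in the overview above.

```python
def run_avatar_session(camera, framework_stream, renderer, mesh_uv):
    """Illustrative flow of the described operations: build a texture from the
    first capture and framework, then keep updating and rendering the avatar
    as subsequent frameworks arrive."""
    first_image = camera.capture()
    first_framework = framework_stream.next()
    texture = build_face_texture(first_image, mesh_uv, first_framework.vertex_xy)

    while framework_stream.has_next():
        image = camera.capture()                  # subsequent RGB capture
        framework = framework_stream.next()       # updated facial framework
        texture = build_face_texture(image, mesh_uv, framework.vertex_xy)
        renderer.draw_avatar(framework, texture)  # display the 3-D avatar
```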