Invention Title:

METHOD AND APPARATUS FOR TRANSFERRING FACIAL EXPRESSION OF DIGITAL HUMAN, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Publication number:

US20250124679

Publication date:

Section:

Physics

Class:

G06T19/20

Inventors:

Assignee:

Applicant:

Smart overview of the Invention

The disclosure introduces a method and apparatus for transferring facial expressions of digital humans, applicable in fields such as augmented reality, virtual reality, and computer vision, and particularly relevant to metaverse and virtual digital human scenarios. The process screens a preset reference model library for the identification of a target reference model that matches an object model, acquires the expression library of the target reference model based on that identification, and transfers expressions from that library to the object model.
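
One way to picture this is a preset library keyed by model identification, with a simple feature-distance match for screening. The sketch below assumes that structure; every name in it (ReferenceModel, face_features, find_matching_reference) is a hypothetical stand-in for illustration, not terminology taken from the patent.

    from dataclasses import dataclass, field

    # Hypothetical sketch of a preset reference model library: each entry
    # carries an identification, a feature descriptor used for matching,
    # and its own expression library.
    @dataclass
    class ReferenceModel:
        identification: str
        face_features: list[float]          # descriptor used to match object models
        expression_library: dict[str, list[float]] = field(default_factory=dict)

    def find_matching_reference(object_features: list[float],
                                library: list[ReferenceModel]) -> ReferenceModel:
        # Screen the preset library for the reference model whose feature
        # descriptor is closest (smallest squared L2 distance) to the object model's.
        def squared_distance(model: ReferenceModel) -> float:
            return sum((a - b) ** 2 for a, b in zip(object_features, model.face_features))
        return min(library, key=squared_distance)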

Background

Traditionally, binding facial expressions to a digital human model has required professional designers using techniques such as blendshape deformation and skeletal skinning. This manual process is time-consuming, often taking weeks, especially for high-quality, ultra-realistic digital humans. The proposed method streamlines this work by automating the transfer of expressions from reference models to object models.
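
For context on the blendshape technique the background mentions: a deformed face is typically computed as the neutral mesh plus a weighted sum of shape offsets. The snippet below is a generic illustration of that idea, not the patent's method; the array shapes and names are assumptions.

    import numpy as np

    def apply_blendshapes(neutral: np.ndarray,    # (V, 3) neutral vertex positions
                          offsets: np.ndarray,    # (K, V, 3) per-blendshape vertex offsets
                          weights: np.ndarray) -> np.ndarray:  # (K,) expression weights
        # Deformed vertices = neutral + sum_k weights[k] * offsets[k]
        return neutral + np.tensordot(weights, offsets, axes=1)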

Methodology

  • Screening: The identification of a target reference model that matches the object model is screened out from a preset reference model library containing multiple reference models.
  • Acquiring: The expression library of the target reference model is acquired based on that identification.
  • Transferring: The last frame of an expression in the target reference model's expression library is transferred to the object model so that the object model exhibits the desired expression (see the sketch after this list).
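
Taken together, the three steps could look like the sketch below, which reuses the hypothetical ReferenceModel and find_matching_reference helpers from the overview. Representing the transferred expression as a frame of vertex offsets copied onto the object model is an assumption made for illustration, not the patent's exact transfer algorithm.

    def transfer_expression(object_model: dict,
                            reference_library: list[ReferenceModel],
                            expression_name: str) -> dict:
        # Screening: find the reference model matching the object model's features.
        target = find_matching_reference(object_model["face_features"], reference_library)

        # Acquiring: fetch the expression library of the matched (target) model.
        expressions = target.expression_library

        # Transferring: apply the chosen expression frame to the object model.
        result = dict(object_model)            # keep the caller's model untouched
        result["expression_offsets"] = expressions[expression_name]
        return result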

Device and Storage

The system includes an electronic device with at least one processor and a memory that stores executable instructions for performing the expression transfer method. Additionally, a non-transitory computer-readable storage medium is provided to store these instructions, enabling efficient execution of expression transfers across digital human models.

Applications and Benefits

This method significantly reduces the time and labor associated with manually binding expressions to digital human models. It enhances efficiency by automating expression migration, making it highly beneficial for applications in virtual environments where quick adaptation and realism are crucial. The approach allows for diverse model configurations, accommodating various styles and demographics within the digital human spectrum.