US20250124679
2025-04-17
Physics
G06T19/20
The disclosure introduces a method and apparatus for transferring facial expressions between digital human models, applicable in fields such as augmented reality, virtual reality, and computer vision. It is particularly relevant to metaverse and virtual digital human scenarios. The process involves identifying, from a pre-existing library, a target reference model that matches an object model, acquiring that reference model's expression library, and transferring the expressions to the object model.
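The pipeline described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the similarity metric (mean vertex distance between neutral meshes), the assumption of shared mesh topology, and the per-vertex offset transfer are all hypothetical choices for demonstration, since the publication does not specify them here.

```python
import numpy as np

def select_reference(object_mesh, reference_library):
    """Pick the reference model whose neutral mesh is closest to the
    object model. The matching criterion (mean vertex distance) is an
    assumption; the disclosure only states that a matching reference
    is identified from a pre-existing library."""
    best_name, best_dist = None, float("inf")
    for name, ref in reference_library.items():
        dist = np.linalg.norm(ref["neutral"] - object_mesh, axis=1).mean()
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

def transfer_expressions(object_mesh, reference_library):
    """Acquire the matched reference's expression library and apply its
    per-expression vertex offsets to the object model (assumes both
    meshes share vertex topology)."""
    ref = reference_library[select_reference(object_mesh, reference_library)]
    return {
        expr: object_mesh + (pose - ref["neutral"])  # offset transfer
        for expr, pose in ref["expressions"].items()
    }
```

A usage example: given a small library of two reference models, the object model is matched to the nearer one and inherits its expression poses as vertex offsets applied on top of the object mesh.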
Traditionally, binding facial expressions to digital human models requires professional designers and involves techniques such as blendshape deformation and skeleton skinning. This manual process is time-consuming, often taking weeks, especially for high-quality, ultra-realistic digital humans. The proposed method aims to streamline this by automating the transfer of expressions from reference models to object models.
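For context on the blendshape technique mentioned above: a blendshape rig deforms a neutral mesh by adding a weighted sum of per-shape vertex offsets (deltas). The sketch below shows this standard formulation; the mesh shapes and weight names are illustrative, not taken from the disclosure.

```python
import numpy as np

def blendshape(neutral, deltas, weights):
    """Standard blendshape deformation: deformed = neutral + sum_i w_i * delta_i,
    where each delta is a per-vertex offset from the neutral mesh and
    each weight controls how strongly that shape is expressed."""
    out = neutral.copy()
    for name, weight in weights.items():
        out += weight * deltas[name]
    return out
```

Authoring these deltas and their skeleton-skinning counterparts by hand is the labor the disclosed method seeks to eliminate.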
The system includes an electronic device with at least one processor and a memory that stores executable instructions for performing the expression transfer method. Additionally, a non-transitory computer-readable storage medium is provided to store these instructions, enabling efficient execution of expression transfers across digital human models.
This method significantly reduces the time and labor of manually binding expressions to digital human models. By automating expression transfer, it improves efficiency, which is especially valuable in virtual environments where quick adaptation and realism are crucial. The approach accommodates diverse model configurations, spanning various styles and demographics across the digital human spectrum.