US20250124680
2025-04-17
Physics
G06T19/20
The disclosed method pertains to generating digital humans using technologies such as augmented reality, virtual reality, and deep learning, and is applicable in scenarios such as the metaverse and the creation of virtual digital humans. The process involves obtaining a target object model from a picture of the target object (the person the digital human will depict), extracting key head features from a pre-configured feature library, and integrating those features into the model to produce a digital human figure.
This innovation falls within computer and artificial intelligence technologies, with a focus on augmented and virtual reality, computer vision, and deep learning. The goal is to streamline the creation of realistic digital humans for various applications, including virtual environments like the metaverse.
Traditional methods for creating digital humans are labor-intensive, costly, and time-consuming, often requiring months of work across multiple stages such as original painting and animation. These methods struggle with accurately replicating real-world features like eye or lip shapes, leading to final results that depend heavily on the skill levels of individual creators.
The proposed method automates the generation of digital humans by leveraging a combination of pre-existing models and feature libraries. Key steps, sketched in the example after this list, include:
1. Obtaining a target object model based on a picture of the target object.
2. Extracting key head features, such as eye and lip shapes, from a pre-configured head-feature library.
3. Integrating the extracted features into the target object model to produce the digital human figure.
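The disclosure does not specify concrete APIs, model architectures, or data formats, so the following is only a minimal structural sketch of these three steps in Python. Every name here (HeadModel, build_target_model, lookup_key_features, integrate_features, generate_digital_human) is a hypothetical placeholder, and the stub bodies return toy data rather than real reconstructions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

import numpy as np


@dataclass
class HeadModel:
    """Toy stand-in for a 3D head model of the target object."""
    vertices: np.ndarray                                            # (N, 3) mesh vertices
    features: Dict[str, np.ndarray] = field(default_factory=dict)   # named key head features


def build_target_model(image: np.ndarray) -> HeadModel:
    """Step 1: obtain a target object model from the 2D picture.

    Placeholder for whatever single-image reconstruction the
    implementation uses; here it just returns a dummy mesh.
    """
    return HeadModel(vertices=np.zeros((1000, 3)))


def lookup_key_features(library: Dict[str, np.ndarray],
                        keys: List[str]) -> Dict[str, np.ndarray]:
    """Step 2: extract key head features from a pre-configured feature library."""
    return {k: library[k] for k in keys if k in library}


def integrate_features(model: HeadModel,
                       features: Dict[str, np.ndarray]) -> HeadModel:
    """Step 3: integrate the retrieved key features into the model."""
    model.features.update(features)
    return model


def generate_digital_human(image: np.ndarray,
                           library: Dict[str, np.ndarray]) -> HeadModel:
    model = build_target_model(image)
    features = lookup_key_features(library, keys=["eyes", "lips", "nose"])
    return integrate_features(model, features)


if __name__ == "__main__":
    picture = np.zeros((512, 512, 3), dtype=np.uint8)      # dummy 2D input image
    feature_library = {"eyes": np.zeros((50, 3)), "lips": np.zeros((40, 3))}
    figure = generate_digital_human(picture, feature_library)
    print(sorted(figure.features))                          # -> ['eyes', 'lips']
```

In an actual implementation, the reconstruction step would presumably be a learned model and the library would hold curated head-feature assets such as eye and lip shapes, but those details are left open by the disclosure.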
This approach reduces costs, improves accuracy, and increases efficiency compared to manual methods.
The system includes an electronic device with at least one processor and a memory storing instructions for executing the method. It automatically generates 3D digital human figures from 2D images by focusing on head and facial features, can run on devices such as smartphones and tablets, and can accommodate different feature-extraction algorithms. This automation enhances realism and lowers the barriers to digital human creation, supporting the growth of related digital industries.
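Because the disclosure states that different feature-extraction algorithms can be plugged in without prescribing one, the sketch below uses OpenCV's Haar-cascade detectors purely as one off-the-shelf example for locating head and eye regions in a 2D picture. The function name detect_head_regions and the returned dictionary layout are illustrative, not taken from the patent.

```python
import cv2


def detect_head_regions(image_path: str) -> dict:
    """Locate face and eye regions in a 2D picture.

    Returns bounding boxes (x, y, w, h) that a downstream
    reconstruction step could use to align key head features.
    """
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Pre-trained cascade files shipped with opencv-python.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    regions = {"faces": [tuple(f) for f in faces], "eyes": []}

    # Search for eyes only inside each detected face to reduce false positives.
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        regions["eyes"].extend(
            (x + ex, y + ey, ew, eh) for (ex, ey, ew, eh) in eyes)

    return regions
```

A deep-learning landmark detector could be substituted for the cascades without changing the surrounding pipeline, which is consistent with the adaptability to different algorithms described above.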