US20260023579
2026-01-22
Physics
G06F9/451
A novel approach is proposed to create real-time adaptive wallpapers from multimodal sensor data. The method leverages sensors to gather multimodal input, which is transformed into encoded features. These features are processed by a machine learning (ML) model that integrates user profiles, web browsing history, and behavioral patterns. The model's outputs form the basis for creating dynamic wallpapers.
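A minimal sketch of how this sensor-to-ML-output stage might be structured is shown below. All names (SensorReading, UserContext, encode_features, ml_outputs) are illustrative placeholders assumed for this sketch and do not come from the application itself.

```python
# Hypothetical sketch of the sensor-to-ML-output stage; all names are
# illustrative and not drawn from the patent application.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SensorReading:
    """One multimodal sample, e.g. ambient light level or audio loudness."""
    modality: str
    value: float


@dataclass
class UserContext:
    """User signals the ML model integrates with the encoded features."""
    profile: Dict[str, str] = field(default_factory=dict)
    browsing_topics: List[str] = field(default_factory=list)
    behavior_patterns: List[str] = field(default_factory=list)


def encode_features(readings: List[SensorReading]) -> Dict[str, float]:
    """Transform raw multimodal readings into a flat feature vector."""
    features: Dict[str, float] = {}
    for r in readings:
        key = f"sensor_{r.modality}"
        # Average repeated readings of the same modality.
        features[key] = (features[key] + r.value) / 2 if key in features else r.value
    return features


def ml_outputs(features: Dict[str, float], ctx: UserContext) -> Dict[str, object]:
    """Stand-in for the ML model: fuse encoded features with user context."""
    return {
        "scene_mood": "calm" if features.get("sensor_audio", 0.0) < 0.3 else "energetic",
        "interests": ctx.browsing_topics[:3],
        "features": features,
    }


if __name__ == "__main__":
    readings = [SensorReading("light", 0.8), SensorReading("audio", 0.2)]
    ctx = UserContext(browsing_topics=["astronomy", "hiking"])
    print(ml_outputs(encode_features(readings), ctx))
```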
The system employs an on-device large language model (LLM) to generate text prompts derived from ML outputs. These prompts are crucial for the creation of 360° multimodal wallpapers through generative models. The innovative use of an on-device LLM ensures that the generation process remains efficient and tailored to the user's environment and preferences.
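The following sketch illustrates one way ML outputs could be turned into a text prompt and handed to a generative model. The on-device LLM and the panoramic generator are represented here as plain callables; build_prompt, llm_refine, and generator are assumed names, not APIs from the application.

```python
# Hypothetical sketch of prompt construction from ML outputs; the on-device
# LLM and generative model are represented by simple callables, not real APIs.
from typing import Callable, Dict


def build_prompt(outputs: Dict[str, object]) -> str:
    """Turn ML outputs into a draft prompt for a 360-degree wallpaper generator."""
    mood = outputs.get("scene_mood", "neutral")
    interests = ", ".join(outputs.get("interests", [])) or "abstract shapes"
    return (
        f"A seamless 360-degree panoramic wallpaper with a {mood} mood, "
        f"featuring {interests}, matched to the user's current surroundings."
    )


def generate_wallpaper(
    outputs: Dict[str, object],
    llm_refine: Callable[[str], str],
    generator: Callable[[str], bytes],
) -> bytes:
    """On-device LLM refines the draft prompt; a generative model renders it."""
    draft = build_prompt(outputs)
    refined = llm_refine(draft)   # e.g. a small on-device language model
    return generator(refined)     # e.g. a panoramic image generation model
```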
A continuous feedback loop refines the text prompts based on real-time sensor data and user interactions. This iterative exchange between the generative models and the on-device LLM ensures that the wallpapers remain updated and aligned with the user's current context and preferences.
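A minimal sketch of such a loop is given below, assuming hypothetical polling and refinement callables; the iteration count and polling interval are illustrative placeholders rather than values from the application.

```python
# Hypothetical sketch of the feedback loop; the polling interval, iteration
# count, and interaction signals are illustrative placeholders.
import time
from typing import Callable, Dict, List


def feedback_loop(
    poll_sensors: Callable[[], Dict[str, float]],
    poll_interactions: Callable[[], List[str]],
    refine_prompt: Callable[[str, Dict[str, float], List[str]], str],
    render: Callable[[str], bytes],
    initial_prompt: str,
    interval_s: float = 30.0,
    max_iterations: int = 3,
) -> bytes:
    """Iteratively refine the prompt with fresh sensor data and user input."""
    prompt = initial_prompt
    wallpaper = render(prompt)
    for _ in range(max_iterations):
        time.sleep(interval_s)
        sensors = poll_sensors()            # real-time sensor snapshot
        interactions = poll_interactions()  # e.g. dwell time, explicit ratings
        prompt = refine_prompt(prompt, sensors, interactions)
        wallpaper = render(prompt)          # regenerate with the updated prompt
    return wallpaper
```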
This technology is particularly relevant to virtual reality (VR) and augmented reality (AR) systems, where immersive environments are essential. By incorporating sensor data into the generative process, the system addresses existing limitations in VR and AR applications, providing a more personalized and context-aware user experience.
The implementation involves a processing device that manages the conversion of sensor data into encoded features and the subsequent generation of adaptive wallpapers. The system can be realized through various embodiments, including a virtual reality generation system and a non-transitory computer-readable medium, ensuring flexibility in deployment across different platforms and devices.
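One way a processing device could tie the stages together is sketched below; the class and parameter names are assumptions made for illustration and are not drawn from the claims.

```python
# Hypothetical sketch of a processing device composing the pipeline stages;
# names are illustrative, not taken from the application.
from typing import Callable, Dict, List


class AdaptiveWallpaperDevice:
    """Processing device that converts sensor data into adaptive wallpapers."""

    def __init__(
        self,
        encoder: Callable[[List[Dict[str, float]]], Dict[str, float]],
        model: Callable[[Dict[str, float]], Dict[str, object]],
        prompter: Callable[[Dict[str, object]], str],
        generator: Callable[[str], bytes],
    ) -> None:
        self.encoder = encoder      # sensor data -> encoded features
        self.model = model          # encoded features -> ML outputs
        self.prompter = prompter    # ML outputs -> on-device LLM text prompt
        self.generator = generator  # text prompt -> 360-degree wallpaper

    def update(self, sensor_frames: List[Dict[str, float]]) -> bytes:
        """Run one end-to-end pass from raw sensor frames to a wallpaper."""
        features = self.encoder(sensor_frames)
        outputs = self.model(features)
        prompt = self.prompter(outputs)
        return self.generator(prompt)
```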