US20250113066
2025-04-03
Electricity
H04N21/21805
The disclosed system provides a virtual experience in which users can navigate between multiple live events occurring simultaneously in different locations. A network of cameras is organized into distinct groups, each associated with a specific event, capturing video streams from various angles. Users can switch between events manually or automatically, yielding a personalized and immersive viewing experience through a virtual reality headset.
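The camera-group organization and event/camera switching described above can be sketched as a small data model. This is an illustrative sketch only; the class and field names (`Camera`, `CameraGroup`, `Viewer`) are assumptions, not taken from the patent:

```python
from dataclasses import dataclass


@dataclass
class Camera:
    """One physical camera in the network (names are illustrative)."""
    camera_id: str
    angle: str  # e.g. "sideline", "overhead"


@dataclass
class CameraGroup:
    """A distinct group of cameras, each group covering one live event."""
    event_name: str
    cameras: list  # list of Camera objects


class Viewer:
    """Tracks which event and which camera angle a user is watching."""

    def __init__(self, groups):
        self.groups = {g.event_name: g for g in groups}
        self.event = next(iter(self.groups))  # default to the first event
        self.camera_index = 0

    def switch_event(self, event_name):
        """Manual switch to another live event's camera group."""
        if event_name not in self.groups:
            raise KeyError(f"unknown event: {event_name}")
        self.event = event_name
        self.camera_index = 0  # restart at the group's first angle

    def next_camera(self):
        """Cycle to the next camera angle within the current event."""
        group = self.groups[self.event]
        self.camera_index = (self.camera_index + 1) % len(group.cameras)
        return group.cameras[self.camera_index]

    def current_camera(self):
        return self.groups[self.event].cameras[self.camera_index]
```

A user watching `match_a` could call `switch_event("match_b")` to jump events, or `next_camera()` to cycle angles within the current event; an automatic mode would simply invoke the same two operations on the user's behalf.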
This system addresses the limitations faced by individuals who cannot attend live events due to geographic, financial, or health constraints. Traditional methods like television broadcasts do not replicate the atmosphere of being physically present at an event. The invention leverages virtual reality technology and a camera network to provide an authentic sense of presence, enhancing the viewing experience beyond what is currently possible with existing solutions.
The system employs artificial intelligence (AI) and computer vision algorithms to process real-time video streams from multiple cameras. Users receive these streams via a VR headset and can switch between different events and camera angles based on their preferences. Machine-learning techniques, including neural networks, analyze the streaming data to predict moments of interest and suggest optimal viewing angles.
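The angle-suggestion step can be sketched as ranking each camera's latest frame by a "moment of interest" score. The patent refers generically to machine-learning and neural-network analysis; here `interest_model` is a stand-in for any such scorer, and `toy_interest_score` is a deliberately trivial placeholder, not the patent's actual model:

```python
def suggest_camera(frames_by_camera, interest_model):
    """Return the camera id whose latest frame scores highest.

    frames_by_camera: dict mapping camera id -> latest frame data.
    interest_model: any callable mapping a frame to a float score;
    a stand-in for the patent's ML/neural-network analysis.
    """
    return max(frames_by_camera,
               key=lambda cam: interest_model(frames_by_camera[cam]))


# Toy placeholder score: mean pixel intensity of a flat frame buffer.
def toy_interest_score(frame):
    return sum(frame) / len(frame)


frames = {"cam_a": [10, 20], "cam_b": [200, 220], "cam_c": [50, 60]}
best = suggest_camera(frames, toy_interest_score)  # -> "cam_b"
```

In a real deployment the scorer would run on decoded video frames and the highest-scoring camera would be offered to the user as the suggested angle, or selected automatically in the auto-switching mode.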
The system also allows cameras to be mounted on moving objects such as players or equipment, further enhancing the dynamic viewing experience.
Users can customize their audio-visual experience by combining video from one event with audio from another. The system supports multi-layered audio, enabling users to overlay ambient sounds, commentary in different languages, and voice communication with other users. This flexibility provides an immersive multi-sensory experience, allowing users to engage deeply with multiple live events simultaneously.
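The multi-layered audio described above amounts to mixing several independent tracks, each with its own gain, into one output signal. A minimal sketch, assuming audio arrives as equal-length lists of float samples in [-1, 1] (the function name and gain values are illustrative, not from the patent):

```python
def mix_audio_layers(layers):
    """Mix equal-length audio layers with per-layer gains.

    layers: list of (samples, gain) pairs; samples are floats in [-1, 1].
    Returns the weighted sum, hard-clipped back into [-1, 1].
    """
    n = len(layers[0][0])
    mixed = [0.0] * n
    for samples, gain in layers:
        for i, s in enumerate(samples):
            mixed[i] += gain * s
    return [max(-1.0, min(1.0, s)) for s in mixed]


# Example: video from Event A paired with Event B's crowd ambience,
# a commentary track, and a voice channel (toy 3-sample signals).
ambience = [0.2, 0.2, 0.2]
commentary = [0.5, -0.5, 0.5]
voice = [0.1, 0.1, -0.1]
composite = mix_audio_layers([
    (ambience, 0.5),     # quiet background layer
    (commentary, 0.8),   # language-specific commentary
    (voice, 1.0),        # communication with other users
])
```

Per-layer gains let the user balance the layers independently, e.g. lowering ambience while keeping the voice channel at full level.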