Invention Title:

Photometric Stereo Enrollment for Gaze Tracking

Publication number:

US20250342723

Publication date:
Section:

Physics

Class:

G06V40/19

Inventors:

Assignee:

Applicant:

Smart overview of the Invention

The application introduces a method for enhancing gaze tracking accuracy using photometric stereo techniques with a single camera. The approach builds a user-specific anatomical model of the eye, including data on the center of vision and on eye structure observed at various pupil dilation states. The model aims to improve gaze tracking precision, particularly in devices such as head-mounted displays where space for sensors is limited.
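
As a rough sketch of what such a user-specific model could look like in code, the following Python structure stores an iris-pupil boundary and an optical-axis estimate per dilation state. The class and field names (EyeEnrollmentModel, DilationSample) are illustrative assumptions, not terms from the application.

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class DilationSample:
    """Eye geometry captured at one pupil dilation state during enrollment."""
    pupil_diameter_mm: float          # estimated pupil diameter for this state
    iris_pupil_boundary: np.ndarray   # (N, 3) points on the iris-pupil edge
    optical_axis: np.ndarray          # (3,) unit vector toward the center of vision


@dataclass
class EyeEnrollmentModel:
    """User-specific anatomical model built during enrollment (hypothetical layout)."""
    samples: list[DilationSample] = field(default_factory=list)

    def boundary_for(self, pupil_diameter_mm: float) -> np.ndarray:
        """Return the stored boundary whose dilation state is closest to the query."""
        nearest = min(self.samples,
                      key=lambda s: abs(s.pupil_diameter_mm - pupil_diameter_mm))
        return nearest.iris_pupil_boundary
```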

Technical Background

Gaze tracking traditionally relies on monitoring the eye's direction to determine where a person is looking. The center of vision is tied to the macula of the retina, which cannot be observed directly. Photometric stereo techniques capture three-dimensional information by varying the position of the light source between images while keeping the camera and the object fixed. Applied to the eye, this can yield detailed structural data that supports accurate gaze tracking.
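
The sketch below illustrates the classical Lambertian photometric stereo computation being referred to: with a fixed camera and several images taken under different known light directions, per-pixel surface normals and albedo are recovered by least squares. This is a generic textbook formulation, not the specific implementation described in the application.

```python
import numpy as np


def photometric_stereo(images: np.ndarray, light_dirs: np.ndarray):
    """Recover per-pixel surface normals and albedo from images taken with a
    fixed camera under varying light directions (Lambertian assumption).

    images:     (K, H, W) grayscale images, one per light position
    light_dirs: (K, 3) unit vectors toward each light source
    """
    k, h, w = images.shape
    intensities = images.reshape(k, -1)                           # (K, H*W)
    # Solve light_dirs @ g = intensities, where g = albedo * normal per pixel.
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)                            # (H*W,)
    normals = g / np.maximum(albedo, 1e-8)                        # unit normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```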

System Implementation

Using anatomical information specific to an individual's eye improves gaze tracking accuracy. Photometric stereo techniques can capture such data, including the iris-pupil edge at different dilation states, with a single camera. This is particularly beneficial for head-mounted devices with limited space, where traditional multi-camera setups are impractical.
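
As an illustration of measuring the iris-pupil edge from a single-camera frame, the sketch below fits a circle to already-detected boundary points with an algebraic least-squares fit, giving a per-frame dilation estimate. The function name and the assumption that edge points are available up front are illustrative simplifications, not steps taken from the application.

```python
import numpy as np


def fit_pupil_circle(edge_points: np.ndarray):
    """Fit a circle to iris-pupil edge points (x, y) detected in one frame.

    Uses the algebraic least-squares fit of x^2 + y^2 = A*x + B*y + C.
    Returns (center_x, center_y, radius) as a per-frame dilation measurement.
    """
    x, y = edge_points[:, 0], edge_points[:, 1]
    design = np.column_stack([x, y, np.ones_like(x)])
    target = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(design, target, rcond=None)
    cx, cy = a / 2.0, b / 2.0
    radius = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, radius
```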

Device Constraints and Solutions

Head-mounted devices such as glasses pose challenges in placing cameras and illumination elements due to space constraints. Traditional gaze tracking methods may not be feasible in such devices, but photometric stereo techniques offer a viable alternative: they require only a single camera and a small number of light sources, and they can operate effectively at close range and at steep viewing angles.

Enrollment Process

The enrollment process captures structural data about the eye under controlled lighting conditions. Light output is adjusted to induce specific pupil dilation states while visual indicators guide the user's gaze direction. The resulting data is used to generate an iris-pupil boundary model that improves subsequent gaze tracking accuracy in close-proximity devices such as head-mounted displays.
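
A minimal sketch of such an enrollment loop is shown below, assuming hypothetical hardware hooks (set_display_brightness, show_gaze_target, capture_frame) for driving dilation, guiding gaze, and capturing one frame per light position. These names are placeholders, not APIs from the application; the loop simply collects per-state image stacks for later reconstruction.

```python
from typing import Callable, Sequence

import numpy as np


def run_enrollment(
    set_display_brightness: Callable[[float], None],      # drives pupil dilation
    show_gaze_target: Callable[[tuple[float, float]], None],
    capture_frame: Callable[[int], np.ndarray],            # image under light index i
    light_dirs: np.ndarray,                                 # (K, 3) known light directions
    brightness_levels: Sequence[float],
    gaze_targets: Sequence[tuple[float, float]],
) -> list[dict]:
    """Collect per-dilation-state eye imagery for building a user-specific model.

    For each brightness level (inducing a dilation state) and each gaze target,
    capture one frame per light position and keep the raw stack for later
    photometric-stereo reconstruction of the iris-pupil boundary.
    """
    records = []
    for brightness in brightness_levels:
        set_display_brightness(brightness)        # induce a specific dilation state
        for target in gaze_targets:
            show_gaze_target(target)              # indicator guides the user's gaze
            frames = np.stack([capture_frame(i) for i in range(len(light_dirs))])
            records.append({
                "brightness": brightness,
                "gaze_target": target,
                "frames": frames,                 # (K, H, W) stack for reconstruction
            })
    return records
```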