Researchers at Cornell University have developed a technology that tracks gaze and facial expressions without cameras. Instead of imaging the face, it uses sonar-like acoustic sensing to pick up the subtle skin and muscle movements that accompany eye movements and changes of expression.
The two systems, known as GazeTrak and EyeEcho, offer a non-intrusive way to monitor user interactions in virtual reality (VR) environments. GazeTrak places a small speaker and four microphones around each eye frame of a pair of glasses; the speaker bounces sound off the eyeball, the microphones capture the returning echoes, and a machine-learning model processes the echo data to infer the user's gaze direction.
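In rough outline, the sensing loop works like this: the speaker emits a sound, each microphone records the echoes reflected by the eyeball, and a learned model regresses the gaze direction from those echo profiles. The sketch below illustrates only that last mapping step; the channel count, input size, and network architecture are illustrative assumptions, not details of Cornell's published system.

```python
# Illustrative sketch (not Cornell's actual model): regress gaze direction
# from multi-channel acoustic echo profiles.
import torch
import torch.nn as nn

N_MICS = 8       # assumption: 4 microphones around each of the two lenses
ECHO_BINS = 64   # assumption: samples kept per echo profile

class GazeNet(nn.Module):
    """Small 1-D CNN mapping echo profiles to a (yaw, pitch) gaze estimate."""
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_MICS, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 2)  # two outputs: yaw and pitch (degrees)

    def forward(self, echo: torch.Tensor) -> torch.Tensor:
        # echo: (batch, N_MICS, ECHO_BINS)
        return self.head(self.features(echo).squeeze(-1))

# One synthetic frame standing in for real microphone data.
model = GazeNet()
frame = torch.randn(1, N_MICS, ECHO_BINS)
yaw, pitch = model(frame)[0]
print(f"predicted gaze: yaw={yaw:.1f} deg, pitch={pitch:.1f} deg")
```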
EyeEcho, by contrast, uses a single speaker and microphone on each side of the glasses, aimed at the wearer's cheeks, to detect the skin movements produced by facial expressions. The inferred expressions can then be replicated in real time on a virtual avatar, enhancing the sense of presence in VR applications.
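Driving the avatar then amounts to streaming the inferred expression parameters to the renderer frame by frame. The snippet below sketches that step under assumed names: a small set of blendshape weights in [0, 1] (a common avatar-rigging convention) and a hypothetical predict_expression function standing in for EyeEcho's learned model.

```python
# Illustrative sketch: stream inferred expression weights to an avatar rig.
# `predict_expression` is a hypothetical stand-in for the learned model.
import numpy as np

BLENDSHAPES = ["jawOpen", "mouthSmile", "browRaise", "eyeBlink"]  # assumed rig

def predict_expression(echo_frame: np.ndarray) -> np.ndarray:
    """Placeholder: map one frame of cheek echoes to blendshape weights."""
    rng = np.random.default_rng(abs(int(echo_frame.sum() * 1e6)) % 2**32)
    return rng.uniform(0.0, 1.0, size=len(BLENDSHAPES))

def animate(echo_stream, alpha: float = 0.3):
    """Exponentially smooth per-frame predictions so the avatar moves fluidly."""
    weights = np.zeros(len(BLENDSHAPES))
    for frame in echo_stream:
        raw = predict_expression(frame)
        weights = alpha * raw + (1 - alpha) * weights  # low-pass filter jitter
        yield {name: round(float(w), 2) for name, w in zip(BLENDSHAPES, weights)}

# Simulate a few frames of two-channel microphone input.
stream = (np.random.rand(2, 64) for _ in range(3))
for pose in animate(stream):
    print(pose)  # e.g. {'jawOpen': 0.21, 'mouthSmile': 0.14, ...}
```

The smoothing step matters for real-time avatars: raw per-frame predictions jitter, and a simple low-pass filter trades a few milliseconds of latency for visibly steadier motion.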
The hardware is designed to be compact, inexpensive, and energy-efficient, making it suitable for integration into future wearables such as smart glasses or VR headsets. Because it relies on sound rather than video, it could enable hands-free, avatar-based video calls even in noisy settings such as cafes or outdoors.
While some existing smart glasses can recognize faces and a handful of specific expressions, EyeEcho's continuous expression tracking sets it apart. The system can run for several hours on a standard smart-glasses battery, or a full day on a VR headset battery, without degrading tracking performance.
Though still a prototype, the technology holds promise for enhancing user interaction in virtual environments and opening new possibilities for immersive experiences. With further development and integration into commercial products, it could change how we engage with VR content and communication platforms.