The MQP appears to be a nifty tool for eye tracking in either gaming or research studies. However, as a general VR user, I have some remaining questions about the quality of the MQP’s eye tracker.
I’m curious about the overall accuracy and precision of the eye tracker itself. Is there a way to quantify these properties?
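To make this concrete, here is a rough sketch of how I could quantify both properties from raw gaze samples during a fixation task: accuracy as the mean angular offset from a known target, and precision as the RMS of sample-to-sample angular differences. The function names and array layouts below are my own assumptions, not anything from the Quest Pro SDK.

```python
import numpy as np

def angular_error_deg(a, b):
    """Angle in degrees between two direction vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def accuracy_precision(gaze_dirs, target_dir):
    """Accuracy and precision for one fixation block.

    gaze_dirs  : (N, 3) gaze direction vectors sampled while fixating a known target.
    target_dir : (3,) direction toward the fixation target, same coordinate frame.

    Accuracy  = mean angular offset from the target (systematic error).
    Precision = RMS of successive-sample angular differences (noise/dispersion).
    """
    gaze_dirs = np.asarray(gaze_dirs, dtype=float)

    offsets = np.array([angular_error_deg(g, target_dir) for g in gaze_dirs])
    accuracy = offsets.mean()

    successive = np.array([angular_error_deg(gaze_dirs[i], gaze_dirs[i + 1])
                           for i in range(len(gaze_dirs) - 1)])
    precision_rms = np.sqrt(np.mean(successive ** 2))
    return accuracy, precision_rms
```

Running this over several target positions across the field of view would also show whether accuracy degrades toward the periphery.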
Another question I want to explore is more niche. One of my colleagues is interested in replicating the head and eye rotation that occurs during natural locomotion, i.e. when something catches our attention and we rotate our body, head, and eyes to fixate on it. I’m curious how far into the periphery a fixation target has to be before a user rotates their head in addition to their eyes. During such a motion, if the head and eyes are both free to move, how far do the eyes deviate from the head?
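As a first pass at measuring that separation, here is a small sketch of how I might decompose a logged gaze shift into a head component and an eye-in-head component, assuming I can record a head-forward vector and a world-space gaze vector each frame. The data layout is illustrative and not the headset’s actual API.

```python
import numpy as np

def angle_deg(a, b):
    """Angle in degrees between two direction vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def decompose_gaze_shift(head_forward, gaze_dir, target_dir):
    """Per-frame decomposition of a gaze shift toward a fixed target.

    head_forward : (N, 3) head forward vectors in world coordinates.
    gaze_dir     : (N, 3) combined gaze vectors in world coordinates.
    target_dir   : (3,)   direction toward the fixation target.

    Returns, per frame:
      head_to_target : remaining head rotation toward the target,
      eye_in_head    : eye eccentricity relative to the head (the eye-head separation),
      gaze_to_target : remaining gaze error.
    """
    head_to_target = np.array([angle_deg(h, target_dir) for h in head_forward])
    eye_in_head = np.array([angle_deg(g, h) for g, h in zip(gaze_dir, head_forward)])
    gaze_to_target = np.array([angle_deg(g, target_dir) for g in gaze_dir])
    return head_to_target, eye_in_head, gaze_to_target
```

Plotting eye_in_head over time for targets at different eccentricities should show at what offset the head starts contributing and how large the eye-head separation gets mid-shift.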
I want to aggregate a collection of EEG responses to eye movements, in order to do some form of template filtering when looking at EEG signals in more natural conditions. This EEG data can be collected using wearable EEG devices such as the Muse S, but the important quality is that
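Here is a rough sketch of the kind of template filtering I have in mind: average EEG epochs time-locked to eye-movement onsets to build a template, then subtract that template at each later occurrence. The array shapes and onset indices are my own assumptions and are not tied to any particular Muse S API.

```python
import numpy as np

def build_template(eeg, onsets, pre=50, post=200):
    """Average EEG epochs time-locked to eye-movement onsets.

    eeg    : (n_channels, n_samples) array of EEG data.
    onsets : iterable of sample indices where eye movements begin.
    pre/post : samples to keep before/after each onset.
    """
    epochs = []
    for o in onsets:
        if o - pre >= 0 and o + post <= eeg.shape[1]:
            epochs.append(eeg[:, o - pre:o + post])
    return np.mean(epochs, axis=0)  # (n_channels, pre + post) template

def subtract_template(eeg, onsets, template, pre=50):
    """Remove the average eye-movement response at each detected onset."""
    cleaned = eeg.copy()
    length = template.shape[1]
    for o in onsets:
        start = o - pre
        if start >= 0 and start + length <= eeg.shape[1]:
            cleaned[:, start:start + length] -= template
    return cleaned
```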
<aside> 📃
The majority of this information is drawn from the related work of this paper: https://doi.org/10.1145/3361218
</aside>