I would like to understand which coordinate frame is used as the origin for the eye position and fixation position reported in world coordinates.
Eye Tracking | MagicLeap Developer Documentation
To clarify, the eye position and fixation point are provided in world coordinates.
No, the headset position is reported as the average of the left and right display referentials, which ends up being somewhere between the two displays and a bit ahead of the user's eyes.
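As a rough illustration only (a sketch, not the driver's exact computation; it assumes the per-eye XR nodes report valid positions on your setup), you could approximate that reported pose in Unity like this:

```csharp
using UnityEngine;
using UnityEngine.XR;

public class HeadsetMidpoint : MonoBehaviour
{
    void Update()
    {
        // Midpoint of the left/right eye (display) positions, which is
        // roughly where the headset pose is reported per the post above.
        InputDevice leftEye = InputDevices.GetDeviceAtXRNode(XRNode.LeftEye);
        InputDevice rightEye = InputDevices.GetDeviceAtXRNode(XRNode.RightEye);

        if (leftEye.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 leftPos) &&
            rightEye.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 rightPos))
        {
            Vector3 approxHeadsetPos = (leftPos + rightPos) * 0.5f;
            Debug.Log($"Approx. headset position: {approxHeadsetPos}");
        }
    }
}
```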
To clarify my previous post: the fixation point position provided by the API is not relative to the headset but relative to the world origin.
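For reference, here is a minimal Unity sketch of reading that fixation point through the standard UnityEngine.XR eyes feature; the device lookup via InputDeviceCharacteristics.EyeTracking is an assumption about how your project obtains the device:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

public class FixationReader : MonoBehaviour
{
    InputDevice eyeDevice;

    void Update()
    {
        if (!eyeDevice.isValid)
        {
            // Find the first device that reports eye-tracking capability.
            var devices = new List<InputDevice>();
            InputDevices.GetDevicesWithCharacteristics(InputDeviceCharacteristics.EyeTracking, devices);
            if (devices.Count == 0) return;
            eyeDevice = devices[0];
        }

        // The eyes feature exposes the fixation point directly; per this
        // thread it is expressed relative to the world origin.
        if (eyeDevice.TryGetFeatureValue(CommonUsages.eyesData, out Eyes eyes) &&
            eyes.TryGetFixationPoint(out Vector3 fixationPoint))
        {
            Debug.Log($"Fixation point (world space): {fixationPoint}");
        }
    }
}
```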
Hello,
We are using an eye-tracking SDK. Four of us conducted eye-tracking data collection tests from the same seat and at the same angle. Our eye-tracking collection app ran continuously, yet the fixation point X values it produced were different for each person. What could be the reason?
Each column represents one person; there are four people and four columns in total.
Did you run eye calibration before testing with each participant?
We have run the eye calibration, but the fixation point still drifts.
We asked the participant to look at the four points circled in red on the screen, and we found that their gaze drifted significantly.
A few questions about this:
Are you using the head strap that came with the Magic Leap 2?
Was eye calibration performed before your application was run?
Do you have any repro steps that you can share with us? (How could we reproduce this issue ourselves?)
Hello, I understand this is not the main problem, but I wanted to confirm: you are saying that the eye position and fixation point are provided in world coordinates, so I do not need to normalize the gaze into the global frame myself? We are using the code below:
```csharp
hasGazeData &= eyeTrackingDevice.TryGetFeatureValue(UnityEngine.XR.CommonUsages.devicePosition, out Vector3 gazePosition);
hasGazeData &= eyeTrackingDevice.TryGetFeatureValue(UnityEngine.XR.CommonUsages.deviceRotation, out Quaternion gazeRotation);
```
and we are trying to use this data to calculate a gaze vector that is normalized in the global frame. Will this code output what we already have? If not, what should we output to calculate/normalize the gaze data into a usable gaze vector?
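For context, this is the kind of derivation we had in mind (a sketch; it assumes gazePosition/gazeRotation come from the snippet above, and that eyeTrackingDevice actually supports the eyesData usage):

```csharp
// Option 1: derive the gaze direction from the reported rotation by
// rotating the forward axis and normalizing.
Vector3 gazeDirection = (gazeRotation * Vector3.forward).normalized;
Ray gazeRay = new Ray(gazePosition, gazeDirection);

// Option 2: derive it from the world-space fixation point, if the device
// exposes the eyes feature.
if (eyeTrackingDevice.TryGetFeatureValue(UnityEngine.XR.CommonUsages.eyesData, out UnityEngine.XR.Eyes eyes) &&
    eyes.TryGetFixationPoint(out Vector3 fixationPoint))
{
    Vector3 gazeFromFixation = (fixationPoint - gazePosition).normalized;
}
```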
@daisy.t.gan1 Do you mind creating a new post and linking this thread? I just want to make sure that we don't push notifications to users who are part of this thread while discussing your question. In your new post, could you also specify whether you are using MLSDK or OpenXR, and the version of the SDK you are using?
Thanks, I have made a new thread, but my original question in this thread was figured out after some digging: those two calls output local data for just the head device's position and rotation.
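For anyone landing here later, a quick way to confirm which device a TryGetFeatureValue call is actually reading from (a sketch; device names vary by platform):

```csharp
// If characteristics show HeadMounted rather than EyeTracking, then
// devicePosition/deviceRotation refer to the headset pose, not the gaze pose.
Debug.Log($"Device: {eyeTrackingDevice.name}, characteristics: {eyeTrackingDevice.characteristics}");
```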