Hologram Drift Issue When Tracking Object with Retroreflective Markers using Depth Camera (Raw data)

Good afternoon everyone,

I am developing an application for Magic Leap 2 that tracks an object equipped with retroreflective markers and overlays its holographic counterpart on the real object in real-time.

Unity Editor Version: 2022.3.61f1
ML2 OS Version: 1.12.0
MLSDK Version: 1.12.0

(video attachment)

As you can see in the video, the tracking appears to be correct, and I am able to compute the pose of the object in the world reference frame. However, when I move my head, the hologram seems to drift — following the head movement briefly before realigning with the tracked object.

To compute the object’s pose in the world reference frame (i.e., the XROrigin, corresponding to the head’s initial position when the app launches), I perform the following matrix multiplication:

```csharp
Matrix4x4 worldTobject = worldTsensor * sensorTobject;
```

where worldTsensor is calculated like this:

```csharp
Pose offset = new Pose(
    xrOrigin.CameraFloorOffsetObject.transform.position,
    xrOrigin.transform.rotation
);

Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value, offset);
```

and sensorTobject is the pose of the tracked object relative to the sensor reference frame (solved with a PnP algorithm and a pinhole camera model).
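For completeness, here is a minimal sketch of how that composition could be applied to the overlay (assumptions: `hologram` is the hologram's Transform, and `sensorTobject` is the homogeneous matrix produced by the PnP solve; neither name comes from the SDK):

```csharp
using UnityEngine;

// Hypothetical helper: build worldTsensor from the Pose returned by
// GetSensorPose, compose it with the PnP result, and apply the resulting
// world pose to the hologram's Transform.
static void ApplyObjectPose(Transform hologram, Pose sensorPose, Matrix4x4 sensorTobject)
{
    // worldTsensor as a homogeneous transform (unit scale).
    Matrix4x4 worldTsensor = Matrix4x4.TRS(sensorPose.position, sensorPose.rotation, Vector3.one);
    Matrix4x4 worldTobject = worldTsensor * sensorTobject;

    hologram.SetPositionAndRotation(
        worldTobject.GetColumn(3),   // translation (fourth column)
        worldTobject.rotation);      // rotation extracted from the matrix
}
```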

I don’t understand why the hologram drifts even though everything is referred to the XROrigin, which should remain fixed.

Am I missing an additional transformation (e.g., head pose)? Or are the sensor pose and the depth camera frame not synchronous? I get the sensor pose just before processing the depth frame as suggested in the pixel-sensors API examples:

```csharp
Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value, offset);
if (pixelSensorFeature.GetSensorData(sensorId.Value, stream, out var frame, out var metaData,
        Allocator.Temp, shouldFlipTexture: true))
{
    // Process Frames ...
    Debug.Log("Sensor Pose: " + sensorPose);
    streamVisualizer.ProcessFrame(frame, metaData, sensorPose);
}
```
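One way to check whether pose/frame desynchronization is plausible is to log how far the head moves between the pose query and the moment the frame is delivered. A diagnostic sketch (assumption: `Camera.main` is the head-tracked camera; the remaining names are taken from the snippet above):

```csharp
using UnityEngine;

Vector3 headBefore = Camera.main.transform.position;

Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value, offset);
if (pixelSensorFeature.GetSensorData(sensorId.Value, stream, out var frame, out var metaData,
        Allocator.Temp, shouldFlipTexture: true))
{
    // If the head moved noticeably in this window, a stale sensor pose
    // would translate directly into hologram drift during head motion.
    float movedMm = Vector3.Distance(headBefore, Camera.main.transform.position) * 1000f;
    Debug.Log($"Head moved {movedMm:F1} mm between pose query and frame delivery");

    streamVisualizer.ProcessFrame(frame, metaData, sensorPose);
}
```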

Any help or advice would be greatly appreciated!

Hi @alessandro.albanesi,

Is the GameObject that is overlayed onto the physical object a child of the headset in the hierarchy? It may be that the object is briefly following the head pose because it is a child of the camera or XR Origin.

Hi and thanks for the prompt response!

No, the stylus object is not a child of the camera or the XR Origin. I've attached a screenshot of the scene hierarchy for reference.

Here is my ML Rig:

The CV camera and the head pose are out of sync, so that may be the cause of the drift. This can be resolved by using the world camera.

I am using the depth camera (raw). Is that the CV camera you are referring to? From what you are saying, the depth frame and the depth sensor pose are not synchronized. Is that correct?

Have you tried obtaining the pose after getting the sensor data?


```csharp
if (pixelSensorFeature.GetSensorData(sensorId.Value, stream, out var frame, out var metaData,
        Allocator.Temp, shouldFlipTexture: true))
{
    // Process Frames ...
    Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value, offset);
    Debug.Log("Sensor Pose: " + sensorPose);
    streamVisualizer.ProcessFrame(frame, metaData, sensorPose);
}
```

Yes, same result. I also checked how long the frame processing takes (thresholding, the PnP solve, etc.) before getting the next sensor data, and it is less than 500 ms (as suggested in one of the posts). Could the problem be related to unsynchronized depth-frame and sensor-pose data? If so, why are they unsynchronized, and is there a way to synchronize them?
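This does not address the root cause, but if the error really is a transient that appears only while the head moves, one mitigation is to low-pass filter the object's world pose: the physical object is static in the world frame, so smoothing should be safe. A sketch, with `alpha` as a hypothetical tuning parameter (closer to 1 = less smoothing):

```csharp
using UnityEngine;

// Hypothetical exponential smoother for the tracked object's world pose.
// A pose/frame timestamp mismatch shows up as a short-lived error during
// head motion; a low-pass filter on the world pose hides that transient.
public class PoseSmoother
{
    Vector3 pos;
    Quaternion rot;
    bool initialized;

    public Pose Smooth(Pose raw, float alpha = 0.2f)
    {
        if (!initialized)
        {
            pos = raw.position;
            rot = raw.rotation;
            initialized = true;
        }
        else
        {
            pos = Vector3.Lerp(pos, raw.position, alpha);
            rot = Quaternion.Slerp(rot, raw.rotation, alpha);
        }
        return new Pose(pos, rot);
    }
}
```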