Hologram Drift Issue When Tracking Object with Retroreflective Markers using Depth Camera (Raw data)

Good afternoon everyone,

I am developing an application for Magic Leap 2 that tracks an object equipped with retroreflective markers and overlays its holographic counterpart on the real object in real-time.

Unity Editor Version: 2022.3.61f1
ML2 OS Version: 1.12.0
MLSDK Version: 1.12.0

(Video attachment: demonstration of the tracking and the drift described below.)

As you can see in the video, the tracking appears to be correct, and I am able to compute the pose of the object in the world reference frame. However, when I move my head, the hologram seems to drift — following the head movement briefly before realigning with the tracked object.

To compute the object’s pose in the world reference frame (i.e., the XROrigin, corresponding to the head’s initial position when the app launches), I perform the following matrix multiplication:

```csharp
Matrix4x4 worldTobject = worldTsensor * sensorTobject;
```

where worldTsensor is calculated like this:

```csharp
Pose offset = new Pose(
    xrOrigin.CameraFloorOffsetObject.transform.position,
    xrOrigin.transform.rotation
);

Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value, offset);
```

and sensorTobject is the pose of the tracked object relative to the sensor reference frame (solved using a PnP algorithm with a pinhole camera model).
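Concretely, I then convert the composed matrix back into a pose and apply it to the overlay, roughly like this (a simplified sketch; `hologram` here stands for the overlay GameObject):

```csharp
// Sensor pose in the world frame -> 4x4 matrix.
Matrix4x4 worldTsensor = Matrix4x4.TRS(sensorPose.position, sensorPose.rotation, Vector3.one);

// sensorTobject comes from the PnP solve (object pose in the sensor frame).
Matrix4x4 worldTobject = worldTsensor * sensorTobject;

// Apply the composed world pose to the hologram overlay.
hologram.transform.SetPositionAndRotation(worldTobject.GetPosition(), worldTobject.rotation);
```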

I don’t understand why the hologram drifts even though everything is referred to the XROrigin, which should remain fixed.

Am I missing an additional transformation (e.g., head pose)? Or are the sensor pose and the depth camera frame not synchronous? I get the sensor pose just before processing the depth frame, as suggested in the pixel-sensors API examples:

```csharp
Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value, offset);
if (pixelSensorFeature.GetSensorData(sensorId.Value, stream, out var frame, out var metaData,
        Allocator.Temp, shouldFlipTexture: true))
{
    // Process frames ...
    Debug.Log("Sensor Pose:" + sensorPose);
    streamVisualizer.ProcessFrame(frame, metaData, sensorPose);
}
```

Any help or advice would be greatly appreciated!

Hi @alessandro.albanesi,

Is the GameObject that is overlaid onto the physical object a child of the headset in the hierarchy? It may be that the object is briefly following the head pose because it is a child of the camera or XR Origin.

Hi and thanks for the prompt response!

No, as you can see, the stylus object is not parented to anything in the scene hierarchy.
I’ve attached a screenshot of the scene hierarchy for reference.

Here is my ML Rig:

The CV camera and the head pose are out of sync, so that may be the cause of the drift. This can be resolved by using the world camera.

I am using the depth camera (raw). Is that the CV camera you are referring to? From what you are saying, the depth frame and the depth sensor pose are not synchronized, is that correct?

Have you tried obtaining the pose after getting the sensor data?


```csharp
if (pixelSensorFeature.GetSensorData(sensorId.Value, stream, out var frame, out var metaData,
        Allocator.Temp, shouldFlipTexture: true))
{
    // Process frames ...
    Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value, offset);
    Debug.Log("Sensor Pose:" + sensorPose);
    streamVisualizer.ProcessFrame(frame, metaData, sensorPose);
}
```

Yes, same result. I also checked how long the frame processing takes (thresholding, the PnP algorithm, etc.) before getting the next sensor data, timing it roughly as sketched below, and it is less than 500 ms (as suggested in one of the posts). Could the problem be related to the unsynchronized data of the depth sensor and the sensor pose? If so, why are they unsynchronized? Is there a way to synchronize them?
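The timing check, for reference (wrapping the same ProcessFrame call as in my earlier snippet):

```csharp
// Rough timing of the per-frame processing (thresholding, PnP, etc.).
var sw = System.Diagnostics.Stopwatch.StartNew();
streamVisualizer.ProcessFrame(frame, metaData, sensorPose);
sw.Stop();
Debug.Log($"Frame processing took {sw.ElapsedMilliseconds} ms");
```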

Sorry to hear that you are running into this issue. The team is out of office today, but you could look into making a minor change in the SDK itself and see if it resolves your issue, for example by modifying a function in the SDK to allow an additional parameter to be passed into the GetSensorPose function. (Note: we have not tested this workaround yet.)

Move the Magic Leap Unity SDK into your project's Packages directory

Make sure the Magic Leap Unity package is imported into your project as a folder in your /Packages/ directory. (You can skip this step if you are using the Magic Leap Examples Project.)

  1. Right-click the Magic Leap SDK folder in the Unity Editor and then select “Show in file explorer” (or equivalent)

  2. Copy the com.magicleap.unitysdk directory and paste it into the <YourProject>/Packages/ directory.

  3. The contents of the folder can now be edited without being reverted to the version that was embedded in the package.

Edit the Scripts

  1. Open the MagicLeapPixelSensor.cs script.
    Located under: /Packages/com.magicleap.unitysdk/Runtime/OpenXR/PixelSensors/MagicLeapPixelSensor.cs

  2. Duplicate the GetSensorPose function (lines 74-98) and modify it to accept a captureTime, using this value instead of the next predicted display time:

```csharp
public Pose GetSensorPose(Pose offset, long captureTime)
{
    unsafe
    {
        var convertedOffsetPose = XrPose.GetFromPose(offset);
        if (sensorSpace == 0)
        {
            var createSpaceInfo = new XrPixelSensorCreateSpaceInfo
            {
                Type = XrPixelSensorStructTypes.XrTypePixelSensorCreateSpaceInfoML,
                Sensor = Handle,
                Offset = convertedOffsetPose
            };
            var xrResult = NativeFunctions.XrCreatePixelSensorSpace(PixelSensorFeature.AppSession, ref createSpaceInfo, out sensorSpace);
            if (!Utils.DidXrCallSucceed(xrResult, nameof(PixelSensorNativeFunctions.XrCreatePixelSensorSpace)))
            {
                return default;
            }
        }

        var spaceInfoFunctions = PixelSensorFeature.SpaceInfoNativeFunctions;
        // The line below was updated to accept a capture time value.
        var pose = spaceInfoFunctions.GetUnityPose(sensorSpace, PixelSensorFeature.AppSpace, captureTime);
        return pose;
    }
}
```
  1. Open the MagicLeapPixelSensorFeatureAPI.cs script.
    Located under: /Packages/com.magicleap.unitysdk/Runtime/OpenXR/PixelSensors/MagicLeapPixelSensorFeatureAPI.cs

  2. Duplicate the GetSensorPose function and modify the new version to accept a captureTime:

```csharp
public Pose GetSensorPose(PixelSensorId sensorType, long captureTime, Pose offset = default)
{
    if (!IsSensorConnected(sensorType, out var sensor))
    {
        return default;
    }
    // The line below was modified to pass in the capture time value.
    return sensor.GetSensorPose(offset, captureTime);
}
```

Edit your existing code

  1. Finally, with these modifications made, you can obtain the pose as you did previously, but pass the capture time into the GetSensorPose function:

```csharp
if (pixelSensorFeature.GetSensorData(sensorId.Value, stream, out var frame, out var metaData,
        Allocator.Temp, shouldFlipTexture: true))
{
    // The line below was modified to pass in the capture time value.
    Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value, frame.CaptureTime, offset);
    Debug.Log("Sensor Pose:" + sensorPose);

    // Use frame and pose data
    // Example: streamVisualizer.ProcessFrame(frame, metaData, sensorPose);
}
```

If you do test the workaround, please let me know.

You might not need to make changes to the SDK; simply checking whether frame.CaptureTime == 0 could be enough.
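For example, a quick sanity check along these lines (a sketch, reusing the frame variable from the snippets above):

```csharp
// Diagnostic: confirm the frame actually carries a capture timestamp
// before assuming the pose can be synchronized to it.
if (frame.CaptureTime == 0)
{
    Debug.LogWarning("Depth frame reports CaptureTime == 0; the pose cannot be matched to it.");
}
```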

(Video attachment: updated tracking result.)

Hi everyone,

Thanks again for the earlier tips. After tweaking the SDK functions, the overall drift is much better. The problem now is a subtle jitter when I move my head; the hologram keeps micro-correcting itself as new sensor pose data comes in.

Is there another smoothing or filtering step I can apply, or is that last bit of wobble just a hardware limitation of the tracking sensors / head-pose sensors?
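For instance, would a simple exponential smoothing of the pose before applying it be a reasonable approach? Something like this rough sketch (alpha is an arbitrary factor; worldTobject is the composed world pose from my pipeline):

```csharp
// Naive exponential smoothing: blend the previously applied pose toward the
// newly measured one. Values of alpha closer to 1 follow the raw pose more
// tightly; lower values are smoother but add latency.
float alpha = 0.5f; // arbitrary, would need tuning

hologram.transform.SetPositionAndRotation(
    Vector3.Lerp(hologram.transform.position, worldTobject.GetPosition(), alpha),
    Quaternion.Slerp(hologram.transform.rotation, worldTobject.rotation, alpha));
```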


Great to see that you got it working! The last bit of wobble could be related to the headset's reprojection and head tracking. Although applications run at 60 Hz, the headset upsamples the app to 120 Hz. Additionally, the sensors on the Magic Leap do not operate in sync, so the head pose might be obtained right after or right before the depth sensor data.

Thank you for your help so far. I’m trying to understand whether the mismatch between the app’s render rate (120 Hz) and the sensor sampling frame rate (60 Hz) can be mitigated. On HoloLens 2, we had a similar situation—the app ran at 60 Hz while the sensor sampled at 45 Hz—yet no noticeable wobble occurred.

Do you have any guidance or recommended settings to resolve this discrepancy?

Do you see the same issue with the wobbling when you run the depth sensor at a lower framerate on the Magic Leap 2? Have you considered using the Reprojection extension to stabilize the virtual content?

I just integrated the code suggested on that page into my project, and it definitely solved the wobbling problem. However, now that I have included this reprojection script, the holograms sometimes seem to "flash". Is there some setting of the ML rig / camera that I have to change to correctly integrate the reprojection without this strange "teleporting/flash" effect, for example the near/far clipping plane distances? I hope I was clear enough; this phenomenon is difficult to describe and cannot be recorded, as it is only perceived by the user wearing the glasses.

As you can see in the image, I attached the script to a GameObject and set the target object to the object I am tracking. I was wondering if there is a way to apply the reprojection to all rendered holograms and not only to a specified one. Before you suggested I look into the reprojection code, I had already selected these settings on the OpenXR page:

As you can see, I had already enabled Magic Leap 2 Reprojection and set the Depth Submission Mode to 16 bit. However, the wobbling didn't disappear; it only disappeared once I included the script suggested on the Reprojection developer docs page. Shouldn't the two behave the same?

When using the manual reprojection mode, you can only specify one plane. Using the depth submission mode, however, allows multiple objects to contribute to the reprojection. Note that for UI you would need to edit the shader applied to the panel so that it writes to Z depth.

You might also be able to use eye tracking and simply set the target object based on the user's gaze.

Regarding the issue you are having with the flashing:

Have you tried the example that does not require the target object, instead simply enabling the reprojection feature with the depth submission mode and commenting out the code in the Update() function?

Is the skybox on your Main camera set to Transparent?

If this does not work, you may need to submit the velocity / pose information in LateUpdate(), FixedUpdate(), or OnPreRender().
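For example (a hypothetical sketch; SubmitReprojectionInfo and targetObject are placeholders for whatever names the docs script actually uses in Update()):

```csharp
// Hypothetical: move the submission from Update() to LateUpdate() so it runs
// after all head-pose and camera updates for the current frame.
void LateUpdate()
{
    // Placeholder call; substitute the reprojection call from the docs script.
    SubmitReprojectionInfo(targetObject.transform.position);
}
```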

Indeed this ended up working just fine! Thank you so much for the support, it really is appreciated. If I make further progress and manage to finalize a reliable version of the tracking algorithm, I might share it on GitHub. Thanks again!
