Hologram Drift Issue When Tracking Object with Retroreflective Markers using Depth Camera (Raw data)

Good afternoon everyone,

I am developing an application for Magic Leap 2 that tracks an object equipped with retroreflective markers and overlays its holographic counterpart on the real object in real-time.

Unity Editor Version: 2022.3.61f1
ML2 OS Version: 1.12.0
MLSDK Version: 1.12.0

[Video: Untitled video - Made with Clipchamp]

As you can see in the video, the tracking appears to be correct, and I am able to compute the pose of the object in the world reference frame. However, when I move my head, the hologram seems to drift — following the head movement briefly before realigning with the tracked object.

To compute the object’s pose in the world reference frame (i.e., the XROrigin, corresponding to the head’s initial position when the app launches), I perform the following matrix multiplication:

```csharp
Matrix4x4 worldTobject = worldTsensor * sensorTobject;
```

where worldTsensor is calculated like this:

```csharp
Pose offset = new Pose(
    xrOrigin.CameraFloorOffsetObject.transform.position,
    xrOrigin.transform.rotation
);

Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value, offset);
```

and sensorTobject is the pose of the tracked object relative to the sensor reference frame (solved with a PnP algorithm and a pinhole camera model).
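For context, here is a minimal sketch of how that composition can be done in Unity. The variables `objectPositionInSensor`, `objectRotationInSensor`, and `hologram` are illustrative placeholders for the PnP output and the overlay object, not SDK API:

```csharp
// Build worldTsensor from the sensor pose returned by GetSensorPose
// (assumes sensorPose is expressed in the XROrigin / world frame).
Matrix4x4 worldTsensor = Matrix4x4.TRS(sensorPose.position, sensorPose.rotation, Vector3.one);

// sensorTobject comes from the PnP solution: the object's pose in the
// sensor frame (illustrative variables, not from the SDK).
Matrix4x4 sensorTobject = Matrix4x4.TRS(objectPositionInSensor, objectRotationInSensor, Vector3.one);

// Compose and extract the object's world pose for the hologram.
Matrix4x4 worldTobject = worldTsensor * sensorTobject;
hologram.transform.SetPositionAndRotation(
    worldTobject.GetColumn(3), // translation (4th column)
    worldTobject.rotation);    // rotation
```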

I don’t understand why the hologram drifts even though everything is referred to the XROrigin, which should remain fixed.

Am I missing an additional transformation (e.g., head pose)? Or are the sensor pose and the depth camera frame not synchronous? I get the sensor pose just before processing the depth frame as suggested in the pixel-sensors API examples:

```csharp
Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value, offset);
if (pixelSensorFeature.GetSensorData(sensorId.Value, stream, out var frame, out var metaData,
        Allocator.Temp, shouldFlipTexture: true))
{
    // Process frames ...
    Debug.Log("Sensor Pose: " + sensorPose);
    streamVisualizer.ProcessFrame(frame, metaData, sensorPose);
}
```

Any help or advice would be greatly appreciated!

Hi @alessandro.albanesi,

Is the GameObject that is overlaid onto the physical object a child of the headset in the hierarchy? It may be that the object briefly follows the head pose because it is a child of the camera or the XR Origin.

Hi and thanks for the prompt response!

No, as you can see, the stylus object is not a child of any hierarchy in the scene.
I’ve attached a screenshot of the scene hierarchy for reference.

Here is my ML Rig:

The CV camera and the headpose are out of sync, so that may be the cause of the drift. This can be resolved by using the world camera.

I am using the depth camera (raw). Is that the CV camera you are referring to? From what you are saying, the depth frame and the depth sensor pose are not synchronized, is that correct?

Have you tried obtaining the pose after getting the sensor data?


```csharp
if (pixelSensorFeature.GetSensorData(sensorId.Value, stream, out var frame, out var metaData,
        Allocator.Temp, shouldFlipTexture: true))
{
    // Process frames ...
    Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value, offset);
    Debug.Log("Sensor Pose: " + sensorPose);
    streamVisualizer.ProcessFrame(frame, metaData, sensorPose);
}
```

Yes, same result. I also checked how long the frame processing takes (thresholding, the PnP algorithm, etc.) before getting the next sensor data, but it is less than 500 ms (as suggested in one of the posts). Could the problem be related to the unsynchronized data of the depth sensor and the sensor pose? If so, why are they unsynchronized? Is there a way to synchronize them?

Sorry to hear that you are running into this issue. The team is out of office today, but you could look into making a minor change in the SDK itself and see if it resolves your issue, possibly by modifying a function in the SDK to allow an additional property to be passed into the GetSensorPose function. (Note: we have not tested this workaround yet.)

Move the Magic Leap Unity SDK into your project’s packages directory.

Make sure the Magic Leap Unity Package is imported into your project as a folder in your /packages/ directory. (You can skip this step if you are using the Magic Leap Examples Project)

  1. Right-click the Magic Leap SDK folder in the Unity Editor and then select “Show in file explorer” (or equivalent)

  2. Copy the com.magicleap.unitysdk directory and paste it into the <YourProject>/packages/ directory.

  3. The contents of the folder can now be edited without reverting to the version that was embedded in the package.

Edit the Scripts

  1. Open the MagicLeapPixelSensor.cs script.
    Located under: /Packages/com.magicleap.unitysdk/Runtime/OpenXR/PixelSensors/MagicLeapPixelSensor.cs

  2. Duplicate the GetSensorPose function (lines 74-98) and modify it so that it accepts a captureTime and uses this value instead of the next predicted display time.

```csharp
public Pose GetSensorPose(Pose offset, long captureTime)
{
    unsafe
    {
        var convertedOffsetPose = XrPose.GetFromPose(offset);
        if (sensorSpace == 0)
        {
            var createSpaceInfo = new XrPixelSensorCreateSpaceInfo
            {
                Type = XrPixelSensorStructTypes.XrTypePixelSensorCreateSpaceInfoML,
                Sensor = Handle,
                Offset = convertedOffsetPose
            };
            var xrResult = NativeFunctions.XrCreatePixelSensorSpace(PixelSensorFeature.AppSession, ref createSpaceInfo, out sensorSpace);
            if (!Utils.DidXrCallSucceed(xrResult, nameof(PixelSensorNativeFunctions.XrCreatePixelSensorSpace)))
            {
                return default;
            }
        }

        var spaceInfoFunctions = PixelSensorFeature.SpaceInfoNativeFunctions;
        // The line below was updated to accept a capture time value.
        var pose = spaceInfoFunctions.GetUnityPose(sensorSpace, PixelSensorFeature.AppSpace, captureTime);
        return pose;
    }
}
```
  1. Open the MagicLeapPixelSensorFeatureAPI.cs script.
    Located under: /Packages/com.magicleap.unitysdk/Runtime/OpenXR/PixelSensors/MagicLeapPixelSensorFeatureAPI.cs

  2. Duplicate the GetSensorPose function and modify the new version to accept a captureTime.

```csharp
public Pose GetSensorPose(PixelSensorId sensorType, long captureTime, Pose offset = default)
{
    if (!IsSensorConnected(sensorType, out var sensor))
    {
        return default;
    }
    // The line below was modified to pass in the capture time value.
    return sensor.GetSensorPose(offset, captureTime);
}
```

Edit your existing code

  1. Finally, with these modifications made, you can try to obtain the pose as you did previously, but passing the capture time into the GetSensorPose function:

```csharp
if (pixelSensorFeature.GetSensorData(sensorId.Value, stream, out var frame, out var metaData,
        Allocator.Temp, shouldFlipTexture: true))
{
    // The line below was modified to pass in the capture time value.
    Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value, frame.CaptureTime, offset);
    Debug.Log("Sensor Pose: " + sensorPose);

    // Use frame and pose data
    // Example: streamVisualizer.ProcessFrame(frame, metaData, sensorPose);
}
```

If you do test the workaround, please let me know.

You might not need to make changes to the SDK. Simply checking if the frame.CaptureTime == 0 could be enough.
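A guarded version of the earlier snippet might look like the following untested sketch. It assumes the modified `GetSensorPose(sensorId, captureTime, offset)` overload from the workaround above, and falls back to the original overload when the frame carries no valid capture timestamp:

```csharp
if (pixelSensorFeature.GetSensorData(sensorId.Value, stream, out var frame, out var metaData,
        Allocator.Temp, shouldFlipTexture: true))
{
    // Use the frame's capture time when it is valid; otherwise fall back
    // to the default pose query at the next predicted display time.
    Pose sensorPose = frame.CaptureTime != 0
        ? pixelSensorFeature.GetSensorPose(sensorId.Value, frame.CaptureTime, offset)
        : pixelSensorFeature.GetSensorPose(sensorId.Value, offset);

    streamVisualizer.ProcessFrame(frame, metaData, sensorPose);
}
```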