Give us as much detail as possible regarding the issue you’re experiencing:
Unity Editor version: 2022.3.42f1
ML2 OS version: 1.12.0
Unity SDK version: 2.6
Host OS: Windows 11
Error messages from logs (syntax-highlighting is supported via Markdown): -
We use the suggested SDK modification from the
Hologram Drift Issue When Tracking Object with Retroreflective Markers using Depth Camera (Raw data) - OpenXR - Magic Leap 2 Developer Forums
topic to get the capture timestamps for the RGB and depth frames.
With this we can pair RGB and depth frames whose capture times differ by less than 10 ms, so we always work with the closest pairs.
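The pairing itself is roughly this (a sketch only; TimestampedFrame and its fields are placeholders for however we buffer frames and their capture timestamps, not SDK types):

```csharp
using System;
using System.Collections.Generic;

// Sketch of the pairing step: for every RGB frame, find the depth frame
// with the closest capture time and keep the pair only if the difference
// is under 10 ms. TimestampedFrame is a placeholder for our own buffer
// entry, not an SDK type.
struct TimestampedFrame
{
    public long CaptureTimeNs;   // capture timestamp in nanoseconds
    public object Payload;       // the frame data we keep around
}

static class FramePairing
{
    public static List<(TimestampedFrame rgb, TimestampedFrame depth)> PairFrames(
        IReadOnlyList<TimestampedFrame> rgbFrames,
        IReadOnlyList<TimestampedFrame> depthFrames,
        long maxDiffNs = 10_000_000) // 10 ms
    {
        var pairs = new List<(TimestampedFrame, TimestampedFrame)>();
        foreach (var rgb in rgbFrames)
        {
            TimestampedFrame best = default;
            long bestDiff = long.MaxValue;
            foreach (var depth in depthFrames)
            {
                long diff = Math.Abs(depth.CaptureTimeNs - rgb.CaptureTimeNs);
                if (diff < bestDiff) { bestDiff = diff; best = depth; }
            }
            if (bestDiff <= maxDiffNs)
                pairs.Add((rgb, best));
        }
        return pairs;
    }
}
```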
However, if we compare the poses returned for the RGB and depth frames, we see larger and larger differences as we move further from the origin/starting point.
The next charts show us moving around a box twice: first continuously, then stopping for a while several times.
The first chart shows the differences; the second one shows the absolute position values in mm.
On the first chart I also displayed the capture-time difference in ms.
We checked it on multiple devices; they show almost the same differences.
What could cause this? How can we avoid it?
Additional info: we scanned the local space before the tests.
Thank you for the detailed graph.
Are you obtaining the RGB image and the camera pose using the MLCamera API?
Also, to make sure that I understand: the main issue you are seeing is that the pose relative to the origin for the two frames is inconsistent as you move around the environment?
We use
MagicLeap.OpenXR.Features.PixelSensors.MagicLeapPixelSensorFeature.GetSensorData(PixelSensorId sensorType, uint streamIndex, out PixelSensorFrame frame, out PixelSensorMetaData metaData, Allocator allocator, long timeOut = 10, bool shouldFlipTexture = true).
For getting the pose, we use the custom GetSensorPose you wrote in the
Hologram Drift Issue When Tracking Object with Retroreflective Markers using Depth Camera (Raw data) - OpenXR - Magic Leap 2 Developer Forums
topic.
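Roughly, one capture looks like the sketch below. This is illustrative only: pixelSensorFeature, sensorId and streamIndex come from our own sensor setup, GetSensorData is assumed to have the signature quoted above, the capture timestamp comes from the SDK modification in the linked topic (placeholder here), and getSensorPose stands in for the custom GetSensorPose helper, which is not a stock SDK method.

```csharp
using MagicLeap.OpenXR.Features.PixelSensors;
using Unity.Collections;
using UnityEngine;

public class SensorCaptureSketch
{
    // Illustrative only: pixelSensorFeature, sensorId and streamIndex come
    // from our sensor setup; getSensorPose stands in for the custom
    // GetSensorPose helper from the linked topic (not a stock SDK method).
    public void CaptureOnce(MagicLeapPixelSensorFeature pixelSensorFeature,
                            PixelSensorId sensorId,
                            uint streamIndex,
                            System.Func<PixelSensorId, Pose> getSensorPose)
    {
        // GetSensorData called with the signature quoted above
        // (defaults used for timeOut / shouldFlipTexture).
        if (pixelSensorFeature.GetSensorData(
                sensorId, streamIndex,
                out var frame,
                out var metaData,
                Allocator.Temp))
        {
            // Capture timestamp exposed by the SDK modification from the
            // linked topic (placeholder value here, not a stock SDK property).
            long captureTime = 0;

            // Pose of the sensor in the application reference space.
            Pose pose = getSensorPose(sensorId);

            // ... buffer (captureTime, frame, pose) for the pairing step ...
        }
    }
}
```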
What we see is that, even when we capture RGB and depth frames with only about 0.05 s capture-time difference, and even with a "still" headset, the pose (x, y, z) values are much higher when we capture frames 1 meter away from the origin than when we capture frames closer to the origin.
On the second graph you can see the distance from the origin;
on the first you can see the error between the RGB and depth frame poses.
The correlation between the distance from the origin and the error between the RGB and depth poses is easy to see.
On the first graph I also added remarks per section (at the top of the graph) describing the action we were taking while the frames were recorded.
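Per paired frame, the plotted quantities are computed roughly like this (a sketch, assuming the error is simply the positional difference between the two poses; rgbPose and depthPose are the poses returned for the paired RGB and depth frames):

```csharp
using UnityEngine;

static class PairMetrics
{
    // Sketch of the two plotted quantities for one RGB/depth pair.
    // rgbPose / depthPose are the poses returned for the paired frames.
    public static (float distanceFromOriginMm, float rgbDepthErrorMm) Compute(
        Pose rgbPose, Pose depthPose)
    {
        // Second graph: distance of the capture position from the origin.
        float distanceFromOriginMm = rgbPose.position.magnitude * 1000f;

        // First graph: positional difference between the RGB and depth poses.
        float rgbDepthErrorMm =
            Vector3.Distance(rgbPose.position, depthPose.position) * 1000f;

        return (distanceFromOriginMm, rgbDepthErrorMm);
    }
}
```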
I’m not sure I understand the question fully.
GetSensorPose returns the 3D pose of the sensor relative to the origin of the application. The pose is not relative to the headset itself, but rather to a tracked point in the scene. The values would appear to grow for both poses because you are moving further away from the origin (0, 0, 0).
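If the goal is to compare the two sensor poses independently of how far you are from the origin, one option (just a sketch, assuming both poses are reported in the same application reference space) is to express the depth sensor pose in the RGB sensor's frame; that relative transform should stay roughly constant wherever you are in the room:

```csharp
using UnityEngine;

static class SensorPoseUtils
{
    // Express the depth sensor pose relative to the RGB sensor pose.
    // With both poses in the same application reference space, this relative
    // transform should be (nearly) constant regardless of where you are.
    public static Pose GetDepthRelativeToRgb(Pose rgbPose, Pose depthPose)
    {
        Quaternion invRgbRotation = Quaternion.Inverse(rgbPose.rotation);
        Vector3 relativePosition = invRgbRotation * (depthPose.position - rgbPose.position);
        Quaternion relativeRotation = invRgbRotation * depthPose.rotation;
        return new Pose(relativePosition, relativeRotation);
    }
}
```

If this relative transform still grows with distance from the origin, that would isolate the inconsistency you are measuring from the expected growth of the absolute coordinate values.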