Controller losing tracking when it goes to sleep

After a lot of work trying to narrow this down, I finally discovered that the problem only manifests while running the new OpenXR Pixel Sensor API. Something about having the pixel sensor active causes the controller position to drift away from the origin, and eventually the controller fails to find itself at all. This happens even upstream, in the system menu.

I'm going to attempt to roll back to MLCamera until PixelSensor gets fixed.


Thank you for sharing that information. Were you using the World Camera Pixel Sensor in its default configuration, or did you modify some of the exposure properties before connecting?

Default. Looking through my code and not finding any modification like that.


Thank you for the additional info. In the meantime, I have marked this issue as a bug.

Rolling back to MLCamera has been much more difficult than expected, primarily because the documentation does not seem to match the current API. The extrinsics are not giving reliable numbers, and the docs say we need to make sure Perception Snapshot is enabled, but that setting doesn't actually exist in this version. That's not the only outdated advice. So I'm kind of stuck between an abandoned version (MLCamera) and a half-baked version (Pixel Sensor).

Are there unpublished docs? What can I do here to get accurate extrinsics data? I'll take a static offset if I have to; otherwise I'm stuck trying to hack together some kind of calibration tool.

Hmm :thinking: I'm sorry to hear that you are running into those issues. What options do you see when you go into the OpenXR settings and select the gear icon next to the Magic Leap Support feature?

MLCamera is still supported, and we try to keep those docs up to date.

Ugh. It was there. My screen was too small and the gear was hidden.

Now that I have it, I'm getting solid triangulation from the pose given through the extrinsics, but it's consistently offset. The offset appears to match the difference between the device's pose at device start versus at app start. So I seem to be getting a pose in the device coordinate system while trying to use it in the app coordinate system.

I can't find anything in the docs for translating between the two: some call I can make during an app session that returns the transform matrix between the two spaces. I assume this must exist?
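
For reference, here's a minimal sketch of the workaround I have in mind, assuming my hypothesis is right and the offset really is just the rigid transform between the device-start origin and the app-start origin. `DeviceToAppTransform` is a placeholder for whatever call or calibration step would actually supply that transform (I haven't found one in the docs):

```csharp
using UnityEngine;

public static class ExtrinsicsSpaceFix
{
    // Placeholder: the rigid transform from the device-start (system) origin
    // to the app-start (session) origin. Nothing in the current docs provides
    // this directly; it would have to come from an API call or a calibration step.
    public static Matrix4x4 DeviceToAppTransform = Matrix4x4.identity;

    // Re-express a camera extrinsics pose reported in the device coordinate
    // system as a pose in the app coordinate system.
    public static Pose ToAppSpace(Pose extrinsicsInDeviceSpace)
    {
        Matrix4x4 poseInDevice = Matrix4x4.TRS(
            extrinsicsInDeviceSpace.position,
            extrinsicsInDeviceSpace.rotation,
            Vector3.one);

        Matrix4x4 poseInApp = DeviceToAppTransform * poseInDevice;

        return new Pose(
            poseInApp.GetColumn(3), // translation column of the composed matrix
            poseInApp.rotation);    // rotation part of the composed matrix
    }
}
```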

Unity does not provide a conversion between XR Spaces (even though this call exists in native OpenXR). However, you can set your application to track in the same space as the MLCamera pose. See the link on the ML Camera page in the OpenXR section to set your application to Unbounded space.
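
As a rough sketch, requesting that space from a script looks something like the following. This uses Unity's generic XRInputSubsystem API rather than anything Magic Leap specific, and it assumes the relevant reference-space feature is already enabled in your OpenXR settings:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

public class UnboundedTrackingSpace : MonoBehaviour
{
    void Start()
    {
        // Find the active XR input subsystem and request the Unbounded
        // tracking origin, so the app tracks in the same space as the
        // MLCamera pose.
        var subsystems = new List<XRInputSubsystem>();
        SubsystemManager.GetSubsystems(subsystems);

        foreach (var subsystem in subsystems)
        {
            if (subsystem.TrySetTrackingOriginMode(TrackingOriginModeFlags.Unbounded))
            {
                Debug.Log("Tracking origin set to Unbounded.");
                return;
            }
        }

        Debug.LogWarning("Could not set Unbounded tracking origin; " +
                         "check that the reference-space feature is enabled in OpenXR settings.");
    }
}
```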