I have been trying a bunch of things but no luck so far. Is there a doc on how to do this? I found the one for the deprecated SDK but not the OpenXR Unity approach. So how do you get the pose that corresponds to the picture you just took (for depth and RGB)?
For the RGB camera you can use the previous ML Camera API with Perception Snapshots enabled in the settings of the Magic Leap Support OpenXR feature in Unity. You will also need to set your tracking origin to Unbounded.
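As a rough sketch of that flow, the capture callback's `ResultExtras` carries the frame timestamp, which you can pass to `MLCVCamera.GetFramePose` to get the camera pose for that exact frame. This is untested and based on the ML2 Unity SDK examples, so treat the exact callback signature and the `VCamTimestamp` field name as assumptions to verify against your SDK version:

```csharp
using UnityEngine;
using UnityEngine.XR.MagicLeap;

public class CameraPoseExample : MonoBehaviour
{
    // Hook this up to the ML Camera capture callback, e.g.
    // captureConfig's OnRawVideoFrameAvailable / image-capture event
    // (signature assumed from the ML2 Unity SDK samples).
    private void OnFrameAvailable(MLCamera.CameraOutput output,
                                  MLCamera.ResultExtras extras,
                                  MLCamera.Metadata metadata)
    {
        // Ask the CV camera API for the pose at the frame's timestamp.
        // Requires Perception Snapshots enabled and an Unbounded tracking origin.
        Matrix4x4 framePose;
        MLResult result = MLCVCamera.GetFramePose(extras.VCamTimestamp, out framePose);
        if (result.IsOk)
        {
            // Decompose the matrix into a Unity position/rotation.
            Vector3 position = framePose.GetColumn(3);
            Quaternion rotation = framePose.rotation;
            Debug.Log($"RGB frame pose: {position} {rotation.eulerAngles}");
        }
        else
        {
            Debug.LogWarning($"GetFramePose failed: {result}");
        }
    }
}
```

The key point is to use the timestamp delivered with the frame rather than the pose at the time the callback runs, since the two can differ by several frames.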
To get the pose of a pixel sensor at a specific time, take a look at this post: