How to Get Intrinsics for RGB and Depth Sensors? How About Extrinsics Between the Two?

You're correct: those two APIs do not report poses in the same reference frame by default.

MLCVCamera.GetFramePose() returns the pose in Unbounded tracking space (the MLSDK default), which is anchored to the world origin defined by the system.

PixelSensorFeature.GetSensorPose() returns the pose relative to the tracking origin mode set on your XR Origin (usually Device or Floor in Unity). This mismatch is why you see the large offset.

If you need them to align:
1. Make sure your tracking space is set to Unbounded, as mentioned in the OpenXR MLCamera doc.
2. Query both poses using the same frame timestamp; see this SDK modification example.
3. Verify the pose is not (0,0,0) before processing it.
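Putting those steps together, here is a rough Unity sketch. It assumes the Magic Leap Unity SDK's `MLCVCamera.GetFramePose()` and the OpenXR Pixel Sensor feature; `frameTimestamp`, `pixelSensorFeature`, `sensorId`, and `ProcessPose` are placeholders, and exact signatures may differ between SDK versions, so check against your installed SDK:

```csharp
// Sketch only -- verify names/signatures against your Magic Leap SDK version.
// Assumes tracking space has already been set to Unbounded (step 1),
// e.g. via the XR Origin / Magic Leap Reference Spaces settings.

// Step 2: query the ML Camera pose at the frame's timestamp.
Matrix4x4 cameraPose;
MLResult result = MLCVCamera.GetFramePose(frameTimestamp, out cameraPose);

// Step 3: reject invalid or zeroed poses before processing.
if (result.IsOk && cameraPose.GetPosition() != Vector3.zero)
{
    // Query the Pixel Sensor pose for the same frame so both poses
    // correspond to the same moment in time (placeholder call):
    // Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId);

    ProcessPose(cameraPose); // your own handling (placeholder)
}
```

With the tracking space unified and both queries keyed to the same timestamp, the remaining offset between the two poses should reduce to the fixed extrinsics between the sensors.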

This way, both the ML Camera and Pixel Sensor poses will be reported from the same origin.