The origin (0,0,0) is set when the device is turned on.
We also capture RGB and Depth frames.
We expect that if an RGB frame and a Depth frame are captured at the same moment, then the pose we get for the RGB frame should be very close to the pose we get for the Depth frame.
Based on our measurements, the difference (error) between the RGB pose and the pose of the closest Depth frame seems to correlate with the distance from the origin.
When we are close to the origin the difference is small,
but when we move 1 meter away from the origin, the error is 40-50 mm in at least one direction (x, y, or z). (This comparison does not include the orientation differences, but those matter too.)
The result is almost the same whether the device is stationary or moving at the moment of capture.
When we move back near the origin, the error becomes small again.
When getting the pose, we pass the sensorId and the captureTime:
pose = pixelSensorFeature.GetSensorPose(sensorId, frame.CaptureTime)
The method we call is slightly modified, based on your sample code:
public Pose GetSensorPose(PixelSensorId sensorType, long captureTime, Pose offset = default)
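For illustration, here is roughly how we compare the two poses. This is a simplified sketch: `rgbSensorId`, `depthSensorId`, `rgbFrame`, and `depthFrame` are placeholder names for our own variables, and `Quaternion.Angle` is just one way to express the orientation delta we mentioned above.

```csharp
// Query the pose of each sensor at its frame's capture time
// (same GetSensorPose call as quoted above).
Pose rgbPose = pixelSensorFeature.GetSensorPose(rgbSensorId, rgbFrame.CaptureTime);
Pose depthPose = pixelSensorFeature.GetSensorPose(depthSensorId, depthFrame.CaptureTime);

// Per-axis position error, converted to millimeters.
Vector3 delta = (rgbPose.position - depthPose.position) * 1000f;
Debug.Log($"dx={delta.x:F1} mm, dy={delta.y:F1} mm, dz={delta.z:F1} mm");

// Orientation difference in degrees (not included in the 40-50 mm figures above).
float angleDeg = Quaternion.Angle(rgbPose.rotation, depthPose.rotation);
Debug.Log($"orientation delta = {angleDeg:F2} deg");
```

Near the origin all three position components stay small; around 1 m away, at least one of them reaches the 40-50 mm range described above.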