Thank you for the reply. As a follow-up question: is the offset between the depth and CV cameras available in the API?
I created and tested the following pipeline with a made-up camera offset and got this error: MLCVCameraGetFramePose in the Magic Leap API failed. Reason: MLResult_PoseNotFound,
which, according to the documentation, means: "Coordinate Frame is valid, but not found in the current pose snapshot".
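For context, this is roughly how I fetch the CV camera pose for the detection frame (assuming I am using the Unity SDK's MLCVCamera.GetFramePose correctly; the wrapper method below is my own):

```csharp
using UnityEngine;
using UnityEngine.XR.MagicLeap;

// Called from my raw video frame callback; resultExtras comes with the captured frame.
void TryGetCvCameraPose(MLCamera.ResultExtras resultExtras)
{
    // The pose is looked up by the frame's timestamp; my understanding is that PoseNotFound
    // can be returned when that timestamp is not (or no longer) in the current pose snapshot.
    MLResult result = MLCVCamera.GetFramePose(resultExtras.VCamTimestamp, out Matrix4x4 cvCameraToWorld);
    if (!result.IsOk)
    {
        Debug.LogError($"MLCVCameraGetFramePose failed: {result}");
        return;
    }
    // cvCameraToWorld should map points from the CV camera frame into world (tracking) space.
}
```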
- I thought about transforming the normalized pixel coordinates from my detection model into normalized CV camera coordinates:
```csharp
ndc_coordinates = ((undistortedCoordinates * new Vector2(resolution.x, resolution.y)) - resultExtras.Intrinsics.Value.PrincipalPoint) / resultExtras.Intrinsics.Value.FocalLength;
```
- Then into the normalized depth camera coordinate system. The offset is made up for now; is there an API function for this, or do I need to calculate the offset from each camera's pose myself?
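If the offset is not exposed directly, I was thinking of deriving it from the two camera poses, along these lines (a sketch only: cvCameraToWorld is the pose from MLCVCamera.GetFramePose above, depthData and pointInCvFrame are my own placeholder names, and I am assuming the depth frame data exposes a world-space Position/Rotation):

```csharp
// Build the depth camera's camera-to-world matrix from the depth frame's pose
// (assumption: the depth frame data exposes a world-space Position and Rotation).
Matrix4x4 depthCameraToWorld = Matrix4x4.TRS(depthData.Position, depthData.Rotation, Vector3.one);

// Relative transform that re-expresses CV-camera-frame coordinates in the depth camera frame.
Matrix4x4 cvToDepth = depthCameraToWorld.inverse * cvCameraToWorld;

// Move a point (or a ray direction, with MultiplyVector) from the CV frame into the depth frame.
// Since the two cameras are not co-located, mapping normalized image coordinates between them
// is only exact once the depth along the ray is known.
Vector3 pointInDepthFrame = cvToDepth.MultiplyPoint3x4(pointInCvFrame);
```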
- Afterwards, I retrieve the point depth from a dictionary that I created during depth mapping.
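Roughly what that lookup looks like (all names here, depthByPixel, depthNdc, depthFocalLength and depthPrincipalPoint, are my own, and I am assuming the stored value is the z-depth rather than the radial range):

```csharp
using System.Collections.Generic;
using UnityEngine;

// depthByPixel is filled while I build the depth map, keyed by integer pixel coordinates
// of the depth image.
Vector3? LookUpCameraPoint(Vector2 depthNdc,
                           Vector2 depthFocalLength,
                           Vector2 depthPrincipalPoint,
                           Dictionary<Vector2Int, float> depthByPixel)
{
    // Inverse of the normalization used above: ndc = (pixel - principalPoint) / focalLength
    var pixel = new Vector2Int(
        Mathf.RoundToInt(depthNdc.x * depthFocalLength.x + depthPrincipalPoint.x),
        Mathf.RoundToInt(depthNdc.y * depthFocalLength.y + depthPrincipalPoint.y));

    if (!depthByPixel.TryGetValue(pixel, out float depth))
        return null;

    // Back-project into the depth camera frame (assuming z-depth and an
    // x-right / y-up / z-forward convention, which may need flipping for Unity).
    return new Vector3(depthNdc.x, depthNdc.y, 1f) * depth;
}
```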
- And finally, I convert to the world coordinate system to display the point in the headset:
```csharp
// cameraPoint is the back-projected point in the depth camera frame (from the lookup above)
Matrix4x4 cameraToWorldMatrix = depthPointCloudTest.cameraToWorldMatrix;
Vector3 worldPoint = cameraToWorldMatrix.MultiplyPoint3x4(cameraPoint);
```
It is the same error as in this topic, but I believe it comes from my wrong offset in the coordinate calculation. Or maybe the dictionary that I build during depth mapping is not really simultaneous with the detection from the CV camera.
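To rule out the second cause, I am thinking of checking the time gap between the stored depth frame and the CV frame the detection came from, along these lines (I am assuming MLTime exposes nanoseconds through .Value and that the depth frame data carries a FrameTimestamp; the 50 ms threshold is arbitrary):

```csharp
// Compare the CV frame timestamp with the timestamp of the depth frame used to build
// the dictionary; skip the detection if they are too far apart.
long cvNs = resultExtras.VCamTimestamp.Value;        // assumption: MLTime.Value is in nanoseconds
long depthNs = latestDepthData.FrameTimestamp.Value; // assumption: the depth frame carries an MLTime

double deltaMs = (cvNs - depthNs) / 1e6;
if (System.Math.Abs(deltaMs) > 50.0) // arbitrary threshold
{
    Debug.LogWarning($"CV and depth frames are {deltaMs:F1} ms apart; the depth map is probably stale.");
}
```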