I want to use two world cameras on Magic Leap 2 to perform triangulation and determine the 3D coordinates of a target point in Unity world space.
I tried to use the following data:
- From PixelSensorFisheyeIntrinsics: FocalLength (pixels), PrincipalPoint (pixels), RadialDistortion, TangentialDistortion
- From PixelSensorFeature.GetSensorPose(): position (Vector3) and rotation (Quaternion)
I have also used OpenCV for Unity to undistort the image and obtained the target point’s pixel coordinates in the undistorted image.
However, I haven't been able to get triangulation to work with this data. Is there any example code or documentation for performing triangulation using the World Cameras? I'd like to confirm that I'm using the correct parameters.
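For reference, here is a minimal sketch of the two-view triangulation math I am trying to apply. It is Python/NumPy for illustration only (my actual project uses OpenCV for Unity in C#), and all intrinsics, poses, and pixel values below are made-up placeholders. It also ignores the Unity-to-OpenCV handedness conversion, which a real implementation would need to handle.

```python
import numpy as np

def world_to_camera(R_cw, t_cw):
    # The sensor pose is the camera's pose IN world space (camera-to-world).
    # Triangulation needs the inverse: world-to-camera extrinsics.
    R_wc = R_cw.T
    t_wc = -R_wc @ t_cw
    return R_wc, t_wc

def projection_matrix(K, R_cw, t_cw):
    # Build the 3x4 projection matrix P = K [R | t].
    R_wc, t_wc = world_to_camera(R_cw, t_cw)
    return K @ np.hstack([R_wc, t_wc.reshape(3, 1)])

def triangulate(P0, P1, uv0, uv1):
    # Linear (DLT) triangulation: stack one constraint row per coordinate
    # and take the null-space direction via SVD. cv2.triangulatePoints
    # performs the same linear triangulation.
    u0, v0 = uv0
    u1, v1 = uv1
    A = np.vstack([u0 * P0[2] - P0[0],
                   v0 * P0[2] - P0[1],
                   u1 * P1[2] - P1[0],
                   v1 * P1[2] - P1[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous 4-vector
    return X[:3] / X[3]        # Euclidean 3D point

# Placeholder intrinsics: FocalLength and PrincipalPoint in pixels.
K = np.array([[350.0,   0.0, 508.0],
              [  0.0, 350.0, 508.0],
              [  0.0,   0.0,   1.0]])

# Placeholder poses: camera 0 at the origin, camera 1 shifted 0.10 m
# along +x, both looking down +z (OpenCV convention).
R0, t0 = np.eye(3), np.zeros(3)
R1, t1 = np.eye(3), np.array([0.10, 0.0, 0.0])

P0 = projection_matrix(K, R0, t0)
P1 = projection_matrix(K, R1, t1)

# Undistorted pixel coordinates of the same target in each image.
X = triangulate(P0, P1, (508.0, 508.0), (473.0, 508.0))
print(X)  # a point 1 m in front of camera 0, on its optical axis
```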
Thank you very much for your help!
Have you looked at the following post?
Thank you for your reply!
Yes, I successfully obtained the undistorted world camera image by referring to the post you mentioned.
However, I would like to ask whether triangulation can be performed using these undistorted images along with the parameters mentioned above. The reason I ask is that I couldn’t find detailed explanations for some of the parameters.
Specifically:
- Does PixelSensorFeature.GetSensorPose() return the optical center's position in Unity's world coordinate system, along with the optical axis orientation?
- The PrincipalPoint values I obtained are around (508, 508), but I am unsure which corner of the image is considered the origin for these coordinates.
Any clarification on these points would be greatly appreciated. Thank you!
It’s the same origin as in OpenCV: the top-left corner of the image, with x increasing to the right and y increasing downward.
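One practical consequence: if the target point's pixel coordinates were read with Unity's bottom-left, y-up texture origin, they need a vertical flip before being combined with OpenCV-convention intrinsics. A minimal sketch, where the 1016-pixel image height is an assumed placeholder (inferred from the ~(508, 508) principal point, so roughly a 1016x1016 image):

```python
def unity_to_opencv_pixel(u, v, image_height):
    """Flip the vertical coordinate from Unity's bottom-left, y-up texture
    origin to OpenCV's top-left, y-down image origin; u is unchanged."""
    return u, (image_height - 1) - v

# The bottom row of a 1016-pixel-tall Unity texture is row 1015 in OpenCV.
print(unity_to_opencv_pixel(508, 0, 1016))  # -> (508, 1015)
```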
You are correct that the GetSensorPose function returns the pose of the sensor's center.
Unfortunately, the topic of triangulating points from multiple images is a little outside the scope of this forum. But it sounds like you are on the right track using OpenCV and undistorted images.