How to use depth camera DistortionCoefficients?

I am trying to map depth camera data to cv camera frames to calculate depth at a given coordinate in the cv camera frames. Is there any further info or an example of how to use the DistortionCoefficients of the depth camera?

Hi @frankschoenhofer, happy to help with this. What type of sample are you looking for? What are you using to develop your application (Unity / Native)?

Hi! I'm using Unity - I am using Google Mediapipe to track human poses through the cvcamera and combine it with depth camera data to calculate joint positions in 3D. It works, but the accuracy could be better.
That's why I'm looking into ways to improve the mapping between the cv camera and the depth camera. Reading through the documentation I found the depth camera DistortionCoefficients, but the documentation doesn't explain much about that feature. Do you have any further reading or examples on how to use the coefficients to "undistort" the image?


That sounds awesome! I’d love to see it once you get it working.

Regarding documentation: the distortion coefficients are not specific to Magic Leap. They describe the lens distortion model that is used alongside the depth camera's projection matrix.

Other depth camera devices provide similar values. Since our depth camera API is experimental, it's still maturing and we are working on providing more samples in the future. For now I recommend looking at the work that has been done with existing depth cameras and applying it to our API.

For example: AzureKinect4Unity
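To make the coefficients concrete, here's a minimal sketch of the standard Brown-Conrady model (the one OpenCV and most depth cameras use, with radial terms k1, k2, k3 and tangential terms p1, p2). All numeric values below are placeholders for illustration; substitute the intrinsics and coefficients that the depth camera API reports:

```python
# Hypothetical intrinsics and distortion coefficients -- replace with
# the values reported by your device's camera calibration.
fx, fy = 365.0, 365.0            # focal lengths (pixels)
cx, cy = 256.0, 256.0            # principal point (pixels)
k1, k2, k3 = 0.1, -0.05, 0.01    # radial distortion coefficients
p1, p2 = 0.001, -0.002           # tangential distortion coefficients

def distort(u, v):
    """Map an ideal (undistorted) pixel to its distorted position
    using the Brown-Conrady model."""
    # pixel -> normalized camera coordinates
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # normalized -> pixel
    return x_d * fx + cx, y_d * fy + cy
```

Undistorting goes the other way (usually by iterating this mapping or building a remap table), which is what `cv2.undistort` does for a whole image given the same coefficients.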

Let me know what type of example/sample could help with this so I can share it with our Voice of Customer team.

Maybe mapping the camera images in 2D without lens distortion correction is enough. I'll try optimizing that first. The optics of the two cameras look pretty similar.

Thanks for the links - I'll look into that!
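For the mapping itself, here's a minimal sketch of the usual approach (back-project the depth pixel to 3D, apply the depth-to-RGB rigid transform, project with the RGB intrinsics). All numbers, and the identity rotation, are illustrative placeholders, not values from the ML2 calibration:

```python
# Hypothetical intrinsics and extrinsics -- substitute the real calibration.
fx_d, fy_d, cx_d, cy_d = 365.0, 365.0, 256.0, 256.0   # depth camera
fx_c, fy_c, cx_c, cy_c = 600.0, 600.0, 320.0, 240.0   # cv (RGB) camera
tx, ty, tz = 0.05, 0.0, 0.0   # depth->RGB translation in meters
                               # (rotation assumed identity for brevity)

def depth_pixel_to_rgb(u, v, z):
    """Project a depth pixel (u, v) with depth z (meters) into the
    RGB image, ignoring lens distortion."""
    # back-project to a 3D point in the depth camera frame
    X = (u - cx_d) / fx_d * z
    Y = (v - cy_d) / fy_d * z
    Z = z
    # rigid transform into the RGB camera frame
    X, Y, Z = X + tx, Y + ty, Z + tz
    # pinhole projection with the RGB intrinsics
    return fx_c * X / Z + cx_c, fy_c * Y / Z + cy_c
```

Note this maps depth pixels into the RGB image; to look up the depth of a joint found in the RGB image you'd typically splat the whole depth map this way and then read the nearest projected sample.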

I believe the depth frame is not paired with the RGB image. That means that for an RGB image taken at time t, the depth map you get does not correspond to the same t. Thus, using the 2D position of a joint from the RGB image to find the depth in the depth frame will give you inaccurate results.
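One way to reduce that error is to buffer recent depth frames with their timestamps and, for each RGB frame, pick the depth frame whose timestamp is closest. A minimal sketch (the callback and function names are illustrative, not from the ML2 API):

```python
from collections import deque

# keep the most recent depth frames with their capture timestamps
depth_buffer = deque(maxlen=8)   # (timestamp_ns, frame) pairs

def on_depth_frame(timestamp_ns, frame):
    """Call this from the depth camera callback."""
    depth_buffer.append((timestamp_ns, frame))

def nearest_depth_frame(rgb_timestamp_ns):
    """Return the buffered depth frame captured closest in time
    to the given RGB frame timestamp, or None if the buffer is empty."""
    if not depth_buffer:
        return None
    return min(depth_buffer,
               key=lambda tf: abs(tf[0] - rgb_timestamp_ns))[1]
```

This doesn't eliminate the offset for fast motion, but it bounds it to roughly half the depth frame interval instead of a full frame or more.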

@frankschoenhofer I am also trying to use MediaPipe on the ML2. I am having issues using a plugin written for Unity here (GitHub - homuler/MediaPipeUnityPlugin: Unity plugin to run MediaPipe). How did you get MediaPipe to run on the ML2? This plugin? If not, can I ask how?

Hi @mattycorbett - I used that same plugin, but it has to be built for Android x86_64. I forked the repo and added the x86_64 target to .bazelrc.
I was planning to contact homuler; maybe they'll integrate it.
This is the fork: GitHub - karmacod3r/MediaPipeUnityPluginAndroidX86_64: Unity plugin to run MediaPipe graphs
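For reference, such a .bazelrc addition would typically mirror the existing arm64 entries. The lines below are an illustrative guess modeled on MediaPipe's own android_arm64 config, not a quote from the fork; check the fork's .bazelrc for the exact change:

```
# Hypothetical x86_64 config, modeled on the existing android_arm64 entries
build:android_x86_64 --config=android
build:android_x86_64 --cpu=x86_64
build:android_x86_64 --fat_apk_cpu=x86_64
```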


@frankschoenhofer Thank you so much for responding! This is awesome. Did you have to change the file folder structure or anything else? Just clone the repo, build for x86_64 (did you use a specific NDK level?), and import, right?

Thank you again!

No need to clone it. I forked it, edited it, and ran the GitHub action to build it.

…when the action is through, you can download its artifacts, i.e. the Unity package. I just merged in recent changes and started a new build. I can keep you updated on whether that succeeds.

@frankschoenhofer Thank you!

@frankschoenhofer The tarball works! It's now throwing the error below, but that tells me the libraries are loading! Have you ever had this error?

No, maybe that's because MediaPipe is not included in the plugin anymore. I guess MediaPipe has to be built for x86 too.
I've used the old version 0.10.3 until now. You can download that one under „releases" if you want to try it. Maybe I'll look into building MediaPipe next week. Let me know if you figure that one out; it must be something similar.

@frankschoenhofer I've made more progress. It wasn't the libraries; it's the config file. It throws errors when I use the CPU-based files, but not when I use the GPU files. Do you remember whether you used the CPU or GPU files?