How to get depth at specific Main/CV Camera pixel position?

Hi, how can I query the depth of an object at a specific pixel location from the Main/CV camera?

I've copied from the depth camera sample code and am reading from both the Main RGB camera and the Depth camera in my application. I detect some markers/objects using the RGB camera, and I'm wondering about the pixel/coordinate mapping between the depth and main cameras. For example:
If I read from the RGB Main/CV camera at its max resolution of 4096x3072 and detect the corner of a 2D marker at pixel coordinate (1000, 500), how could I find the corresponding (x, y) position in the 544x480 depth image?

Ideally, this would be a function such as
MLVec2f RgbPosToDepthPos(MLVec2f rgbPixelCoords, MLVec2f rgbImageResolution);
(the RGB image resolution is included because the aspect ratio and resolution of the RGB camera feed affect the pixel mapping)
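
For context on what I imagine the mapping would involve: my understanding is that relating two calibrated cameras usually goes through 3D, i.e. unproject a depth pixel to a 3D point with the depth camera's intrinsics, transform it with the depth-to-RGB extrinsics, and project it into the RGB image with the RGB intrinsics. Going directly from an RGB pixel to a depth pixel needs the depth value itself, so presumably the lookup would be built the other way around. Here is a rough sketch of what I mean; every struct and name below is a placeholder I made up, not an actual SDK type, and lens distortion is ignored:

```cpp
// Placeholder types; the real intrinsics/extrinsics would come from the
// camera calibration APIs (these structs are assumptions, not SDK types).
struct Vec3 { float x, y, z; };
struct Intrinsics { float fx, fy, cx, cy; };   // pinhole model, distortion ignored
struct Pose { float R[3][3]; Vec3 t; };        // depth-camera -> RGB-camera transform

// Unproject depth pixel (u, v) with measured depth d into a 3D point in the
// depth camera's frame. Assumes d is a z-distance; if d were a radial
// distance, the ray would need to be normalized to unit length first.
static Vec3 UnprojectDepthPixel(float u, float v, float d, const Intrinsics& k) {
    return { (u - k.cx) / k.fx * d, (v - k.cy) / k.fy * d, d };
}

// Project a 3D point (already in the RGB camera's frame) to RGB pixel coords.
static void ProjectToRgb(const Vec3& p, const Intrinsics& k, float& u, float& v) {
    u = k.fx * p.x / p.z + k.cx;
    v = k.fy * p.y / p.z + k.cy;
}

// Map one depth pixel into the RGB image: unproject, apply extrinsics, project.
static void DepthPixelToRgbPixel(float du, float dv, float depth,
                                 const Intrinsics& depthK, const Intrinsics& rgbK,
                                 const Pose& depthToRgb, float& ru, float& rv) {
    Vec3 p = UnprojectDepthPixel(du, dv, depth, depthK);
    Vec3 q {
        depthToRgb.R[0][0] * p.x + depthToRgb.R[0][1] * p.y + depthToRgb.R[0][2] * p.z + depthToRgb.t.x,
        depthToRgb.R[1][0] * p.x + depthToRgb.R[1][1] * p.y + depthToRgb.R[1][2] * p.z + depthToRgb.t.y,
        depthToRgb.R[2][0] * p.x + depthToRgb.R[2][1] * p.y + depthToRgb.R[2][2] * p.z + depthToRgb.t.z
    };
    ProjectToRgb(q, rgbK, ru, rv);
}
```

With that forward mapping one could render or index the depth image into the RGB frame and then answer the reverse lookup (RGB pixel to depth pixel) by nearest neighbor, but that's the kind of calibration I'd rather not have to maintain myself, hence the question.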

I wrote an app to read and save corresponding RGB and depth frames, and I noticed the depth camera has a much larger FOV than the Main/CV camera. The depth camera also has a bit of a fish-eye lens compared to the Main camera. I was going to try to find a mapping between the two cameras myself, but the fish-eye distortion of the depth camera, and the fact that the depth camera can't see things like the markings on a ruler/tape measure, make it very difficult to line up the images, so I thought I'd ask here whether such a function already exists.

Also, what's the accuracy of the depth sensor when working at a distance of about 1 meter?

Additionally, can you clarify the depth measurement?
Here is an example screenshot from Blender showing a camera's frustum looking at a flat surface. As you move away from the center of the camera's FOV, the rays between the camera and the surface get longer and more diagonal. If you query the depth of a pixel near the top of the camera's field of view, is the depth returned the length of the straight line (yellow line in the image) between the camera and that point, or the orthogonal distance (green line) between the camera and the plane?


If the depth returned is the yellow depth, is there an easy way to convert it to the orthogonal (green) depth using the built-in camera intrinsics/calibration, so that the depth corresponds to the "z" component of a camera-based coordinate system?
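
My guess is that, if it is the radial (yellow) value, the conversion would just scale by the cosine of the angle between the pixel's ray and the optical axis, which falls out of the intrinsics. A small sketch, where fx, fy, cx, cy are hypothetical pinhole intrinsics for the depth camera:

```cpp
#include <cmath>

// Convert a radial depth (distance along the ray, the yellow line) at pixel
// (u, v) into the z-depth along the optical axis (the green line).
// fx, fy, cx, cy are assumed pinhole intrinsics for the depth camera.
float RadialToZDepth(float radial, float u, float v,
                     float fx, float fy, float cx, float cy) {
    float xn = (u - cx) / fx;                              // normalized ray x
    float yn = (v - cy) / fy;                              // normalized ray y
    return radial / std::sqrt(xn * xn + yn * yn + 1.0f);   // radial * cos(angle to axis)
}
```

Is that roughly right, or is there a built-in helper that already does this?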

Thanks

Building on that last question: if the depth measurement is the yellow line (the direct ray from the camera center to the object), and you know the physical characteristics of the camera (FOV, focal length, etc.), couldn't you treat each pixel of the depth image as a ray and use depthValue * cameraRayDirectionAtPixel(x, y) to get a 3D (x, y, z) point for every pixel, i.e. a full point cloud? Is there an accessible API for doing so?


The depth returned will be the yellow line: the radial distance from the depth camera to the real-world location. Regarding the second question, we do not have a helper class for this, but I will put in a request.

For our platform, I subtract the principal point and divide by the focal length to get the x and y of the ray, and set z to 1. Then I multiply the whole vector by the depth.
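
In code, that recipe looks roughly like the sketch below, which also covers the earlier point-cloud question. It assumes undistorted pinhole intrinsics; the struct names and the depthIsRadial flag are placeholders rather than SDK API, and since the reported depth is described above as the radial (yellow) distance, the ray is normalized before scaling in that case:

```cpp
#include <cmath>
#include <vector>

// Placeholder pinhole intrinsics for the depth camera (not an SDK type).
struct PinholeIntrinsics { float fx, fy, cx, cy; };
struct Point3 { float x, y, z; };

// Unproject every depth pixel into a 3D point in the depth camera's frame,
// following the recipe above: subtract the principal point, divide by the
// focal length, set z = 1, then scale by the depth.
std::vector<Point3> DepthImageToPointCloud(const float* depth,      // row-major, meters
                                           int width, int height,
                                           const PinholeIntrinsics& k,
                                           bool depthIsRadial) {
    std::vector<Point3> cloud;
    cloud.reserve(static_cast<size_t>(width) * height);
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            float d = depth[v * width + u];
            float xn = (u - k.cx) / k.fx;   // ray x at z = 1
            float yn = (v - k.cy) / k.fy;   // ray y at z = 1
            float scale = d;
            // If the sensor reports the radial distance (yellow line), the ray
            // should be unit length before scaling; if it reports the
            // z-distance, scaling the z = 1 ray directly is enough.
            if (depthIsRadial) {
                scale /= std::sqrt(xn * xn + yn * yn + 1.0f);
            }
            cloud.push_back({ xn * scale, yn * scale, scale });
        }
    }
    return cloud;
}
```

Note that the fish-eye distortion of the depth camera mentioned earlier would need to be undistorted (or folded into the per-pixel ray directions) for this to stay accurate toward the edges of the image.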