You will need to do two things:
- Undistort the image so that the depth points project correctly.
- Account for the fact that the image is a range image: each pixel's value is the distance *along the ray* through that pixel, not the planar z-depth. So you scale the normalized ray direction for the pixel by the depth value, rather than simply setting `z = depth`.
The example below uses a cached projection table (that handles the distortion) and the depth data obtained from the depth sensor:
```csharp
// Iterate through each pixel in the depth data.
for (int y = 0; y < resolution.y; ++y)
{
    for (int x = 0; x < resolution.x; ++x)
    {
        // Calculate the linear index from the x, y coordinates (note the vertical flip).
        int index = x + (resolution.y - y - 1) * resolution.x;
        float depth = depthData[index];

        // Skip processing if depth is out of range or confidence is too low (if the filter is enabled).
        // Confidence comes directly from the sensor pipeline and is represented as a float ranging from
        // [-1.0, 0.0] for long range and [-0.1, 0.0] for short range, where 0 is highest confidence.
        if (depth < minDepth || depth > maxDepth || (useConfidenceFilter && confidenceData[index] < -0.1f))
        {
            // Position invalid points at the origin (0, 0, 0).
            depthPoints[index] = Vector3.zero;
            continue;
        }

        // Use the cached projection table to look up the UV coordinates for the current pixel.
        Vector2 uv = cachedProjectionTable[y, x];

        // Unproject: the table entry defines a ray direction (uv.x, uv.y, 1);
        // normalize it and scale by the range value to get a camera-space point.
        Vector3 cameraPoint = new Vector3(uv.x, uv.y, 1).normalized * depth;

        // Convert the camera-space point into a world-space point.
        Vector3 worldPoint = cameraToWorldMatrix.MultiplyPoint3x4(cameraPoint);

        // Store the world-space point in the depthPoints array.
        depthPoints[index] = worldPoint;
    }
}
```
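To see why the normalization matters, here is a minimal, Unity-free sketch of just the scaling step. The `(ux, uy)` value and the `Unproject` helper are hypothetical, standing in for one entry of the cached projection table; the point is that scaling the normalized ray by the range value puts the point exactly `range` meters from the camera origin:

```csharp
using System;
using System.Globalization;

static class RangeImageMath
{
    // Unproject one pixel of a range image: (ux, uy) is a hypothetical
    // projection-table entry (the ray direction is (ux, uy, 1)), and `range`
    // is the sensor value, i.e. the distance ALONG that ray in meters.
    public static (double X, double Y, double Z) Unproject(double ux, double uy, double range)
    {
        double len = Math.Sqrt(ux * ux + uy * uy + 1.0);
        return (ux / len * range, uy / len * range, 1.0 / len * range);
    }

    static void Main()
    {
        var p = Unproject(0.3, -0.2, 2.0);

        // The point lies exactly 2.0 m from the camera origin, which is what
        // a range image encodes. Treating the value as planar depth (z = 2.0)
        // would instead place the point farther than 2.0 m from the origin.
        double dist = Math.Sqrt(p.X * p.X + p.Y * p.Y + p.Z * p.Z);
        Console.WriteLine("distance from origin: " + dist.ToString("F3", CultureInfo.InvariantCulture));
    }
}
```

If you skip the `.normalized` (or the division by `len` here), points away from the image center get pushed too far out, producing a bowl-shaped distortion in the point cloud.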
Here is a more detailed example: