Give us as much detail as possible regarding the issue you’re experiencing.
ML2 OS version: 1.12.1
I am trying to use the raw and processed depth images streamed to a Python client to track tools with IR-reflective spheres attached to them. Specifically, I use the raw depth image to detect the sphere locations in 2D, then use the processed depth frame to estimate their 3D positions. The tracking only seems to work at one specific depth; at other depths, the detected sphere constellation looks like a scaled-up or scaled-down version of my tool. I’m using the following code to undistort the sphere centroids acquired from the raw depth image and to compute the ray that should point toward the sphere in the depth camera's local space, inspired by this post: Processing the depth frames - Unity Development - Magic Leap 2 Developer Forums.
```python
def undistort(self, input_pt):
    xy = input_pt / self.resolution - np.double(0.5)
    r2 = np.sum(xy * xy)
    r4 = r2 * r2
    r6 = r4 * r2
    xy_rd = xy * (1 + (self.d.k1 * r2) + (self.d.k2 * r4) + (self.d.k3 * r6))
    xtd = (2 * self.d.p1 * xy[0] * xy[1]) + (self.d.p2 * (r2 + (2 * xy[0] * xy[0])))
    ytd = (2 * self.d.p2 * xy[0] * xy[1]) + (self.d.p1 * (r2 + (2 * xy[1] * xy[1])))
    xy_rd[0] += xtd
    xy_rd[1] += ytd
    xy_rd += np.double(0.5)
    return (xy_rd * self.resolution - self.center) / self.focal
```
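As a sanity check on the routine above, here is a standalone version with the intrinsics passed in explicitly (the 544x480 resolution, principal point, and focal length below are hypothetical placeholders, not the actual ML2 depth-camera calibration). With all distortion coefficients set to zero it should reduce to plain pinhole normalization, i.e. `((u - cx) / fx, (v - cy) / fy)`:

```python
import numpy as np

def undistort_point(pt, resolution, center, focal,
                    k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Standalone version of the undistort method above.

    pt, resolution, center, focal are 2-element arrays in pixels.
    With k1 = k2 = k3 = p1 = p2 = 0 this is plain pinhole normalization.
    """
    xy = np.asarray(pt, dtype=float) / resolution - 0.5
    r2 = np.sum(xy * xy)
    r4 = r2 * r2
    r6 = r4 * r2
    # Radial term, then tangential terms, as in the original code:
    xy_rd = xy * (1 + k1 * r2 + k2 * r4 + k3 * r6)
    xy_rd[0] += 2 * p1 * xy[0] * xy[1] + p2 * (r2 + 2 * xy[0] * xy[0])
    xy_rd[1] += 2 * p2 * xy[0] * xy[1] + p1 * (r2 + 2 * xy[1] * xy[1])
    xy_rd += 0.5
    return (xy_rd * resolution - center) / focal

# Hypothetical intrinsics: 544x480 sensor, principal point at the center.
res = np.array([544.0, 480.0])
ctr = np.array([272.0, 240.0])
foc = np.array([500.0, 500.0])
# Zero coefficients: pixel (372, 340) maps to ((372-272)/500, (340-240)/500)
print(undistort_point([372.0, 340.0], res, ctr, foc))  # ≈ [0.2, 0.2]
```

If this check fails once you substitute the real coefficients, the coefficient ordering or the normalization convention is the first thing to verify against the ML2 intrinsics documentation.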
```python
# getting centroids
u = M["m10"] / M["m00"]
v = M["m01"] / M["m00"]
depth = raw_frame.depth[int(v), int(u)] * 1000  # convert to mm
uv = np.array([u, v])
unit_vec = undistort(uv)
ir_tool_centers.extend([
    unit_vec[0],
    unit_vec[1],
    depth,
])
```
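For context, the moments dictionary `M` above typically comes from `cv2.moments` on a thresholded blob; an equivalent pure-NumPy sketch for a binary mask (the mask below is a made-up test input) shows what the centroid computation does:

```python
import numpy as np

def blob_centroid(mask):
    """Centroid (u, v) of a binary blob via image moments.

    m00 is the pixel count, m10 the sum of u (column) coordinates,
    m01 the sum of v (row) coordinates; the centroid is (m10/m00, m01/m00).
    """
    vs, us = np.nonzero(mask)  # row indices are v, column indices are u
    if len(us) == 0:
        return None
    return float(us.mean()), float(vs.mean())

# A 3x3 blob centered at (u=5, v=4) in a small test mask:
mask = np.zeros((10, 10), dtype=bool)
mask[3:6, 4:7] = True
print(blob_centroid(mask))  # → (5.0, 4.0)
```

One robustness note that is easy to check here: `raw_frame.depth[int(v), int(u)]` samples a single pixel, so a noisy or saturated reading at the exact centroid pixel will corrupt the 3D estimate; averaging the depth over the blob's pixels is a common mitigation.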
```python
# spheres_xyz is a reformatted version of ir_tool_centers
xyz = spheres_xyz[i, :].copy()  # [x_ray, y_ray, depth]
xyz[2] += cur_radius  # z' = depth + radius
temp_vec = np.array([xyz[0], xyz[1], 1])
spheres_xyz[i, :] = temp_vec / np.linalg.norm(temp_vec) * xyz[2]
```
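The last step scales the normalized view ray so the reconstructed point lands at the sphere's center rather than its surface. A standalone sketch of that step, with made-up ray components and a hypothetical 12.7 mm sphere (note this assumes the depth value is the range *along the ray*; if the ML2 depth map stores the z-component instead, the ray should be scaled so its z equals `depth + radius`, not its norm):

```python
import numpy as np

def sphere_center(x_ray, y_ray, depth_mm, radius_mm):
    """Place the sphere center at (surface depth + radius) along the view ray.

    x_ray, y_ray are the undistorted ray components with z = 1;
    depth_mm is assumed to be range along the ray to the sphere surface.
    """
    ray = np.array([x_ray, y_ray, 1.0])
    ray /= np.linalg.norm(ray)  # unit view ray
    return ray * (depth_mm + radius_mm)

c = sphere_center(0.1, -0.05, 400.0, 6.35)  # hypothetical 12.7 mm sphere
print(np.linalg.norm(c))  # norm equals depth + radius, up to rounding
```

Whether the scaling should be applied to the vector norm or only to the z-component is exactly the kind of range-vs-z convention mismatch that produces the depth-dependent scaling you describe, so it is worth confirming against the depth frame documentation.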
Is this code correct? If not, what should I do to increase the robustness of the tracking?