Feasibility of Inside-Out Tracking for a Custom IR LED Cube with ML2

Hi,

I’m interested in implementing an inside-out tracking project with a quality level comparable to the Magic Leap 2 controller.

My concept involves a 5 cm cube with a 750 nm IR LED pattern on each side, along with an IMU inside for angular data. The goal is to overlay a 3D model of the cube on the physical one with high precision, similar to how the ML2 controller's tracking and overlay work.

I’m considering leveraging the MagicLeap.OpenXR.Features.PixelSensors API in combination with OpenCV for IR LED position detection and using the IMU to provide orientation data.
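Roughly, here's how I'm picturing the OpenCV side of the detection (just a sketch in Python; the grayscale frame would come from the Pixel Sensor API, which I've left out, and the brightness/area thresholds are guesses that would need tuning):

```python
import cv2
import numpy as np

def detect_ir_blobs(gray_frame, min_brightness=200):
    """Find bright IR LED blobs in a grayscale world-camera frame.

    Returns a list of (x, y) sub-pixel centroids, one per detected LED.
    """
    # IR LEDs should be near-saturated; threshold away everything else.
    _, mask = cv2.threshold(gray_frame, min_brightness, 255, cv2.THRESH_BINARY)

    # One connected component per LED blob.
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

    points = []
    for i in range(1, num):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if 2 <= area <= 200:  # reject single-pixel noise and large reflections
            points.append(tuple(centroids[i]))
    return points
```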

Do you think the ML2 APIs and hardware capabilities are robust enough to achieve this level of tracking accuracy and overlay precision? Could I expect a result on par with the controller, or are there specific limitations I should be aware of?

Thank you for your insights!

Tracking a peripheral device to the accuracy of the Magic Leap 2 controller is a nontrivial task and could be considered a research project by itself. Note that the Magic Leap 2 controller's pose is obtained using a combination of sensors, cameras, and IR lights.

Some notes regarding our world cameras:

The world cameras report in two separate streams (0 and 1). Combined, they report at 60 Hz, so each stream is effectively 30 Hz. However, the exposure can only be adjusted on one of the streams, since the other stream must keep constant settings to maintain head pose. This means that, depending on the environmental conditions, you may not be able to see the IR lights in more than a single 30 Hz stream.

Thanks,

OK, so let's try to be less ambitious. Do you think it would be hard to get an IR LED's position in space using the Pixel Sensor framework? My idea is to get a single point from /pixelsensor/world/left and /pixelsensor/world/right and use triangulation to find the LED's position in world space.
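For the triangulation step, this is roughly what I have in mind (sketch only: the intrinsics and camera-to-world poses would come from the Pixel Sensor frame metadata, the two frames are assumed to be close enough in time, and the pixel points are assumed to already be undistorted, e.g. via cv2.undistortPoints with P=K so they stay in pixel units):

```python
import cv2
import numpy as np

def triangulate_led(pt_left, pt_right, K_left, K_right,
                    world_from_left, world_from_right):
    """Triangulate one LED from a single pixel in each world camera.

    pt_left / pt_right:  (u, v) undistorted pixel coordinates of the LED
    K_left / K_right:    3x3 camera intrinsics
    world_from_*:        4x4 camera-to-world poses from the sensor metadata
    Returns the LED position as a 3-vector in world space.
    """
    # cv2.triangulatePoints expects world-to-camera projection matrices.
    P_left = K_left @ np.linalg.inv(world_from_left)[:3, :]
    P_right = K_right @ np.linalg.inv(world_from_right)[:3, :]

    pts4d = cv2.triangulatePoints(
        P_left, P_right,
        np.asarray(pt_left, dtype=np.float64).reshape(2, 1),
        np.asarray(pt_right, dtype=np.float64).reshape(2, 1),
    )
    return (pts4d[:3] / pts4d[3]).ravel()  # dehomogenize
```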

That is possible. You might not need overlap at all if you know the size of the IR pattern you are tracking, similar to how marker detection works: you would use the undistorted points and then estimate the depth based on how large the pattern appears. If you don't know the size, you could use the overlap/triangulation to estimate it before tracking. Note that the world cameras don't overlap completely, so you would only be able to estimate the size when the lights are visible across both cameras.
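To make the known-size idea concrete: if you know the 3D layout of the LEDs on one cube face, a single camera is enough to recover the full pose with a PnP solve, the same way fiducial marker trackers work. A sketch under those assumptions (the face layout below is illustrative, based on the 5 cm cube from the original post; note that LED-to-point correspondence is the hard part in practice, and the IMU orientation could help disambiguate it):

```python
import cv2
import numpy as np

# 3D positions of 4 corner LEDs on one 5 cm cube face, in the cube's frame.
# This layout is an assumption -- substitute your actual LED placement.
FACE_LEDS_3D = np.array([
    [-0.025, -0.025, 0.0],
    [ 0.025, -0.025, 0.0],
    [ 0.025,  0.025, 0.0],
    [-0.025,  0.025, 0.0],
], dtype=np.float64)

def estimate_face_pose(image_points, K, dist_coeffs):
    """Recover a cube face's pose from 4 detected LED pixels.

    image_points must be ordered to match FACE_LEDS_3D. Returns
    (rvec, tvec): the face's rotation and translation in the camera
    frame; the norm of tvec is the depth estimate.
    """
    ok, rvec, tvec = cv2.solvePnP(
        FACE_LEDS_3D,
        np.asarray(image_points, dtype=np.float64).reshape(-1, 1, 2),
        K, dist_coeffs,
        flags=cv2.SOLVEPNP_IPPE,  # suited to planar point sets
    )
    return (rvec, tvec) if ok else (None, None)
```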