Can I overlay a 3D model on a dynamic object detected in real time?

Hi! I'm working on a project where I need to overlay a 3D model on a physical object detected in real time. When the user moves the real object, the virtual object should move with it. I have looked at the Magic Leap documentation, but it is not clear to me whether this is possible or how it is done. The steps that would have to be performed are real-time detection and tracking of the object, and overlaying the virtual object for the whole experience. Is this possible? Do I need to use a third-party solution?

I really appreciate any help! Thank you!

The Magic Leap SDK does not include native functionality for object detection and tracking; however, third-party solutions like Vuforia and VisionLib can enable this functionality.

Note: depending on your use case, you may want to consider using marker tracking to track the object.

Thank you very much for your quick response. The project is for an academic environment, and the goal is to use ordinary physical objects (usually small) to simulate a surgical training environment and move them with your hands. I think marker tracking is probably not appropriate for this use case. To understand it better, could you explain what data I can access from the mesh? Looking at this example, it seems it is possible to do 3D reconstruction of objects.

I will investigate Vuforia and VisionLib further. Are these third-party solutions compatible with Magic Leap OpenXR, or not yet?

Thank you!

The Magic Leap 2 does not provide mesh classification, but you can access the geometry of the scanned environment. The environment is returned as mesh chunks in the Unity application and can be accessed using the ARFoundation Meshing Subsystem or the Magic Leap Meshing Component.
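To give a sense of what "geometry only, no classification" means in practice: each mesh chunk is essentially a list of vertices and triangle indices, and anything higher-level (bounds, anchor points, object candidates) you compute yourself. The sketch below is engine-agnostic Python, not Magic Leap or Unity API; the `MeshChunk` type and helper names are illustrative only.

```python
# Engine-agnostic sketch: a mesh chunk is just vertices + triangle indices,
# with no semantic labels. Names here are illustrative, not Magic Leap API.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class MeshChunk:
    vertices: List[Vec3]                   # positions in world space
    triangles: List[Tuple[int, int, int]]  # indices into `vertices`

def bounding_box(chunk: MeshChunk) -> Tuple[Vec3, Vec3]:
    """Axis-aligned bounds of one chunk -- useful for coarse region queries."""
    xs, ys, zs = zip(*chunk.vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def centroid(chunk: MeshChunk) -> Vec3:
    """Average vertex position; a cheap anchor point for an overlay."""
    n = len(chunk.vertices)
    xs, ys, zs = zip(*chunk.vertices)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

# Example: a single-triangle chunk
chunk = MeshChunk(vertices=[(0, 0, 0), (1, 0, 0), (0, 2, 0)],
                  triangles=[(0, 1, 2)])
print(bounding_box(chunk))  # ((0, 0, 0), (1, 2, 0))
```

Note that for small handheld objects the meshing resolution may be too coarse to reconstruct the object itself; the mesh is primarily meant for the surrounding environment.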

Would you still suggest VisionLib if we wanted to play with our own object detection models on the camera feed?

I'm fighting an uphill battle, as I'm not sure of the best way to go about this given the current simple camera example.

If you plan on deploying custom ML models for object detection, you would need to obtain the camera data and feed it to your model manually.
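"Feeding the data manually" usually means converting the raw camera frame into whatever tensor layout your model expects before each inference call. The sketch below is a plain Python illustration of that preprocessing step, assuming an RGBA byte buffer and a model that wants normalized RGB in HWC layout; none of it is Magic Leap API.

```python
# Hypothetical preprocessing for a custom detector: nearest-neighbor resize,
# drop the alpha channel, and scale byte values to [0, 1] (HWC layout).
from typing import List

def rgba_to_model_input(frame: bytes, width: int, height: int,
                        out_w: int, out_h: int) -> List[List[List[float]]]:
    out = []
    for oy in range(out_h):
        sy = oy * height // out_h          # nearest source row
        row = []
        for ox in range(out_w):
            sx = ox * width // out_w       # nearest source column
            base = (sy * width + sx) * 4   # 4 bytes per RGBA pixel
            r, g, b = frame[base], frame[base + 1], frame[base + 2]
            row.append([r / 255.0, g / 255.0, b / 255.0])
        out.append(row)
    return out

# Example: a 2x2 frame downsampled to 1x1 keeps the top-left pixel
frame = bytes([255, 0, 0, 255,   0, 255, 0, 255,
               0, 0, 255, 255,   255, 255, 255, 255])
print(rgba_to_model_input(frame, 2, 2, 1, 1))  # [[[1.0, 0.0, 0.0]]]
```

In a real app you would do this conversion on the GPU or with a native library for performance; the point is only that the frame-to-tensor plumbing is your responsibility.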

That said, Vuforia and VisionLib both provide ways to create a target from an object and detect it on the Magic Leap using their SDKs.

Hi, I have a similar use case to the original poster. Would it be possible to track moving humans and occlude or highlight them? For example, if I am standing on the sidewalk and people are passing by, I would like to highlight the outline of all the people in green.

Our SDK doesn't natively support this functionality, so you would need to rely on third-party solutions for human detection and tracking.

Integrating human tracking, even with something like OpenCV, would be quite complex. You would need to estimate the depth of each person or combine the RGB camera image with the depth data from the sensor to achieve the result you are looking for.
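One small piece of that fusion step can be sketched concretely: given a 2D person bounding box from your detector and a depth map registered to the RGB frame, you can estimate the person's distance by taking the median of the valid depth samples inside the box. This assumes the depth map is already aligned to the camera image; all names below are illustrative, not part of any Magic Leap API.

```python
# Estimate a person's distance from a registered depth map and a 2D box.
from statistics import median
from typing import List, Optional, Tuple

def estimate_person_depth(depth: List[List[float]],
                          box: Tuple[int, int, int, int]) -> Optional[float]:
    """box = (x0, y0, x1, y1) with exclusive upper bounds; 0.0 = no reading."""
    x0, y0, x1, y1 = box
    samples = [depth[y][x]
               for y in range(y0, y1)
               for x in range(x0, x1)
               if depth[y][x] > 0.0]       # skip invalid depth pixels
    # Median is robust to background pixels bleeding into the box edges.
    return median(samples) if samples else None

depth_map = [[0.0, 2.1, 2.0],    # person at ~2 m, background at ~8.5 m
             [8.5, 2.2, 2.0],
             [8.5, 8.5, 8.4]]
print(estimate_person_depth(depth_map, (0, 0, 3, 2)))  # 2.1
```

A per-box depth like this is enough for coarse occlusion ordering; per-pixel occlusion or a clean green outline would additionally need a segmentation mask rather than a box.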

Having tried both Vuforia and VisionLib, I would re-evaluate how you do things. Both "technically" work; however, depending on what you're trying to track, they can be very slow and very finicky about where you have to stand to get them to track the object.

We ditched object tracking for a simpler system where the user is given a 3D representation of the object with two points highlighted on it. The user then pinches these two points on the actual object, which positions the overlay on top. If you need to track a moving object, you would definitely need either VisionLib or Vuforia, but again, they are unreliable at times, so don't expect perfect tracking.
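The two-point alignment described above can be sketched as a small bit of math: pinching two known points on the real object gives two world positions, from which you can derive a translation plus a yaw rotation about the vertical axis (two points leave roll ambiguous, so this assumes the object sits upright). This is an engine-agnostic Python illustration under those assumptions, not Magic Leap or Unity API.

```python
# Place an overlay from two corresponding points: map model_a onto world_a,
# and rotate about the vertical (y) axis so the horizontal a->b direction
# of the model matches the pinched a->b direction in the world.
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def align_two_points(model_a: Vec3, model_b: Vec3,
                     world_a: Vec3, world_b: Vec3) -> Tuple[float, Vec3]:
    """Return (yaw_radians, translation) for the overlay transform."""
    # Horizontal headings of the a->b segment in model and world space.
    model_yaw = math.atan2(model_b[0] - model_a[0], model_b[2] - model_a[2])
    world_yaw = math.atan2(world_b[0] - world_a[0], world_b[2] - world_a[2])
    yaw = world_yaw - model_yaw
    # Rotate model_a by yaw about the y axis, then translate onto world_a.
    c, s = math.cos(yaw), math.sin(yaw)
    rx = c * model_a[0] + s * model_a[2]
    rz = -s * model_a[0] + c * model_a[2]
    t = (world_a[0] - rx, world_a[1] - model_a[1], world_a[2] - rz)
    return yaw, t

# Example: model points along +z, pinched points along +x -> 90-degree yaw
yaw, t = align_two_points((0, 0, 0), (0, 0, 1), (2, 0, 3), (3, 0, 3))
print(round(math.degrees(yaw)))  # 90
```

Once applied, the overlay stays world-locked by the headset's own tracking, so no continuous object tracking is needed as long as the real object doesn't move.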
