ML_MarkerTracking API: Single Pose Estimation with Multiple Markers

Unity Editor version: 2022.3
ML2 OS version: 1.4.0
MLSDK version: 1.4.0
Host OS: Windows 11 23H2

Good morning everyone,
I'm building an application in Unity for surgeons. We can achieve good marker accuracy with the integrated APIs (very well done!), but it is impossible to track robustly with sub-millimeter accuracy, which is strictly needed when superimposing a patient's 3D model onto the real patient.
What I'm looking for is a way to use targets composed of more than one marker (like a ChArUco board, an ArUco diamond, and so on).
What I've tried so far:
- OpenCV integration: a user on the forum has successfully used the OpenCV Android release for native development. (I'm using Unity because my work has to be handed over to my lab and remain usable by everyone, so a code-only native solution isn't ideal for me, though maybe I can build something like an interface.)
- Using the individual marker poses to compute a mean value, but this is far from the optimal solution, since the Perspective-n-Point (PnP) algorithm works better when it is given more data at once (such as the corner coordinates of several markers); see the sketch below.
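To make the idea concrete, here is roughly what I mean, sketched against the OpenCV for Unity API. The marker layout, intrinsics, and the `SolveRigPose` helper are placeholders of mine, not code from an existing project:

```csharp
// Sketch only: fuse the corners of several markers with known relative
// positions into ONE PnP solve, instead of averaging per-marker poses.
// Assumes the "OpenCV for Unity" asset; the marker layout, camera
// intrinsics, and detected 2D corners are supplied by the caller.
using System.Collections.Generic;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.Calib3dModule;

public static class MultiMarkerPnP
{
    public static bool SolveRigPose(
        List<Point3[]> objectCornersPerMarker, // known 3D corners, one common "rig" frame
        List<Point[]> imageCornersPerMarker,   // detected 2D corners, same marker order
        Mat cameraMatrix, MatOfDouble distCoeffs,
        Mat rvec, Mat tvec)
    {
        var obj = new List<Point3>();
        var img = new List<Point>();
        // Stack all correspondences into single point lists, so the solver
        // sees e.g. 16 points from 4 markers instead of 4 points each.
        for (int i = 0; i < objectCornersPerMarker.Count; i++)
        {
            obj.AddRange(objectCornersPerMarker[i]);
            img.AddRange(imageCornersPerMarker[i]);
        }
        using (var objectPoints = new MatOfPoint3f(obj.ToArray()))
        using (var imagePoints = new MatOfPoint2f(img.ToArray()))
        {
            // One PnP solve over all corners: more constraints, less jitter
            // than averaging the individual marker poses.
            return Calib3d.solvePnP(objectPoints, imagePoints,
                                    cameraMatrix, distCoeffs, rvec, tvec);
        }
    }
}
```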

Is there a way to modify the ML_MarkerTracker API? I'm pretty sure a lot of things in Magic Leap are done as the literature suggests, so maybe I can achieve what I need by modifying the API source code directly. Where can I find it (if it can be found at all)?

Welcome to the Magic Leap forums! I'm sorry, but the native Magic Leap marker tracking C APIs are not open source.

If you're using Unity, you can view and modify the C# wrapper code in the Magic Leap Unity SDK, which ultimately wraps the ML native APIs. In a Unity project, take a look at the source code in this folder to see how the Unity APIs are implemented: Packages\com.magicleap.unitysdk\Runtime\APIs\MarkerTracker

If you'd like to use OpenCV or a custom solution for marker/chessboard/corner detection, camera images are available through the ML APIs for both Unity and native. If you want to use OpenCV in Unity, you'll need to find a C# wrapper, or write your own for the functions you would like to use.
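For illustration, here is a rough sketch of that route once you have a camera frame as a Texture2D. This assumes the third-party OpenCV for Unity asset (in newer versions of that asset the ArUco API moved from the Aruco module to Objdetect, so adjust accordingly):

```csharp
// Rough sketch: run OpenCV ArUco detection on a camera image in Unity.
// Assumes an MLCamera frame has already been copied into a Texture2D
// (see the MLCamera docs below for capturing frames).
using System.Collections.Generic;
using UnityEngine;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ImgprocModule;
using OpenCVForUnity.ArucoModule;
using OpenCVForUnity.UnityUtils;

public static class MarkerCornerDetector
{
    public static void DetectCorners(Texture2D cameraFrame)
    {
        using (var rgba = new Mat(cameraFrame.height, cameraFrame.width, CvType.CV_8UC4))
        using (var gray = new Mat())
        using (var ids = new Mat())
        {
            Utils.texture2DToMat(cameraFrame, rgba);           // Texture2D -> Mat
            Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY);

            var corners = new List<Mat>();                     // 4 corners per marker
            Dictionary dict = Aruco.getPredefinedDictionary(Aruco.DICT_4X4_50);
            Aruco.detectMarkers(gray, dict, corners, ids);

            Debug.Log($"Detected {corners.Count} markers");
            // corners[i] holds the sub-pixel 2D corner coordinates you can
            // feed into a joint solvePnP, reprojection-error checks, etc.
            foreach (var c in corners) c.Dispose();
        }
    }
}
```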

MLCamera documentation:
(Unity) MLCamera Overview | MagicLeap Developer Documentation
(General) MLCamera | MagicLeap Developer Documentation

To clarify requirements, do you need to be able to track the movement of markers from frame to frame, or just use them to establish an initial frame of reference? After detecting a marker once, you could thereafter just rely on headpose (in a reasonably small workspace) or spatial anchors to localize content. Are you trying to use them for more than just localizing content for rendering? I'm not sure if sub-millimeter content movement would be visually perceptible in the ML display.
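As a sketch of that detect-once idea (the MLMarkerTracker event and type names below are from the ~1.4-era Unity SDK wrapper and may differ in your version; check the folder mentioned above for the exact signatures):

```csharp
// Minimal sketch of "detect once, then rely on headpose": place content at
// the first detected marker pose, then stop scanning. After this, headpose
// (or a spatial anchor) keeps the content registered in the workspace.
using UnityEngine;
using UnityEngine.XR.MagicLeap;

public class PlaceContentOnMarker : MonoBehaviour
{
    [SerializeField] private Transform content; // hologram to align once

    private void OnEnable() =>
        MLMarkerTracker.OnMLMarkerTrackerResultsFound += OnMarkerFound;

    private void OnDisable() =>
        MLMarkerTracker.OnMLMarkerTrackerResultsFound -= OnMarkerFound;

    private async void OnMarkerFound(MLMarkerTracker.MarkerData data)
    {
        // The marker pose is reported in Unity world space.
        content.SetPositionAndRotation(data.Pose.position, data.Pose.rotation);

        // Stop scanning once the frame of reference is established.
        await MLMarkerTracker.StopScanningAsync();
    }
}
```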

Best,
Adam

First of all, thank you very much for the (very) fast reply.
Secondly, I'm learning day by day how Unity and Magic Leap 2 work, since I started my development journey two months ago (I can code, though, so it's not so bad :D).
What I need is "freedom", meaning I have to be able to try different things easily. The task is to accurately place a hologram with respect to a patient's body, which can be done in two ways: continuous tracking, or placement-time tracking that then relies on the integrated SLAM algorithm to keep the headset aligned with the environment. On top of that, I also need to track instrumentation and tools continuously, because I may need to align something with respect to the patient.

Several problems arise from this. The PnP algorithm for pose estimation works better with multiple markers, so I would like to modify it to accept more than one marker, given that the real relative positions are known. Secondly, I need access to the estimated corner positions so I can try to implement an algorithm that estimates the accuracy error of the hologram placement (reprojection error isn't always a good measure of placement error). Current 2D navigators have an accuracy on the order of <2 mm and <2°; we are often able to match that and sometimes even do better, but I want to push it further, since the ambient awareness of the Magic Leap 2 seems very stable, with a meshing vertex confidence >0.75/0.8.

This raises another question (let me know if it's better to open another thread): how can I measure how well the Magic Leap is aware of the environment? I've thought of two ways: Space confidence, which however relates only to a recognized Space created through the Spaces app, or vertex confidence obtained through the MeshingSubsystem? I'm not always able to pre-register a Space.
Finally, and this is yet another question: how can I share the same world coordinates locally with another Magic Leap 2, so that holograms appear in the same position on the second device too? (I can share Spaces, but I'm not sure they're the right tool for me.) The reason is that I'd like a way to share content data between users without internet access (a local WLAN is fine), and also to make accurate recordings, since captures show a slight superimposition offset (slight now, after the last update; it used to be much worse).

Thank you very much for any information;
if needed, I can open separate threads for my various questions.
See you.

I can try to answer these questions:

  1. Regarding using markers to improve tracking: Currently you cannot feed marker data or known points into the Magic Leap's localization; however, we have created a feedback ticket for this functionality and sent it to our Voice of Customer team. I also recommend exploring OpenCV for Unity if you are unfamiliar with native development; it includes much of the same functionality as the native library. Note that the OpenCV for Unity package is not configured for Magic Leap 2 out of the box: you will have to obtain the camera images yourself, and you can use the examples in the OpenCV for Unity package to see how to perform the OpenCV functions (see the detection sketch earlier in this thread).

  2. Regarding mesh and tracking confidence: The Magic Leap Unity Example project demonstrates how to use the meshing subsystem, including the option to request vertex confidence. However, that confidence measures the error of the mesh and depth data rather than the localization. For tracking confidence you can poll the head tracking state, as shown in this example: Handling Tracking Loss Events | MagicLeap Developer Documentation. A generic polling sketch also follows this list.

  3. Regarding sharing maps: The Spaces application allows you to export and import Spaces at runtime, meaning that if the space was mapped on one device, you can share it with another device. See the MLSpace Examples on our developer forum for more information: MLSpace Examples | MagicLeap Developer Documentation.

  4. Regarding a shared origin: If you cannot pre-map the space or want to use an alternative method, I recommend using a marker to define an origin for your content. While the headsets will still have different origins and global positions, the content can be placed relative to the marker or anchor. A simple implementation is to make the virtual object a child of the transform that represents the marker and to send the virtual object's local position and rotation rather than the global values (a sketch follows this list).

    • Multiplayer examples: See the Magic Leap Photon Fusion Example or the Wi-Fi Direct Shared Experience Example. While these demos show colocation using Photon or Wi-Fi Direct, you can implement similar logic regardless of the networking solution, which means you can use something like Unity's Netcode for GameObjects if you find it more appropriate.
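Here is the polling sketch referenced in item 2. It uses Unity's generic XR input API rather than the ML-specific calls from the linked example, so treat it as one hedged way to watch for tracking degradation:

```csharp
// Poll head-tracking validity with Unity's generic XR input API.
// A lost or degraded state is a hint that localization - not the
// mesh - is the current source of error.
using UnityEngine;
using UnityEngine.XR;

public class HeadTrackingMonitor : MonoBehaviour
{
    private void Update()
    {
        InputDevice head = InputDevices.GetDeviceAtXRNode(XRNode.Head);
        if (head.isValid &&
            head.TryGetFeatureValue(CommonUsages.trackingState, out InputTrackingState state))
        {
            // Position + rotation tracked => headpose is currently reliable.
            bool fullyTracked = state.HasFlag(InputTrackingState.Position) &&
                                state.HasFlag(InputTrackingState.Rotation);
            if (!fullyTracked)
                Debug.LogWarning($"Head tracking degraded: {state}");
        }
    }
}
```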
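And the sketch for item 4; `SendToPeers`, mentioned in the comments, is a placeholder for whatever transport you choose:

```csharp
// Shared origin via a marker: express content in the marker's local frame
// and send only the local pose over your own networking layer (Photon,
// Netcode, plain sockets over the local WLAN - anything works).
using UnityEngine;

public class MarkerRelativeSync : MonoBehaviour
{
    [SerializeField] private Transform markerOrigin; // transform tracking the marker
    [SerializeField] private Transform sharedObject; // content both users see

    // Sender: convert the object's world pose into the marker frame.
    public (Vector3 pos, Quaternion rot) GetLocalPose()
    {
        Vector3 pos = markerOrigin.InverseTransformPoint(sharedObject.position);
        Quaternion rot = Quaternion.Inverse(markerOrigin.rotation) * sharedObject.rotation;
        return (pos, rot); // serialize these and SendToPeers(pos, rot) here
    }

    // Receiver: both headsets see the same physical marker, so applying the
    // same local pose yields the same physical placement, even though each
    // device has its own world origin.
    public void ApplyLocalPose(Vector3 pos, Quaternion rot)
    {
        sharedObject.SetPositionAndRotation(
            markerOrigin.TransformPoint(pos),
            markerOrigin.rotation * rot);
    }
}
```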

Very useful, thank you very much! :smiley: I will try it.