Localization without space

Hello,
I have a question regarding localization without a scanned Space.
If I do not use a Space to localize to a specific environment, how does the ML2 localize itself relative to objects? Does it use the camera, LiDAR, gyroscope, or a combination of these sensors?

For example, I scanned an ArUco marker without being localized. What exactly does the ML do so that the objects generated from the ArUco marker remain in their position even after I remove the marker or move away from it?

From the code I understand that ML creates its own coordinate system. But what is the starting point (origin) of this system, and is it possible to move the coordinate system’s origin (0,0,0) to the scanned marker?
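To make the question concrete: what I would like to do is re-express content in the marker's frame rather than the headset's internal one. A minimal numpy sketch of that idea (hypothetical poses; the 4x4 matrices stand in for whatever pose representation the SDK actually returns):

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical example: the marker is detected 2 m in front of the
# headset's world origin, with no rotation relative to it.
T_world_marker = pose(np.eye(3), [0.0, 0.0, 2.0])

# An object placed at world position (1, 0, 2).
T_world_object = pose(np.eye(3), [1.0, 0.0, 2.0])

# Re-express the object in the marker's frame: the marker becomes (0, 0, 0).
T_marker_object = np.linalg.inv(T_world_marker) @ T_world_object

print(T_marker_object[:3, 3])  # object is at (1, 0, 0) relative to the marker
```

In other words, even if the device's internal origin cannot be moved, storing every object as a transform relative to the marker would make the marker the effective (0,0,0) of our content.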

My last question is: Is it possible to use infrared markers to help the ML orient itself in Spaces that look almost identical, so that it does not switch between them?
More generally, could infrared markers help the headset improve its spatial orientation?

Greetings and thanks for the help :slight_smile:

Hi, I just wanted to ask if there are any updates on my question. Since it’s been around 30 days now and I still haven’t found a solution, even partial answers would be great. :slightly_smiling_face:

Hi aw01,

I’ll look into this and see what I can find. Is there a specific use case you’re working on that you could tell me more about or is this more of an academic question about the workings of the technology?

Thank you,

Hey jspire,

We have the problem that we want to localize within a space that contains multiple zones, and each zone looks identical to the others.

We need to display objects (in our case, squares) at specific positions, and these objects must remain fixed at exactly the points where we placed them, even when we move around inside the zone. This works initially, but because the zones look similar, the ML loses its localization and jumps to a different zone. When that happens, the ML jumps to a different starting point and draws the objects relative to that new starting point.

To prevent this, we want to support localization using markers (e.g., ArUco tags, large numbers, or large characters) or differences such as different colors on surfaces for each zone.
For this reason, we want to understand whether the ML uses the world camera and visual information from the environment for localization. More specifically: can the ML detect different colors or symbols and use these visual differences to determine the correct position inside a space?

We also tried creating a separate space for each zone, but even then the ML localized to the wrong zone while running the application.

Another approach we tested was skipping room scanning entirely and placing anchors only via markers.
However, even with this method, we observed drifting.
This raises the question of how the ML establishes its coordinate system and what methods it uses to determine distance and position relative to the markers. For example:
– Does a marker need to be visible at all times?
– Can we build a consistent coordinate system without a spatial scan by aligning the coordinate origin to a marker?

Ideally, we would have three markers visible to the camera. Based on the initial view, we would draw the objects at their positions, and whenever enough markers are visible again, we could re-sync.
For this to work, it would be important to know how to set the coordinate system’s origin to a specific marker so that our entire system remains consistent around a fixed point instead of around the ML’s internal starting point.
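The re-sync step we have in mind could be sketched as follows, continuing the homogeneous-transform notation from above (hypothetical numbers; the point is the matrix chain, not the SDK calls):

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Content stored relative to the marker at placement time (hypothetical value).
T_marker_object = pose(np.eye(3), [0.5, 0.0, 0.0])

# Later, tracking has drifted: the marker is re-detected at a slightly
# shifted world pose compared to where it was when the object was placed.
T_world_marker_fresh = pose(np.eye(3), [0.0, 0.02, 2.01])

# Re-sync: recompute the object's world pose from the fresh marker
# observation, so the object stays glued to the marker rather than
# to the drifted internal map.
T_world_object = T_world_marker_fresh @ T_marker_object
print(T_world_object[:3, 3])  # -> [0.5, 0.02, 2.01]
```

With this scheme the marker would not need to be visible at all times; each sighting would simply correct the accumulated drift.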

From the HoloLens, we know that it was able to determine its position relative to three infrared markers. Therefore, we are wondering whether something similar is possible with the ML.

Thank you for your help.

Hi aw01,

I’m going to have to look into this more but could you share with me some images or videos of the example test space you’re using? I’d like to see how you have it set up and see where the drifting happens if possible. Would you be able to share that?

Thank you,