Hey jspire,
we need to localize within a space that contains multiple zones, and each zone looks nearly identical to the others.
We need to display objects (in our case, squares) at specific positions, and these objects must stay fixed at exactly the points where we placed them, even as we move around inside the zone. This works initially, but because the zones look so similar, the ML loses localization and relocalizes into a different zone. When that happens, it adopts a new origin and draws the objects relative to that new origin, so they appear in the wrong place.
To prevent this, we want to support localization using markers (e.g., ArUco tags, large numbers, or large characters) or differences such as different colors on surfaces for each zone.
For this reason, we want to understand whether the ML uses the world camera and visual information from the environment for localization. More specifically: can the ML detect different colors or symbols and use these visual differences to determine the correct position inside a space?
We also tried creating a separate space for each zone, but even then the ML localized to the wrong zone while running the application.
Another approach we tested was skipping room scanning entirely and placing anchors only via markers.
However, even with this method, we observed drifting.
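To make our intent concrete, here is the behavior we are trying to achieve, written as plain NumPy math rather than any Magic Leap API (the poses and the 0.5 m offset are made-up example values): we want to store object positions in the marker's coordinate frame, so that a relocalization jump only changes the observed marker pose, not where the content sits relative to the marker.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def marker_to_world(T_world_marker, p_marker):
    """Express a point stored in marker coordinates in the current world frame."""
    p = np.append(p_marker, 1.0)
    return (T_world_marker @ p)[:3]

# Object placed 0.5 m in front of the marker, stored ONCE in marker coordinates.
p_marker = np.array([0.0, 0.0, 0.5])

# Session A: the marker is observed at the world origin.
T_a = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 0.0]))
# Session B: after a relocalization jump, the marker appears 2 m to the right.
T_b = pose_to_matrix(np.eye(3), np.array([2.0, 0.0, 0.0]))

world_a = marker_to_world(T_a, p_marker)  # -> [0. 0. 0.5]
world_b = marker_to_world(T_b, p_marker)  # -> [2. 0. 0.5], object follows the marker
```

In other words, as long as we can re-observe the marker, the drawing position stays correct regardless of where the ML's internal origin ended up.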
This raises the question of how the ML establishes its coordinate system and what methods it uses to determine distance and position relative to the markers. For example:
– Does a marker need to be visible at all times?
– Can we build a consistent coordinate system without a spatial scan by aligning the coordinate origin to a marker?
Ideally, we would have three markers visible to the camera. From the initial view we would place the objects, and whenever enough markers are visible again, we would re-sync our coordinate system against them.
For this to work, it would be important to know how to set the coordinate system’s origin to a specific marker so that our entire system remains consistent around a fixed point instead of around the ML’s internal starting point.
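The re-sync step we have in mind is standard rigid alignment (Kabsch), again sketched in plain NumPy under our own assumptions, not any ML API: with three markers whose positions we recorded at setup time, a fresh observation gives us the rotation and translation that snap our content back onto the markers after drift or a relocalization jump.

```python
import numpy as np

def rigid_align(stored, observed):
    """Kabsch: find R, t with observed ~= stored @ R.T + t (both Nx3 arrays)."""
    cs, co = stored.mean(axis=0), observed.mean(axis=0)
    H = (stored - cs).T @ (observed - co)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = co - R @ cs
    return R, t

# Marker positions recorded at setup time (in our own fixed frame).
stored = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
# The same markers as observed after a localization jump: shifted by (2, 0, 0).
observed = stored + np.array([2.0, 0.0, 0.0])

R, t = rigid_align(stored, observed)
# Applying (R, t) to every placed object realigns the content with the markers.
```

Three non-collinear markers are the minimum for this to determine a unique rigid transform, which is why we would like at least three in view when syncing.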
From the HoloLens we know that it can determine its position relative to three infrared markers. We are therefore wondering whether something similar is possible with the ML.
Thank you for your help.