Viewing ML Spaces and creating Spatial Anchors in Unity Editor

Hello,

Is it possible to develop place-based content for the ML2 by importing a Space into the Unity Editor and creating Spatial Anchors in the Unity Editor?

I understand that Spaces must be created and mapped using the Spaces application on the ML2 device before they can be used by applications. In the past, I have placed and stored Spatial Anchors and mesh blocks at runtime.

Now, I want to do so during the development process. I have mapped a Space and exported its GLB, MAP file, and map_anchor_meta.json. I’m wondering if there is any supported way to use these files together in the Unity Editor as a reference when positioning content. Ideally, I would then serialize the game objects’ information referencing the new Spatial Anchors I create in the Editor.

For context, I’m developing a game/tour app that is meant for a location on the other side of the world, so it is not possible for me to place spatial anchors using my device at runtime. And even then, I would still need to be able to tweak positions in Unity Editor.

Thank you very much for your assistance, and I apologize for any confusion.

There is no direct way to import anchors that are created in the Unity Editor. However, you could add app logic that creates the anchors at runtime. The Localization Map API provides access to the map origin, which is the same as the origin of the GLB model.

That means you can load the exported GLB into your scene and position it at the map origin once the device localizes to the map.
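A minimal sketch of that alignment step in Unity C#. `TryGetMapOriginPose` is a hypothetical placeholder, not a real SDK call — the actual way to query the map-origin pose depends on your Magic Leap SDK version (in current SDKs it comes from the Localization Map feature):

```csharp
using UnityEngine;

// Sketch: align the imported GLB Space model (and any content parented
// under it) to the map origin once the device localizes.
public class SpaceAligner : MonoBehaviour
{
    [Tooltip("Root transform of the imported GLB Space model.")]
    public Transform spaceModelRoot;

    // Hypothetical helper -- replace with the Localization Map API call
    // appropriate to your SDK version.
    private bool TryGetMapOriginPose(out Pose originPose)
    {
        originPose = Pose.identity;
        return false; // substitute a real localization query here
    }

    private void Update()
    {
        if (TryGetMapOriginPose(out Pose origin))
        {
            // The GLB shares the map's origin, so copying the pose aligns it.
            spaceModelRoot.SetPositionAndRotation(origin.position, origin.rotation);
        }
    }
}
```

Because everything authored in the Editor is parented under `spaceModelRoot`, repositioning that one transform brings all placed content into alignment with the physical Space.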

So if you want to add anchors on top of the model, you can define a set of points and parent them to the origin, then create the anchors once the Map loads.
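Sketching that idea: author the anchor points in the Editor as child transforms of the Space model root, then create Spatial Anchors at those poses once localization succeeds. `CreateAnchorAt` is a placeholder for the Spatial Anchors creation call in your SDK version:

```csharp
using UnityEngine;

// Sketch: create Spatial Anchors at Editor-authored points after the
// device has localized to the map.
public class AnchorSpawner : MonoBehaviour
{
    [Tooltip("Transforms placed in the Editor, parented under the Space model root.")]
    public Transform[] anchorPoints;

    private bool anchorsCreated;

    // Call this from your localization callback or event.
    public void OnLocalized()
    {
        if (anchorsCreated) return;
        foreach (Transform point in anchorPoints)
        {
            // Placeholder for the runtime anchor-creation API call.
            CreateAnchorAt(new Pose(point.position, point.rotation));
        }
        anchorsCreated = true;
    }

    private void CreateAnchorAt(Pose pose)
    {
        Debug.Log($"Would create an anchor at {pose.position}");
    }
}
```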

That makes sense. Thank you for the quick response. I have a follow-up question.

The docs say, “When the pose of the map origin is updated to correct for drift, the pose of all spatial anchors will be updated accordingly. When localization updates occur at runtime, the origin and all anchors will be posed such that content close to the Magic Leap 2 device appears as visually stable as possible. For this reason, it is recommended that applications create anchors close to the device.”

My app needs to support localization across a 1 kilometer distance. Given that the pose of content closer to the device will be prioritized for visual stability, how small should each Space be so as not to have the pose of game objects anchored further away break immersion? I have a separate system using trigger colliders to enable/disable distant game objects as I walk around the 1 km path.
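The trigger-collider culling described above might look like the following sketch: a large trigger volume placed along the path activates its content root when the player rig enters and deactivates it on exit. It assumes the camera rig carries a collider and a Rigidbody (kinematic is fine) and is tagged "Player" — both are assumptions, not part of the original post:

```csharp
using UnityEngine;

// Sketch: enable/disable a group of distant game objects when the
// player rig enters/exits a trigger volume along the path.
[RequireComponent(typeof(Collider))]
public class ProximityActivator : MonoBehaviour
{
    [Tooltip("Parent of the distant game objects to toggle.")]
    public GameObject contentRoot;

    private void Awake()
    {
        GetComponent<Collider>().isTrigger = true;
        contentRoot.SetActive(false); // start hidden until the player arrives
    }

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player")) contentRoot.SetActive(true);
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player")) contentRoot.SetActive(false);
    }
}
```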

It depends on your environment. If you have stable Head Pose you may notice that the drift is minimal as you walk across long distances. For example, you might not need to use multiple anchors at all. Let me know what your results are after testing on the device.

Sorry, I may have misread your question. Note that we recommend a single local Space be around 250 m². Depending on your use case, you may want to consider other localization methods such as Niantic Lightship or Immersal.

Using the localization map origin for each Space worked well! The margin of error is small enough that I didn’t need to create Spatial Anchors. Thanks for the help!
