Spatial Integration Workflow

Unity Editor Version: 2022.2.21f1
ML2 OS version: 1.4.0 dev 2
MLSDK version: 1.11.0
Host OS: Windows 10

I'm just looking for a thorough guide to the workflow for implementing Spatial Anchors and localization in an application. So far I have mapped an area, used the Anchors example project, imported that map data/JSON into my Unity project, and created an editor script that reads the JSON and loads placeholder GameObjects at the anchor locations for a rough map of the layout. So, for the next steps, I have a few questions:

  1. Is there any way to visualize the space within Unity? Even a simple representation would help.
  2. In the json data, what is the purpose of the PPMapping values?
  3. For adding GameObjects etc., what is the method to reapply positioning via the localization tools so the offsets are correct? Do I have to query the anchors at runtime and assign positions then?
  4. For spawning GameObjects, I would like to spawn a humanoid character, so is there a built-in method of finding the floor level, or will I have to do that manually via the anchor positions? Can I use the XR Rig's origin, or is that not specified when spawning? The prefab mentions the origin as not being defined.

Hi @ryan-gx , welcome to Magic Leap's developer forum. Happy to help you with these questions.

  1. You cannot visualize the Space data in Unity, but I have submitted this request to our Voice of Customer team.
  2. The JSON data was not intended to be read by the end user and is dependent on the device's localization data.
  3. Yes, the anchors can be thought of as empty game objects that are attached to your Space. They have a unique ID which can be used to "bind" additional data to an anchor's position. In the example project the objects are spawned by first querying the anchors to determine which ones are present; the anchor positions are then re-queried at an interval, or when the localization status changes, to ensure the objects stay in the correct position.

Since the anchors are similar to empty game objects, you could choose to save all of your data relative to a single anchor, then restore the objects using their position/rotation relative to that anchor's position/rotation (a minimal sketch of this approach follows the plane-finding script below).

  4. The XR Origin position is relative to where the application first started, or to where the headset gained tracking. (This setting can be adjusted on the Magic Leap Camera component.) However, you can get the ground position using plane finding and the planes' semantic tags. For example, here is a code snippet that gets the closest floor plane relative to a position. Make sure to have an instance of the ARPlaneManager in your scene before running the script.

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class ARFloorDetector : MonoBehaviour
{
    private ARPlaneManager _planeManager;

    void Awake()
    {
        _planeManager = FindObjectOfType<ARPlaneManager>();
    }

    // Returns the floor-classified plane whose center is closest to the given
    // position, or null if no floor planes have been detected yet.
    public ARPlane GetNearestFloorPlane(Vector3 position)
    {
        if (_planeManager == null)
            return null;

        ARPlane nearestPlane = null;
        float bestDistance = float.MaxValue;

        foreach (ARPlane plane in _planeManager.trackables)
        {
            if (plane.classification == PlaneClassification.Floor)
            {
                float distance = (position - plane.center).sqrMagnitude;
                if (distance < bestDistance)
                {
                    bestDistance = distance;
                    nearestPlane = plane;
                }
            }
        }

        return nearestPlane;
    }
}
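
Along the same lines, here is a minimal sketch of the single-anchor approach described above. It only shows the pose math and assumes you already have the anchor's current Pose from whichever anchor query you use; the class and method names below are just placeholders.

using UnityEngine;

// Minimal sketch: store an object's pose relative to an anchor's pose, then
// restore it once the anchor has been re-queried after localization.
public static class AnchorRelativePose
{
    // Express the object's world pose in the anchor's local space.
    public static Pose GetRelativePose(Pose anchorPose, Transform obj)
    {
        Quaternion inverseAnchorRotation = Quaternion.Inverse(anchorPose.rotation);
        Vector3 localPosition = inverseAnchorRotation * (obj.position - anchorPose.position);
        Quaternion localRotation = inverseAnchorRotation * obj.rotation;
        return new Pose(localPosition, localRotation);
    }

    // Re-apply a stored relative pose using the anchor's current pose.
    public static void ApplyRelativePose(Pose anchorPose, Pose relativePose, Transform obj)
    {
        obj.SetPositionAndRotation(
            anchorPose.position + anchorPose.rotation * relativePose.position,
            anchorPose.rotation * relativePose.rotation);
    }
}

You would persist the relative pose together with the anchor's ID, then on the next session query that anchor again and call ApplyRelativePose with its current pose. Re-applying it whenever the localization status changes keeps the content locked to the Space.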

Great, thank you. And when querying an anchor via its ID through the HashSet in the AnchorManager, does this return values for position etc. via a particular method?

e.g. GetAnchor(x).Position? or something similar?

I will go the route of using a single anchor for positioning, while using secondary anchors to help visualize the space within Unity.

@kbabilinski Also, with regards to using an ARPlaneManager, I can get it running fine in the simulator, but in a build no planes are detected at all. I find the planes, then use the XR Interactor's raycast to place an additional object, but I cannot get it working in a build.

I have added an ARPlaneManager, added an AR Default Plane prefab to visualize the area, and then wait for it to detect planes before beginning my interactions as a test, but no luck in the headset build.

Any idea as to why my planes are not being detected?

Did you request the spatial mapping permission at runtime?

Not at runtime, no. I have it set in the permissions of the project settings.

Since this is a dangerous permission, you will need to declare it in the manifest and request it at runtime: Requesting Permissions | MagicLeap Developer Documentation

Thank you for helping us improve our documentation. We will update our plane finding documentation to clarify this requirement.
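
For reference, here is a minimal sketch of the runtime request using the MLPermissions callbacks from the Magic Leap Unity SDK. The component and handler names are just examples; see the documentation linked above for the full flow.

using UnityEngine;
using UnityEngine.XR.MagicLeap;

public class SpatialMappingPermissionRequester : MonoBehaviour
{
    private readonly MLPermissions.Callbacks permissionCallbacks = new MLPermissions.Callbacks();

    void Start()
    {
        permissionCallbacks.OnPermissionGranted += OnPermissionGranted;
        permissionCallbacks.OnPermissionDenied += OnPermissionDenied;

        // SPATIAL_MAPPING is a dangerous permission: it must be declared in the
        // manifest AND requested at runtime before plane detection will work.
        MLPermissions.RequestPermission(MLPermission.SpatialMapping, permissionCallbacks);
    }

    void OnDestroy()
    {
        permissionCallbacks.OnPermissionGranted -= OnPermissionGranted;
        permissionCallbacks.OnPermissionDenied -= OnPermissionDenied;
    }

    private void OnPermissionGranted(string permission)
    {
        Debug.Log($"{permission} granted, plane detection can start.");
        // Enable your ARPlaneManager / start plane queries here.
    }

    private void OnPermissionDenied(string permission)
    {
        Debug.LogError($"{permission} denied, planes will not be detected.");
    }
}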

Great, thanks. I'll be moving back to focus on anchors shortly.
