Seeking Guidance for Virtual Item Placement in a Magic Leap Project

Dear Community,

I’m new to Magic Leap development and would appreciate any guidance you can offer. Thank you in advance for your help!

I’m currently working on a project where I aim to plan the placement of items in a room using a virtual reconstruction of that space. The idea is to create a virtual twin of a room, place items within this virtual scene, and then visualize these items in their planned locations in the physical room using the Magic Leap headset.

My initial approach was to use the Spaces App to scan the scene, import it into Unity, and arrange the items there. Ideally, these virtual objects should appear in the correct physical locations when viewed through the Magic Leap. However, I’ve realized that with the Spaces App, I can only export Spatial Anchors.

Is there a method to scan the environment with Magic Leap, reconstruct it in Unity, and then render virtual objects in the correct physical spaces when viewed through the headset?

Alternatively, could I use a pre-scanned virtual environment and perform external tracking of the Magic Leap to localize it within that virtual space?

Any hints or suggestions would be greatly appreciated.

Thank you!


Welcome to the community! Sounds like a fun project!

First, I'd recommend checking out Magic Leap's Workshop and Assist applications, which are free to use on your device.

Workshop is a collaborative design application that lets you import 3D models and other content into a shared space that you can interact with on your ML2 device. It sounds like you want to place items in the room using your PC and then view the result on the Magic Leap, but if you're alright with placing the models in your room directly on your ML2, and you don't need the design to persist, then Workshop may actually meet your needs.

ML Assist is a remote assistance application that lets a remote user view and interact with what the ML user sees from a web browser. I suggest taking a look at it because it demonstrates how the meshing platform feature can be used to generate a 3D mesh of your space. If you don't need a textured mesh, you may be able to use a similar approach to create a 3D model of your room using your ML2.

That said, I think we can break what you're trying to do into two parts:

  • Creating a digital twin of your space

As you found, when you create a map of your space using the Spaces app, the output is not directly consumable. There are APIs, as well as a console application, that you can use to export the map data; however, the output is an opaque binary format. As I mentioned, you might want to explore using the spatial meshing APIs to create a mesh of your space in a Unity application. You could then use a library to export a 3D model that you can manipulate on your PC.

There is also a sample included in the ML Unity samples that you can download from ML Hub.
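To give a feel for the export step: a meshing subsystem hands you vertex positions and triangle indices, and writing those out in the Wavefront OBJ text format is enough to get a model you can open in most 3D tools. Here's a minimal sketch in Python; the function name and data layout are illustrative (not from any Magic Leap SDK), and the same logic ports directly to C# in Unity:

```python
# Minimal sketch: writing a triangle mesh (vertex positions plus a flat
# list of triangle indices, e.g. gathered from a meshing subsystem) to
# the Wavefront OBJ text format. All names here are illustrative.

def mesh_to_obj(vertices, triangles):
    """vertices: list of (x, y, z) tuples; triangles: flat index list."""
    lines = ["v {} {} {}".format(x, y, z) for (x, y, z) in vertices]
    # OBJ face indices are 1-based, so shift each index up by one.
    for i in range(0, len(triangles), 3):
        a, b, c = triangles[i], triangles[i + 1], triangles[i + 2]
        lines.append("f {} {} {}".format(a + 1, b + 1, c + 1))
    return "\n".join(lines) + "\n"

# Example: a single triangle.
obj_text = mesh_to_obj([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [0, 1, 2])
print(obj_text)
```

In a real Unity app you'd pull the vertex and index arrays from the generated `Mesh` objects and write the result to persistent storage, or use an existing exporter library as mentioned above.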

  • Aligning and localizing your ML2 in the digital twin

Once you've created your model, you'll want to align it to your space on your ML2. We just released a sample application on GitHub that demonstrates how you can align a 3D model to your space.

Note that the sample still relies on you being localized into a space that has been mapped using the Spaces app. After the initial alignment, it uses a spatial anchor to continuously localize the device as you traverse your space.
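The anchor idea is worth spelling out: instead of pinning content at fixed world coordinates, you record each object's pose relative to the anchor at placement time, then re-derive its world pose from the anchor's current pose every frame, so content stays registered as the headset re-localizes. A translation-only sketch in Python (a real Unity app would also compose rotations, e.g. via `Transform` or quaternion math; all names here are illustrative):

```python
# Sketch of anchor-relative placement (translation only, for clarity; a
# real app would also compose rotations). All names are illustrative.

def to_anchor_space(world_pos, anchor_pos):
    # Record where the object sits relative to the anchor at placement time.
    return tuple(w - a for w, a in zip(world_pos, anchor_pos))

def to_world_space(offset, anchor_pos):
    # Each frame, re-derive the world position from the anchor's current pose.
    return tuple(a + o for a, o in zip(anchor_pos, offset))

# Place an object 2 m along x from the anchor; later the anchor pose is
# re-estimated as the headset re-localizes, and the object follows it.
offset = to_anchor_space((2.0, 0.0, 1.0), (0.0, 0.0, 1.0))
print(to_world_space(offset, (0.1, 0.0, 1.05)))  # → (2.1, 0.0, 1.05)
```

The key property is that drift correction applied to the anchor automatically carries over to everything parented to it.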

Hope that helps!