This is more of a general AR question than something ML specific, but I'm curious what other people are doing/planning to do with this issue on the ML2 hardware. Let's say you're developing a basic application where some 3D objects are playing out a scene for someone to watch (maybe a person standing on the floor, looking in the user's direction and talking). What would you consider best practices for the initial app launch and setup (so that the people don't appear in the middle of a table/chair, figuring out where the floor is, etc.)?
In the past, I have shown a mesh or plane view of the room and then let the user/facilitator choose a spot that has enough room for the presentation (sometimes showing a box around the pointer to denote how much approximate space is needed). That seems to work well enough, but given the power of the hardware we now have access to, I was wondering if there was a better way.
I have asked our design team for some recommendations regarding this question.
I have found that using a physical visual marker (QR code) that becomes an activation button in AR works well. This also helps if you have a multi-user experience, as you can line up the world space to the QR code. Marker tracking on the ML2 works well, especially if you use ArUco markers.
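The multi-user alignment idea above boils down to expressing every device's coordinates relative to the shared marker. Here is a minimal sketch of that math in plain Python (not Magic Leap API code); `world_from_marker` is a hypothetical helper, and it assumes a yaw-only marker rotation for simplicity:

```python
import math

def world_from_marker(marker_pos, marker_yaw):
    """Build a function mapping points from one device's coordinate frame
    into a shared, marker-centered frame (marker at the origin).
    Assumes the marker's rotation is yaw-only (upright marker)."""
    cos_y, sin_y = math.cos(-marker_yaw), math.sin(-marker_yaw)

    def to_shared(p):
        # Translate so the marker becomes the origin...
        x = p[0] - marker_pos[0]
        y = p[1] - marker_pos[1]
        z = p[2] - marker_pos[2]
        # ...then undo the marker's yaw (rotation about the up axis),
        # so "marker forward" is the same direction for every device.
        return (cos_y * x + sin_y * z, y, -sin_y * x + cos_y * z)

    return to_shared

# Two devices see the same marker at different poses in their own frames;
# the marker's own position maps to the shared origin for both.
dev_a = world_from_marker((1.0, 0.0, 2.0), 0.0)
dev_b = world_from_marker((-3.0, 0.0, 5.0), math.pi / 2)
```

Content anchored at the same shared-frame coordinates then appears in the same physical spot for all users.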
The downsides to this are that QR codes are not aesthetically pleasing and that it's less mobile, as the user needs the marker printed or shipped to them.
Yes sadly for my use-case I can't rely on externally placed markers as these devices will likely end up being given to less experienced people who aren't in a pre-set or controlled environment. Otherwise that would definitely be ideal.
Regarding placing objects in the environment without them intersecting with existing objects:
You can use the Plane Detection and Spatial Meshing features to query the user's environment and place the object in the correct spot. Here is an example user story:
- I launch the application
- The application checks how many spatial mesh objects are present.
- The application uses Plane Detection in tandem to determine if a surface is a floor, a wall or a ceiling.
- If the user does not have enough meshes / planes, the application asks the user to walk around to scan the environment.
- Once there is enough space, the application places an object in front of the user. To avoid intersecting with objects, the application chooses a location on a plane, then uses a collider and checks the contact points to see if another mesh intersects the object in that area. If no intersections are detected, the object can be placed. Alternatively, you could use the Unity NavMesh (depending on your application, this might be too processor intensive).
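The clearance check in the last step can be sketched in plain Python using axis-aligned bounding boxes (in Unity you would typically let the physics engine do this via a collider overlap query instead); `find_clear_spot` and its parameters are hypothetical names, not Magic Leap API:

```python
def aabb_overlaps(a, b):
    """Boxes given as ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def find_clear_spot(candidates, footprint_half, mesh_boxes):
    """Return the first candidate floor point whose required footprint
    touches no scanned-mesh bounding box, or None if all are blocked.
    candidates: (x, y, z) points sampled on a detected floor plane.
    footprint_half: half-extents (hx, hy, hz) of the space the scene needs.
    mesh_boxes: bounding boxes derived from the spatial mesh."""
    hx, hy, hz = footprint_half
    for cx, cy, cz in candidates:
        # Footprint box sits on the floor and extends upward.
        box = ((cx - hx, cy, cz - hz), (cx + hx, cy + 2 * hy, cz + hz))
        if not any(aabb_overlaps(box, m) for m in mesh_boxes):
            return (cx, cy, cz)
    return None

# A table blocks the first candidate; the second is clear.
table = ((-0.5, 0.0, -0.5), (0.5, 0.8, 0.5))
spot = find_clear_spot([(0, 0, 0), (2, 0, 0)], (0.5, 0.9, 0.5), [table])
# spot is (2, 0, 0)
```

A real implementation would sample many candidates across the detected floor plane and prefer those closest to the user.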
Also note that the mesh generated on the Magic Leap persists across multiple apps, which means the user might have already scanned the environment in a previous application.
Yeah, that sounds pretty good. Maybe use the floor/plane detection to estimate a spot in the middle of the room facing the user, and then allow them to adjust as needed from there.
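The "middle of the room, facing the user" estimate can be sketched with a few lines of vector math; this is a minimal Python illustration with a hypothetical `place_facing_user` helper, assuming the floor plane's boundary points are already available from plane detection:

```python
import math

def place_facing_user(floor_points, user_pos):
    """Pick the centroid of the detected floor area and return a
    (position, yaw) pair so the placed content faces the user.
    yaw is the rotation about the up axis, in radians, where yaw 0
    means the content's forward axis points along +Z."""
    n = len(floor_points)
    cx = sum(p[0] for p in floor_points) / n
    cy = sum(p[1] for p in floor_points) / n
    cz = sum(p[2] for p in floor_points) / n
    # Yaw that turns the content's forward direction toward the user.
    yaw = math.atan2(user_pos[0] - cx, user_pos[2] - cz)
    return (cx, cy, cz), yaw

# Square floor patch, user standing off to one side:
pos, yaw = place_facing_user(
    [(-1, 0, -1), (1, 0, -1), (1, 0, 1), (-1, 0, 1)],
    user_pos=(0, 0, 5),
)
```

From there the user-adjustment step could simply re-run the clearance check at whatever spot they drag the content to.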