Meshing test error

Hello,

I am currently trying to run this example from the developer documentation:

After setting up the example step by step, I run into the following error when building to the device:

What could be causing this issue, and how do I resolve it so the project builds to the device? Many thanks.

On a side note, the simulator for this example doesn't turn on, possibly due to a lack of support from the SDK version I'm using.

Unity Editor version: 2022.3.14f1
ML2 OS version: 1.5.0
MLSDK version: 1.5.0
Host OS: macOS

Would you mind closing the project and deleting the Library folder of your project? This may fix your issue, as some files there could have been generated incorrectly.

Best,

El

Thanks for your reply. Unfortunately, deleting the Library folder doesn't solve the issue. Once I reopen the project, I set the build target to Android and build and run the scene, which shows the same error. I also tried ML SDK 1.6.0 with no success. What else should I look for besides checking my example implementation?

The OpenXR workflow does not support App Simulator at this time.

Have you been able to build the Magic Leap Unity Examples project that can be obtained from the Magic Leap Hub?

The example provided by the SDK works fine. I redid the whole example (Simple Meshing setup) and it now compiles, but no meshes are displayed. Does the example include a meshing visualizer, or do I need another script to visualize the mesh? Thanks.

When using OS version 1.6.0 and Magic Leap Unity Examples 2.1.0, the meshing example will visualize the meshes as soon as the scene is loaded.

If you are using the non-OpenXR workflow, you will need to download Magic Leap Unity Examples v1.12.0.
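To expand on the visualizer question: in the OpenXR workflow, the meshes are rendered by whatever prefab you assign to the ARMeshManager's Mesh Prefab field, so no extra script is required as long as that prefab has a MeshFilter and a MeshRenderer with a visible material. As a minimal sketch (assuming AR Foundation's ARMeshManager; the material reference is a placeholder you would assign in the Inspector), a script like this can force a material onto every mesh the manager generates:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Minimal sketch: applies a visible material to every mesh the
// ARMeshManager adds or updates, so generated meshes are rendered.
public class MeshVisualizer : MonoBehaviour
{
    [SerializeField] private ARMeshManager meshManager; // assign in Inspector
    [SerializeField] private Material meshMaterial;     // e.g. a wireframe material (placeholder)

    void OnEnable()
    {
        meshManager.meshesChanged += OnMeshesChanged;
    }

    void OnDisable()
    {
        meshManager.meshesChanged -= OnMeshesChanged;
    }

    private void OnMeshesChanged(ARMeshesChangedEventArgs args)
    {
        foreach (var meshFilter in args.added)
            ApplyMaterial(meshFilter);
        foreach (var meshFilter in args.updated)
            ApplyMaterial(meshFilter);
    }

    private void ApplyMaterial(MeshFilter meshFilter)
    {
        var meshRenderer = meshFilter.GetComponent<MeshRenderer>();
        if (meshRenderer != null)
            meshRenderer.material = meshMaterial;
    }
}
```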

As I said above, the Meshing example works perfectly. It's the example from the link I originally included that doesn't work for me, but the SDK example pretty much provides everything I need.

For some context, I am trying to detect a small object with high accuracy. The object will be fluorescent, emitting ~700 nm light, and I will use a longpass filter to block light below that wavelength. In this case, getting depth from the visual image and projecting it back seems to be the most straightforward solution.

Now I do have some questions:

  1. How accurate is meshing? From the example, it looks like there are points that are off by a couple of centimeters.
  2. Is there a way to improve the accuracy? Maybe by using another example or improving the algorithm?
  3. How does meshing work in terms of hardware? Does the headset emit NIR light to measure depth and then build the mesh from those distances?
  4. Is there any example that uses pure camera vision to get depth and project a mesh back? (Probably the most important question.)

Sorry if these are long questions that might have already been answered somewhere on a forum. It would be nice to hear answers from the Magic Leap developers themselves. As always, I really appreciate the help and support!

Thank you for the explanation. We will have someone take a look at the guide and see if any changes need to be made, and we will keep you updated if we add additional information to it.

Regarding your other questions:

  1. We have not published the surface-level accuracy of the depth sensor; the accuracy depends on the surface and the environment conditions. For the meshing subsystem, our device generates voxels that are then combined into a mesh, with each voxel being about 4 cm.
  2. The Unity Example script demonstrates how to adjust some of the mesh settings (see the first sketch below). These settings are a trade-off between performance, filtering, and density.
  3. The Magic Leap 2 has a ToF sensor on the front of the device, next to the RGB camera. This sensor emits light to measure the depth of the environment.
  4. Developers can access the raw depth camera, and the depth data can then be used to generate a mesh; however, we do not provide an example of rendering a custom mesh based on the raw depth (see the second sketch below for the general idea). You can use the 1.12.0 Unity Examples project to see how to view the depth image from the sensor itself. We are working on adding this functionality to OpenXR in a future OS release using an OpenXR vendor extension.
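For point 2, here is a minimal sketch of adjusting those settings at runtime. This is illustrative only: the field names come from the non-OpenXR MeshingSubsystemComponent in the Magic Leap Unity SDK and may differ between SDK versions, so treat each name and value as an assumption to verify against your installed package:

```csharp
using UnityEngine;
using UnityEngine.XR.MagicLeap; // namespace in the 1.x Magic Leap Unity SDK; verify for your version

// Illustrative sketch: tuning meshing quality vs. performance at runtime.
// Field names are taken from the non-OpenXR MeshingSubsystemComponent and
// may vary between SDK versions.
public class MeshSettingsTuner : MonoBehaviour
{
    [SerializeField] private MeshingSubsystemComponent meshing; // assign in Inspector

    void Start()
    {
        meshing.density = 1.0f;                   // triangle density: more detail, more cost
        meshing.planarize = false;                // true flattens surfaces into planes (filtering)
        meshing.fillHoleLength = 0.25f;           // fill holes up to ~25 cm across
        meshing.disconnectedComponentArea = 0.5f; // drop isolated fragments smaller than 0.5 m^2
        meshing.requestVertexConfidence = true;   // request per-vertex confidence values
    }
}
```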
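And for point 4, since we do not ship an example, here is the general idea of turning a depth image into 3D points that could then be meshed. This is a sketch under stated assumptions: the depth buffer, image size, and pinhole intrinsics (fx, fy, cx, cy) are placeholders that you would fill in from the depth camera API (MLDepthCamera in the 1.12.0 examples):

```csharp
using UnityEngine;

// Illustrative sketch only: unprojecting a depth image into a 3D point cloud
// using the pinhole camera model. The depth array, width/height, and the
// intrinsics (fx, fy, cx, cy) are placeholders to be filled from the depth
// camera API on device.
public static class DepthUnprojector
{
    public static Vector3[] ToPointCloud(
        float[] depthMeters, int width, int height,
        float fx, float fy, float cx, float cy)
    {
        var points = new Vector3[width * height];
        for (int v = 0; v < height; v++)
        {
            for (int u = 0; u < width; u++)
            {
                float z = depthMeters[v * width + u];
                // Back-project pixel (u, v) with depth z into camera space:
                // x = (u - cx) * z / fx, y = (v - cy) * z / fy
                points[v * width + u] = new Vector3(
                    (u - cx) * z / fx,
                    (v - cy) * z / fy,
                    z);
            }
        }
        return points;
    }
}
```

From the returned camera-space points, a triangle mesh can be built by connecting each 2x2 block of neighboring pixels into two triangles and assigning the vertices and indices to a UnityEngine.Mesh.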