Would you mind closing the project and deleting the Library folder of your project? This may fix your issue, as some files there could be incorrectly generated.
Thanks for your reply. Unfortunately, deleting the Library folder doesn't solve the issue. Once I reopen the project, I click Fix, set the build target to Android, and build and run the scene, which shows the same error. I tried using ML SDK 1.6.0 with no success either. What else should I look for besides checking my example implementation?
The example provided by the SDK works fine. I just redid the whole example (Simple Meshing setup) and now it compiles, but no mesh is displayed. Does the example include a meshing visualizer, or do I need to add another script to visualize the mesh? Thanks.
As I said above, the Meshing example works perfectly. It's the example from the link I included above that doesn't work out for me, but the SDK example pretty much provided everything I need.
For some context, I am trying to detect a small object with high accuracy. This object will be fluorescent, so it will emit ~700 nm light, and I will be using a longpass filter to block light below that wavelength. In this case, getting depth from the visual image and projecting it back seems to be the most straightforward solution.
Now I do have some questions:
How accurate is meshing? From the example, it looks like there are points that are off by a couple of centimeters.
Is there a way to improve the accuracy? Maybe by using another example or improving the algorithm?
How does meshing work in terms of hardware? Does the headset shoot out NIR light to get depth and then project the mesh using the distances?
Is there any example that uses pure vision from the camera to get depth and project a mesh back? (This is probably the most important question.)
Sorry if those are long questions that might have already been answered by someone random on a forum. It would be nice to hear answers from the developers of Magic Leap. As always, I really appreciate the help and support!
Thank you for the explanation, we will get someone to take a look at the guide and see if any changes need to be made. We will keep you updated if we add additional information to it.
Regarding the other questions:
We have not published surface-level accuracy figures for the depth sensor; the accuracy depends on the surface and the environmental conditions. For the meshing subsystem, our device generates voxels that are then combined into a mesh, with each voxel being about 4 cm.
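To illustrate why a ~4 cm voxel size implies centimeter-level mesh deviations, here is a minimal sketch (plain Python, not Magic Leap code; the binning scheme is a generic simplification, only the 4 cm figure comes from the answer above) of quantizing depth points into a voxel grid before a mesh is extracted:

```python
# Illustrative only: bin 3D points into a ~4 cm voxel grid, the rough
# resolution the meshing subsystem works at. Points that fall into the
# same voxel cannot be distinguished by the resulting mesh, which is why
# deviations of a couple of centimeters are expected.
import math

VOXEL_SIZE = 0.04  # meters (~4 cm, per the answer above)

def voxel_index(point, voxel_size=VOXEL_SIZE):
    """Map a 3D point (in meters) to its integer voxel coordinates."""
    x, y, z = point
    return (math.floor(x / voxel_size),
            math.floor(y / voxel_size),
            math.floor(z / voxel_size))

def bin_points(points, voxel_size=VOXEL_SIZE):
    """Group points by voxel; a mesher would build geometry per occupied voxel."""
    voxels = {}
    for p in points:
        voxels.setdefault(voxel_index(p, voxel_size), []).append(p)
    return voxels

# Two points about 1-2 cm apart land in the same voxel and become
# indistinguishable to the mesh; the third point is far enough away
# to occupy its own voxel.
grid = bin_points([(0.10, 0.22, 0.50), (0.11, 0.23, 0.51), (0.30, 0.22, 0.50)])
```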
The Unity Example script demonstrates how to adjust some of the mesh settings. These settings are a trade-off between performance, filtering, and density.
The Magic Leap has a ToF sensor on the front of the device, next to the RGB camera. This sensor emits light to measure the depth of the environment.
Developers can access the raw depth camera, and the depth data can then be used to generate a mesh; however, we do not provide an example of rendering a custom mesh based on the raw depth. You can use the 1.12.0 Unity Example project to see how to view the depth image from the sensor itself. We are working on adding this functionality to OpenXR in a future OS release using an OpenXR vendor extension.
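Since the raw depth data is exposed but no meshing example is provided, here is a rough sketch of the standard pinhole back-projection that turns a depth image into 3D points (plain Python, not Magic Leap API code; the intrinsics `FX`, `FY`, `CX`, `CY` are made-up values, and on device you would read the real camera intrinsics from the sensor's metadata):

```python
# Hypothetical pinhole intrinsics -- placeholders for illustration only.
# On the device, read the real values from the depth camera's metadata.
FX, FY = 200.0, 200.0   # focal lengths in pixels (made-up)
CX, CY = 160.0, 120.0   # principal point (made-up, for a 320x240 image)

def backproject(u, v, depth):
    """Back-project pixel (u, v) with metric depth (meters) into camera space.

    Standard pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    """
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return (x, y, z)

def depth_image_to_points(depth_image):
    """Turn a 2D depth array (rows of meters) into a list of 3D points.

    Skips invalid (zero or negative) depth readings. A mesh could then be
    built by triangulating neighboring pixels, but that is out of scope here.
    """
    points = []
    for v, row in enumerate(depth_image):
        for u, d in enumerate(row):
            if d > 0:
                points.append(backproject(u, v, d))
    return points
```

The resulting points are in the depth camera's coordinate frame; to place a mesh in the world you would still need to transform them by the sensor's pose at capture time.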