Neural Network Deployment on Magic Leap 2

Hello everyone,

My question is primarily about the theory and the practical experience you have gained while deploying neural networks on the ML2. There are several ways to deploy neural networks on the ML2, including solutions such as Barracuda, TensorFlow Lite, and ONNX Runtime. One can also try different runtime backends such as NNAPI or XNNPACK. I assume that some neural networks already run on the ML2, as in the case of hand tracking.

My questions:

  • How does Magic Leap recommend running neural networks on the ML2?
  • How can the GPU be used for such workloads? (I read that NNAPI is partly capable of using Vulkan for acceleration.)
  • What are your experiences in that regard?

I worked with Barracuda 3 and Unity during my thesis and had mixed experiences. Even minimal apps crashed after running for some time, and the Vulkan backend produced countless warnings that some resources were not freed correctly (even after I explicitly destroyed the input and output objects and recreated them on each pass, the problem persisted). My intuition is that a native application is much better suited for such purposes; incorporating OpenCV is also more convenient there.
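
For reference, here is a minimal sketch of the disposal pattern I ended up with in Barracuda (the `modelAsset` field, the 3-channel texture input, and reading the output back as a flat array are just placeholders for my setup):

```csharp
using Unity.Barracuda;
using UnityEngine;

public class BarracudaRunner : MonoBehaviour
{
    public NNModel modelAsset;   // placeholder: your serialized ONNX model asset
    private IWorker worker;

    void Start()
    {
        Model model = ModelLoader.Load(modelAsset);
        // ComputePrecompiled executes on the GPU via compute shaders (Vulkan on the ML2)
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, model);
    }

    public float[] RunInference(Texture2D frame)
    {
        // Create the input tensor for this pass and dispose of it when done
        using (var input = new Tensor(frame, channels: 3))
        {
            worker.Execute(input);
            // PeekOutput returns a tensor still owned by the worker; do not dispose it
            Tensor output = worker.PeekOutput();
            return output.ToReadOnlyArray();
        }
    }

    void OnDestroy()
    {
        // The worker holds GPU resources and must be disposed explicitly
        worker?.Dispose();
    }
}
```

Even with this pattern the Vulkan warnings kept appearing.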

Looking forward to your answers!

Alexander Dann

System processes like hand tracking run on a dedicated CVIP chip to help reduce the computational load. That said, it is possible to run additional neural networks on Magic Leap 2.

We do not have internal metrics comparing the different inference engines, but when experimenting on a few projects, I found that Barracuda performed worse than TensorFlow Lite. I have also heard that Unity recently released a successor to Barracuda that should offer better performance, but I have not used it myself.

The Barracuda package has been replaced by the Sentis package, which is in a closed beta phase. Refer to the Sentis documentation for more information. You can sign up for the closed beta.

Regarding performance, if you are performing a series of repetitive tasks, I recommend doing these calculations inside compute shaders.
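
For example, a typical setup is to write the repetitive pre- or post-processing step as a compute shader and dispatch it from C# each frame. The sketch below is only an illustration: it assumes a hypothetical `Preprocess.compute` asset with a `CSMain` kernel declared as `[numthreads(8, 8, 1)]`, and the texture names and 256x256 size are placeholders.

```csharp
using UnityEngine;

public class PreprocessDispatcher : MonoBehaviour
{
    public ComputeShader preprocessShader;   // placeholder: compute shader asset with a "CSMain" kernel
    public Texture inputTexture;             // placeholder: the frame to process each update
    private RenderTexture result;
    private int kernel;

    void Start()
    {
        kernel = preprocessShader.FindKernel("CSMain");
        // Output texture must allow random writes so the kernel can write to it
        result = new RenderTexture(256, 256, 0) { enableRandomWrite = true };
        result.Create();
    }

    void Update()
    {
        preprocessShader.SetTexture(kernel, "Input", inputTexture);
        preprocessShader.SetTexture(kernel, "Result", result);
        // One thread per pixel, assuming [numthreads(8, 8, 1)] in the kernel
        preprocessShader.Dispatch(kernel, result.width / 8, result.height / 8, 1);
    }

    void OnDestroy()
    {
        result?.Release();
    }
}
```

Keeping this kind of per-frame work on the GPU avoids repeated CPU readbacks and keeps the data close to where the inference runs.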

To get more details about neural network inference in Unity and the performance of the available engines, you may want to ask this question on the Unity Forum.

Thanks for your input, really appreciate it!