Hello everyone,
my question is primarily about the theory and the practical experience you have gained deploying neural networks on the ML2. There are several ways to attempt this, including solutions such as Barracuda, TensorFlow Lite, and ONNX Runtime. On top of that, one can try different runtime backends such as NNAPI or XNNPACK. I assume some neural networks already run on the ML2, for example for hand tracking.
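To make the options concrete, here is a minimal sketch of the kind of native pipeline I have in mind: TFLite's C++ API with the XNNPACK delegate for CPU inference. The model path, thread count, and float32 I/O are placeholder assumptions on my side, not anything ML2-specific:

```cpp
#include <cstdio>
#include <memory>

#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load a flatbuffer model; "model.tflite" is a placeholder path.
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) return 1;

  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter) return 1;

  // Route supported ops through XNNPACK (NEON-optimized CPU kernels).
  TfLiteXNNPackDelegateOptions opts = TfLiteXNNPackDelegateOptionsDefault();
  opts.num_threads = 4;  // assumption; tune for the ML2's CPU
  TfLiteDelegate* xnnpack = TfLiteXNNPackDelegateCreate(&opts);
  if (interpreter->ModifyGraphWithDelegate(xnnpack) != kTfLiteOk) return 1;
  if (interpreter->AllocateTensors() != kTfLiteOk) return 1;

  // Fill the first input tensor (assumed float32) and run inference.
  float* input = interpreter->typed_input_tensor<float>(0);
  input[0] = 0.0f;  // ...real preprocessing goes here
  if (interpreter->Invoke() != kTfLiteOk) return 1;

  const float* output = interpreter->typed_output_tensor<float>(0);
  std::printf("output[0] = %f\n", output[0]);

  // The delegate must outlive the interpreter that uses it.
  interpreter.reset();
  TfLiteXNNPackDelegateDelete(xnnpack);
  return 0;
}
```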
My questions:
- How does Magic Leap recommend deploying neural networks on the ML2?
- How can the GPU be used for such workloads? (I have read that NNAPI is partly capable of using Vulkan for acceleration; see the sketch after this list.)
- What are your experiences in that regard?
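Regarding the GPU question, this is roughly how I would expect to hand an interpreter over to NNAPI; whether the vendor driver on the ML2 then dispatches to the GPU (via Vulkan or otherwise) is exactly the part I cannot verify. The helper name is mine, and reusing `interpreter` from the sketch above is assumed:

```cpp
#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
#include "tensorflow/lite/interpreter.h"

// Try to offload the graph to NNAPI; returns true on success.
// Whether the NNAPI driver then runs on GPU, DSP, or CPU is entirely
// driver-dependent -- the part I am asking about.
bool UseNnapi(tflite::Interpreter& interpreter) {
  tflite::StatefulNnApiDelegate::Options options;
  // Hint that we prefer sustained throughput over low power draw.
  options.execution_preference =
      tflite::StatefulNnApiDelegate::Options::kSustainedSpeed;
  // NOTE: the delegate must stay alive as long as the interpreter
  // uses it; `static` keeps this sketch short.
  static tflite::StatefulNnApiDelegate delegate(options);
  // Unsupported ops fall back to the default CPU path automatically.
  return interpreter.ModifyGraphWithDelegate(&delegate) == kTfLiteOk;
}
```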
I worked with Barracuda 3 in Unity during my thesis and had mixed experiences. Even minimal apps crashed after running for a while, and the Vulkan backend emitted countless warnings about resources not being freed correctly (the problem persisted even after I explicitly destroyed the input and output objects and recreated them on every pass). My intuition is that a native application is much better suited for this purpose; integrating OpenCV is also more convenient there.
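For completeness, this is the kind of OpenCV-to-tensor hand-off I mean, which is straightforward in native code. The 224x224 RGB float input and the `FeedFrame` helper are my own assumptions about a generic image model, not anything from the ML2 SDK:

```cpp
#include <cstring>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

#include "tensorflow/lite/interpreter.h"

// Copy a BGR camera frame into a float32 NHWC input tensor.
void FeedFrame(const cv::Mat& bgr_frame, tflite::Interpreter& interpreter) {
  cv::Mat rgb, resized, floats;
  cv::cvtColor(bgr_frame, rgb, cv::COLOR_BGR2RGB);
  cv::resize(rgb, resized, cv::Size(224, 224));
  resized.convertTo(floats, CV_32FC3, 1.0 / 255.0);  // uint8 -> [0,1]

  // Both buffers are contiguous here, so a raw copy is safe.
  float* input = interpreter.typed_input_tensor<float>(0);
  std::memcpy(input, floats.ptr<float>(0),
              floats.total() * floats.elemSize());
}
```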
Looking forward to your answers!
Alexander Dann