I am trying to build the llama.cpp project for Magic Leap. I have already deployed LLMs with llama.cpp using only the CPU. Now I want to add GPU acceleration.
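For context, the CPU-only build follows the standard Android NDK cross-compile flow, roughly like this (the NDK path and API level here are illustrative — adjust them to your setup):

```shell
# Cross-compile llama.cpp for Android x86_64 (the Magic Leap 2 SoC is x86_64).
# $ANDROID_NDK is assumed to point at an installed Android NDK.
cmake -B build-android \
  -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=x86_64 \
  -DANDROID_PLATFORM=android-29 \
  -DCMAKE_BUILD_TYPE=Release
cmake --build build-android --config Release -j
```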
I wonder which GPU the Magic Leap 2 uses. As far as I know, it is an AMD GPU, but I need more details, since I have to choose one of the backends from here.
Here is a link to the Magic Leap 2 specs: Hardware Specs | MagicLeap Developer Documentation
Note: we do not support GPU-accelerated inference on our device.
Thank you for the information! I would like to understand why LLMs cannot be loaded onto the GPU.
Am I correct in understanding that direct code access to the GPU for neural-network inference or other computational tasks is not possible on Magic Leap? Does this include being unable to use hipBLAS or OpenCL to communicate with the GPU directly?
Is it necessary to use only the official SDK and APIs provided by Magic Leap to leverage the GPU for any form of graphics or compute acceleration? And are those APIs and tools designed primarily for graphics rendering rather than computational inference?
It comes down to hardware/driver support for this task. The Magic Leap 2 has a non-tiled GPU, which is less commonly found in mobile devices.
If you wish to perform GPU-related tasks, you should be able to use OpenGL and Vulkan compute shaders. Khronos announced OpenCV 4.0 with Vulkan support here: OpenCV 4.0 released with Vulkan and OpenCL support - The Khronos Group Inc - although you may need to compile it yourself.
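Since llama.cpp ships a Vulkan compute backend, one way to experiment is to build with that backend enabled. This is untested on the Magic Leap 2 — driver support may well be the limiting factor — and the flag name below matches recent llama.cpp versions (older releases used a different flag):

```shell
# Build llama.cpp with its Vulkan compute backend enabled.
# Requires Vulkan headers/loader on the build host; whether the
# Magic Leap 2 driver accepts the generated compute work is not guaranteed.
cmake -B build -DGGML_VULKAN=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j
```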
That said, the Magic Leap 2 CPU is powerful, and it does support running quantized models, which might be enough to improve the performance of your application.
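For example, a quantized GGUF model can be run entirely on the CPU with a command along these lines (the model filename, prompt, and thread count are illustrative):

```shell
# Run a 4-bit quantized model on the CPU with 8 threads.
# models/model-Q4_K_M.gguf is a placeholder path, not a real file.
./build/bin/llama-cli -m models/model-Q4_K_M.gguf \
  -p "Hello from Magic Leap 2" -n 64 -t 8
```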
Thank you. I wonder whether Termux can be used on Magic Leap.
You can try installing the x86_64 version of the APK on Magic Leap and testing the functionality.