Using Remote Rendering with a Native Desktop Application


We created a desktop nanostructure visualization app and have developed a demo for HoloLens 2 using Holographic Remoting. Is it possible to port it to Magic Leap via Magic Leap Remote Rendering?

While we have managed to connect the render loop to the OpenXR runtime and display our demo on the Magic Leap, we would also like to use parts of the Magic Leap C API to access microphone input for processing speech commands. Is this even possible?


Hi Nejc,

Welcome to the Magic Leap forums. Unfortunately, you cannot call the ML C APIs from an application running on a remote machine; OpenXR is the only ML API surface supported by Magic Leap Remote Rendering. There has been some interest in using microphone input with remote applications, and our remote rendering and audio teams are actively exploring solutions, but mic input is not currently available to remote applications. Audio output, however, is available: audio sent to the default output device in Windows is streamed to the Magic Leap.



Hi Adam,

I appreciate your prompt reply. It's disappointing to learn that audio input isn't currently available, but I'm hopeful this feature will be implemented in the future. Having audio input would go a long way toward making Magic Leap usable in remote application setups.

Best regards,
