Best practices for cloud rendering

We are planning to develop interactive AR content using Magic Leap 2. Could you please provide us with best practices for implementing the following features:

  • High-quality content can be rendered in the cloud.
  • Multiple people can experience the same AR content simultaneously.
  • The user's actions can affect the content.

Also, we have heard that ML2 AR Cloud allows multiple people to experience the same content, but we are wondering whether the rendering happens in the cloud or on each individual Magic Leap 2 device?

Best Regards

@kk55555 We have some (brand new) documentation on Remote Rendering services, found here:

Your second, third, and fourth questions will require disambiguation, and will be answered as one response:

  • Tele-present, Co-present, or both?
    In either case, we have a draft guide on achieving this using Photon networking for Unity; it has not been released yet, but will be very soon. You will also need to understand the concept of Spatial Anchors in order to have virtual content persist in reliable positions in the real world. Our AR Cloud offering has several deployment options (remote cloud or "edge") and can automatically sync Spatial Anchors between headsets, which virtual content is then "pinned"/"anchored" to. Remote Rendering handles the rendering of content. We do not currently support combining Remote Rendering with AR Cloud, as we only provide Remote Rendering documentation for Unreal Engine 5 (with Omniverse coming soon), neither of which supports Spatial Anchors yet.

Both Remote Rendering and AR Cloud require a minimum of a Developer Pro license for each Magic Leap 2 headset that will use them.

Thanks for your reply.

So ML2 AR Cloud offers the function of syncing virtual content between multiple headsets, but does not offer cloud rendering?

I understand that the combination of AR Cloud and Remote Rendering is not offered now,
and that it's not possible to create cloud-rendered interactive content for multiple players in the same place with them.
Instead of AR Cloud and Remote Rendering, is it possible to cloud-render AR content for ML2s using CloudXR or some other service?

@kk55555 AR Cloud does not sync content; it syncs the Anchor system between devices. Here are the components your app will need, and the solution for handling each:

  • Shared frame of reference: Spatial Anchors synced via AR Cloud
  • Shared, networked interactive content: Photon Engine or comparable networking library
  • Cloud-rendered content: Remote Rendering (a Magic Leap application required to render content in the cloud; not currently offered in combination with Spatial Anchors, as platform support has not yet been implemented) together with either:
    • NVIDIA Omniverse (documentation coming soon)
    • Unreal Engine 5
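
The division of labor above can be sketched minimally. The snippet below is a hypothetical anchor-relative state message in Python; the function and field names are illustrative assumptions, not Magic Leap or Photon APIs. The point is that the networking layer only needs to carry content state expressed relative to a shared Spatial Anchor; each headset resolves that anchor locally (via AR Cloud) and renders from its own viewpoint.

```python
import json

def encode_update(anchor_id, position, rotation, state):
    """Serialize content state relative to a shared Spatial Anchor.

    position: (x, y, z) in metres, in the anchor's frame.
    rotation: quaternion (x, y, z, w), in the anchor's frame.
    state:    arbitrary JSON-serializable app-level interaction state.
    """
    return json.dumps({
        "anchor": anchor_id,
        "pos": list(position),
        "rot": list(rotation),
        "state": state,
    })

def decode_update(message):
    """Parse an update on a receiving headset. The caller then maps the
    anchor-relative pose into its own frame using its local copy of the
    same anchor (kept in sync by AR Cloud)."""
    msg = json.loads(message)
    return msg["anchor"], tuple(msg["pos"]), tuple(msg["rot"]), msg["state"]
```

Because every headset holds its own synced copy of the anchor, no device needs to know another device's coordinate frame, and the payload is independent of whether frames are rendered locally or remotely.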

Hi there, I'm one of the leads for Remote Rendering, and if you wish to use our solution I may be able to provide some extra context:

Over the next several releases, Remote Rendering will be increasing the number of spatial capabilities available to remote applications, eventually matching the capabilities available to an app running locally on your Magic Leap 2.

In a multi-user scenario, the key challenge is getting all ML2s to agree on a common spatial origin; spatial features such as marker tracking, anchors, etc. are what enable you to create those experiences properly.
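
To make "agree on a common spatial origin" concrete, here is a self-contained sketch using plain 4x4 rigid transforms in Python; nothing here is a Magic Leap API, and the poses are made-up numbers. Each device observes the shared marker/anchor at some pose in its own local frame; expressing content relative to that shared pose lets every other device recover the same physical placement:

```python
import math

def mat_mul(a, b):
    # 4x4 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(t):
    # Inverse of a rigid transform: rotate by R^T, translate by -R^T p.
    r = [[t[j][i] for j in range(3)] for i in range(3)]
    p = [t[i][3] for i in range(3)]
    ip = [-sum(r[i][k] * p[k] for k in range(3)) for i in range(3)]
    return [r[0] + [ip[0]], r[1] + [ip[1]], r[2] + [ip[2]], [0.0, 0.0, 0.0, 1.0]]

def pose(yaw_deg, x, y, z):
    # Rigid pose: rotation about the vertical axis plus a translation.
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    return [[c, 0.0, s, x], [0.0, 1.0, 0.0, y],
            [-s, 0.0, c, z], [0.0, 0.0, 0.0, 1.0]]

# Device A sees the shared anchor at this pose in its own world frame...
anchor_in_a = pose(90.0, 2.0, 0.0, 1.0)
# ...and places content somewhere in its own frame:
content_in_a = pose(0.0, 0.0, 0.0, -1.0)
# What gets shared over the network is the anchor-relative pose:
content_rel_anchor = mat_mul(rigid_inverse(anchor_in_a), content_in_a)

# Device B sees the same anchor at a different local pose...
anchor_in_b = pose(-30.0, -1.0, 0.0, 4.0)
# ...and recovers the content's pose in its own frame:
content_in_b = mat_mul(anchor_in_b, content_rel_anchor)
```

Both devices now place the content at the same physical spot even though their local coordinate systems differ; the anchor (or marker) is the only thing they have to agree on.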

In our 1.0 release, though, we do have some functionality to help enable multi-user remote rendering, although it is more of a workaround than a full solution. Via a special launch argument, you can configure multiple devices to treat an Aruco code as the "center of the world", so that all of them agree on what that center is without your application being explicitly aware of it. To be clear, this is not the smoothest solution, and I would not advocate relying on it long-term, but it may be suitable for evaluating your use case while waiting for this information to be properly exposed all the way up to your application. If you are interested, please let me know and I can explain how to enable it.

I do want to call out two more caveats based on your post:

  1. For the initial release of Remote Rendering, we are primarily focused on supporting remote rendering from a host computer on your local network, not from the cloud. We believe this should allow you to develop your solutions, and in the near future we will have cloud-based hosts as a first-class solution, ready for larger-scale deployment.
  2. Each Magic Leap 2 you wish to include in a shared session requires its own host computer to provide the Remote Rendering backend.

I hope this helps!