Offset of virtual content when using camera stream or capturing video

Give us as much detail as possible regarding the issue you're experiencing:
Good evening, I need to stream or capture video of my MR application, but in the captured video the virtual content is offset from where it appears from the user's perspective: the user sees the content in the right place, but in the video capture the virtual object sits a bit above where it should be.

I saw similar posts here suggesting that the secondary views feature could solve this problem, but that feature was introduced in SDK 2.3.0, and the Digital Eyewear sample from Vuforia that I'm using relies on SDK 2.1.0.

I also came across this post: Mixed Reality Camera Stream, which suggests Stabilization Overview | MagicLeap Developer Documentation. I'm wondering whether there is any more information on how to implement this stereo convergence point in my Unity scene. Is the solution to add the script at the end of that page, and if so, where should that script be attached?

Or is there anything else I could implement to get the virtual content to align correctly in the camera stream?

Any help appreciated!

Unity Editor version: 2022.3.42f1
ML2 OS version: 1.8.0
Unity SDK version: 2.1.0
Host OS: Windows


Why the offset occurs

This offset occurs because, when the secondary view is disabled, Magic Leap 2 composites the mixed reality stream by reusing the virtual image rendered for the left eye and attempting to align it with the RGB camera, which is located at the center of the headset. When the secondary view is enabled, however, Magic Leap 2 performs an additional render pass that renders the virtual content from the perspective of the RGB camera itself, resulting in better alignment.

Stereo Convergence Point

Note that without using OpenXR and the secondary view feature, some level of offset between physical and digital content is expected. That being said, adjusting the Stereo Convergence Point can help improve the alignment. Objects that are at or near the convergence point will appear more aligned, while those further away may still seem offset.

The script you mentioned from the Stabilization Overview doc improves the perceived stability of virtual content as the user moves through your environment, but it won’t provide the most accurate alignment. Since it uses the user’s fixation point, virtual content can shift in the Mixed Reality capture as the user’s gaze moves between different points of interest.

Setting the Stereo Convergence Point

Instead, you should consider manually setting the Stereo Convergence Point to the object that needs to appear most aligned. This way, you’ll have more control over how your content is presented in the capture.

To do this, ensure that the Magic Leap Camera component is attached to your main camera, and then set the Stereo Convergence Point to the object you want to align most accurately with the physical world.
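
As a concrete example, here is a minimal sketch of a script that does this at runtime. It assumes the MagicLeapCamera component from the Magic Leap Unity SDK and its StereoConvergencePoint property (a Transform); the class and field names SetConvergencePoint and focusTarget are mine, and the exact namespace may vary between SDK versions:

```csharp
using UnityEngine;
using UnityEngine.XR.MagicLeap; // namespace may differ in your SDK version

// Minimal sketch: assigns the Stereo Convergence Point to the object that
// should appear best aligned in the Mixed Reality capture. Assumes the
// MagicLeapCamera component exposes a StereoConvergencePoint Transform
// property; verify against your SDK version.
public class SetConvergencePoint : MonoBehaviour
{
    [SerializeField] private Transform focusTarget; // e.g. your main virtual object

    private void Start()
    {
        var mlCamera = Camera.main.GetComponent<MagicLeapCamera>();
        if (mlCamera != null && focusTarget != null)
        {
            mlCamera.StereoConvergencePoint = focusTarget;
        }
        else
        {
            Debug.LogWarning("MagicLeapCamera component or focus target missing.");
        }
    }
}
```

Attach the script to any GameObject in the scene and assign the focus target in the Inspector; you can also reassign StereoConvergencePoint at runtime if a different object becomes the point of interest.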

For more information on how the Stereo Convergence Point works, here’s a useful guide:

Let me know if you need any further clarification or assistance!

Stereo Convergence vs Focus Distance

When reading the documentation, you may notice that both Stereo Convergence and Focus Distance are used to describe this API.

In Unity, the Stereo Convergence Point is represented as a transform in 3D space, while in C++, it's defined as the focus distance at which the virtual content appears most stable to the user. Unity handles the conversion from point to distance automatically.
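
To make that conversion concrete, here is a rough illustration of the kind of math involved; this is purely illustrative (the helper PointToFocusDistance is mine, and the SDK's internal computation may differ):

```csharp
using UnityEngine;

// Illustrative only: approximates how a convergence point (a Transform)
// could map to a focus distance along the camera's forward axis.
public static class ConvergenceMath
{
    public static float PointToFocusDistance(Camera cam, Transform convergencePoint)
    {
        // Project the camera-to-point vector onto the camera's forward
        // direction to obtain the focus distance.
        Vector3 toPoint = convergencePoint.position - cam.transform.position;
        return Vector3.Dot(toPoint, cam.transform.forward);
    }
}
```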

Hi,

Thanks for the reply and detailed answer!

I also reached out to Vuforia about issues with running Magic Leap SDK 2.3.0 alongside the Digital Eyewear sample from the Unity Asset Store, and the solution is here in case anyone else runs into the same problem:

Best,
Thorbjorg
