Device Video Capture

Hi there. I've been having this issue for a long time now, and the latest 1.5 update didn't fix it.
When capturing video, the virtual content isn't aligned with reality the way it is for the user. In the provided video the blue outlines are shifted a lot from the square holes in the wall, while for the user looking through the glasses they are perfectly aligned. Any fix for that?

Hi @maks,

Thank you for reaching out regarding this question. Check out this solution. It may point you in the right direction.

Let me know if you have any further questions.



It is something that has affected the device from the start. It is a common problem across various OST-HMDs, since generally the image rendered for the left eye is superimposed onto the RGB image from the RGB camera. This obviously results in misalignment; the solution would be an additional rendering pass that renders the holograms from the RGB camera's perspective. HoloLens 2 has solved this issue, but it is not clear to me how, in the sense that I still have to work out whether the fix comes from the MRTK, from OpenXR, or from Vuforia Image Tracking. The latter can, I think, be excluded, but the first time it was reported as solved was in a Vuforia forum thread.
The possibility to enable/disable this additional rendering pass for the RGB camera perspective could be a useful feature. Obviously it would have some performance drawbacks, but I think it could be viable for applications that are not too performance hungry. Moreover, it could also be a nice feature to integrate with Remote Rendering, since in that case the GPU performance issue should be limited.
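To illustrate the geometry, here is a toy pinhole-camera sketch in Python (the baseline and intrinsics are made-up numbers for illustration, not Magic Leap calibration data): projecting the same 3D point through the left-eye view and through an RGB camera mounted at a small offset yields different pixel coordinates, and the gap grows as the point gets closer.

```python
import numpy as np

def project(point_world, cam_pos, f=800.0, cx=640.0, cy=360.0):
    """Pinhole projection of a 3D point, camera axes aligned with world axes."""
    p = np.asarray(point_world, dtype=float) - np.asarray(cam_pos, dtype=float)
    return np.array([f * p[0] / p[2] + cx, f * p[1] / p[2] + cy])

left_eye = np.array([0.0, 0.0, 0.0])
rgb_cam = np.array([0.04, 0.02, 0.0])  # hypothetical offset: 4 cm right, 2 cm up

for z in (0.5, 2.0, 10.0):
    point = [0.0, 0.0, z]  # a point straight ahead at depth z metres
    shift = project(point, left_eye) - project(point, rgb_cam)
    print(f"depth {z:4.1f} m -> overlay shift {shift} px")
```

Because the shift depends on the depth of each point, the only exact fix is the extra render pass from the RGB camera's pose; any single 2D shift of the left-eye image can be exact at one depth only.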

Yeah, I've tried this convergence point solution, but it didn't work for me, and on top of that it introduced some instability to the virtual image: sometimes it just jumps.

Why don't you guys just shift it on your side if the shift amount is known? The system uses virtual content rendered for one of the eyes and overlays it on the RGB camera image. So why not just shift it once and make every user happy?

As @stradiot95 stated, the image is taken from the left eye and superimposed onto the RGB image. Due to the different positions of the RGB camera and the left eye, the two views do not line up. The focus point is then used to align the image to the real world as well as possible.
That said, we will share your feedback with our voice of customer team.

Could you elaborate more on what you mean by this?



The shift wasn't gone. Still there.

Thank you for your feedback. I will keep you updated as we learn more.



@maks Are you using the OpenXR workflow or the XR SDK? When you set the object as the stereo convergence point on the Magic Leap Camera component, is the offset reduced? How far away is the object when the offset is still visible even with the object set as the stereo convergence target?

@kbabilinski I'm using XR SDK.
The horizontal offset is reduced, yes, but the vertical stays the same. The object is about 2 meters away.
What I discovered while playing with the custom focus point of the Magic Leap Camera component is that adjusting the X and Y coordinates of the custom focus point (an empty game object) doesn't have any effect on the shift. Only changing the Z coordinate changes the shift, but it changes diagonally, which helps mitigate either the horizontal or the vertical shift separately, not both together, since the diagonal movement isn't aligned with the shift itself. Please see the video; it was taken in Simulator mode while changing the focus point's Z coordinate.
It seems that a solution to this issue could be either to let developers and users record XR video into two separate files (the RGB camera background and the virtual content), so the two can be combined afterwards with the shift eliminated, or to offer more options for setting the focus position than the Z coordinate alone.
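The diagonal behaviour described above is consistent with a simple parallax model (again a sketch with invented baseline and focal-length numbers, not device calibration): if the RGB camera is offset from the rendering eye both horizontally and vertically, the compensation shift always points along that fixed baseline direction, and moving the focus point in Z only scales its magnitude.

```python
import numpy as np

def pixel_shift(depth, baseline=(0.04, 0.02), f=800.0):
    """Pixel offset between two pinhole cameras separated by `baseline` (metres)
    for a point straight ahead at `depth` metres."""
    bx, by = baseline
    return np.array([f * bx / depth, f * by / depth])

for z in (0.5, 1.0, 2.0, 4.0):
    s = pixel_shift(z)
    print(f"z={z:3.1f} m  shift={s} px  direction={s / np.linalg.norm(s)}")
```

The printed direction is identical at every depth; only the length of the shift changes, which matches the overlay moving diagonally as the focus point's Z coordinate is tuned.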

Thank you for that information. You are correct. The focus point is only influenced by the distance from the user. Ideally, the point would be positioned directly at the distance of the virtual object.

Unfortunately, simply offsetting the content by a fixed amount would not make the content align properly. However, we are investigating ways to decrease the offset amount.
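A small numerical sketch of why a fixed offset cannot work (toy focal length and baseline, chosen only for illustration): the parallax shift is inversely proportional to depth, so a constant pixel offset calibrated for one depth leaves a residual error at every other depth.

```python
def parallax_px(depth, baseline=0.04, f=800.0):
    """Horizontal pixel parallax between two cameras `baseline` metres apart."""
    return f * baseline / depth

# Calibrate a fixed shift so content at 2 m lines up perfectly...
fixed_offset = parallax_px(2.0)

# ...then check how far off content at other depths ends up.
for z in (0.5, 1.0, 2.0, 5.0):
    residual = parallax_px(z) - fixed_offset
    print(f"content at {z:3.1f} m -> residual error {residual:+5.1f} px")
```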

Secondary Views OpenXR Extension | MagicLeap Developer Documentation

I'm trying to understand how to activate those features, but they are not present in the feature list. As I understand it, they should perhaps be enabled through code? Or maybe by rebuilding the OpenXR package with those features enabled? Sorry, but I can't find any explanation on the web.
Thank you in advance for any suggestions!

I've partially understood it. The solution seems to work only for native development.
Is there a way to implement the solution from the OpenXR sample 1.7.0 in Unity development?

This feature will become available in the next Unity SDK release, along with the 1.8.0 OS.

Note that secondary views impact the performance of your application, so developers may need to reduce the resolution or graphics quality of their app when the secondary view feature is active.
