We currently face an issue with the MR recording feature on the Magic Leap 2: there seems to be an offset between where the ML2 user sees holograms and where holograms appear in a camera recording. If, for example, the user puts a virtual ball onto a red cross on a real table, the ball will not appear at that position in the recording.
It is very important for our application that the mixed reality content in the video we record/stream is aligned with the ML2 user's perspective.
As mentioned in the linked thread, this is handled on the HoloLens by being able to do the hologram composition from a different camera perspective.
We thus wanted to ask: Are there plans to improve the alignment of virtual and real content in the recording in the future?
El, to answer your question "Thank you for reaching out regarding this issue. If we were to add this feature in the capture API logic, how would you like to see it implemented?" The answer is, just fix the recorded offset. Arguably it's a bug. Thanks, Chris
To provide some background on how the capture works:
You are correct: the virtual content is offset because the RGB camera sits at a different position than the cameras that render the virtual content. To reduce the performance overhead of recording or streaming mixed reality content without rendering a third camera pass, the mixed reality stream overlays the virtual content rendered for the left eye, then distorts that image so it aligns with the RGB camera view as closely as possible. This warp can only be exact at one depth, so the virtual content becomes increasingly offset as it moves away from the camera's focus point. The focus point is where the content aligns most accurately.
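The depth-dependence of this offset can be sketched with a back-of-the-envelope parallax model. Note this is our illustration, not Magic Leap's actual warp: the baseline value between the left-eye render camera and the RGB camera is an assumption, and the compositor's real reprojection is more involved.

```python
def composite_offset_m(obj_dist_m, focus_dist_m, baseline_m=0.06):
    """Approximate lateral misalignment (metres) of a virtual object after
    the left-eye image is warped onto the RGB camera under the assumption
    that all content lies at focus_dist_m.

    Small-angle parallax: angular error ~ baseline * (1/d_obj - 1/d_focus);
    lateral error at the object ~ angular error * d_obj.
    The 6 cm baseline is a placeholder, not the real ML2 geometry.
    """
    angular_err = baseline_m * (1.0 / obj_dist_m - 1.0 / focus_dist_m)
    return abs(angular_err) * obj_dist_m

# An object exactly at the focus distance lines up perfectly:
print(composite_offset_m(1.0, 1.0))            # 0.0
# A close object with a far focus point drifts by several centimetres:
print(round(composite_offset_m(0.5, 2.0), 3))  # 0.045
```

The model matches the observed behaviour: the error vanishes at the focus depth and grows quickly for nearby objects when the focus point is far away.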
As mentioned in the previous point, you can reduce the offset of a specific object by setting the StereoConvergencePoint on the Magic Leap Camera Component inside Unity, or the Focus Point if you are developing in C++. Here is a guide that explains how to use the user's fixation point to control the focus point in Unity: Unity Stabilization Overview.
The example script will reduce the apparent offset of the virtual content that the user is fixating on. You can also set the StereoConvergencePoint to an object's location; this results in that object being aligned in the mixed reality recording regardless of what the user is looking at.
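The trade-off of pinning the convergence point to one object can be illustrated with the same small-angle parallax sketch (again, the baseline is an assumed placeholder, not Magic Leap's internals): the pinned object's offset goes to zero, while objects at other depths still drift.

```python
def composite_offset_m(obj_dist_m, focus_dist_m, baseline_m=0.06):
    # Small-angle parallax sketch: lateral error (metres) at the object
    # when the left-eye image is warped assuming depth focus_dist_m.
    # The 6 cm baseline is an assumption for illustration only.
    return abs(baseline_m * (1.0 / obj_dist_m - 1.0 / focus_dist_m)) * obj_dist_m

# Convergence pinned to an object 0.6 m away; check other depths.
focus = 0.6
for d in (0.6, 1.0, 3.0):
    print(f"object at {d} m -> offset {composite_offset_m(d, focus) * 100:.1f} cm")
```

So pinning the convergence point is the right call when one object must stay registered in the recording, at the cost of larger drift for content far from that depth.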
That said, I will share your feedback with our voice of customer team. Does the issue persist when setting the StereoConvergencePoint?
Hi Krystian, thanks, that really helps to understand the issue and potential fixes and we'll play with that. I think for our needs it may also be sufficient to have the device save the raw (split view) stream, and fix it in post. But, your explanation helps us to understand better what's going on, and why. Thanks for your advice!!
When adding the stereo convergence detector with default settings, this does not really seem to reduce the offset. We tried using "Show Debug Visuals" and compared the offset of virtual objects at the position of the debug visual in a recording against the user's perspective.
Especially at close distances, which are most important to us, the offset is still very noticeable: e.g. if an object sits at the corner of the table for the user, it appears about 10 cm from that corner in the recording.
(It's both a bit higher up the table and a bit to the right.)