When we have virtual objects on the ML screen that are aligned to or overlapping real-world objects, and we capture a CVCamera image in MR mode, the virtual objects appear at a different location on the captured image than where we see them on the ML device.
See the 3 sphere-pairs marked on the attached captured image, which should cover the real-world objects as we see them on the device screen.
Unity Editor version: 2022.3.11f1
ML2 OS version: 1.5.0
MLSDK version: 2.0.0
Host OS: Windows
The virtual content is offset due to the position of the RGB camera relative to the eye cameras that render the virtual content. To reduce the performance overhead of recording or streaming mixed reality content without rendering a third camera pass, the mixed reality stream overlays the virtual content rendered for the left eye and then distorts the image so that it aligns with the RGB camera as closely as possible. As a result, you will notice that the virtual content becomes offset as it moves away from the camera's focus point. The focus point is where the content aligns most accurately.
As mentioned in the previous point, you can reduce the offset of a specific object by setting the StereoConvergencePoint on the Magic Leap Camera component inside Unity, or the Focus Point if you are developing in C++. Here is a guide that explains how to use the user's fixation point to control the focus point in Unity: Unity Stabilization Overview.
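For reference, here is a minimal sketch of that approach. It assumes the MagicLeapCamera component from the ML2 Unity SDK exposes a StereoConvergencePoint Transform property, as the linked guide describes, and it approximates the fixation point with a head-pose raycast rather than the eye-tracking input the guide's example script uses:

```csharp
using UnityEngine;
using UnityEngine.XR.MagicLeap; // assumed namespace of the MagicLeapCamera component

// Sketch: keeps the stereo convergence point on whatever the user is facing,
// approximated here by a raycast along the head-pose forward vector.
public class ConvergencePointFromGaze : MonoBehaviour
{
    [SerializeField] private MagicLeapCamera magicLeapCamera; // component on the main camera
    [SerializeField] private float maxDistance = 10f;         // fallback focus distance

    private Transform proxy; // placeholder transform assigned as the convergence point

    private void Start()
    {
        proxy = new GameObject("ConvergencePointProxy").transform;
        magicLeapCamera.StereoConvergencePoint = proxy;
    }

    private void Update()
    {
        var head = Camera.main.transform;
        // Place the proxy on the first surface in front of the user,
        // or at maxDistance if nothing is hit.
        proxy.position = Physics.Raycast(head.position, head.forward, out RaycastHit hit, maxDistance)
            ? hit.point
            : head.position + head.forward * maxDistance;
    }
}
```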
The example script will reduce the apparent offset of the virtual content the user is fixating on. Alternatively, you can set the StereoConvergencePoint to a specific object's Transform; that object will then stay aligned in the mixed reality recording regardless of where the user is looking, as in the sketch below.
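A minimal sketch of that variant, again assuming the StereoConvergencePoint property described above:

```csharp
using UnityEngine;
using UnityEngine.XR.MagicLeap; // assumed namespace of the MagicLeapCamera component

// Sketch: pins the convergence point to a single target object so that
// this object stays aligned in MR captures regardless of the user's gaze.
public class ConvergencePointOnObject : MonoBehaviour
{
    [SerializeField] private MagicLeapCamera magicLeapCamera;
    [SerializeField] private Transform target; // e.g. a sphere overlaying a real-world object

    private void Start()
    {
        // The capture pipeline will keep virtual content nearest this
        // transform best aligned with the RGB camera image.
        magicLeapCamera.StereoConvergencePoint = target;
    }
}
```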
Let me know if this resolves your issue. If it does not, or if you need the image completely aligned, I am happy to share your feedback with our Voice of Customer team. I would just need a little more information on what you are trying to achieve and why it is important to your project. Additionally, would it be adequate to render the virtual content aligned with the real world if doing so incurred some performance overhead?
We recognize objects in the real world and place their 3D models at the same location. (The camera and sensor images, along with their metadata, are captured and sent to a server for processing.)
In the test documentation we would like to include videos/images that show what the user sees while using the application.
(For decision makers this is much more meaningful than numbers or charts showing positioning errors.)
In this special test situation some additional performance overhead would be acceptable. For example, if the CVCamera offered both an MR and an AlignedMR capture mode, we could use the new AlignedMR mode for these special cases, which would align the virtual content to the real world across the whole image.
Did setting the focus distance / stereo convergence point on the Magic Leap Camera provide a satisfactory result, or was the offset still too large for your documentation?