Hologram Composition Offset

To provide some background on how the capture works:

You are correct that the virtual content appears offset because of the physical position of the RGB camera relative to the cameras that render the virtual content. To reduce the performance overhead of recording or streaming mixed reality content without rendering a third camera pass, the mixed reality stream overlays the virtual content rendered for the left eye and then distorts the image so that it aligns with the RGB camera as closely as possible. However, the virtual content becomes more offset the farther it is from the camera's focus point; the focus point is where the content aligns most accurately.

As mentioned in the previous point, you can reduce the offset of a specific object by setting the StereoConvergencePoint inside Unity on the Magic Leap Camera Component, or the Focus Point if you are developing in C++. Here is a guide that explains how to use the user's fixation point to control the focus point in Unity: Unity Stabilization Overview.
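For reference, here is a minimal sketch of that approach. It assumes the Magic Leap Unity SDK's MagicLeapCamera component exposes StereoConvergencePoint as a Transform property (as referenced in the guide), that eye tracking has been enabled with the required permission, and it reads the fixation point through Unity's standard XR Eyes API; the class and field names are illustrative:

```csharp
using UnityEngine;
using UnityEngine.XR;
using UnityEngine.XR.MagicLeap; // assumed namespace for the MagicLeapCamera component

// Moves a proxy transform to the user's eye fixation point each frame and
// assigns it as the StereoConvergencePoint, so the mixed reality capture
// aligns with whatever the user is currently looking at.
public class FixationConvergencePoint : MonoBehaviour
{
    [SerializeField] private MagicLeapCamera magicLeapCamera;
    [SerializeField] private Transform convergenceProxy; // empty GameObject used as the focus target

    private void Start()
    {
        // Assumes eye tracking is enabled and the eye-tracking permission has been granted.
        // StereoConvergencePoint is assumed to be a Transform property, per the guide above.
        magicLeapCamera.StereoConvergencePoint = convergenceProxy;
    }

    private void Update()
    {
        var eyeDevice = InputDevices.GetDeviceAtXRNode(XRNode.CenterEye);
        if (eyeDevice.isValid &&
            eyeDevice.TryGetFeatureValue(CommonUsages.eyesData, out Eyes eyes) &&
            eyes.TryGetFixationPoint(out Vector3 fixationPoint))
        {
            // Fixation point is reported by the XR input system; move the proxy there
            // so the capture focus follows the user's gaze.
            convergenceProxy.position = fixationPoint;
        }
    }
}
```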

The example script will reduce the apparent offset of the virtual content that the user is fixating on. Alternatively, you can set the StereoConvergencePoint to a specific object's location; that object will then stay aligned in the mixed reality recording regardless of what the user is looking at, as in the sketch below.
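A hedged sketch of that variant, under the same assumption that StereoConvergencePoint is a Transform property on the MagicLeapCamera component (names are illustrative):

```csharp
using UnityEngine;
using UnityEngine.XR.MagicLeap; // assumed namespace for the MagicLeapCamera component

// Pins the StereoConvergencePoint to one chosen object so that object stays
// aligned in the mixed reality capture regardless of the user's gaze.
public class ObjectConvergencePoint : MonoBehaviour
{
    [SerializeField] private MagicLeapCamera magicLeapCamera;
    [SerializeField] private Transform targetObject; // the hologram that should stay aligned in capture

    private void Start()
    {
        magicLeapCamera.StereoConvergencePoint = targetObject;
    }
}
```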

That said, I will share your feedback with our voice of customer team. Does the issue persist when setting the StereoConvergencePoint?