Screenshot functionality

Unity Editor Version: 2022.3.21f1
Magic Leap OS Version: 1.5 (using OpenXR)
ML SDK Version: 2.0.0
Vuforia Version: 10.22.5

Hello! I am developing an app using Vuforia for image recognition on Magic Leap 2. I need to implement a feature that captures a screenshot of what the user sees, including both the real-world environment and virtual objects. However, I've encountered several issues while attempting this:

  1. I tried using this Magic Leap Camera Example script, but it didn’t work as expected.
  2. I explored this solution on the Magic Leap forum, which resulted in an image showing only the virtual objects over a black background, without the real-world environment.
  3. I then implemented a workaround where:
  • I captured the real-world scene to a texture.
  • I separately captured the virtual objects to another texture.
  • I used Graphics.Blit to combine these textures into a final image.

While this approach somewhat worked, the virtual objects remained fixed in the middle of the screenshot regardless of their actual positions in the scene. For instance, if I looked away from an Image Target, the objects still appeared in the screenshot as if they were stuck in front of the camera at their initial spawn position.
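For reference, a simplified sketch of that compositing step (the camera, material, and texture names here are illustrative placeholders rather than my exact code):

using UnityEngine;

// Simplified sketch of the two-pass compositing approach:
// render the virtual-only camera into a RenderTexture, then
// blit it over the real-world frame with an alpha-blending material.
public class CompositeScreenshot : MonoBehaviour
{
    [SerializeField] private Camera virtualCamera;    // renders only the virtual-content layers
    [SerializeField] private Material alphaBlendMat;  // shader with SrcAlpha/OneMinusSrcAlpha blending

    public Texture2D Composite(Texture2D realWorldFrame)
    {
        int w = realWorldFrame.width;
        int h = realWorldFrame.height;

        // Render the virtual content into an off-screen texture.
        RenderTexture virtualRT = RenderTexture.GetTemporary(w, h, 24, RenderTextureFormat.ARGB32);
        virtualCamera.targetTexture = virtualRT;
        virtualCamera.Render();
        virtualCamera.targetTexture = null;

        // Copy the real-world frame into the output, then blit the virtual layer on top.
        RenderTexture outputRT = RenderTexture.GetTemporary(w, h, 0, RenderTextureFormat.ARGB32);
        Graphics.Blit(realWorldFrame, outputRT);
        Graphics.Blit(virtualRT, outputRT, alphaBlendMat);

        // Read the combined result back into a Texture2D.
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = outputRT;
        Texture2D result = new Texture2D(w, h, TextureFormat.RGBA32, false);
        result.ReadPixels(new Rect(0, 0, w, h), 0, 0);
        result.Apply();
        RenderTexture.active = previous;

        RenderTexture.ReleaseTemporary(virtualRT);
        RenderTexture.ReleaseTemporary(outputRT);
        return result;
    }
}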

Goals:

I want to achieve functionality similar to Magic Leap’s Home+Bumper screenshot, where the output includes both the real-world environment and virtual objects in their correct positions. Additionally, I need the image to be saved directly to the app folder.

Is there a way to resolve the issue with virtual objects not aligning properly in the screenshot? Any guidance would be greatly appreciated!

Hey @gregkoutsikos,

The MLCamera API is currently marked as deprecated, so you may run into issues with it.

There are a few ways to go about getting Mixed Reality frame captures.
Note that for each of these, there is a capture mode setting that you can configure. Be sure to set it to capture mixed reality.

  1. MLCamera API
    Can you elaborate on what you mean by "didn't work as expected"? You can capture Mixed Reality content using the MLCamera API. See the 1.12.0 Unity Example Project and its Camera Capture scene for how to save the capture to storage (a minimal save-to-disk sketch is at the end of this post).
    API Overview:
    MLCamera Overview | MagicLeap Developer Documentation
    Camera Capture Sample :
    MagicLeapUnityExamples/Assets/MagicLeap/Examples/Scripts/CameraCapture/CameraCaptureExample.cs at release-sdk-1.12.0 · magicleap/MagicLeapUnityExamples · GitHub

  2. Pixel Sensor API
    You can make use of Magic Leap's Pixel Sensor API to access the various cameras on the ML2 device and obtain data from them.
    You can read up on the API here:
    API Overview | MagicLeap Developer Documentation
    Mixed Reality Capture Mode settings:
    API Overview | MagicLeap Developer Documentation

  3. Android Camera2 API
    You can also use the Android Camera2 API, which is compatible across many Android devices.
    You can find more info on that here:
    Camera2 | MagicLeap Developer Documentation
    Mixed Reality Capture mode settings:
    Camera2 | MagicLeap Developer Documentation

Since you are just looking for screen capture, using the Android Camera2 API might be the easiest to implement, but you can use whichever API you like!
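Whichever API you go with, writing the final frame into the app's private folder can be as simple as the sketch below, assuming the captured frame has already been converted to a Texture2D (the ScreenshotSaver class here is just illustrative):

using System;
using System.IO;
using UnityEngine;

public static class ScreenshotSaver
{
    // Writes a captured frame as a PNG into the app's private storage.
    // Application.persistentDataPath maps to the app-specific files directory
    // on device, so no extra storage permission is needed.
    public static string SavePng(Texture2D frame)
    {
        byte[] png = frame.EncodeToPNG();
        string fileName = $"capture_{DateTime.Now:yyyyMMdd_HHmmss}.png";
        string path = Path.Combine(Application.persistentDataPath, fileName);
        File.WriteAllBytes(path, png);
        Debug.Log($"Saved screenshot to {path}");
        return path;
    }
}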

You can see the Mixed Reality Capture Example Script for Unity in this post: Questions about eye tracking and Pixel Sensors since Oct Release OS 1.10.0 - #16 by kbabilinski

As for the offset, I'm not sure about your custom implementation, but the physical camera is not directly at eye level, which could cause an offset if you overlaid the content from the Main Camera. In OpenXR you can enable Secondary View in the OpenXR Features, inside your Project Settings, which renders a secondary camera at the position of the RGB camera to capture the virtual content without an offset.

Note this uses additional resources, but can be helpful when using the MR Capture methods that @cfeist mentioned.

I encountered issues sharing camera access between the MLCamera API and Vuforia: after granting camera control to one of them, the other stopped working. I appreciate your responses! I attempted to use the Android Camera2 API as a solution, but I likely made a mistake because I couldn't get it to work. I also tried the script referenced by @kbabilinski, and after some refactoring to manage camera control, it seems to work. While there is some displacement, it correctly captures everything. I plan to experiment further with the Secondary View feature (if I'm not mistaken, it requires ML SDK version 2.5?).

I tried using Secondary View while using this script. After I enabled it in the OpenXR settings, I ran the application with the following script:

using UnityEngine;

// Logs whether the OpenXR Secondary View (the extra RGB-camera viewpoint) is currently active.
public class SecondaryViewStatus : MonoBehaviour
{
    private void Update()
    {
        Debug.Log($"Secondary View active = {MLXrSecondaryViewState.IsActive}");
    }
}

However, I only got False as a result. Can I somehow enable Secondary View at runtime, or am I missing something about how to enable it?

Thank you for pointing this out. This flag will only be true when Capture is active.


The issue regarding Vuforia might have been caused by trying to access the same camera stream twice. Magic Leap provides two streams: the Main Camera and the CV Camera. So using the default example from the developer portal would have blocked the CV Camera stream from being used by Vuforia.
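For illustration, the stream is chosen in the MLCamera connect context; a rough sketch (member names follow the older MLCamera-based Unity examples and may differ or be deprecated in your SDK version):

using UnityEngine;
using UnityEngine.XR.MagicLeap;

public class MixedRealityCaptureConnect : MonoBehaviour
{
    private MLCamera captureCamera;

    private async void Start()
    {
        // Sketch only: connect the capture to the Main stream in Mixed Reality mode,
        // leaving the CV Camera stream free for Vuforia.
        MLCamera.ConnectContext context = MLCamera.ConnectContext.Create();
        context.CamId = MLCamera.Identifier.Main;   // the other stream is MLCamera.Identifier.CVCamera
        context.Flags = MLCamera.ConnectFlag.MR;    // capture real-world and virtual content composited
        context.EnableVideoStabilization = false;

        captureCamera = await MLCamera.CreateAndConnectAsync(context);
        Debug.Log(captureCamera != null ? "MLCamera connected (Main, MR)" : "MLCamera connect failed");
    }

    private async void OnDestroy()
    {
        // Release the stream so other consumers (e.g. Vuforia) can use the cameras again.
        if (captureCamera != null)
        {
            await captureCamera.DisconnectAsync();
        }
    }
}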

So I guess it's not possible to use Secondary View while also using Vuforia? Or did I do something wrong while initializing Secondary View? The MLXrSecondaryViewState.IsActive check ran in an Update function, and I was expecting its flag to become true even for a single frame while I was capturing, but it didn't. Is there any way to enable the feature on demand at runtime? Right now I disable Vuforia right before capturing a screenshot and re-enable it right after (see the sketch below). I suppose that if I can control the Secondary View feature at runtime, I can do the same and maybe get it to work. Otherwise, after some testing I believe (correct me if I am wrong) that the virtual object displacement depends on the distance between the object and the user. Maybe I could set a custom offset for the virtual items, require a certain distance to capture the screenshot, and then use that offset to place the virtual objects using a matrix.
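For reference, the pause/resume workaround currently looks roughly like this (assuming the standard VuforiaBehaviour is in the scene; CaptureMixedRealityFrame is just a placeholder for the actual capture call):

using System.Collections;
using UnityEngine;
using Vuforia;

public class CaptureWithVuforiaPaused : MonoBehaviour
{
    public IEnumerator CaptureRoutine()
    {
        // Release the camera so the capture API can take over the stream.
        VuforiaBehaviour.Instance.enabled = false;
        yield return null; // give Vuforia a frame to let go of the camera

        // Placeholder for the actual Mixed Reality capture (e.g. the script
        // from the linked forum post).
        yield return CaptureMixedRealityFrame();

        // Hand the camera back to Vuforia so tracking resumes.
        VuforiaBehaviour.Instance.enabled = true;
    }

    private IEnumerator CaptureMixedRealityFrame()
    {
        // Hypothetical capture step; replace with the real capture logic.
        yield return null;
    }
}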

Are you capturing the image in Mixed Reality Mode?

Yes, I need both the real world and the virtual objects. That's why I need the Secondary View: to align the virtual objects and remove the offset.

Just to make sure I understand everything,

When using the Mixed Reality Capture from the script I linked earlier, the camera will not start properly if Vuforia is running? Additionally, MLXrSecondaryViewState.IsActive remains false even when the Mixed Reality Capture is active.

Do you mind trying to start and use the system capture, to see whether it works while using Vuforia and whether MLXrSecondaryViewState.State reports the correct value when the capture initializes? System Capture | MagicLeap Developer Documentation

When I just tested the script, with secondary view enabled in the project settings, it returned true when Mixed Reality Capture was active on my end :thinking:

I tested this. I can take a screenshot using either the script you provided earlier or the Mixed Reality Capture (the Home+Bumper combination). Both methods work, but neither enables Secondary View, so the virtual objects have some displacement. I also tested MLXrSecondaryViewState.IsActive, and it returns false even while I am capturing. So right now the only problem left to solve is the displacement of the virtual objects.

Do you see the same issue when capturing a video?

Yes, the secondary view does not work on Video Capture either.

Interesting :face_with_monocle: Would it be possible for you to provide additional information on how to reproduce this, starting from the pre-configured Unity Example Project?

Testing in the latest Magic Leap Unity Examples Project, using the script I provided earlier, I am not able to reproduce your results.

The issue might stem from using a plain Main Camera instead of an XR/ML/AR rig or an XR Origin component. When I attempt to use one of these setups, MRTK throws an error stating it cannot find the camera tagged as MainCamera, even though the tag is correctly assigned. What components or configuration does the Secondary View require to function properly?

I think the issue is related to how you are capturing the photo. Instead of using the screenshot API in Unity, I recommend using the MLCamera API configured for Mixed Reality capture.