Camera Access while Streaming

I have a concern about the functionality of the camera and the way streaming has been implemented on this platform. I ran into this issue the other day when building an app for a hackathon. The app I was building relied heavily on access to the camera. We connect to the camera and capture images while mapping a space and placing virtual markers. I had planned to share this experience with others by opening a device stream in ML Hub to my computer. However, when I attempted this while running my app, it failed to connect. I then closed my app and restarted streaming successfully. When I opened my app, I was unable to connect to the camera. Clearly the camera was already in use.

This is a concern. I would expect the streaming functionality to exist at a much lower level in the OS: the OS would maintain control of the camera, take the camera stream and send it wherever it wishes, and then wrap that functionality so that apps could still access the camera after the OS had performed its processing.

This was a real problem for us. We ended up adapting by shoving a smartphone (camera side out) between the ML display and my head and taping it in place so that I could have my hands free to record a demo. Please don't tell me this is the plan going forward. Please tell me there is a plan to change the way the camera is handled by the OS to allow streaming and app access to the camera simultaneously. While I was able to get by in our hackathon, I have no credible way of sharing our app experiences with anyone who is remote. And I certainly won't be impressing any executives with a smartphone taped to my head.

I'd love to hear your thoughts. Or any other creative solutions devs have come up with so far.

@rscanlo2 is this post regarding the Magic Leap 1 or the Magic Leap 2?

The Magic Leap 2 provides access to two camera streams. The main camera stream provides access to features like capturing Mixed Reality content, while the second, CV camera stream is intended to be used for computer vision processes like marker tracking.

Hi Krystian,

This is regarding the ML2. I'm having trouble locating the documentation for the CV camera stream, and it is pretty difficult to discern the difference between it and the main camera stream based on the Unity SDK examples. Can you send me the documentation and explain what I need to do to enable the CV camera stream vs. the main camera stream?

In addition, our application is looking to leave the camera connected and rapidly capture still images at 640x480. Then, based on user input, switch to capture a single full resolution image (to be used for a different purpose) and then switch back to the 640x480 rapid still capture functionality. We are currently able to do this with the main camera stream, but cannot engage the Device Stream at the same time. Can this be accomplished using the CV Camera? We would like to be able to use the native Device Stream to share the experience with others while we perform the functionality described above. Which classes and configurations in the SDK would you recommend to accomplish this?

We are still working on the documentation for the Camera API; however, here is a more in-depth explanation.

Magic Leap 2 allows developers to access two streams from the same physical camera. The camera streams are accessed as devices in the Unity API and have the following identifiers:

  • MLCamera.Identifier.Main - provides access to compressed video and still images. This device allows you to capture virtual, real-world, and mixed reality content, and is the preferred choice if you are not performing computer vision tasks on the output or if the output is being used for streaming, broadcasting, or capturing images.
  • MLCamera.Identifier.CV - provides uncompressed, raw frames and is best suited for computer vision scenarios.
    If you use this device to do CV in your application, you will still be able to record/stream gameplay using the Capture Service, but you will not be able to perform image or marker tracking using the SDK.

See the Camera scene in the Unity Examples for information on how to implement this feature.
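
To make the distinction concrete, here is a rough sketch of how each device is selected at connect time. It assumes the ConnectContext/ConnectFlag API used in the full example below; the MR connect flag shown for the Main camera and the CameraDeviceSelection class name are illustrative only, and permission handling is omitted, so treat the Camera example scene as the authoritative reference.

using UnityEngine.XR.MagicLeap;

public static class CameraDeviceSelection
{
    // Main camera: compressed video/still capture that can include virtual content (streaming, broadcasting, images).
    public static MLCamera ConnectMainCamera()
    {
        MLCamera.ConnectContext context = MLCamera.ConnectContext.Create();
        context.CamId = MLCamera.Identifier.Main;
        context.Flags = MLCamera.ConnectFlag.MR; // illustrative: composite virtual content into the capture
        return MLCamera.CreateAndConnect(context);
    }

    // CV camera: uncompressed raw frames intended for computer vision processing.
    public static MLCamera ConnectCVCamera()
    {
        MLCamera.ConnectContext context = MLCamera.ConnectContext.Create();
        context.CamId = MLCamera.Identifier.CV;
        context.Flags = MLCamera.ConnectFlag.CamOnly; // camera frames only, no virtual content
        return MLCamera.CreateAndConnect(context);
    }
}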

Here is a quick sample of a script that gets the CV camera image:

using System;
using System.Collections;
using UnityEngine;
using UnityEngine.XR.MagicLeap;

public class MagicLeapCamera : MonoBehaviour
{
    [SerializeField] private int width = 1280;
    [SerializeField] private int height = 720;
    [SerializeField] private MLCamera.CaptureFrameRate frameRate = MLCamera.CaptureFrameRate._15FPS;

    private MLCamera _cvCamera;
    private MLCamera.StreamCapability[] _streamCapabilities;
    private MLCamera.StreamCapability _currentStreamCapability;

    private bool _cameraDeviceAvailable = false;
    private readonly MLPermissions.Callbacks _permissionCallbacks = new MLPermissions.Callbacks();

    private Texture2D _rawVideoTexture;

    private void Awake()
    {
        _permissionCallbacks.OnPermissionGranted += OnPermissionGranted;
        _permissionCallbacks.OnPermissionDenied += OnPermissionDenied;
        _permissionCallbacks.OnPermissionDeniedAndDontAskAgain += OnPermissionDenied;
    }

    private void OnDestroy()
    {
        _permissionCallbacks.OnPermissionGranted -= OnPermissionGranted;
        _permissionCallbacks.OnPermissionDenied -= OnPermissionDenied;
        _permissionCallbacks.OnPermissionDeniedAndDontAskAgain -= OnPermissionDenied;

        if (_cvCamera != null)
        {
            _cvCamera.OnRawVideoFrameAvailable -= RawVideoFrameAvailable;
            // Stop the capture and disconnect so the camera is released for other clients
            _cvCamera.CaptureVideoStop();
            _cvCamera.Disconnect();
        }
    }

    // Start is called before the first frame update
    void Start()
    {
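        // CAMERA is a dangerous permission on Magic Leap 2 and must be requested at runtime before the camera can be used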
        MLPermissions.RequestPermission(MLPermission.Camera, _permissionCallbacks);
    }

    void HandlePermissionsDone(MLResult result)
    {
        if (!result.IsOk)
        {
            Debug.LogError(
                $"MagicLeapCamera failed to get requested permissions, disabling script. Reason: {result}");
            enabled = false;
            return;
        }

        Debug.Log("Succeeded in requesting all permissions");
        StartCoroutine(SetUpCamera());
    }

    private IEnumerator SetUpCamera()
    {
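        // Poll until the CV camera device reports itself as available before attempting to connect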
        while (!_cameraDeviceAvailable)
        {
            MLResult result = MLResult.Create(MLResult.Code.NotImplemented);
            Debug.Log("Get Camera Status!");
            try
            {
                result = MLCamera.GetDeviceAvailabilityStatus(MLCamera.Identifier.CV, out _cameraDeviceAvailable);
            }
            catch (Exception e)
            {
                Debug.Log(e);
            }

            if (!(result.IsOk && _cameraDeviceAvailable))
            {
                // Wait until camera device is available
                yield return new WaitForSeconds(1.0f);
            }
        }

        Debug.Log("Camera available!");
        yield return new WaitForSeconds(1.0f);
        ConnectCamera();
    }

    private void ConnectCamera()
    {
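        // Connect to the CV camera device; CamOnly delivers camera frames only, with no virtual content composited in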
        MLCamera.ConnectContext connectContext = MLCamera.ConnectContext.Create();
        connectContext.CamId = MLCamera.Identifier.CV;
        connectContext.Flags = MLCamera.ConnectFlag.CamOnly;
        connectContext.EnableVideoStabilization = false;

        _cvCamera = MLCamera.CreateAndConnect(connectContext);

        if (_cvCamera != null)
        {
            Debug.Log("Camera device connected");

            _streamCapabilities = MLCamera.GetImageStreamCapabilitiesForCamera(_cvCamera, MLCamera.CaptureType.Video);
            if (_streamCapabilities == null || _streamCapabilities.Length == 0)
            {
                Debug.LogError("No stream caps received");
                return;
            }

            MLCamera.StreamCapability selectedCapability =
                MLCamera.GetBestFitStreamCapabilityFromCollection(_streamCapabilities, width, height,
                    MLCamera.CaptureType.Video);

            Debug.Log("Streaming in " + selectedCapability.Width + "x" + selectedCapability.Height);

            _currentStreamCapability = selectedCapability;

            Debug.Log("Camera device received stream caps");
            _cvCamera.OnRawVideoFrameAvailable += RawVideoFrameAvailable;

            MLCamera.CaptureConfig captureConfig = new MLCamera.CaptureConfig();
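            // Request a single RGBA_8888 video stream using the best-fit capability selected above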
            captureConfig.CaptureFrameRate = frameRate;
            captureConfig.StreamConfigs = new MLCamera.CaptureStreamConfig[1];
            captureConfig.StreamConfigs[0] =
                MLCamera.CaptureStreamConfig.Create(selectedCapability, MLCamera.OutputFormat.RGBA_8888);
            MLResult result = _cvCamera.PrepareCapture(captureConfig, out MLCamera.Metadata _);

            if (!result.IsOk)
            {
                Debug.Log(result);
                return;
            }

            Debug.Log("Camera device received stream caps");
            _cvCamera.PreCaptureAEAWB();
            result = _cvCamera.CaptureVideoStart();

            if (!result.IsOk)
            {
                Debug.LogError($"Image capture failed. Reason: {result}");
            }
        }
        else
        {
            Debug.Log("Unable to properly Connect MLCamera");
        }
    }

    void RawVideoFrameAvailable(MLCamera.CameraOutput output, MLCamera.ResultExtras extras)
    {
        UpdateRGBTexture(ref _rawVideoTexture, output.Planes[0]);
        LogIntrinsics(extras.Intrinsics);
        LogExtrinsicsMatrix(extras.VCamTimestamp);
    }

    private void UpdateRGBTexture(ref Texture2D rawVideoTexture, MLCamera.PlaneInfo imagePlane)
    {
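        // The stride can include padding, so derive the texture width from stride / bytes-per-pixel rather than the requested width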
        int imageWidth = (int) (imagePlane.Stride / imagePlane.BytesPerPixel);

        if (rawVideoTexture != null &&
            (rawVideoTexture.width != imageWidth || rawVideoTexture.height != imagePlane.Height))
        {
            Destroy(rawVideoTexture);
            rawVideoTexture = null;
        }

        if (rawVideoTexture == null)
        {
            rawVideoTexture = new Texture2D(imageWidth, (int) imagePlane.Height, TextureFormat.RGBA32, false);
            rawVideoTexture.filterMode = FilterMode.Bilinear;
        }

        rawVideoTexture.LoadRawTextureData(imagePlane.Data);
        rawVideoTexture.Apply();
    }


    void LogExtrinsicsMatrix(MLTime vcamTimestampUs)
    {
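        // Query the camera's world pose for this frame's timestamp (available for the CV camera stream)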
#if UNITY_ANDROID
        Matrix4x4 outMatrix;
        MLResult result = MLCVCamera.GetFramePose(vcamTimestampUs, out outMatrix);
        if (result.IsOk)
        {
            Debug.Log("Rotation: " + outMatrix.rotation + " Position: " + outMatrix.GetPosition());
        }
#endif
    }

    void LogIntrinsics(MLCamera.IntrinsicCalibrationParameters? cameraIntrinsicParameters)
    {
        if (cameraIntrinsicParameters == null)
        {
            return;
        }

        var cameraParameters = cameraIntrinsicParameters.Value;
        Debug.Log("IntrinsicData " +
                  "\n Width: " + _currentStreamCapability.Width +
                  "\n Height:" + _currentStreamCapability.Height +
                  "\n FocalLength.x:" + cameraParameters.FocalLength.x +
                  "\n FocalLength.y:" + cameraParameters.FocalLength.y +
                  "\n PrincipalPoint.x:" + cameraParameters.PrincipalPoint.x +
                  "\n PrincipalPoint.y:" + cameraParameters.PrincipalPoint.y);
    }

    private void OnPermissionDenied(string permission)
    {
        HandlePermissionsDone(MLResult.Create(MLResult.Code.PermissionDenied));
    }

    private void OnPermissionGranted(string permission)
    {
        HandlePermissionsDone(MLResult.Create(MLResult.Code.Ok));
    }
}