Configuring Two Low-Latency Camera Streams (CV + Mixed Reality Capture) on Magic Leap in Unity

Unity Editor version: 6.2

Hi everyone,

I’m currently working on Magic Leap and I’m trying to set up two independent camera streams:

  1. A low-latency camera feed for computer-vision tasks

  2. A Mixed Reality Capture (MRC) feed to cast the scene to an external PC

Latency is a critical constraint for my use case, so I need to access frames as close as possible to the moment they are received by the system.

While exploring the Magic Leap APIs, I found a few relevant components:

  • In the AndroidCamera API, the CreateCaptureRequest method seems to allow defining a callback triggered as soon as a frame is available.

  • In the Camera2 API, I noticed both a CV Camera and a Mixed Reality Capture Camera, which appear to match my needs.

What I would like to achieve is:

  • Configure both camera streams using the appropriate Magic Leap camera types

  • For the CV stream, access the raw frame pointer directly so I can forward it to my native CV library without extra copying

  • In parallel, access the Mixed Reality Capture Camera stream with the lowest possible latency

  • Be able to open and close each stream independently

My questions:

  1. What is the recommended way to configure both streams simultaneously on Magic Leap?

  2. Is it possible to obtain a direct pointer to the CV frame buffer through the API?

  3. Are there best practices to minimize latency on both the CV and MRC streams?

  4. Can both streams operate independently without interfering with each other?

Any guidance, examples, or clarifications on the intended workflow would be extremely helpful.

Thanks in advance!

  1. You can see the ML Camera example for guidance on how to obtain the camera streams from the Magic Leap 2 using the ML Camera API.

  2. Yes, both camera streams provide access to a direct pointer to the frame buffer.

  3. We do not have any official guidance regarding this, but I have heard of some developers creating a custom H.264 plugin and encoding the frames with the standard Android media encoder API to decrease latency.

  4. The streams do overlap, since they provide access to a single physical camera. Frame rate and exposure cannot be adjusted independently.
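For points 1 and 4 above, a minimal sketch of what two independent connections might look like with the ML Camera API is shown below. This assumes the Magic Leap 2 Unity SDK's MLCamera types (ConnectContext, ConnectFlag, Identifier, MRConnectInfo); treat it as an illustration of the intended workflow, not a verified implementation:

```csharp
// Hedged sketch: one connection to the CV camera for low-latency frames,
// and one to the Main camera with the MR flag for Mixed Reality Capture.
using UnityEngine;
using UnityEngine.XR.MagicLeap;
using static UnityEngine.XR.MagicLeap.MLCameraBase;

public class DualStreamExample : MonoBehaviour
{
    private MLCamera _cvCamera;   // CV stream, raw camera frames only
    private MLCamera _mrcCamera;  // Mixed Reality Capture stream

    private void Start()
    {
        // CV camera: raw sensor frames, no virtual content composited in.
        ConnectContext cvContext = ConnectContext.Create();
        cvContext.CamId = Identifier.CV;
        cvContext.Flags = ConnectFlag.CamOnly;
        _cvCamera = MLCamera.CreateAndConnect(cvContext);

        // MRC camera: only the Main camera supports the MR / virtual-content flags.
        ConnectContext mrcContext = ConnectContext.Create();
        mrcContext.CamId = Identifier.Main;
        mrcContext.Flags = ConnectFlag.MR;
        mrcContext.MixedRealityConnectInfo = MRConnectInfo.Create();
        _mrcCamera = MLCamera.CreateAndConnect(mrcContext);
    }

    private void OnDestroy()
    {
        // Each connection can be torn down independently.
        _cvCamera?.Disconnect();
        _mrcCamera?.Disconnect();
    }
}
```

Note that, per point 4, both connections share the single physical RGB camera, so sensor-level settings such as frame rate and exposure apply to both streams.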

Hello Krystian,

Thank you for your guidance; it has been very helpful.

I followed the MLCameraExample you shared, and it seems to fit my needs.

I managed to build a working solution using the example and the API. The camera itself is functioning correctly.

However, I am facing an issue related to the camera pose. I followed the example provided here:

I am encountering the error shown below, and I cannot find an obvious fix.

The issue occurs inside the event callback:

        private void RawVideoFrameAvailable(CameraOutput output, ResultExtras extras, Metadata metadataHandle) {

            MLCameraBase.PlaneInfo plane = output.Planes[0];
            IntrinsicCalibrationParameters? intrinsics = extras.Intrinsics;

            if (!plane.IsValid) return;
            // if (HasIntrinsicsChanged(intrinsics)) return;

            IntPtr imageDataPtr = plane.DataPtr;

            string cameraIntrinsics = $"[{nameof(HardwareCamera)}] : Camera Intrinsics";
            cameraIntrinsics += "\n Width " + extras.Intrinsics.Value.Width;
            cameraIntrinsics += "\n Height " + extras.Intrinsics.Value.Height;
            cameraIntrinsics += "\n FOV " + extras.Intrinsics.Value.FOV;
            cameraIntrinsics += "\n FocalLength " + extras.Intrinsics.Value.FocalLength;
            cameraIntrinsics += "\n PrincipalPoint " + extras.Intrinsics.Value.PrincipalPoint;
            Debug.Log(cameraIntrinsics);
            

            // Here is the error
            MLResult result = MLCVCamera.GetFramePose(extras.VCamTimestamp, out Matrix4x4 outMatrix);
            // if (!result.IsOk) return;

            //   FrameAvailable.Invoke(imageDataPtr, outMatrix);
        }

I have enabled Perception Snapshot in the Magic Leap Support OpenXR feature.

Do you have any insight into what might be causing this issue?

Thank you in advance.

Best regards,

Here is my work-in-progress script, in case more details are needed:



// Some manager in my app:

        /// <summary> Creates and initializes a camera instance. </summary>
        public void CreateCamera() {
            HWCamera = new HardwareCamera();
            HWCamera.FrameAvailable.AddListener(OnFrameAvailable);

            StartCoroutine(ConnectCameraCoroutine());
        }

        // Coroutines ------------------------------------------------------------------------------

        //Waits for the camera to be ready and then connects to it.
        private IEnumerator ConnectCameraCoroutine() {

            float startTime = Time.time;
            float timeout = 5.0f;

            bool isCameraDeviceAvailable = false;

            //Checks the main camera's availability.
            while (isCameraDeviceAvailable == false) {
                if (Time.time - startTime > timeout) {
                    Debug.LogError("Timeout waiting for camera device to become available.");
                    yield break;
                }

                isCameraDeviceAvailable = HWCamera.CheckCameraAvailability();

                // if camera not available, wait and retry
                if (!isCameraDeviceAvailable) yield return new WaitForSeconds(1.0f);
            }

            _isCameraInitialized = HWCamera.TryConnectCamera();

            if (!_isCameraInitialized) {
                Debug.LogError($"[{nameof(DetectionManager)}] : Failed to connect to camera.");
            }
            yield break;
        }

// The camera script : 

using System;
using System.Collections;

using UnityEngine;
using UnityEngine.Events;
using UnityEngine.XR.MagicLeap;

using static UnityEngine.XR.MagicLeap.MLCameraBase;

// https://developer-docs.magicleap.cloud/docs/guides/unity/camera/ml-camera-overview/
// https://developer-docs.magicleap.cloud/docs/guides/unity/camera/ml-camera-example/#

namespace Core.Detection {

    public class HardwareCamera {

        // Constants & Enums -----------------------------------------------------------------------
        private const ConnectFlag _CONNECT_FLAG = ConnectFlag.CamOnly;
        private const Identifier _IDENTIFIER = Identifier.CV; // Main or CV; Main is the only camera that can access the virtual and mixed reality flags
        private const CaptureType _CAPTURE_TYPE = CaptureType.Video;
        private const OutputFormat _OUTPUT_FORMAT = OutputFormat.YUV_420_888;
        private const CaptureFrameRate _CAPTURE_FRAME_RATE = CaptureFrameRate._30FPS;
        private const bool _ENABLE_VIDEO_STABILIZATION = false;
        private const int _CAPTURE_WIDTH = 1280;
        private const int _CAPTURE_HEIGHT = 960;

        // Private Fields --------------------------------------------------------------------------
        private MLCamera _camera;
        private CaptureConfig _captureConfig;

        private bool _isCameraDeviceAvailable = false;
        private bool _isCapturing = false;

        private IntrinsicCalibrationParameters? _currentIntrinsics;

        // Properties ------------------------------------------------------------------------------

        // Events ----------------------------------------------------------------------------------
        internal UnityEvent<IntPtr, Matrix4x4> FrameAvailable { get; private set; } = new();
        internal UnityEvent<NativeIntrinsics> FirstIntrinsicsArrived { get; private set; } = new();
        internal UnityEvent<NativeIntrinsics> ImageFormatChanged { get; private set; } = new();
        internal UnityEvent<NativeIntrinsics> IntrinsicsChanged { get; private set; } = new();

        // Event Callbacks --------------------------------------------------------------------------
        private void RawVideoFrameAvailable(CameraOutput output, ResultExtras extras, Metadata metadataHandle) {

            MLCameraBase.PlaneInfo plane = output.Planes[0];
            IntrinsicCalibrationParameters? intrinsics = extras.Intrinsics;

            if (!plane.IsValid) return;
            // if (HasIntrinsicsChanged(intrinsics)) return;

            IntPtr imageDataPtr = plane.DataPtr;

            MLResult result = MLCVCamera.GetFramePose(extras.VCamTimestamp, out Matrix4x4 outMatrix);
            // if (!result.IsOk) return;

            //   FrameAvailable.Invoke(imageDataPtr, outMatrix);
        }

        // Private Methods -------------------------------------------------------------------------
        internal bool CheckCameraAvailability() {
            if (_isCapturing) return true;

            Debug.Log($"[{nameof(HardwareCamera)}] CheckCameraAvailability ...");

            MLResult result = GetDeviceAvailabilityStatus(_IDENTIFIER, out _isCameraDeviceAvailable);
            if (result.IsOk == false || _isCameraDeviceAvailable == false) {
                // Wait until camera device is available
                return false;
            }

            return _isCameraDeviceAvailable;
        }
        internal bool TryConnectCamera() {
            if (!_isCameraDeviceAvailable) return false;

            Debug.Log($"[{nameof(HardwareCamera)}] TryConnectCamera ...");

            ConnectContext connectContext = ConnectContext.Create();
            connectContext.CamId = _IDENTIFIER;
            connectContext.Flags = _CONNECT_FLAG;
            connectContext.EnableVideoStabilization = _ENABLE_VIDEO_STABILIZATION;

            _camera = MLCamera.CreateAndConnect(connectContext);

            if (_camera == null) return false;

            ConfigureCameraInput();
            StartVideoCapture();

            _camera.OnRawVideoFrameAvailable_NativeCallbackThread += RawVideoFrameAvailable;

            return true;
        }
        internal void StopCameras() {
            if (_camera == null) return;

            // Unsubscribe before tearing down so no callback fires mid-shutdown.
            _camera.OnRawVideoFrameAvailable_NativeCallbackThread -= RawVideoFrameAvailable;

            if (_isCapturing) {
                _camera.CaptureVideoStop();
            }

            _camera.Disconnect();
            _isCapturing = false;
        }
        private bool ConfigureCameraInput() {
            Debug.Log($"[{nameof(HardwareCamera)}] ConfigureCameraInput ...");

            //Gets the stream capabilities of the selected camera (supported capture types, formats, and resolutions).
            StreamCapability[] streamCapabilities = GetImageStreamCapabilitiesForCamera(_camera, _CAPTURE_TYPE);

            if (streamCapabilities.Length == 0) return false;

            //Set the default capability stream
            StreamCapability defaultCapability = streamCapabilities[0];

            //Try to get the stream that most closely matches the target width and height
            if (TryGetBestFitStreamCapabilityFromCollection(streamCapabilities, _CAPTURE_WIDTH, _CAPTURE_HEIGHT,
                    _CAPTURE_TYPE, out StreamCapability selectedCapability)) {
                defaultCapability = selectedCapability;
            }

            //Initialize a new capture config.
            _captureConfig = new CaptureConfig();
            OutputFormat outputFormat = _OUTPUT_FORMAT;
            _captureConfig.CaptureFrameRate = _CAPTURE_FRAME_RATE;

            //Initialize a camera stream config.
            //The Main Camera can support up to two stream configurations
            _captureConfig.StreamConfigs = new CaptureStreamConfig[1];
            _captureConfig.StreamConfigs[0] = CaptureStreamConfig.Create(defaultCapability, outputFormat);

            return true;
        }

        private bool StartVideoCapture() {
            Debug.Log($"[{nameof(HardwareCamera)}] StartVideoCapture ...");

            MLResult result = _camera.PrepareCapture(_captureConfig, out Metadata metaData);
            if (!result.IsOk) return false;

            //Trigger auto exposure and auto white balance
            _camera.PreCaptureAEAWB();
            //Starts video capture. This call can also be made asynchronously.
            //Image capture uses the CaptureImage function instead.
            result = _camera.CaptureVideoStart();
            _isCapturing = MLResult.DidNativeCallSucceed(result.Result, nameof(_camera.CaptureVideoStart));

            if (_isCapturing) {
                Debug.Log($"[{nameof(HardwareCamera)}] Video capture started!");
            }
            else {
                Debug.LogError($"[{nameof(HardwareCamera)}] Could not start camera capture. Result : {result}");
            }

            return _isCapturing;
        }
        private bool HasIntrinsicsChanged(in IntrinsicCalibrationParameters? intrinsics) {
            // Magic Leap sometimes reports null intrinsics ...

            if (!intrinsics.HasValue) return false; // Nothing to compare
            if (!_currentIntrinsics.HasValue) { // First set of intrinsics
                _currentIntrinsics = intrinsics.Value;

                FirstIntrinsicsArrived.Invoke(ConvertMagicLeapToNativeIntrinsics(_currentIntrinsics.Value));
                return true;
            }

            bool hasImageFormatChanged = HasImageFormatChanged(intrinsics.Value, _currentIntrinsics.Value);
            bool hasIntrinsicsChanged = HasCameraParametersChanged(intrinsics.Value, _currentIntrinsics.Value);

            bool hasNotChanged = !hasImageFormatChanged && !hasIntrinsicsChanged;
            if (hasNotChanged) return false;

            // Update the cached intrinsics
            _currentIntrinsics = intrinsics.Value;

            NativeIntrinsics nativeIntrinsics = ConvertMagicLeapToNativeIntrinsics(_currentIntrinsics.Value);
            // Dispatch events
            if (hasImageFormatChanged) ImageFormatChanged.Invoke(nativeIntrinsics);
            if (hasIntrinsicsChanged) IntrinsicsChanged.Invoke(nativeIntrinsics);

            return true;
        }
        private bool HasImageFormatChanged(in IntrinsicCalibrationParameters a, in IntrinsicCalibrationParameters b) {
            if (a.Width != b.Width) return true;
            if (a.Height != b.Height) return true;

            return false;
        }
        private bool HasCameraParametersChanged(in IntrinsicCalibrationParameters a, in IntrinsicCalibrationParameters b) {

            if (a.FOV != b.FOV) return true;

            if (a.FocalLength != b.FocalLength) return true;
            if (a.PrincipalPoint != b.PrincipalPoint) return true;

            for (int i = 0; i < a.Distortion.Length; i++) {
                if (a.Distortion[i] != b.Distortion[i]) return true;
            }

            return false;
        }
        private NativeIntrinsics ConvertMagicLeapToNativeIntrinsics(in IntrinsicCalibrationParameters intrinsics) {
            // Keeps Magic Leap dependencies contained within HardwareCamera.cs.
            // Converts Magic Leap's IntrinsicCalibrationParameters to NativeIntrinsics struct
            NativeIntrinsics cameraIntrinsics = new();

            cameraIntrinsics.imageWidth = intrinsics.Width;
            cameraIntrinsics.imageHeight = intrinsics.Height;
            cameraIntrinsics.focalLengthX = intrinsics.FocalLength.x;
            cameraIntrinsics.focalLengthY = intrinsics.FocalLength.y;
            cameraIntrinsics.principalPointX = intrinsics.PrincipalPoint.x;
            cameraIntrinsics.principalPointY = intrinsics.PrincipalPoint.y;
            cameraIntrinsics.distortionCoefficients = new double[8];
            cameraIntrinsics.distortionCount = intrinsics.Distortion.Length;

            for (int i = 0; i < intrinsics.Distortion.Length && i < 8; i++) {
                cameraIntrinsics.distortionCoefficients[i] = intrinsics.Distortion[i];
            }

            return cameraIntrinsics;
        }

    }
}


Hmm I haven’t seen that error before. Are you calling the function on a background thread?

I recommend starting from the sample directly and then simply adding the MLResult result = MLCVCamera.GetFramePose(extras.VCamTimestamp, out Matrix4x4 outMatrix); call inside the RawVideoFrameAvailable callback.
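If the background thread is indeed the problem, one possible workaround is to queue the frame timestamp from the native callback and query the pose on the Unity main thread, where perception snapshots are available. The sketch below assumes the ML2 Unity SDK's MLTime/MLCVCamera types; FramePoseMarshaller and ProcessPose are hypothetical names, not SDK API:

```csharp
// Hedged sketch: marshal GetFramePose calls from the
// OnRawVideoFrameAvailable_NativeCallbackThread callback to the main thread.
using System.Collections.Concurrent;
using UnityEngine;
using UnityEngine.XR.MagicLeap;

public class FramePoseMarshaller : MonoBehaviour
{
    private readonly ConcurrentQueue<MLTime> _pendingTimestamps = new();

    // Called from the native callback thread: only enqueue, no SDK calls here.
    public void OnFrameTimestamp(MLTime vcamTimestamp)
    {
        _pendingTimestamps.Enqueue(vcamTimestamp);
    }

    private void Update()
    {
        // Drain the queue on the main thread and query the pose there.
        while (_pendingTimestamps.TryDequeue(out MLTime timestamp))
        {
            MLResult result = MLCVCamera.GetFramePose(timestamp, out Matrix4x4 pose);
            if (result.IsOk)
            {
                ProcessPose(pose); // hypothetical downstream handler
            }
        }
    }

    private void ProcessPose(Matrix4x4 pose) { /* forward to the CV pipeline */ }
}
```

The trade-off is that the pose arrives up to one frame later than the image data, which may matter given the latency constraint.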