Video Recording

Give us as much detail as possible regarding the issue you're experiencing:

Unity Editor version: 2022.3.48f1
ML2 OS version: 1.11.0
Unity SDK version: 2.5.0
Host OS (Windows/MacOS): Windows 11

Error messages from logs (syntax-highlighting is supported via Markdown):

Hello, I want to make a script with functions to start and stop video recording. I want to record an MR video containing both real-world and virtual content. Is there any code I can refer to?

Is it better to save the video locally on the ML2, or to send it to a server using WebRTC?

Thanks!

We support the standard Android multimedia APIs, so you could use the Android NDK to encode the camera stream into H.264 and save it locally.

However, depending on your use case, you could use the deprecated MLSDK Media Recorder API. You can see an example of how to use it in an older version of the Magic Leap Unity Examples project (version 1.12.0).


You can also use the System Recorder via voice input or by pressing the bumper and menu buttons on the controller. Alternatively, the Unity WebRTC package can be used to stream the video.
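If you go the WebRTC route, the general shape looks something like the untested sketch below. It assumes a 2.x version of the com.unity.webrtc package (older package versions may also require WebRTC.Initialize()), leaves signaling (SDP/ICE exchange with your server) to your own code, and streams a plain Unity camera; for a true MR composite you would instead copy the frames coming out of the MLCamera MR stream into the texture.

using System.Collections;
using Unity.WebRTC;
using UnityEngine;

public class WebRtcVideoSenderSketch : MonoBehaviour
{
    [SerializeField] private Camera sourceCamera; // camera (or other texture source) to stream
    private RenderTexture streamTexture;
    private VideoStreamTrack videoTrack;
    private RTCPeerConnection peerConnection;

    private IEnumerator Start()
    {
        // Render the source camera into a texture and wrap it as a WebRTC video track.
        streamTexture = new RenderTexture(1280, 720, 0, RenderTextureFormat.BGRA32);
        sourceCamera.targetTexture = streamTexture;
        videoTrack = new VideoStreamTrack(streamTexture);

        peerConnection = new RTCPeerConnection();
        peerConnection.AddTrack(videoTrack);

        // Pumps captured frames into the encoder (required on 2.x package versions).
        StartCoroutine(WebRTC.Update());

        // Create an offer; exchanging it (and ICE candidates) with the remote peer
        // is application-specific signaling and is omitted here.
        var offerOp = peerConnection.CreateOffer();
        yield return offerOp;
        var desc = offerOp.Desc;
        yield return peerConnection.SetLocalDescription(ref desc);
        // Send desc.sdp to your server / remote peer here.
    }

    private void OnDestroy()
    {
        videoTrack?.Dispose();
        peerConnection?.Dispose();
    }
}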

Hello, thank you for answering!

I just made a script to record video, shown below. I also want to track each raw frame's timestamp by subscribing with captureCamera.OnRawVideoFrameAvailable += OnRawVideoFrameAvailable;.

However, the callback is never invoked. What do you think the problem is?
This is my full code:

using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using MagicLeap.Core;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR.MagicLeap;

namespace MagicLeap.Examples
{
    public class TestCameraRecording : MonoBehaviour
    {
        private MLCamera.CaptureFrameRate FrameRate = MLCamera.CaptureFrameRate._30FPS;
        private MLCamera.OutputFormat OutputFormat = MLCamera.OutputFormat.RGBA_8888;
        private MLCamera captureCamera;
        private bool isCapturingVideo = false;
        
        [Serializable]
        public struct CameraData
        {
            public long Timestamp;
            public MLCameraBase.IntrinsicCalibrationParameters intrinsicParameters;
            public Matrix4x4 cameraTransformMatrix;
        }
    
        private long _latestCaptureTime = 0;


        private readonly CameraRecorder cameraRecorder = new CameraRecorder();

        private const string validFileFormat = ".mp4";

        private bool RecordToFile = true;

        private string recordedFilePath;
        private MLCamera.CaptureType CaptureType = MLCamera.CaptureType.Video;

        private List<MLCamera.StreamCapability> streamCapabilities;

        private readonly MLPermissions.Callbacks permissionCallbacks = new MLPermissions.Callbacks();

        private bool cameraDeviceAvailable;
        public Queue<CameraData> CameraDataQueue = new Queue<CameraData>();
        private const string baseFileName = "VideoFrame";
        private string jsonFilePath;

        private void Awake()
        {
            permissionCallbacks.OnPermissionGranted += OnPermissionGranted;
            permissionCallbacks.OnPermissionDenied += OnPermissionDenied;
            permissionCallbacks.OnPermissionDeniedAndDontAskAgain += OnPermissionDenied;
        }

        private void Start()
        {
            Debug.Log("Start");
            MLPermissions.RequestPermission(MLPermission.Camera, permissionCallbacks);
            MLPermissions.RequestPermission(MLPermission.RecordAudio, permissionCallbacks);

            TryEnableMLCamera();
            StartVideoCapture();
        }

        private void TryEnableMLCamera()
        {
            if (!MLPermissions.CheckPermission(MLPermission.Camera).IsOk)
                return;

            StartCoroutine(EnableMLCamera());
        }

        private IEnumerator EnableMLCamera()
        {
            while (!cameraDeviceAvailable)
            {
                MLResult result =
                    MLCamera.GetDeviceAvailabilityStatus(MLCamera.Identifier.Main, out cameraDeviceAvailable);
                if (!(result.IsOk && cameraDeviceAvailable))
                {
                    // Wait until camera device is available
                    yield return new WaitForSeconds(1.0f);
                }
                else
                {
                    ConnectCamera();
                }
            }

            Debug.Log("Camera device available");
        }

        private void Update()
        {
        }

        private void OnPermissionDenied(string permission)
        {
            if (permission == MLPermission.Camera)
            {
                MLPluginLog.Error($"{permission} denied, example won't function.");
            }
            else if (permission == MLPermission.RecordAudio)
            {
                MLPluginLog.Error($"{permission} denied, audio wont be recorded in the file.");
            }
        }

        private void OnPermissionGranted(string permission)
        {
            MLPluginLog.Debug($"Granted {permission}.");
            TryEnableMLCamera();
        }

        public void StartVideoCapture()
        {
            var result = MLPermissions.CheckPermission(MLPermission.Camera);
            MLResult.DidNativeCallSucceed(result.Result, nameof(MLPermissions.RequestPermission));
            Debug.Log($"CLPermissions.CheckPermission {result}");
            if (!result.IsOk)
            {
                Debug.LogError($"{MLPermission.Camera} permission denied. Video will not be recorded.");
                return;
            }

            StartRecording();

        }

        private void StartRecording()
        {
            jsonFilePath = SharedInfomanager.Instance.GenerateUniqueFilePath(baseFileName, 0, "json");

            string fileName = DateTime.Now.ToString("MM_dd_yyyy__HH_mm_ss") + validFileFormat;
            recordedFilePath = System.IO.Path.Combine(Application.persistentDataPath, fileName);

            CameraRecorderConfig config = CameraRecorderConfig.CreateDefault();
            config.Width = streamCapabilities[0].Width;
            config.Height = streamCapabilities[0].Height;
            config.FrameRate = MapFrameRate(MLCamera.CaptureFrameRate._30FPS);

            cameraRecorder.StartRecording(recordedFilePath, config);

            int MapFrameRate(MLCamera.CaptureFrameRate frameRate)
            {
                switch (frameRate)
                {
                    case MLCamera.CaptureFrameRate.None: return 0;
                    case MLCamera.CaptureFrameRate._15FPS: return 15;
                    case MLCamera.CaptureFrameRate._30FPS: return 30;
                    case MLCamera.CaptureFrameRate._60FPS: return 60;
                    default: return 0;
                }
            }

            MLCamera.CaptureConfig captureConfig = new MLCamera.CaptureConfig();
            captureConfig.CaptureFrameRate = FrameRate;
            captureConfig.StreamConfigs = new MLCamera.CaptureStreamConfig[1];
            captureConfig.StreamConfigs[0] = MLCamera.CaptureStreamConfig.Create(streamCapabilities[0], OutputFormat);
            captureConfig.StreamConfigs[0].Surface = cameraRecorder.MediaRecorder.InputSurface;

            MLResult result = captureCamera.PrepareCapture(captureConfig, out MLCamera.Metadata _);

            if (MLResult.DidNativeCallSucceed(result.Result, nameof(captureCamera.PrepareCapture)))
            {
                captureCamera.PreCaptureAEAWB();

                if (CaptureType == MLCamera.CaptureType.Video)
                {
                    result = captureCamera.CaptureVideoStart();
                    isCapturingVideo = MLResult.DidNativeCallSucceed(result.Result, nameof(captureCamera.CaptureVideoStart));

                    if (isCapturingVideo)
                    {
                        Debug.Log("Video recording started successfully.");
                        // Automatically stop recording after 10 seconds
                        StartCoroutine(StopRecordingAfterDelay(10));
                    }
                }
            }
        }

        private IEnumerator StopRecordingAfterDelay(float delaySeconds)
        {
            yield return new WaitForSeconds(delaySeconds);
            StopRecording();
        }


        public async void StopRecording()
        {
            if (!isCapturingVideo)
            {
                Debug.LogWarning("No recording is in progress to stop.");
                return;
            }

            MLResult result = cameraRecorder.EndRecording();
            if (!result.IsOk)
            {
                Debug.LogError($"Failed to stop recording: {result}");
                recordedFilePath = string.Empty;
            }
            else
            {
                Debug.Log($"Recording saved at path: {recordedFilePath}");
            }

            isCapturingVideo = false;

            // Save camera data asynchronously
            await SaveCameraDataToJsonAsync(jsonFilePath);
        }
        private async Task SaveCameraDataToJsonAsync(string filePath)
        {
            try
            {
                // Convert the queue to a list for serialization
                List<CameraData> cameraDataList = new List<CameraData>(CameraDataQueue);

                // Serialize to JSON (can use JsonUtility or Newtonsoft.Json)
                string jsonData = JsonUtility.ToJson(new CameraDataListWrapper { CameraDataList = cameraDataList }, true);

                // Perform file writing asynchronously
                await Task.Run(() => File.WriteAllText(filePath, jsonData));

                Debug.Log($"Camera data saved asynchronously to: {filePath}");
            }
            catch (Exception ex)
            {
                Debug.LogError($"Error saving camera data asynchronously: {ex.Message}");
            }
        }

        // Wrapper class for JsonUtility (required for serializing lists)
        [Serializable]
        private class CameraDataListWrapper
        {
            public List<CameraData> CameraDataList;
        }

        private void ConnectCamera()
        {
            MLCamera.ConnectContext context = MLCamera.ConnectContext.Create();
            context.Flags = MLCamera.ConnectFlag.MR;
            context.EnableVideoStabilization = false;

            if (context.Flags != MLCamera.ConnectFlag.CamOnly)
            {
                context.MixedRealityConnectInfo = MLCamera.MRConnectInfo.Create();
                context.MixedRealityConnectInfo.MRQuality = MLCamera.MRQuality._648x720;
                context.MixedRealityConnectInfo.MRBlendType = MLCamera.MRBlendType.Additive;
                context.MixedRealityConnectInfo.FrameRate = MLCamera.CaptureFrameRate._30FPS;
            }

            captureCamera = MLCamera.CreateAndConnect(context);

            if (captureCamera != null)
            {
                Debug.Log("Camera device connected");
                if (GetImageStreamCapabilities())
                {
                    Debug.Log("Camera stream capabilities received.");
                    captureCamera.OnRawVideoFrameAvailable += OnRawVideoFrameAvailable;
                }
            }
        }

        private bool GetImageStreamCapabilities()
        {
            var result =
                captureCamera.GetStreamCapabilities(out MLCamera.StreamCapabilitiesInfo[] streamCapabilitiesInfo);

            if (!result.IsOk)
            {
                Debug.LogError("Failed to get stream capabilities info.");
                return false;
            }

            streamCapabilities = new List<MLCamera.StreamCapability>();

            foreach (var info in streamCapabilitiesInfo)
            {
                streamCapabilities.AddRange(info.StreamCapabilities);
            }

            return streamCapabilities.Count > 0;
        }

        private void OnRawVideoFrameAvailable(MLCamera.CameraOutput capturedFrame, MLCamera.ResultExtras resultExtras, MLCamera.Metadata metadataHandle)
        {
            Debug.Log("Frame callback triggered.");

        }
    }
}

Most likely the main issue is that you call StartVideoCapture() in Start() before the camera has connected. That said, I would recommend using the Capture example from the Examples project and then expanding or condensing it based on your needs.
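If it helps, here is an untested reordering sketch based on your script above: remove the StartVideoCapture() call from Start() (permission handling already re-enters through OnPermissionGranted and TryEnableMLCamera) and start capture only once the camera is connected and the stream capabilities are known.

        private void Start()
        {
            // Only request permissions here; OnPermissionGranted calls TryEnableMLCamera(),
            // which connects the camera, so capture is started from ConnectCamera() below.
            MLPermissions.RequestPermission(MLPermission.Camera, permissionCallbacks);
            MLPermissions.RequestPermission(MLPermission.RecordAudio, permissionCallbacks);
        }

        private void ConnectCamera()
        {
            MLCamera.ConnectContext context = MLCamera.ConnectContext.Create();
            context.Flags = MLCamera.ConnectFlag.MR;
            context.EnableVideoStabilization = false;
            context.MixedRealityConnectInfo = MLCamera.MRConnectInfo.Create();
            context.MixedRealityConnectInfo.MRQuality = MLCamera.MRQuality._648x720;
            context.MixedRealityConnectInfo.MRBlendType = MLCamera.MRBlendType.Additive;
            context.MixedRealityConnectInfo.FrameRate = MLCamera.CaptureFrameRate._30FPS;

            captureCamera = MLCamera.CreateAndConnect(context);
            if (captureCamera != null && GetImageStreamCapabilities())
            {
                captureCamera.OnRawVideoFrameAvailable += OnRawVideoFrameAvailable;
                // Camera is connected and streamCapabilities is populated, so it is safe to start.
                StartVideoCapture();
            }
        }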

Well, the video is saved correctly after 10 seconds, but I still can't access the raw frames. Is it still an ordering problem?

Oh, never mind. The issue is that OnRawVideoFrameAvailable will not be triggered if a recorder surface is passed into the camera capture config. You could instead capture the Main camera and CV camera streams separately.
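Roughly, the second connection for the CV camera could look like this untested sketch (CamOnly, no recorder surface, so the frame callback fires). Here cvStreamCapability is a placeholder for a capability you pick from GetStreamCapabilities() on the CV camera, and the exact timestamp field on ResultExtras should be checked against your SDK version:

        private MLCamera cvCamera;

        private void ConnectCVCamera(MLCamera.StreamCapability cvStreamCapability)
        {
            MLCamera.ConnectContext context = MLCamera.ConnectContext.Create();
            context.CamId = MLCamera.Identifier.CV;       // separate device from Main
            context.Flags = MLCamera.ConnectFlag.CamOnly; // no MR compositing on this stream

            cvCamera = MLCamera.CreateAndConnect(context);
            if (cvCamera == null)
                return;

            MLCamera.CaptureConfig config = new MLCamera.CaptureConfig();
            config.CaptureFrameRate = MLCamera.CaptureFrameRate._30FPS;
            config.StreamConfigs = new MLCamera.CaptureStreamConfig[1];
            // No Surface is assigned here, so frames are delivered to the callback.
            config.StreamConfigs[0] = MLCamera.CaptureStreamConfig.Create(cvStreamCapability, MLCamera.OutputFormat.YUV_420_888);

            if (cvCamera.PrepareCapture(config, out MLCamera.Metadata _).IsOk)
            {
                cvCamera.PreCaptureAEAWB();
                cvCamera.OnRawVideoFrameAvailable += OnCvFrameAvailable;
                cvCamera.CaptureVideoStart();
            }
        }

        private void OnCvFrameAvailable(MLCamera.CameraOutput output, MLCamera.ResultExtras extras, MLCamera.Metadata metadata)
        {
            // extras carries per-frame data (frame number, timestamp, intrinsics).
            Debug.Log($"CV frame, timestamp: {extras.VCamTimestamp}");
        }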

I want the timestamps of the MR video frames from the Main camera. If I get timestamps from the CV camera, are they the same as the Main camera's timestamps?

Hi,

I am on the same team on this project, and after today's tests we found that the CV camera cannot be initialized properly using the current code example. Looking at the log files, the failure always occurs at the captureCamera.PrepareCapture() call.

The code works just fine if we use the Main camera, whether it's configured for CamOnly, VirtualOnly, or MR. With the CV camera, it won't get past this step (we tried CamOnly at 640 x 480 and 1280 x 720, but neither worked).

We are trying to use two streams mainly because we also want to acquire the timestamp and camera info of each frame, so that we can align the eye-tracking data with them and project 3D gaze points onto the 2D images. If the callback cannot be reached in the video capture stream, is there another approach for us to acquire timestamps and camera info? It would be nice if this CV camera issue could be fixed, though.

We also tested on OS 1.12.0 and Unity SDK 2.6.0, but that didn't work either.

Thanks

Have you tried using the script above and then the following script for the CV camera?

Do you have the source code for the RGB camera and the one for the CV camera?

The async example above targets the CV camera while the simple example targets the RGB camera.

You can try starting one script and then 5 seconds later starting the other.
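For example, something like this untested sketch, where the component references are placeholders for the two capture scripts (both left disabled in the Inspector until this starter enables them):

using System.Collections;
using UnityEngine;

public class StaggeredCaptureStarter : MonoBehaviour
{
    [SerializeField] private MonoBehaviour mainCameraRecording; // MR recording script above
    [SerializeField] private MonoBehaviour cvCameraCapture;     // placeholder: your CV capture script

    private IEnumerator Start()
    {
        // Enable the MR recording script first, then the CV capture script 5 seconds later.
        mainCameraRecording.enabled = true;
        yield return new WaitForSeconds(5f);
        cvCameraCapture.enabled = true;
    }
}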

I believe my CV camera code is exactly the async camera code (I used it and simply changed CV to Main, and switching it back resulted in the camera capture not being activated). Have you tried running it? Or is there a frame rate or resolution difference between the CV and Main cameras, similar to the one between the Main and MR cameras, that caused the CV camera's failure in PrepareCapture? I have tried 1280 x 720 and 640 x 480, both at 30 FPS, but neither worked.

Hmm, when using the async example script from the developer portal and changing the camera ID to Main instead of CV, I was still able to get the frame callback. Do you mind checking again?

After reading the documentation more carefully (MLCamera Overview | MagicLeap Developer Documentation), we think it might be the marker understanding code that conflicts with the CV camera. I guess by saying:

If you use this device to do CV in your application, you will be able to use the record/stream gameplay using the Capture Service, but you will not be able to perform image or marker tracking using the SDK.

it actually means that the CV camera stream will be used for marker understanding? That seemed odd, since I'm using the World camera for my marker detector:

        if (DetectorProfile == MarkerDetectorProfile.Custom)
        {
            CustomProfileSettings customProfileSettings = new CustomProfileSettings();
            customProfileSettings.AnalysisInterval = MarkerDetectorFullAnalysisInterval.Slow;
            customProfileSettings.CameraHint = MarkerDetectorCamera.World;
            customProfileSettings.CornerRefinement = MarkerDetectorCornerRefineMethod.Subpix;
            customProfileSettings.FPSHint = MarkerDetectorFPS.Low;
            customProfileSettings.ResolutionHint = MarkerDetectorResolution.Low; // not useful for World Camera
            _detectorSettings.CustomProfileSettings = customProfileSettings;
        }

Is it always going to occupy the CV camera no matter what the configuration is? Disabling the marker detectors seems to solve the issue, but we do need marker tracking in our application.

I suggest testing the Magic Leap Unity Example Scene alongside the example camera script provided on the developer portal. Add logging to the callback function to verify that it is being triggered successfully.

Next, run the Marker Tracking Example scene on your Magic Leap 2 device. Observe the logs to confirm that the CV camera callback is being invoked. Then, use the runtime UI to create a custom profile specifically targeting the world camera. This should help you better understand how the configuration affects marker tracking and confirm whether the CV camera is being used.

Unfortunately, that appears to be the case. However, I found that the LargeFOV profile uses the world cameras for marker tracking and seems to allow the CV camera to continue running.

Thanks for the proposed solution. In our application we would need:
(1) marker tracking
(2) Main camera for MR capture
(3) CV camera for timestamps and camera transform and intrinsics

So we probably cannot apply your solution here. Fortunately, we do not need to track the marker constantly, so we decided to disable marker tracking before enabling video capture, and so far that seems to work fine. A few related questions:

  1. With some simple tests I was fairly convinced that the CV camera callback contains almost everything we need for the world-to-screen-point conversion (the transform and FOV seem fine, and we only need to pass the Main (MR capture) camera width and height into the following code). However, the principal point seems to sit around (width/2, height/2) for any camera resolution, with slight offsets from time to time. Since we are not very strict about the accuracy of the projected gaze points, I suppose using (width/2, height/2) is reasonable?
        public static Vector2 WorldPointToPixel(Vector3 worldPoint, int width, int height, MLCameraBase.IntrinsicCalibrationParameters parameters, Matrix4x4 cameraTransformationMatrix)
        {

            // Step 1: Convert the world space point to camera space
            Vector3 cameraSpacePoint = cameraTransformationMatrix.inverse.MultiplyPoint(worldPoint);

            // Step 2: Project the camera space point onto the normalized image plane
            Vector2 normalizedImagePoint = new Vector2(cameraSpacePoint.x / cameraSpacePoint.z, cameraSpacePoint.y / cameraSpacePoint.z);
            // Step 3: Adjust for FOV
            float verticalFOVRad = parameters.FOV * Mathf.Deg2Rad;
            float aspectRatio = width / (float)height;
            float horizontalFOVRad = 2 * Mathf.Atan(Mathf.Tan(verticalFOVRad / 2) * aspectRatio);
            // float horizontalFOVRad = 2 * Mathf.Atan(Mathf.Tan(verticalFOVRad / 2));

            normalizedImagePoint.x /= Mathf.Tan(horizontalFOVRad / 2);
            normalizedImagePoint.y /= Mathf.Tan(verticalFOVRad / 2);

            // Step 4: Convert normalized image coordinates to pixel coordinates
            // Vector2 pixelPosition = new Vector2(
            //     normalizedImagePoint.x * width + parameters.PrincipalPoint.x,
            //     normalizedImagePoint.y * height + parameters.PrincipalPoint.y
            // );
            Vector2 pixelPosition = new Vector2(
                normalizedImagePoint.x * width + width / 2,
                normalizedImagePoint.y * height + height / 2
            );

            return pixelPosition;
        }
  2. This might seem dumb, but are the "vertical" resolution options for MR capture (e.g., 648 x 720, 972 x 1080) essentially a sub-part of the "horizontal" options (e.g., 960 x 720, 1440 x 1080)? I'm asking because that appears to be the case in some camera shots, and the FOV returned is always 75 regardless of orientation. Is this expected, i.e., there is no real "extended vertical FOV" if we choose the more "vertical" resolutions?

Thanks!

Maybe this could help: Intrinsic/Extrinsic Parameters | MagicLeap Developer Documentation
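Based on that page, one option is to use the reported focal length and principal point directly instead of reconstructing the projection from the FOV. An untested sketch (it ignores lens distortion, and you may need to flip the v axis depending on where your image origin is):

        public static Vector2 WorldPointToPixelUsingIntrinsics(
            Vector3 worldPoint, int targetWidth, int targetHeight,
            MLCameraBase.IntrinsicCalibrationParameters parameters, Matrix4x4 cameraTransformMatrix)
        {
            // World space -> camera space
            Vector3 cameraSpacePoint = cameraTransformMatrix.inverse.MultiplyPoint(worldPoint);

            // Pinhole projection: pixel = focal * (x/z, y/z) + principal point,
            // expressed at the resolution the intrinsics were reported for.
            float u = parameters.FocalLength.x * (cameraSpacePoint.x / cameraSpacePoint.z) + parameters.PrincipalPoint.x;
            float v = parameters.FocalLength.y * (cameraSpacePoint.y / cameraSpacePoint.z) + parameters.PrincipalPoint.y;

            // Rescale from the intrinsics' resolution to the frame you are drawing on.
            float scaleX = targetWidth / (float)parameters.Width;
            float scaleY = targetHeight / (float)parameters.Height;
            return new Vector2(u * scaleX, v * scaleY);
        }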

If you're using OpenXR, enabling Secondary View improves MR content alignment and expands the Main Camera Capture width. By default, MR resolutions are smaller because Mixed Reality capture originally relied on the Left Eye Image, aligning it to match the RGB camera. However, with Secondary View enabled, a second virtual image is rendered from the RGB camera’s perspective, reducing offset and increasing the field of view.

I think I have Secondary View enabled. The alignment looks good in the captured video; I'm only wondering whether 972 x 720 is simply wider than 648 x 720, without sacrificing vertical FOV, given that the "FOV" returned in the camera intrinsics is always the same.

I believe you are correct. When Secondary View is not active, the Mixed Reality content may appear slightly wider depending on the resolution, while the actual camera FoV image remains the same.

Thanks for your explanation. I guess this also explains why we had fewer lost frames when Secondary View was disabled. So far we have been able to work with 972 x 720 MR capture plus 640 x 480 camera capture for the timestamp and camera transform readings, and all the projections we made from 3D fixation points onto 2D frames seem to be well aligned.

Many thanks for your timely responses!
