Help with camera capture, data flow, and body alignment

Hi,

I’m working on an academic project using Magic Leap 2 and am hoping to get some advice. The goal is to capture images or videos of a person standing in front of the device, analyze these frames (e.g. using a remote AI server for face or body recognition), and display a 3D model or avatar on top of the detected person, aligned correctly in the scene.

So far I’ve looked at the developer documentation, implemented a basic image capture using MLCamera.CaptureImageAsync in Unity C#, and tested saving the images locally (e.g. with Application.persistentDataPath). I’ve also tried sending the captured images to a server for external processing.
However, I can’t get the images to save, so I can’t send them either.

I’m having some difficulties at the moment:

Sometimes the images don’t seem to be reliably saved to disk, or I’m not sure which path to use.

I don’t know whether to stick with MLCamera, look at the Camera2 API, or use something else for this type of continuous image capture and streaming.

I’m trying to figure out how to overlay and align a 3D model in Unity to match the position of the person detected in the real world.

I would like to know if Magic Leap 2 provides built-in support for body tracking or human pose detection, or if this is something I will have to handle entirely with external tools.

If anyone has any tips, examples, or can point me to best practices for capturing and saving images, sending frames to an external server, and aligning the 3D models with the people in the scene, I would greatly appreciate it.

Thanks a lot for your help

Overlaying a virtual avatar on a person—with AI handling detection and alignment—is beyond the scope of this forum. Because the RGB camera provides only 2-D data, you would need extra logic to derive an accurate 3-D pose. Depth estimation, triangulation, or a fusion of multiple sensors can help, but implementing this reliably is non-trivial and can lead to unstable results.
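
That said, if an approximate anchor is enough (rather than a full articulated pose), one pragmatic option is to cast a ray from the rendering camera through the person's detected 2-D position and intersect it with the meshed environment, since Magic Leap meshing can produce MeshColliders. Below is a minimal Unity sketch, assuming a hypothetical detector that returns normalized image coordinates, and accepting the error introduced by the offset between the RGB camera and the render camera:

using UnityEngine;

public class PersonAnchorPlacer : MonoBehaviour
{
    [SerializeField] private Transform avatar;        // model to place
    [SerializeField] private float maxDistance = 10f; // raycast range

    // u/v are normalized (0..1) image coordinates of the detection,
    // e.g. the center of a bounding box from your remote detector.
    // Note: Unity viewport v runs bottom-up, so an image-space v
    // (top-down) must be flipped first: v = 1 - vImage.
    public void PlaceAvatar(float u, float v)
    {
        Ray ray = Camera.main.ViewportPointToRay(new Vector3(u, v, 0f));

        // Requires colliders in the scene, e.g. from ML2 meshing.
        if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
        {
            avatar.position = hit.point;
            avatar.rotation = Quaternion.LookRotation(-ray.direction, Vector3.up);
        }
    }
}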

Saving individual frames from the video feed can also introduce performance issues, as each frame must be encoded (e.g., to JPEG or PNG) and then written to storage or transmitted over the network.
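
If you do persist frames, at least keep the disk write off the main thread. Here is a minimal sketch (the helper name is mine), assuming the bytes are already JPEG-encoded, as they are when you request the JPEG output format:

using System.IO;
using System.Threading.Tasks;
using UnityEngine;

public static class FrameSaver
{
    // Writes already-encoded JPEG bytes on a worker thread so disk
    // I/O does not stall the render loop. Application.persistentDataPath
    // is a Unity API and must be read on the main thread, so the path
    // is resolved before handing off to Task.Run.
    public static Task SaveAsync(byte[] jpegBytes, string fileName)
    {
        string path = Path.Combine(Application.persistentDataPath, fileName);
        return Task.Run(() => File.WriteAllBytes(path, jpegBytes));
    }
}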

Depending on your setup, consider the Magic Leap Unity WebRTC example, which demonstrates how to stream the camera image using the Unity WebRTC package:

https://github.com/magicleap/MagicLeap2UnityWebRTCExample
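
If full WebRTC streaming is more than you need, a simpler (lower-rate) fallback is to POST individual JPEG frames with UnityWebRequest. A rough sketch, with a placeholder server URL:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class FrameUploader : MonoBehaviour
{
    // Placeholder endpoint; replace with your own server.
    [SerializeField] private string serverUrl = "http://example.com/frame";

    // POSTs one already-encoded JPEG frame to the server.
    public IEnumerator UploadFrame(byte[] jpegBytes)
    {
        using (UnityWebRequest req = UnityWebRequest.Put(serverUrl, jpegBytes))
        {
            req.method = UnityWebRequest.kHttpVerbPOST;
            req.SetRequestHeader("Content-Type", "image/jpeg");
            yield return req.SendWebRequest();

            if (req.result != UnityWebRequest.Result.Success)
                Debug.LogWarning("Upload failed: " + req.error);
        }
    }
}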

Yes, I understand that analyzing the images may be problematic. But suppose I just want to take some photos of the face of the person standing in front of the camera and then display a 3D model that is not necessarily superimposed on the human subject.

What script can I use to take photos and save them in a folder?

We provide a few scripts in our developer documentation that show how to access the camera's frames and intrinsic values (see MLCamera). Saving images works the same on Magic Leap 2 as on other Android platforms: encode the 2D image to JPEG or PNG and write the bytes to the application data path. However, depending on your implementation, you may be able to send the Texture2D directly to your inference engine.
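
For example, a minimal save helper (the names are illustrative) that encodes a readable Texture2D and writes it under the persistent data path could look like this:

using System.IO;
using UnityEngine;

public static class TextureSaver
{
    // Encodes a readable Texture2D to JPEG and writes the bytes
    // under Application.persistentDataPath; returns the full path.
    public static string SaveAsJpeg(Texture2D tex, string fileName, int quality = 90)
    {
        byte[] jpg = ImageConversion.EncodeToJPG(tex, quality);
        string path = Path.Combine(Application.persistentDataPath, fileName);
        File.WriteAllBytes(path, jpg);
        return path;
    }
}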

Based on the MLCamera script example (MLCamera Examples | MagicLeap Developer Documentation)

I integrated this code to save the images to a folder, but no saved files ever appear.

using System.Collections;
using UnityEngine;
using UnityEngine.XR.MagicLeap;
using static UnityEngine.XR.MagicLeap.MLCameraBase.Metadata;
using Debug = UnityEngine.Debug;

/// <summary>
/// This script provides an example of capturing images using Magic Leap 2's Main Camera stream and Camera APIs.
/// It handles permissions, connects to the camera, captures images at regular intervals, and sends the result data to the Camera Capture visualizer.
/// </summary>
public class ImageCaptureExample : MonoBehaviour
{
    [SerializeField, Tooltip("The renderer to show the camera capture on JPEG format")]
    private Renderer _screenRendererJPEG = null;

    [SerializeField, Tooltip("Cartella dove salvare le immagini")]
    private string saveFolderPath = "";

    // Indicates if the camera is connected
    private bool isCameraConnected;
    // Reference to the MLCamera object that will access the device's camera
    private MLCamera colorCamera;
    // Indicates if the camera device is available
    private bool cameraDeviceAvailable;

    // Indicates if an image is currently being captured
    private bool isCapturingImage;

    // Reference to the MLPermissions.Callbacks object that will handle the permission requests and responses
    private readonly MLPermissions.Callbacks permissionCallbacks = new MLPermissions.Callbacks();

    // Used to display the JPEG image.
    private Texture2D imageTexture;

    // Register the permission callbacks in the Awake method
    private void Awake()
    {
        permissionCallbacks.OnPermissionGranted += OnPermissionGranted;
        permissionCallbacks.OnPermissionDenied += OnPermissionDenied;
        permissionCallbacks.OnPermissionDeniedAndDontAskAgain += OnPermissionDenied;
    }

    // Request the camera permission in the Start method
    void Start()
    {
        // Set the save folder (it can also be changed from the Inspector)
        if (string.IsNullOrEmpty(saveFolderPath))
        {
            saveFolderPath = Application.persistentDataPath + "/captures";
            Debug.Log("[Camera] saveFolderPath not set, using: " + saveFolderPath);
        }

        Debug.LogError("App starting");
        MLResult result = MLPermissions.RequestPermission(MLPermission.Camera, permissionCallbacks);
        if (!result.IsOk)
        {
            Debug.LogErrorFormat("Error: ImageCaptureExample failed to get requested permissions, disabling script. Reason: {0}", result);
            enabled = false;
        }
    }

    void OnDisable()
    {
        permissionCallbacks.OnPermissionGranted -= OnPermissionGranted;
        permissionCallbacks.OnPermissionDenied -= OnPermissionDenied;
        permissionCallbacks.OnPermissionDeniedAndDontAskAgain -= OnPermissionDenied;

        if (colorCamera != null && isCameraConnected)
        {
            DisableMLCamera();
        }
    }

    private void OnPermissionDenied(string permission)
    {
        MLPluginLog.Error($"{permission} denied, test won't function.");
    }

    private void OnPermissionGranted(string permission)
    {
        StartCoroutine(EnableMLCamera());
        StartCoroutine(CaptureImagesLoop());
    }

    private IEnumerator EnableMLCamera()
    {
        while (!cameraDeviceAvailable)
        {
            MLResult result = MLCamera.GetDeviceAvailabilityStatus(MLCamera.Identifier.Main, out cameraDeviceAvailable);
            if (!(result.IsOk && cameraDeviceAvailable))
            {
                yield return new WaitForSeconds(1.0f);
            }
        }

        Debug.Log("Camera device available.");
        ConnectCamera();

        while (!isCameraConnected)
        {
            yield return null;
        }

        Debug.Log("Camera device connected.");
        ConfigureAndPrepareCapture();
    }

    private IEnumerator CaptureImagesLoop()
    {
        while (true)
        {
            if (isCameraConnected && !isCapturingImage)
            {
                if (MLCamera.IsCaptureTypeSupported(colorCamera, MLCamera.CaptureType.Image))
                {
                    CaptureImage();
                }
            }
            yield return new WaitForSeconds(3.0f);
        }
    }

    private async void ConnectCamera()
    {
        MLCamera.ConnectContext context = MLCamera.ConnectContext.Create();
        context.EnableVideoStabilization = false;
        context.Flags = MLCameraBase.ConnectFlag.CamOnly;

        colorCamera = await MLCamera.CreateAndConnectAsync(context);

        if (colorCamera != null)
        {
            colorCamera.OnRawImageAvailable += OnCaptureRawImageComplete;
            isCameraConnected = true;
        }
    }

    private async void ConfigureAndPrepareCapture()
    {
        MLCamera.CaptureStreamConfig[] imageConfig = new MLCamera.CaptureStreamConfig[1]
        {
            new MLCamera.CaptureStreamConfig()
            {
                OutputFormat = MLCamera.OutputFormat.JPEG,
                CaptureType = MLCamera.CaptureType.Image,
                Width = 1920,
                Height = 1080
            }
        };

        MLCamera.CaptureConfig captureConfig = new MLCamera.CaptureConfig()
        {
            StreamConfigs = imageConfig,
            CaptureFrameRate = MLCamera.CaptureFrameRate._30FPS
        };

        MLResult prepareCaptureResult = colorCamera.PrepareCapture(captureConfig, out MLCamera.Metadata _);

        if (!prepareCaptureResult.IsOk)
        {
            Debug.LogError("[Camera] Errore PrepareCapture: " + prepareCaptureResult);
            return;
        }
    }

    private void DisableMLCamera()
    {
        if (colorCamera != null)
        {
            colorCamera.Disconnect();
            isCameraConnected = false;
        }
    }

    private async void CaptureImage()
    {
        isCapturingImage = true;

        var aeawbResult = await colorCamera.PreCaptureAEAWBAsync();
        if (!aeawbResult.IsOk)
        {
            Debug.LogError("Image capture failed!");
        }
        else
        {
            var result = await colorCamera.CaptureImageAsync(1);
            if (!result.IsOk)
            {
                Debug.LogError("Image capture failed!");
            }
        }

        isCapturingImage = false;
    }

    private void OnCaptureRawImageComplete(MLCamera.CameraOutput capturedImage, MLCamera.ResultExtras resultExtras, MLCamera.Metadata metadataHandle)
    {
        MLResult aeStateResult = metadataHandle.GetControlAEStateResultMetadata(out ControlAEState controlAEState);
        MLResult awbStateResult = metadataHandle.GetControlAWBStateResultMetadata(out ControlAWBState controlAWBState);

        if (aeStateResult.IsOk && awbStateResult.IsOk)
        {
            bool autoExposureComplete = controlAEState == MLCameraBase.Metadata.ControlAEState.Converged || controlAEState == MLCameraBase.Metadata.ControlAEState.Locked;
            bool autoWhiteBalanceComplete = controlAWBState == MLCameraBase.Metadata.ControlAWBState.Converged || controlAWBState == MLCameraBase.Metadata.ControlAWBState.Locked;

            if (autoExposureComplete && autoWhiteBalanceComplete)
            {
                if (capturedImage.Format == MLCameraBase.OutputFormat.JPEG)
                {
                    // UpdateJPGTexture also forwards the bytes to the
                    // FaceRecognitionClient, so no extra send is needed here.
                    UpdateJPGTexture(capturedImage.Planes[0], _screenRendererJPEG);
                }
            }
        }
    }

    private void UpdateJPGTexture(MLCamera.PlaneInfo imagePlane, Renderer renderer)
    {
        if (imageTexture != null)
        {
            Destroy(imageTexture);
        }

        imageTexture = new Texture2D(8, 8);
        bool status = imageTexture.LoadImage(imagePlane.Data);

        if (status && (imageTexture.width != 8 && imageTexture.height != 8))
        {
            renderer.material.mainTexture = imageTexture;
            Debug.Log("[Camera] Immagine aggiornata sul renderer");

            // Salva immagine su disco
            SaveImageToDisk(imagePlane.Data);

            GetComponent<FaceRecognitionClient>().UpdateTexture(imagePlane.Data);
            GetComponent<FaceRecognitionClient>().SendCapturedTexture();
        }
        else
        {
            Debug.LogWarning("[Camera] Immagine non valida o dimensioni errate");
        }
    }

    private void SaveImageToDisk(byte[] imageData)
    {
        if (string.IsNullOrEmpty(saveFolderPath))
        {
            Debug.LogWarning("[Camera] saveFolderPath non impostato, impossibile salvare l'immagine");
            return;
        }

        try
        {
            if (!System.IO.Directory.Exists(saveFolderPath))
            {
                System.IO.Directory.CreateDirectory(saveFolderPath);
                Debug.Log("[Camera] Creata cartella: " + saveFolderPath);
            }

            string fileName = $"capture_{System.DateTime.Now:yyyyMMdd_HHmmssfff}.jpg";
            string fullPath = System.IO.Path.Combine(saveFolderPath, fileName);

            System.IO.File.WriteAllBytes(fullPath, imageData);

            Debug.Log("[Camera] Immagine salvata su disco: " + fullPath);
        }
        catch (System.Exception e)
        {
            Debug.LogError("[Camera] Errore salvataggio immagine: " + e.Message);
        }
    }
}

What file path are you looking under? Do you see the resulting image in your application?
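
On Android-based devices such as Magic Leap 2, Application.persistentDataPath typically resolves to /storage/emulated/0/Android/data/<package name>/files (the exact location depends on the project's Write Permission setting), and you can inspect it with adb shell or copy files off the device with adb pull. Logging the resolved path once removes the guesswork, for example:

Debug.Log("[Camera] persistentDataPath = " + Application.persistentDataPath);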

Oh yes, I forgot to set the path, but even after setting it to a folder on disk, nothing changed: no photos are saved inside it.
None of the debug logs appear either.
In the application, the images are shown on a renderer just for testing, as described in the documentation. New photos keep being taken, so the renderer is continuously updated.

Which directory on the device are you checking to see whether the images were saved?

Does this log get called?

  Debug.Log("[Camera] Immagine salvata su disco: " + fullPath);