Questions about Pixel/Camera API

Unity Editor version: 2022.3.42f1
ML2 OS version: 1.10.0
Unity SDK version: 2.5.0
Host OS: Windows

Hi,

I recently upgraded to the latest OS version, and it's nice to see new functionality being implemented! It's especially inspiring to see the new Pixel Sensor Unity Example and the eye tracking functions. I do have a couple of questions about this version that I hope you can answer:

  1. It now seems that the Eye Tracker API provides access to pupil diameters. However, in a post I made 9 months ago, I was told that pupil diameter was not available and that "open amount" refers to something else. Now that I'm getting a reading of around 0.003, may I assume this is the actual pupil diameter, measured in meters?

  2. It's nice to see the visualization of every Pixel Sensor, but it's still unclear to me how the "Picture Center" and "World Center" sensors differ and how we could use them for world-to-pixel-space conversions in our apps. When I first asked about this, I was told that the RGB camera is not the same as the "world camera" or the "main camera" in Unity, and that I have to do the conversion manually. Is the "Picture Center" feed essentially the same as the MLCamera feed? If so, is there a way to configure it (e.g. resolution, real/virtual capture) the way the MLCamera API allows? And if my app does not rely on the RGB images, could I use the "World Center" sensor, do the conversions with Unity's built-in methods, and effectively get the same visuals in grayscale? If so, is there example code I can follow to, say, save a video (or a series of images) together with their timestamps (preferably aligned with the eye tracker timestamps) locally or directly to a PC through the Pixel Sensors API?

Also, I just noticed that the Unity Examples project requires Unity 2022.3.11 or higher but targets Android API level 29. Unity Hub already installs the Android SDK at level 31 by default for 2022.3.6f1, which could make it hard to build for those who don't know how to change it, so you may want to update that in the examples. I also noticed some decline in tracking performance since the update, but it might be related to the space we are using it in - since it's mentioned in this post already, I'll wait and see whether that's a problem.

Thanks in advance!

Regarding your first question, the Eye Tracker API provides access to the following OpenXR Extension: XR_ML_eye_tracker | MagicLeap Developer Documentation. You can refer to the OpenXR Spec for more detailed information.
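
For reference, here is a minimal sketch of how the pupil data could be read through that feature in Unity. Treat the member names (CreateEyeTracker, GetEyeTrackerData, PupilData, PupilDiameter) as assumptions based on the SDK wrapper and verify them against the API reference; the interpretation of the value as meters is also an assumption to confirm.

using UnityEngine;
using UnityEngine.XR.OpenXR;
using MagicLeap.OpenXR.Features.EyeTracker;

public class PupilDiameterLogger : MonoBehaviour
{
    private MagicLeapEyeTrackerFeature eyeTrackerFeature;

    private void Start()
    {
        // Assumes the eye tracking (and pupil size) permissions were already granted.
        eyeTrackerFeature = OpenXRSettings.Instance.GetFeature<MagicLeapEyeTrackerFeature>();
        if (eyeTrackerFeature == null || !eyeTrackerFeature.enabled)
        {
            Debug.LogError("Magic Leap Eye Tracker feature is not available or not enabled.");
            enabled = false;
            return;
        }
        eyeTrackerFeature.CreateEyeTracker();
    }

    private void Update()
    {
        var data = eyeTrackerFeature.GetEyeTrackerData();
        // Assumption: PupilDiameter is reported in meters, so a reading of ~0.003
        // would correspond to a pupil of about 3 mm.
        foreach (var pupil in data.PupilData)
        {
            Debug.Log($"Pupil diameter: {pupil.PupilDiameter}");
        }
    }
}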


As for your question about the pixel sensor cameras, I’ll do my best to clarify!

Yes, the Picture Center Camera provides access to the RGB camera located at the center of the Magic Leap 2. You can find more details here: Pixel Sensor Overview | MagicLeap Developer Documentation. This is an alternative way to access the camera image. However, the Android Camera NDK and MLCamera APIs offer more control over camera functions.

You might find the MLCamera APIs easier to use for your specific case. If you decide to go this route, I recommend familiarizing yourself with the API—especially with querying and setting the capabilities. Here’s more information: API Overview | MagicLeap Developer Documentation.
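
As a quick sketch of what querying those capabilities can look like with MLCamera (the method names follow the MLCamera documentation, but double-check them against the API reference for your SDK version):

using UnityEngine;
using UnityEngine.XR.MagicLeap;

public class MLCameraCapabilityQuery : MonoBehaviour
{
    private MLCamera colorCamera;

    private async void Start()
    {
        // Assumes the CAMERA permission has already been requested and granted.
        var connectContext = MLCamera.ConnectContext.Create();
        connectContext.CamId = MLCamera.Identifier.Main;   // Main camera supports the real/virtual capture flags
        connectContext.Flags = MLCamera.ConnectFlag.CamOnly;

        colorCamera = await MLCamera.CreateAndConnectAsync(connectContext);
        if (colorCamera == null)
        {
            Debug.LogError("Failed to connect to MLCamera.");
            return;
        }

        // List the supported capture types and resolutions for each stream.
        var result = colorCamera.GetStreamCapabilities(out MLCamera.StreamCapabilitiesInfo[] streamInfos);
        if (!result.IsOk)
        {
            Debug.LogError($"GetStreamCapabilities failed: {result}");
            return;
        }

        foreach (var streamInfo in streamInfos)
        {
            foreach (var capability in streamInfo.StreamCapabilities)
            {
                Debug.Log($"{capability.CaptureType}: {capability.Width} x {capability.Height}");
            }
        }
    }
}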

For examples, check out the Magic Leap Pixel Sensor Example in the latest Magic Leap Examples Project, as well as additional examples on our documentation portal.


Thanks for bringing the Android SDK change to our attention—we’ll look into the update! :slight_smile:

Thank you for your reply!

Re: the pixel sensor cameras, one reason I might want to use the world/picture camera is that I want their timestamps to be comparable to the eye tracker data I collect. I am not sure how extras.VCamTimestamp in MLCamera can be aligned with gazeBehavior.time in the EyeTracker API or frame.CaptureTime in the Pixel Sensors API. For this I don't think I can rely on the system recording function or Magic Leap Hub; I have to record the timestamp of each captured image frame myself. I'm looking at the example you provided now and hopefully will be able to figure this out soon.

On the other hand, you mentioned that the eye tracker OpenXR feature requires an update after the latest release - is the XR_ML_eye_tracker | MagicLeap Developer Documentation extension still accessible with the Unity SDK, using the following namespaces?

using UnityEngine.XR.OpenXR;
using MagicLeap.OpenXR.Features.EyeTracker;

If so, it seems I don't need to change any of my current implementation for the code to work, and I can confidently add the pupil diameter value to it. I haven't tested the OnsetTime timestamp issue yet; I'll try that later today.

Thanks again for your timely response!

After today's tests I ran into the following issues:

  1. I was not able to obtain the capabilities of the World Center camera using the documentation provided here. Specifically, I added the provided code to the Pixel Sensors Unity Example (the file I used is attached). Nothing showed up in the log, not even the line Debug.Log("Listing capabilities for world center sensor stream 0");. I'm not sure whether this is because I put the code in the wrong place or something is happening on the backend, but maybe you'll be able to reproduce it on your end? Alternatively, simply listing the default specs and ranges in the documentation would be helpful.
    PixelSensorExample.cs (15.2 KB)
  2. The timestamps from the eye tracker API are still somewhat confusing. The OnsetTime for gaze behaviors is still ~30 s before the capture time; the behavior duration seems to make much more sense, but I'm not sure how it is computed if OnsetTime is not correct. For example, see the following screenshot of the data I collected (left to right: capture time/event/onset time/duration; frequency is ~60 Hz, so ~0.016 s between rows).
  3. When I put the eye tracker code into the Pixel Sensors Example, the frequency of the eye tracking data dropped to ~30 Hz. I suppose this is because of the resources consumed by the several cameras and the rendering of everything. However, when I changed the eye tracking code to use FixedUpdate() at 0.014 s and only record data when the capture time differs from the previous sample, something strange happened: not only did the capture timestamp become much more unstable, it sometimes moved backward (see the attached image). I understand it might be hard to maintain a steady 60 Hz, but why would it move backward? I guess my question is: is there a stable method to acquire gaze data at a roughly fixed 60 Hz interval?
  4. When I tried to test how the pixel sensor timestamp aligns with the eye tracker timestamp, there appeared to be a gap of ~60 ms between them. However, my test was conducted with the eye tracker updated inside FixedUpdate() and the pixel sensor in the normal Update(), because when I moved the pixel sensor to FixedUpdate() as well, the app crashed within one frame. I'm not sure whether there is an async implementation for this or whether setting the frame rate to 60 Hz would work better, but since I couldn't query the capabilities (as mentioned in my first point), I couldn't find a way through this.
    Example timestamp difference:
11-13 21:04:00.155 10127 24885 24907 I Unity   : Capture Time2467936743036
11-13 21:04:00.155 10127 24885 24907 I Unity   : SensorInfo:UpdateVisualizer(StringBuilder)
11-13 21:04:00.155 10127 24885 24907 I Unity   : PixelSensorExample:Update()
11-13 21:04:00.155 10127 24885 24907 I Unity   : 
11-13 21:04:00.160 10127 24885 24907 I Unity   : Capture Time2467936743036
11-13 21:04:00.160 10127 24885 24907 I Unity   : SensorInfo:UpdateVisualizer(StringBuilder)
11-13 21:04:00.160 10127 24885 24907 I Unity   : PixelSensorExample:Update()
11-13 21:04:00.160 10127 24885 24907 I Unity   : 
11-13 21:04:00.165 10127 24885 24907 I Unity   : Capture Time2467936755036
11-13 21:04:00.165 10127 24885 24907 I Unity   : SensorInfo:UpdateVisualizer(StringBuilder)
11-13 21:04:00.165 10127 24885 24907 I Unity   : PixelSensorExample:Update()
11-13 21:04:00.165 10127 24885 24907 I Unity   : 
11-13 21:04:00.173 10127 24885 24907 I Unity   : Eye Tracking Pupil Time: 2467868483310
11-13 21:04:00.173 10127 24885 24907 I Unity   : EyeTracking:Update()
11-13 21:04:00.173 10127 24885 24907 I Unity   : 
11-13 21:04:00.174 10127 24885 24907 I Unity   : Gaze Behavior Time: 2467868483310
11-13 21:04:00.174 10127 24885 24907 I Unity   : EyeTracking:Update()

I haven't been able to test the conversion from world space to screen space on the World Center sensor yet (using Unity's built-in conversions with the main camera); it would be very helpful if you could provide some input on whether this is worth a try.

I'll be happy to share with you more details/bug reports. Thanks in advance!

Let me see if I can clarify a few things.

1A. Regarding VCamTimestamp and GazeBehaviour Time

The VCamTimestamp is returned as MLTime and may need to be converted to system time. Here is a simple test script that shows how to use the time conversion feature in the SDK.

using System;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR.OpenXR;
using MagicLeap.OpenXR.Features;
using Debug = UnityEngine.Debug;

namespace MagicLeap.IntegrationTests
{
    public class TimespecConversionsTest : MonoBehaviour
    {
        [SerializeField]
        Text criteriaText1;

        [SerializeField]
        Text resultsText1;
    
        [SerializeField]
        Text criteriaText2;

        [SerializeField]
        Text resultsText2;

        private MagicLeapFeature magicLeapFeature;

        private void Start()
        {
            magicLeapFeature = OpenXRSettings.Instance.GetFeature<MagicLeapFeature>();
            if (!magicLeapFeature.enabled)
            {
                Debug.LogError($"{nameof(MagicLeapFeature)} was not enabled. Disabling Script.");
                enabled = false;
                return;
            }

            var currentTimeOffset = DateTimeOffset.UtcNow;
            var currentTimeStamp = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
            criteriaText1.text = $"1. Now: Seconds: {currentTimeOffset.Second}";
            var xrNow = magicLeapFeature.ConvertSystemTimeToXrTime(currentTimeStamp);
            resultsText1.text = $"2.) XrTime (converted):\n\n{xrNow}";
            var convertedNs = magicLeapFeature.ConvertXrTimeToSystemTime(xrNow);
            resultsText2.text = $"3.) timestamp (converted back):\n\n{convertedNs}";

            if (convertedNs != currentTimeStamp)
            {
                resultsText2.text += "\n\nfailed roundtrip conversion!";
                resultsText2.text += $"\nbefore: {currentTimeStamp}\nafter: {convertedNs}";
                Debug.LogError("Failed roundtrip conversion: new timestamp is not the same as original timestamp");
            }
            else
            {
                resultsText2.text += "\n\nsuccess";
            }
        }
        
    }
}

1B. Regarding Pixel Sensor Device Capabilities

I won't be able to debug your code specifically, but you may want to check out the Pixel Sensor Example in the Magic Leap Unity Examples project. That example allows you to configure each of the pixel sensors at runtime, including the world and picture cameras.

Note that the World Center camera does not have access to the mixed reality feed; that flag is exclusive to the Picture Center camera (RGB camera). The Pixel Sensor capture time is exposed as a long.
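
If it helps, here is a rough sketch of comparing that capture time against an eye tracker timestamp. It assumes both values are nanoseconds in the same timebase, which the log excerpt earlier in this thread is consistent with, but please verify that on device:

// Sketch only: intended to live inside a script that already has pixelSensorFeature,
// a created sensor and a configured stream (as in the Pixel Sensor Example).
// latestGazeTimeNs is whatever timestamp you last read from the eye tracker.
private void LogFrameToGazeOffset(PixelSensorId sensorId, uint streamIndex, long latestGazeTimeNs)
{
    if (pixelSensorFeature.GetSensorData(sensorId, streamIndex,
            out PixelSensorFrame frame, out PixelSensorMetaData[] metaData,
            Unity.Collections.Allocator.Temp, shouldFlipTexture: true))
    {
        double deltaMs = (frame.CaptureTime - latestGazeTimeNs) / 1_000_000.0;
        Debug.Log($"Pixel sensor frame leads the latest gaze sample by {deltaMs:F1} ms");
    }
}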

  1. Regarding the OnsetTime

I'm not sure why your timestamps show such a difference between the two, but maybe this thread will help:

  2. Regarding Decreased Frequency In Eye Tracker

The Pixel Sensor API can only access the eye tracking cameras at 30 Hz, while the Eye Tracking API operates at 60 Hz. Additionally, when you poll the data you obtain whatever values are currently available, so the result depends on your app's frame rate and resources and on the way you write/print the data. Would you be able to isolate this issue and share more details about it?
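
If it helps, one simple way to make the logging robust to that effect is to keep only samples whose capture time strictly advances, regardless of whether you poll from Update() or FixedUpdate(). A rough sketch, where GetLatestGazeCaptureTime() is a placeholder for however you read the eye tracker data:

// Sketch only: poll every frame but keep a sample only when its capture time strictly
// advances, so duplicate or backward-moving timestamps are dropped and the recorded
// series stays monotonic even when polling outpaces the 60 Hz source.
private long lastGazeCaptureTime = long.MinValue;

private void Update()
{
    long captureTime = GetLatestGazeCaptureTime(); // placeholder for your eye tracker read
    if (captureTime > lastGazeCaptureTime)
    {
        lastGazeCaptureTime = captureTime;
        // Record/serialize this sample here.
    }
}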

I'm not sure why it would fail when using FixedUpdate, but I would try to follow the setup used in the Unity Example project.

Are you referring to pixel-to-world-space conversion? Given the pose and the intrinsics, you should be able to project the data into 3D. We have a few posts under the official tag on the forum that describe the process of undistorting and projecting pixel data to world space for the depth and world cameras.
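
As a rough illustration of the idea (ignoring lens distortion), projecting a world-space point into pixel coordinates with a pinhole model could look like the sketch below. The focal length and principal point come from the PixelSensorPinholeIntrinsics metadata and the pose from GetSensorPose(); axis conventions (for example y-down image coordinates or flipped textures) may still need adjusting for your setup.

// Sketch only, no distortion handling: standard pinhole projection
// u = fx * x / z + cx, v = fy * y / z + cy in the sensor's local frame.
private Vector2 WorldToPixel(Vector3 worldPoint, Pose sensorPose,
    Vector2 focalLength, Vector2 principalPoint)
{
    // Transform the world point into the sensor's local (camera) space.
    Vector3 local = Quaternion.Inverse(sensorPose.rotation) * (worldPoint - sensorPose.position);

    // Points behind the sensor cannot be projected.
    if (local.z <= 0f)
    {
        return new Vector2(float.NaN, float.NaN);
    }

    float u = focalLength.x * (local.x / local.z) + principalPoint.x;
    float v = focalLength.y * (local.y / local.z) + principalPoint.y;
    return new Vector2(u, v);
}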

Thanks for your answers! I've run some additional tests and hopefully this provides some extra information. Let me follow the same order as your message:

1A. Regarding VCamTimestamp and GazeBehaviour Time:
I think I get what you mean here. I haven't tried adding MLCamera back into this application, but given that the world camera cannot be used to get the mixed reality feed, I'll probably have to go back to it. It's on my list of TODOs.

1B. Regarding Pixel Sensor Device Capabilities
I'm not sure I understand you correctly - are you saying I should be able to change some of the configuration within the Pixel Sensor Unity Example, either through the provided user interface or in the Unity editor? I wasn't able to do either, and looking at the code the only related part I see is:

public PixelSensorAsyncOperationResult ConfigureSensorRoutine()
{
    return PixelSensorFeature.ConfigureSensorWithDefaultCapabilities(SensorId, ConfiguredStream);
}

which seems to apply the default configuration to all sensors. What I did was essentially paste the following code into the initialization function:

availableSensors = pixelSensorFeature.GetSupportedSensors();
PixelSensorId mySensorId = availableSensors.Find(x => x.SensorName == "World Center");
Debug.Log("Sensor ID: " + mySensorId.SensorName);
pixelSensorFeature.GetPixelSensorCapabilities(mySensorId, 0, out var capabilities);
Debug.Log("Listing capabilities for world center sensor stream 0");
foreach (var pixelSensorCapability in capabilities)
{
    if (pixelSensorFeature.QueryPixelSensorCapability(mySensorId, pixelSensorCapability.CapabilityType,
            0, out var range) && range.IsValid)
    {
        Debug.Log("Capability: " + pixelSensorCapability.CapabilityType);
        switch (range.RangeType)
        {
            // See example below on how to handle the ranges
            case PixelSensorCapabilityRangeType.Boolean:
                HandleBooleanRange(range);
                break;
            case PixelSensorCapabilityRangeType.Continuous:
                HandleContinuousRange(range);
                break;
            case PixelSensorCapabilityRangeType.Discrete:
                HandleDiscreteRange(range);
                break;
        }
    }
}

But I see nothing in the log about the available capabilities, not even the output of Debug.Log("Sensor ID: " + mySensorId.SensorName);. The rest of the example runs fine, but I can't find a way to configure the sensor if I can't access the available capabilities.

  1. Regarding the OnsetTime
    I think the post you linked to was also one of mine, about the same issue. If the duration value is accurate, I should be able to rely on that.

  2. Regarding Decreased Frequency In Eye Tracker
    As you suggested, I isolated the eye tracking code and it worked fine in both FixedUpdate() and Update(). Only when the pixel sensor is also used does it start to slow down, and the capture time even moves backward when queried. I'm not using the eye tracking cameras, though - just running the eye tracking code inside the Pixel Sensor Example (I tested with the world cameras specifically). Are you saying that all of those sensors can only operate at 30 Hz? That's what I observed from the timestamps of the camera captures, but given that the RGB camera can run at 60 Hz through MLCamera, I assume it can go higher? I wanted to test whether specifying a different update rate (as described here) could help with this problem, but I can't move forward since I can't query the capabilities of those cameras.

I think I will test with the MLCamera API again, but it would be very helpful if you could advise on whether the Pixel Sensor API can be used here, especially now that it gives access to the images.

  • For the conversion, I probably won't need more help, given that the mixed reality feed is not available on the World Center sensor. I have already worked on the conversion for the RGB camera and know how to do it.

Thanks again!

Sorry, I was looking at a different demo. You can see the World Camera Example on the developer portal, which queries the capabilities. Here is a version of a script that allows you to change the target exposure. Note that changing the target exposure is only available on the second stream, not the first one.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.MagicLeap;
using UnityEngine.XR.OpenXR;
using MagicLeap.OpenXR.Features.PixelSensors;

/// <summary>
/// This script captures images from a specified Magic Leap pixel sensor (e.g., World Center camera),
/// applies custom configurations (e.g., manual exposure), retrieves metadata, and displays the images
/// using specified renderers. It handles permissions, sensor initialization, configuration, data retrieval,
/// metadata processing, and cleanup.
/// </summary>
public class WorldCameraPixelSensor : MonoBehaviour
{
    [Header("Sensor Configuration")]
    [Tooltip("Set to one of the supported sensor names: World Center, World Left, World Right")]
    [SerializeField]
    private string pixelSensorName = "World Center";

    [Tooltip("Enable or disable stream 0")]
    [SerializeField]
    private bool useStream0 = true;

    [Tooltip("Enable or disable stream 1")]
    [SerializeField]
    private bool useStream1 = false;

    [Tooltip("Use custom properties for sensor configuration")]
    [SerializeField]
    private bool useCustomProperties = true;

    [Tooltip("Use manual exposure settings if true, otherwise use auto exposure")]
    [SerializeField]
    private bool useManualExposureSettings = false;

    [Header("Auto Exposure Settings")]
    [Tooltip("Exposure Mode: EnvironmentTracking (0) or ProximityIrTracking (1)")]
    [SerializeField]
    private PixelSensorAutoExposureMode autoExposureMode = PixelSensorAutoExposureMode.EnvironmentTracking;

    [Tooltip("Auto Exposure Target Brightness (-5.0 to 5.0)")]
    [SerializeField]
    [Range(-5.0f, 5.0f)]
    private float autoExposureTargetBrightness = 0.0f;

    [Header("Manual Exposure Settings")]
    [Tooltip("Manual Exposure Time in microseconds (e.g., 8500)")]
    [SerializeField]
    private uint manualExposureTimeUs = 8500;

    [Tooltip("Analog Gain (e.g., 100, higher values increase brightness)")]
    [SerializeField]
    private uint analogGain = 100;

    [Header("Render Settings")]
    [Tooltip("Renderers to display the streams. The array size should match the number of streams used.")]
    [SerializeField]
    private Renderer[] streamRenderers = new Renderer[2];

    private const string requiredPermission = MLPermission.Camera;

    // Array to hold textures for each stream
    private Texture2D[] streamTextures = new Texture2D[2];

    // Sensor ID used to interact with the specific sensor
    private PixelSensorId? sensorId;

    // List to keep track of which streams have been configured
    private readonly List<uint> configuredStreams = new List<uint>();

    // Reference to the Magic Leap Pixel Sensor Feature
    private MagicLeapPixelSensorFeature pixelSensorFeature;

    // Capabilities that are required for configuration
    private readonly PixelSensorCapabilityType[] requiredCapabilities = new[]
    {
        PixelSensorCapabilityType.UpdateRate,
        PixelSensorCapabilityType.Format,
        PixelSensorCapabilityType.Resolution
    };

    // Capabilities for manual exposure settings
    private readonly PixelSensorCapabilityType[] manualExposureCapabilities = new[]
    {
        PixelSensorCapabilityType.ManualExposureTime,
        PixelSensorCapabilityType.AnalogGain,
    };

    // Capabilities for auto exposure settings
    private readonly PixelSensorCapabilityType[] autoExposureCapabilities = new[]
    {
        PixelSensorCapabilityType.AutoExposureMode,
        PixelSensorCapabilityType.AutoExposureTargetBrightness,
    };

    private void Start()
    {
        InitializePixelSensorFeature();
    }

    /// <summary>
    /// Initializes the Magic Leap Pixel Sensor Feature and requests necessary permissions.
    /// </summary>
    private void InitializePixelSensorFeature()
    {
        // Check if OpenXRSettings.Instance is not null
        if (OpenXRSettings.Instance == null)
        {
            Debug.LogError("OpenXRSettings.Instance is null.");
            enabled = false;
            return;
        }

        // Get the Magic Leap Pixel Sensor Feature from the OpenXR settings
        pixelSensorFeature = OpenXRSettings.Instance.GetFeature<MagicLeapPixelSensorFeature>();
        if (pixelSensorFeature == null || !pixelSensorFeature.enabled)
        {
            Debug.LogError("Magic Leap Pixel Sensor Feature is not available or not enabled.");
            enabled = false;
            return;
        }

        // Request the necessary permission
        MagicLeap.Android.Permissions.RequestPermission(
            requiredPermission,
            OnPermissionGranted, OnPermissionDenied, OnPermissionDenied);
    }

    /// <summary>
    /// Callback when a permission is granted.
    /// </summary>
    /// <param name="permission">The permission that was granted.</param>
    private void OnPermissionGranted(string permission)
    {
        if (permission == requiredPermission)
        {
            Debug.Log($"Permission granted: {permission}");
            FindAndInitializeSensor();
        }
    }

    /// <summary>
    /// Callback when a permission is denied.
    /// </summary>
    /// <param name="permission">The permission that was denied.</param>
    private void OnPermissionDenied(string permission)
    {
        if (permission == requiredPermission)
        {
            Debug.LogError($"Permission denied: {permission}");
            enabled = false;
        }
    }

    /// <summary>
    /// Finds the sensor by name and attempts to initialize it.
    /// </summary>
    private void FindAndInitializeSensor()
    {
        List<PixelSensorId> sensors = pixelSensorFeature.GetSupportedSensors();
        int index = sensors.FindIndex(x => x.SensorName.Contains(pixelSensorName));

        if (index < 0)
        {
            Debug.LogError($"{pixelSensorName} sensor not found.");
            enabled = false;
            return;
        }

        sensorId = sensors[index];

        // Unsubscribe before subscribing to prevent duplicate subscriptions
        pixelSensorFeature.OnSensorAvailabilityChanged -= OnSensorAvailabilityChanged;
        pixelSensorFeature.OnSensorAvailabilityChanged += OnSensorAvailabilityChanged;
        TryInitializeSensor();
    }

    /// <summary>
    /// Handles changes in sensor availability.
    /// </summary>
    /// <param name="id">The sensor ID.</param>
    /// <param name="available">Whether the sensor is available.</param>
    private void OnSensorAvailabilityChanged(PixelSensorId id, bool available)
    {
        if (sensorId.HasValue && id.XrPath == sensorId.Value.XrPath && available)
        {
            Debug.Log($"Sensor became available: {id.SensorName}");
            TryInitializeSensor();
        }
    }

    /// <summary>
    /// Attempts to create and initialize the sensor.
    /// </summary>
    private void TryInitializeSensor()
    {
        if (sensorId.HasValue)
        {
            if (pixelSensorFeature.CreatePixelSensor(sensorId.Value))
            {
                Debug.Log("Sensor created successfully.");
                StartCoroutine(ConfigureSensorStreams());
            }
            else
            {
                Debug.LogWarning("Failed to create sensor. Will retry when available.");
            }
        }
        else
        {
            Debug.LogError("Sensor ID is not set.");
        }
    }

    /// <summary>
    /// Configures the sensor streams with custom capabilities and starts streaming.
    /// </summary>
    private IEnumerator ConfigureSensorStreams()
    {
        if (!sensorId.HasValue)
        {
            Debug.LogError("Sensor ID was not set.");
            enabled = false;
            yield break;
        }

        uint streamCount = pixelSensorFeature.GetStreamCount(sensorId.Value);

        if ((useStream0 && streamCount < 1) || (useStream1 && streamCount < 2))
        {
            Debug.LogError("Target streams are not available from the sensor.");
            enabled = false;
            yield break;
        }

        configuredStreams.Clear();

        if (useStream0)
        {
            configuredStreams.Add(0);
        }
        if (useStream1)
        {
            configuredStreams.Add(1);
        }

        // Ensure that the number of renderers matches the number of configured streams
        if (streamRenderers.Length < configuredStreams.Count)
        {
            Debug.LogError("Not enough stream renderers assigned for the configured streams.");
            enabled = false;
            yield break;
        }

        // Build the list of capabilities to configure
        List<PixelSensorCapabilityType> targetCapabilities = new List<PixelSensorCapabilityType>(requiredCapabilities);

        if (useCustomProperties)
        {
            if (useManualExposureSettings)
            {
                targetCapabilities.AddRange(manualExposureCapabilities);
            }
            else
            {
                targetCapabilities.AddRange(autoExposureCapabilities);
            }
        }

        // Iterate over each configured stream and apply capabilities
        foreach (uint streamIndex in configuredStreams)
        {
            // Get capabilities for the current stream
            if (pixelSensorFeature.GetPixelSensorCapabilities(sensorId.Value, streamIndex, out PixelSensorCapability[] capabilities))
            {
                // Create a HashSet of available capabilities for quick lookup
                HashSet<PixelSensorCapabilityType> availableCapabilities = capabilities.Select(c => c.CapabilityType).ToHashSet();

                foreach (PixelSensorCapabilityType capabilityType in targetCapabilities)
                {
                    // Check if the capability is available for this stream
                    if (!availableCapabilities.Contains(capabilityType))
                    {
                        Debug.LogWarning($"Capability {capabilityType} is not available for stream {streamIndex}. Skipping.");
                        continue;
                    }

                    // Find the capability we want to configure
                    PixelSensorCapability capability = capabilities.First(c => c.CapabilityType == capabilityType);

                    // Query the valid range for the capability
                    if (pixelSensorFeature.QueryPixelSensorCapability(sensorId.Value, capabilityType, streamIndex, out PixelSensorCapabilityRange range) && range.IsValid)
                    {
                        PixelSensorConfigData configData = new PixelSensorConfigData(capabilityType, streamIndex);

                        // Apply default values for required capabilities
                        if (requiredCapabilities.Contains(capabilityType))
                        {
                            configData = range.GetDefaultConfig(streamIndex);
                            pixelSensorFeature.ApplySensorConfig(sensorId.Value, configData);
                            yield return null;
                        }
                        else if (capabilityType == PixelSensorCapabilityType.ManualExposureTime)
                        {
                            // Apply manual exposure time
                            uint exposureTime = ClampUInt(manualExposureTimeUs, range.IntRange.Value.Min, range.IntRange.Value.Max);
                            configData.IntValue = exposureTime;
                            pixelSensorFeature.ApplySensorConfig(sensorId.Value, configData);
                            yield return null;
                        }
                        else if (capabilityType == PixelSensorCapabilityType.AnalogGain)
                        {
                            // Apply analog gain
                            uint gain = ClampUInt(analogGain, range.IntRange.Value.Min, range.IntRange.Value.Max);
                            configData.IntValue = gain;
                            pixelSensorFeature.ApplySensorConfig(sensorId.Value, configData);
                            yield return null;
                        }
                        else if (capabilityType == PixelSensorCapabilityType.AutoExposureMode)
                        {
                            // Apply auto exposure mode
                            if (range.IntValues.Contains((uint)autoExposureMode))
                            {
                                configData.IntValue = (uint)autoExposureMode;
                                pixelSensorFeature.ApplySensorConfig(sensorId.Value, configData);
                                yield return null;
                            }
                            else
                            {
                                Debug.LogWarning($"Auto Exposure Mode {autoExposureMode} is not supported for stream {streamIndex}.");
                                continue;
                            }
                        }
                        else if (capabilityType == PixelSensorCapabilityType.AutoExposureTargetBrightness)
                        {
                            // Apply auto exposure target brightness
                            float brightness = Mathf.Clamp(autoExposureTargetBrightness, range.FloatRange.Value.Min, range.FloatRange.Value.Max);
                            configData.FloatValue = brightness;
                            pixelSensorFeature.ApplySensorConfig(sensorId.Value, configData);
                            yield return null;
                        }
                    }
                    else
                    {
                        Debug.LogWarning($"Capability range for {capabilityType} is invalid or not supported for stream {streamIndex}. Skipping.");
                        continue;
                    }
                }
            }
            else
            {
                Debug.LogError($"Failed to get capabilities for stream {streamIndex}.");
                enabled = false;
                yield break;
            }
        }

        // Submit the configuration
        PixelSensorAsyncOperationResult configureOperation = pixelSensorFeature.ConfigureSensor(sensorId.Value, configuredStreams.ToArray());
        yield return configureOperation;

        if (!configureOperation.DidOperationSucceed)
        {
            Debug.LogError("Failed to configure sensor with custom capabilities.");
            enabled = false;
            yield break;
        }

        Debug.Log("Sensor configured with custom capabilities successfully.");

        // Obtain supported metadata types
        Dictionary<uint, PixelSensorMetaDataType[]> supportedMetadataTypes = new Dictionary<uint, PixelSensorMetaDataType[]>();
        foreach (uint streamIndex in configuredStreams)
        {
            if (pixelSensorFeature.EnumeratePixelSensorMetaDataTypes(sensorId.Value, streamIndex, out PixelSensorMetaDataType[] metaDataTypes))
            {
                // Request all available metadata types
                supportedMetadataTypes.Add(streamIndex, metaDataTypes);
            }
            else
            {
                Debug.LogWarning($"Failed to enumerate metadata types for stream {streamIndex}.");
            }
        }

        // Start the sensor streams with requested metadata
        PixelSensorAsyncOperationResult sensorStartAsyncResult = pixelSensorFeature.StartSensor(sensorId.Value, configuredStreams, supportedMetadataTypes);
        yield return sensorStartAsyncResult;

        if (!sensorStartAsyncResult.DidOperationSucceed)
        {
            Debug.LogError("Failed to start sensor streaming.");
            enabled = false;
            yield break;
        }

        Debug.Log("Sensor streaming started successfully.");

        // Start processing sensor data
        StartCoroutine(ProcessSensorData());
    }

    /// <summary>
    /// Coroutine to process sensor data continuously and retrieve metadata.
    /// </summary>
    private IEnumerator ProcessSensorData()
    {
        while (sensorId.HasValue && pixelSensorFeature.GetSensorStatus(sensorId.Value) == PixelSensorStatus.Started)
        {
            foreach (uint stream in configuredStreams)
            {
                if (stream >= streamRenderers.Length)
                {
                    Debug.LogWarning($"Stream index {stream} is out of bounds for renderers.");
                    continue;
                }

                // Get sensor data and metadata
                if (pixelSensorFeature.GetSensorData(
                        sensorId.Value,
                        stream,
                        out PixelSensorFrame frame,
                        out PixelSensorMetaData[] currentFrameMetaData,
                        Allocator.Temp,
                        shouldFlipTexture: true))
                {
                    // Get sensor pose (optional)
                    Pose sensorPose = pixelSensorFeature.GetSensorPose(sensorId.Value);
                    // You can use sensorPose as needed

                    // Process the frame and update the texture
                    ProcessFrame(frame, streamRenderers[stream], ref streamTextures[stream]);

                    // Process metadata
                    ProcessMetadata(currentFrameMetaData);
                }
                else
                {
                    Debug.LogWarning($"Failed to get sensor data for stream {stream}.");
                }
            }
            yield return null;
        }
    }

    /// <summary>
    /// Processes a sensor frame and updates the associated renderer's texture.
    /// </summary>
    /// <param name="frame">The sensor frame.</param>
    /// <param name="targetRenderer">The renderer to update.</param>
    /// <param name="targetTexture">The texture to update.</param>
    private void ProcessFrame(in PixelSensorFrame frame, Renderer targetRenderer, ref Texture2D targetTexture)
    {
        if (!frame.IsValid || targetRenderer == null || frame.Planes.Length == 0)
        {
            return;
        }

        TextureFormat textureFormat = GetTextureFormat(frame.FrameType);
        if (textureFormat == TextureFormat.R8 && frame.FrameType == PixelSensorFrameType.Yuv420888)
        {
            // Skip processing or implement YUV to RGB conversion
            return;
        }

        if (targetTexture == null)
        {
            ref PixelSensorPlane plane = ref frame.Planes[0];
            targetTexture = new Texture2D((int)plane.Width, (int)plane.Height, textureFormat, false);
            targetRenderer.material.mainTexture = targetTexture;
        }

        targetTexture.LoadRawTextureData(frame.Planes[0].ByteData);
        targetTexture.Apply();
    }

    /// <summary>
    /// Determines the appropriate Unity TextureFormat based on the frame type.
    /// </summary>
    /// <param name="frameType">The frame type.</param>
    /// <returns>The corresponding TextureFormat.</returns>
    private TextureFormat GetTextureFormat(PixelSensorFrameType frameType)
    {
        switch (frameType)
        {
            case PixelSensorFrameType.Grayscale:
                return TextureFormat.R8;
            case PixelSensorFrameType.Rgba8888:
                return TextureFormat.RGBA32;
            case PixelSensorFrameType.Yuv420888:
                Debug.LogWarning("YUV420888 format requires conversion to RGB. Skipping frame processing for this format.");
                return TextureFormat.R8; // Placeholder
            case PixelSensorFrameType.Depth32:
            case PixelSensorFrameType.DepthRaw:
            case PixelSensorFrameType.DepthConfidence:
            case PixelSensorFrameType.DepthFlags:
                return TextureFormat.RFloat;
            default:
                Debug.LogWarning("Unsupported frame type. Defaulting to RFloat.");
                return TextureFormat.RFloat;
        }
    }

    /// <summary>
    /// Processes the metadata retrieved from the sensor.
    /// </summary>
    /// <param name="metadataArray">An array of metadata objects.</param>
    private void ProcessMetadata(PixelSensorMetaData[] metadataArray)
    {
        foreach (var metadata in metadataArray)
        {
            StringBuilder builder = new StringBuilder();
            switch (metadata)
            {
                case PixelSensorExposureTime exposureTime:
                    builder.AppendLine($"Exposure Time: {exposureTime.ExposureTime:F1} ms");
                    break;
                case PixelSensorAnalogGain analogGain:
                    builder.AppendLine($"Analog Gain: {analogGain.AnalogGain}");
                    break;
                case PixelSensorDigitalGain digitalGain:
                    builder.AppendLine($"Digital Gain: {digitalGain.DigitalGain}");
                    break;
                case PixelSensorPinholeIntrinsics pinholeIntrinsics:
                    builder.AppendLine($"Pinhole Camera Intrinsics:");
                    builder.AppendLine($"Focal Length: {pinholeIntrinsics.FocalLength}");
                    builder.AppendLine($"Principal Point: {pinholeIntrinsics.PrincipalPoint}");
                    builder.AppendLine($"Field of View: {pinholeIntrinsics.FOV}");
                    builder.AppendLine($"Distortion Coefficients: {string.Join(", ", pinholeIntrinsics.Distortion)}");
                    break;
                case PixelSensorFisheyeIntrinsics fisheyeIntrinsics:
                    builder.AppendLine($"Fisheye Camera Intrinsics:");
                    builder.AppendLine($"Focal Length: {fisheyeIntrinsics.FocalLength}");
                    builder.AppendLine($"Principal Point: {fisheyeIntrinsics.PrincipalPoint}");
                    builder.AppendLine($"Field of View: {fisheyeIntrinsics.FOV}");
                    builder.AppendLine($"Radial Distortion Coefficients: {string.Join(", ", fisheyeIntrinsics.RadialDistortion)}");
                    builder.AppendLine($"Tangential Distortion Coefficients: {string.Join(", ", fisheyeIntrinsics.TangentialDistortion)}");
                    break;
                case PixelSensorDepthFrameIllumination depthIllumination:
                    builder.AppendLine($"Depth Frame Illumination Type: {depthIllumination.IlluminationType}");
                    break;
                // Handle other metadata types as needed
                default:
                    builder.AppendLine($"Unknown metadata type: {metadata.MetaDataType}");
                    break;
            }
            Debug.Log(builder.ToString());
        }
    }

    /// <summary>
    /// Stops the sensor and cleans up resources when the script is disabled.
    /// </summary>
    private void OnDisable()
    {
        // Unsubscribe from events
        if (pixelSensorFeature != null)
        {
            pixelSensorFeature.OnSensorAvailabilityChanged -= OnSensorAvailabilityChanged;
        }

        // Stop the sensor and destroy it
        StartCoroutine(StopSensor());
    }

    /// <summary>
    /// Coroutine to stop the sensor and clean up resources.
    /// </summary>
    private IEnumerator StopSensor()
    {
        if (sensorId.HasValue)
        {
            PixelSensorAsyncOperationResult stopSensorAsyncResult = pixelSensorFeature.StopSensor(sensorId.Value, configuredStreams);
            yield return stopSensorAsyncResult;

            if (stopSensorAsyncResult.DidOperationSucceed)
            {
                pixelSensorFeature.ClearAllAppliedConfigs(sensorId.Value);

                if (pixelSensorFeature.DestroyPixelSensor(sensorId.Value))
                {
                    Debug.Log("Sensor stopped and destroyed successfully.");
                }
                else
                {
                    Debug.LogWarning("Sensor stopped but failed to destroy the sensor.");
                }
            }
            else
            {
                Debug.LogError("Failed to stop the sensor.");
            }
        }
    }

    /// <summary>
    /// Clamps a uint value between a minimum and maximum value.
    /// </summary>
    private uint ClampUInt(uint value, uint min, uint max)
    {
        if (value < min) return min;
        else if (value > max) return max;
        else return value;
    }
}

Some sensors can operate faster than 30 Hz; I was only referring to the eye camera images, which update more slowly than the eye tracking feature itself.

You can obtain world camera data at 60 Hz by capturing both 30 fps streams (0 and 1) together. Note that the controller will not be able to track if the low-exposure stream is not available.


You can use the Pixel Sensor API, but I find the MLCamera API easier to work with because we have more standalone examples for that feature. If you choose to use the Pixel Sensor API, hopefully the script I posted above helps.

If you use MLCamera, note that you can still use the FramePose call as long as you enable perception snapshots and change the reference space to unbounded. Alternatively, use the Pixel Sensor API's frame pose call.

Note that the MLCameraResultExtras.vcam_timestamp value returned will need to be converted from MLTime to system time:

var result = MLTime.ConvertMLTimeToSystemTime(mltimes[i], out long time);
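
For context, here is a sketch of how that conversion could look inside the video frame callback, together with the frame pose call mentioned above (this assumes perception snapshots are enabled and the reference space is set to unbounded; treat it as a starting point rather than a verified implementation):

// Sketch only: handler for MLCamera's raw video frame callback.
private void OnVideoFrameAvailable(MLCamera.CameraOutput output, MLCamera.ResultExtras extras,
    MLCamera.Metadata metadata)
{
    // Convert the MLTime camera timestamp into a system timestamp (nanoseconds).
    MLResult result = MLTime.ConvertMLTimeToSystemTime(extras.VCamTimestamp, out long systemTimeNs);
    if (!result.IsOk)
    {
        Debug.LogWarning($"Timestamp conversion failed: {result}");
        return;
    }

    // Optionally query the camera-to-world transform for this frame.
    if (MLCVCamera.GetFramePose(extras.VCamTimestamp, out Matrix4x4 cameraToWorld).IsOk)
    {
        Debug.Log($"Frame time (ns): {systemTimeNs}, camera position: {cameraToWorld.GetPosition()}");
    }
}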

So do you know what range of update rates we can configure for the Picture sensor?

I will try with the MLCamera API later.

The Picture Camera supports the following Frame Rates and configurations:

Thanks for your reply. I haven't had the chance to test the Pixel Sensor API with the Picture camera again yet, but when using MLCamera to capture video from the main camera with the mixed reality feed, I cannot get the video stream out. The part of the log that seems relevant is:

11-15 20:55:08.571  1000  3802  4265 E AudioSystem-JNI: Command failed for android_media_AudioSystem_setParameters: -38
11-15 20:55:08.571  1047  3382 10520 W amdCamera3ArbEng: Calling get Version
11-15 20:55:08.571  1047  3382 10520 E android.hardware.camera.provider.mero@2.4-service_64: Failed to get IAshmemDeviceService.
11-15 20:55:08.571  1047  3382 10520 E android.hardware.camera.provider.mero@2.4-service_64: Failed to get IAshmemDeviceService.
11-15 20:55:08.571  4046  4281  4315 E NovaInputService: input_event_manager.cpp:507: App (pid=4876, uid=4023) registered more than one handler for the same event.
11-15 20:55:08.571  4046  4281  4315 E NovaInputService: input_event_manager.cpp:507: App (pid=4876, uid=4023) registered more than one handler for the same event.
11-15 20:55:08.571  4046  4281  4298 E NovaInputService: input_event_manager.cpp:507: App (pid=4876, uid=4023) registered more than one handler for the same event.
11-15 20:55:08.572  4046  4281  4298 I chatty  : uid=4046(ml_input_service) Binder:4281_1 identical 5 lines
11-15 20:55:08.572  4046  4281  4298 E NovaInputService: input_event_manager.cpp:507: App (pid=4876, uid=4023) registered more than one handler for the same event.
11-15 20:55:08.572  1047  3449  4132 D Camera3-Device:  Found Tag For num_intrinsics -2147483621
11-15 20:55:08.572  1047  3449  4132 D Camera3-Device:  Found Tag For intrinsics -2147483620
11-15 20:55:08.572  4046  4281  4298 E NovaInputService: input_event_manager.cpp:507: App (pid=4876, uid=4023) registered more than one handler for the same event.
11-15 20:55:08.572  4046  4281  4298 I chatty  : uid=4046(ml_input_service) Binder:4281_1 identical 4 lines
11-15 20:55:08.572  4046  4281  4298 E NovaInputService: input_event_manager.cpp:507: App (pid=4876, uid=4023) registered more than one handler for the same event.
11-15 20:55:08.572  4023  4876  5439 I ml_input: ml_input.cpp:1420: At application start, found controller device 257, type 1, registered as index 0
11-15 20:55:08.572  4046  4281  4298 E NovaInputService: input_event_manager.cpp:507: App (pid=4876, uid=4023) registered more than one handler for the same event.
11-15 20:55:08.573  1047  3449  4132 D Camera3-Device: CVCameraFactory Create successful 
11-15 20:55:08.573  4023  4876  5439 W Godot42 : OpenXR EVENT: interaction profile changed!
11-15 20:55:08.573  4092  3474  3474 D ThirdEye: STARTUP: Camera Acquired
11-15 20:55:08.573  4023  4876  5439 W Godot42 : OpenXR: Interaction profile for /user/hand/right changed to /interaction_profiles/ml/ml2_controller
11-15 20:55:08.573  4092  3474  3474 D ThirdEye-Compositor: ThirdEyeStreamMode_1080p_1440x1080
11-15 20:55:08.573  4092  3474  3474 D ThirdEye-Compositor: initGraphicsClient ThirdEye Client created Successfully 
11-15 20:55:08.585 10126 21321 21342 W Unity   : Main Camera's nearClipPlane value is less than the minimum value for this device. Increasing to 0.37
11-15 20:55:08.591  4023  4876  5439 W Godot42 : Quad queued for creation; num quads = 1
11-15 20:55:08.604  4092  3474  3474 D ThirdEye-Compositor: Start Recording Session SUCCEEDED
11-15 20:55:08.604  4092  3474  3474 D ThirdEye-Compositor: STARTUP: ThirdEyeCompositor inited Graphics Client
11-15 20:55:08.604  4092  3474  3474 D ThirdEye-Reader: Captured buffer size: 2048 x 1536
11-15 20:55:08.604  4092  3474 21967 D ThirdEye-Reader: Processing with Camera
11-15 20:55:08.604  4092  3474  3474 D ThirdEye: STARTUP: ThirdEye Reader inited
11-15 20:55:08.605  4023  4876  5439 W Godot42 : Queued quad is now active
11-15 20:55:08.605  4092  3474  3474 D ThirdEye-Reader: Camera buffer size: 2048 x 1536
11-15 20:55:08.690 10126 21321 21342 E com.UnityTechnologies.com.unity.template.urpblank: leapcore/frameworks/perception/data_sources/include/pad/xpad_data_source.h(105) GetClosestTimestampedData():
11-15 20:55:08.690 10126 21321 21342 E com.UnityTechnologies.com.unity.template.urpblank: ERR: Data Not Found for timestamp: 2643110029us, now time: 2643164227us
11-15 20:55:08.690 10126 21321 21342 E com.UnityTechnologies.com.unity.template.urpblank: leapcore/frameworks/perception/data_sources/include/pad/xpad_data_source.h(105) GetClosestTimestampedData():
11-15 20:55:08.690 10126 21321 21342 E com.UnityTechnologies.com.unity.template.urpblank: ERR: Data Not Found for timestamp: 2643110029us, now time: 2643164324us
11-15 20:55:08.898 10126 21321 21342 E com.UnityTechnologies.com.unity.template.urpblank: leapcore/frameworks/perception/data_sources/include/pad/xpad_data_source.h(105) GetClosestTimestampedData():
11-15 20:55:08.898 10126 21321 21342 E com.UnityTechnologies.com.unity.template.urpblank: ERR: Data Not Found for timestamp: 2643293897us, now time: 2643371524us
11-15 20:55:08.898 10126 21321 21342 E com.UnityTechnologies.com.unity.template.urpblank: leapcore/frameworks/perception/data_sources/include/pad/xpad_data_source.h(105) GetClosestTimestampedData():
11-15 20:55:08.898 10126 21321 21342 E com.UnityTechnologies.com.unity.template.urpblank: ERR: Data Not Found for timestamp: 2643293897us, now time: 2643371620us
11-15 20:55:08.932 10126 21321 21321 W Thread-3: type=1400 audit(0.0:80443): avc: denied { search } for name="traces" dev="nvme0n1p37" ino=5316611 scontext=u:r:untrusted_app:s0:c126,c256,c512,c768 tcontext=u:object_r:trace_data_file:s0 tclass=dir permissive=0
11-15 20:55:08.948 10126 21321 21321 I chatty  : uid=10126(com.UnityTechnologies.com.unity.template.urpblank) identical 3 lines
11-15 20:55:08.952 10126 21321 21321 W Thread-3: type=1400 audit(0.0:80447): avc: denied { search } for name="traces" dev="nvme0n1p37" ino=5316611 scontext=u:r:untrusted_app:s0:c126,c256,c512,c768 tcontext=u:object_r:trace_data_file:s0 tclass=dir permissive=0
11-15 20:55:09.070  4045  4553  4553 D BluetoothGatt: readRssi() - device: C4:D0:93:3C:22:33
11-15 20:55:09.361  1047  3382  3496 W amdCamera3VirtualCamera: ============= Dump Streams =============
11-15 20:55:09.361  1047  3382  3496 W amdCamera3VirtualCamera: Width       : 2048
11-15 20:55:09.361  1047  3382  3496 W amdCamera3VirtualCamera: Height      : 1536
11-15 20:55:09.361  1047  3382  3496 W amdCamera3VirtualCamera: Format      : 35
11-15 20:55:09.361  1047  3382  3496 W amdCamera3VirtualCamera: State       : 1
11-15 20:55:09.361  1047  3382  3496 W amdCamera3VirtualCamera: stream Id   : 0
11-15 20:55:09.361  1047  3382  3496 W amdCamera3VirtualCamera: stream Type : 0
11-15 20:55:09.361  1047  3382  3496 W amdCamera3VirtualCamera: ==================================
11-15 20:55:09.362  4092  3474  3474 D ThirdEye: STARTUP: Camera inited
11-15 20:55:09.362  4092  3474  3474 D MRCameraServer: Callign UID for MRCameraServer is 4092
11-15 20:55:09.362  4092  3474  3474 D MRCameraServer: MRCamera setBlendType 1
11-15 20:55:09.362  4092  3474  3474 D ThirdEye: BlendType setting set to 1
11-15 20:55:09.363  4092  3474  3474 D MRCameraServer: MRCamera getCaptureConfiguration
11-15 20:55:09.363  4092  3474  3474 D ThirdEye-Reader: Composite buffer size: 1440 x 1080
11-15 20:55:09.363  4092  3474  3474 D MRCameraServer: 3rdeye initialized 1440x1080
11-15 20:55:09.364  1047  3382  3496 W amdCamera3Hwi: Invalid exposure upper limit - 1000000000, correct it
11-15 20:55:09.373 10126 21321 21342 I Unity   : Camera Connected.
11-15 20:55:09.378 10126 21321 21342 I Unity   : Stream: CaptureType : Video, Width : 1440, Height : 1080 selected with best fit.
11-15 20:55:09.378 10126 21321 21342 I Unity   : Preparing camera configuration.
11-15 20:55:09.379 10126 21321 21342 E ml_camera_client: (config->stream_config[0].capture_type == MLCameraCaptureType_Video && config->capture_frame_rate == mCaptureFrameRate) || (config->stream_config[0].capture_type == MLCameraCaptureType_Image && (config->capture_frame_rate == mCaptureFrameRate || config->capture_frame_rate == MLCameraCaptureFrameRate_None)) is false
11-15 20:55:09.380 10126 21321 21342 E Unity   : Error: MLCameraPrepareCapture in the Magic Leap API failed. Reason: MLResult_InvalidParam 
11-15 20:55:09.380 10126 21321 21342 E Unity   : Error: PrepareCapture in the Magic Leap API failed. Reason: MLResult_InvalidParam 
11-15 20:55:09.380 10126 21321 21342 E Unity   : Could not prepare capture. Result: InvalidParam .  Disconnecting Camera.
11-15 20:55:09.382  4092  3474 16283 D MRCameraServer: MRCamera Disconnect
11-15 20:55:09.382  4092  3474 16283 D MRCameraServer: Stopping the camera
11-15 20:55:09.382  4092  3474 16283 D ThirdEye: stopping ThirdEye capture
11-15 20:55:09.382  4092  3474 16283 D ThirdEye-Reader: Stop Capture Begin
11-15 20:55:09.382  4092  3474 16283 D ThirdEye-Compositor: setCount count 0 remaining 0
11-15 20:55:09.382  4092  3474 16283 D ThirdEye-Reader: stop capturing at 2643856036
11-15 20:55:09.382  4092  3474 16283 D ThirdEye-Compositor: ThirdEyeCompositor::resetProducer
11-15 20:55:09.382  4092  3474 16283 D ThirdEye-Compositor: resetBlendResources
11-15 20:55:09.382  4092  3474 16283 D ThirdEye: uninitializing ThirdEye
11-15 20:55:09.382  4092  3474 16283 D ThirdEye: ThirdEye::uninitCamera
11-15 20:55:09.443  4087  4938  5489 I ControlService: nova/frameworks/services/controlservice/service/src/control_adapter.cpp(401) printStatistics():
11-15 20:55:09.443  4087  4938  5489 I ControlService: ALW: TT-controlService,ControlAdapter,reason=stats:Constellation msgs_ps=8.98024, fail_ps=0, latency_ms=0, requests=90, failure=0
11-15 20:55:09.443  4087  4938  5489 I ControlService: nova/frameworks/services/controlservice/service/src/control_adapter.cpp(401) printStatistics():
11-15 20:55:09.443  4087  4938  5489 I ControlService: ALW: TT-controlService,ControlAdapter,reason=stats:LedSync msgs_ps=0.498902, fail_ps=0, latency_ms=14.4, requests=5, failure=0
11-15 20:55:09.443  4087  4938  5489 I ControlService: nova/frameworks/services/controlservice/service/src/control_adapter.cpp(401) printStatistics():
11-15 20:55:09.443  4087  4938  5489 I ControlService: ALW: TT-controlService,ControlAdapter,reason=stats:Calibration msgs_ps=0.0997805, fail_ps=0, latency_ms=0, requests=1, failure=0
11-15 20:55:09.443  4087  4938  5489 I ControlService: nova/frameworks/services/controlservice/service/src/control_adapter.cpp(401) printStatistics():
11-15 20:55:09.443  4087  4938  5489 I ControlService: ALW: TT-controlService,ControlAdapter,reason=stats:DeviceInfo msgs_ps=0.0997805, fail_ps=0, latency_ms=0, requests=1, failure=0
11-15 20:55:09.488  4092  3474  3474 D ThirdEye-Reader: STARTUP: First Frame Available
11-15 20:55:09.489  4092  3474  3474 D ThirdEye-Reader: STARTUP: First Frame CaptureCompleted callback
11-15 20:55:09.489  4092  3474  3474 D ThirdEye-Reader: STARTUP: First Frame Complete: Ready for proessing
11-15 20:55:09.489  4092  3474 21967 E ThirdEye-Reader: compositeThirdEye Dropping frame 1 with timestamp 2643931966 with startTime 0
11-15 20:55:09.522  4092  3474 21967 E ThirdEye-Reader: compositeThirdEye Dropping frame 2 with timestamp 2643965310 with startTime 0
11-15 20:55:09.588  4092  3474 21967 E ThirdEye-Reader: compositeThirdEye Dropping frame 3 with timestamp 2644032929 with startTime 0
11-15 20:55:09.589  1047  3449  4132 I CameraLatencyHistogram: Stream 0 dequeueBuffer latency histogram (3) samples:
11-15 20:55:09.589  1047  3449  4132 I CameraLatencyHistogram:         5     10     15     20     25     30     35     40     45    inf (max ms)
11-15 20:55:09.589  1047  3449  4132 I CameraLatencyHistogram:      100.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 (%)
11-15 20:55:09.590  1047  3382  3496 E amdCamera3Hwi: Invalid format or resolution: fmt-34, res = 320 x 240
11-15 20:55:09.590  1047  3382  3496 E amdCamera3Hal: Failed to configure streams for device (0x77001fc0e9c8) with (-22)
11-15 20:55:09.590  1047  3449  4132 E Camera3-Device: Camera 0: configureStreamsLocked: Set of requested inputs/outputs not supported by HAL
11-15 20:55:09.590  1047  3449  4132 E CameraDeviceClient: endConfigure: Camera 0: Unsupported set of inputs/outputs provided
11-15 20:55:09.590  4092  3474 16283 E Camera-Device: endConfigure fail Status(-8, EX_SERVICE_SPECIFIC): '3: endConfigure:739: Camera 0: Unsupported set of inputs/outputs provided'
11-15 20:55:09.598  1047  3382  3496 W amdCamera3ReprocessQueue: Queue is empty!
11-15 20:55:09.598  1047  3382  3496 I chatty  : uid=1047(cameraserver) HwBinder:3382_1 identical 1 line
11-15 20:55:09.599  1047  3382  3496 W amdCamera3ReprocessQueue: Queue is empty!
11-15 20:55:09.599  1047  3382  3482 E amdCamera3Container: Queue is empty
11-15 20:55:09.599  1047  3382  3481 E amdCamera3Compress: Queue is empty
11-15 20:55:09.599  1047  3382  3480 E amdCamera3Scaler: Queue is empty
11-15 20:55:09.599  1047  3382  3496 W amdCamera3VirtualCamera: Can not find zsl stream Id for camera (0)
11-15 20:55:09.599  1047  3382  3496 W amdCamera3Queue: Queue is empty!
11-15 20:55:09.599  1047  3382  3496 W amdCamera3VirtualCamera: logical cam request queue is empty
11-15 20:55:09.621  1047  3382  3491 E amdCamera3ArbEng: No Active Streams Found
11-15 20:55:09.621  1047  3449  4132 I Camera3-Device: disconnectImpl: E
11-15 20:55:09.621  1047  3449  4132 I CameraLatencyHistogram: ProcessCaptureRequest latency histogram (3) samples:
11-15 20:55:09.621  1047  3449  4132 I CameraLatencyHistogram:        40     80    120    160    200    240    280    320    360    inf (max ms)
11-15 20:55:09.621  1047  3449  4132 I CameraLatencyHistogram:      100.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 (%)
11-15 20:55:09.621  1047  3382  3496 W amdCamera3ReprocessQueue: Queue is empty!
11-15 20:55:09.621  1047  3382  3496 I chatty  : uid=1047(cameraserver) HwBinder:3382_1 identical 1 line
11-15 20:55:09.621  1047  3382  3496 W amdCamera3ReprocessQueue: Queue is empty!
11-15 20:55:09.621  1047  3382  3482 E amdCamera3Container: Queue is empty
11-15 20:55:09.621  1047  3382  3481 E amdCamera3Compress: Queue is empty
11-15 20:55:09.621  1047  3382  3480 E amdCamera3Scaler: Queue is empty
11-15 20:55:09.621  1047  3382  3496 W amdCamera3VirtualCamera: Can not find zsl stream Id for camera (0)
11-15 20:55:09.621  1047  3382  3496 W amdCamera3Queue: Queue is empty!
11-15 20:55:09.621  1047  3382  3496 W amdCamera3VirtualCamera: logical cam request queue is empty
11-15 20:55:09.621  1047  3382  3496 W amdCamera3VirtualCamera: Can not find zsl stream Id for camera (0)
11-15 20:55:09.621  1047  3382  3496 W amdCamera3StreamOperations: Looks all Cameras closed
11-15 20:55:09.621  1047  3382  3491 E amdCamera3ArbEng: Cameras not initialized
11-15 20:55:09.622  1047  3449  4132 I Camera3-Device: disconnectImpl: X
11-15 20:55:09.623  1047  3449  4132 I CameraService: disconnect: Disconnected client for camera 0 for PID 3474
11-15 20:55:09.623  4092  3474 16283 D CameraManager: Remove static instance
11-15 20:55:09.623  1047  3449  4132 I Camera3-Device: disconnectImpl: E
11-15 20:55:09.623  1047  3449  4132 I Camera2ClientBase: Closed Camera 0. Client was: com.UnityTechnologies.com.unity.template.urpblank (PID 3474, UID 10126)

And here's the full bug report in case it helps. The error doesn't happen when CamOnly mode is chosen, and I used the exact async code provided here. I requested an input size of 1280 x 720, but apparently the code found a different "best fit"? Either way, both resolutions are in the table in the link you provided, so I'm not sure what's wrong. It would be very helpful if you could take a look at this.
MLHubLogs-20241115-205817-win.zip (736.1 KB)

Thanks in advance!

You may want to take a look at the Camera Capture Example in the 1.12.0 Magic Leap Unity Examples project. That example shows how to save a recording from the camera.

Unfortunately, the example that you linked to does not support Mixed Reality Capture, but I have shared your feedback with our Voice of Customer team so we can include one in the future.
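
If you end up saving individual frames yourself (for example from the raw video frame callback) rather than recording a video, a minimal sketch along these lines may help. Note that this is only illustrative: the FrameSaver class, the SaveFrameAsPng helper, and the file naming are my own assumptions rather than SDK APIs; the only engine calls used are Unity's EncodeToPNG and standard file I/O.

using System.IO;
using UnityEngine;

public static class FrameSaver
{
    // Illustrative helper: encode an RGBA32 Texture2D to PNG and write it to the
    // app's persistent data path with the supplied timestamp in the file name.
    public static void SaveFrameAsPng(Texture2D frame, long timestampNs)
    {
        byte[] png = frame.EncodeToPNG();
        string path = Path.Combine(Application.persistentDataPath, $"frame_{timestampNs}.png");
        File.WriteAllBytes(path, png);
        Debug.Log($"Saved frame to {path}");
    }
}

Encoding a PNG every frame at 30 FPS will almost certainly be too slow on device, so for continuous capture you would want to record a video, or buffer the raw frames and encode them after the capture session ends.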

Are you saying that none of the examples provided on the MLCamera page would work for MR capture? This would be quite an important feature for our work, and we would probably need a workaround. Given that Magic Leap Hub streaming does capture everything at the same time, is there a way to save things from there, together with some timestamp? Is that stream using the RGB camera?

I also happened to find this post on using screen capture to do this, but is screen capture fast enough that we can still obtain the camera parameters? It reminds me of the black-background issue with Magic Leap Hub streaming from a few months ago; I'm not sure whether there will be a workaround here.

I would really appreciate it if you could advise on a way to capture mixed reality images together with timestamps and camera transforms.

The examples on the Camera page are configured for camera-only capture by default, but they can be reconfigured to support Mixed Reality Capture. The example below shows how to obtain an MR camera frame.

Here is a very simple example of a Mixed Reality Capture script:

using System;
using System.Linq;
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.XR.MagicLeap;

public class MagicLeapRGBCamera : MonoBehaviour
{
    public bool IsCameraConnected => _captureCamera != null && _captureCamera.ConnectionEstablished;

    [SerializeField, Tooltip("If true, the camera capture will start immediately.")]
    private bool _startCameraCaptureOnStart = true;

    [SerializeField, Tooltip("The renderer to show the camera capture on RGB format.")]
    private Renderer _screenRendererRGB = null;
    
    [SerializeField, Tooltip("The resolution width for the camera capture.")]
    private int _targetImageWidth = 1920;

    [SerializeField, Tooltip("The resolution height for the camera capture.")]
    private int _targetImageHeight = 1080;
    
    [SerializeField, Tooltip("Enable Mixed Reality capture. Only available for Main Camera.")]
    private MLCameraBase.ConnectFlag _connectFlag = MLCameraBase.ConnectFlag.MR;
    
    [SerializeField]
    private MLCameraBase.Identifier _cameraIdentifier = MLCameraBase.Identifier.Main;

    [SerializeField, Tooltip("Flip the camera frame vertically.")]
    private bool flipFrame = false;

    private Texture2D _videoTextureRgb;
    
    private MLCameraBase.CaptureFrameRate _targetFrameRate = MLCameraBase.CaptureFrameRate._30FPS;
    private MLCameraBase.OutputFormat _outputFormat = MLCameraBase.OutputFormat.RGBA_8888;

    private MLCamera _captureCamera;
    private bool _isCapturingVideo = false;
    private bool? _cameraPermissionGranted;
    private bool _isCameraInitializationInProgress;

    private readonly MLPermissions.Callbacks _permissionCallbacks = new();

    private void Awake()
    {
        _permissionCallbacks.OnPermissionGranted += OnPermissionGranted;
        _permissionCallbacks.OnPermissionDenied += OnPermissionDenied;
        _permissionCallbacks.OnPermissionDeniedAndDontAskAgain += OnPermissionDenied;
        _isCapturingVideo = false;
    }

    private void OnValidate()
    {
        if ((_connectFlag == MLCameraBase.ConnectFlag.VirtualOnly || _connectFlag == MLCameraBase.ConnectFlag.MR)
            && _cameraIdentifier == MLCameraBase.Identifier.CV)
        {
            Debug.LogError($"Mixed Reality Capture {_connectFlag} is only supported when targeting the Main Camera! Disabling MR Capture.");
            _connectFlag = MLCameraBase.ConnectFlag.CamOnly;
        }
    }

    private void Start()
    {
        if (_startCameraCaptureOnStart)
        {
            StartCameraCapture(_targetImageWidth, _targetImageHeight);
        }
    }

    private void OnDisable()
    {
        _ = DisconnectCameraAsync();
    }

    private void OnPermissionGranted(string permission)
    {
        if (permission == MLPermission.Camera)
        {
            _cameraPermissionGranted = true;
            Debug.Log($"Permission granted for {permission}.");
        }
    }

    private void OnPermissionDenied(string permission)
    {
        if (permission == MLPermission.Camera)
        {
            _cameraPermissionGranted = false;
            Debug.LogError($"{permission} denied. Camera capture will not function.");
        }
    }

    public void StartCameraCapture(int width, int height, Action<bool> onCameraCaptureStarted = null)
    {
        if (_isCameraInitializationInProgress)
        {
            Debug.LogError("Camera initialization is already in progress.");
            onCameraCaptureStarted?.Invoke(false);
            return;
        }

        _targetImageWidth = width;
        _targetImageHeight = height;

        TryEnableMLCamera(onCameraCaptureStarted);
    }

    private async void TryEnableMLCamera(Action<bool> onCameraCaptureStarted = null)
    {
        if (_isCameraInitializationInProgress) return;

        _isCameraInitializationInProgress = true;
        _cameraPermissionGranted = null;

        Debug.Log("Requesting camera permission...");
        MLPermissions.RequestPermission(MLPermission.Camera, _permissionCallbacks);

        while (!_cameraPermissionGranted.HasValue)
        {
            await Task.Delay(TimeSpan.FromSeconds(1.0f));
        }

        if (MLPermissions.CheckPermission(MLPermission.Camera).IsOk || _cameraPermissionGranted == true)
        {
            Debug.Log("Initializing camera...");
            bool isCameraAvailable = await WaitForCameraAvailabilityAsync();
            if (isCameraAvailable)
            {
                await ConnectAndConfigureCameraAsync();
            }
        }

        _isCameraInitializationInProgress = false;
        onCameraCaptureStarted?.Invoke(_isCapturingVideo);
    }

    private async Task<bool> WaitForCameraAvailabilityAsync()
    {
        bool cameraDeviceAvailable = false;
        int maxAttempts = 10;
        int attempts = 0;

        while (!cameraDeviceAvailable && attempts < maxAttempts)
        {
            var result = MLCameraBase.GetDeviceAvailabilityStatus(_cameraIdentifier, out cameraDeviceAvailable);
            // Wait before retrying while the device is not yet available or the query failed.
            if (!result.IsOk || !cameraDeviceAvailable)
            {
                await Task.Delay(TimeSpan.FromSeconds(1.0f));
            }
            attempts++;
        }

        return cameraDeviceAvailable;
    }

    private async Task<bool> ConnectAndConfigureCameraAsync()
    {
        Debug.Log("Connecting and configuring the camera...");

        if (_connectFlag != MLCameraBase.ConnectFlag.CamOnly && _cameraIdentifier != MLCameraBase.Identifier.Main)
        {
            Debug.LogError("Mixed Reality capture is only supported for the Main Camera.");
            return false;
        }

        var context = CreateCameraContext();

        _captureCamera = await MLCamera.CreateAndConnectAsync(context);
        if (_captureCamera == null)
        {
            Debug.LogError("Could not create or connect to a valid camera.");
            return false;
        }

        Debug.Log("Camera connected successfully.");

        bool hasImageStreamCapabilities = GetStreamCapabilityWBestFit(out var streamCapability);
        if (!hasImageStreamCapabilities)
        {
            Debug.LogError("No valid image streams available. Disconnecting camera.");
            await DisconnectCameraAsync();
            return false;
        }

        Debug.Log("Preparing camera configuration...");

        var captureConfig = CreateCaptureConfig(streamCapability);
        var prepareResult = _captureCamera.PrepareCapture(captureConfig, out _);
        if (!MLResult.DidNativeCallSucceed(prepareResult.Result, nameof(_captureCamera.PrepareCapture)))
        {
            Debug.LogError("Failed to prepare capture.");
            await DisconnectCameraAsync();
            return false;
        }

        Debug.Log("Starting video capture...");
        bool captureStarted = await StartVideoCaptureAsync();
        if (!captureStarted)
        {
            Debug.LogError("Failed to start video capture.");
            await DisconnectCameraAsync();
            return false;
        }

        return _isCapturingVideo;
    }

    private MLCameraBase.ConnectContext CreateCameraContext()
    {
        var context = MLCameraBase.ConnectContext.Create();
        context.CamId = _cameraIdentifier;
        context.Flags = _connectFlag;

        if (_connectFlag != MLCameraBase.ConnectFlag.CamOnly)
        {
            // Using Defaults
            context.MixedRealityConnectInfo = MLCameraBase.MRConnectInfo.Create();
        }

        return context;
    }
    

    private MLCameraBase.CaptureConfig CreateCaptureConfig(MLCameraBase.StreamCapability streamCapability)
    {
        return new MLCameraBase.CaptureConfig
        {
            CaptureFrameRate = _targetFrameRate,
            StreamConfigs = new[]
            {
                MLCameraBase.CaptureStreamConfig.Create(streamCapability, _outputFormat)
            }
        };
    }

    private async Task<bool> StartVideoCaptureAsync()
    {
        await _captureCamera.PreCaptureAEAWBAsync();
        var result = await _captureCamera.CaptureVideoStartAsync();
        _isCapturingVideo = MLResult.DidNativeCallSucceed(result.Result, nameof(_captureCamera.CaptureVideoStart));

        if (_isCapturingVideo)
        {
            _captureCamera.OnRawVideoFrameAvailable += RawVideoFrameAvailable;
        }

        return _isCapturingVideo;
    }

    private async Task DisconnectCameraAsync()
    {
        if (_captureCamera != null)
        {
            if (_isCapturingVideo)
            {
                await _captureCamera.CaptureVideoStopAsync();
                _captureCamera.OnRawVideoFrameAvailable -= RawVideoFrameAvailable;
            }

            await _captureCamera.DisconnectAsync();
            _captureCamera = null;
        }

        _isCapturingVideo = false;
    }

    private bool GetStreamCapabilityWBestFit(out MLCameraBase.StreamCapability streamCapability)
    {
        streamCapability = default;
        if (_captureCamera == null) return false;

        var streamCapabilities = MLCameraBase.GetImageStreamCapabilitiesForCamera(_captureCamera, MLCameraBase.CaptureType.Video);
        if (streamCapabilities.Length == 0) return false;

        return MLCameraBase.TryGetBestFitStreamCapabilityFromCollection(
            streamCapabilities, _targetImageWidth, _targetImageHeight, MLCameraBase.CaptureType.Video, out streamCapability
        );
    }

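    // Per-frame callback registered in StartVideoCaptureAsync; copies the RGBA frame into the preview texture.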
    private void RawVideoFrameAvailable(MLCameraBase.CameraOutput output, MLCameraBase.ResultExtras extras, MLCameraBase.Metadata metadataHandle)
    {
        if (output.Format == MLCameraBase.OutputFormat.RGBA_8888)
        {
            if (flipFrame)
            {
                MLCameraBase.FlipFrameVertically(ref output);
            }
            UpdateRGBTexture(ref _videoTextureRgb, output.Planes[0], _screenRendererRGB);
        }
    }

    private void UpdateRGBTexture(ref Texture2D videoTextureRGB, MLCameraBase.PlaneInfo imagePlane, Renderer renderer)
    {
        if (videoTextureRGB != null &&
            (videoTextureRGB.width != imagePlane.Width || videoTextureRGB.height != imagePlane.Height))
        {
            Destroy(videoTextureRGB);
            videoTextureRGB = null;
        }

        if (videoTextureRGB == null)
        {
            videoTextureRGB = new Texture2D((int)imagePlane.Width, (int)imagePlane.Height, TextureFormat.RGBA32, false)
            {
                filterMode = FilterMode.Bilinear
            };

            Material material = renderer.material;
            material.mainTexture = videoTextureRGB;
        }

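        // Some stream configurations pad each row, so copy row by row when the reported stride differs from width * pixel stride.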
        int actualWidth = (int)(imagePlane.Width * imagePlane.PixelStride);

        if (imagePlane.Stride != actualWidth)
        {
            byte[] correctedData = new byte[actualWidth * imagePlane.Height];
            for (int i = 0; i < imagePlane.Height; i++)
            {
                Buffer.BlockCopy(imagePlane.Data, i * (int)imagePlane.Stride, correctedData, i * actualWidth, actualWidth);
            }
            videoTextureRGB.LoadRawTextureData(correctedData);
        }
        else
        {
            videoTextureRGB.LoadRawTextureData(imagePlane.Data);
        }

        videoTextureRGB.Apply();
    }
}
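
If you also need the frame timestamp and the camera transform for each frame, here is a hedged sketch of how that could be pulled out of the callback. I am assuming that ResultExtras exposes VCamTimestamp and that MLCVCamera.GetFramePose accepts it, as in the camera pose example on the developer portal; please verify both against your SDK version, since the pose query may only be supported for certain camera configurations.

    // Sketch only: log the frame timestamp and, if available, the camera pose for that frame.
    // Call this from RawVideoFrameAvailable, passing the extras parameter through.
    private void LogFrameTimestampAndPose(MLCameraBase.ResultExtras extras)
    {
        MLTime frameTimestamp = extras.VCamTimestamp; // assumption: capture time of this frame
        if (MLCVCamera.GetFramePose(frameTimestamp, out Matrix4x4 cameraTransform).IsOk)
        {
            Debug.Log($"Frame timestamp: {frameTimestamp}, camera transform:\n{cameraTransform}");
        }
    }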

Thanks for this implementation! I don't have access to the ML2 right now, but it seems the only difference is this part:

    private MLCameraBase.ConnectContext CreateCameraContext()
    {
        var context = MLCameraBase.ConnectContext.Create();
        context.CamId = _cameraIdentifier;
        context.Flags = _connectFlag;

        if (_connectFlag != MLCameraBase.ConnectFlag.CamOnly)
        {
            // Using Defaults
            context.MixedRealityConnectInfo = MLCameraBase.MRConnectInfo.Create();
        }

        return context;
    }

Is the only change here that you added the MixedRealityConnectInfo? I will test this soon and get back to you on whether it works.

Correct, along with changing the camera Id from CV to Main, since MR capture is only supported on the Main camera (see the short sketch at the end of this post). Do you mind if I close this topic and have you create a new one once you try the MLCamera API? Based on the title, I think this thread has drifted too far off topic to be useful for others searching for answers.
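
For reference, the sketch mentioned above boils down to these lines inside CreateCameraContext (illustrative only):

        // 1. Target the Main camera; MR capture is not supported on the CV camera.
        context.CamId = MLCameraBase.Identifier.Main;

        // 2. Request MR capture and attach the (default) mixed reality connect info.
        context.Flags = MLCameraBase.ConnectFlag.MR;
        context.MixedRealityConnectInfo = MLCameraBase.MRConnectInfo.Create();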

Sure, please go ahead. Apologies for taking the thread this far off topic.

Haha thank you for pointing out that we don't have a mixed reality capture example on the developer portal. :smiley: