GetFramePose is Not OK

Give us as much detail as possible regarding the issue you're experiencing:

Unity Editor version: 2022.3.19f1
ML2 OS version: 1.5
Unity SDK version: 1.5
Host OS: Windows

This line of code returns "HeadTracking is not available", so the result is not OK:

private void OnCaptureRawVideoFrameAvailable(MLCameraBase.CameraOutput cameraOutput, MLCameraBase.ResultExtras resultExtras, MLCameraBase.Metadata metadata)
{
	MLResult result = MLCVCamera.GetFramePose(resultExtras.VCamTimestamp, out Matrix4x4 outMatrix);
	if (result.IsOk)
	{
		string cameraExtrinsics = "Camera Extrinsics ";
		cameraExtrinsics += "Position " + outMatrix.GetPosition();
		cameraExtrinsics += " Rotation " + outMatrix.rotation;
		Debug.Log(cameraExtrinsics);
	}
	else
	{
		// This logs "HeadTracking is not available"
		Debug.Log(result.ToString());
	}
}

Does this call fail consistently, or only occasionally? Would you mind providing more information about how your project is configured and how you are calling this function?

Note: the Magic Leap 2 keeps roughly 500 ms (less, depending on the application's performance) of historical head-pose data that can be looked up by a frame's timestamp. If a frame's timestamp is older than that window, the frame pose query will fail.
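As a rough way to see whether frames are aging past that window, you can compare each frame's timestamp against the newest one you have received. This is only a sketch: it assumes `MLTime.ConvertMLTimeToSystemTime` is available in your SDK version to convert the `MLTime` value into nanoseconds, and `LogFrameAge` is a hypothetical helper you would call from your frame callback.

```csharp
// Sketch only: estimates how far a frame's timestamp lags behind the most
// recent frame received. Assumes MLTime.ConvertMLTimeToSystemTime exists in
// your Unity SDK version and yields the timestamp in nanoseconds.
private long _latestFrameNs;

private void LogFrameAge(MLCameraBase.ResultExtras resultExtras)
{
    MLResult convertResult =
        MLTime.ConvertMLTimeToSystemTime(resultExtras.VCamTimestamp, out long frameNs);
    if (!convertResult.IsOk)
        return;

    if (_latestFrameNs != 0 && frameNs < _latestFrameNs)
    {
        double ageMs = (_latestFrameNs - frameNs) / 1_000_000.0;
        if (ageMs > 500.0)
        {
            // A frame this far behind the newest one is likely outside the
            // head-pose history window, so GetFramePose would fail for it.
            Debug.LogWarning($"Frame timestamp lags by {ageMs:F0} ms.");
        }
    }

    _latestFrameNs = Math.Max(_latestFrameNs, frameNs);
}
```

Note that this only measures lag relative to the newest frame you have received; if you call `GetFramePose` immediately inside `OnRawVideoFrameAvailable`, the frame should normally be well within the window.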

I use the async example from here:
https://developer-docs.magicleap.cloud/docs/guides/unity/camera/ml-camera-example/
My project is configured with OpenXR.
I receive the camera frames correctly, but not the pose.

How can I verify whether the timestamp is older than 500 ms?

Do you mind providing the logs for your application? Are you building on top of the Magic Leap Examples project? Can you verify that the Magic Leap 2 Support feature is enabled, with Perception Snapshots enabled in its settings?

Can you verify that the unity camera follows the movement of your headset?

Yes, Perception Snapshots is enabled!

Yes, I am building on top of the Magic Leap Examples project.
In the Hello Cube scene, I just added a GameObject with this script and a Raw Image to display the video.

using System;
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR.MagicLeap;

/// <summary>
/// A script that enables and disables the RGB camera using the async methods.
/// </summary>
public class MagicLeapRGBCamera : MonoBehaviour
{
/// <summary>
/// Can be used by external scripts to query the status of the camera and see if the camera capture has been started.
/// </summary>

public bool IsCameraConnected => _captureCamera != null && _captureCamera.ConnectionEstablished;

[SerializeField]
[Tooltip("If true, the camera capture will start immediately.")]
private bool _startCameraCaptureOnStart = true;

#region Capture Config

private int _targetImageWidth = 1920;
private int _targetImageHeight = 1080;
private MLCameraBase.Identifier _cameraIdentifier = MLCameraBase.Identifier.CV;
private MLCameraBase.CaptureFrameRate _targetFrameRate = MLCameraBase.CaptureFrameRate._30FPS;
private MLCameraBase.OutputFormat _outputFormat = MLCameraBase.OutputFormat.RGBA_8888;

#endregion

#region Magic Leap Camera Info
//The connected Camera
private MLCamera _captureCamera;
// True if CaptureVideoStartAsync was called successfully
private bool _isCapturingVideo = false;
#endregion

private bool? _cameraPermissionGranted;
private bool _isCameraInitializationInProgress;

private readonly MLPermissions.Callbacks _permissionCallbacks = new MLPermissions.Callbacks();

private Texture2D _videoTextureRgb;

public RawImage _RawImage = null;

private void Awake()
{
    _permissionCallbacks.OnPermissionGranted += OnPermissionGranted;
    _permissionCallbacks.OnPermissionDenied += OnPermissionDenied;
    _permissionCallbacks.OnPermissionDeniedAndDontAskAgain += OnPermissionDenied;
    _isCapturingVideo = false;
}

void Start()
{
    if (_startCameraCaptureOnStart)
    {
        StartCameraCapture();
    }
}

/// <summary>
/// Starts the Camera capture with the target settings.
/// </summary>
/// <param name="cameraIdentifier">Which camera to use. (Main or CV)</param>
/// <param name="width">The width of the video stream.</param>
/// <param name="height">The height of the video stream.</param>
/// <param name="onCameraCaptureStarted">An action callback that returns true if the video capture started successfully.</param>
public void StartCameraCapture(MLCameraBase.Identifier cameraIdentifier = MLCameraBase.Identifier.CV, int width = 1920, int height = 1080, Action<bool> onCameraCaptureStarted = null)
{
    if (_isCameraInitializationInProgress)
    {
        Debug.LogError("Camera Initialization is already in progress.");
        onCameraCaptureStarted?.Invoke(false);
        return;
    }

    this._cameraIdentifier = cameraIdentifier;
    _targetImageWidth = width;
    _targetImageHeight = height;
    TryEnableMLCamera(onCameraCaptureStarted);
}

private void OnDisable()
{
    _ = DisconnectCameraAsync();
}

private void OnPermissionGranted(string permission)
{
    if (permission == MLPermission.Camera)
    {
        _cameraPermissionGranted = true;
        Debug.Log($"Granted {permission}.");
    }
}

private void OnPermissionDenied(string permission)
{
    if (permission == MLPermission.Camera)
    {
        _cameraPermissionGranted = false;
        Debug.LogError($"{permission} denied, camera capture won't function.");
    }
}

private async void TryEnableMLCamera(Action<bool> onCameraCaptureStarted = null)
{
    // If the camera initialization is already in progress, return immediately
    if (_isCameraInitializationInProgress)
    {
        onCameraCaptureStarted?.Invoke(false);
        return;
    }

    _isCameraInitializationInProgress = true;

    _cameraPermissionGranted = null;
    Debug.Log("Requesting Camera permission.");
    MLPermissions.RequestPermission(MLPermission.Camera, _permissionCallbacks);

    while (!_cameraPermissionGranted.HasValue)
    {
        // Wait until we have permission to use the camera
        await Task.Delay(TimeSpan.FromSeconds(1.0f));
    }

    if (MLPermissions.CheckPermission(MLPermission.Camera).IsOk || _cameraPermissionGranted.GetValueOrDefault(false))
    {
        Debug.Log("Initializing camera.");
        bool isCameraAvailable = await WaitForCameraAvailabilityAsync();

        if (isCameraAvailable)
        {
            await ConnectAndConfigureCameraAsync();
        }
    }

    _isCameraInitializationInProgress = false;
    onCameraCaptureStarted?.Invoke(_isCapturingVideo);
}

/// <summary>
/// Connects the MLCamera component and instantiates a new instance
/// if it was never created.
/// </summary>
private async Task<bool> WaitForCameraAvailabilityAsync()
{
    bool cameraDeviceAvailable = false;
    int maxAttempts = 10;
    int attempts = 0;

    while (!cameraDeviceAvailable && attempts < maxAttempts)
    {
        MLResult result =
            MLCameraBase.GetDeviceAvailabilityStatus(_cameraIdentifier, out cameraDeviceAvailable);

        if (!result.IsOk && !cameraDeviceAvailable)
        {
            // Wait until the camera device is available
            await Task.Delay(TimeSpan.FromSeconds(1.0f));
        }
        attempts++;
    }

    return cameraDeviceAvailable;
}

private async Task<bool> ConnectAndConfigureCameraAsync()
{
    Debug.Log("Starting Camera Capture.");

    MLCameraBase.ConnectContext context = CreateCameraContext();

    _captureCamera = await MLCamera.CreateAndConnectAsync(context);
    if (_captureCamera == null)
    {
        Debug.LogError("Could not create or connect to a valid camera. Stopping Capture.");
        return false;
    }

    Debug.Log("Camera Connected.");

    bool hasImageStreamCapabilities = GetStreamCapabilityWBestFit(out MLCameraBase.StreamCapability streamCapability);
    if (!hasImageStreamCapabilities)
    {
        Debug.LogError("Could not start capture. No valid Image Streams available. Disconnecting Camera.");
        await DisconnectCameraAsync();
        return false;
    }

    Debug.Log("Preparing camera configuration.");

    // Try to configure the camera based on our target configuration values
    MLCameraBase.CaptureConfig captureConfig = CreateCaptureConfig(streamCapability);
    var prepareResult = _captureCamera.PrepareCapture(captureConfig, out MLCameraBase.Metadata _);
    if (!MLResult.DidNativeCallSucceed(prepareResult.Result, nameof(_captureCamera.PrepareCapture)))
    {
        Debug.LogError($"Could not prepare capture. Result: {prepareResult.Result} .  Disconnecting Camera.");
        await DisconnectCameraAsync();
        return false;
    }

    Debug.Log("Starting Video Capture.");

    bool captureStarted = await StartVideoCaptureAsync();
    if (!captureStarted)
    {
        Debug.LogError("Could not start capture. Disconnecting Camera.");
        await DisconnectCameraAsync();
        return false;
    }

    return _isCapturingVideo;
}

private MLCameraBase.ConnectContext CreateCameraContext()
{
    var context = MLCameraBase.ConnectContext.Create();
    context.CamId = _cameraIdentifier;
    context.Flags = MLCameraBase.ConnectFlag.CamOnly;
    return context;
}

private MLCameraBase.CaptureConfig CreateCaptureConfig(MLCameraBase.StreamCapability streamCapability)
{
    var captureConfig = new MLCameraBase.CaptureConfig();
    captureConfig.CaptureFrameRate = _targetFrameRate;
    captureConfig.StreamConfigs = new MLCameraBase.CaptureStreamConfig[1];
    captureConfig.StreamConfigs[0] = MLCameraBase.CaptureStreamConfig.Create(streamCapability, _outputFormat);
    return captureConfig;
}

private async Task<bool> StartVideoCaptureAsync()
{
    // Trigger auto exposure and white balance
    await _captureCamera.PreCaptureAEAWBAsync();

    var startCapture = await _captureCamera.CaptureVideoStartAsync();
    _isCapturingVideo = MLResult.DidNativeCallSucceed(startCapture.Result, nameof(_captureCamera.CaptureVideoStart));

    if (!_isCapturingVideo)
    {
        Debug.LogError($"Could not start camera capture. Result : {startCapture.Result}");
        return false;
    }

    _captureCamera.OnRawVideoFrameAvailable += OnCaptureRawVideoFrameAvailable;
   
    return true;
}

private async Task DisconnectCameraAsync()
{
    if (_captureCamera != null)
    {
        if (_isCapturingVideo)
        {
            await _captureCamera.CaptureVideoStopAsync();
            _captureCamera.OnRawVideoFrameAvailable -= OnCaptureRawVideoFrameAvailable;
        }

        await _captureCamera.DisconnectAsync();
        _captureCamera = null;
    }

    _isCapturingVideo = false;
}

/// <summary>
/// Gets the Image stream capabilities.
/// </summary>
/// <returns>True if MLCamera returned at least one stream capability.</returns>
private bool GetStreamCapabilityWBestFit(out MLCameraBase.StreamCapability streamCapability)
{
    streamCapability = default;

    if (_captureCamera == null)
    {
        Debug.Log("Could not get Stream capabilities Info. No Camera Connected");
        return false;
    }

    MLCameraBase.StreamCapability[] streamCapabilities =
        MLCameraBase.GetImageStreamCapabilitiesForCamera(_captureCamera, MLCameraBase.CaptureType.Video);

    if (streamCapabilities.Length <= 0)
        return false;

    if (MLCameraBase.TryGetBestFitStreamCapabilityFromCollection(streamCapabilities, _targetImageWidth,
            _targetImageHeight, MLCameraBase.CaptureType.Video,
            out streamCapability))
    {
        Debug.Log($"Stream: {streamCapability} selected with best fit.");
        return true;
    }

    Debug.Log($"No best fit found. Stream: {streamCapabilities[0]} selected by default.");
    streamCapability = streamCapabilities[0];
    return true;
}

private void OnCaptureRawVideoFrameAvailable(MLCameraBase.CameraOutput cameraOutput,
    MLCameraBase.ResultExtras resultExtras,
    MLCameraBase.Metadata metadata)
{
    //Cache or use camera data as needed
    //TODO: Implement use of camera data
    MLResult result = MLCVCamera.GetFramePose(resultExtras.VCamTimestamp, out Matrix4x4 outMatrix);
    if (result.IsOk)
    {
        string cameraExtrinsics = "Camera Extrinsics";
        cameraExtrinsics += "Position " + outMatrix.GetPosition();
        cameraExtrinsics += "Rotation " + outMatrix.rotation;
        Debug.Log(cameraExtrinsics);
    }
    else
    {
        Debug.Log(MLResult.CodeToString(result.Result));
    }

    if (cameraOutput.Format == MLCamera.OutputFormat.RGBA_8888)
    {
        //Flips the frame vertically so it does not appear upside down.
        MLCamera.FlipFrameVertically(ref cameraOutput);
        UpdateRGBTexture(ref _videoTextureRgb, cameraOutput.Planes[0], _RawImage);
    }
}

private void UpdateRGBTexture(ref Texture2D videoTextureRGB, MLCamera.PlaneInfo imagePlane, RawImage rawImage)
{

    if (videoTextureRGB != null &&
        (videoTextureRGB.width != imagePlane.Width || videoTextureRGB.height != imagePlane.Height))
    {
        Destroy(videoTextureRGB);
        videoTextureRGB = null;
    }

    if (videoTextureRGB == null)
    {
        videoTextureRGB = new Texture2D((int)imagePlane.Width, (int)imagePlane.Height, TextureFormat.RGBA32, false);
        videoTextureRGB.filterMode = FilterMode.Bilinear;

        _RawImage.texture = videoTextureRGB;
    }

    int actualWidth = (int)(imagePlane.Width * imagePlane.PixelStride);

    if (imagePlane.Stride != actualWidth)
    {
        var newTextureChannel = new byte[actualWidth * imagePlane.Height];
        for (int i = 0; i < imagePlane.Height; i++)
        {
            Buffer.BlockCopy(imagePlane.Data, (int)(i * imagePlane.Stride), newTextureChannel, i * actualWidth, actualWidth);
        }
        videoTextureRGB.LoadRawTextureData(newTextureChannel);
    }
    else
    {
        videoTextureRGB.LoadRawTextureData(imagePlane.Data);
    }
    videoTextureRGB.Apply();
}

}

The video is correctly displayed on the Raw Image, and yes, the Unity camera follows the movement of the headset.

I'm sharing the log with you:

log.txt (614.9 KB)

Thank you for the additional information. I was able to reproduce your issue and reported it to the Unity SDK team. It appears that the ML Camera Pose function does not work properly when using OpenXR.

Thank you for your help!

Do you have an idea of how long it will take to correct this issue?

We need to know this pose to continue our project.

I have escalated the issue internally but do not have an ETA for the update. Would it be possible for you to use the MLSDK while we work on resolving this issue, or is OpenXR a hard requirement?

Yes, OpenXR is a hard requirement because my project must be multi-platform. I will try the MLSDK while waiting for you to resolve the issue. Can you keep me informed of progress?

Absolutely, we are working to include a fix for this in the next SDK release. Sorry for any inconvenience this may cause.

This topic was automatically closed 15 days after the last reply. New replies are no longer allowed.