Obtain Eye Camera Images in Unity (Example)


Unity Editor version: 2022.2.0f1
ML2 OS version: 1.3.0
MLSDK version: 1.9.0
Host OS: Windows

This post explains how to receive and visualize the eye camera images in your Unity application. Note that this is an experimental API and is subject to change without forward or backward compatibility guarantees.

This example requires the com.magicleap.permission.EYE_CAMERA permission to be declared in your project's manifest file. Because it is a dangerous permission, it must also be requested at runtime.
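For reference, the manifest declaration might look like the following sketch. The file path is the typical location in a Unity project, and the permission string should be verified against your SDK version:

```xml
<!-- Assets/Plugins/Android/AndroidManifest.xml (typical location; verify for your setup) -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- Declares the eye camera permission; it must also be granted at runtime. -->
    <uses-permission android:name="com.magicleap.permission.EYE_CAMERA" />
</manifest>
```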

Below is the EyeCameraTest.cs script, which obtains the camera images and sends them to a visualizer to be displayed on a renderer. Note that displaying all four camera images can cause the MLEyeCamera.GetLatestCameraData() function to time out. The timeout defaults to zero but should be adjusted based on your use case. Also note that this function is not thread-safe.

    using UnityEngine;
    using UnityEngine.XR.MagicLeap;

    public class EyeCameraTest : MonoBehaviour
    {
        [SerializeField, Tooltip("Reference to the Raw Video Capture Visualizer gameobject for Grayscale frames")]
        private EyeCameraCaptureVisualizer eyeCameraCaptureVisualizer = null;

        private MLEyeCamera.EyeCameraData data;

        private readonly MLPermissions.Callbacks permissionCallbacks = new MLPermissions.Callbacks();

        private bool isPermissionGranted;

        void Start()
        {
            permissionCallbacks.OnPermissionGranted += OnPermissionGranted;
            permissionCallbacks.OnPermissionDenied += OnPermissionDenied;
            permissionCallbacks.OnPermissionDeniedAndDontAskAgain += OnPermissionDenied;
            MLPermissions.RequestPermission(MLPermission.EyeCamera, permissionCallbacks);
        }

        void OnDestroy()
        {
            permissionCallbacks.OnPermissionGranted -= OnPermissionGranted;
            permissionCallbacks.OnPermissionDenied -= OnPermissionDenied;
            permissionCallbacks.OnPermissionDeniedAndDontAskAgain -= OnPermissionDenied;
        }

        void Update()
        {
            if (!isPermissionGranted) return;

            // A timeout of 0 returns immediately; increase it if frames time out.
            MLEyeCamera.GetLatestCameraData(out data, 0);

            // disregard when the cameras aren't showing all frames
            if (data.Frames.Length < MLEyeCamera.ActiveCamerasCount) return;

            int frameDataIndex = 0;
            for (int i = 0; i < MLEyeCamera.MaxFrameCount; i++)
            {
                // only render camera positions of active cameras
                var framePosition = (EyeCameraCaptureVisualizer.EyeCameraFramePosition)i;
                if (EyeCameraCaptureVisualizer.HasEyeCameraIdentifier(framePosition))
                {
                    eyeCameraCaptureVisualizer.OnCaptureDataReceived(framePosition, data, frameDataIndex);
                    frameDataIndex++;
                }
            }
        }

        private void OnPermissionGranted(string permission)
        {
            isPermissionGranted = true;
        }

        private void OnPermissionDenied(string permission)
        {
            Debug.LogError($"Error: EyeCameraTest failed to get {permission} permissions, disabling script.");
            enabled = false;
        }
    }

The EyeCameraCaptureVisualizer.cs script lets users toggle which eye cameras are displayed and sets the textures of four quads to the eye camera images. It also adjusts each quad's aspect ratio to match the aspect ratio of the images received from the eye cameras.

    using System;
    using UnityEngine;
    using UnityEngine.XR.MagicLeap;

    /// <summary>
    /// This class handles visualization of the eye camera video frames.
    /// </summary>
    public class EyeCameraCaptureVisualizer : MonoBehaviour
    {
        /// <summary>
        /// Enumeration of all the available eye camera sensors in the order they are obtained from the native eye camera data.
        /// </summary>
        public enum EyeCameraFramePosition { RightNasal, RightTemple, LeftNasal, LeftTemple }

        [SerializeField, Tooltip("The renderer to show the camera capture on Grayscale format")]
        private Renderer[] _screenRendererGrayscales = null;

        private Texture2D[] rawVideoTexturesGrayscales = new Texture2D[4];
        private float currentAspectRatio;
        private byte[] managedArray;
        private byte[] updatedArray;

        /// <summary>
        /// Checks if there is an active eye camera for a given frame position.
        /// </summary>
        /// <param name="framePosition">The frame position of a corresponding eye camera.</param>
        /// <returns>Whether there is an active eye camera for a frame position.</returns>
        public static bool HasEyeCameraIdentifier(EyeCameraFramePosition framePosition)
        {
            return MLEyeCamera.ActiveCameras.HasFlag(GetNativeEyeCameraIdentifier(framePosition));
        }

        /// <summary>
        /// Converts a frame position into the corresponding MLEyeCameraIdentifier enumeration.
        /// </summary>
        /// <param name="framePosition">The frame position of a corresponding eye camera.</param>
        /// <returns>An MLEyeCameraIdentifier enumerated value.</returns>
        public static MLEyeCamera.MLEyeCameraIdentifier GetNativeEyeCameraIdentifier(EyeCameraFramePosition framePosition)
        {
            switch (framePosition)
            {
                case EyeCameraFramePosition.RightNasal:
                    return MLEyeCamera.MLEyeCameraIdentifier.RightNasal;
                case EyeCameraFramePosition.RightTemple:
                    return MLEyeCamera.MLEyeCameraIdentifier.RightTemple;
                case EyeCameraFramePosition.LeftNasal:
                    return MLEyeCamera.MLEyeCameraIdentifier.LeftNasal;
                case EyeCameraFramePosition.LeftTemple:
                    return MLEyeCamera.MLEyeCameraIdentifier.LeftTemple;
                default:
                    return MLEyeCamera.MLEyeCameraIdentifier.None;
            }
        }

        /// <summary>
        /// Updates the eye camera settings to include a given frame position and displays it.
        /// </summary>
        /// <param name="framePosition">The frame position of a corresponding eye camera.</param>
        public void AddCamera(EyeCameraFramePosition framePosition)
        {
            if (!HasEyeCameraIdentifier(framePosition))
            {
                MLEyeCamera.MLEyeCameraIdentifier currentCameras = MLEyeCamera.ActiveCameras;
                currentCameras |= GetNativeEyeCameraIdentifier(framePosition);
                MLEyeCamera.UpdateSettings(currentCameras);
                _screenRendererGrayscales[(int)framePosition].gameObject.SetActive(true);
            }
        }

        /// <summary>
        /// Updates the eye camera settings to exclude a given frame position and stops displaying it.
        /// </summary>
        /// <param name="framePosition">The frame position of a corresponding eye camera.</param>
        public void RemoveCamera(EyeCameraFramePosition framePosition)
        {
            if (HasEyeCameraIdentifier(framePosition))
            {
                MLEyeCamera.MLEyeCameraIdentifier currentCameras = MLEyeCamera.ActiveCameras;
                currentCameras &= ~GetNativeEyeCameraIdentifier(framePosition);
                MLEyeCamera.UpdateSettings(currentCameras);
                _screenRendererGrayscales[(int)framePosition].gameObject.SetActive(false);
            }
        }

        /// <summary>
        /// Interprets the current eye camera data of a corresponding eye camera and renders it appropriately.
        /// </summary>
        /// <param name="framePosition">The frame position of a corresponding eye camera.</param>
        /// <param name="data">The data associated with the corresponding eye camera.</param>
        /// <param name="frameDataIndex">The index to read the native byte array eye camera data.</param>
        public void OnCaptureDataReceived(EyeCameraFramePosition framePosition, MLEyeCamera.EyeCameraData data, int frameDataIndex)
        {
            int positionNumber = (int)framePosition;
            MLEyeCamera.EyeCameraFrame frameData = data.Frames[frameDataIndex];

            UpdateGrayscaleTexture(ref rawVideoTexturesGrayscales[positionNumber], frameData.FrameBuffer, _screenRendererGrayscales[positionNumber], framePosition);
        }

        private void UpdateGrayscaleTexture(ref Texture2D videoTextureGrayscale, MLEyeCamera.EyeCameraFrameBuffer imageFrame,
                                      Renderer renderer, EyeCameraFramePosition framePosition)
        {
            int width = (int)(imageFrame.Stride / imageFrame.BytesPerPixel);

            if (videoTextureGrayscale != null &&
                (videoTextureGrayscale.width != width || videoTextureGrayscale.height != imageFrame.Height))
            {
                Destroy(videoTextureGrayscale);
                videoTextureGrayscale = null;
            }

            SetProperRatio(width, (int)imageFrame.Height, renderer);
            RotateAndTintImageData(framePosition, imageFrame, width);
            ApplyVideoTexture(ref videoTextureGrayscale, width, imageFrame, renderer, framePosition);
        }

        private void RotateAndTintImageData(EyeCameraFramePosition framePosition, MLEyeCamera.EyeCameraFrameBuffer imageFrame, int width)
        {
            if (managedArray == null)
                managedArray = new byte[imageFrame.Size];

            MLEyeCamera.CopyImageFrameDataToByteArray(imageFrame, ref managedArray);

            if (updatedArray == null)
                updatedArray = new byte[managedArray.Length * 4];

            int w = width;
            int h = (int)imageFrame.Height;
            bool clockwise = framePosition == EyeCameraFramePosition.RightTemple || framePosition == EyeCameraFramePosition.RightNasal;
            int iRotated;
            int iOriginal;

            for (int i = 0; i < h; i++)
            {
                for (int j = 0; j < w; j++)
                {
                    iRotated = (j + 1) * h - i - 1;
                    iOriginal = clockwise ? managedArray.Length - 1 - (i * w + j) : i * w + j;

                    // Brighten the grayscale value, clamping to avoid byte overflow.
                    byte brightened = (byte)Mathf.Min(255f, 1.5f * managedArray[iOriginal] + 0.1125f);
                    updatedArray[4 * iRotated] = brightened;
                    updatedArray[4 * iRotated + 1] = brightened;
                    updatedArray[4 * iRotated + 2] = brightened;
                    updatedArray[4 * iRotated + 3] = 255; // fully opaque
                }
            }
        }

        private void ApplyVideoTexture(ref Texture2D videoTextureGrayscale, int width, MLEyeCamera.EyeCameraFrameBuffer imageFrame,
                                       Renderer renderer, EyeCameraFramePosition framePosition)
        {
            if (videoTextureGrayscale == null)
            {
                videoTextureGrayscale = new Texture2D(width, (int)imageFrame.Height, TextureFormat.RGBA32, false);

                videoTextureGrayscale.filterMode = FilterMode.Bilinear;

                renderer.material.mainTexture = videoTextureGrayscale;

                if (framePosition == EyeCameraFramePosition.RightNasal || framePosition == EyeCameraFramePosition.LeftNasal)
                {
                    renderer.material.mainTextureScale = new Vector2(-1.0f, -1.0f);
                }
            }

            videoTextureGrayscale.LoadRawTextureData(updatedArray);
            videoTextureGrayscale.Apply();
        }

        private void SetProperRatio(int textureWidth, int textureHeight, Renderer renderer)
        {
            float ratio = textureWidth / (float)textureHeight;

            if (Math.Abs(currentAspectRatio - ratio) < float.Epsilon)
                return;

            currentAspectRatio = ratio;
            var localScale = renderer.transform.localScale;
            localScale = new Vector3(currentAspectRatio * localScale.y, localScale.y, 1);
            renderer.transform.localScale = localScale;
        }
    }

Hi kbabilinski,

I would like to ask: what is the function of the RotateAndTintImageData method?
If RotateAndTintImageData is not used, will the obtained image data be in a right-handed coordinate system, with the origin at the top-left corner (0,0)?

The eye images are rotated: some of them are rotated 90° to the left and the others 90° to the right. If you are using the Unity OpenXR SDK, the upcoming version of the Magic Leap SDK for OpenXR will include access to the Pixel Sensor API, which will provide additional information about the frames, including the rotation of each image.
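To illustrate, the index arithmetic in RotateAndTintImageData can be isolated into a plain C# sketch. Rotate90 is a hypothetical helper, not part of the SDK, and the final on-screen orientation also depends on Unity's bottom-up texture layout and the negative mainTextureScale the sample applies to the nasal cameras:

```csharp
// Sketch of the rotation used in RotateAndTintImageData: a w x h row-major
// grayscale buffer becomes an h x w buffer rotated by 90 degrees.
static byte[] Rotate90(byte[] src, int w, int h, bool reverseRead)
{
    var dst = new byte[src.Length];
    for (int i = 0; i < h; i++)
    {
        for (int j = 0; j < w; j++)
        {
            // Destination index: pixel (row i, col j) maps to
            // (row j, col h - 1 - i) in the rotated, h-pixel-wide image.
            int iRotated = (j + 1) * h - i - 1;
            // Reading the source back-to-front adds a 180-degree flip,
            // which rotates the image in the opposite direction.
            int iOriginal = reverseRead ? src.Length - 1 - (i * w + j) : i * w + j;
            dst[iRotated] = src[iOriginal];
        }
    }
    return dst;
}
```

In the sample above, the reversed read is applied to the right-eye cameras, which is why the left and right images end up rotated in opposite directions.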

Please assist in explaining the methods used in this eye image processing and the reasons behind them (for example, why the eye images need to be flipped). Thank you.


Continuing from the previous question, I would like to understand the specific processing each of the four images undergoes, from the original frames to the images displayed on the screen. Thank you.