LoadRawTextureData throwing "not enough data provided" error

Unity Editor version: 2022.3.21f1
Unity SDK version: 2.0.0
Host OS: Windows

Error messages from the logs:

2024-08-20 08:01:10.614;10120;17887;17908;Error;Unity;UnityException: LoadRawTextureData: not enough data provided (will result in overread).;
2024-08-20 08:01:10.614;10120;17887;17908;Error;Unity;at UnityEngine.Texture2D.LoadRawTextureData (System.Byte[] data) [0x00000] in <00000000000000000000000000000000>:0;

Full logs with debugging statements:

2024-08-20 08:01:10.355;10120;17887;17908;Info;Unity;Video capture started!;
2024-08-20 08:01:10.356;10120;17887;17908;Info;Unity;Start function is called;
2024-08-20 08:01:10.356;10120;17887;17908;Info;Unity;OpenCVMarkerDetection:Start();
2024-08-20 08:01:10.356;10120;17887;17908;Info;Unity;;
2024-08-20 08:01:10.564;4045;4245;4245;Debug;BluetoothGatt;readRssi() - device: C8:9F:3A:CD:A3:B5;
2024-08-20 08:01:10.604;10120;17887;17908;Info;Unity;Camera Intrinsics;
2024-08-20 08:01:10.604;10120;17887;17908;Info;Unity;Width 1280;
2024-08-20 08:01:10.604;10120;17887;17908;Info;Unity;Height 720;
2024-08-20 08:01:10.604;10120;17887;17908;Info;Unity;FOV 75.78043;
2024-08-20 08:01:10.604;10120;17887;17908;Info;Unity;FocalLength (1004.58, 1004.41);
2024-08-20 08:01:10.604;10120;17887;17908;Info;Unity;PrincipalPoint (651.36, 365.76);
2024-08-20 08:01:10.604;10120;17887;17908;Info;Unity;Fileformat = RGBA;
2024-08-20 08:01:10.609;10120;17887;17908;Info;Unity;Inside UpdateRGBTexture;
2024-08-20 08:01:10.609;10120;17887;17908;Info;Unity;_rawVideoTextureRGBA is null;
2024-08-20 08:01:10.613;10120;17887;17908;Info;Unity;No padding;
2024-08-20 08:01:10.614;10120;17887;17908;Error;Unity;UnityException: LoadRawTextureData: not enough data provided (will result in overread).;
2024-08-20 08:01:10.614;10120;17887;17908;Error;Unity;at UnityEngine.Texture2D.LoadRawTextureData (System.Byte[] data) [0x00000] in <00000000000000000000000000000000>:0;
2024-08-20 08:01:10.614;10120;17887;17908;Error;Unity;at OpenCVMarkerDetection.UpdateRGBTexture (UnityEngine.XR.MagicLeap.MLCameraBase+PlaneInfo imagePlane) [0x00000] in <00000000000000000000000000000000>:0;
2024-08-20 08:01:10.614;10120;17887;17908;Error;Unity;at OpenCVMarkerDetection.RawVideoFrameAvailable (UnityEngine.XR.MagicLeap.MLCameraBase+CameraOutput output, UnityEngine.XR.MagicLeap.MLCameraBase+ResultExtras resultExtras, UnityEngine.XR.MagicLeap.MLCameraBase+Metadata metadataHandle) [0x00000] in <00000000000000000000000000000000>:0;
2024-08-20 08:01:10.614;10120;17887;17908;Error;Unity;at UnityEngine.XR.MagicLeap.Native.MLThreadDispatch+DispatchPayload3`3[A,B,C].Dispatch () [0x00000] in <00000000000000000000000000000000>:0;
2024-08-20 08:01:10.614;10120;17887;17908;Error;Unity;at UnityEngine.XR.MagicLeap.Native.MLThreadDispatch.DispatchAll () [0x00000] in <00000000000000000000000000000000>:0;

Despite using the code given in the official Magic Leap documentation, the LoadRawTextureData() call in the code below fails. You can trace its behavior with the debug statements shown in the logs above: the "No padding" branch is taken and no "Data array length" error is logged, so the buffer covers at least Width × PixelStride × Height bytes, yet LoadRawTextureData still throws.

Here is the code snippet:

    private void UpdateRGBTexture(MLCamera.PlaneInfo imagePlane)
    {
        Debug.Log("Inside UpdateRGBTexture");
        int actualWidth = (int)(imagePlane.Width * imagePlane.PixelStride);
        
        if (_rawVideoTextureRGBA != null &&
            (_rawVideoTextureRGBA.width != imagePlane.Width || _rawVideoTextureRGBA.height != imagePlane.Height))
        {
            Destroy(_rawVideoTextureRGBA);
            _rawVideoTextureRGBA = null;
            Debug.Log("Destroying _rawVideoTextureRGBA");
        }

        if (_rawVideoTextureRGBA == null)
        {
            Debug.Log("_rawVideoTextureRGBA is null");
            // Create a new texture that will display the RGB image
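            // The final constructor argument ("true") requests a full mip chain for the texture.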
            _rawVideoTextureRGBA = new Texture2D((int)imagePlane.Width, (int)imagePlane.Height, TextureFormat.RGBA32, true);
            _rawVideoTextureRGBA.filterMode = FilterMode.Bilinear;

            //Got the texture
            //_screenRendererRGBA.texture = _rawVideoTextureRGBA;
        }
        else
        {
            Debug.Log("_rawVideoTextureRGBA is NOT null");
        }

        // Image width and stride may differ due to padding bytes for memory alignment. Skip over padding bytes when accessing pixel data.
        if (imagePlane.Stride != actualWidth)
        {
            Debug.Log("There is some padding");

            if (imagePlane.Data.Length < imagePlane.Stride * imagePlane.Height)
            {
                Debug.LogError($"Data array length ({imagePlane.Data.Length}) is less than expected ({imagePlane.Stride * imagePlane.Height}).");
            }


            // Create a new array to store the pixel data without padding
            var newTextureChannel = new byte[actualWidth * imagePlane.Height];

            // Loop through each row of the image
            for (int i = 0; i < imagePlane.Height; i++)
            {
                // Copy the pixel data from the original array to the new array, skipping the padding bytes
                Buffer.BlockCopy(imagePlane.Data, (int)(i * imagePlane.Stride), newTextureChannel, i * actualWidth, actualWidth);
            }
            // Load the new array as the texture data
            _rawVideoTextureRGBA.LoadRawTextureData(newTextureChannel);
        }
        else // If the stride is equal to the width, no padding bytes are present
        {
            Debug.Log("No padding");
            if (imagePlane.Data.Length < actualWidth * imagePlane.Height)
            {
                Debug.LogError($"Data array length ({imagePlane.Data.Length}) is less than expected ({actualWidth * imagePlane.Height}).");
            }
            _rawVideoTextureRGBA.LoadRawTextureData(imagePlane.Data);
        }

        _rawVideoTextureRGBA.Apply();

        Debug.Log("Starting Marker Detection");
        PerformMarkerDetection(_rawVideoTextureRGBA);

    }
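
One debugging idea (a sketch of my own, not from the Magic Leap sample; LogExpectedVsProvided is a name I made up): compare the number of bytes the texture expects with the number of bytes the camera plane delivers. GetRawTextureData returns a buffer sized for the texture's format, dimensions, and mip chain, so its length is exactly the minimum that LoadRawTextureData will accept.

    private void LogExpectedVsProvided(Texture2D texture, MLCamera.PlaneInfo imagePlane)
    {
        // GetRawTextureData allocates a buffer covering the whole texture,
        // including every mip level, so its length equals the minimum size
        // LoadRawTextureData accepts.
        int expectedBytes = texture.GetRawTextureData().Length;
        int providedBytes = imagePlane.Data.Length;
        Debug.Log($"Texture expects {expectedBytes} bytes, camera plane provides {providedBytes} bytes.");
    }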

Here is my full source code:

using System;
using System.Collections;
using UnityEngine;
using UnityEngine.UI;
using TMPro;
using UnityEngine.XR.MagicLeap;

//OpenCVForUnity libraries for marker detection
using OpenCVForUnity.UnityUtils.Helper;
using OpenCVForUnity.UnityUtils;
using OpenCVForUnity.ImgprocModule;
using OpenCVForUnity.ObjdetectModule;
using OpenCVForUnity.CoreModule;
using System.Collections.Generic;

public class OpenCVMarkerDetection : MonoBehaviour
{
    //Just for testing
    public TextMeshProUGUI logTextBox;
    [SerializeField, Tooltip("Desired width for the camera capture")]
    private int captureWidth = 1280;
    [SerializeField, Tooltip("Desired height for the camera capture")]
    private int captureHeight = 720;
    //The identifier can either target the Main or CV cameras.
    private MLCamera.Identifier _identifier = MLCamera.Identifier.CV;
    //Cached version of the MLCamera instance.
    private MLCamera _camera;
    //Is true if the camera is ready to be connected.
    private bool _cameraDeviceAvailable;
    //Cache the capture configure for later use.
    private MLCamera.CaptureConfig _captureConfig;
    //The camera capture state
    bool _isCapturing;

    // Variable to check if Camera permission has been granted by the user
    private bool permissionGranted = false;
    private readonly MLPermissions.Callbacks permissionCallbacks = new MLPermissions.Callbacks();

    //For OpenCV
    private Texture2D _rawVideoTextureRGBA;
    private DetectorParameters detectorParameters;
    private Dictionary dictionary;
    WebCamTextureToMatHelper webCamTextureToMatHelper;
    private ArucoDetector myDetector;
    private Mat rgbMat;

    // Variables to hold results
    private List<Mat> corners;
    private List<Mat> rejectedCorners;
    private Mat ids;



    void Start()
    {
        MLPermissions.RequestPermission(MLPermission.Camera, permissionCallbacks);
        logTextBox.text = "Start function is called";
        Debug.Log("Start function is called");
        //StartCoroutine(EnableMLCamera());

        //Configuring ARuCo marker
        detectorParameters = new DetectorParameters();
        dictionary = Objdetect.getPredefinedDictionary(Objdetect.DICT_5X5_100);
        ids = new Mat();
        corners = new List<Mat>();
        rejectedCorners = new List<Mat>();
        rgbMat = new Mat();
        myDetector = new ArucoDetector(dictionary, detectorParameters);
    }

    //Waits for the camera to be ready and then connects to it.
    private IEnumerator EnableMLCamera()
    {
        //Checks the main camera's availability.
        while (!_cameraDeviceAvailable)
        {
            MLResult result = MLCamera.GetDeviceAvailabilityStatus(_identifier, out _cameraDeviceAvailable);
            if (result.IsOk == false || _cameraDeviceAvailable == false)
            {
                // Wait until camera device is available
                yield return new WaitForSeconds(1.0f);
            }
        }
        ConnectCamera();
    }

    private void ConnectCamera()
    {
        //Once the camera is available, we can connect to it.
        if (_cameraDeviceAvailable)
        {
            MLCamera.ConnectContext connectContext = MLCamera.ConnectContext.Create();
            connectContext.CamId = _identifier;
            //MLCamera.Identifier.Main is the only camera that can access the virtual and mixed reality flags
            connectContext.Flags = MLCamera.ConnectFlag.CamOnly;
            connectContext.EnableVideoStabilization = true;

            _camera = MLCamera.CreateAndConnect(connectContext);
            if (_camera != null)
            {
                logTextBox.text = "Camera device connected";
                Debug.Log("Camera device connected");
                ConfigureCameraInput();
                SetCameraCallbacks();
            }
        }
    }

    private void ConfigureCameraInput()
    {
        //Gets the stream capabilities of the selected camera (supported capture types, formats, and resolutions).
        MLCamera.StreamCapability[] streamCapabilities = MLCamera.GetImageStreamCapabilitiesForCamera(_camera, MLCamera.CaptureType.Video);

        if (streamCapabilities.Length == 0)
            return;

        //Set the default capability stream
        MLCamera.StreamCapability defaultCapability = streamCapabilities[0];

        //Try to get the stream that most closely matches the target width and height
        if (MLCamera.TryGetBestFitStreamCapabilityFromCollection(streamCapabilities, captureWidth, captureHeight,
                MLCamera.CaptureType.Video, out MLCamera.StreamCapability selectedCapability))
        {
            defaultCapability = selectedCapability;
        }

        //Initialize a new capture config.
        _captureConfig = new MLCamera.CaptureConfig();
        //Set RGBA video as the output
        MLCamera.OutputFormat outputFormat = MLCamera.OutputFormat.RGBA_8888;
        //Set the Frame Rate to 30fps
        _captureConfig.CaptureFrameRate = MLCamera.CaptureFrameRate._30FPS;
        //Initialize a camera stream config.
        //The Main Camera can support up to two stream configurations
        _captureConfig.StreamConfigs = new MLCamera.CaptureStreamConfig[1];
        _captureConfig.StreamConfigs[0] = MLCamera.CaptureStreamConfig.Create(
            defaultCapability, outputFormat
        );
        StartVideoCapture();
    }

    private void StartVideoCapture()
    {
        MLResult result = _camera.PrepareCapture(_captureConfig, out MLCamera.Metadata metaData);
        if (result.IsOk)
        {
            // Trigger auto exposure and auto white balance
            _camera.PreCaptureAEAWB();
            // Starts video capture. This can also be called asynchronously.
            // Image capture uses the CaptureImage function instead.
            result = _camera.CaptureVideoStart();
            if (result.IsOk)
            {
                logTextBox.text = "Video capture started!";
                Debug.Log("Video capture started!");
                _isCapturing = true;
            }
            else
            {
                Debug.LogError("Failed to start video capture!");
            }
        }
    }

    private void SetCameraCallbacks()
    {
        //Provides frames in either YUV/RGBA format depending on the stream configuration
        _camera.OnRawVideoFrameAvailable += RawVideoFrameAvailable;
    }

    void RawVideoFrameAvailable(MLCamera.CameraOutput output, MLCamera.ResultExtras resultExtras, MLCamera.Metadata metadataHandle)
    {
        if (resultExtras.Intrinsics != null)
        {
            string cameraIntrinsics = "Camera Intrinsics";
            cameraIntrinsics += "\n Width " + resultExtras.Intrinsics.Value.Width;
            cameraIntrinsics += "\n Height " + resultExtras.Intrinsics.Value.Height;
            cameraIntrinsics += "\n FOV " + resultExtras.Intrinsics.Value.FOV;
            cameraIntrinsics += "\n FocalLength " + resultExtras.Intrinsics.Value.FocalLength;
            cameraIntrinsics += "\n PrincipalPoint " + resultExtras.Intrinsics.Value.PrincipalPoint;
            logTextBox.text = cameraIntrinsics;
            Debug.Log(cameraIntrinsics);
        }

        if (output.Format == MLCamera.OutputFormat.JPEG)
        {
            // JPEG Output
            //logTextBox.text = "Fileformat = JPEG";
            Debug.Log("Fileformat = JPEG");

        }
        else if (output.Format == MLCamera.OutputFormat.YUV_420_888)
        {
            // YUV Output
            //logTextBox.text = "Fileformat = YUV";
            Debug.Log("Filefomrat = YUV");
        }
        else if (output.Format == MLCamera.OutputFormat.RGBA_8888)
        {
            // RGBA Output
            //logTextBox.text = "Fileformat = RGBA";
            Debug.Log("Fileformat = RGBA");

            MLCamera.FlipFrameVertically(ref output);
            UpdateRGBTexture(output.Planes[0]);
        }
    }

    private void UpdateRGBTexture(MLCamera.PlaneInfo imagePlane)
    {
        Debug.Log("Inside UpdateRGBTexture");
        int actualWidth = (int)(imagePlane.Width * imagePlane.PixelStride);
        
        if (_rawVideoTextureRGBA != null &&
            (_rawVideoTextureRGBA.width != imagePlane.Width || _rawVideoTextureRGBA.height != imagePlane.Height))
        {
            Destroy(_rawVideoTextureRGBA);
            _rawVideoTextureRGBA = null;
            Debug.Log("Destroying _rawVideoTextureRGBA");
        }

        if (_rawVideoTextureRGBA == null)
        {
            Debug.Log("_rawVideoTextureRGBA is null");
            // Create a new texture that will display the RGB image
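            // The final constructor argument ("true") requests a full mip chain for the texture.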
            _rawVideoTextureRGBA = new Texture2D((int)imagePlane.Width, (int)imagePlane.Height, TextureFormat.RGBA32, true);
            _rawVideoTextureRGBA.filterMode = FilterMode.Bilinear;

            //Got the texture
            //_screenRendererRGBA.texture = _rawVideoTextureRGBA;
        }
        else
        {
            Debug.Log("_rawVideoTextureRGBA is NOT null");
        }

        // Image width and stride may differ due to padding bytes for memory alignment. Skip over padding bytes when accessing pixel data.
        if (imagePlane.Stride != actualWidth)
        {
            Debug.Log("There is some padding");

            if (imagePlane.Data.Length < imagePlane.Stride * imagePlane.Height)
            {
                Debug.LogError($"Data array length ({imagePlane.Data.Length}) is less than expected ({imagePlane.Stride * imagePlane.Height}).");
            }


            // Create a new array to store the pixel data without padding
            var newTextureChannel = new byte[actualWidth * imagePlane.Height];

            // Loop through each row of the image
            for (int i = 0; i < imagePlane.Height; i++)
            {
                // Copy the pixel data from the original array to the new array, skipping the padding bytes
                Buffer.BlockCopy(imagePlane.Data, (int)(i * imagePlane.Stride), newTextureChannel, i * actualWidth, actualWidth);
            }
            // Load the new array as the texture data
            _rawVideoTextureRGBA.LoadRawTextureData(newTextureChannel);
        }
        else // If the stride is equal to the width, no padding bytes are present
        {
            Debug.Log("No padding");
            if (imagePlane.Data.Length < actualWidth * imagePlane.Height)
            {
                Debug.LogError($"Data array length ({imagePlane.Data.Length}) is less than expected ({actualWidth * imagePlane.Height}).");
            }
            _rawVideoTextureRGBA.LoadRawTextureData(imagePlane.Data);
        }

        _rawVideoTextureRGBA.Apply();

        Debug.Log("Starting Marker Detection");
        PerformMarkerDetection(_rawVideoTextureRGBA);

    }

    private void PerformMarkerDetection(Texture2D texture)
    {
        Debug.Log("Marker detection started");

        if (texture == null)
        {
            logTextBox.text = "Texture is null. Cannot perform marker detection.";
            Debug.LogError("Texture is null. Cannot perform marker detection.");
            return;
        }

        //Checking which conversion works properly

        //First method
        Mat mat = new Mat(texture.height, texture.width, CvType.CV_8UC4);
        Utils.fastTexture2DToMat(texture, mat);
        if (!mat.empty())
        {
            // Get the color value of the first pixel (0, 0) in the Mat
            double[] matPixel = mat.get(0, 0); // Returns an array of 4 values (R, G, B, A)

            // Get the color value of the first pixel in the Texture2D
            Color32 texturePixel = texture.GetPixel(0, 0); // Gets the pixel in the Texture2D

            // Log the pixel values for comparison
            Debug.Log($"First method Mat first pixel: R={matPixel[0]}, G={matPixel[1]}, B={matPixel[2]}, A={matPixel[3]}");
            Debug.Log($"First method Texture2D first pixel: R={texturePixel.r}, G={texturePixel.g}, B={texturePixel.b}, A={texturePixel.a}");

            // Compare the values
            if (matPixel[0] == texturePixel.r &&
                matPixel[1] == texturePixel.g &&
                matPixel[2] == texturePixel.b &&
                matPixel[3] == texturePixel.a)
            {
                Debug.Log("First method First pixel values match between Mat and Texture2D.");
            }
            else
            {
                logTextBox.text = "First method First pixel values do NOT match between Mat and Texture2D.";
                Debug.LogError("First method First pixel values do NOT match between Mat and Texture2D.");
                return;
            }
        }
        else
        {
            logTextBox.text = "First method Mat is empty. Conversion failed.";
           Debug.LogError("First method Mat is empty. Conversion failed.");
            return;
        }


        //second method
        Mat mat2 = new Mat(texture.height, texture.width, CvType.CV_8UC4);
        Utils.texture2DToMat(texture, mat2);
        if (!mat2.empty())
        {
            // Get the color value of the first pixel (0, 0) in the Mat
            double[] matPixel2 = mat2.get(0, 0); // Returns an array of 4 values (R, G, B, A)

            // Get the color value of the first pixel in the Texture2D
            Color32 texturePixel2 = texture.GetPixel(0, 0); // Gets the pixel in the Texture2D

            // Log the pixel values for comparison
            Debug.Log($"Second method Mat first pixel: R={matPixel2[0]}, G={matPixel2[1]}, B={matPixel2[2]}, A={matPixel2[3]}");
            Debug.Log($"Second method Texture2D first pixel: R={texturePixel2.r}, G={texturePixel2.g}, B={texturePixel2.b}, A={texturePixel2.a}");

            // Compare the values
            if (matPixel2[0] == texturePixel2.r &&
                matPixel2[1] == texturePixel2.g &&
                matPixel2[2] == texturePixel2.b &&
                matPixel2[3] == texturePixel2.a)
            {
                Debug.Log("Second method First pixel values match between Mat and Texture2D.");
            }
            else
            {
                logTextBox.text = "Second method First pixel values do NOT match between Mat and Texture2D.";
                Debug.LogError("Second method First pixel values do NOT match between Mat and Texture2D.");
                return;
            }
        }
        else
        {
            logTextBox.text = "Second method Mat is empty. Conversion failed.";
            Debug.LogError("Second method Mat is empty. Conversion failed.");
            return;
        }




        Mat grayMat = new Mat(mat.rows(), mat.cols(), CvType.CV_8UC1);
        Imgproc.cvtColor(mat, grayMat, Imgproc.COLOR_RGBA2GRAY);


        myDetector.detectMarkers(grayMat, corners, ids, rejectedCorners);
        if (corners.Count == ids.total() || ids.total() == 0)
        {
            Debug.Log("Detected marker IDs and their corner coordinates:");

            for (int i = 0; i < ids.total(); i++)
            {
                // Print the marker ID
                int id = (int)ids.get(i, 0)[0];
                logTextBox.text = "Marker ID: " + id; 
                Debug.Log("Marker ID: " + id);

                // Get the corners for this marker
                Mat cornerMat = corners[i];

                for (int j = 0; j < cornerMat.rows(); j++)
                {
                    double[] corner = cornerMat.get(j, 0);
                    Debug.Log($"Corner {j + 1}: X = {corner[0]}, Y = {corner[1]}");
                }
            }
        }
    }

    private void StopVideoCapture()
    {
        if (_isCapturing)
        {
            _camera.CaptureVideoStop();
        }
        _isCapturing = false;
    }

    private void Awake()
    {
        permissionCallbacks.OnPermissionGranted += OnPermissionGranted;
        permissionCallbacks.OnPermissionDenied += OnPermissionDenied;
        permissionCallbacks.OnPermissionDeniedAndDontAskAgain += OnPermissionDenied;
    }

    private void OnDestroy()
    {
        permissionCallbacks.OnPermissionGranted -= OnPermissionGranted;
        permissionCallbacks.OnPermissionDenied -= OnPermissionDenied;
        permissionCallbacks.OnPermissionDeniedAndDontAskAgain -= OnPermissionDenied;
    }

    private void OnPermissionDenied(string permission)
    {
        logTextBox.text = $"{permission} denied. The example will not function as expected.";
        Debug.Log($"{permission} denied. The example will not function as expected.");
    }

    private void OnPermissionGranted(string permission)
    {
        logTextBox.text = $"{permission} granted. The example will function as expected.";
        permissionGranted = true;
        Debug.Log($"{permission} granted. The example will function as expected.");
        StartCoroutine(EnableMLCamera());
    }
}

Have you tested the example directly from the developer portal without your changes? Does the example produce this error as well?

I haven't tested it, but we can assume that the official code provided in the documentation will show this error as well. If you trace the code, you'll see that the error occurs in the LoadRawTextureData call; all of the code is copied directly from the official documentation, with only some debugging statements added.

It's important to verify whether the issue exists in the official example. I recommend testing the unmodified code from the Magic Leap documentation first to see if the error persists. This will help narrow down whether the issue is in the provided code or introduced by your modifications. We don't have the capacity to debug every individual implementation :wink:
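
For what it's worth, one difference that stands out in your snippet: the Texture2D is created with the mip chain enabled (the last constructor argument is true). On a texture with mipmaps, LoadRawTextureData expects data for every mip level, roughly a third more than the base image alone, which would match the "not enough data provided" message. Here is a minimal sketch of that failing pattern, independent of the camera code (my own repro, assuming the mip chain is the culprit):

    void MipChainRepro()
    {
        // A 1280x720 RGBA32 texture with mipChain: true stores every mip level.
        var tex = new Texture2D(1280, 720, TextureFormat.RGBA32, true);

        // Only the base level: 1280 * 720 * 4 = 3,686,400 bytes.
        var baseLevelOnly = new byte[1280 * 720 * 4];

        // Throws "LoadRawTextureData: not enough data provided (will result
        // in overread)" because the buffer lacks the additional mip levels.
        tex.LoadRawTextureData(baseLevelOnly);
    }

If the unmodified example creates its texture with mipChain set to false, that would also explain why it does not reproduce the error.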

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.