Need Help Copying Renderer Texture for Both Local and Remote Video Frames on Canvas

Hello Community,

I'm facing an issue while trying to copy renderer textures from video frames onto a Canvas. Specifically, I need to take both local and remote video frames rendered via WebRTC and copy them to textures on a Canvas in Unity.

Unfortunately, I've been having trouble with this task as the texture data is not directly accessible or copyable from the MLWebRTC.VideoSink.Frame object.

Here are the details of my environment:

Unity Editor version: 2022.2.0f1
ML2 OS version: Version 1.3.0-dev1, Build B3E.230427.10-R.023, Android API Level 29
MLSDK version: 1.7.0
Host OS: Windows 11

I've tried copying the texture data using Texture2D.GetPixels(), creating a new texture, and then using Texture2D.SetPixels() on the new texture. However, it seems that MLWebRTC.VideoSink.Frame does not allow me to access the texture data in this way.

Here is the problematic part of my code:

        private void UpdateYUVTextureChannel(ref Texture2D channelTexture, MLWebRTC.VideoSink.Frame.PlaneInfo planeInfo,
                                     Renderer renderer, string samplerName, byte[] newTextureChannel)
        {
            byte[] planeData = new byte[planeInfo.Stride * planeInfo.Height * planeInfo.BytesPerPixel];
            Marshal.Copy(planeInfo.DataPtr, planeData, 0, planeData.Length);
            if (planeData == null)
            {
                return;
            }

            if (channelTexture != null && (channelTexture.width != planeInfo.Width || channelTexture.height != planeInfo.Height))
            {
                Destroy(channelTexture);
                channelTexture = null;
            }
            if (channelTexture == null)
            {
                channelTexture = new Texture2D((int)planeInfo.Width, (int)(planeInfo.Height), TextureFormat.Alpha8, false)
                {
                    filterMode = FilterMode.Bilinear
                };

                renderer.material.SetTexture(samplerName, channelTexture);
            }

            // Create a new texture to hold the original color data
            Texture2D originalTexture = new Texture2D((int)planeInfo.Width, (int)(planeInfo.Height), TextureFormat.Alpha8, false);
            originalTexture.filterMode = FilterMode.Bilinear;
            originalTexture.SetPixels32(channelTexture.GetPixels32());
            originalTexture.Apply();

            int pixelStride = (int)(planeInfo.Stride / planeInfo.Width);

            if (pixelStride == 1)
            {
                channelTexture.LoadRawTextureData(planeInfo.DataPtr, (int)planeInfo.Size);
                channelTexture.Apply();
            }
            else
            {
                if (newTextureChannel == null || newTextureChannel.Length != (planeInfo.Width * planeInfo.Height))
                {
                    newTextureChannel = new byte[planeInfo.Width * planeInfo.Height];
                }

                for (int y = 0; y < planeInfo.Height; y++)
                {
                    for (int x = 0; x < planeInfo.Width; x++)
                    {
                        newTextureChannel[y * planeInfo.Width + x] = planeData[y * planeInfo.Stride + x * pixelStride];
                    }
                }
                channelTexture.LoadRawTextureData(newTextureChannel);
                channelTexture.Apply();
            }

            // Set copyTexture.texture to the original texture after processing
            copyTexture.texture = originalTexture;
        }

In the function above, I copy originalTexture into copyTexture.texture (a UnityEngine.UI.RawImage) at the end of the function:

            // Set copyTexture.texture to the original texture after processing
            copyTexture.texture = originalTexture;

Does anyone know a way around this issue? Any help to get a readable/copyable texture from the MLWebRTC.VideoSink.Frame or any alternative solutions would be greatly appreciated.

Thank you in advance for your help.

Hi @usman.bashir, I'm not sure I understand your question completely.

The Magic Leap Unity SDK includes MLWebRTCVideoSinkBehavior.cs, which can be used to render the video texture onto a renderer. It can also be referenced for information on how to obtain the frame data and render the content onto a canvas.

Here is an example method that can be used to get the data directly from the frame, similar to how the ChannelTextures obtain their data. Note that YUV is provided as three separate textures, but only the Y channel will be updated in the example below.

  private void UpdateYUVTextureChannel(ref Texture2D channelTexture, MLWebRTC.VideoSink.Frame.PlaneInfo planeInfo,
                                         Renderer renderer, string samplerName, byte[] newTextureChannel)
    {
        byte[] planeData = new byte[planeInfo.Stride * planeInfo.Height * planeInfo.BytesPerPixel];
        Marshal.Copy(planeInfo.DataPtr, planeData, 0, planeData.Length);
        if (planeData == null)
        {
            return;
        }

        if (channelTexture != null && (channelTexture.width != planeInfo.Width || channelTexture.height != planeInfo.Height))
        {
            Destroy(channelTexture);
            channelTexture = null;
        }
        if (channelTexture == null)
        {
            channelTexture = new Texture2D((int)planeInfo.Width, (int)(planeInfo.Height), TextureFormat.Alpha8, false)
            {
                filterMode = FilterMode.Bilinear
            };

            renderer.material.SetTexture(samplerName, channelTexture);
        }

        int pixelStride = (int)(planeInfo.Stride / planeInfo.Width);

        if (pixelStride == 1)
        {
            channelTexture.LoadRawTextureData(planeInfo.DataPtr, (int)planeInfo.Size);
            channelTexture.Apply();
        }
        else
        {
            if (newTextureChannel == null || newTextureChannel.Length != (planeInfo.Width * planeInfo.Height))
            {
                newTextureChannel = new byte[planeInfo.Width * planeInfo.Height];
            }

            for (int y = 0; y < planeInfo.Height; y++)
            {
                for (int x = 0; x < planeInfo.Width; x++)
                {
                    newTextureChannel[y * planeInfo.Width + x] = planeData[y * planeInfo.Stride + x * pixelStride];
                }
            }
            channelTexture.LoadRawTextureData(newTextureChannel);
            channelTexture.Apply();
        }

        //Create the custom texture only for the Y channel
        if (samplerName == samplerNamesYUV[0])
        {
            if (CustomVideoTexture != null && (CustomVideoTexture.width != planeInfo.Width || CustomVideoTexture.height != planeInfo.Height))
            {
                Destroy(CustomVideoTexture);
                CustomVideoTexture = null;
            }
            if (CustomVideoTexture == null)
            {
                CustomVideoTexture = new Texture2D((int)planeInfo.Width, (int)(planeInfo.Height), TextureFormat.Alpha8, false)
                {
                    filterMode = FilterMode.Bilinear
                };
            }

            //Once the channel texture is updated, update the custom texture
            if (pixelStride == 1)
            {
                CustomVideoTexture.LoadRawTextureData(planeInfo.DataPtr, (int)planeInfo.Size);
                CustomVideoTexture.Apply();
            }
            else
            {
                //newTextureChannel is updated in the original 'channelTexture' above
                CustomVideoTexture.LoadRawTextureData(newTextureChannel);
                CustomVideoTexture.Apply();
            }
        }
    }

If you are more familiar with the Unity WebRTC package, you can also use that on Magic Leap 2, although you will have to build the plugin yourself until the following pull request is approved.

Issue: Copying Video Frames to Canvas's RawImage in MLWebRTCVideoSinkBehavior.cs

Overview

I am attempting to copy video frames from the MLWebRTCVideoSinkBehavior.cs script to a canvas's RawImage. The goal is to do this for both local and remote videos. As part of the effort, I modified the UpdateYUVTextureChannel() function slightly.

Techniques Tried

I experimented with two techniques:

Technique 1:

Attempted to copy channelTexture directly into copyTexture (which is UnityEngine.UI.RawImage). The code implemented was:

copyTexture.texture = channelTexture;

Technique 2:

Created a new Texture2D object, copied all pixels from channelTexture into it, and then assigned that texture to copyTexture.texture. The code was:

copyTexture.texture = copiedTexture;
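
For reference, the copy step was along these lines (a sketch, assuming channelTexture is CPU-readable and using the same names as above):

// Duplicate the single-channel texture on the CPU, then hand the copy to the RawImage.
Texture2D copiedTexture = new Texture2D(channelTexture.width, channelTexture.height, channelTexture.format, false);
copiedTexture.SetPixels32(channelTexture.GetPixels32()); // requires a readable source texture
copiedTexture.Apply();
copyTexture.texture = copiedTexture;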

Problem

Neither technique yielded the expected results. The QuadRenderer displays the video perfectly in color in world space; on the canvas, however, it shows up as a black-and-white, semi-transparent image.

Hi Usman,

Thank you for that information. The issue is that the Canvas image needs to support the YUV image format, which is composed of three textures that get combined into a single colored image. Because of this, you may want to consider using the same material that is on the YUV renderer on the UI canvas.

You can also combine the YUV image into a single colored texture using blitting, similar to what we do in our Camera Visualization Example: Visualize Camera Output | MagicLeap Developer Documentation
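
For reference, the per-pixel conversion such a shader performs is roughly the following (a C# sketch; BT.601 coefficients and full-range YUV are assumed, the actual YUV_Camera_Shader may use slightly different constants):

using UnityEngine;

public static class YuvToRgbSketch
{
    // Convert one YUV sample (all components in the 0..1 range) to an RGB color.
    public static Color Convert(float y, float u, float v)
    {
        float r = y + 1.402f * (v - 0.5f);
        float g = y - 0.344f * (u - 0.5f) - 0.714f * (v - 0.5f);
        float b = y + 1.772f * (u - 0.5f);
        return new Color(Mathf.Clamp01(r), Mathf.Clamp01(g), Mathf.Clamp01(b));
    }
}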

@kbabilinski I have visited the link you provided and made some modifications.

Here is my updated code in MLWebRTCVideoSinkBehavior.cs:


private void RenderWebRTCFrameYUV(MLWebRTC.VideoSink.Frame frame)
        {
            //copyTexture.texture= rawVideoTexturesYUV[0];
            UpdateYUVTextureChannel(ref rawVideoTexturesYUV[0], frame.ImagePlanes[0], yuvRenderer, samplerNamesYUV[0], yChannelBuffer);
            UpdateYUVTextureChannel(ref rawVideoTexturesYUV[1], frame.ImagePlanes[1], yuvRenderer, samplerNamesYUV[1], uChannelBuffer);
            UpdateYUVTextureChannel(ref rawVideoTexturesYUV[2], frame.ImagePlanes[2], yuvRenderer, samplerNamesYUV[2], vChannelBuffer);

            if (!_renderTexture)
            {
                // Create a render texture that will display the RGB image
                _renderTexture = new RenderTexture((int)frame.ImagePlanes[0].Width, (int)(frame.ImagePlanes[0].Height), 0, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);

                // Create a command buffer that will be used for Blitting
                _commandBuffer = new CommandBuffer();
                _commandBuffer.name = "YUV2RGB";

                // Create a Material with a shader that will combine all of our channels into a single Render Texture
                _yuvMaterial = new Material(Shader.Find("Unlit/YUV_Camera_Shader"));

                // Assign the RawImage Texture to the Render Texture
                //_screenRendererYUV.texture = _renderTexture;
                copyTexture.texture = _renderTexture;
            }

            // Set the texture's scale based on the output image
            //_yuvMaterial.mainTextureScale = new Vector2(1f / frame.ImagePlanes[0].Stride, -1.0f);

            // Blit the resulting Material into a single render texture
            _commandBuffer.Blit(null, _renderTexture, _yuvMaterial);
            Graphics.ExecuteCommandBuffer(_commandBuffer);
            _commandBuffer.Clear();
        }

I have debugged and analyzed this code. I am not seeing any crash with this implementation, but I am not getting anything on the canvas; the texture is still white.

Team, can you please assist with this situation?

We did spot a bug in the camera visualizer example script. You can verify the script's behavior in the CV Camera example scene inside the Magic Leap Unity Example Project. Simply change the camera output to YUV and call OnCaptureDataReceived in the YUVVisualizer.
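
For example, the wiring could look like this (a sketch; it assumes an already-connected MLCamera instance named captureCamera and a YUVVisualizer reference named yuvVisualizer, as in the CV Camera example):

// Forward each raw YUV frame from the camera to the visualizer.
captureCamera.OnRawVideoFrameAvailable += yuvVisualizer.OnCaptureDataReceived;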

You may also want to add an object to your scene that uses the YUV_Camera_Shader so the shader is not stripped during the build. You can also expose the material property as public instead of creating the material at runtime.

using System;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.UI;
using UnityEngine.XR.MagicLeap;

public class YUVVisualizer : MonoBehaviour
{
    [SerializeField, Tooltip("The UI to show the camera capture in YUV format")]
    private RawImage _screenRendererYUV = null;

    //The Image Textures for each channel Y,U,V
    private Texture2D[] _rawVideoTexturesYuv = new Texture2D[3];
    private byte[] _yChannelBuffer;
    private byte[] _uChannelBuffer;
    private byte[] _vChannelBuffer;

    private static readonly string[] SamplerNamesYuv = new string[] { "_MainTex", "_UTex", "_VTex" };

    // The texture that will display our final image
    private RenderTexture _renderTexture;
    private Material _yuvMaterial;
    private CommandBuffer _commandBuffer;

    public void OnCaptureDataReceived(MLCamera.CameraOutput output, MLCamera.ResultExtras resultExtras, MLCamera.Metadata metadataHandle)
    {

        if (output.Format == MLCamera.OutputFormat.YUV_420_888)
        {
            if (!_renderTexture)
            {
                // Create a render texture that will display the RGB image
                _renderTexture = new RenderTexture((int)output.Planes[0].Width, (int)(output.Planes[0].Height), 0, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);

                // Create a command buffer that will be used for Blitting
                _commandBuffer = new CommandBuffer();
                _commandBuffer.name = "YUV2RGB";

                // Create a Material with a shader that will combine all of our channels into a single Render Texture
                _yuvMaterial = new Material(Shader.Find("Unlit/YUV_Camera_Shader"));

                // Assign the RawImage Texture to the Render Texture
                _screenRendererYUV.texture = _renderTexture;
            }

            UpdateYUVTextureChannel(ref _rawVideoTexturesYuv[0], output.Planes[0],
                      SamplerNamesYuv[0], ref _yChannelBuffer);
            UpdateYUVTextureChannel(ref _rawVideoTexturesYuv[1], output.Planes[1],
                SamplerNamesYuv[1], ref _uChannelBuffer);
            UpdateYUVTextureChannel(ref _rawVideoTexturesYuv[2], output.Planes[2],
                SamplerNamesYuv[2], ref _vChannelBuffer);

            // Set the texture's scale based on the output image
            _yuvMaterial.mainTextureScale = new Vector2(1f / output.Planes[0].PixelStride, -1.0f);

            // Blit the resulting Material into a single render texture
            _commandBuffer.Blit(null, _renderTexture, _yuvMaterial);
            Graphics.ExecuteCommandBuffer(_commandBuffer);
            _commandBuffer.Clear();
        }
    }

    private void UpdateYUVTextureChannel(ref Texture2D channelTexture, MLCamera.PlaneInfo imagePlane,
                                               string samplerName, ref byte[] newTextureChannel)
    {
        if (channelTexture != null &&
            (channelTexture.width != imagePlane.Width || channelTexture.height != imagePlane.Height))
        {
            Destroy(channelTexture);
            channelTexture = null;
        }

        if (channelTexture == null)
        {
            if (imagePlane.PixelStride == 2)
            {
                channelTexture = new Texture2D((int)imagePlane.Width, (int)(imagePlane.Height), TextureFormat.RG16, false)
                {
                    filterMode = FilterMode.Bilinear
                };
            }
            else
            {
                channelTexture = new Texture2D((int)imagePlane.Width, (int)(imagePlane.Height), TextureFormat.Alpha8, false)
                {
                    filterMode = FilterMode.Bilinear
                };
            }
            _yuvMaterial.SetTexture(samplerName, channelTexture);
        }

        int actualWidth = (int)(imagePlane.Width * imagePlane.PixelStride);
        if (imagePlane.Stride != actualWidth)
        {
            if (newTextureChannel == null || newTextureChannel.Length != (actualWidth * imagePlane.Height))
            {
                newTextureChannel = new byte[actualWidth * imagePlane.Height];
            }

            for (int i = 0; i < imagePlane.Height; i++)
            {
                Buffer.BlockCopy(imagePlane.Data, (int)(i * imagePlane.Stride), newTextureChannel,
                    i * actualWidth, actualWidth);
            }

            channelTexture.LoadRawTextureData(newTextureChannel);
        }
        else
        {
            channelTexture.LoadRawTextureData(imagePlane.Data);
        }

        channelTexture.Apply();
    }
}

@kbabilinski I did not report this issue as a bug. I have already reviewed the code base and I know everything is working fine in the MLWebRTCExample.cs script.

Secondly, you have posted code from this link: Visualize Camera Output | MagicLeap Developer Documentation. If you look at my previous answer in this thread, I have already updated my code based on this script.

Here is my updated code in MLWebRTCVideoSinkBehavior.cs:

private void RenderWebRTCFrameYUV(MLWebRTC.VideoSink.Frame frame)
        {
            //copyTexture.texture= rawVideoTexturesYUV[0];
            UpdateYUVTextureChannel(ref rawVideoTexturesYUV[0], frame.ImagePlanes[0], yuvRenderer, samplerNamesYUV[0], yChannelBuffer);
            UpdateYUVTextureChannel(ref rawVideoTexturesYUV[1], frame.ImagePlanes[1], yuvRenderer, samplerNamesYUV[1], uChannelBuffer);
            UpdateYUVTextureChannel(ref rawVideoTexturesYUV[2], frame.ImagePlanes[2], yuvRenderer, samplerNamesYUV[2], vChannelBuffer);

            if (!_renderTexture)
            {
                // Create a render texture that will display the RGB image
                _renderTexture = new RenderTexture((int)frame.ImagePlanes[0].Width, (int)(frame.ImagePlanes[0].Height), 0, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);

                // Create a command buffer that will be used for Blitting
                _commandBuffer = new CommandBuffer();
                _commandBuffer.name = "YUV2RGB";

                // Create a Material with a shader that will combine all of our channels into a single Render Texture
                _yuvMaterial = new Material(Shader.Find("Unlit/YUV_Camera_Shader"));

                // Assign the RawImage Texture to the Render Texture
                //_screenRendererYUV.texture = _renderTexture;
                copyTexture.texture = _renderTexture;
            }

            // Set the texture's scale based on the output image
            //_yuvMaterial.mainTextureScale = new Vector2(1f / frame.ImagePlanes[0].Stride, -1.0f);

            // Blit the resulting Material into a single render texture
            _commandBuffer.Blit(null, _renderTexture, _yuvMaterial);
            Graphics.ExecuteCommandBuffer(_commandBuffer);
            _commandBuffer.Clear();
        }

I want to create copies of the local and remote video renderers as RawImages on a canvas. I am using the same material that is on the YUV renderer on the UI canvas.

@usman.bashir Sorry for the misunderstanding; the script on our portal is being updated, as we spotted a minor bug in the example.

You can use the updated script and information to verify that the YUV texture in the CV Camera example renders into an RGB image. Also note:

You may also want to add an object to your scene that uses the YUV_Camera_Shader so the shader is not stripped during the build. You can also expose the material property as public instead of creating the material at runtime.

@kbabilinski I am using the same material that is on the YUV renderer on the UI canvas.

I am going to update the script based on the changes you have pointed out. Thank you so much for the assistance.

@kbabilinski I have checked the portal, but it is still showing "Version: 14 Jun 2023".

I am using this portal: Visualize Camera Output | MagicLeap Developer Documentation

The update will take some time to be reflected on the portal. Also, please use the YUV_Camera_Shader shader in your scene or assign a material with this shader to the Visualizer's material property. This shader should not be attached to the Raw Image, as the Raw Image will display a standard RGB texture.

I have updated the value and now I am getting this: no frames on the RawImage, and even the local renderer's color has changed to a greenish tint.

Here is my updated code in MLWebRTCVideoSinkBehavior.cs:

 private void RenderWebRTCFrameYUV(MLWebRTC.VideoSink.Frame frame)
        {
            //copyTexture.texture= rawVideoTexturesYUV[0];
            UpdateYUVTextureChannel(ref rawVideoTexturesYUV[0], frame.ImagePlanes[0], yuvRenderer, samplerNamesYUV[0], yChannelBuffer);
            UpdateYUVTextureChannel(ref rawVideoTexturesYUV[1], frame.ImagePlanes[1], yuvRenderer, samplerNamesYUV[1], uChannelBuffer);
            UpdateYUVTextureChannel(ref rawVideoTexturesYUV[2], frame.ImagePlanes[2], yuvRenderer, samplerNamesYUV[2], vChannelBuffer);

            if (!_renderTexture)
            {
                // Create a render texture that will display the RGB image
                _renderTexture = new RenderTexture((int)frame.ImagePlanes[0].Width, (int)(frame.ImagePlanes[0].Height), 0, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);

                // Create a command buffer that will be used for Blitting
                _commandBuffer = new CommandBuffer();
                _commandBuffer.name = "YUV2RGB";

                // Create a Material with a shader that will combine all of our channels into a single Render Texture
                _yuvMaterial = new Material(Shader.Find("Unlit/YUV_Camera_Shader"));

                // Assign the RawImage Texture to the Render Texture
                copyTexture.texture = _renderTexture;
            }

            // Set the texture's scale based on the output image
            _yuvMaterial.mainTextureScale = new Vector2(1f / frame.ImagePlanes[0].Stride, -1.0f);

            // Blit the resulting Material into a single render texture
            _commandBuffer.Blit(null, _renderTexture, _yuvMaterial);
            Graphics.ExecuteCommandBuffer(_commandBuffer);
            _commandBuffer.Clear();
        }

There was a minor mistake regarding the local renderer. I have solved that, and now I am able to see the local renderer in RGB format, but the RawImage is still in a greenish state.


I have performed debugging on my code base and inspected the following:

_renderTexture: (screenshot)

copyTexture: (screenshot)

From line-by-line debugging and inspecting the variables, I am getting similar values for copyTexture and _renderTexture.

 if (!_renderTexture)
            {
                // Create a render texture that will display the RGB image
                _renderTexture = new RenderTexture((int)frame.ImagePlanes[0].Width, (int)(frame.ImagePlanes[0].Height), 0, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);

                // Create a command buffer that will be used for Blitting
                _commandBuffer = new CommandBuffer();
                _commandBuffer.name = "YUV2RGB";

                // Create a Material with a shader that will combine all of our channels into a single Render Texture
                _yuvMaterial = new Material(Shader.Find("Unlit/YUV_Camera_Shader"));

                //_yuvMaterial = copyTexture.material;
                // Assign the RawImage Texture to the Render Texture
                copyTexture.texture = _renderTexture;
            }

I created a simple script for you to visualize the YUV image. You will need to drag the YUV_Shader shader into the public field. It looks like the YUV images from the CV camera and the WebRTC API are slightly different.

using System;
using System.Runtime.InteropServices;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.UI;
using UnityEngine.XR.MagicLeap;

public class YUVVisualizer : MonoBehaviour
{
    [SerializeField, Tooltip("The UI to show the camera capture in YUV format")]
    private RawImage _screenRendererYUV = null;
    //Set to YUV_Shader located under Packages/com.magicleap.unitysdk/Runtime/APIs/WebRTC/Shaders/YUV_Shader.shader
    public Shader yuvShader;

    //The Image Textures for each channel Y,U,V
    private Texture2D[] rawVideoTexturesYUV = new Texture2D[MLWebRTC.VideoSink.Frame.NativeImagePlanesLength[MLWebRTC.VideoSink.Frame.OutputFormat.YUV_420_888]];
    private byte[] _yChannelBuffer;
    private byte[] _uChannelBuffer;
    private byte[] _vChannelBuffer;

    private static readonly string[] SamplerNamesYuv = new string[] { "_MainTex", "_UTex", "_VTex" };

    // The texture that will display our final image
    private RenderTexture _renderTexture;
    private Material _yuvMaterial;
    private CommandBuffer _commandBuffer;

    public void RenderWebRTCFrameYUV(MLWebRTC.VideoSink.Frame frame)
    {
        if (frame.Format == MLWebRTC.VideoSink.Frame.OutputFormat.YUV_420_888)
        {
            Debug.Log("GET YUV FRAME");
            if (!_renderTexture)
            {
                // Create a render texture that will display the RGB image
                _renderTexture = new RenderTexture((int)frame.ImagePlanes[0].Width, (int)(frame.ImagePlanes[0].Height), 0, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);

                // Create a command buffer that will be used for Blitting
                _commandBuffer = new CommandBuffer();
                _commandBuffer.name = "YUV2RGB";

                // Create a Material with a shader that will combine all of our channels into a single Render Texture
                _yuvMaterial = new Material(yuvShader);

                // Assign the RawImage Texture to the Render Texture
                _screenRendererYUV.texture = _renderTexture;
            }

            UpdateYUVTextureChannel(ref rawVideoTexturesYUV[0], frame.ImagePlanes[0],
                SamplerNamesYuv[0], _yChannelBuffer);
            UpdateYUVTextureChannel(ref rawVideoTexturesYUV[1], frame.ImagePlanes[1],
                SamplerNamesYuv[1], _uChannelBuffer);
            UpdateYUVTextureChannel(ref rawVideoTexturesYUV[2], frame.ImagePlanes[2],
                SamplerNamesYuv[2], _vChannelBuffer);

            // Set the texture's scale based on the output image
         //   _yuvMaterial.mainTextureScale = new Vector2(1f / frame.ImagePlanes[0].Stride, -1.0f);

            // Blit the resulting Material into a single render texture
            _commandBuffer.Blit(null, _renderTexture, _yuvMaterial);
            Graphics.ExecuteCommandBuffer(_commandBuffer);
            _commandBuffer.Clear();
        }
    }

    private void UpdateYUVTextureChannel(ref Texture2D channelTexture, MLWebRTC.VideoSink.Frame.PlaneInfo planeInfo, string samplerName, byte[] newTextureChannel)
    {
        byte[] planeData = new byte[planeInfo.Stride * planeInfo.Height * planeInfo.BytesPerPixel];
        Marshal.Copy(planeInfo.DataPtr, planeData, 0, planeData.Length);
        if (planeData == null)
        {
            return;
        }

        if (channelTexture != null && (channelTexture.width != planeInfo.Width || channelTexture.height != planeInfo.Height))
        {
            Destroy(channelTexture);
            channelTexture = null;
        }
        if (channelTexture == null)
        {
            channelTexture = new Texture2D((int)planeInfo.Width, (int)(planeInfo.Height), TextureFormat.Alpha8, false)
            {
                filterMode = FilterMode.Bilinear
            };

            _yuvMaterial.SetTexture(samplerName, channelTexture);
        }

        int pixelStride = (int)(planeInfo.Stride / planeInfo.Width);

        if (pixelStride == 1)
        {
            channelTexture.LoadRawTextureData(planeInfo.DataPtr, (int)planeInfo.Size);
            channelTexture.Apply();
        }
        else
        {
            if (newTextureChannel == null || newTextureChannel.Length != (planeInfo.Width * planeInfo.Height))
            {
                newTextureChannel = new byte[planeInfo.Width * planeInfo.Height];
            }

            for (int y = 0; y < planeInfo.Height; y++)
            {
                for (int x = 0; x < planeInfo.Width; x++)
                {
                    newTextureChannel[y * planeInfo.Width + x] = planeData[y * planeInfo.Stride + x * pixelStride];
                }
            }
            channelTexture.LoadRawTextureData(newTextureChannel);
            channelTexture.Apply();
        }
    }
}

@kbabilinski Thanks for the generic script. I have made the following modifications, but still no success at all.

 public void RenderWebRTCFrameYUV(MLWebRTC.VideoSink.Frame frame)
        {
            if (frame.Format == MLWebRTC.VideoSink.Frame.OutputFormat.YUV_420_888)
            {
                Debug.Log("GET YUV FRAME");

                if (!_renderTexture)
                {
                    // Create a render texture that will display the RGB image
                    _renderTexture = new RenderTexture((int)frame.ImagePlanes[0].Width, (int)frame.ImagePlanes[0].Height, 0, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);

                    // Create a command buffer that will be used for Blitting
                    _commandBuffer = new CommandBuffer();
                    _commandBuffer.name = "YUV2RGB";

                    // Assign the RawImage Texture to the Render Texture
                    copyTexture.texture = _renderTexture;

                    // Use existing Material from copyTexture
                    _yuvMaterial = copyTexture.material;
                }

                //UpdateYUVTextureChannel(ref rawVideoTexturesYUV[0], frame.ImagePlanes[0], samplerNamesYUV[0], yChannelBuffer);
                //UpdateYUVTextureChannel(ref rawVideoTexturesYUV[1], frame.ImagePlanes[1], samplerNamesYUV[1], uChannelBuffer);
                //UpdateYUVTextureChannel(ref rawVideoTexturesYUV[2], frame.ImagePlanes[2], samplerNamesYUV[2], vChannelBuffer);

                UpdateYUVTextureChannel(ref rawVideoTexturesYUV[0], frame.ImagePlanes[0], yuvRenderer, samplerNamesYUV[0], yChannelBuffer);
                UpdateYUVTextureChannel(ref rawVideoTexturesYUV[1], frame.ImagePlanes[1], yuvRenderer, samplerNamesYUV[1], uChannelBuffer);
                UpdateYUVTextureChannel(ref rawVideoTexturesYUV[2], frame.ImagePlanes[2], yuvRenderer, samplerNamesYUV[2], vChannelBuffer);

                // Blit the resulting Material into a single render texture
                _commandBuffer.Blit(null, _renderTexture, _yuvMaterial);
                Graphics.ExecuteCommandBuffer(_commandBuffer);
                _commandBuffer.Clear();
            }
        }

        private void UpdateYUVTextureChannel(ref Texture2D channelTexture, MLWebRTC.VideoSink.Frame.PlaneInfo planeInfo,
                                             Renderer renderer, string samplerName, byte[] newTextureChannel)
        {
            byte[] planeData = new byte[planeInfo.Stride * planeInfo.Height * planeInfo.BytesPerPixel];
            Marshal.Copy(planeInfo.DataPtr, planeData, 0, planeData.Length);
            if (planeData == null)
            {
                return;
            }

            if (channelTexture != null && (channelTexture.width != planeInfo.Width || channelTexture.height != planeInfo.Height))
            {
                Destroy(channelTexture);
                channelTexture = null;
            }
            if (channelTexture == null)
            {
                channelTexture = new Texture2D((int)planeInfo.Width, (int)(planeInfo.Height), TextureFormat.Alpha8, false)
                {
                    filterMode = FilterMode.Bilinear
                };

                renderer.material.SetTexture(samplerName, channelTexture);

                // Set copyTexture.texture to the original texture after processing
                //copyTexture.texture = channelTexture;
            }

            int pixelStride = (int)(planeInfo.Stride / planeInfo.Width);

            if (pixelStride == 1)
            {
                channelTexture.LoadRawTextureData(planeInfo.DataPtr, (int)planeInfo.Size);
                channelTexture.Apply();
            }
            else
            {
                if (newTextureChannel == null || newTextureChannel.Length != (planeInfo.Width * planeInfo.Height))
                {
                    newTextureChannel = new byte[planeInfo.Width * planeInfo.Height];
                }

                for (int y = 0; y < planeInfo.Height; y++)
                {
                    for (int x = 0; x < planeInfo.Width; x++)
                    {
                        newTextureChannel[y * planeInfo.Width + x] = planeData[y * planeInfo.Stride + x * pixelStride];
                    }
                }
                channelTexture.LoadRawTextureData(newTextureChannel);
                channelTexture.Apply();
            }
        }

I have made the copyTexture field public, as you mentioned above.

If I use this function, my original local video renderer stops working:


        private void UpdateYUVTextureChannel(ref Texture2D channelTexture, MLWebRTC.VideoSink.Frame.PlaneInfo planeInfo, string samplerName, byte[] newTextureChannel)
        {
            byte[] planeData = new byte[planeInfo.Stride * planeInfo.Height * planeInfo.BytesPerPixel];
            Marshal.Copy(planeInfo.DataPtr, planeData, 0, planeData.Length);
            if (planeData == null)
            {
                return;
            }

            if (channelTexture != null && (channelTexture.width != planeInfo.Width || channelTexture.height != planeInfo.Height))
            {
                Destroy(channelTexture);
                channelTexture = null;
            }
            if (channelTexture == null)
            {
                channelTexture = new Texture2D((int)planeInfo.Width, (int)(planeInfo.Height), TextureFormat.Alpha8, false)
                {
                    filterMode = FilterMode.Bilinear
                };

                _yuvMaterial.SetTexture(samplerName, channelTexture);
            }

            int pixelStride = (int)(planeInfo.Stride / planeInfo.Width);

            if (pixelStride == 1)
            {
                channelTexture.LoadRawTextureData(planeInfo.DataPtr, (int)planeInfo.Size);
                channelTexture.Apply();
            }
            else
            {
                if (newTextureChannel == null || newTextureChannel.Length != (planeInfo.Width * planeInfo.Height))
                {
                    newTextureChannel = new byte[planeInfo.Width * planeInfo.Height];
                }

                for (int y = 0; y < planeInfo.Height; y++)
                {
                    for (int x = 0; x < planeInfo.Width; x++)
                    {
                        newTextureChannel[y * planeInfo.Width + x] = planeData[y * planeInfo.Stride + x * pixelStride];
                    }
                }
                channelTexture.LoadRawTextureData(newTextureChannel);
                channelTexture.Apply();
            }
        }

Did you assign the shader property on the component?

@kbabilinski you can check this out.

The Raw Image does not need a material; the script above combines the YUV channels and renders them as a single RGB texture on the Raw Image.

@usman.bashir If you want to render the image on a Raw Image and use the material, you do not have to convert it into an RGB texture. Instead, do the following:

1. Assign the same material to the Raw Image that is used on the local image renderer.
2. In your script, set RawImage.texture to the Y channel texture.

RawImage.texture = rawVideoTexturesYUV[0];
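
Putting both steps together, a minimal sketch (using the copyTexture, yuvRenderer, and rawVideoTexturesYUV field names from earlier in this thread) would look like this:

// 1. Reuse the YUV material from the local quad renderer on the RawImage, so the same
//    shader that combines the three channel textures also runs on the canvas.
copyTexture.material = yuvRenderer.material;

// 2. Drive the RawImage with the Y channel texture; the U and V channel textures are
//    already assigned on the shared material by UpdateYUVTextureChannel().
copyTexture.texture = rawVideoTexturesYUV[0];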