Hello Community,
I'm facing an issue while trying to copy renderer textures from video frames onto a Canvas. Specifically, I need to take both local and remote video frames rendered via WebRTC and copy them to textures on a Canvas in Unity.
Unfortunately, I've been having trouble with this because the texture data is not directly accessible or copyable from the MLWebRTC.VideoSink.Frame object.
Here are the details of my environment:
Unity Editor version: 2022.2.0f1
ML2 OS version: Version 1.3.0-dev1, Build B3E.230427.10-R.023, Android API Level 29
MLSDK version: 1.7.0
Host OS: Windows 11
I've tried to copy the texture data using Texture2D.GetPixels(), create a new texture, and then use Texture2D.SetPixels() on the new texture. However, it seems that MLWebRTC.VideoSink.Frame does not allow me to access the texture data in this way.
Here is the problematic part of my code:
private void UpdateYUVTextureChannel(ref Texture2D channelTexture, MLWebRTC.VideoSink.Frame.PlaneInfo planeInfo,
    Renderer renderer, string samplerName, byte[] newTextureChannel)
{
    // Bail out if the native plane buffer is not available.
    if (planeInfo.DataPtr == IntPtr.Zero)
    {
        return;
    }

    // Copy the native plane buffer into managed memory.
    byte[] planeData = new byte[planeInfo.Stride * planeInfo.Height * planeInfo.BytesPerPixel];
    Marshal.Copy(planeInfo.DataPtr, planeData, 0, planeData.Length);

    // Recreate the channel texture if the plane dimensions changed.
    if (channelTexture != null && (channelTexture.width != planeInfo.Width || channelTexture.height != planeInfo.Height))
    {
        Destroy(channelTexture);
        channelTexture = null;
    }
    if (channelTexture == null)
    {
        channelTexture = new Texture2D((int)planeInfo.Width, (int)planeInfo.Height, TextureFormat.Alpha8, false)
        {
            filterMode = FilterMode.Bilinear
        };
        renderer.material.SetTexture(samplerName, channelTexture);
    }

    // Snapshot the texture's current (pre-update) contents.
    // Note: this allocates a new Texture2D on every call.
    Texture2D originalTexture = new Texture2D((int)planeInfo.Width, (int)planeInfo.Height, TextureFormat.Alpha8, false);
    originalTexture.filterMode = FilterMode.Bilinear;
    originalTexture.SetPixels32(channelTexture.GetPixels32());
    originalTexture.Apply();

    int pixelStride = (int)(planeInfo.Stride / planeInfo.Width);
    if (pixelStride == 1)
    {
        // Tightly packed plane: upload the native buffer directly.
        channelTexture.LoadRawTextureData(planeInfo.DataPtr, (int)planeInfo.Size);
        channelTexture.Apply();
    }
    else
    {
        // Interleaved plane: extract one byte per pixel before uploading.
        if (newTextureChannel == null || newTextureChannel.Length != (planeInfo.Width * planeInfo.Height))
        {
            newTextureChannel = new byte[planeInfo.Width * planeInfo.Height];
        }
        for (int y = 0; y < planeInfo.Height; y++)
        {
            for (int x = 0; x < planeInfo.Width; x++)
            {
                newTextureChannel[y * planeInfo.Width + x] = planeData[y * planeInfo.Stride + x * pixelStride];
            }
        }
        channelTexture.LoadRawTextureData(newTextureChannel);
        channelTexture.Apply();
    }

    // Set copyTexture.texture to the original texture after processing.
    copyTexture.texture = originalTexture;
}
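For context, I call this function once per YUV plane whenever a new frame arrives. A rough sketch of how I invoke it follows; the field names (yTexture, videoRenderer, the shader sampler names, the channel buffers) and the frame-callback shape are from my own project, not part of the MLWebRTC API, so treat them as assumptions:

```csharp
// Hedged sketch: all names below (OnNewFrame, yTexture, videoRenderer,
// yChannelBuffer, sampler names) are placeholders from my project,
// not guaranteed MLWebRTC API members.
private void OnNewFrame(MLWebRTC.VideoSink.Frame frame)
{
    // One PlaneInfo per Y/U/V plane (assumed accessor).
    var planes = frame.ImagePlanes;
    UpdateYUVTextureChannel(ref yTexture, planes[0], videoRenderer, "_MainTex", yChannelBuffer);
    UpdateYUVTextureChannel(ref uTexture, planes[1], videoRenderer, "_UTex", uChannelBuffer);
    UpdateYUVTextureChannel(ref vTexture, planes[2], videoRenderer, "_VTex", vChannelBuffer);
}
```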
At the end of the function above, I copy originalTexture into copyTexture.texture, where copyTexture is a UnityEngine.UI.RawImage:

// Set copyTexture.texture to the original texture after processing
copyTexture.texture = originalTexture;
Does anyone know a way around this issue? Any help getting a readable/copyable texture from MLWebRTC.VideoSink.Frame, or any alternative solutions, would be greatly appreciated.
Thank you in advance for your help.