Use RenderTexture as input to MLMediaRecorder or WebRTC stream

I am interested in recording my own mixed reality composite. My use case is combining the CV Camera and Eye Tracking APIs to create a composite 2D texture of the RGB camera image and the fixation point, which is then saved or streamed out for research.

I have tried using RenderHeads AVPro Movie Capture, but it fails on H.264. H.265 works, but the video image is garbled, as if there is a problem with color space conversion.

I am wondering if there is some way to feed my own frames into MLMediaRecorder for on-device recording, or into the WebRTC example for local network streaming.

thanks,
-Thomas

From my understanding, RenderHeads does not currently support Vulkan. We recommend reaching out to them to see if/when this feature will be available. Until then, we do not support encoding custom media. You could, however, encode the frames yourself using a native library such as FFmpeg, or create an encoder using the Android NDK.
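
For anyone exploring the FFmpeg route, here is a minimal, unofficial sketch of encoding RGBA frames (as you would read them back from a RenderTexture) to H.264 with libavcodec and libswscale. The `Encoder` struct and the function names are illustrative only, not part of any Magic Leap or Unity API, and most error handling is omitted. Note the explicit RGBA-to-YUV420P conversion: getting that step wrong is a classic cause of exactly the kind of garbled colors described above.

```cpp
// Minimal sketch: encode tightly packed RGBA frames to a raw H.264 stream.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}
#include <cstdint>
#include <cstdio>

struct Encoder {                 // illustrative helper, not a real API
    AVCodecContext *ctx = nullptr;
    SwsContext     *sws = nullptr;
    AVFrame        *yuv = nullptr;
    AVPacket       *pkt = nullptr;
    int64_t         pts = 0;
};

// Open an H.264 encoder for w x h input at the given frame rate.
bool encoder_open(Encoder &e, int w, int h, int fps) {
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!codec) return false;
    e.ctx = avcodec_alloc_context3(codec);
    e.ctx->width     = w;
    e.ctx->height    = h;
    e.ctx->time_base = {1, fps};
    e.ctx->framerate = {fps, 1};
    e.ctx->pix_fmt   = AV_PIX_FMT_YUV420P; // most H.264 encoders expect 4:2:0
    e.ctx->bit_rate  = 8000000;
    if (avcodec_open2(e.ctx, codec, nullptr) < 0) return false;

    // Explicit RGBA -> YUV420P conversion; skipping or mismatching this
    // conversion typically produces garbled colors in the encoded video.
    e.sws = sws_getContext(w, h, AV_PIX_FMT_RGBA,
                           w, h, AV_PIX_FMT_YUV420P,
                           SWS_BILINEAR, nullptr, nullptr, nullptr);
    e.yuv = av_frame_alloc();
    e.yuv->format = AV_PIX_FMT_YUV420P;
    e.yuv->width  = w;
    e.yuv->height = h;
    av_frame_get_buffer(e.yuv, 0);
    e.pkt = av_packet_alloc();
    return true;
}

// Convert one RGBA frame and append the resulting packets to `out`.
void encode_rgba_frame(Encoder &e, const uint8_t *rgba, FILE *out) {
    const uint8_t *src[1] = { rgba };
    const int   stride[1] = { 4 * e.ctx->width };
    av_frame_make_writable(e.yuv);
    sws_scale(e.sws, src, stride, 0, e.ctx->height,
              e.yuv->data, e.yuv->linesize);
    e.yuv->pts = e.pts++;
    avcodec_send_frame(e.ctx, e.yuv);
    while (avcodec_receive_packet(e.ctx, e.pkt) == 0) {
        fwrite(e.pkt->data, 1, e.pkt->size, out); // raw Annex B bitstream
        av_packet_unref(e.pkt);
    }
}
```

In practice you would hand the packets to a muxer (for an MP4 on device) or to a WebRTC video track instead of a flat file, and flush the encoder at the end by calling avcodec_send_frame with a null frame. If you go the NDK route instead, the equivalent building blocks are AMediaCodec and AMediaMuxer.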