Unity: Rendering a YUV 4K Camera Stream

Hey everyone! I wanted to share a little project I put together to solve the problem of rendering a 4K camera feed on Magic Leap 2 without tanking performance or running into crashes. I initially tried pushing out full RGBA frames, but that overwhelmed the device’s bandwidth. YUV turned out to be much more efficient—especially using the NV12 format, which is how ML2’s camera feed is typically provided.

NOTE: In this example I used unsafe code to improve performance.

Note that the stream I am referring to is specifically the YUV_420_888 format provided by the ML Camera Capture, not the Camera Preview texture.

I am also going to update the developer documentation on our portal to reflect this information.

Quick primer on the YUV 420 (“YUV_420_888”) format Magic Leap’s camera returns

| Concept | What it means in practice |
| --- | --- |
| Y (luma) plane | Holds brightness for every pixel at full resolution. In the API this is planes[0]. Each byte is one pixel (PixelStride == 1). |
| U & V (chroma) planes | Contain colour-difference information, subsampled 2 × 2 (4:2:0). In NV12 the U and V bytes are interleaved: 1 byte U, 1 byte V, repeating. The API still exposes two plane records: planes[1] (the start of the interleaved UV buffer) and planes[2] (the same buffer, offset by one byte so it “looks” like a pure-V plane). Because they point into the same memory you only need planes[0] and planes[1]. |
| PixelStride vs Stride | PixelStride = bytes between horizontally adjacent samples inside a row (1 for Y, 2 for interleaved UV). Stride = bytes from the start of one row to the start of the next (may be ≥ width × PixelStride if the driver pads each row). Use both values when copying to avoid tearing. |

So, when you request YUV_420_888 from the ML camera:

  1. Read planes[0] for Y.
  2. Read planes[1] for the interleaved UV.

planes[2] provides the same bytes shifted by one (VU order) and can be ignored in our case.
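
To make the stride handling concrete, here is a minimal sketch of a stride-aware copy in plain C#. It assumes you have already grabbed the raw bytes and the reported Stride for a plane; the class and parameter names are mine for illustration, not part of the ML SDK:

```csharp
using System;

public static class Nv12Copy
{
    // Compacts one padded camera plane into a tightly packed buffer.
    // srcStride = the driver-reported Stride (bytes per row, padding included).
    // rowBytes  = the bytes you actually want per row (width * PixelStride).
    public static void CopyPlane(byte[] src, int srcStride, byte[] dst, int rowBytes, int rows)
    {
        for (int y = 0; y < rows; y++)
        {
            // Advance by srcStride on the source side so any per-row padding is skipped.
            Buffer.BlockCopy(src, y * srcStride, dst, y * rowBytes, rowBytes);
        }
    }
}
```

For a width × height frame you would call this once for the Y plane (rowBytes = width, rows = height) and once for the interleaved UV plane (rowBytes = width, rows = height / 2, since each chroma row holds width/2 U bytes interleaved with width/2 V bytes). If Stride already equals rowBytes there is no padding and you can upload the buffer as-is.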

Below you’ll find:

  1. An AsyncCameraCapture script that starts the camera, subscribes to the ML Camera events, and passes frames to our visualizer.
  2. A YUVCameraVisualizer script (the main worker) that takes the Y plane and the interleaved UV plane and converts them into textures on the GPU, supporting both the main-thread callback and a faster native callback path.
  3. A YUV RG16 Shader that samples the luma (Y) plane and interleaved chroma (UV) plane to produce an RGB output.
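
For context, the colour conversion the shader performs per pixel is straightforward. Here is the same math written as C# for readability; I am using the common full-range BT.601 coefficients, so double-check them against the constants in the attached shader:

```csharp
using UnityEngine;

public static class YuvToRgb
{
    // Converts one sample: y from the luma plane, u/v from the matching
    // position in the interleaved chroma plane (each UV pair covers a
    // 2x2 block of Y pixels). Full-range BT.601 coefficients; swap in
    // BT.709 constants if your pipeline expects them.
    public static Color32 Convert(byte y, byte u, byte v)
    {
        float uf = u - 128f;
        float vf = v - 128f;

        float r = y + 1.402f * vf;
        float g = y - 0.344f * uf - 0.714f * vf;
        float b = y + 1.772f * uf;

        return new Color32(
            (byte)Mathf.Clamp(r, 0f, 255f),
            (byte)Mathf.Clamp(g, 0f, 255f),
            (byte)Mathf.Clamp(b, 0f, 255f),
            255);
    }
}
```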

By relying on the YUV format, we avoid pushing full RGBA frames around, which is a huge performance win—especially at 4K. I also learned that while the ML camera feed might conceptually have three planes (Y, U, V), in NV12 the second and third planes overlap in memory, so you only actually need to read the Y plane and the single interleaved UV plane.
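
If you want to confirm that overlap on device, comparing the two chroma pointers is enough; in NV12 the “V plane” should start exactly one byte into the UV buffer. A tiny helper along these lines works (how you obtain the two pointers depends on your SDK version, so treat the call site as an assumption):

```csharp
using System;
using UnityEngine;

public static class Nv12Check
{
    // uvPtr / vuPtr: the raw data pointers reported for planes[1] and planes[2].
    public static void LogChromaOverlap(IntPtr uvPtr, IntPtr vuPtr)
    {
        long offset = vuPtr.ToInt64() - uvPtr.ToInt64();
        // An offset of exactly 1 byte confirms the interleaved NV12 layout.
        Debug.Log($"Chroma plane offset: {offset} byte(s); NV12 overlap: {offset == 1}");
    }
}
```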

Below are the scripts you can simply drag and drop into your ML2 Unity project. Then follow these steps:

  1. Import the attached scripts (place them in your Assets folder).
  2. Attach the AsyncCameraCapture script to any GameObject in your scene (e.g., an empty GameObject).
  3. Create a Quad (or a plane) to display the camera feed. Position it in front of your camera.
  4. Attach the YUVCameraVisualizer script to that Quad (the same GameObject with the MeshRenderer).
  5. Assign the YUV RG16 Shader to the Visualizer’s Shader field (or set it in the inspector).
  6. Assign the YUVCameraVisualizer reference inside the AsyncCameraCapture component (the “Visualizer” field).
  7. Run your scene on device: you should see the camera feed rendered on your Quad, at high resolution!
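
For reference, the upload path inside the visualizer boils down to two textures: a single-channel R8 texture for luma and a half-resolution RG16 texture for the interleaved chroma (hence the shader name). A minimal sketch of that idea is below; the _YTex/_UVTex property names are placeholders, so match them to whatever the attached shader actually declares:

```csharp
using UnityEngine;

public class YuvTextureUploader
{
    private Texture2D _yTex;
    private Texture2D _uvTex;

    // yData / uvData must be tightly packed (see the copy sketch above).
    public void Upload(Material mat, byte[] yData, byte[] uvData, int width, int height)
    {
        if (_yTex == null)
        {
            // One byte per pixel for luma; two interleaved bytes per 2x2
            // block for chroma, i.e. half resolution on both axes.
            _yTex = new Texture2D(width, height, TextureFormat.R8, false);
            _uvTex = new Texture2D(width / 2, height / 2, TextureFormat.RG16, false);
            mat.SetTexture("_YTex", _yTex);   // placeholder property name
            mat.SetTexture("_UVTex", _uvTex); // placeholder property name
        }

        _yTex.LoadRawTextureData(yData);
        _yTex.Apply(false);
        _uvTex.LoadRawTextureData(uvData);
        _uvTex.Apply(false);
    }
}
```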

AsyncCameraCapture.cs (11.7 KB)
YUVVisualizerNative.cs (10.3 KB)
YUV_RG16_Shader.shader (2.9 KB)


Here is a managed version of the visualizer that does not require enabling the unsafe flag in the project settings. Note that the two visualizer scripts are named the same, so only one can be in your project at a time unless you rename the script and its reference in the AsyncCameraCapture script.

YUVCameraVisualizer.cs (6.4 KB)