Strange blue lines appearing in processed RGB camera feed


ML2 OS version: 1.4
MLSDK version: 1.6.0
Host OS: Windows 10


I am working on an app based on the camera preview sample that uses the depth camera to filter the RGB camera feed and display it. This is done by accessing the RGB camera data pointer, looping over it, and setting certain pixels to black (0, 0, 0, 255) if they should be filtered according to the depth reading at the corresponding location in the depth image.

We are seeing blue lines at regular, fixed locations in the image (the same locations in every frame), and only in the regions that have been filtered to black. We have analyzed the RGB frames after our processing and there do not appear to be any blue pixels in the data, but when we pass the frame to glTexImage2D to display it, the lines appear.
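
For reference, our check of the processed frames was essentially a scan of the buffer for any pixel that looks blue, i.e. red and green are zero but blue is not. Simplified (and assuming a tightly packed RGBA_8888 buffer), it is roughly:

#include <cstdint>

// Rough sketch of the verification scan, not the exact code: count pixels in
// the processed buffer whose red and green channels are zero but whose blue
// channel is not.
int CountBluePixels(const uint8_t *buf, int width, int height) {
    int count = 0;
    for (int i = 0; i < width * height; ++i) {
        const uint8_t r = buf[i * 4];
        const uint8_t g = buf[i * 4 + 1];
        const uint8_t b = buf[i * 4 + 2];
        if (r == 0 && g == 0 && b > 0) {
            ++count;
        }
    }
    return count; // this came back 0 for every frame we checked
}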

Let me know if any further information on the code implementation is needed to answer this question.

Cooper

Sounds like an intriguing visual issue. A few questions:

  • Could you tell us the format parameters (internal format, pixel format, pixel type) you're using when you call glTexImage2D?
  • How are you storing the processed RGB data after filtering? Is it in a simple byte array or something more complex?
  • If possible, please share the relevant sections of code where you create the OpenGL texture and the part where you do the depth-based filtering.

That information will help me track this down; it might be a format mismatch or something related to how your data is laid out in memory.
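
As an example of the kind of mismatch I mean: if the buffer you hand to OpenGL has padding at the end of each row (a row stride larger than width * 4 bytes), the driver will read into that padding unless the pixel-unpack state matches the layout. The state worth double-checking looks roughly like this (row_stride_in_pixels is a placeholder for your buffer's actual row length in pixels; GL_UNPACK_ROW_LENGTH requires OpenGL ES 3.0 or newer):

// Example only: pixel-unpack state that must match the layout of the buffer
// passed to glTexImage2D / glTexSubImage2D. Placeholder names, not from the sample.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                     // rows may start on any byte boundary
glPixelStorei(GL_UNPACK_ROW_LENGTH, row_stride_in_pixels); // only needed when rows are padded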

kbabilinksi,

  • Could you tell us the format parameters (internal format, pixel format, pixel type) you're using when you call glTexImage2D?

Here is the call to the glTexImage2D function in the code. We based our code on the camera preview sample and have not changed this line:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, capture_width_, capture_height_, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
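
Since that call passes nullptr, it only allocates the texture storage; the processed frame data is uploaded in a separate per-frame call. Simplified, and with illustrative variable names rather than the sample's exact ones, that upload looks roughly like this:

// Illustrative per-frame upload of the processed RGBA frame into the texture
// allocated above (placeholder names; the sample's actual call may differ).
glBindTexture(GL_TEXTURE_2D, texture_id_);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, capture_width_, capture_height_,
                GL_RGBA, GL_UNSIGNED_BYTE, processed_rgba_data);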

  • How are you storing the processed RGB data after filtering? Is it in a simple byte array or something more complex?

We are simply overwriting the values of some pixels in the original RGBA_8888 image to filter it. The data stays in the original row-major layout with 4 bytes per pixel.

  • If possible, please share the relevant sections of code where you create the OpenGL texture and the part where you do the depth-based filtering.

I have not created a separate OpenGL texture; I only modify the existing RGBA image in place. That is done like this:

uint8_t *ptr = (uint8_t *)data; // data is the RGBA frame buffer that already exists in the camera preview sample

pixel_to_change is the index of the current pixel as we iterate through the row-major image, and pixel_index is the corresponding byte offset into ptr:

int pixel_index = pixel_to_change * 4; // 4 bytes per pixel (RGBA)

Depth filtering is then done by setting a pixel to black when its depth reading is above the threshold:

if (depthVal > depth_threshold)
{
    ptr[pixel_index] = 0;     // red channel
    ptr[pixel_index + 1] = 0; // green channel
    ptr[pixel_index + 2] = 0; // blue channel
}
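
Putting the fragments together, the whole filtering pass is roughly the following (simplified, not our exact code; GetDepthAt is a placeholder for our depth lookup at the corresponding location in the depth image, which is not shown here):

#include <cstdint>

float GetDepthAt(int row, int col); // placeholder for our depth lookup, implemented elsewhere

// Simplified sketch of the filtering pass over a tightly packed RGBA_8888 frame.
void FilterFrame(void *data, int width, int height, float depth_threshold) {
    uint8_t *ptr = (uint8_t *)data;
    for (int row = 0; row < height; ++row) {
        for (int col = 0; col < width; ++col) {
            int pixel_to_change = row * width + col; // index of the current pixel
            int pixel_index = pixel_to_change * 4;   // byte offset (4 bytes per pixel)

            float depthVal = GetDepthAt(row, col);
            if (depthVal > depth_threshold) {
                ptr[pixel_index] = 0;     // red channel
                ptr[pixel_index + 1] = 0; // green channel
                ptr[pixel_index + 2] = 0; // blue channel
                // alpha (ptr[pixel_index + 3]) is left unchanged
            }
        }
    }
}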

Thanks for looking into this,
Cooper