Black bands/artifacts on depth camera images

Hi, I'm occasionally getting weird black bands across the images coming from the depth camera. It's inconsistent: the bands only appear on some frames and cover different rows each time, but they are always concentrated in the very bottom portion of the image. Their frequency varies, but in a recent series of 300 captures of the Raw IR image (captured and saved on the ML2 using a native app, then copied over via device bridge), I counted about 65 frames with these black banding artifacts (more frequent than normal).

I've seen this occur in both the depth image and the raw IR image.

Examples: just looking at the wall/my monitor, you can see the bars appear at varying locations and thicknesses.
frame_0340

frame_0335

An interesting thing I noticed on these two is that the black bands sometimes seem to contain a darker version of the previous frame: some objects and edges, like the border of my mouse pad, my wrist rest, and the corners of some papers on my desk, are duplicated in the black bands relative to the adjacent portion of the image.
frame_0305

frame_0223

Any idea what might be causing this? Is this a software issue, firmware issue, or hardware/sensor issue? Is it possible I damaged the sensor in some way?

Thanks

Hi @alex.teepe,

Thanks for reaching out to us. Before we dig into this, I would like to ask for just a few more details.

  • Unity Editor version
  • ML2 OS Version
  • ML SDK Version
  • Development OS (e.g. Mac, Windows)

This will help us when trying to solve your issue.

Best,

El

Hi @etucker, thanks for your response. To answer your questions:

  • Unity Editor version:
    N/A - we are not using unity. We are doing Android Native development

  • ML2 OS Version:
    1.2.0

  • ML SDK Version:
    1.2.0

  • Development OS (e.g. Mac, Windows)
    Windows 10, Android Studio

I should also note that I can see these artifacts live on the headset when running the provided Depth Camera sample app under "\MagicLeap\mlsdk\v1.2.0\samples\c_api\samples\depth_camera"

I have reached out to our developers on this issue and will report back as we uncover more. Thank you for providing those details. They will be very helpful moving forward.

Hi @alex.teepe,

Would you happen to have another Magic Leap 2 device to run the same tests on? This may help us isolate the issue.

Thanks,

El

@etucker Unfortunately we have only purchased 1 headset so far, so we are unable to test it on a second device.

Hi @alex.teepe,

I just wanted to update you and let you know that we have created a ticket for this and are actively working on this. If we uncover any workarounds for this issue, we will let you know as soon as possible.

Thanks,

El

Hi @alex.teepe,

May I ask what you see on the headset when you open Settings > Battery > Power Grid Efficiency? Is it 50 Hz or 60 Hz?

Thanks,

El

It's set to 60 Hz, which matches my country (United States)

I did some more testing and found that this is specific to running both the depth and RGB cameras at the same time.

I had modified the depth camera sample app to also stream 4K video from the RGB camera at the same time as the depth camera. The intent was to save RGB and depth frames side by side and see whether a mapping could be found to correlate pixels in the RGB image with pixels in the depth image, or vice versa.
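For reference, the depth-to-RGB pixel mapping I was after is the standard pinhole back-project/re-project chain. Here is a minimal sketch; the `Intrinsics` struct and the translation-only extrinsic are simplifying assumptions of mine, and a real mapping would also need the rotation and lens distortion from the device calibration:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Simple pinhole intrinsics: focal lengths and principal point, in pixels.
struct Intrinsics { double fx, fy, cx, cy; };

// Unproject a depth pixel (u, v) with depth d (meters) to a 3D point in the
// depth camera frame, then project it into the RGB image. The rigid
// depth->RGB transform is collapsed to a pure translation t for brevity.
std::array<double, 2> DepthPixelToRgbPixel(const Intrinsics& depth_k,
                                           const Intrinsics& rgb_k,
                                           double u, double v, double d,
                                           const std::array<double, 3>& t) {
    // Back-project to a 3D point in the depth camera frame.
    const double x = (u - depth_k.cx) / depth_k.fx * d;
    const double y = (v - depth_k.cy) / depth_k.fy * d;
    const double z = d;
    // Apply the (assumed translation-only) extrinsic.
    const double xr = x + t[0], yr = y + t[1], zr = z + t[2];
    // Project into the RGB image plane.
    return {rgb_k.fx * xr / zr + rgb_k.cx, rgb_k.fy * yr / zr + rgb_k.cy};
}
```

As a sanity check, with identical intrinsics and a zero offset a pixel should map to itself.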

Anyway, this involved copying the RGB camera setup code from the camera_preview sample app into the depth camera sample. In the main RGB camera's "OnVideoAvailable()" callback, I copy the "output->planes[0].data" image into a class member variable each frame, and on the next depth frame I save both the depth image and the latest RGB image to the SD card (if a checkbox I added for recording frames is checked in the ImGui window).
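The per-frame copy described above looks roughly like this. It's a sketch with stand-in types: the real `MLCameraOutput`/plane definitions come from `ml_camera_v2.h`, and `latest_rgb_`/`rgb_mutex_` are hypothetical member names:

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

// Stand-in for the single RGBA plane of an MLCameraOutput; the real
// struct is defined in ml_camera_v2.h.
struct PlaneInfo {
    uint32_t stride;   // bytes per row (can exceed width * bytes_per_pixel)
    uint32_t size;     // total bytes in the plane
    const uint8_t* data;
};

// Hypothetical member state: the latest RGB frame, buffered so the next
// depth frame can be saved alongside it.
std::vector<uint8_t> latest_rgb_;
std::mutex rgb_mutex_;

// The work done inside OnVideoAvailable(): copy planes[0] out of the
// camera-owned buffer before the callback returns.
void CopyRgbFrame(const PlaneInfo& plane) {
    std::lock_guard<std::mutex> lock(rgb_mutex_);
    latest_rgb_.assign(plane.data, plane.data + plane.size);
}
```

Commenting out just this copy is the change that reduced the banding frequency in my tests.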

If I comment out copying the RGB frame in the RGB camera's OnVideoAvailable() callback, the frequency of the black bands decreases substantially. And if I don't initialize the RGB camera at all, they disappear completely.

So this seems to be an issue running RGB and depth cameras simultaneously.

Here's a minimal reproducible example. Run this in place of the Depth Camera sample app. I couldn't upload a .cpp file and the code is really long, so I'm hoping this collapsible formatting works.

If you run that as-is, you should see occasional black bars on the depth/IR images. If you comment out "SetupCamera()" and "StartCapture()" under "SetupRestrictedResources()", they should go away.

Main.cpp
#define ALOG_TAG "com.magicleap.capi.sample.depth_camera"

#include "confidence_material.h"
#include "depth_material.h"

#include <time.h>
#include <cstdlib>

#include <app_framework/application.h>
#include <app_framework/components/renderable_component.h>
#include <app_framework/geometry/quad_mesh.h>
#include <app_framework/gui.h>
#include <app_framework/material/textured_material.h>
#include <app_framework/toolset.h>

#include <ml_depth_camera.h>
#include <ml_head_tracking.h>
#include <ml_perception.h>
#include <ml_time.h>

#include <ml_camera_v2.h>

//#include <opencv2/opencv.hpp>

#include <condition_variable>

#include <app_framework/logging.h>
#include <app_framework/registry.h>
#include <ml_media_error.h>
#include <ml_cv_camera.h>

#include <glm/gtc/quaternion.hpp>
#include <glm/gtx/quaternion.hpp>
#include <glm/gtx/transform.hpp>


#define UNWRAP_RET_MEDIARESULT(res) UNWRAP_RET_MLRESULT_GENERIC(res, UNWRAP_MLMEDIA_RESULT);

#ifdef ML_LUMIN
#include <EGL/egl.h>
#define EGL_EGLEXT_PROTOTYPES
#include <EGL/eglext.h>
#endif

using namespace ml::app_framework;

namespace {
    const char* GetMLDepthCameraFrameTypeString(const MLDepthCameraFrameType& frame_type) {
        switch (frame_type) {
            case MLDepthCameraFrameType_Unknown: return "Unknown";
            case MLDepthCameraFrameType_LongRange: return "LongRange";
            default: return "Error";
        }
    }

    const char* GetMLDepthCameraModeString(const MLDepthCameraMode& mode) {
        switch (mode) {
            case MLDepthCameraMode_LongRange: return "LongRange";
            default: return "Error";
        }
    }

    const char* GetMLDepthCameraFlagsString(const MLDepthCameraFlags& flag) {
        switch (flag) {
            case MLDepthCameraFlags_DepthImage: return "DepthImage";
            case MLDepthCameraFlags_Confidence: return "Confidence";
            case MLDepthCameraFlags_AmbientRawDepthImage: return "AmbientRawDepthImage";
            case MLDepthCameraFlags_RawDepthImage: return "RawDepthImage";
            default: return "Error";
        }
    }
}

namespace EnumHelpers {
    const char *GetMLCameraErrorString(const MLCameraError &err) {
        switch (err) {
            case MLCameraError::MLCameraError_None: return "";
            case MLCameraError::MLCameraError_Invalid: return "Invalid/Unknown error";
            case MLCameraError::MLCameraError_Disabled: return "Camera disabled";
            case MLCameraError::MLCameraError_DeviceFailed: return "Camera device failed";
            case MLCameraError::MLCameraError_ServiceFailed: return "Camera service failed";
            case MLCameraError::MLCameraError_CaptureFailed: return "Capture failed";
            default: return "Invalid MLCameraError value!";
        }
    }

    const char *GetMLCameraDisconnectReasonString(const MLCameraDisconnectReason &reason) {
        switch (reason) {
            case MLCameraDisconnectReason::MLCameraDisconnect_DeviceLost: return "Device lost";
            case MLCameraDisconnectReason::MLCameraDisconnect_PriorityLost: return "Priority lost";
            default: return "Invalid MLCameraDisconnectReason value!";
        }
    }
}  // namespace EnumHelpers

using namespace std::chrono_literals;


class DepthCameraApp : public Application {
public:
    DepthCameraApp(struct android_app *state)
            : Application(state, std::vector<std::string>{"android.permission.CAMERA", "com.magicleap.permission.DEPTH_CAMERA"}, USE_GUI),
              cam_context_(ML_INVALID_HANDLE),
              last_dcam_frametype_(MLDepthCameraFrameType_Unknown),
              last_dcam_intrinsics_{},
              last_dcam_frame_number_(-1),
              last_dcam_pose_{},
              min_value_(0.f),
              max_value_(0.f),
              curr_idx_(3),
              is_preview_invalid_(false)
    {
        capture_width_ = 4096;
        capture_height_ = 3072;

        // Fill arrays with initial values.
        texture_id_.fill(0);
        texture_width_.fill(544);
        texture_height_.fill(480);

        // Clean timestamp char array.
        memset(last_dcam_timestamp_str_, 0, sizeof(last_dcam_timestamp_str_));

        // Initialize limit values for shaders. These values are BY NO MEANS suggested ranges for depth camera data.
        min_distance_[GetIndexFromCameraFlag(MLDepthCameraFlags_DepthImage)] = 0.0f;
        max_distance_[GetIndexFromCameraFlag(MLDepthCameraFlags_DepthImage)] = 5.0f;
        distance_limit_[GetIndexFromCameraFlag(MLDepthCameraFlags_DepthImage)] = 7.5f;
        legend_unit_[GetIndexFromCameraFlag(MLDepthCameraFlags_DepthImage)] = 'm';

        min_distance_[GetIndexFromCameraFlag(MLDepthCameraFlags_Confidence)] = 0.0f;
        max_distance_[GetIndexFromCameraFlag(MLDepthCameraFlags_Confidence)] = 100.0f;
        distance_limit_[GetIndexFromCameraFlag(MLDepthCameraFlags_Confidence)] = 100.0f;
        legend_unit_[GetIndexFromCameraFlag(MLDepthCameraFlags_Confidence)] = '%';

        min_distance_[GetIndexFromCameraFlag(MLDepthCameraFlags_AmbientRawDepthImage)] = 5.0f;
        max_distance_[GetIndexFromCameraFlag(MLDepthCameraFlags_AmbientRawDepthImage)] = 2000.0f;
        distance_limit_[GetIndexFromCameraFlag(MLDepthCameraFlags_AmbientRawDepthImage)] = 2000.0f;
        legend_unit_[GetIndexFromCameraFlag(MLDepthCameraFlags_AmbientRawDepthImage)] = ' ';

        min_distance_[GetIndexFromCameraFlag(MLDepthCameraFlags_RawDepthImage)] = 5.0f;
        max_distance_[GetIndexFromCameraFlag(MLDepthCameraFlags_RawDepthImage)] = 3000.0f;
        distance_limit_[GetIndexFromCameraFlag(MLDepthCameraFlags_RawDepthImage)] = 3000.0f;
        legend_unit_[GetIndexFromCameraFlag(MLDepthCameraFlags_RawDepthImage)] = ' ';
    }

    void OnStart() override {
        // Load legend bar textures.
        color_map_tex_ = Registry::GetInstance()->GetResourcePool()->LoadTexture("depth_gradient.png", GL_RGBA);
        conf_map_tex_ = Registry::GetInstance()->GetResourcePool()->LoadTexture("confidence_gradient.png", GL_RGBA);

        // Initialize head tracker to move preview with head.
        MLHandle head_tracker;
        UNWRAP_MLRESULT(MLHeadTrackingCreate(&head_tracker));
        UNWRAP_MLRESULT(MLHeadTrackingGetStaticData(head_tracker, &head_static_data_));
        SetHeadHandle(head_tracker); ///< This will cause Application class to destroy HT handle on app exit.

        // Initialize depth camera related structures.
        MLDepthCameraSettingsInit(&camera_settings_);
        camera_settings_.flags = MLDepthCameraFlags_DepthImage | MLDepthCameraFlags_Confidence | MLDepthCameraFlags_AmbientRawDepthImage | MLDepthCameraFlags_RawDepthImage;
        camera_settings_.mode = MLDepthCameraMode_LongRange;
        MLDepthCameraDataInit(&depth_camera_data_);

        const auto head_pose_opt = GetHeadPoseOrigin();
        if (!head_pose_opt.has_value()) {
            ALOGE("No head pose available at application start! For best experience, start the application with the device on.");
        }
        const Pose head_pose = head_pose_opt.value_or(GetRoot()->GetWorldPose()).HorizontalRotationOnly();
        const Pose gui_offset(glm::vec3(.25f, 0.f, -2.f));  //> Make gui not obscure the preview too much
        GetGui().Place(head_pose + gui_offset);
    }

    void OnResume() override {
        if (ArePermissionsGranted()) {
            SetupRestrictedResources();
            GetGui().Show();
        }
    }

    void OnPause() override {
        if (MLHandleIsValid(cam_context_)) {
            UNWRAP_MLRESULT(MLDepthCameraDisconnect(cam_context_));
            cam_context_ = ML_INVALID_HANDLE;
        }
    }

    void OnUpdate(float) override {
        UpdatePreview();
        UpdateGui();
        AcquireNewFrames();
    }

    void OnStop() override {
        color_map_tex_.reset();
        conf_map_tex_.reset();
    }

private:
    void SetupRestrictedResources() {
        if (MLHandleIsValid(cam_context_)) {
            return;
        }

        ASSERT_MLRESULT(SetupCamera());
        ASSERT_MLRESULT(StartCapture());

        ASSERT_MLRESULT(MLDepthCameraConnect(&camera_settings_, &cam_context_));
        SetupPreview();
    }

    void UpdateGui() {
        auto &gui = GetGui();
        gui.BeginUpdate();
        bool is_running = true;
        if (gui.BeginDialog("Depth Camera", &is_running, ImGuiWindowFlags_NoMove | ImGuiWindowFlags_NoResize | ImGuiWindowFlags_AlwaysAutoResize | ImGuiWindowFlags_NoCollapse)) {
            ImGui::NewLine();
            ImGui::Text("Basic frame information");
            {
                ImGui::Text("\tFrame number: %ld", last_dcam_frame_number_);
                ImGui::Text("\tFrame type: %s", GetMLDepthCameraFrameTypeString(last_dcam_frametype_));

                ImGui::NewLine();
                ImGui::Text("\tMin: %.1f", min_value_);
                ImGui::Text("\tMax: %.1f", max_value_);
            }

            DrawIntrinsicDetails("Intrinsics:", last_dcam_intrinsics_);

            ImGui::NewLine();
            ImGui::Separator();
            ImGui::Text("Settings");
            {
                ImGui::Text("Data type:"); ImGui::SameLine();
                DrawDataTypeRadioButton(MLDepthCameraFlags_DepthImage); ImGui::SameLine();
                DrawDataTypeRadioButton(MLDepthCameraFlags_Confidence);
                ImGui::Indent(100.f); DrawDataTypeRadioButton(MLDepthCameraFlags_AmbientRawDepthImage);
                ImGui::Unindent(100.f); ImGui::SameLine();
                DrawDataTypeRadioButton(MLDepthCameraFlags_RawDepthImage);

                ImGui::NewLine();
                ImGui::Text("Camera mode:"); ImGui::SameLine();
                DrawModeRadioButton(MLDepthCameraMode_LongRange);

                const bool is_slider_disabled = GetCameraFlagFromIndex(curr_idx_) == MLDepthCameraFlags_Confidence;
                if (is_slider_disabled) { ImGui::BeginDisabled(); }
                {
                    ImGui::NewLine();
                    ImGui::Text("Shader valid values range:");
                    if (ImGui::SliderFloat("Minimum", &min_distance_[curr_idx_], 0.0, max_distance_[curr_idx_])) {
                        UpdateMinDepth();
                        SetLegendText(curr_idx_);
                    }
                    if (ImGui::SliderFloat("Maximum", &max_distance_[curr_idx_], min_distance_[curr_idx_], distance_limit_[curr_idx_])) {
                        UpdateMaxDepth();
                        SetLegendText(curr_idx_);
                    }
                }

                if (is_slider_disabled) { ImGui::EndDisabled(); }
            }
        }
        gui.EndDialog();
        gui.EndUpdate();

        if (!is_running) {
            FinishActivity();
        }
    }

    void UpdateMinDepth() {
        auto depth_mat = std::static_pointer_cast<DepthMaterial>(shader_materials_[curr_idx_]);
        auto conf_mat = std::static_pointer_cast<ConfidenceMaterial>(shader_materials_[curr_idx_]);
        if (depth_mat) {
            depth_mat->SetMinDepth(min_distance_[curr_idx_]);
        } else if (conf_mat) {
            conf_mat->SetMinDepth(min_distance_[curr_idx_]);
        }
    }

    void UpdateMaxDepth() {
        auto depth_mat = std::static_pointer_cast<DepthMaterial>(shader_materials_[curr_idx_]);
        auto conf_mat = std::static_pointer_cast<ConfidenceMaterial>(shader_materials_[curr_idx_]);
        if (depth_mat) {
            depth_mat->SetMaxDepth(max_distance_[curr_idx_]);
        } else if (conf_mat) {
            conf_mat->SetMaxDepth(max_distance_[curr_idx_]);
        }
    }

    void DrawModeRadioButton(MLDepthCameraMode mode) {
        if (ImGui::RadioButton(GetMLDepthCameraModeString(mode), camera_settings_.mode == mode) && camera_settings_.mode != mode) {
            camera_settings_.mode = mode;
            UNWRAP_MLRESULT(MLDepthCameraUpdateSettings(cam_context_, &camera_settings_));
        }
    }

    void DrawDataTypeRadioButton(MLDepthCameraFlags flag) {
        const auto flag_idx = GetIndexFromCameraFlag(flag);
        if (ImGui::RadioButton(GetMLDepthCameraFlagsString(flag), curr_idx_ == flag_idx) && curr_idx_ != flag_idx) {
            SetPreviewVisibility(curr_idx_, false);
            SetPreviewVisibility(flag_idx, true);
            curr_idx_ = flag_idx;
        }
    }

    void DrawIntrinsicDetails(const char* label, const MLDepthCameraIntrinsics& params) {
        if (ImGui::CollapsingHeader(label)) {
            ImGui::Text("Camera width: %d", params.width);
            ImGui::Text("Camera height: %d", params.height);
            ImGui::Text("Camera focal length: %.4f %.4f", params.focal_length.x,  params.focal_length.y);
            ImGui::Text("Camera principal point: %.4f %.4f", params.principal_point.x,  params.principal_point.y);
            ImGui::Text("Camera field of view: %.4f", params.fov);
        }
    }

    void UpdatePreview() {
        MLSnapshot *snapshot = nullptr;
        UNWRAP_MLRESULT(MLPerceptionGetSnapshot(&snapshot));
        MLTransform head_transform = {};
        UNWRAP_MLRESULT(MLSnapshotGetTransform(snapshot, &head_static_data_.coord_frame_head, &head_transform));
        UNWRAP_MLRESULT(MLPerceptionReleaseSnapshot(snapshot));
        grouped_node_->SetWorldPose(Pose{head_transform});
    }

    void DestroyPreview() {
        GetRoot()->RemoveChild(grouped_node_);
        grouped_node_.reset();
        for (auto& node : preview_nodes_) {
            node.reset();
        }
        glDeleteTextures(texture_id_.size(), texture_id_.data());
        texture_id_.fill(0);
    }

    void ResizePreview(int w, int h, uint8_t idx) {
        if (w != texture_width_[idx] || h != texture_height_[idx]) {
            texture_width_[idx] = w;
            texture_height_[idx] = h;
            is_preview_invalid_ = true;
        }
    }

    void SetupPreview() {
        if (texture_id_[0]) {
            DestroyPreview();
        }
        grouped_node_ = std::make_shared<Node>();

        // Generate textures to write data to.
        glGenTextures(texture_id_.size(), texture_id_.data());
        CameraFlagsArray<std::shared_ptr<Texture>> texs;
        for (uint8_t i = 0; i < texs.size(); ++i) {
            glBindTexture(GL_TEXTURE_2D, texture_id_[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, texture_width_[i], texture_height_[i], 0, GL_RED, GL_FLOAT, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            texs[i] = std::make_shared<Texture>(GL_TEXTURE_2D, texture_id_[i], texture_width_[i], texture_height_[i]);
        }

        // Create specific materials.
        const auto depth_idx = GetIndexFromCameraFlag(MLDepthCameraFlags_DepthImage);
        const auto conf_idx = GetIndexFromCameraFlag(MLDepthCameraFlags_Confidence);
        const auto ambient_idx = GetIndexFromCameraFlag(MLDepthCameraFlags_AmbientRawDepthImage);
        const auto raw_idx = GetIndexFromCameraFlag(MLDepthCameraFlags_RawDepthImage);
        shader_materials_[depth_idx] = std::make_shared<DepthMaterial>(texs[depth_idx], color_map_tex_, min_distance_[depth_idx], max_distance_[depth_idx]);
        shader_materials_[conf_idx] = std::make_shared<ConfidenceMaterial>(texs[depth_idx], texs[conf_idx], min_distance_[depth_idx], max_distance_[depth_idx]);
        shader_materials_[ambient_idx] = std::make_shared<DepthMaterial>(texs[ambient_idx], color_map_tex_, min_distance_[ambient_idx], max_distance_[ambient_idx]);
        shader_materials_[raw_idx] = std::make_shared<DepthMaterial>(texs[raw_idx], color_map_tex_, min_distance_[raw_idx], max_distance_[raw_idx]);

        // Create preview nodes.
        CreatePreviewNode(color_map_tex_, depth_idx);
        CreatePreviewNode(conf_map_tex_, conf_idx);
        CreatePreviewNode(color_map_tex_, ambient_idx);
        CreatePreviewNode(color_map_tex_, raw_idx);

        // Group nodes together and add them to the root to make them render.
        for (auto& node : preview_nodes_) {
            grouped_node_->AddChild(node);
        }
        GetRoot()->AddChild(grouped_node_);
        is_preview_invalid_ = false;
    }

    void CreatePreviewNode(std::shared_ptr<Texture> legend_texture, uint8_t idx) {
        // Create preview quad.
        auto quad = Registry::GetInstance()->GetResourcePool()->GetMesh<QuadMesh>();
        auto material = shader_materials_[idx];
        material->SetPolygonMode(GL_FILL);
        auto gui_renderable = std::make_shared<RenderableComponent>(quad, material);
        auto gui_node = std::make_shared<Node>();
        gui_node->SetLocalTranslation(glm::vec3{0.f, -0.05f, -2.5f});
        gui_node->SetLocalScale({1.f, -1.f * texture_height_[idx] / texture_width_[idx], 1.f});
        gui_node->AddComponent(gui_renderable);
        preview_nodes_[idx] = std::make_shared<Node>();
        preview_nodes_[idx]->AddChild(gui_node);

        // Create a colored bar legend over the preview.
        auto combo_node = std::make_shared<Node>();
        auto legend_quad = Registry::GetInstance()->GetResourcePool()->GetMesh<QuadMesh>();
        auto legend_mat = std::make_shared<TexturedMaterial>(legend_texture);
        legend_mat->SetPolygonMode(GL_FILL);
        auto legend_renderable = std::make_shared<RenderableComponent>(legend_quad, legend_mat);
        auto legend_node = std::make_shared<Node>();
        legend_node->SetLocalScale({1.f, 1.f / 51.2f, 1.f});
        legend_node->AddComponent(legend_renderable);

        // Create a text node under the legend bar.
        legend_text_nodes_[idx] = ml::app_framework::CreatePresetNode(ml::app_framework::NodeType::Text);
        SetLegendText(idx);
        legend_text_nodes_[idx]->SetLocalTranslation(glm::vec3{-0.505f, -0.02f, 0.f});
        constexpr auto text_scale = 0.02f / 8.f;
        legend_text_nodes_[idx]->SetLocalScale(glm::vec3{text_scale, -text_scale, 1.f});

        // Group legend-related nodes together.
        combo_node->AddChild(legend_node);
        combo_node->AddChild(legend_text_nodes_[idx]);
        combo_node->SetLocalTranslation(glm::vec3{0.f, 0.5f, -2.5f}); // Move everything away from the user, over the camera preview.

        // Set top preview node.
        preview_nodes_[idx]->AddChild(combo_node);
        SetPreviewVisibility(idx, curr_idx_ == idx);
    }

    void SetLegendText(uint8_t idx) {
        char legend_label[75];
        const float range_min = min_distance_[idx];
        const float range_max = max_distance_[idx];
        const char unit = legend_unit_[idx];
        snprintf(legend_label, 75, "%1.1f%c%32.1f%c%32.1f%c", range_min, unit, (range_max - range_min) / 2.f, unit, range_max, unit);
        legend_text_nodes_[idx]->GetComponent<ml::app_framework::TextComponent>()->SetText(legend_label);
    }

    void SetPreviewVisibility(uint8_t idx, bool is_visible) {
        for (auto& child : preview_nodes_[idx]->GetChildren()) {
            SetComponentVisibility(child, is_visible);
            for (auto& subchild : child->GetChildren()) {
                SetComponentVisibility(subchild, is_visible);
            }
        }
    }

    void SetComponentVisibility(std::shared_ptr<Node> node, bool is_visible) {
        auto comp = node->GetComponent<RenderableComponent>();
        if (comp) {
            comp->SetVisible(is_visible);
        }
    }

    void AcquireNewFrames() {
        MLDepthCameraData* data_ptr = &depth_camera_data_;
        const auto res = MLDepthCameraGetLatestDepthData(cam_context_, 0, &data_ptr);
        if (res == MLResult_Ok) {
            // Check and update basic frame data.
            UpdateFrameData(depth_camera_data_);
            CheckFrameNumber(depth_camera_data_);

            // Submit data to the shader.
            UpdateImage(depth_camera_data_, MLDepthCameraFlags_DepthImage);
            UpdateImage(depth_camera_data_, MLDepthCameraFlags_Confidence);
            UpdateImage(depth_camera_data_, MLDepthCameraFlags_AmbientRawDepthImage);
            UpdateImage(depth_camera_data_, MLDepthCameraFlags_RawDepthImage);

            UNWRAP_MLRESULT(MLDepthCameraReleaseDepthData(cam_context_, data_ptr));
        } else if (res != MLResult_Timeout) {
            UNWRAP_MLRESULT(res);
        }
    }

    void UpdateImage(MLDepthCameraData& data, MLDepthCameraFlags data_flag) {
        const auto idx = GetIndexFromCameraFlag(data_flag);
        auto data_map = GetCameraFrameBuffer(data, data_flag);

        // All depth data in this app is float type, just making sure that this is right and our vectors won't break.
        constexpr auto type_size = sizeof(float);
        if (data_map->bytes_per_unit != type_size) {
            ALOGE("Bytes per pixel equal to %d, instead of %ld! Data alignment mismatch!", data_map->bytes_per_unit, type_size);
            FinishActivity();
            return;
        }

        ResizePreview(data_map->stride / type_size, data_map->height, idx);
        SetNewFrame(idx, data_map->data);

        // Get min/max values for currently viewed feed
        if (idx == curr_idx_) {
            float* data_float_ptr = reinterpret_cast<float*>(data_map->data);
            const size_t data_size = data_map->stride / type_size * data_map->height;
            const auto [min, max] = std::minmax_element(data_float_ptr, data_float_ptr + data_size);
            min_value_ = *min;
            max_value_ = *max;
        }
    }

    void UpdateFrameData(MLDepthCameraData& data) {
        last_dcam_pose_ = data.camera_pose;
        last_dcam_frametype_ = data.frame_type;
        last_dcam_intrinsics_ = data.intrinsics;
        GetMLTimeString(data.frame_timestamp, last_dcam_timestamp_str_, last_dcam_timestamp_str_size_);
    }

    void GetMLTimeString(const MLTime time, char* cstr, size_t size) {
        timespec ts = {};
        UNWRAP_MLRESULT(MLTimeConvertMLTimeToSystemTime(time, &ts));
        const auto hours_div = std::lldiv(ts.tv_sec, 60 * 60);
        const auto mins_div = std::lldiv(hours_div.rem, 60);
        snprintf(cstr, size, "%lld:%02lld:%02lld", hours_div.quot, mins_div.quot, mins_div.rem);
    }

    void CheckFrameNumber(MLDepthCameraData& data) {
        const int64_t current_frame_number = data.frame_number;
        if (last_dcam_frame_number_ >= 0) {
            const int64_t diff = current_frame_number - last_dcam_frame_number_;
            if (diff > 1) {
                // One or more depth frames were dropped between captures.
                ALOGW("Dropped %ld depth frame(s) between captures.", diff - 1);
            }
        }
        last_dcam_frame_number_ = current_frame_number;
    }

    void SetNewFrame(uint8_t idx, void* data) {
        if (is_preview_invalid_) {
            SetupPreview();
        }
        glBindTexture(GL_TEXTURE_2D, texture_id_[idx]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, texture_width_[idx], texture_height_[idx], 0, GL_RED, GL_FLOAT, data);
        glBindTexture(GL_TEXTURE_2D, 0);
    }

    uint8_t GetIndexFromCameraFlag(MLDepthCameraFlags flag) {
        switch(flag) {
            case MLDepthCameraFlags_DepthImage: return 0;
            case MLDepthCameraFlags_Confidence: return 1;
            case MLDepthCameraFlags_AmbientRawDepthImage: return 2;
            case MLDepthCameraFlags_RawDepthImage: return 3;
            default:
                return 0;
        }
    }

    MLDepthCameraFlags GetCameraFlagFromIndex(uint8_t idx) {
        switch(idx) {
            case 0: return MLDepthCameraFlags_DepthImage;
            case 1: return MLDepthCameraFlags_Confidence;
            case 2: return MLDepthCameraFlags_AmbientRawDepthImage;
            case 3: return MLDepthCameraFlags_RawDepthImage;
            default:
                return MLDepthCameraFlags_DepthImage;
        }
    }

    MLDepthCameraFrameBuffer* GetCameraFrameBuffer(MLDepthCameraData& data, MLDepthCameraFlags flag) {
        switch(flag) {
            case MLDepthCameraFlags_DepthImage: return data.depth_image;
            case MLDepthCameraFlags_Confidence: return data.confidence;
            case MLDepthCameraFlags_AmbientRawDepthImage: return data.ambient_raw_depth_image;
            case MLDepthCameraFlags_RawDepthImage: return data.raw_depth_image;
            default:
                return data.depth_image;
        }
    }

    bool IsCameraInitialized() const {
        return MLHandleIsValid(camera_context_);
    }


    MLResult SetupCamera() {
        if (IsCameraInitialized()) {
            return MLResult_Ok;
        }
        MLCameraDeviceAvailabilityStatusCallbacks device_availability_status_callbacks = {};
        MLCameraDeviceAvailabilityStatusCallbacksInit(&device_availability_status_callbacks);
        device_availability_status_callbacks.on_device_available = [](
                const MLCameraDeviceAvailabilityInfo *avail_info) {
            CheckDeviceAvailability(avail_info, true);
        };
        device_availability_status_callbacks.on_device_unavailable = [](
                const MLCameraDeviceAvailabilityInfo *avail_info) {
            CheckDeviceAvailability(avail_info, false);
        };

        UNWRAP_RET_MEDIARESULT(MLCameraInit(&device_availability_status_callbacks, this));

        {  // wait for maximum 2 seconds until the main camera becomes available
            std::unique_lock<std::mutex> lock(camera_device_available_lock_);
            camera_device_available_condition_.wait_for(lock, 2000ms,
                                                        [&]() { return camera_device_available_; });
        }

        if (!camera_device_available_) {
            return MLResult_Timeout;
        }

        MLCameraConnectContext camera_connect_context = {};
        MLCameraConnectContextInit(&camera_connect_context);
        camera_connect_context.cam_id = MLCameraIdentifier_CV;
        camera_connect_context.flags = MLCameraConnectFlag_CamOnly;
        camera_connect_context.enable_video_stab = false;
        UNWRAP_RET_MEDIARESULT(MLCameraConnect(&camera_connect_context, &camera_context_));
        UNWRAP_RET_MEDIARESULT(SetCameraDeviceStatusCallbacks());
        UNWRAP_RET_MEDIARESULT(SetCameraCaptureCallbacks());
        return MLResult_Ok;
    }

    static void CheckDeviceAvailability(const MLCameraDeviceAvailabilityInfo *device_availability_info,
                                        bool is_available) {
        if (device_availability_info == nullptr) {
            return;
        }
        DepthCameraApp *this_app = static_cast<DepthCameraApp *>(device_availability_info->user_data);
        if (this_app) {
            if (device_availability_info->cam_id == MLCameraIdentifier_MAIN) {
                {
                    std::unique_lock<std::mutex> lock(this_app->camera_device_available_lock_);
                    this_app->camera_device_available_ = is_available;
                }
                this_app->camera_device_available_condition_.notify_one();
            }
        }
    }

    MLResult StartCapture() {
        if (has_capture_started_) {
            return MLResult_Ok;
        }
        MLHandle metadata_handle = ML_INVALID_HANDLE;
        MLCameraCaptureConfig config = {};
        MLCameraCaptureConfigInit(&config);
        config.stream_config[0].capture_type = MLCameraCaptureType_Video;
        config.stream_config[0].width = capture_width_; // 3840
        config.stream_config[0].height = capture_height_;   // 2160
        config.stream_config[0].output_format = MLCameraOutputFormat_RGBA_8888;
        config.stream_config[0].native_surface_handle = ML_INVALID_HANDLE;
        config.capture_frame_rate = MLCameraCaptureFrameRate_30FPS;
        config.num_streams = 1;
        UNWRAP_RET_MEDIARESULT(MLCameraPrepareCapture(camera_context_, &config, &metadata_handle));
        UNWRAP_RET_MEDIARESULT(MLCameraPreCaptureAEAWB(camera_context_));
        UNWRAP_RET_MEDIARESULT(MLCameraCaptureVideoStart(camera_context_));
        has_capture_started_ = true;

        return MLResult_Ok;
    }

    MLResult SetCameraCaptureCallbacks() {
        MLCameraCaptureCallbacks camera_capture_callbacks = {};
        MLCameraCaptureCallbacksInit(&camera_capture_callbacks);

        camera_capture_callbacks.on_capture_failed = [](const MLCameraResultExtras *, void *) {
        };

        camera_capture_callbacks.on_capture_aborted = [](void *) { };

        camera_capture_callbacks.on_video_buffer_available = OnVideoAvailable;
        UNWRAP_RET_MEDIARESULT(MLCameraSetCaptureCallbacks(camera_context_, &camera_capture_callbacks, this));
        return MLResult_Ok;
    }

    MLResult SetCameraDeviceStatusCallbacks() {
        MLCameraDeviceStatusCallbacks camera_device_status_callbacks = {};
        MLCameraDeviceStatusCallbacksInit(&camera_device_status_callbacks);

        camera_device_status_callbacks.on_device_error = [](MLCameraError err, void *) {
        };

        camera_device_status_callbacks.on_device_disconnected = [](MLCameraDisconnectReason reason, void *) {
        };
        UNWRAP_RET_MEDIARESULT(MLCameraSetDeviceStatusCallbacks(camera_context_, &camera_device_status_callbacks, this));
        return MLResult_Ok;
    }

    static void OnVideoAvailable(const MLCameraOutput *output, const MLHandle metadata_handle,
                                 const MLCameraResultExtras *extra, void *data) {
        DepthCameraApp *pThis = reinterpret_cast<DepthCameraApp *>(data);

        const auto &frame = output->planes[0];
        const size_t row_bytes = static_cast<size_t>(frame.width) * 4;

        std::lock_guard<std::mutex> lock(pThis->mLastFrameMutex);  // RAII: unlocks even on early return
        pThis->mLastMainCameraFrame.resize(row_bytes * frame.height);
        // Copy row by row in case the plane stride includes padding beyond width * 4.
        for (uint32_t y = 0; y < frame.height; ++y) {
            memcpy(pThis->mLastMainCameraFrame.data() + y * row_bytes,
                   frame.data + static_cast<size_t>(y) * frame.stride,
                   row_bytes);
        }
    }


    bool camera_device_available_ = false, has_capture_started_ = false;
    std::mutex camera_device_available_lock_;
    std::condition_variable camera_device_available_condition_;
    int32_t capture_width_, capture_height_;
    MLCameraContext camera_context_ {ML_INVALID_HANDLE};

    std::mutex mLastFrameMutex;
    std::vector<char> mLastMainCameraFrame;


    static const uint8_t CAMERA_FLAGS_NO = 4;
    template<typename T>
    using CameraFlagsArray = typename std::array<T, CAMERA_FLAGS_NO>;

    CameraFlagsArray<GLuint> texture_id_;
    CameraFlagsArray<int32_t> texture_width_, texture_height_;
    CameraFlagsArray<std::shared_ptr<Node>> preview_nodes_;
    CameraFlagsArray<std::shared_ptr<Material>> shader_materials_;
    CameraFlagsArray<std::shared_ptr<Node>> legend_text_nodes_;
    CameraFlagsArray<float> max_distance_, min_distance_, distance_limit_;
    CameraFlagsArray<char> legend_unit_;

    MLHandle cam_context_;
    MLHeadTrackingStaticData head_static_data_;
    MLDepthCameraSettings camera_settings_;
    MLDepthCameraData depth_camera_data_;

    std::shared_ptr<Texture> color_map_tex_, conf_map_tex_;
    std::shared_ptr<Node> grouped_node_;

    MLDepthCameraFrameType last_dcam_frametype_;
    MLDepthCameraIntrinsics last_dcam_intrinsics_;
    int64_t last_dcam_frame_number_;
    MLTransform last_dcam_pose_;
    char last_dcam_timestamp_str_[100];
    const size_t last_dcam_timestamp_str_size_ = sizeof(last_dcam_timestamp_str_);

    float min_value_, max_value_;
    int curr_idx_;
    bool is_preview_invalid_;

};

void android_main(struct android_app *state) {
    DepthCameraApp app(state);
    app.RunApp();
}

Do they go away entirely?

As far as I can tell, yes. After turning the RGB camera back off, I have not seen the black banding occur again so far.

Awesome! Thank you for this information. I'll go ahead and pass it over to the team.

Would you mind rebooting your device and running the dcam sample to see if you still see the issue?

@alex.teepe thank you for providing code, we will take a closer look.

I have one question though. Earlier you said you could reproduce the "bands" running our dcam sample app, which does not use the RGB camera at all. Did you run your app before running our dcam app? Would it be possible to check our dcam app after a reboot to see if the bands are still there?

I also noticed this in your code:

    // Initialize head tracker to move preview with head.
    MLHandle head_tracker;
    UNWRAP_MLRESULT(MLHeadTrackingCreate(&head_tracker));
    UNWRAP_MLRESULT(MLHeadTrackingGetStaticData(head_tracker, &head_static_data_));
I'm not sure if you use it for debugging purposes, but I wanted to let you know there is an MLGraphicsFlags_Headlock option in the ml_graphics API. You can check the head_locked sample to see how to use it.

Sorry, I was mistaken. The depth camera sample app provided by Magic Leap does not produce the black banding issue on my device. I wrote my test app by copying and modifying the sample code, and I had forgotten when I added the RGB camera changes, so I assumed the bands had been there from the start.

Thanks, that may have been copied over from my other RGB camera app where I was using the head pose to transform an image plane to keep it pinned like a HUD to a corner of the user's vision. I wasn't using that for this Depth camera app. I'll take a look at the head lock sample to see if it's what I'm looking for.

@alex.teepe thank you for providing details, including the source. We will check internally and get back to you.

Hi @alex.teepe,

Would you mind trying the latest version of the OS? Significant improvements have been made to the DCAM API and its underlying implementation. This may fix your issue.

Best,

El

Hi @etucker ,

I updated my sample app from the 1.2 SDK to the 1.3 SDK and re-ran my test. I noticed you really revamped the API for the depth camera. The good news is that this also fixed the black banding issue. I ran long range, short range, 5 Hz, 30 Hz, and 60 Hz, and haven't noticed any visual tearing or other issues. The latest SDK/OS improvements seem to have fixed them all.

Thanks!