I’ve been working on object detection with the depth and RGB cameras and I’m pretty close. I have a small problem with rotational alignment being off, and I tracked it back to the depth camera. I wasn’t thinking, and I realize now that the point cloud I reconstruct from the depth camera will show my objects as tilted if my head is tilted. Further, you can’t use the pose of the headset / camera to fix this, because the pose has no relation yet to the object. In other words, the depth camera gives me a point cloud, but it’s not a world-aligned point cloud (by design).

So my question is: has anyone solved this already? Is there a smart way to get a world-aligned point cloud with the ML before I go off on an adventure and try to solve it myself?
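To be concrete, here’s roughly what I’m doing now, as a minimal sketch (the intrinsics `fx, fy, cx, cy` and the `depth` array are stand-ins for whatever the SDK actually hands back, not real API names). The back-projection puts every point in the depth camera’s own frame, which is why a head tilt tilts the whole cloud; the second function is the naive world-alignment I’d reach for, which runs into the problem described above:

```python
import numpy as np

def depth_to_camera_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into 3D points in the
    depth camera's own frame -- NOT world-aligned."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # standard pinhole model
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

def camera_to_world(points, T_world_cam):
    """The transform I'd naively apply: T_world_cam is a hypothetical
    4x4 camera-to-world pose captured with the same depth frame."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T_world_cam.T)[:, :3]
```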
Thank you!