World-aligned point cloud?

I’ve been working on object detection with the depth and RGB cameras and I’m pretty close. I have a small problem with rotational alignment being off, and I tracked it back to the depth camera. I wasn’t thinking, and I realize now that the point cloud I reconstruct from the depth camera will show my objects as tilted if my head is tilted. Further, you can’t simply use the pose of the headset/camera to fix this, because it has no relation yet to the object. In other words, the depth camera gives me a point cloud, but it isn’t a world-aligned point cloud (by design). So my question is: has anyone solved this already? Is there a smart way to get a world-aligned point cloud with the ML before I go off on an adventure to try to solve it myself?
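
To make concrete what I mean by “world aligned”: if I could get the depth camera’s camera-to-world pose at the exact capture time of each frame, the fix would just be a rigid transform of every point. Here’s a minimal numpy sketch of that math (the pose here is a placeholder, not something I know how to query from the SDK yet):

```python
import numpy as np

def to_world(points_cam: np.ndarray, cam_to_world: np.ndarray) -> np.ndarray:
    """Rigidly transform an (N, 3) camera-frame point cloud into world space.

    cam_to_world is a 4x4 homogeneous pose (rotation + translation) of the
    depth camera at the frame's capture time -- the piece I don't yet know
    how to obtain from the SDK.
    """
    n = points_cam.shape[0]
    homo = np.hstack([points_cam, np.ones((n, 1))])  # (N, 4) homogeneous
    return (homo @ cam_to_world.T)[:, :3]            # back to (N, 3)

# Example: head tilted 30 degrees about the camera's forward (z) axis.
# Applying that same tilt as cam_to_world "untilts" the cloud in world space.
theta = np.deg2rad(30.0)
cam_to_world = np.eye(4)
cam_to_world[:3, :3] = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
cloud_cam = np.random.rand(100, 3)  # stand-in for depth-camera points
cloud_world = to_world(cloud_cam, cam_to_world)
```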

Thank you!

Did you make the change to the SDK that allows you to query a depth point at a specific sensor time?

No? Sorry, I’m not sure what you mean here.

The change mentioned here: Hologram Drift Issue When Tracking Object with Retroreflective Markers using Depth Camera (Raw data) - #8 by kbabilinski

Ah, while that’s good information to know for my next step, it doesn’t address the point cloud from the depth sensor being misaligned, which I know is natural for the raw depth data. I was just hoping there was already an alignment solution out there, like aligning the depth sensor data to the world mesh, for example.
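
For reference, the kind of mesh alignment I have in mind is a generic ICP registration, roughly like this Open3D sketch (Open3D is just an example library, not part of the ML SDK, and the mesh point cloud here is a placeholder for whatever the meshing API returns):

```python
import numpy as np
import open3d as o3d

def align_to_mesh(depth_points: np.ndarray, mesh_points: np.ndarray) -> np.ndarray:
    """Estimate the rigid transform that best aligns the raw depth-camera
    cloud to a sampled world-mesh cloud via point-to-point ICP.

    Both inputs are (N, 3) arrays; the returned 4x4 matrix maps depth-camera
    coordinates into the mesh's (world) frame.
    """
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(depth_points)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(mesh_points)

    result = o3d.pipelines.registration.registration_icp(
        source, target,
        0.05,        # max correspondence distance in meters (a guess)
        np.eye(4),   # initial transform; a headset pose prior would help here
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```

ICP on its own can converge to the wrong local minimum without a decent initial guess, which is why being able to query the pose at the depth frame’s sensor time (per the linked thread) still seems useful as a starting transform.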