Small tracking tilt, problematic over large distances

We have customers using ML2 in large spaces, and they find that after walking some distance, the height of the hologram is off. It seems to be a subtle tilt in the device's understanding of the vertical axis that isn't apparent until the user walks. There must be a way to recalibrate to fix this.

We've observed this in at least half a dozen different headsets over the course of the last two years, on multiple versions of the operating system. It's not on every device, or at least not uniformly present. We need a way to solve it without sending units in for repair; the problem is too frequent, and the customers facing it are widespread geographically.

Regarding the magnetometer: the Magic Leap has a threshold for the accuracy of the gravity orientation. The gravity vector will always be slightly skewed, and it's difficult to compensate for the resulting error. In some sessions the mesh may tilt a bit left or right; in others, forward or backward.
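
To put a rough number on why a small skew matters over distance (back-of-the-envelope arithmetic, not a spec for the device):

```python
import math

# Height error caused by a small tilt in the up vector, as a function of how
# far the user walks from the origin. Angles here are illustrative assumptions.
for tilt_deg in (0.1, 0.25, 0.5, 1.0):
    for distance_m in (5, 20, 50):
        height_error_m = distance_m * math.tan(math.radians(tilt_deg))
        print(f"tilt {tilt_deg:>4} deg, {distance_m:>2} m out -> "
              f"height off by {height_error_m:.2f} m")
```

Even half a degree of tilt puts a hologram roughly 0.17 m off in height at 20 m, which is why the error only becomes obvious once the user walks.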

I suggest using plane finding and adopting the floor plane’s normal as your up vector, which is often more accurate. Alternatively, place three markers on the floor and derive the floor plane’s orientation from their detected poses.
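
Something along these lines is what I mean, sketched in plain numpy rather than against the SDK (function and variable names here are illustrative, and the three marker positions are made up): estimate the floor's up direction, then build a small correction rotation that maps it onto world up before placing content.

```python
import numpy as np

def floor_normal_from_markers(p0, p1, p2):
    """Estimate the floor's up direction from three marker positions on the floor."""
    n = np.cross(p1 - p0, p2 - p0)
    n /= np.linalg.norm(n)
    return n if n[1] >= 0 else -n          # assuming +Y is 'up' in the session frame

def correction_rotation(measured_up, world_up=np.array([0.0, 1.0, 0.0])):
    """Rotation matrix that maps the measured floor normal onto world up."""
    v = np.cross(measured_up, world_up)
    s, c = np.linalg.norm(v), float(np.dot(measured_up, world_up))
    if s < 1e-8:                            # already aligned
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)   # Rodrigues' formula

# Example with a floor normal tilted by roughly half a degree (illustrative values).
measured_up = floor_normal_from_markers(
    np.array([0.0, 0.0, 0.0]),
    np.array([1.0, 0.009, 0.0]),
    np.array([0.0, 0.0, 1.0]))
R = correction_rotation(measured_up)
# Apply R to your content's root transform (or its inverse to incoming poses)
# so holograms are levelled against the observed floor rather than raw gravity.
```

The same correction works whether the normal comes from plane finding or from the three-marker approach; the only difference is how you obtain `measured_up`.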

OK. So my understanding is that the gravity vector can't get super accurate and we need to account for that in software. I can do that. I have also seen the tilt suddenly become more severe. What triggers a recalibration? And when that happens, how do we fix it in the field?

The sudden shift or tilt might be a result of the headset attempting to regain head-pose. Does the sudden tilt usually resolve itself after a few seconds?

The headset tries to regain the head pose if the world cameras cannot capture enough stable feature points or if the view is unstructured.

I flew to the site and got more insight into the conditions there. I also brought an experimental feature to override the tilt, but hit a major problem while testing it: extreme drift. Historically, we've been able to treat the ML2 as a near drift-free device, but in these very large rooms with a plain, if messy, black floor (it has some shine to it), we were experiencing as much drift as we usually see on iPads. Walking just a 20 meter circle would produce as much as 0.2 m of offset from the expected location. If we returned to the starting location, it would snap back, but that introduces warping and seams in the map, we get clashes between the alignment points, and in general the customer can't trust the alignment as much as they need to.
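
For reference, this is roughly how we quantified the drift: logging the headset-reported position of the same physical alignment point each time we passed it on the loop. The numbers below are illustrative, not our actual logs.

```python
import numpy as np

# Headset-reported positions of the SAME physical alignment point, sampled once
# per lap of the ~20 m loop (illustrative values only).
observed = np.array([
    [0.00, 0.00, 0.00],    # lap 0: reference
    [0.12, 0.05, -0.08],   # lap 1
    [0.18, 0.09, -0.11],   # lap 2
])
reference = observed[0]

drift = np.linalg.norm(observed - reference, axis=1)
height_drift = observed[:, 1] - reference[1]        # assuming +Y is up
for lap, (d, h) in enumerate(zip(drift, height_drift)):
    print(f"lap {lap}: total offset {d:.2f} m, height component {h:+.2f} m")
```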

I would love more insight into the particulars of the SLAM that is failing so severely in this environment. Is it using the lidar as well as the cameras? Could that be why we're getting so much height drift with black floors? Is it using PnP (or something similar) that fails with planar features?

I ask because the floor, while black and featureless in shape, was actually quite messy, with perfectly unique scuffs, tape marks, and other visual details that should contribute well. Is there anything we can do in terms of settings or modes that could help improve this tracking? Significant or time-consuming changes to the environment are a non-starter; we sort of get what we get there.