Fallback to IMU-Only Tracking in Low-Light Conditions?

As is known, when ambient brightness falls below a certain threshold, the ML2 displays the “Tracking lost” warning and SLAM pauses.

I have seen a couple of posts mentioning the low-light problem as well, and I found one answer which says: “Regarding Tracking: The headset uses feature points to track your position in a space as well as IMU sensors. However, since the tracking is primary based on tracking visual feature points, it is unable to track the user using the IMU sensor alone.”

So my question is: even though tracking is primarily feature based, can ML2’s pipeline be forced into an IMU-only mode when feature-based tracking fails?
Even though this would not be as accurate as normal tracking, falling back to such a mode is essentially what we need.
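
To illustrate what an IMU-only fallback would mean in practice, here is a minimal dead-reckoning sketch (plain C++, not ML2 SDK code). Because position comes from double-integrating acceleration, even a small accelerometer bias makes the position error grow roughly quadratically with time, which is why a pure-IMU mode could only bridge short tracking gaps:

```cpp
#include <cstdio>

// Illustrative only: simple dead reckoning from IMU samples.
// A constant acceleration bias b produces a position error of
// roughly 0.5 * b * t^2, so drift accumulates quickly.
struct Vec3 { double x, y, z; };

struct DeadReckoner {
    Vec3 velocity{0, 0, 0};
    Vec3 position{0, 0, 0};

    // accelWorld: gravity-compensated acceleration in the world frame (m/s^2)
    // dt: time since the previous IMU sample (s)
    void integrate(const Vec3& accelWorld, double dt) {
        velocity.x += accelWorld.x * dt;
        velocity.y += accelWorld.y * dt;
        velocity.z += accelWorld.z * dt;
        position.x += velocity.x * dt;
        position.y += velocity.y * dt;
        position.z += velocity.z * dt;
    }
};

int main() {
    DeadReckoner dr;
    // A stationary headset with a small 0.05 m/s^2 accelerometer bias:
    const Vec3 bias{0.05, 0.0, 0.0};
    const double dt = 0.001;                 // 1 kHz IMU rate
    for (int i = 0; i < 10000; ++i) {        // 10 seconds of integration
        dr.integrate(bias, dt);
    }
    std::printf("Position drift after 10 s: %.2f m\n", dr.position.x); // ~2.5 m
    return 0;
}
```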

This feature is not currently on the roadmap. Since the device relies on the world cameras for much of its functionality (Controller Tracking, Hand Tracking, Head Tracking) it would be difficult to implement. However, I have submitted your post as feedback to keep this in mind.


Thank you for the reply. I have a follow-up question regarding handling tracking loss events mentioned in Handling Tracking Loss. It states that “The Magic Leap 2 lets developers manage their own tracking loss behavior – some developers may want to pause the update loop and display a splash image, while others may want the app to continue playing.” In this context I’d like to ask: can I still render virtual elements while the warning is displayed, and is there any way to overlay custom UI on top of the system warning?

Hi @Kaan, sorry, that appears to be a misprint. The custom behavior was meant to refer to what your application does after the head pose is lost. Unfortunately, you cannot continue to render content while the warning is displayed, and there is no way to disable this notification.
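
To make the “manage your own tracking loss behavior” wording concrete: the practical pattern is to watch the pose validity your tracking stack reports and decide what the app does once the pose is lost and when it returns. The sketch below uses a hypothetical `IsHeadPoseValid()` as a stand-in for whatever query your API exposes (for example, the head-tracking state in the ML SDK or pose validity flags in OpenXR); it is not verbatim ML2 API code:

```cpp
// Sketch of per-frame tracking-loss handling. IsHeadPoseValid() and the
// Show/Hide/Update helpers are placeholders for your app and tracking API.
#include <cstdio>

static bool g_simulatedPoseValid = true;   // stub state for this sketch
bool IsHeadPoseValid() { return g_simulatedPoseValid; }
void ShowSplashImage()  { std::puts("[app] show splash image"); }
void HideSplashImage()  { std::puts("[app] hide splash image"); }
void UpdateAndRenderScene(double dt) { std::printf("[app] render frame, dt=%.3f\n", dt); }

void OnFrame(double dt) {
    static bool wasValid = true;
    const bool valid = IsHeadPoseValid();

    if (valid) {
        if (!wasValid) HideSplashImage();   // pose just recovered
        UpdateAndRenderScene(dt);           // normal update loop
    } else {
        if (wasValid) {
            // Pose just dropped: the system warning takes over the display,
            // so pause the simulation and queue a splash for when it returns.
            ShowSplashImage();
        }
        // Keep non-visual state (audio, networking) alive here if desired.
    }
    wasValid = valid;
}

int main() {
    OnFrame(0.016);                  // tracking good
    g_simulatedPoseValid = false;    // simulate "Tracking lost"
    OnFrame(0.016);
    OnFrame(0.016);
    g_simulatedPoseValid = true;     // tracking recovered
    OnFrame(0.016);
    return 0;
}
```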


Thank you for the information.

Hi again, is there low-level access to the camera feed buffer?

I’m not sure I completely understand, but you can access the RGB camera using the Android Camera API. You can also access the camera image using the ML Camera API, though it does not provide this type of access out of the box.

Regarding the world cameras, the Pixel Sensor API does not provide this type of access to the camera data.
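
On the Android Camera path mentioned above: if “low-level access to the camera feed buffer” means reading the raw bytes of each frame, one common Android-level route is to deliver frames into an `AImageReader` and map its planes directly from native code. The sketch below shows only that buffer-access side; the camera-device and capture-session setup are omitted, and whether the ML2 RGB camera is reachable through this exact NDK path in your configuration is an assumption on my part:

```cpp
// Sketch: reading raw frame bytes from an AImageReader callback (Android NDK).
// Camera-device and capture-session setup are not shown; wiring the ML2 RGB
// camera to this reader through the Android camera stack is assumed here.
#include <cstdint>
#include <cstdio>
#include <media/NdkImage.h>
#include <media/NdkImageReader.h>

// Called on the reader's handler thread whenever a new frame is queued.
static void OnImageAvailable(void* /*context*/, AImageReader* reader) {
    AImage* image = nullptr;
    if (AImageReader_acquireLatestImage(reader, &image) != AMEDIA_OK || !image) {
        return;
    }

    int32_t width = 0, height = 0, rowStride = 0;
    AImage_getWidth(image, &width);
    AImage_getHeight(image, &height);
    AImage_getPlaneRowStride(image, 0, &rowStride);

    uint8_t* data = nullptr;
    int dataLength = 0;
    // Plane 0 is the Y (luma) plane for YUV_420_888 images.
    if (AImage_getPlaneData(image, 0, &data, &dataLength) == AMEDIA_OK) {
        std::printf("frame %dx%d, Y plane %d bytes, row stride %d\n",
                    width, height, dataLength, rowStride);
        // Copy or process `data` here; it is only valid until AImage_delete.
    }

    AImage_delete(image);   // release the buffer back to the reader
}

// Creates a reader that receives YUV frames and installs the callback above.
AImageReader* CreatePreviewReader(int32_t width, int32_t height) {
    AImageReader* reader = nullptr;
    if (AImageReader_new(width, height, AIMAGE_FORMAT_YUV_420_888,
                         /*maxImages=*/4, &reader) != AMEDIA_OK) {
        return nullptr;
    }
    static AImageReader_ImageListener listener{nullptr, OnImageAvailable};
    AImageReader_setImageListener(reader, &listener);
    // Pass AImageReader_getWindow(reader, ...) as the capture-session output
    // when configuring the camera (setup not shown).
    return reader;
}
```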