How to adjust camera settings (shutter speed/exposure) to reduce motion blur for marker tracking?

Hi, I'm wondering if it's possible to adjust the main camera's settings (shutter speed/exposure) to reduce motion blur and get sharper images during video capture?

I'm using a 4K video feed from the main camera and attempting to track markers continuously, frame by frame, using custom OpenCV-based marker tracking rather than the ML marker tracking API. The headset is intended to be relatively stable, but the marker will be handheld and dragged across a surface while in use, and the intention is to closely track the path of the marker across that surface.
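(For context, the per-frame detection is along these lines. This is a generic sketch using OpenCV's aruco contrib module, not our actual project code, and the dictionary choice is arbitrary:)

    #include <opencv2/aruco.hpp>
    #include <opencv2/core.hpp>

    #include <vector>

    // Detect markers in one frame; called for every frame of the 4K feed.
    void DetectMarkers(const cv::Mat& frame, std::vector<int>* ids,
                       std::vector<std::vector<cv::Point2f>>* corners) {
      // DICT_4X4_50 is an arbitrary choice for this sketch.
      static cv::Ptr<cv::aruco::Dictionary> dict =
          cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
      static cv::Ptr<cv::aruco::DetectorParameters> params =
          cv::aruco::DetectorParameters::create();
      // Sub-pixel corner refinement matters when tracking a path closely.
      params->cornerRefinementMethod = cv::aruco::CORNER_REFINE_SUBPIX;
      cv::aruco::detectMarkers(frame, dict, *corners, *ids, params);
    }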

The environment will be a well-lit interior room, so low lighting isn't a concern. The user will not see the images/video feed either, so visual quality from a human perspective is not important, as long as the marker edges and corners appear sharp and are clearly detectable by image-processing algorithms as black borders on a white background.

I know image sharpening filters exist, but for the best accuracy, I'd like to first check whether we can adjust the shutter speed or exposure time to get sharper images directly from the video feed.
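(By sharpening filters I mean post-processing along these lines, i.e., a standard unsharp mask in OpenCV. It exaggerates existing edges but can't undo real motion blur, which is why I'd rather fix this at capture time:)

    #include <opencv2/imgproc.hpp>

    // Standard unsharp mask: subtract a blurred copy to boost edges.
    cv::Mat UnsharpMask(const cv::Mat& src, double amount = 1.0) {
      cv::Mat blurred, sharpened;
      cv::GaussianBlur(src, blurred, cv::Size(0, 0), /*sigmaX=*/3.0);
      cv::addWeighted(src, 1.0 + amount, blurred, -amount, 0.0, sharpened);
      return sharpened;
    }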

Thanks

ML2 OS version: 1.1.0
MLSDK version: 1.1.0
Host OS: Windows 10

Yes, you can adjust camera parameters such as the exposure of the RGB camera sensor using the camera metadata API (camera_metadata) and MLCameraMetadataSetSensorExposureTime. Adjusting these parameters to get the desired result might be difficult. Please read the inline documentation for this specific API and the description at the top of ml_camera_metadata_v2.h.

Currently we do not have an example that demonstrates this API. However, I did send this request to our voice of customer team.


Hi @kbabilinski, thanks for the help and passing along the request to create an example for this.

I have tried using MLCameraMetadataSetSensorExposureTime() in the Camera_Preview native API sample, which shows a live preview of the camera in view, but I'm not seeing any difference in the image regardless of what exposure time I set.

I've attached my code changes so hopefully you can check if I'm modifying it correctly.

Camera Setup: CV camera, video only (though I've tried with the MAIN camera as well and have the same issue)

Capture Setup: 4K RGB, 30 fps
I set an initial exposure time immediately after MLCameraPrepareCapture() (I've tried both setting it there and waiting until after receiving the first frame), and I save the camera metadata handle.

On video capture: I get and set the camera exposure time each frame as a sanity check.

The documentation says the exposure time is in nanoseconds. I've tried setting it to 10 ms, 5 ms, 1 ms, 0.1 ms, 0.05 ms, 1 microsecond, and even 1 nanosecond, and the API claims that each call succeeds no matter how low I set the exposure. However, the on-screen video playback looks the same regardless of my exposure setting; you'd expect the exposure value to brighten or darken the image, but I see no change.

Is there something I'm doing incorrectly with the camera or metadata handle?

Thanks

And here are some example screenshots I took using Device Stream.

These images were supposedly taken with a 1 ns exposure (I doubt that's even possible, but the API claims it was set properly), and the image looks the same regardless of the exposure I set.

And here's some manually introduced motion blur that I'm trying to reduce.

So I finally figured out how to apply a fixed exposure time: I was missing some calls to disable AE (auto-exposure) mode and to update the camera settings after setting the new exposure time.

The minimum additions you'd need to make to apply a new manual exposure time are:

    // Turn off auto-exposure so the manual exposure time takes effect
    MLCameraMetadataControlAEMode mode = MLCameraMetadataControlAEMode_Off;
    MLCameraMetadataSetControlAEMode(metadata_handle, &mode);

    // Request a fixed sensor exposure time, in nanoseconds
    int64_t exposureTime = 2000000;  // 2.0 ms
    MLCameraMetadataSetSensorExposureTime(metadata_handle, &exposureTime);

    // Push the updated metadata to the camera
    MLCameraUpdateCaptureSettings(camera_context_);

Here's my full code inside of StartCapture(), highlighting the portions I was missing.
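Condensed, the flow is roughly the following (I've paraphrased the setup calls from the sample, so double-check the names against ml_camera_v2.h; error checking is omitted):

    MLHandle metadata_handle = ML_INVALID_HANDLE;
    MLCameraCaptureConfig config;
    MLCameraCaptureConfigInit(&config);
    config.stream_config[0].capture_type = MLCameraCaptureType_Video;
    // ... resolution / format / framerate setup as in Camera_Preview ...

    MLCameraPrepareCapture(camera_context_, &config, &metadata_handle);

    // The pieces I was missing: disable AE, set the exposure time,
    // then push the updated settings before starting the capture.
    MLCameraMetadataControlAEMode mode = MLCameraMetadataControlAEMode_Off;
    MLCameraMetadataSetControlAEMode(metadata_handle, &mode);

    int64_t exposure_ns = 2000000;  // 2.0 ms, in nanoseconds
    MLCameraMetadataSetSensorExposureTime(metadata_handle, &exposure_ns);

    MLCameraUpdateCaptureSettings(camera_context_);

    MLCameraCaptureVideoStart(camera_context_);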

Previously, the camera was stuck on an exposure time of 16.656 ms (1/60 s), and I managed to get the exposure down to 2 ms. Any lower and the image ends up too dark for marker detection, and I can't find any reference to aperture settings in the API.

Is it possible to increase the aperture to compensate for shorter shutter speeds?

Thank you for posting your solution. I am tracking down your question regarding the aperture.

You cannot adjust the aperture. Currently, the only way to improve the results is to improve the lighting conditions. I have passed this information along to our voice of customer team.

Thanks for your response. That is unfortunate to hear, though I will continue looking for a workaround.

As an update on how this affects what I'm working on, here are some images of the motion blur I was able to reduce.

Using the camera's default 1/60 s shutter speed (16.656 ms), this is how a 35x35 mm ArUco marker gets blurred during light motion:


As you can see, this introduces some error in the corner detection, particularly on the upper-left corner.

The corner error is only about 0.8 mm, but it can cause up to ±5 mm of error in the object I'm tracking (the marker is on top of a stylus, and I'm tracking the tip of the stylus about 100 mm away). A 0.8 mm corner shift across a 35 mm marker is roughly a 1.3° orientation error, so over the 100 mm lever arm even one bad corner translates to a few millimeters at the tip.

Adjusting the shutter speed down to 1.5 ms gave me much sharper images, even during intense motion, albeit very dark ones; dropping the exposure any lower prevents marker detection due to lack of contrast.

However, even with blur basically eliminated, head/camera motion is still giving me noise/error in tracking.

Here's a video example to help explain (if I can get the embedded link to work)

And an image explanation

The yellow dots are points I'm collecting while dragging a stylus along an area (the blue dot is the current position). The outline is the approximate size of the area I need to track within, about 50 mm by 35 mm. Within this area, we're trying to get < 1 mm of error while tracking.

The outline was drawn while the camera was stable and the stylus was moving. I then put the stylus in the middle of the view and began turning/moving my head around; the blob of points in the middle is the amount of error introduced by the camera motion. All markers in view were perfectly stable during this time.

Since motion blur has been essentially eliminated thanks to the shorter shutter speeds, I'm thinking there must be some other form of motion-related distortion occurring. Video stabilization is turned off in my settings, and I'm not sure whether it would help or hurt here. One of my camera-experienced coworkers suggested it may be a rolling-shutter issue, so I took some video with camera motion in it to check for rolling-shutter distortion, and got this as an example:
(2 ms shutter speed, holding a pair of metal calipers in my hand and turning my head left and right)

The calipers are perfectly straight, but they appear slightly curved in this frame, so I'm guessing rolling shutter is to blame for the noise/distortion. I don't suppose there's anything we can do to minimize that?
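(As a sanity check on the magnitude: with a rolling shutter, each row is exposed slightly later than the one above it, so a yawing camera shifts later rows sideways. A back-of-envelope sketch, where the focal length and readout time are made-up placeholders rather than ML2 specs:)

    #include <cstdio>

    // A camera yawing at omega (rad/s) sweeps the image sideways at about
    // focalPx * omega pixels per second; rows are read out sequentially
    // over readoutSec, so the bottom row lands roughly this many pixels
    // away from where the top row "saw" a vertical line.
    double RollingShutterSkewPx(double omegaRadPerSec, double focalPx,
                                double readoutSec) {
      return focalPx * omegaRadPerSec * readoutSec;
    }

    int main() {
      // Placeholder numbers: ~2000 px focal length for a 4K image, 20 ms
      // readout, and a moderate 1 rad/s head turn.
      std::printf("skew: %.0f px\n",
                  RollingShutterSkewPx(1.0, 2000.0, 0.020));  // ~40 px
      return 0;
    }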

In the end, I'm betting we'll have to take another approach and simply disable tracking during moderate head motion by polling the gyro and accelerometer data, as in: Magic Leap 2 IMU data. A rough sketch of the gating I have in mind is below.
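(The GyroSample struct and thresholds here are placeholders; the actual samples would come from whatever IMU interface the linked thread describes:)

    #include <cmath>

    // Hypothetical gyro sample in rad/s, filled from the device's IMU stream.
    struct GyroSample {
      float x, y, z;
    };

    // Gate tracking on head motion, with hysteresis so it doesn't flicker
    // on/off near the threshold. The thresholds are made-up starting points.
    class MotionGate {
     public:
      bool Update(const GyroSample& g) {
        const float speed = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
        if (tracking_enabled_ && speed > kDisableAbove) {
          tracking_enabled_ = false;  // head moving too fast: pause tracking
        } else if (!tracking_enabled_ && speed < kEnableBelow) {
          tracking_enabled_ = true;   // head settled again: resume tracking
        }
        return tracking_enabled_;
      }

     private:
      static constexpr float kDisableAbove = 0.5f;  // rad/s, tune empirically
      static constexpr float kEnableBelow = 0.2f;   // rad/s, tune empirically
      bool tracking_enabled_ = true;
    };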

This is extremely helpful. I have shared all of this information with our teams. I will let you know as soon as I have more information about decreasing the motion blur when using our camera capture.

Have you tried using the world cameras? They will not have the rolling-shutter issue and will have reduced motion blur. However, the resolution and frame rate of those cameras are much lower than the standard RGB camera's.