At first blush it looks like methods for capturing frames from ML2's world cameras won't work with the RGB camera, and vice versa. For example, the header 'ml_world_camera.h' addresses the three world cameras but not the RGB camera. Is there a unified approach that works with both camera types?
Welcome to the Magic Leap 2 developer forums. We are so happy to have you here engaging with us.
May I ask what your use case is for this question? Also, what version of the MLSDK are you running, and what OS version is on the headset?
Hello @etucker, thanks for the quick reply!
To answer: we'd like to capture depth, RGB, and world-camera frames as synchronously as possible to prevent any camera motion between captures. Since the depth camera updates at ~5 Hz, we'd like to trigger depth polling first and then use callbacks to capture world and RGB camera stills as close to simultaneously as we can manage. For that reason I was hoping for a unified retrieval approach, but it seems the camera polling code was developed separately for different needs, so there's no way to treat the cameras as indices into the same method.
Is that right?
I'm running June 2023 Release - 1.3.0-dev2 with the latest headset OS via the ML Hub.
Okay thank you for clarifying. I've reached out to our team to see what the best practices are when retrieving depth camera and RGB camera images simultaneously.
At the moment we do not provide APIs that allow for capturing data across multiple sensors. In the meantime, you should still be able to write your application to capture data from these sensors separately, but they are not currently intended to be used in unison.
You may be able to use the timestamp on each frame to determine which frames were captured within a given time window. However, the API cannot guarantee that the data will be captured in sync.
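Since the API can't guarantee synchronized capture, one client-side approach is to pair depth and RGB frames whose timestamps fall within a small tolerance of each other. A minimal language-agnostic sketch follows; the frame dictionaries, the `ts` field, and the 20 ms tolerance are illustrative assumptions, not part of the ML SDK:

```python
def match_frames(depth_frames, rgb_frames, tolerance_ns=20_000_000):
    """Pair each depth frame with the RGB frame closest in time,
    keeping only pairs whose timestamps differ by <= tolerance_ns."""
    pairs = []
    for d in depth_frames:
        # Find the RGB frame with the nearest timestamp to this depth frame.
        best = min(rgb_frames, key=lambda r: abs(r["ts"] - d["ts"]), default=None)
        if best is not None and abs(best["ts"] - d["ts"]) <= tolerance_ns:
            pairs.append((d, best))
    return pairs
```

In practice you would feed this the nanosecond timestamps obtained from each sensor's frames (converted to a common time base first) and tune the tolerance to the slowest sensor's frame interval.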
Note that you can use the MLTime API to convert between MLTime and system time. For example:

```csharp
// Convert the frame's MLTime timestamp to a system timestamp in nanoseconds.
var result = MLTime.ConvertMLTimeToSystemTime(frame.TimeStamp, out long timestampNs);
```