If anyone's wondering, I did finally get OpenCV for Android Native compiling and running inside one of the samples. It took about a week of fighting nonstop with Android Studio, and Gradle in particular.
I'll post the steps I took to get it working here for anyone else who needs this (and for my future self if I forget).
The best instructions, and the only resource that worked for me, were from this repo. I loosely followed the "How to create the native OpenCV project from scratch" section, but started from one of the ML samples instead of a blank project and skipped the Java portion.
I used the latest OpenCV 4.7.0, downloading the Android package, which has the aruco library included:
https://opencv.org/releases/
I used the Camera-Preview sample as the base project to start from, and modified its CMakeLists.txt to add the OpenCV include directories and library location.
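Roughly, the additions to CMakeLists.txt looked something like this. Treat it as a sketch: the OpenCV-android-sdk path and the target name are placeholders for wherever you unpacked the SDK and whatever the sample's native target is actually called.

# Placeholder path: point this at your unpacked OpenCV Android SDK
set(OPENCV_ANDROID_SDK /path/to/OpenCV-android-sdk)

# Headers
include_directories(${OPENCV_ANDROID_SDK}/sdk/native/jni/include)

# Prebuilt shared library (one .so per ABI)
add_library(lib_opencv SHARED IMPORTED)
set_target_properties(lib_opencv PROPERTIES IMPORTED_LOCATION
    ${OPENCV_ANDROID_SDK}/sdk/native/libs/${ANDROID_ABI}/libopencv_java4.so)

# Link it into the sample's existing native target (name is a placeholder)
target_link_libraries(camera_preview_app lib_opencv)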

Edit the build.gradle file's externalNativeBuild->cmake->arguments to use -DANDROID_STL=c++_shared instead of -DANDROID_STL=c++_static. The project will still compile without this change, but the app will fail to load the library on launch, since the OpenCV libraries require the shared STL, not the static one.
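For reference, the relevant block in build.gradle ends up looking roughly like this (trimmed down; keep whatever other arguments the sample already passes and only change the STL one):

android {
    defaultConfig {
        externalNativeBuild {
            cmake {
                arguments "-DANDROID_STL=c++_shared"
            }
        }
    }
}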

And then, for good measure, add the camera permission and feature declarations to the AndroidManifest.xml:
<uses-permission android:name="android.permission.CAMERA"/>
<uses-feature android:name="android.hardware.camera"/>
<uses-feature android:name="android.hardware.camera.autofocus"/>
<uses-feature android:name="android.hardware.camera.front"/>
<uses-feature android:name="android.hardware.camera.front.autofocus"/>
And then the build error that had me stumped for days was
The server may not support the client's requested TLS protocol versions: (TLSv1.2, TLSv1.3). You may need to configure the client to allow other protocols to be used.
There are dozens of answers online on how to fix this, but none of them worked for me until I finally edited the gradle.properties file and added the line
systemProp.https.protocols=TLSv1.2
which, I guess, prevents it from trying to use TLSv1.3. Some servers apparently don't accept TLSv1.3, and that was causing Gradle to inexplicably fail at the most basic steps because it was unable to resolve the hosts it needed to download dependencies from.

And finally, I was able to edit the Camera-Preview sample to convert the camera's image data into a cv::Mat and run corner detection on it:
static void OnVideoAvailable(const MLCameraOutput *output, const MLHandle metadata_handle,
                             const MLCameraResultExtras *extra, void *data)
{
    CameraPreviewApp* pThis = reinterpret_cast<CameraPreviewApp*>(data);
    // edited camera settings to read 4k images at 30 fps
    auto frame = output->planes[0];
    // Initialize cv::Mat using a pointer to the image buffer inside of frame.
    // (No copying occurs.)
    // (Data is stored as RGBA, while OpenCV assumes BGRA, though that doesn't matter for my purposes.)
    cv::Mat mat = cv::Mat(frame.height, frame.width, CV_8UC4, frame.data);
    // Optional initialization by copying the data instead:
    // (memcpy takes about 5 ms)
    //cv::Mat mat = cv::Mat(frame.height, frame.width, CV_8UC4);
    //memcpy(mat.data, frame.data, frame.size);
    // (Explicitly converting from RGBA to BGRA takes about 15 ms on the Magic Leap 2.)
    //cv::cvtColor(mat, mat, cv::COLOR_RGBA2BGRA);
    // (Converting from RGBA to grayscale takes about 3 ms.)
    cv::cvtColor(mat, mGrayscaleImage, cv::COLOR_RGBA2GRAY);
    // I run camera calibration on pre-collected images in the background on startup,
    // and wait until calibration finishes before doing aruco detection.
    if (pThis->mbCameraCalibrated)
    {
        std::vector<int> markerIds;
        std::vector<std::vector<cv::Point2f>> markerCorners, rejectedCandidates;
        ALOGI("attempting to find markers\n");
        // Detect marker(s) and corners.
        // This takes around 50-80 ms on 4k images on the Magic Leap 2.
        pThis->mArucoDetector.detectMarkers(mGrayscaleImage, markerCorners, markerIds, rejectedCandidates);
        std::map<int, std::vector<cv::Point2f>> markers;
        for (int i = 0; i < (int)markerIds.size(); i++)
        {
            int markerID = markerIds[i];
            std::vector<cv::Point2f> corners = markerCorners[i];
            markers.emplace(markerID, corners);
            ALOGI("detected marker: %d", markerID);
        }
        // Drawing markers only works on 1- or 3-channel images, so remove the alpha channel and add it back after.
        cv::Mat tmp;
        cv::cvtColor(mat, tmp, cv::COLOR_RGBA2RGB);
        cv::aruco::drawDetectedMarkers(tmp, markerCorners, markerIds);
        cv::cvtColor(tmp, mat, cv::COLOR_RGB2RGBA);
    }
    if (!pThis->is_frame_available_) {
        memcpy(pThis->framebuffer_.data(),
               mat.data, // output->planes[0].data // display the cv image instead of the raw camera footage
               output->planes[0].size);
        pThis->is_frame_available_ = true;
    } else {
        // When running with ZI, as the video needs to be transferred from device to host, lots of frame
        // dropping is expected. So don't flood the log with this.
#ifdef ML_LUMIN
        ALOGW("%s() dropped a frame! This should never happen, apart from the app startup/teardown phase!", __func__);
#endif
    }
}
Benchmark results on the Magic Leap 2:
- Converting an ML 4k camera frame into a cv::Mat using memcpy: ~5 ms
- cv::cvtColor(mat, mat, cv::COLOR_RGBA2BGRA) to convert from the camera's RGBA to OpenCV's expected BGRA: ~15 ms (optional depending on your usage)
- cv::cvtColor() to convert from RGBA to grayscale: ~3 ms
- cv::aruco::ArucoDetector::detectMarkers(): 50-80 ms depending on the frame
Based on the detectMarkers() time alone (1000 ms / 80 ms ≈ 12.5 fps, 1000 ms / 50 ms = 20 fps), the ML2 can only detect markers with OpenCV's aruco library at around 12.5-20 fps on 4k frames, so I will likely need to run that part asynchronously on a separate thread (rough sketch below). The accuracy of the detected corners looks really spot on, though, so that's good.
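The sketch is untested: it uses the same cv::aruco::ArucoDetector as above, but the function and flag names are made up, it assumes the detector object outlives the worker thread, and the results would still need to be handed back to the main thread safely.

#include <atomic>
#include <thread>
#include <vector>
#include <opencv2/objdetect/aruco_detector.hpp>

// Sketch: run detectMarkers() on a worker thread so the camera callback
// isn't blocked for 50-80 ms per frame; frames arriving while a detection
// is in flight are simply skipped.
static std::atomic<bool> detection_busy{false};

void DetectMarkersAsync(cv::aruco::ArucoDetector &detector, const cv::Mat &gray)
{
    if (detection_busy.exchange(true))
        return;  // previous detection still running; drop this frame

    cv::Mat copy = gray.clone();  // copy so the camera buffer can be reused immediately
    std::thread([&detector, copy]() {
        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners, rejected;
        detector.detectMarkers(copy, corners, ids, rejected);
        // ... publish ids/corners back to the main thread here (e.g. under a mutex) ...
        detection_busy = false;
    }).detach();
}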
Hope this helps anyone else wanting to use OpenCV or aruco detection. It took me a really long time to get this working.