How to expose native APIs to WebXR?

Hello there!

I've been thinking for a long time, including on previous VR/AR headsets, about how to expose native APIs (eye tracking, spatial mapping, camera access for computer vision, ...) to WebXR.

Do you think I could create some kind of "run in background" app (maybe in Unity) with a local WebSocket server to communicate with a web app, for example streaming MLCamera computer-vision results or eye-tracking data?
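To make the idea concrete, here is a minimal sketch of the web-app side of such a bridge. Everything here is an assumption, not an existing API: the port (9000), the message shape, and the field names are all hypothetical, standing in for whatever the companion app would actually push over the socket.

```javascript
// Hypothetical message format pushed by a companion app over ws://localhost:9000,
// e.g. {"type":"eyeTracking","gaze":{"origin":[x,y,z],"direction":[x,y,z]}}.
// Parse and validate one raw message; return null for anything malformed.
function parseGazeMessage(raw) {
  let msg;
  try {
    msg = JSON.parse(raw);
  } catch {
    return null;
  }
  if (!msg || msg.type !== "eyeTracking" || !msg.gaze) return null;
  const { origin, direction } = msg.gaze;
  const isVec3 = (v) => Array.isArray(v) && v.length === 3 && v.every(Number.isFinite);
  if (!isVec3(origin) || !isVec3(direction)) return null;
  return { origin, direction };
}

// Browser-side wiring (skipped when not running in a browser):
if (typeof WebSocket !== "undefined" && typeof window !== "undefined") {
  const socket = new WebSocket("ws://localhost:9000"); // port is an assumption
  socket.onmessage = (event) => {
    const gaze = parseGazeMessage(event.data);
    if (gaze) {
      // Drive the WebXR scene from gaze.origin / gaze.direction here.
    }
  };
}
```

Validating each message before use keeps a buggy or restarted companion app from crashing the web side mid-session.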

It would be even better if there were browser flags that could be used to directly access some of these functions, at least from a developer/prototyper/researcher perspective.

In any case, it would be great to find a way to give the web direct access to some APIs without depending on anything else (will the team implement it? when? etc.).

I think I'm not the only one who would be interested in this.

Also, if the method is clever enough, it could be used on other headsets as well.

Thanks in advance,
Have a good day!

Hi @hyro,

I have shared your request with our Voice of Customer team. For creating background applications, you can check out this documentation from Android.

Best,

El

We are working on supporting more device features via the WebXR APIs. That said, we do support hit testing in WebXR when using the 1.5.0 OS. Is there a specific API that you would like to see? Would you mind providing more information about the application you are trying to build?
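For anyone landing here, the hit testing mentioned above is the standard WebXR Hit Test Module, reachable from plain JavaScript. A minimal sketch (the render-loop body is a placeholder; this only runs inside an XR-capable browser):

```javascript
// Extract the world-space translation from a 4x4 column-major pose matrix,
// as returned by XRRigidTransform.matrix (translation lives at indices 12-14).
function positionFromMatrix(m) {
  return [m[12], m[13], m[14]];
}

// Standard WebXR Hit Test Module usage; skipped when navigator.xr is absent.
if (typeof navigator !== "undefined" && navigator.xr) {
  navigator.xr
    .requestSession("immersive-ar", { requiredFeatures: ["hit-test"] })
    .then(async (session) => {
      const viewerSpace = await session.requestReferenceSpace("viewer");
      const refSpace = await session.requestReferenceSpace("local");
      const hitTestSource = await session.requestHitTestSource({ space: viewerSpace });
      session.requestAnimationFrame(function onFrame(time, frame) {
        const results = frame.getHitTestResults(hitTestSource);
        if (results.length > 0) {
          const pose = results[0].getPose(refSpace);
          const [x, y, z] = positionFromMatrix(pose.transform.matrix);
          // Place content at (x, y, z) here.
        }
        session.requestAnimationFrame(onFrame);
      });
    });
}
```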

Thanks for your responses.
From a researcher's perspective, it would be great to access all the native APIs, like the depth camera, image tracking, eye tracking, etc.
The applications I will build explore new interactions, notably between an AR headset and handheld devices (smartphones, tablets, ...).
Image tracking would be my top priority, but I'm also really interested in eye tracking in WebXR.

Have a good day!


I agree with hyro; it would be great to have APIs available in JavaScript (in Magic Leap's browser) for eye tracking, head tracking, Magic Leap Spaces, planes, etc.