Hello Community,
I'm working on a native 2D Android application for Magic Leap 2 and I am interested in integrating gaze and head tracking to control the UI. The goal is to enable users to interact with the application—such as clicking buttons and navigating between screens—using gaze direction.
I'm currently utilizing the C APIs for development and would like to understand how I can leverage these for gaze or head tracking within a 2D application context.
Here are the details of my development environment:
Android Studio version: Android Studio Giraffe | 2022.3.1
Build #AI-223.8836.35.2231.10406996, built on June 29, 2023
Runtime version: 17.0.6+0-b2043.56-10027231 amd64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o.
ML2 OS version: 1.4.0
MLSDK version: 1.4.0
Host OS: Windows 11
I have experience building 3D applications using Magic Leap's Unity Examples, but I'm now transitioning to a 2D interface and facing challenges in mapping the 3D head tracking data to a 2D plane for UI interaction.
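For context, my current understanding is that this mapping reduces to a ray-plane intersection: cast a ray from the head (or eye fixation) pose along its forward vector, intersect it with the plane the 2D panel lives on, and convert the hit point into normalized panel coordinates. Here is a rough sketch of the math I have in mind; all types and helper names below are hypothetical, not MLSDK APIs:

```c
#include <math.h>
#include <stdbool.h>

/* Hypothetical types -- not from the MLSDK. */
typedef struct { float x, y, z; } Vec3;

static Vec3 v3_sub(Vec3 a, Vec3 b) { return (Vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static float v3_dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 v3_add_scaled(Vec3 a, Vec3 b, float s) {
    return (Vec3){a.x + b.x * s, a.y + b.y * s, a.z + b.z * s};
}

/*
 * Intersect a gaze/head ray with the plane containing a 2D panel and
 * convert the hit point to normalized panel coordinates in [0, 1].
 *
 * origin/dir    : ray from the head or fixation pose (dir is unit length)
 * center        : world-space center of the panel
 * right/up      : unit vectors spanning the panel's plane
 * normal        : unit normal of the panel's plane
 * half_w/half_h : panel half-extents in meters
 */
static bool ray_to_panel_uv(Vec3 origin, Vec3 dir,
                            Vec3 center, Vec3 right, Vec3 up, Vec3 normal,
                            float half_w, float half_h,
                            float *out_u, float *out_v)
{
    float denom = v3_dot(dir, normal);
    if (fabsf(denom) < 1e-6f)            /* ray is parallel to the panel */
        return false;

    float t = v3_dot(v3_sub(center, origin), normal) / denom;
    if (t < 0.0f)                        /* panel is behind the user */
        return false;

    Vec3 hit = v3_add_scaled(origin, dir, t);
    Vec3 local = v3_sub(hit, center);

    float x = v3_dot(local, right);      /* meters from the panel center */
    float y = v3_dot(local, up);
    if (fabsf(x) > half_w || fabsf(y) > half_h)
        return false;                    /* gaze missed the panel */

    *out_u = (x + half_w) / (2.0f * half_w);  /* 0..1, left to right */
    *out_v = (y + half_h) / (2.0f * half_h);  /* 0..1, bottom to top */
    return true;
}
```

Multiplying u and v by the panel's pixel resolution would then give a 2D coordinate to hit-test against the UI.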
I haven't encountered any error messages, since this is still a conceptual-stage question. I'm seeking advice on how to get started, along with any relevant examples, documentation, or insights from those who have worked on similar integrations.
Thank you for any help or direction you can provide!
Hi @usman.bashir,
The best way to get started with eye tracking in a native application is to check out our native samples. To learn more, see: Native Development | MagicLeap Developer Documentation
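As a very rough sketch of what the C-side setup tends to look like (function names taken from ml_eye_tracking.h, ml_perception.h, and ml_snapshot.h; treat the details, in particular which static-data field carries the gaze coordinate frame, as assumptions to verify against your SDK headers and the samples):

```c
#include <stdbool.h>

#include <ml_api.h>
#include <ml_eye_tracking.h>
#include <ml_perception.h>
#include <ml_snapshot.h>

/* Create the eye tracker. Assumes perception has already been started
 * and the com.magicleap.permission.EYE_TRACKING runtime permission has
 * been granted; error handling is reduced for brevity. */
static MLHandle create_eye_tracker(void)
{
    MLHandle tracker = ML_INVALID_HANDLE;
    if (MLEyeTrackingCreate(&tracker) != MLResult_Ok)
        return ML_INVALID_HANDLE;
    return tracker;
}

/* Resolve a gaze coordinate frame against the current perception
 * snapshot. MLEyeTrackingGetStaticData() supplies the frame's UID
 * (the gaze/vergence frame -- the exact field name depends on your
 * SDK version, so check ml_eye_tracking.h). */
static bool get_gaze_transform(const MLCoordinateFrameUID *frame,
                               MLTransform *out_transform)
{
    MLSnapshot *snapshot = NULL;
    if (MLPerceptionGetSnapshot(&snapshot) != MLResult_Ok)
        return false;

    bool ok = MLSnapshotGetTransform(snapshot, frame, out_transform)
              == MLResult_Ok;
    MLPerceptionReleaseSnapshot(snapshot);
    return ok;
}
```

The resulting transform gives you a world-space pose to cast a ray from, once per frame.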
@usman.bashir 2D apps are largely ignorant of the fact that they are rendering in a 3D context. We abstract the Control and Hand tracking events into standard Android mouse/motion events, which is why Android 2D apps mostly "just work" without any modifications.
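To illustrate the point: because the platform synthesizes standard input, a 2D app just sees ordinary motion events, whether from a mouse, the Control, or hand tracking. On the native side this is the plain NDK input path (android/input.h, nothing Magic Leap specific), e.g. from a NativeActivity input callback:

```c
#include <android/input.h>

/* Called from a NativeActivity input callback: controller/hand input
 * that the platform has mapped to standard Android motion events shows
 * up here exactly like mouse or touch input would. */
static int32_t handle_input_event(const AInputEvent *event)
{
    if (AInputEvent_getType(event) != AINPUT_EVENT_TYPE_MOTION)
        return 0;  /* not handled */

    int32_t action = AMotionEvent_getAction(event) & AMOTION_EVENT_ACTION_MASK;
    if (action == AMOTION_EVENT_ACTION_DOWN) {
        float x = AMotionEvent_getX(event, 0);  /* pointer index 0 */
        float y = AMotionEvent_getY(event, 0);
        /* Treat this like a click at (x, y) in view coordinates. */
        (void)x; (void)y;  /* placeholder for real handling */
    }
    return 1;  /* handled */
}
```

A regular Java/Kotlin View gets the same events through the usual onTouchEvent/onGenericMotionEvent path with no extra code.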
I'll ask @ababilinski to bubble eye gaze up as an additional request to this system with the Voice of the Customer team.
So, if I want my existing 2D applications to support gaze control, I would need to shift them to a 3D context, correct?
- What are the minimal steps I can take to make this happen?
- Can you please list the steps for converting my 2D app to a 3D app using Android Studio?
- Do I need to develop the UI of my app using Magic Leap's APIs?
I am seeking specific details concerning Android development, as I need to implement gaze control in my Android apps.
I think you might be blocked here unless you also wish to replace the UI with primitives from something like MRTK3. By switching from an Android View (2D) to a Native Application (3D) context, you'll lose a lot of that built-in UI framework, and it will essentially be a complete rewrite of the application.
At present there is no practical way to get gaze interactions in an Android 2D application. You would also need to know the world coordinates of the floating Android window, which is not currently exposed, just to determine the gaze intent.
If all this work doesn't dissuade you and you want to completely rewrite the application from scratch, then I suggest you use MRTK3, which has pretty nice Gaze + Pinch support built into it.
If this isn't absolutely critical to you, then once we assign gaze interaction to the release schedule in the future, I'll bump this thread again and your Android 2D app will get it "for free".
Thank you so much for the clarity on things.
> If this isn't absolutely critical to you, then once we assign gaze interaction to the release schedule in the future, I'll bump this thread again and your Android 2D app will get it "for free".
Could you please provide an estimated timeline for when gaze interaction will be added to the release schedule for Android 2D apps, as previously mentioned?
@usman.bashir We can't share any timelines yet, but I will keep you posted if I get any more information. Would you expect the interaction to be a combination of voice and eye tracking, voice and hand tracking, or for items to be selected after a period of time? If possible, do you mind sharing more information about your use case?
Acknowledging the challenges of adapting a 2D application to Unity without its source code, I'm looking into gaze or head-movement controls for button selection to bypass the need for the ML2 controller, voice, or hand gestures, aiming for a more accessible user interface.
Thank you for that information. I will add that to the request that was submitted to Voice of the Customer.
We understand you want to use only eye movements for input. However, using the eyes to click or select, for example by blinking, is hard to make reliable: when you try to keep looking at a target to select it, an accidental blink can cancel the action. Even with filtering, if you tried to use eye gaze to "dwell" on a button to activate it, blinking could disrupt the dwell timer, making it difficult to use effectively.
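To make the blink problem concrete: a dwell trigger usually needs a grace period so that brief gaze dropouts (blinks typically last on the order of a couple hundred milliseconds) don't reset the timer, while a genuine look-away does. A hypothetical sketch, independent of any Magic Leap API, with the thresholds as illustrative assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical dwell-to-select logic with blink tolerance. Feed it once
 * per frame with whether the (filtered) gaze is on the target and the
 * current time in milliseconds. */
typedef struct {
    uint64_t dwell_start_ms;  /* when the gaze first landed on the target */
    uint64_t last_on_ms;      /* last frame the gaze was on the target */
    bool     dwelling;
} DwellState;

enum {
    DWELL_TIME_MS  = 800,  /* gaze duration required to trigger a click */
    BLINK_GRACE_MS = 200,  /* dropouts shorter than this don't reset */
};

/* Returns true exactly when a selection should fire. */
static bool dwell_update(DwellState *s, bool gaze_on_target, uint64_t now_ms)
{
    if (gaze_on_target) {
        if (!s->dwelling) {
            s->dwelling = true;
            s->dwell_start_ms = now_ms;
        }
        s->last_on_ms = now_ms;
        if (now_ms - s->dwell_start_ms >= DWELL_TIME_MS) {
            s->dwelling = false;          /* fire once, then reset */
            return true;
        }
    } else if (s->dwelling && now_ms - s->last_on_ms > BLINK_GRACE_MS) {
        s->dwelling = false;              /* real look-away: reset */
    }
    return false;
}
```

Even with a scheme like this, tuning the two thresholds against real eye-tracking noise is the hard part, which is why we'd like to understand the intended interaction first.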
To better understand your vision and how you imagine this working, could you share a brief description of how a user would interact with the system using eye-based controls? It will help us grasp what you're aiming for and discuss possible solutions.