This is the first time I have developed applications in AR, so my questions might be basic. I am trying to build an industrial application that users will generally use in dark rooms, so I want to use an extra head-tracking system to track their heads instead of the built-in one, then render the content and display it on the users' glasses. I am currently choosing the right AR/VR glasses and want to make sure Magic Leap 2 allows me to do this.
The pipeline I prefer is the following:
1. Eye tracking gets the angles of the eyes.
2. The separate head-tracking system gets the head coordinates.
3. The eye angles, together with the head coordinates, are sent to Unity or another engine, which renders the views for both eyes accordingly.
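To make the intended geometry concrete, here is a minimal sketch of step 3 in plain Python. This is not Magic Leap or Unity API code; the IPD value, the function names, and the yaw-only head rotation are all simplifying assumptions for illustration.

```python
import math

IPD = 0.063  # assumed interpupillary distance in metres (typical adult value)

def eye_positions(head_pos, head_yaw_rad, ipd=IPD):
    """Offset each eye half the IPD along the head's local right axis."""
    # Right vector of the head for a yaw-only rotation, y-up coordinates.
    right = (math.cos(head_yaw_rad), 0.0, -math.sin(head_yaw_rad))
    half = ipd / 2.0
    left_eye  = tuple(h - half * r for h, r in zip(head_pos, right))
    right_eye = tuple(h + half * r for h, r in zip(head_pos, right))
    return left_eye, right_eye

def gaze_direction(head_yaw_rad, eye_yaw_rad, eye_pitch_rad):
    """Combine head yaw with per-eye angles into a world-space gaze vector."""
    yaw = head_yaw_rad + eye_yaw_rad
    cp = math.cos(eye_pitch_rad)
    return (cp * math.sin(yaw), math.sin(eye_pitch_rad), cp * math.cos(yaw))

# With the head at (0, 1.7, 0) facing forward, the two eyes sit
# offset by ±IPD/2 along the x axis.
left, right = eye_positions((0.0, 1.7, 0.0), 0.0)
print(left, right)
```

In an actual engine, the two eye positions would feed the per-eye view matrices (in Unity, for example, via the stereo camera rig), while the gaze direction would only matter if eye-dependent effects such as foveation or focus were used.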
My questions are:
1. Can I use Magic Leap 2 in this way? If it works, will this pipeline have higher latency than the native one, assuming the head-tracking system is much faster than the 120 Hz refresh rate of the display?
2. In general, which module handles converting the eye angles and head coordinates into rendered views: the Unity SDK, MRTK, URP, or something else? Do I need to write this algorithm myself?
3. Does it render every frame of the scene based on the eye angles detected in real time, assuming I don't need eye-focus interaction?
4. Does Unity, or any engine currently compatible with Magic Leap 2, support this kind of external head-tracking input?
5. Does Magic Leap 2 have a display-only mode, in which I can let Unity or other engines/apps display content on the Magic Leap 2 directly through the USB-C port, without interfering with the head- or eye-tracking data?
6. Is the 2,000-nit brightness the final light energy reaching the eyes after all the optical components, or the brightness of the light source before it passes through them?
7. When I use AR Cloud (assuming there are sufficient hardware resources), the documentation says AR Cloud supports spaces of up to 10,000 m². Is this the largest area one user can experience in a single project? What if I want to cover a construction site larger than that?
Thanks. I have to understand how Magic Leap 2 works before making the decision; I really love the performance of Magic Leap 2.
Thank you for reaching out. We are so glad to hear that you are interested in the Magic Leap 2 headset.
I will answer some of your questions here. For the others, I will need to consult my team to make sure I am giving you the most accurate information possible.
1.) The Magic Leap 2 headset is able to work with external hardware through the USB-C port on the compute pack as long as that device is compatible with Android devices. There is only one port, so it is not possible to charge the headset while using external hardware.
2.) The eye data is securely acquired and distributed by the headset with permissions. Once that data is acquired, it can be used in your applications by accessing it through our APIs.
3.) Eye input is not required for the most basic functions of the headset. We only acquire eye data if the user grants permissions.
4.) Unity does not offer built-in support for some external hardware. You would have to check with Unity to see if they have plugins for the particular hardware you are using.
The display-only mode I mentioned is more like a streaming mode, in which the headset gets video or a sequence of images from a computer and plays whatever the computer sends, just like a monitor in front of one's eyes. If it works like that, will there be any latency?
Yes. Basically, I want to extend the display using the display-only mode. It doesn't have to be a hardware-level direct USB-C connection; if I can use software to show what is displayed on my computer screen on these glasses at fairly low latency, then that is fine, just like some VR glasses do.