Unity Editor version: 2022.3.54
ML2 OS version: 1.10.0 (ML2 Developer Edition)
ML2 Unity SDK: 2.5.0
Host OS: Windows
Hi,
I am trying to integrate Vuforia with the ML2, but it does not work. I am using the latest version of the ML2 SDK.
I am using a free Basic Vuforia license with cylinder targets.
I used the ML Rig prefab to create my ML2 scene and added the "Vuforia Behaviour" and "Default Initialization Error Handler" scripts to the camera, as described in the Vuforia article "Getting Started with Vuforia Engine and Magic Leap 2". Vuforia works in the Unity Editor with a webcam on the ML Rig, but it does not work on the ML2 glasses.
On the Vuforia website it says, "To set up your scene, replace the Camera with the Magic Leap Main Camera->Prefab, located in Packages/Magic Leap SDK/Runtime/Tools/Prefabs/Main Camera.prefab", but there is no such prefab in the new ML2 SDK.
I have granted all of the ML2 permissions.
I tried a normal Unity camera and the Complete XR Setup camera, but nothing works.
It is important for us to use Vuforia, since the ML2 SDK does not support cylinder targets, and planar targets are unreliable because they sometimes risk being detected in a reversed orientation. We are developing a medical surgical navigation system, so we cannot accept any risk of error.
Hi, I have been trying to make Vuforia work for almost three weeks. I have managed to get everything working except for one thing.
At first, Vuforia never works with the ML Rig: if there is an XR Session on any object, Vuforia cannot start the camera (no red camera dot appears). If I remove the XR Session from the rig, Vuforia works, but ML tracking no longer works correctly. Depending on the settings, either the head/controller do not move, or there is an offset from real objects (for instance, the controller or hands are several centimeters away from the real-world objects they should be on).
Vuforia works correctly with the old ML Camera, instantiated from the camera prefab in the Magic Leap SDK Packages folder, if "Magic Leap" is checked in the XR Plug-in Management settings in addition to OpenXR. But then Secondary View does not work, so there is a significant difference between the camera image and the virtual objects. If I use a Stereo Convergence Point, the offset decreases, but not enough to showcase to people.
If "Magic Leap" is NOT checked in XR Plug-in Management, everything else works, but this time Vuforia cannot place objects in their correct positions: Vuforia targets are visualized on the floor instead of in front of me. With the old ML Camera, if "Magic Leap" is not checked, the XR origin is not set correctly. Although I select Device everywhere I could find it in the settings, including the MRTK3 settings, Floor is used, and the head coordinate starts at about 1.05 meters instead of zero. So Vuforia thinks it is placing objects at the device, but it actually puts the markers on the floor.
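For reference, I understand the Device origin can also be requested from script through Unity's XR input API; a minimal sketch (assuming `XRInputSubsystem.TrySetTrackingOriginMode` applies to this setup):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

// Sketch: explicitly request the Device tracking origin at startup,
// in case a project setting is overriding it with Floor.
public class ForceDeviceOrigin : MonoBehaviour
{
    private void Start()
    {
        var subsystems = new List<XRInputSubsystem>();
        SubsystemManager.GetSubsystems(subsystems);
        foreach (var subsystem in subsystems)
        {
            // Returns false if the provider rejects the requested mode.
            bool ok = subsystem.TrySetTrackingOriginMode(TrackingOriginModeFlags.Device);
            Debug.Log($"TrySetTrackingOriginMode(Device): {ok}");
        }
    }
}
```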
I have spent weeks trying to find a working configuration, but at least one thing is always broken. What should I do?
Sorry about the trouble. I did some cursory testing with Vuforia on my ML2, and it seems to work okay for me, at least for tracking 2D targets.
First, just to be sure:
Were you able to successfully build and run the digital eyewear sample provided by Vuforia on your ML2?
Are you now trying to build an OpenXR application from scratch?
Given that, let's clarify some of the issues described here:
It sounds like you tried to create a scene from scratch using the ML Rig sample prefab that's available to import from the ML Unity SDK package. You don't see the recording icon, and Vuforia tracking doesn't work, unless you remove or disable the ARSession script on the rig, which you probably wouldn't want to do.
It sounds like you tried to use the MLCamera APIs. MLCamera is still supported, by the way, even though it may make non-OpenXR calls internally. You'll need to enable Perception Snapshots in your project if you want to use it (at least if you need to be able to extract pose information). See the notes here: MLCamera | MagicLeap Developer Documentation, and the sketch after this list.
It sounds like you observed an offset of roughly a meter. If you're making use of MLCamera, then make sure that Perception Snapshots are enabled, as in the previous point.
It sounds like some of your problems may have been related to having Perception Snapshots disabled. If that's not it, maybe we can work through establishing clear requirements for your application and ensure that we can build a simple scene that meets them (e.g., which ML platform functionality do you require?).
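If it helps, here's a minimal MLCamera smoke test along the lines of what I tried. Treat it as a sketch: the exact API surface (`CreateAndConnectAsync`, `ConnectContext`, the `CamOnly` flag, `Disconnect`) may vary between SDK versions, and it assumes the CAMERA permission has already been granted.

```csharp
using UnityEngine;
using UnityEngine.XR.MagicLeap;

// Sketch: connect to the RGB camera via MLCamera just to confirm the
// camera pipeline starts (the recording dot should appear).
public class MLCameraSmokeTest : MonoBehaviour
{
    private MLCamera mlCamera;

    private async void Start()
    {
        var context = MLCamera.ConnectContext.Create();
        context.CamId = MLCamera.Identifier.Main;     // RGB camera
        context.Flags = MLCamera.ConnectFlag.CamOnly; // camera stream only

        mlCamera = await MLCamera.CreateAndConnectAsync(context);
        Debug.Log(mlCamera != null
            ? "MLCamera connected."
            : "MLCamera failed to connect; check permissions and the XR loader.");
    }

    private void OnDestroy()
    {
        mlCamera?.Disconnect();
    }
}
```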
I have gotten the Digital Eyewear sample to work before. It works OK when the Magic Leap plug-in provider is checked along with OpenXR. The problem is that Secondary View does not work when Magic Leap is checked, as seen in the image, and when Secondary View does not work, the virtual objects do not correctly overlap the video stream. We have a congress on Friday, and I would like to successfully demonstrate Magic Leap with our app on screen, but I cannot. We are using standard image and cylinder targets for now.
When I uncheck Magic Leap, the objects exactly overlap the video stream, but this time the XR origin incorrectly changes to Floor, although it should not, and Vuforia works incorrectly. Perception Snapshots are now checked, and the settings are as in the image.
Unity does not support using both the legacy Magic Leap and the OpenXR Magic Leap XR loaders at the same time; only one plug-in provider can be loaded at a given time. This is most likely the reason why you are not seeing a difference when the OpenXR feature is enabled.
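You can confirm which loader actually won at runtime with a quick check against XR Management; a small sketch:

```csharp
using UnityEngine;
using UnityEngine.XR.Management;

// Logs the XR loader that is active at runtime, to confirm whether the
// legacy Magic Leap loader or the OpenXR loader ended up being used.
public class ActiveLoaderLogger : MonoBehaviour
{
    private void Start()
    {
        var manager = XRGeneralSettings.Instance != null
            ? XRGeneralSettings.Instance.Manager
            : null;

        Debug.Log(manager != null && manager.activeLoader != null
            ? $"Active XR loader: {manager.activeLoader.name}"
            : "No active XR loader.");
    }
}
```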
If you choose to use the legacy Magic Leap XR loader, you can decrease the amount of offset between the virtual and physical content by specifying the Stereo Convergence Point on the Magic Leap Camera component that is attached to the Main Camera.
Set the convergence point to the object that should be most closely aligned with the physical environment. (Note: even when the point is specified, the content in the MR camera capture will still have some offset.)
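If it's easier to manage from script, the component can be pointed at the tracked object at runtime. A minimal sketch, assuming the MagicLeapCamera component exposes a StereoConvergencePoint Transform property (the exact name may differ between SDK versions):

```csharp
using UnityEngine;
using UnityEngine.XR.MagicLeap;

// Sketch: point the Stereo Convergence Point at the object that must be
// best aligned with the physical environment (e.g. the Vuforia target).
public class ConvergencePointSetter : MonoBehaviour
{
    [SerializeField] private MagicLeapCamera magicLeapCamera; // on the Main Camera
    [SerializeField] private Transform alignmentTarget;       // e.g. the tracked target

    private void Start()
    {
        if (magicLeapCamera != null && alignmentTarget != null)
        {
            magicLeapCamera.StereoConvergencePoint = alignmentTarget;
        }
    }
}
```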
I already use the Stereo Convergence Point and focus it on the farthest object in the scene, but as you say, it is not accurate enough.
I don't want to use the legacy ML XR loader, but Vuforia doesn't track correctly with OpenXR and the new ML Rig.
It seems I have to render the Secondary View myself and composite it on top of the raw video stream. But how can I find the exact relative position, rotation, and FOV of the RGB camera stream, so that I can place a second camera relative to the ML Camera Rig, render from it, and overlay the result on the RGB image?
The convergence point can be set to the point you want to best align with your scene. If you are using Vuforia, you may want to set it to the object that appears when using their marker or object tracking.
Attempting to fix the offset manually is a nontrivial task and would require careful planning to keep the performance impact to a minimum.
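If you do attempt it, the projection side at least is standard pinhole-camera math. Here is a sketch of deriving the Unity camera settings from reported intrinsics; where the focal length and image size come from is an assumption (MLCamera reports intrinsic calibration parameters, and getting the frame pose additionally requires Perception Snapshots):

```csharp
using UnityEngine;

// Sketch: configure a Unity camera to match an RGB stream using pinhole
// intrinsics. focalLengthYPixels is the vertical focal length in pixels.
public static class IntrinsicsToUnityCamera
{
    public static void Apply(Camera unityCamera,
                             float focalLengthYPixels,
                             int imageWidth,
                             int imageHeight)
    {
        // Vertical FOV: fovY = 2 * atan(h / (2 * fy)), converted to degrees.
        float fovY = 2f * Mathf.Atan(imageHeight / (2f * focalLengthYPixels))
                     * Mathf.Rad2Deg;

        unityCamera.fieldOfView = fovY;
        unityCamera.aspect = (float)imageWidth / imageHeight;
        // A principal point away from the image center would additionally
        // require a custom projection matrix (or lens shift).
    }
}
```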