Hello.
I’m trying to figure out how to properly instantiate objects in the world based on mouse input over the ML mixed-reality camera feed that is sent to a desktop app.
On the desktop app, [0,0] is bottom-left and [1,1] is top-right, and the received camera texture is 1440x1080. I noticed that the Magic Leap viewport in Unity is 720x920.
My problem is that I can’t seem to find the correct way to instantiate the objects in the world. There is a small offset which I can’t find the proper way to address.
I tried to use the “CameraUtilities” script from “Localizing detected faces using ML2 transformation matrix?”, but it made things worse.
A simple exercise I made:
- Created a line renderer (in the form of a spiral) on the PC app, over the camera texture, around the edges and going to the center.
- Sent this line renderer’s points to the Magic Leap.
- Instantiated this line using a CustomLineRenderer script on the Magic Leap app, in mid air, using a simple script (below).
- On the PC app, we can see that the CustomLineRenderer is not properly centered as expected, and is not completely visible (the part of the line around the edges can’t be seen).
Script used to render the line in the ML app:
Vector2 flat_coords;
Vector2 viewport = new(720, 920);
Vector2 feed = new(1440, 1080);
Vector2 squashedFeed;

private void Update()
{
    // Update the points every frame.
    hittedObject = null;
    this.points.Clear();
    foreach (Vector2 point in point_array)
    {
        flat_coords = new Vector2(point.x, point.y);

        // Remap the normalized feed coordinates ([0,1] over the 1440x1080 texture)
        // into viewport coordinates, assuming the 720x920 viewport is a centered crop.
        squashedFeed = feed / viewport;
        Vector2 cornerBL = new(-(squashedFeed.x - 1) / 2, -(squashedFeed.y - 1) / 2);
        Vector2 cornerTR = new(1 + (squashedFeed.x - 1) / 2, 1 + (squashedFeed.y - 1) / 2);
        Vector2 newPoint = new(Mathf.Lerp(cornerBL.x, cornerTR.x, flat_coords.x), Mathf.Lerp(cornerBL.y, cornerTR.y, flat_coords.y));

        Ray ray = cam.ViewportPointToRay(new Vector3(newPoint.x, newPoint.y));
        origin = ray.origin;

        // Place the point on the spatial mesh when it is hit within Distance,
        // otherwise at a fixed Distance along the ray.
        if (Physics.Raycast(ray, out RaycastHit ray_hit, this.Distance * 2, LayerMask.GetMask("Spatial Awareness")) && ray_hit.distance <= this.Distance)
        {
            // Small world-space Z offset (presumably to avoid z-fighting with the mesh).
            point_pos = origin + ray.direction * ray_hit.distance - new Vector3(0, 0, 0.01f);
        }
        else
        {
            point_pos = origin + ray.direction * this.Distance;
        }

        // Remember the ModelContainer (if any) that the ray hit.
        if (ray_hit.collider != null)
        {
            ModelContainer container = ray_hit.collider.GetComponentInParent<ModelContainer>(true);
            if (container != null) hittedObject = container.gameObject;
        }

        this.points.Add(point_pos);
    }

    //line_renderer.positionCount = this.points.Count;
    custom_line_renderer.SetPositions(this.points.ToArray());
}
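For reference, this mapping assumes the 720x920 viewport is a centered, same-pixel-scale crop of the 1440x1080 feed. With these numbers, squashedFeed ≈ (2.0, 1.174), cornerBL ≈ (-0.5, -0.087) and cornerTR ≈ (1.5, 1.087), so a point at the feed’s left edge (x = 0) maps to viewport x = -0.5, well outside the camera frustum. That would explain why the parts of the line near the edges are never visible.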
Some more context, with images.
In this image, you can see the line in green which was drawn on the PC app.
And in red is the CustomLineRenderer on the ML app.
You can see that the center is not aligned, and the edges of the line are not visible.
The lines are similar because we are doing some transformations (code above).
But if we do not apply any transformation to the points, then the result in the ML app looks like this (blue lines are projected into the world; ignore the red lines in the following image):
To make sure I understand: you have a PC app in which you display the ML camera image. On the PC you can select a pixel on the image and expect that point to be cast into the world in front of the user wearing the Magic Leap 2.
Is the following flow correct:
User clicks on an image on the PC and sends the 2D pixel position to the Magic Leap.
The Magic Leap then converts the 2D pixel position to world space.
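In code, the receive side of that flow might start like this; a minimal sketch assuming the PC sends normalized [0,1] coordinates (origin bottom-left, per the first post) and the feed is 1440x1080:

// Hypothetical helper: convert a normalized [0,1] coordinate (origin
// bottom-left) into a pixel position on the 1440x1080 camera feed.
Vector2 NormalizedToFeedPixel(Vector2 normalized, int feedWidth = 1440, int feedHeight = 1080)
{
    // Depending on the texture convention of the camera image, the Y axis
    // may need to be flipped (1 - normalized.y) before this conversion.
    return new Vector2(normalized.x * feedWidth, normalized.y * feedHeight);
}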
Have you tried the following example? I noticed that it says “Screen Point”, but that might be misleading, as it is referring to the pixel position:
CastRayFromScreenToWorldPoint()
You will need to make sure that you are undistorting the pixel position, since the raw camera stream has some lens distortion.
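For intuition, here is a minimal sketch of the pinhole back-projection that such an undistortion step would feed into (the distortion correction itself is omitted; the focal length and principal point are assumed to come from the camera’s intrinsic calibration parameters):

// Pinhole back-projection sketch: map an (already undistorted) pixel position
// to a ray direction in camera space. focalLength and principalPoint are in
// pixels, taken from the camera intrinsics.
// Note: image-space Y typically grows downward, so a Y flip may be required
// depending on conventions.
Vector3 PixelToCameraRay(Vector2 pixel, Vector2 focalLength, Vector2 principalPoint)
{
    float x = (pixel.x - principalPoint.x) / focalLength.x;
    float y = (pixel.y - principalPoint.y) / focalLength.y;
    // +Z is forward in camera space; transform by the camera pose
    // (e.g. from MLCVCamera.GetPose) to obtain a world-space ray.
    return new Vector3(x, y, 1f).normalized;
}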
If you are using OpenXR, I recommend enabling the Secondary View feature to make sure the virtual content is not offset in the MR image.
If you use the OpenXR APIs, you will also need to make sure that Perception Snapshots are enabled and that the tracking origin is set to Unbounded so that the MLCVCamera.GetPose function works properly.
Yes, you got it right.
The PC app renders the ML camera image (1440x1080) and the user can draw lines on top of it, which are then sent to the Magic Leap as Vector2s with values between 0 and 1.
I tried the pixel-to-world-position methods you sent, specifically CastRayFromScreenToWorldPoint, with the example script that shows how to position 5 objects (cubes) at the corners. But it does not work correctly: the 5 cubes were just floating around, constantly moving; really weird behaviour. I expected the ML camera image on the PC app to show the cubes anchored in the corners of the texture.
I’m using OpenXR, and I made sure I have Perception Snapshots, Secondary View and Unbounded Tracking Mode enabled.
I see. Are you simply using one of the MLCamera example scripts with the additional CastRayFromWorldPoint or CastRayFromViewPort method?
Does it behave the same way when using the CVCamera?
Yes, I’m using the ML camera example unchanged (Async Camera Initialization), together with CastRayFromScreenToWorldPoint().
Switched to the CV camera and changed the camera’s width & height to 1920x1080; still not working properly. The cubes don’t move as much as before, but they are definitely not at the corners.
Do you mind commenting out the raycast section of the CameraUtilities? You may want to add a LayerMask parameter to prevent collisions with the graphic or unwanted objects.
Lines 48-54 of CameraUtilities.cs:
// // Raycast against the WorldMesh to find where the ray intersects.
// // TODO: Add a layer mask filter to prevent unwanted obstructions.
// if (Physics.Raycast(ray, out RaycastHit hit, 100))
// {
// hitPoint = hit.point;
// _rayLength = hit.distance;
// }
If you want to add a layer mask, it would be something like:
LayerMask worldMeshMask = 1 << LayerMask.NameToLayer("WorldMesh");
if (Physics.Raycast(ray, out RaycastHit hit, 100f, worldMeshMask))
Commented out the raycast section.
The cube objects are still not anchored to the corners.
These objects are not visible on the Magic Leap, except when I look up/down (at the ceiling/floor).
When I stop the ML camera, the objects are not updated anymore, and indeed it seems that they have the size of the ML camera image; they just aren’t positioned correctly at the corners while the script is running.
I made a video to show.
prof.zip (11.9 MB)
Hmm
When I tested the script I did not run into this issue. Is the result the same when you use the CV camera? Is your XR origin / XR Rig at 0,0,0?
Just tried it, and yes, it seems to be the same.
Also, I don’t think I can use the CV camera in my use case because I need virtual content to be visible.
Do you mind testing the logic and the script in an empty scene that does not use MRTK and instead uses the base XRI rig? Also make sure that the visualizations for the 5 cubes do not have a collider on the object.
Also try to set the Unbounded mode using the following script:
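For reference, a minimal sketch of what such a script can look like, using Unity’s XRInputSubsystem API (the script attached in the original thread may differ):

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

// Sketch: request the Unbounded tracking origin mode on all active
// XR input subsystems. Attach to any GameObject in the scene.
public class ForceUnboundedTrackingMode : MonoBehaviour
{
    private void Start()
    {
        var subsystems = new List<XRInputSubsystem>();
        SubsystemManager.GetSubsystems(subsystems);
        foreach (XRInputSubsystem subsystem in subsystems)
        {
            // Requires the Magic Leap 2 Reference Spaces OpenXR feature;
            // otherwise the request will fail.
            if (!subsystem.TrySetTrackingOriginMode(TrackingOriginModeFlags.Unbounded))
            {
                Debug.LogWarning("Could not set the tracking origin mode to Unbounded.");
            }
        }
    }
}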
Also make sure you have Magic Leap 2 Reference Spaces enabled under the OpenXR Feature Groups.
Tested with an empty scene with XRI.
I found out that the problem was that I needed to enable Reference Spaces and also force Unbounded mode with the above script. Now it also works with MRTK.
But still, I’m not getting the desired behaviour: the result looks the same whether I use this CameraUtilities script or not.
Results: the following image shows a spiral line rendered on the PC app on a Canvas, drawn on top of the ML camera texture (received via WebRTC).
This spiral is an array of points, which are sent to the ML app.
Upon receiving it on the ML app and using the CameraUtilities CastRayFromScreenToWorldPoint, the result is this:
The following image shows the overlap of both lines in the PC view; you can see that they are very different. There are no collisions (no layer mask).
Unfortunately, I wasn’t able to reproduce the issue on my end. Below are the scripts I’m using.
You could try visualizing the RGB image on a RawImage (or on a quad that matches the camera texture’s aspect ratio) and then spawning a cube at the clicked position. This approach simplifies the pipeline and should help you pinpoint where the problem is occurring.
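A sketch of the click-to-UV part of that idea (a hypothetical component; the RawImageViewportClick script attached below may differ):

using UnityEngine;
using UnityEngine.EventSystems;

// Sketch: convert a pointer click on a RawImage into normalized [0,1]
// coordinates (origin bottom-left), ready to be passed to the cast methods.
public class RawImageClickToUV : MonoBehaviour, IPointerClickHandler
{
    public void OnPointerClick(PointerEventData eventData)
    {
        var rectTransform = (RectTransform)transform;
        if (RectTransformUtility.ScreenPointToLocalPointInRectangle(
                rectTransform, eventData.position, eventData.pressEventCamera, out Vector2 localPoint))
        {
            // Rect.PointToNormalized maps the local point into [0,1] on both axes.
            Vector2 uv = Rect.PointToNormalized(rectTransform.rect, localPoint);
            Debug.Log($"Clicked viewport UV: {uv}");
        }
    }
}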
I used the Magic Leap Unity Examples project as a starting point. The Hello Cube scene specifically.
Here is a way you can test locally.
Prerequisites
- Clean Magic Leap Unity Examples Project
- A script that sets the tracking mode to Unbounded
- The Hello Cube Scene
1. Create the clickable viewport
- Add the script to the scene that sets the tracking to unbounded.
- I added the RawImage to the instruction canvas and disabled the background.
- Set the scale of the Raw Image to <0.34, 0.34, 0.34>.
- Added the attached RawImageViewportClick script to the raw image.
2. Create the Simple Visual
- Create a small cube called “SimpleVisual” and scale it to <0.05, 0.05, 0.05>.
- Remove the box collider from the cube.
- Create a prefab from the SimpleVisual object and then remove it from your scene.
3. Add the camera logic
- Add the MagicLeapRGBCamera script to the scene.
- Assign the Visualizer and Viewport properties in the Inspector.
Scripts:
MagicLeapRGBCamera.cs (14.6 KB)
RawImageViewportClick.cs (4.6 KB)
CameraUtilities.cs (6.9 KB)
Note: The CameraUtilities.cs script was modified to allow you to set a distance and a raycast layer if desired. Default values are provided, but they can be changed when calling the functions in the utility script, such as CastRayFromViewPortToWorldPoint(..) or CastRayFromScreenToWorldPoint(..).
Learn More
/// <summary>
/// Casts a ray from a 2D viewport position to a point in world space.
/// This method is used as Unity’s Camera.ScreenToWorld functions are limited to Unity’s virtual cameras,
/// whereas this method provides a raycast from the actual physical RGB camera.
/// </summary>
/// <param name="icp">Intrinsic calibration parameters of the camera.</param>
/// <param name="cameraTransformMatrix">Transform matrix of the camera.</param>
/// <param name="viewportPoint">2D viewport point to be cast.</param>
/// <param name="depth">Fallback distance in metres along the ray.</param>
/// <param name="layerMask">Layers for the physics raycast. 0 (default) disables raycasting entirely and always returns the fallback point.</param>
/// <param name="maxRayDistance">Maximum distance (metres) for the physics raycast when the layer mask is non-zero.</param>
/// <returns>Either the hit point on the supplied layers or (fallback) the point at <paramref name="depth"/> metres along the ray.</returns>
public static Vector3 CastRayFromViewPortToWorldPoint(MLCamera.IntrinsicCalibrationParameters icp, Matrix4x4 cameraTransformMatrix, Vector2 viewportPoint, float depth = 0.4f, LayerMask layerMask = default, float maxRayDistance = 100f)
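A hypothetical call site, assuming the intrinsics and camera pose have been cached from the camera callbacks (variable names here are illustrative):

// Sketch: cast a clicked viewport point against the world mesh,
// falling back to a point 1 m along the ray when nothing is hit.
// cachedIntrinsics and cachedCameraTransform are assumed to be stored
// when the camera frame (and its pose) arrive.
Vector3 worldPoint = CameraUtilities.CastRayFromViewPortToWorldPoint(
    cachedIntrinsics,
    cachedCameraTransform,
    clickedUV,              // normalized [0,1], origin bottom-left
    depth: 1.0f,
    layerMask: LayerMask.GetMask("WorldMesh"),
    maxRayDistance: 5f);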
You were right; it works correctly on a clean Magic Leap Unity Examples project.
I’ll try to use a simple MRTK scene with this logic, and then if it works I’ll try to use it in my use case.
Thank you.