Hello.
I’m trying to figure out how to properly instantiate objects in the world based on mouse input over the ML2 MR camera feed that is sent to a desktop app.
On the computer app, [0,0] is bottom-left and [1,1] is top-right, and the camera texture received is 1440x1080. I noticed that the Magic Leap viewport in Unity is 720x920.
My problem is that I can’t seem to find the correct way to instantiate the objects in the world. There seems to be a small offset which I can’t find the proper way to address.
I tried to use the “CameraUtilities” present here “Localizing detected faces using ML2 transformation matrix?” but it made things worse.
A simple exercise I made:
- created a LineRenderer (in the form of a spiral) on the PC app, over the camera texture, around the edges and going to the center;
- sent these line points to the Magic Leap;
- instantiated this line using a CustomLineRenderer script on the Magic Leap app, in mid air, using a simple script (below);
- on the PC app, we can see that the CustomLineRenderer is not properly centered as expected, and is not completely visible (the line around the edges can’t be seen).
Script used to render the line in the ML app:
Vector2 flat_coords;
Vector2 viewport = new(720, 920);
Vector2 feed = new(1440, 1080);
Vector2 squashedFeed;

private void Update()
{
    // Update points
    hittedObject = null;
    this.points.Clear();
    foreach (Vector2 point in point_array)
    {
        flat_coords = new Vector2(point.x, point.y);

        // Remap the normalized feed coordinates to viewport coordinates,
        // compensating for the aspect-ratio difference between the two.
        squashedFeed = feed / viewport;
        Vector2 cornerBL = new(-(squashedFeed.x - 1) / 2, -(squashedFeed.y - 1) / 2);
        Vector2 cornerTR = new(1 + (squashedFeed.x - 1) / 2, 1 + (squashedFeed.y - 1) / 2);
        Vector2 newPoint = new(Mathf.Lerp(cornerBL.x, cornerTR.x, flat_coords.x), Mathf.Lerp(cornerBL.y, cornerTR.y, flat_coords.y));

        Ray ray = cam.ViewportPointToRay(new Vector3(newPoint.x, newPoint.y));
        origin = ray.origin;

        // Get the shortest distance to the origin
        if (Physics.Raycast(ray, out RaycastHit ray_hit, this.Distance * 2, LayerMask.GetMask("Spatial Awareness")) && ray_hit.distance <= this.Distance)
        {
            point_pos = origin + ray.direction * ray_hit.distance - new Vector3(0, 0, 0.01f);
        }
        else
        {
            point_pos = origin + ray.direction * this.Distance;
        }

        // Keep track of the ModelContainer that was hit, if any.
        if (ray_hit.collider && ray_hit.collider.GetComponentInParent<ModelContainer>(true))
            hittedObject = ray_hit.collider.GetComponentInParent<ModelContainer>(true).gameObject;

        this.points.Add(point_pos);
    }

    //line_renderer.positionCount = this.points.Count;
    custom_line_renderer.SetPositions(this.points.ToArray());
}
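For reference, the corner remap in the script can be sanity-checked outside Unity. This is just the same arithmetic in plain Python (using the 1440x1080 feed and 720x920 viewport values from my setup), not any ML-specific code:

```python
# Sketch of the normalized-coordinate remap used in the script above.
# Maps a point in [0,1]^2 over the camera feed to viewport coordinates,
# scaling by the per-axis ratio between feed and viewport.

def remap(point, feed=(1440.0, 1080.0), viewport=(720.0, 920.0)):
    sx, sy = feed[0] / viewport[0], feed[1] / viewport[1]  # "squashed" feed
    bl = (-(sx - 1) / 2, -(sy - 1) / 2)                    # bottom-left corner
    tr = (1 + (sx - 1) / 2, 1 + (sy - 1) / 2)              # top-right corner
    # Linear interpolation between the two corners (Mathf.Lerp equivalent).
    return (bl[0] + (tr[0] - bl[0]) * point[0],
            bl[1] + (tr[1] - bl[1]) * point[1])

# The image centre stays at the viewport centre...
print(remap((0.5, 0.5)))
# ...but the feed's corners land outside the [0,1] viewport range, which is
# why points drawn near the edges end up off-screen on the ML.
print(remap((0.0, 0.0)))
```

This confirms the remap is symmetric around the centre, so it can't by itself explain the off-centre offset I'm seeing.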
Some more context, with images.
In this image, you can see the line in green which was drawn on the PC app.
And in red is the CustomLineRenderer on the ML app.
You can see that the center is not aligned, and the edges of the line are not visible.
The lines are similar because we are doing some transformations (code above).
But if we do not do any transformation to the points, then the result in the ML App is something like this (blue lines are projected into world, ignore the red lines on the following image):
To make sure I understand: you have a PC app in which you display the ML camera image. On the PC you can select a pixel on the image and expect it to be cast to the corresponding point in front of the user who is wearing the Magic Leap 2.
Is the following flow correct:
User clicks on an image on the PC and sends the 2D pixel position to the Magic Leap.
The Magic Leap then converts the 2D pixel position to world space.
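Conceptually, step 2 is a pinhole back-projection: the pixel is mapped through the camera intrinsics to a ray in camera space, and that ray is then transformed into world space by the camera pose. A minimal sketch of the intrinsics part, with illustrative focal length and principal point values (not the real ML2 calibration), ignoring distortion and the pose transform:

```python
import math

def pixel_to_ray(px, py, fx, fy, cx, cy):
    """Back-project a pixel to a unit ray direction in camera space
    (pinhole model, +z pointing forward out of the camera)."""
    x = (px - cx) / fx
    y = (py - cy) / fy
    n = math.sqrt(x * x + y * y + 1.0)
    return (x / n, y / n, 1.0 / n)

# With made-up intrinsics for a 1440x1080 image (the real values come from
# the camera's intrinsic calibration), the principal point maps straight ahead:
ray = pixel_to_ray(720, 540, fx=1000.0, fy=1000.0, cx=720.0, cy=540.0)
print(ray)
```

In the SDK this is what the helper does for you, plus the camera-to-world pose lookup at the frame's timestamp.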
Have you tried the following example. I noticed that it says “Screen Point” but that might be misleading as it is referring to the Pixel Position:
CastRayFromScreenToWorldPoint()
You will need to make sure that you are undistorting the pixel position since the raw camera stream has some pinhole distortion.
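For illustration, a common way to undistort is to invert a radial (Brown–Conrady style) model by fixed-point iteration. The coefficients below are made up; the real ones come with the camera intrinsics:

```python
def distort(p, k1, k2):
    """Apply a two-term radial distortion to a normalized image point."""
    x, y = p
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * f, y * f)

def undistort(p, k1, k2, iterations=10):
    """Invert the radial model by fixed-point iteration: repeatedly divide
    the distorted point by the distortion factor of the current estimate."""
    xd, yd = p
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return (x, y)

# Round-trip with made-up coefficients: distorting and then undistorting
# should recover the original point.
p = (0.3, 0.2)
d = distort(p, k1=0.1, k2=0.01)
u = undistort(d, k1=0.1, k2=0.01)
```

The effect is strongest near the image edges, which is consistent with the edge points of your line being the ones that go missing.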
If you are using Open XR, I recommend enabling the Secondary View feature to make sure the virtual content is not offset in the MR Image.
If you use the OpenXR APIs you will also need to make sure that Perception Snapshots are enabled and that the tracking origin is set to Unbounded so that the MLCVCamera.GetPose function works properly:
Yes, you got it right.
The PC app renders the ML camera image (1440x1080) and the user can draw lines on top of it, which are then sent to the Magic Leap user in Vector2’s with values between 0 and 1.
I tried the pixel-to-world-position methods you sent, specifically CastRayFromScreenToWorldPoint with the example script that shows how to position 5 objects (cubes) at the corners. But it does not work correctly. The 5 cubes were just floating around, constantly moving; really weird behaviour. I expected that the ML camera image on the PC app would show the cubes anchored in the corners of the texture.
I’m using OpenXR, and I made sure I have Perception Snapshots, Secondary View and Unbounded Tracking Mode enabled.
I see. Are you simply using one of the MLCamera examples scripts with the additional CastRayFromWorldPoint or CastRayFromViewPort method?
Does it behave the same way when using the CVCamera?
Yes, I’m using the ML camera example unchanged (Async Camera Initialization), together with CastRayFromScreenToWorldPoint().
Switched to the CV camera, changed the camera’s width & height to 1920x1080; still not working properly. The cubes don’t move as much as before, but they are definitely not at the corners.
Do you mind commenting out the raycast section of the Camera Utilities? You may want to add a LayerMask parameter to prevent collisions with the graphic or unwanted objects.
Lines 48-54 of CameraUtilities.cs:
// // Raycast against the WorldMesh to find where the ray intersects.
// // TODO: Add a layer mask filter to prevent unwanted obstructions.
// if (Physics.Raycast(ray, out RaycastHit hit, 100))
// {
// hitPoint = hit.point;
// _rayLength = hit.distance;
// }
If you want to add a layer mask it would be something like:
LayerMask worldMeshMask = 1 << LayerMask.NameToLayer("WorldMesh");
if (Physics.Raycast(ray, out RaycastHit hit, 100f, worldMeshMask))
Commented out the raycast section.
Cube objects still not anchored to the corners.
These objects are not visible on the Magic Leap, except when I look up/down (at the ceiling/floor).
When I stop the ML camera, the objects are not updated anymore, and indeed it seems that they match the size of the ML camera image; they just aren’t positioned correctly at the corners while the script is running.
I made a video to show.
prof.zip (11.9 MB)
Hmm
When I tested the script I did not run into this issue. Is the result the same when you use the CV camera? Is your XR Origin / XR Rig at (0,0,0)?