ScreenPointToRay offset

Give us as much detail as possible regarding the issue you're experiencing.

Unity Editor version: 2022.2.21f1
ML2 OS version: 1.2.0
MLSDK version: 1.2.0
Host OS: (Windows/MacOS) both


A scene has been set up with WebRTC and Meshing. WebRTC is transmitting the video, and I am trying to place game objects remotely. We transmit the click location on the video from the remote side, then use the following code to place the game object:

Ray newTouchRay = Camera.main.ScreenPointToRay(touchPosition);
if (Physics.Raycast(newTouchRay, out RaycastHit hitInfo, 100, LayerMask.GetMask("Plane")))
{
    // Code to place the game object at hitInfo.point
}

There is an offset in the placement. The markers have a larger offset the farther the clicks are from the centre of the screen, and the offset is directed away from the centre of the screen.


Hi @T_ReKT,

Welcome to the Magic Leap 2 Forums. We are so grateful to have you here engaging with us.

Would you mind sharing a video? This will help me gain a better understanding of expected behavior.

If not, would you mind sharing the reproduction steps needed to recreate this issue?




Sure, here is a video. We transmit the mouse-click locations from the WebRTC video stream in a browser and place markers on the device using the method described above.

What we are noticing is that the offset becomes larger as we move towards the corners of the screen.
Placing at the exact centre works spot on.

To make the clicks align with the real world, you will need to use the camera's intrinsic and extrinsic values to convert pixel positions to world positions.

Here is an example that can be used with the CV camera image example:


I tried this solution. It seems to work for the corners and the centre, but anywhere else the markers are placed with an offset. This time, though, the offset is towards the centre of the screen rather than away from it.

The cyan markers are placed exactly as defined in the script (four in the corners and one in the centre of the screen). An extra cyan marker is placed at the touch point as well. The brown markers are placed by calling Physics.Raycast with the "worldRay".

I have not been able to reproduce that issue on my end. The points will appear to be misaligned if the depth value does not match the actual depth of the object you are raycasting against. I would recommend changing the function to use a physics raycast against the mesh generated by Magic Leap.
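As a minimal sketch of that suggestion: cast the computed world ray against the meshing layer so the depth comes from the actual surface instead of a fixed value. "Meshing" is an assumed layer name here, as is the markerPrefab field; use whatever layer your meshing component assigns to the generated mesh.

using UnityEngine;

public class MeshRaycastPlacement : MonoBehaviour
{
    public GameObject markerPrefab;

    public void PlaceMarker(Ray worldRay)
    {
        // Raycast only against the generated world mesh (assumed layer name).
        int meshMask = LayerMask.GetMask("Meshing");
        if (Physics.Raycast(worldRay, out RaycastHit hit, 100f, meshMask))
        {
            // Depth now comes from the mesh intersection, not a guessed value.
            Instantiate(markerPrefab, hit.point, Quaternion.LookRotation(hit.normal));
        }
    }
}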

If you want more precise alignment, I recommend expanding the function to take the distortion coefficients into account.
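A sketch of what that expansion could look like, assuming a Brown-Conrady model with coefficients ordered (k1, k2, p1, p2, k3); the ordering is an assumption, so check how your SDK populates its distortion array. The undistorted point would replace the normalized image point before it is turned into a ray.

using UnityEngine;

public static class DistortionUtil
{
    // Iteratively undistort a normalized image point so that
    // distort(result) == distorted. Coefficient order is assumed.
    public static Vector2 Undistort(Vector2 distorted, double[] c)
    {
        float k1 = (float)c[0], k2 = (float)c[1];
        float p1 = (float)c[2], p2 = (float)c[3];
        float k3 = (float)c[4];

        // Start from the distorted point and refine a few times.
        Vector2 p = distorted;
        for (int i = 0; i < 5; i++)
        {
            float r2 = p.x * p.x + p.y * p.y;
            float radial = 1 + r2 * (k1 + r2 * (k2 + r2 * k3));
            float dx = 2 * p1 * p.x * p.y + p2 * (r2 + 2 * p.x * p.x);
            float dy = p1 * (r2 + 2 * p.y * p.y) + 2 * p2 * p.x * p.y;
            p = new Vector2((distorted.x - dx) / radial, (distorted.y - dy) / radial);
        }
        return p;
    }
}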

The brown arrows in the video above are from the physics raycast.

using UnityEngine;
using UnityEngine.XR.MagicLeap;

public class SimpleCameraRaycast : MonoBehaviour
{
    public Vector3 CastRayFromPixelToWorldPoint(int width, int height, Vector2 pixelPosition, MLCameraBase.IntrinsicCalibrationParameters parameters, Matrix4x4 cameraTransformationMatrix, float depth)
    {
        // Step 1: Normalize the image coordinates relative to the principal point
        Vector2 normalizedImagePoint = new Vector2(
            (pixelPosition.x - parameters.PrincipalPoint.x) / width,
            (pixelPosition.y - parameters.PrincipalPoint.y) / height);

        // Account for aspect ratio
        float aspectRatio = width / (float)height;

        // Account for FOV
        float verticalFOVRad = parameters.FOV * Mathf.Deg2Rad;
        float horizontalFOVRad = 2 * Mathf.Atan(Mathf.Tan(verticalFOVRad / 2) * aspectRatio);
        normalizedImagePoint.x *= Mathf.Tan(horizontalFOVRad / 2);
        normalizedImagePoint.y *= Mathf.Tan(verticalFOVRad / 2);

        // Step 2: Convert normalized image coordinates to camera coordinates
        Vector3 cameraPoint = new Vector3(normalizedImagePoint.x, normalizedImagePoint.y, 1);

        // Step 3: Create a 3D ray from the camera origin in the direction of the camera-space point
        Ray cameraRay = new Ray(Vector3.zero, cameraPoint);

        // Step 4: Convert the ray to world coordinates
        Quaternion rotation = cameraTransformationMatrix.rotation;
        Vector3 position = cameraTransformationMatrix.MultiplyPoint(cameraRay.origin);
        Vector3 direction = rotation * cameraRay.direction;
        Ray worldRay = new Ray(position, direction);

        // Return the point in world space at the specified depth along the ray
        return worldRay.GetPoint(depth);
    }
}
In this function, I return the world ray and perform the raycast on it.

One thing I didn't really understand is the "1" in Vector3 cameraPoint = new Vector3(normalizedImagePoint.x, normalizedImagePoint.y, 1);

It seems to also change the position of the markers placed.

I am performing the raycast on the worldRay.

Another thing: the stream is from the Main camera, but the pose is from the CV camera. Could that be causing issues?
If so, I don't think there is any way to capture mixed reality content with the CV camera. Is it possible to get the transformation matrix of the Main camera? I could not find such APIs in the docs.

@T_ReKT Yes, you can get the camera information from both main and CV streams:
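A hedged sketch of pulling both pieces of information inside the camera frame callback: the member names used here (ResultExtras.Intrinsics, VCamTimestamp, MLCVCamera.GetFramePose) are from the ML2 Unity SDK as I understand it, so verify them against your SDK version before relying on this.

using UnityEngine;
using UnityEngine.XR.MagicLeap;

public class CameraFrameInfo : MonoBehaviour
{
    // Intended to be subscribed to the camera's video-frame-available event.
    private void OnCaptureVideoFrame(MLCamera.CameraOutput output,
        MLCamera.ResultExtras extras, MLCamera.Metadata metadata)
    {
        // Intrinsics arrive with each frame; the pose is looked up by the
        // frame's timestamp so it matches the moment of capture.
        if (extras.Intrinsics.HasValue &&
            MLCVCamera.GetFramePose(extras.VCamTimestamp, out Matrix4x4 pose).IsOk)
        {
            MLCameraBase.IntrinsicCalibrationParameters intrinsics = extras.Intrinsics.Value;
            // intrinsics + pose feed directly into CastRayFromPixelToWorldPoint.
        }
    }
}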

The "1" in the camera point vector makes it the direction of the ray, with the Z axis being the forward direction. Then we transform the direction from local to world space relative to the CV camera pose.
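As a small illustration of that point (my own example, not from the thread): (x, y, 1) is a direction towards the plane one unit in front of the camera, so scaling the whole vector leaves the ray unchanged, while changing only the Z component tilts the direction, which is why editing the "1" moves the markers.

using UnityEngine;

public static class RayDirectionDemo
{
    public static void Demo(Vector2 n)
    {
        // Scaling the whole vector: same ray after normalization.
        Vector3 d1 = new Vector3(n.x, n.y, 1f).normalized;
        Vector3 d2 = (new Vector3(n.x, n.y, 1f) * 5f).normalized;
        Debug.Log(Vector3.Dot(d1, d2)); // ~1: identical direction

        // Changing only Z: a genuinely different direction (tilted ray).
        Vector3 d3 = new Vector3(n.x, n.y, 2f).normalized;
        Debug.Log(Vector3.Dot(d1, d3)); // < 1 whenever n != (0,0)
    }
}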


I have filed a bug ticket regarding this issue. I have been able to reproduce the issue you mentioned when using Mixed Reality Capture.

There seems to be an issue with the original script. I have updated the script in the previous post with the correct function. See the Camera Utility script that has been added: