Reprojection and pointer pose API in native C_API

I am trying to implement re-projection, and I was wondering whether the native C API provides a way to override the re-projection matrix, because as far as I know, OpenXR does require this facility to exist in the API.

On a second note, is there an API to directly get the pointer pose, something like the beam coming out of the controller, but from the hand? I am implementing an interactive application that doesn't require the controller, so such an API would be useful if it exists.



Hi, thanks for your post. I talked with our C API engineers, and you could potentially look into the MLGestureClassification C APIs. In the ml_gesture_classification.h header we have the MLGestureClassificationStaticData.hand_interaction point, which could be handy for your use case. Please let me know if this unblocks you.

For your first question, we're still discussing it internally and I'll report back as soon as possible.

For your first question, I confirmed with our engineers that the C API does not allow you to customize the re-projection matrix. Please let me know if you have any additional questions.

If you think overriding the projection matrix is important, can you let us know your use case so we can submit it to the Voice of Customer team?


Thanks for your reply.
The use case is that we have a streaming service over a network: we render on the server and send the rendered image to the client. To compensate for network latency, as well as the time it takes to render the image, we use reprojection. Having an API that allows overriding the matrix directly would simplify the operation, and I hope it would also be more performant than writing custom vertex and fragment shaders to do it.


Thanks for your reply,

From my understanding, this interaction point is static? That is, it doesn't get updated?
This is in contrast to Hand Tracking or Head Tracking, where there are both get_static_data and get_data methods.

For Gesture Classification, however, hand_interaction seems to be part of the static data only, so how does it get updated?
The documentation mentions it gets updated based on the posture, but I can't find the method to update/get the hand_interaction point based on the newest snapshot/detected hand.


For the record, this is how I am doing it right now, but it seems wrong to me.

//Called once at start up
    UNWRAP_MLRESULT_FATAL(MLGestureClassificationGetStaticData(gesture_handle_, &gesture_static_data));


//-------- Called on every frame ------------//
    MLSnapshot *snapshot = nullptr;
    MLPerceptionGetSnapshot(&snapshot);
    MLTransform hand_transform = {};
    if (MLResult_Ok == MLSnapshotGetTransform(snapshot, &gesture_static_data.hand_interaction[0], &hand_transform)) {
        // ... log/use hand_transform (left-hand interaction point) ...
    }
    MLPerceptionReleaseSnapshot(snapshot);

The reason I believe this is wrong is that the left-hand interaction point transform is logged even when there is no hand in the scene.
I believe this method should return PoseNotFound, as per the documentation of MLSnapshotGetTransform.
