Trigger on controller no longer sends input after switching scenes

The trigger seems to only send input intermittently. By intermittently, I mean that in some builds it works, while in others it doesn't. In this latest case, I have found that when I switch scenes, if the trigger works in one scene it will not work in the next scene.

Specifically, I am talking about calling this function: `InputSubsystem.Extensions.Controller.AttachTriggerListener`.
If I call that function in my start scene and attach an event handler, most of the time it receives the trigger events. However, if I switch scenes and call that function again, I no longer receive trigger events. I have tried calling the converse `InputSubsystem.Extensions.Controller.RemoveTriggerListener` function upon disposing or destroying scene elements, but that has not worked. Then I thought that maybe the attach can only be called once, so I refactored my controller wrapper class to act as a singleton and only call this function once, never calling the remove function. That did not work either. It seems like an issue with the Magic Leap Unity SDK.
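For reference, the singleton wrapper I described looks roughly like this. This is a minimal sketch of my approach, not my exact class; in particular, the callback delegate shape for `AttachTriggerListener` shown here is an assumption and should be checked against the SDK version you have installed:

```csharp
using UnityEngine;
using UnityEngine.XR.MagicLeap;

public class ControllerTriggerWrapper : MonoBehaviour
{
    public static ControllerTriggerWrapper Instance { get; private set; }

    private void Awake()
    {
        // Survive scene loads so the listener is attached exactly once.
        if (Instance != null) { Destroy(gameObject); return; }
        Instance = this;
        DontDestroyOnLoad(gameObject);
        InputSubsystem.Extensions.Controller.AttachTriggerListener(OnTriggerEvent);
    }

    private void OnDestroy()
    {
        if (Instance == this)
        {
            // In the singleton variant I never reach this in practice,
            // but the remove call is here for completeness.
            InputSubsystem.Extensions.Controller.RemoveTriggerListener(OnTriggerEvent);
            Instance = null;
        }
    }

    // Assumed callback shape (controller id, trigger event, trigger value);
    // verify against your installed SDK's AttachTriggerListener signature.
    private void OnTriggerEvent(ushort controllerId,
        InputSubsystem.Extensions.Controller.MLInputControllerTriggerEvent triggerEvent,
        float value)
    {
        Debug.Log($"Trigger {triggerEvent} value={value}");
    }
}
```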

Unity Editor version: 2022.2.5f1
ML2 OS version: 1.3.1 (???) Though I think this is a typo and it should be 1.3.0, based on previously having 1.3.0-dev1/2 and prior naming conventions having the dev1/2 releases followed by the non-dev release
MLSDK version: 1.3.0
Host OS: (Windows/MacOS) macOS Ventura 13.1

Error messages from logs (syntax-highlighting is supported via Markdown): None

Hi @rscanlo2,

Thank you for bringing this to our attention. I'll go ahead and look into this and try to reproduce it.



Hmm. I'm unable to reproduce this issue on my end. What methods are you using to switch scenes?



Hi Leapers,
I have the same problem after loading other scenes with SceneManagement.LoadSceneMode.Single.
No button interaction of the controller in the newly loaded scene. The same happens if you go into the ML main menu and switch back to the standalone Unity app.

Hi El,

I am using the following method to load my scene:

`SceneManager.LoadScene(sceneName, LoadSceneMode.Single);`

I have also tried calling SceneManager.UnloadSceneAsync(currentScene); after calling LoadScene to ensure the scene is unloaded and all of the objects are destroyed. In the OnDestroy method of my scene-switching component I am unsubscribing from all events related to the controllers, in hopes that this will prevent any unhandled exceptions that might be stopping the controller from functioning properly in the next scene.
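The setup I'm describing is essentially this (a simplified sketch; `Switch` and the unsubscribe call are stand-ins for my actual scene-switching component):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

public class SceneSwitcher : MonoBehaviour
{
    public void Switch(string sceneName)
    {
        // LoadSceneMode.Single unloads the current scene and destroys its
        // objects, so OnDestroy below runs as part of the transition.
        SceneManager.LoadScene(sceneName, LoadSceneMode.Single);
    }

    private void OnDestroy()
    {
        // Unsubscribe from all controller events here so no stale handlers
        // throw after this component's scene is torn down, e.g.:
        // InputSubsystem.Extensions.Controller.RemoveTriggerListener(OnTriggerEvent);
    }
}
```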


Side note: I've noticed a lot of the testing of issues reported here happens by checking against the examples project. I've used this project in the past to try to understand how to implement certain functionality, but it is very opaque. I usually end up giving up and copying the scripts into my own projects and hacking at them until I get them to do what I want. But many of the scripts are very bloated and tie directly to UI components. My stuff ultimately works, but I usually don't end up understanding how or why it works (which, for a 20+ year veteran of writing C# code, is very atypical and unsettling). Particularly confusing is the use of XR Input and Actions. If you are not well versed in the Unity XR Interaction Toolkit, it is almost impossible to figure out how the controller interacts with the menu in the examples. This is especially frustrating if you are trying to create a custom interaction, like utilizing hand tracking or eye tracking to manipulate UI elements. I guess my big question is: after a year of having the examples project out in the wild, are there any plans to build some tutorial pages that walk you through, step by step, how to build those examples? It's great to see that this functionality can be implemented in Unity, but the examples don't give you a clue how to actually build them yourself.

Have you encountered our Unity Learn Tutorials? Should it be something like this but more granular? Should it be something completely different? Is it a code readability issue? This feedback is tremendously helpful for improving our developer resources.

Have you tried using the MagicLeapInputs InputActionAsset? Does the issue still occur?



Hi Leapers,

For me, it looks like something in the ML examples is causing the problem, because it does not seem to be destroyed when loading other scenes / in OnDestroy().

Removing the listener in void OnDestroy() doesn't seem to help: the InputSubsystem doesn't deliver data after scene switching, or after going back to the ML main menu and jumping into the Unity app again.

Is there an easy fix?

Are you able to reproduce this issue in the ML examples project?

I would recommend trying the MagicLeapInputs InputActionAsset linked above and checking whether the issue still occurs.

Hi etucker,
The [MagicLeapInputs InputActionAsset] doesn't have that critical code inside... Like rscanlo2 said, `InputSubsystem.Extensions.Controller.AttachTriggerListener` seems to be buggy in some way.

Have you tried using the MagicLeapInputs InputActionAsset ?

I have used them. In fact, I use the MagicLeapInputs.ControllerActions in my controller wrapper class to report Position and Orientation, the state of the Bumper & Menu buttons, as well as to handle the Gesture events. I'll be honest: the Unity InputAction class is confusing. I was able to attach event handlers to some of the events, but it is not clear when "started" or "performed" would be called for each of them. I believe I tried to use it solely in a much earlier version of the ML SDK and was frustrated by the inconsistent results. In addition, for the trigger, I needed the actual value of the trigger, which I could not find in the ControllerActions. Thus I ultimately call AttachTriggerListener in my wrapper class.
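To be concrete, the part of my wrapper that does use ControllerActions looks roughly like this (a simplified sketch of my usage, assuming the generated MagicLeapInputs class and a Bumper action as shown in the SDK docs; `OnBumper` is a hypothetical handler name):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class ControllerActionsExample : MonoBehaviour
{
    private MagicLeapInputs mlInputs;
    private MagicLeapInputs.ControllerActions controllerActions;

    private void Start()
    {
        // Instantiate and enable the generated input action asset wrapper.
        mlInputs = new MagicLeapInputs();
        mlInputs.Enable();
        controllerActions = new MagicLeapInputs.ControllerActions(mlInputs);

        controllerActions.Bumper.performed += OnBumper;
    }

    private void OnBumper(InputAction.CallbackContext ctx)
    {
        Debug.Log("Bumper pressed");
    }

    private void OnDestroy()
    {
        // Unsubscribe and disable to avoid stale handlers across scenes.
        controllerActions.Bumper.performed -= OnBumper;
        mlInputs.Disable();
    }
}
```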

Have you encountered our Unity Learn Tutorials?

I have not seen those before. I will try to work my way through them. Maybe they will give me better insight into how the Unity Input System is supposed to work. Here is the litmus test: if, after reading that guide, I am able to figure out how to create new controller mappings to manipulate the menu UI components in the examples, using eye tracking to do my raycasting rather than the orientation of the controller, then they will be sufficient. If not, then they need more work.

Another side note: a year into the release of a product that is solely for enterprise users, why aren't there better bootstrapped tools to get enterprise applications built quicker? Specifically when it comes to manipulating the UI, etc. The examples show that hand tracking and eye tracking are possible using the hardware and rudimentary SDK classes, but there are no demos on how to utilize that capability to do something useful, like manipulating UI elements or interacting with 3D objects by grasping them. Are those coming? If you are relying on MRTK for those, can we at least get some examples of using MRTK to achieve that kind of functionality?

Sorry to hear that you are having a difficult time with Unity's Input System. We tried to make a comprehensive guide on the controller events. This guide explains when the started, performed, and canceled events are called, and how to read the trigger value (see line 40 in the example script).
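For a value-type action like the trigger, the Input System phases work roughly like this: started fires when the value first leaves its default, performed fires on each change, and canceled fires when it returns to default. A small illustrative sketch (the binding path here is a hypothetical stand-in for the one in the guide's example script):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class TriggerPhases : MonoBehaviour
{
    private InputAction triggerAction;

    private void Start()
    {
        // Hypothetical binding path; use the binding from your own asset.
        triggerAction = new InputAction(type: InputActionType.Value,
            binding: "<XRController>/trigger");

        triggerAction.started   += ctx => Debug.Log("started: trigger actuated");
        triggerAction.performed += ctx => Debug.Log($"performed: {ctx.ReadValue<float>()}");
        triggerAction.canceled  += ctx => Debug.Log("canceled: trigger released");
        triggerAction.Enable();
    }

    private void OnDestroy() => triggerAction.Dispose();
}
```

Reading the analog value through `ctx.ReadValue<float>()` in the performed callback is the standard Input System way to get the trigger amount rather than just a press/release state.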

You can also use the developer template from our MRTK 3 fork. This is a pre-configured project that works with Magic Leap out of the box. See our Quick Start Guide for instructions on how to get started.

Lastly, we appreciate the feedback and want to assure you that we are actively working on creating more resources to help developers get started with XRI.

Thanks El. I just read the "comprehensive guide" from your response. That is the first time I've laid eyes on it. It does a good job explaining the three different InputAction values, even if the naming is a bit counterintuitive. I wish this guide had been available around a year ago when I started working with the ML2; it would have saved me some time. I guess that is part of the trouble I'm running into. There seems to be a lot of new and helpful documentation and guides coming out pretty regularly, but if I've already gone through the quick start guide months ago, I'm not going to go back there looking for new content. It would be great to be informed when new guides become available or are updated. Perhaps this could be a regular occurrence on the General topic in the developer forums? Just post a blast any time new documentation or a guide goes live. That way I'd know there was something to look at.

Thank you for the very helpful feedback @rscanlo2. We are always looking to improve our developer experience and I will let the team know about this.