Hello,
I'm developing a very simple Unity application that coordinates content shown on a standard monitor with content shown on the ML2 headset. Simply put, the ML2 is being used purely as an output display, nothing more.
Based on this thread, @aheaney suggests that such basic rendering is possible in Unity. Would you mind giving me some guidance on how I could achieve this? My guess is that I'll need to write a basic networking protocol to stream the contents of the Unity display to the ML2 headset (a rough sketch of what I have in mind is below), but the developer docs don't seem to cover this.
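To make that guess concrete, here's a minimal sketch of the sender side I'm imagining, running on the Windows host: a dedicated camera renders into a `RenderTexture`, and each frame is JPEG-encoded and pushed over a plain TCP socket with a 4-byte length prefix. Everything here (class names, the address, the protocol) is my own placeholder, not anything from the ML2 SDK:

```csharp
using System.Collections;
using System.Net.Sockets;
using UnityEngine;

// Host-side sketch: mirror a Unity Camera's output to the headset by
// JPEG-encoding each frame and sending it over a raw TCP connection.
// Protocol, address, and resolution are all placeholders.
public class FrameSender : MonoBehaviour
{
    public Camera sourceCamera;               // dedicated camera for the headset feed
    public string headsetIp = "192.168.1.50"; // placeholder LAN address of the ML2
    public int port = 9999;

    RenderTexture rt;
    Texture2D readback;
    TcpClient client;
    NetworkStream stream;

    IEnumerator Start()
    {
        rt = new RenderTexture(1280, 720, 24);
        readback = new Texture2D(rt.width, rt.height, TextureFormat.RGB24, false);
        sourceCamera.targetTexture = rt;

        client = new TcpClient(headsetIp, port); // blocks until connected; fine for a sketch
        stream = client.GetStream();

        while (true)
        {
            yield return new WaitForEndOfFrame(); // wait until the camera has rendered

            // Synchronous GPU readback; a real version would use
            // AsyncGPUReadback and a hardware video encoder instead.
            RenderTexture.active = rt;
            readback.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
            RenderTexture.active = null;

            byte[] jpg = readback.EncodeToJPG(75);
            byte[] len = System.BitConverter.GetBytes(jpg.Length); // 4-byte length prefix
            stream.Write(len, 0, len.Length);
            stream.Write(jpg, 0, jpg.Length);
        }
    }

    void OnDestroy()
    {
        stream?.Close();
        client?.Close();
    }
}
```

Obviously JPEG-over-TCP is the crudest possible transport, and I'd expect a proper solution to use a real video codec, which is partly why I'm asking what the recommended approach is.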
For further context, up until recently I'd been developing this application on the HL2, which offers a remote rendering feature (Holographic Remoting): you simply provide the video encoding parameters, and it forwards the rendered output of the specified Unity Camera to the HL2 device itself (see the image attached below). I'm essentially looking for something very similar to this.
Note: I'm aware that the HL2 feature only works in Unity preview/play mode, not in a built application, but the point I'm trying to make is that since it works at a basic level, I should be able to get it working in a fully built executable by replicating the logic of the Unity plugin the HL2 SDK provides. In other words, I'm hoping to implement the remote rendering described above directly in the built app itself.
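On the ML2 side, I'd then imagine a matching receiver that accepts the connection and displays each decoded frame on a quad parented to the headset camera. Again, this is just my own placeholder sketch mirroring the sender above (and I assume the built app would also need the Android INTERNET permission in its manifest):

```csharp
using System.Net;
using System.Net.Sockets;
using UnityEngine;

// Headset-side sketch: accept one TCP connection, read length-prefixed
// JPEG frames, and display them on a quad in front of the user.
// Protocol and scene setup are placeholders matching the sender above.
public class FrameReceiver : MonoBehaviour
{
    public Renderer targetQuad; // a quad placed in front of the ML2 camera
    public int port = 9999;

    TcpListener listener;
    NetworkStream stream;
    Texture2D frame;

    void Start()
    {
        frame = new Texture2D(2, 2); // LoadImage resizes this to the real frame size
        targetQuad.material.mainTexture = frame;
        listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
    }

    void Update()
    {
        if (stream == null)
        {
            if (!listener.Pending()) return;      // no sender connected yet
            stream = listener.AcceptTcpClient().GetStream();
        }
        if (!stream.DataAvailable) return;        // no new data this tick

        // Naive blocking reads on the main thread; a real version would
        // read on a background thread and hand completed frames to Update().
        byte[] lenBuf = ReadExactly(4);
        int length = System.BitConverter.ToInt32(lenBuf, 0);
        byte[] jpg = ReadExactly(length);
        frame.LoadImage(jpg); // decodes the JPEG and uploads it to the GPU
    }

    byte[] ReadExactly(int count)
    {
        byte[] buf = new byte[count];
        int read = 0;
        while (read < count)
            read += stream.Read(buf, read, count - read);
        return buf;
    }

    void OnDestroy()
    {
        stream?.Close();
        listener?.Stop();
    }
}
```

If there's an officially supported path (something like the HL2's remoting, or a recommended streaming approach for the ML2), I'd much rather use that than this hand-rolled version.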
Thank you for taking the time to read and answer my question! If anything in my message is unclear, please feel free to reply so I can follow up.
Cheers,
Monde
Unity Editor version: Currently on 2022f, but I can migrate to whatever version is required
ML2 OS version: Newly purchased, so I can update to whichever version is required
Host OS: Windows
Error messages from logs (syntax-highlighting is supported via Markdown): N/A (this is a general question rather than an error report)