I'm modifying some OpenXR code we use to render to a Meta Quest 2, with OpenGL as the rendering API, and porting it to Magic Leap 2. I'm running ML2 OS 1.7.0 and the Remote Rendering service. We are having some trouble rendering text to a quad composition layer. When rendering to the projection layers for the eyes, I see the recommendation to set the environment blend mode to XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND. I'm wondering whether there are other, similar flags to set when rendering to a quad composition layer, and/or different protocols to follow when rendering multiple layers on your device.
Hello,
What sort of issues do you observe when trying to render text using a quad layer on your ML2 compared to the Quest?
For clarity, the ALPHA_BLEND environment blend mode lets you control the behavior of the dimmer panel on your ML2 on a per-pixel basis, based on the content of the alpha channel in the frames submitted by your application.
As a quick test, you can effectively disable the dimmer panel by setting the blend mode to XR_ENVIRONMENT_BLEND_MODE_ADDITIVE instead.
Best,
Adam