So while experimenting with different use cases for Segmented Dimming we've come across some shortcomings that I would like to bring up for discussion:
1. Segmented Dimming doesn't ignore input below one-pixel size (no anti-aliasing)

This looks rather weird when objects get too small in screen space. The correct solution would probably be some kind of (customizable) anti-aliasing applied to the segmented dimming layer only. Is this something the ML devs are looking into? A rough workaround sketch follows.
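Until something official lands, one stopgap we've been toying with (just a sketch, not ML's API) is to fade the dimming contribution out ourselves once the object's projected size drops below about a pixel. `ProjectedSizeInPixels` and `_BoundingRadius` are made-up names; the radius would be fed in from C# via the renderer's bounds:

```hlsl
// Hypothetical HLSL helper for whatever shader writes the dimming layer.
// _BoundingRadius (world-space bounding-sphere radius) is a material
// property we'd set from C#, e.g. renderer.bounds.extents.magnitude.
float _BoundingRadius;

// Approximate the object's projected diameter in pixels.
float ProjectedSizeInPixels(float3 objectOriginWS)
{
    float dist = distance(_WorldSpaceCameraPos, objectOriginWS);
    // UNITY_MATRIX_P._m11 is cot(fovY/2); combined with the screen height
    // this gives pixels per world unit at the object's distance.
    float pixelsPerUnit = UNITY_MATRIX_P._m11 * 0.5 * _ScreenParams.y / max(dist, 1e-4);
    return 2.0 * _BoundingRadius * pixelsPerUnit;
}

// In the fragment shader: scale the dimming value down to zero as the
// object approaches sub-pixel size (a cheap stand-in for real AA):
//   float fade = saturate(ProjectedSizeInPixels(unity_ObjectToWorld._m03_m13_m23) - 1.0);
//   dimming *= fade;
```

It's a blunt instrument (the whole object fades, not just thin features), but it kills the sub-pixel shimmer.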
2. Virtual object occlusion doesn't affect the segmented dimming layer

This is most apparent when using Spatial Mapping with occlusion: when the user pushes an object with Segmented Dimming into a wall, the virtual layer is occluded but the dimming layer is not.
I guess a global solution somewhere in the render pipeline would be the best approach here as well. A possible stopgap is sketched below.
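If the dimming pass goes through the regular depth test (I'm honestly not sure it does; if the dimmer is composited separately in hardware this won't help), rendering the spatial mesh depth-only just before regular geometry should let the wall depth-reject the dimming geometry. This is the classic depth-mask shader pattern, built-in pipeline syntax, and the "Hypothetical/" name is mine:

```
Shader "Hypothetical/SpatialMeshDepthOnly"
{
    SubShader
    {
        // Draw just before regular geometry so the wall's depth is already
        // in the buffer when the dimming geometry is rasterized.
        Tags { "Queue" = "Geometry-10" }
        ColorMask 0   // write depth only, no visible pixels
        ZWrite On
        Pass {}
    }
}
```

The dimming material would then need the default ZTest LEqual for the wall's depth to clip it. If that assumption doesn't hold, a global fix in the pipeline from ML is probably the only real answer.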
3. Using Segmented Dimming halves our vertex budget

The current implementation requires separate geometry for the virtual and dimming layers, often just a copy of the original mesh scaled down (per the design guidelines).
The best approach would probably be a shader graph node we can drop into our own shaders that draws to the segmented dimming layer with the vertices pushed inwards (against the face normal), so the same mesh could be submitted twice instead of duplicated. I don't know enough about how the dimming works to say whether this is feasible; a rough sketch of the idea is below.
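To make that concrete, here is roughly what I mean as a plain two-pass shader rather than a shader graph node (built-in pipeline CG for brevity; a URP version would follow the same idea). The big assumption, flagged in the comments, is how the dimming layer is actually addressed: I've guessed at writing frame alpha, which may well be wrong.

```
Shader "Hypothetical/VirtualObjectWithInsetDimming"
{
    Properties
    {
        _BaseColor ("Base Color", Color) = (1,1,1,1)
        _InsetDistance ("Dimming Inset (meters)", Float) = 0.005
        _DimmingValue ("Dimming Opacity", Range(0,1)) = 1
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }

        // Pass 1: the visible virtual object, using the mesh as-is.
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _BaseColor;

            struct v2f { float4 pos : SV_POSITION; };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target { return _BaseColor; }
            ENDCG
        }

        // Pass 2: the same vertex buffer again, shrunk inwards along the
        // vertex normals. How the dimming layer is actually addressed is
        // the part I'm unsure about -- writing alpha here is a placeholder.
        Pass
        {
            ColorMask A   // assumption: dimming is driven by frame alpha

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float _InsetDistance;
            float _DimmingValue;

            struct v2f { float4 pos : SV_POSITION; };

            v2f vert (appdata_base v)
            {
                v2f o;
                // Push each vertex inwards against its normal so the dimmed
                // silhouette sits slightly inside the virtual object.
                v.vertex.xyz -= v.normal * _InsetDistance;
                o.pos = UnityObjectToClipPos(v.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return fixed4(0, 0, 0, _DimmingValue);
            }
            ENDCG
        }
    }
}
```

The point is just that pass 2 reuses the same mesh, so the duplicated scaled-down copy (and the doubled vertex cost) goes away.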
Would love to hear others' approaches to these issues!