Odd shrinking/downsizing effect on remote WebRTC screen

We are observing an issue when using the (latest) WebRTC example whereby the remote video display shrinks by about 30% every 6-10 seconds, towards the bottom-left corner of the canvas (as if the upper-right corner were being dragged towards the bottom left to shrink the video); the canvas size itself remains unchanged. The displayed video also shows a 1-pixel orange line along the top and right edges when this happens, and we see a single "fuzzy white noise" frame just before each resize. This repeats continuously: the image gets smaller and smaller, towards the bottom left.

If we stop and then restart the connection, the projected image is larger than the canvas and offset to the right (the opposite of the shrinkage), but then shrinks in the same pattern (about 30%, towards the bottom left, every 6-10s).

We have commented out any code in our loop that might explain the resizing and suspect this might be an SDK bug. We have also reproduced this with the generic WebRTC example.

On a related note: we are observing two other odd video-sizing issues; please let me know if separate tickets should be opened, but it seems they may be related:

  1. The scale of the outer canvas seems to impact the size of an overlaid video, which was not previously the case with Lumin (ML1) or Android: previously, we would scale a canvas to match the relative size of the video (e.g., 16:9) in order to get a 16:9 display, but now we must scale the same container 1:1 and let the video force the resize according to its raw dimensions (see the sketch after this list). Is this a known change, and do you have any more detail on this you can share, please?

  2. When working with a video projected onto a sphere (clamped, intended for a skyscape), the larger we make the sphere, the more distorted its shape becomes: if the sphere is 100x100x100u, for example, it projects as if its dimensions were 1x100x100 (a giant, flat disc with a small <.5u gap in the middle). This is also evident with a smaller sphere (like 2x2x2), but less distortion is apparent; the distortion appears to be proportional to the scale in some way. This also feels like an SDK or native-player bug (?); we have this working without issue on Lumin (ML1) and other Android releases. I will open a separate ticket for this issue specifically; just mentioning it here in case it is related. Thank you for your help!!
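For reference, here is a minimal sketch of the workaround described in item 1, assuming the remote video arrives as the main texture on a Quad's material (the component and field names here are ours, not from the SDK): keep the container scaled 1:1 and let the frame's raw dimensions drive the displayed aspect ratio.

```csharp
using UnityEngine;

// Minimal sketch of the item-1 workaround (illustrative names, not SDK API):
// leave the container at 1:1 and derive the display scale from the incoming
// video frame's raw dimensions.
public class VideoAspectFitter : MonoBehaviour
{
    [SerializeField] private Renderer videoRenderer; // Quad rendering the remote video

    private void Update()
    {
        Texture tex = videoRenderer.material.mainTexture;
        if (tex == null || tex.height == 0)
        {
            return; // no frame received yet
        }

        // On ML1/Android we pre-scaled the container to 16:9; here we keep
        // the container at 1:1 and scale by the frame's own aspect ratio.
        float aspect = (float)tex.width / tex.height;
        transform.localScale = new Vector3(aspect, 1f, 1f);
    }
}
```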

Unity Editor version: 2022.2.0a14.2006
ML2 OS version: ML2 OS Release B3E.220619.07-R.124 (user)

Magic Leap Lumin OS ML2 Sprint 13 Bravo

MLSDK version: 0.53.1

Error messages from logs (syntax-highlighting is supported via Markdown): N/A


Hi Chris,

Thank you for reporting this. Unfortunately, I was not able to reproduce the scaling issue using the default WebRTC example provided in the MagicLeap_Unity example project. I also noticed that the example does not use a Canvas-based object; instead, it uses a Quad and resizes the object based on the remote video's aspect ratio.

Are you noticing the scaling issue when using both the example scene and the example signaling server?


We are; we'll do some more testing and see if we can narrow it down further, thank you.


Hi Krystian, would it be possible to re-tag this as makeSEA private so we can share more on the topic privately, please? Thank you.


I have edited this Topic to be under the MakeSEA Category.


Thanks, Shane.

So, we have narrowed this down to being related to using a filter on the sending computer's side to pre-process the video: in our ML1 implementation, we can use a Snap filter or OBS to pre-green-screen video sent from the Chrome browser, so that the player on the ML side can chroma-key the green out.
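(For clarity, the chroma-key step on the receiving side amounts to something like the sketch below; this is our own per-pixel illustration of the idea, not the actual player code, which presumably does this in a shader, and the threshold value is made up.)

```csharp
using UnityEngine;

// Illustrative per-pixel chroma-key, not the actual ML player code: pixels
// whose green channel clearly dominates red and blue are made transparent,
// so the pre-green-screened background drops out.
public static class ChromaKey
{
    public static Color32 KeyOutGreen(Color32 c, float threshold = 0.4f)
    {
        // "Greenness" = how far green exceeds the stronger of red/blue, 0..1.
        float greenness = (c.g - Mathf.Max(c.r, c.b)) / 255f;
        if (greenness > threshold)
        {
            c.a = 0; // fully transparent where the green screen was
        }
        return c;
    }
}
```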

Now with ML2's implementation, if we connect the browser directly to the webcam, the phenomenon does not occur; the stream is stable indefinitely (30min+). However, if we connect the browser to the Snap or OBS processed stream, the downsizing occurs, consistently, on the ML2 end, just seconds after connecting. This is not the case with ML1.

Also, we've noticed that this implementation on ML2 prefers the "native" screen, whereas previously we were using the YUV screen (same browser/source stream setup). Could this downsizing issue be related to that difference?
Why is the native screen preferred vs. the YUV player?
And what determines the player selection? Is it the source stream type? (We were always confused why ML1 used the YUV player vs. the RGB player for a stream from Chrome; doesn't Chrome send RGB?)
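For context on that last question, our rough understanding (an assumption on our part, not something from the SDK docs): WebRTC's video codecs transport frames in YUV (typically I420) even when the source renders RGB, so the decoder hands the player YUV planes, and a YUV-aware player can sample those planes directly in a shader rather than paying for a conversion pass like the one sketched below (BT.601 coefficients; our own illustration, not SDK code).

```csharp
using UnityEngine;

// Illustration only: per-pixel BT.601 YUV -> RGB conversion, i.e. the work a
// YUV-aware player avoids by sampling the Y/U/V planes directly in a shader.
public static class Yuv
{
    public static Color32 ToRgb(byte y, byte u, byte v)
    {
        float yf = y;
        float uf = u - 128f;
        float vf = v - 128f;

        float r = yf + 1.402f * vf;
        float g = yf - 0.344f * uf - 0.714f * vf;
        float b = yf + 1.772f * uf;

        return new Color32(
            (byte)Mathf.Clamp(r, 0f, 255f),
            (byte)Mathf.Clamp(g, 0f, 255f),
            (byte)Mathf.Clamp(b, 0f, 255f),
            255);
    }
}
```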

Does this shed any light on the potential explanation and fix? We really need to be able to use that pre-processed stream for some use cases; it works flawlessly on ML1.

Thank you!


@cstavros is reporting that this is still occurring on 0.53.2.


Update:

This is a known issue internally, and a bug has been filed for it with high priority.
