
Do not prevent keyframe detection when throttling PLIs sent (#408) #409

Open
wants to merge 1 commit into master
Conversation

@avila-devlogic (Contributor) commented Mar 12, 2018

When using an H264 simulcast client, it receives a huge number of PLIs.
This change stops the throttling of sent PLIs from also blocking keyframe detection, so that individual simulcast RTPEngines can still be activated.

Currently, any keyframe received within 300 ms of a PLI being sent is ignored. The first packet received will probably be an intra frame, so it triggers another PLI and blocks keyframe detection for another 300 ms. The lowest simulcast layer is the one that ends up marked as active because, being the smallest in byte size, it has the highest probability of arriving just as the 300 ms window elapses. Since the other two, higher layers have not been marked as active, a PLI is requested on the lowest stream again... and the show continues.
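The interaction described above can be sketched as follows. This is a minimal, hypothetical illustration of the proposed behaviour, not the actual libjitsi code; the class and method names are invented:

```java
// Hypothetical sketch: throttle outgoing PLIs to at most one per 300 ms,
// but keep keyframe detection running so that individual simulcast layers
// can still be marked active while the throttle window is open.
public final class PliThrottle
{
    /** Minimum interval between PLIs we send, in milliseconds. */
    private static final long PLI_INTERVAL_MS = 300;

    /** Time (ms) at which the last PLI was sent, or -1 if none yet. */
    private long lastPliMs = -1;

    /** Returns true if a PLI may be sent at time nowMs, and records it. */
    public synchronized boolean maySendPli(long nowMs)
    {
        if (lastPliMs != -1 && nowMs - lastPliMs < PLI_INTERVAL_MS)
            return false;
        lastPliMs = nowMs;
        return true;
    }

    /**
     * Keyframe detection is independent of the PLI throttle: a keyframe
     * received inside the 300 ms window still activates its layer.
     */
    public boolean shouldDetectKeyframe()
    {
        return true; // never suppressed by the throttle in this sketch
    }
}
```

With the pre-patch behaviour, `shouldDetectKeyframe()` would instead return false inside the 300 ms window, which is what causes the loop described above.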

@avila-devlogic avila-devlogic changed the title Do not prevent keyframe detection when throttling PLIs sent (408) Do not prevent keyframe detection when throttling PLIs sent (#408) Mar 12, 2018
@jitsi-jenkins

Hi, thanks for your contribution!
If you haven't already done so, could you please make sure you sign our CLA (https://jitsi.org/icla for individuals and https://jitsi.org/ccla for corporations)? We would unfortunately be unable to merge your patch unless we have that piece :(.

@bgrozev (Member) commented Mar 12, 2018

Jenkins, add to whitelist

@avila-devlogic (Contributor, Author)

@bgrozev, the test failure doesn't make any sense, since there are no TCP/UDP stack modifications in this change.

@bgrozev (Member) commented Mar 14, 2018

Thanks, @avila-devlogic ! I'll probably take a look early next week. The test failure is probably unrelated.

@bgrozev (Member) commented Mar 16, 2018

I want to make sure that I understand what's going on correctly.

Suppose we have 3 simulcast streams: 0 (lowest resolution), 1 (middle), 2 (highest resolution). We send a PLI and the next keyframe that we receive is for stream 0 (for example). We mark 0 as active, and 1 and 2 as inactive, because of this code. Next we receive keyframes for streams 1 and 2, but we fail to mark the streams as active because of this code. As a result, streams 1 and 2 are left marked as inactive even though they are actually active.

Is this a correct description of the problem that you encounter?

Assuming that it is. As the comment describes, the webrtc.org vp8 simulcast implementation always sends keyframes for the simulcast streams in the order 2, 1, 0, which is why we haven't run into this problem before (we end up ignoring a keyframe for streams 1 and 0, but they have already been marked as active, so it ends up working correctly). Do you know how the h264 simulcast implementation behaves in this case?

Unfortunately, if my understanding is correct, your changes would break VP8, because we would handle the keyframe on stream 2 first and mark all streams as active. Then we would handle the keyframe on stream 1 and mark stream 2 as inactive. Finally, we would handle the keyframe on stream 0 and mark streams 1 and 2 as inactive.

I would favor a solution which does not rely on the ordering of the keyframes (whether we receive a keyframe for the high resolution stream or the low resolution stream first), but I don't know what it would look like. I don't understand why we have this code which, on reception of a keyframe for a given stream marks all higher resolution streams as inactive, but I suspect there is some reason. Perhaps @gpolitis remembers?

A possible workaround for this would be to make your h264 simulcast implementation send keyframes in the order in which libjitsi expects them. Personally I would prefer a more general solution.
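The ordering-dependent rule discussed above can be illustrated with a small simulation. This is hypothetical code, not the actual libjitsi implementation; it encodes the rule as described in this thread (a keyframe on stream i marks streams up to i active and all higher-resolution streams inactive, and, pre-patch, keyframes within the 300 ms window after the first are ignored):

```java
// Hypothetical simulation of the activation rule under discussion.
public final class OrderingDemo
{
    /**
     * Replays keyframes in the given stream order over 3 streams. A keyframe
     * on stream i marks streams 0..i active and higher streams inactive.
     * When ignoreAfterFirst is true, keyframes after the first are dropped,
     * modeling the 300 ms keyframe-detection block.
     */
    public static boolean[] simulate(int[] keyframeOrder, boolean ignoreAfterFirst)
    {
        boolean[] active = new boolean[3];
        boolean first = true;
        for (int i : keyframeOrder)
        {
            if (!first && ignoreAfterFirst)
                continue; // keyframe ignored inside the 300 ms window
            for (int j = 0; j < active.length; j++)
                active[j] = (j <= i); // activate <= i, deactivate higher
            first = false;
        }
        return active;
    }
}
```

With the webrtc.org VP8 order {2, 1, 0} and the ignore window, all streams end up active; with an order of {0, 1, 2}, only stream 0 does; and with the ignore window removed, the VP8 order also ends with streams 1 and 2 inactive, which is the breakage described above.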

@avila-devlogic (Contributor, Author)

You have explained it correctly in the first paragraph.

If VP8 stream ordering is 2, 1, 0 as you've mentioned, then this change will indeed break it.

I really don't see a point in relying on stream ordering. In fact, I don't see a point in marking the other two streams as active/inactive if we receive a keyframe for the first one regardless of ordering.

If such an inter-dependency has to be maintained for whatever reason, we should not rely on ordering.
I will amend my pull request with an ordering-independent solution so we can try it out.

The H264 simulcast implementation follows the proposal to the WebRTC team that generalizes simulcast for both H264 and VP8.
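One possible shape of an ordering-independent rule is sketched below. This is a hedged illustration only, with invented names; the actual amended patch may differ. Each keyframe activates only its own stream, so the outcome is the same for any keyframe order:

```java
// Hypothetical sketch of an ordering-independent activation rule.
public final class OrderIndependent
{
    /** Activates exactly the streams for which a keyframe was received. */
    public static boolean[] simulate(int streamCount, int[] keyframeOrder)
    {
        boolean[] active = new boolean[streamCount];
        for (int i : keyframeOrder)
            active[i] = true; // only the keyframe's own stream is touched
        return active;
    }
}
```

Under this rule, the orders {0, 1, 2} and {2, 1, 0} both end with all three streams active; the open question, per the discussion above, is how such a rule would then detect streams becoming inactive.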

@gpolitis (Member)

@avila-devlogic the stream ordering trick is not used to detect when a stream has been activated; it's used to detect when a stream has become inactive. Without this trick, we would have to rely on timeouts (which can lead to periods without frames for a particular receiver) and on unnecessary PLIs (and unnecessary key frames).
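The timeout-based alternative mentioned above could look roughly like this. This is a hypothetical sketch with an assumed timeout value, not the actual code; it shows the cost gpolitis describes: a stream is only declared inactive after no packet has arrived for the whole timeout, which the ordering trick avoids:

```java
// Hypothetical timeout-based inactivity detector for simulcast streams.
public final class TimeoutDetector
{
    /** Assumed inactivity timeout, in milliseconds (illustrative value). */
    private static final long TIMEOUT_MS = 500;

    /** Last packet arrival time (ms) per stream, or -1 if none seen. */
    private final long[] lastPacketMs;

    public TimeoutDetector(int streamCount)
    {
        lastPacketMs = new long[streamCount];
        java.util.Arrays.fill(lastPacketMs, -1);
    }

    public void packetReceived(int stream, long nowMs)
    {
        lastPacketMs[stream] = nowMs;
    }

    /** A stream counts as active only while packets keep arriving. */
    public boolean isActive(int stream, long nowMs)
    {
        long last = lastPacketMs[stream];
        return last != -1 && nowMs - last < TIMEOUT_MS;
    }
}
```

During the TIMEOUT_MS gap after a stream actually stops, the detector still reports it active, which is the period without frames that the ordering trick is meant to eliminate.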
