CDN latency stripped out of bandwidth measurement
Created by: AxelDelmas
I'd like to open this issue to discuss the bandwidth measurement for ABR, as discussed at the f2f in Berlin.
In the ThroughputRule, the throughput estimation is based on `tfinished - tresponse`, where `tresponse` is the timestamp of the first progress event for the request, which we approximate as the time the first byte is received. By doing so, we strip the CDN latency out of the measurement.
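To make the measurement concrete, here is a minimal sketch (names are mine, not the actual dash.js code) of a throughput estimate that, like the ThroughputRule, starts the clock at the first progress event and therefore excludes the CDN latency:

```javascript
// Hypothetical sketch: throughput measured from the first progress event
// (approximated as first byte received) to request completion.
// The latency (tResponse - tRequest) is excluded from the denominator.
function throughputKbps(bytes, tRequestMs, tResponseMs, tFinishedMs) {
    const transferSeconds = (tFinishedMs - tResponseMs) / 1000;
    return (bytes * 8) / transferSeconds / 1000; // kilobits per second
}
```

With a 300 kB fragment whose first byte arrives at 70 ms and which finishes at 100 ms, this reports the bandwidth of the 30 ms transfer window only.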
This throughput is then averaged over the several previous requests and passed to abrController.getQualityForBitrate to determine the next bitrate to load. As @robertbryer pointed out, the latency is somewhat taken into account in that method: https://github.com/Dash-Industry-Forum/dash.js/blob/d19e5233d96d63b70da98cf84c0e342019c32c24/src/streaming/controllers/AbrController.js#L363-L372
However, that method weighs the bitrate by the ratio of latency / fragmentDuration, which is very different from measuring the bitrate over the total request duration (i.e. not stripping out the latency at all), which I find more intuitive.
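For clarity, here is a hedged sketch of the two approaches being contrasted; the function names and signatures are mine, chosen to illustrate the difference, not dash.js APIs:

```javascript
// Approach 1 (as in the linked AbrController code, roughly): strip the
// latency from the measurement, then down-weight the resulting bitrate
// by the dead-time ratio (latency / fragmentDuration).
function compensatedBitrate(rawBitrate, latencySec, fragmentDurationSec) {
    const deadTimeRatio = latencySec / fragmentDurationSec;
    return rawBitrate * (1 - deadTimeRatio);
}

// Approach 2: divide by the total request duration, latency included.
function totalDurationBitrate(bytes, latencySec, transferSec) {
    return bytes / (latencySec + transferSec);
}
```

The key difference is that approach 1 penalizes latency relative to the fragment duration, while approach 2 penalizes it relative to the transfer time, so short, latency-dominated requests diverge sharply between the two.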
Let's take the example of a 2 s fragment weighing 300 kB that downloaded in 100 ms, 70 ms of which were CDN latency:
- the first approach gives a raw bitrate of 300 / 0.03 = 10000 kB/s = 10 MB/s, a dead-time ratio of 0.07 / 2 = 0.035, and thus a "compensated" bitrate of 9.65 MB/s, or 77.2 Mbps.
- the second approach gives a bitrate of 300 / 0.1 = 3000 kB/s = 3 MB/s, or 24 Mbps.

In this example, the result given by the first approach is ~3.2x the result given by the second one, which can have a huge impact in terms of ABR.
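The arithmetic above can be checked directly; this is a worked sketch of the example numbers (variable names are mine):

```javascript
// Worked check of the example: 2 s fragment, 300 kB, 100 ms total request
// time of which 70 ms is CDN latency.
const fragmentKB = 300;   // fragment size in kB
const fragmentSec = 2;    // fragment duration in seconds
const totalSec = 0.1;     // total request duration
const latencySec = 0.07;  // CDN latency
const transferSec = totalSec - latencySec; // 0.03 s of actual transfer

// Approach 1: latency stripped out, then compensated by the dead-time ratio.
const rawKBps = fragmentKB / transferSec;                          // ~10000 kB/s
const compensatedKBps = rawKBps * (1 - latencySec / fragmentSec);  // ~9650 kB/s

// Approach 2: total request duration, latency included.
const totalKBps = fragmentKB / totalSec;                           // 3000 kB/s

const ratio = compensatedKBps / totalKBps;                         // ~3.2
```

Note how the gap between the two estimates grows as the latency share of the request grows, which is exactly why the choice matters for short fragments on high-latency CDNs.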
As discussed during the f2f, the best way to find out which approach is right is to compare both against production data, so I'm not sure what the right action is; but I wanted to point this subtlety out to feed the discussion about what to make configurable by developers, in order to make A/B testing the different approaches easier.