[Bloat] setting queue depth on tail drop configurations of pfifo_fast

David Lang david at lang.hm
Fri Mar 27 19:40:35 EDT 2015


On Fri, 27 Mar 2015, Bill Ver Steeg (versteb) wrote:

> Dave Lang-
>
>
>
> Yup, you got the intent.
>
>
>
> The ABR video delivery stack is actually one level more complex. The 
> application uses plain old HTTP to receive N==2 second chunks of video, which 
> in turn uses TCP to get the data, which in turn interacts with the various 
> queuing mechanisms, yada, yada, yada. So, the application rate adaptation 
> logic is using the HTTP transfer rate to decide whether to upshift to a higher 
> video rate, downshift to a lower video rate, or stay at the current video rate 
> at each chunk boundary.
>
>
>
> There are several application layer algorithms in use (Netflix, MPEG DASH, 
> Apple, Microsoft, etc), and many of them use more than one TCP/HTTP session to 
> get chunks. Lots of moving parts, and IMHO most of these developers are more 
> concerned with getting the best possible throughput than being bloat-friendly. 
> Driving the network at the perceived available line rate for hours at a time 
> is simply not network friendly.....

although if the user is only using the line for this purpose, it may be exactly 
the right thing to do :-/
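
For concreteness, the chunk-boundary decision described above might look roughly
like this (a rough sketch only, not any real player's logic; the bitrate ladder,
the hysteresis margins and the fetch_chunk() helper are all made-up placeholders):

# Sketch of an ABR chunk-boundary decision driven by the HTTP transfer rate.
# The ladder, margins and URL handling are hypothetical; real players
# (Netflix, DASH, Apple, Microsoft) are far more involved.
import time, urllib.request

LADDER = [500_000, 1_500_000, 3_000_000, 6_000_000]   # video rates, bits/sec

def fetch_chunk(url):
    start = time.monotonic()
    data = urllib.request.urlopen(url).read()
    elapsed = time.monotonic() - start
    return len(data) * 8 / elapsed                     # achieved rate, bits/sec

def next_rate(current, achieved):
    # Upshift only when the measured rate comfortably exceeds the next rung;
    # downshift as soon as the current rung can no longer be sustained.
    idx = LADDER.index(current)
    if idx + 1 < len(LADDER) and achieved > 1.5 * LADDER[idx + 1]:
        return LADDER[idx + 1]
    if idx > 0 and achieved < 1.1 * current:
        return LADDER[idx - 1]
    return current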

> Clearly, the newer AQM algorithms will handle these types of aggressive ABR 
> algorithms better. There also may be a way to tweak the ABR algorithm to "do 
> the right thing" and make the system work better - both from a "make my video 
> better" standpoint and a "don't impact cross traffic" standpoint. As a start, 
> I am thinking of ways to keep the sending rate between the max video rate and 
> the (perceived) network rate. This does impact how such a flow competes with 
> other flows, and

You aren't really going to be able to measure your impact on other traffic 
(unless you can have the client do something else at the same time that would 
show the latency)

We've been working for a long time to directly measure bufferbloat and it's been 
quite a struggle. The best that we've been able to do is to compare the ping 
response time while under load and watch for it to climb (it tends to go up 
_very_ quickly when bufferbloat starts kicking in)
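
Roughly, the test looks like this (a sketch of the idea, not the actual test
harness; the target host, the one-second interval and the 100 ms threshold are
arbitrary, and the bulk load is assumed to be running separately):

# Sketch: watch ping RTT while the link is loaded and flag the sharp rise
# that shows up once the bottleneck buffer starts to fill.
import re, subprocess, time

def rtt_ms(host="8.8.8.8"):
    out = subprocess.run(["ping", "-c", "1", host],
                         capture_output=True, text=True).stdout
    m = re.search(r"time=([\d.]+)", out)
    return float(m.group(1)) if m else None

baseline = rtt_ms()                      # measured before the load starts
while True:
    now = rtt_ms()
    if baseline and now and now > baseline + 100:
        print("bufferbloat: %.0f ms under load vs %.0f ms idle" % (now, baseline))
    time.sleep(1)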

> Regarding peeking into the kernel ----- The overall design of the existing 
> systems assumes that they need to run on several OSes/platforms, and therefore 
> they (generally) do not peek into the kernel. I have done some work that does 
> look into the kernel to examine TCP receive queue sizes --- 
> https://smartech.gatech.edu/bitstream/handle/1853/45059/GT-CS-12-07.pdf -- and 
> it worked pretty well. That scheme would be difficult to productize, and I am 
> thinking about server-based methods in addition to client-based methods to 
> keep out of congestion jail. Perhaps using HTTP pragmas to have the client 
> signal the desired send rate to the HTTP server.

I was thinking in terms of the sender peeking into the kernel; you normally have 
a much more limited set of OSes on your server. But if you are transferring 
things via a standard HTTP server, you can't do this.
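
(For reference, if you did control a Linux sender, the peek I have in mind is
roughly the following -- a sketch only; the TCP_INFO field offsets follow the
classic linux/tcp.h layout and should be checked against your kernel headers.)

# Sketch: server-side peek into the kernel on Linux -- per-socket RTT and
# cwnd from TCP_INFO, plus the unsent backlog via the SIOCOUTQ/TIOCOUTQ ioctl.
import fcntl, socket, struct, termios

TCP_INFO = getattr(socket, "TCP_INFO", 11)

def peek(sock):
    info = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 192)
    # struct tcp_info: 8 one-byte fields, then 32-bit counters; tcpi_rtt
    # (usec) is the 16th u32 and tcpi_snd_cwnd the 19th in the classic layout.
    words = struct.unpack_from("24I", info, 8)
    rtt_us, snd_cwnd = words[15], words[18]
    outq = struct.unpack("i", fcntl.ioctl(sock.fileno(), termios.TIOCOUTQ,
                                          struct.pack("i", 0)))[0]
    return rtt_us, snd_cwnd, outq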

Do you really have any better option than saying "I expected it to take X ms to 
send 2 sec worth of data, but it took X + Y ms to finish the HTTP transfer" and 
then taking action based on the value of Y (which could be negative if the 
connection improved)?
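
In other words, something like this (a sketch; what counts as "comfortably
negative" or "persistently positive" is up to the adaptation logic):

# Sketch: compare how long the chunk should have taken at the rate we
# thought we had (X) with how long the HTTP transfer actually took (X + Y).
def transfer_delta_ms(chunk_bytes, transfer_seconds, expected_rate_bps):
    expected_s = chunk_bytes * 8 / expected_rate_bps   # X
    return (transfer_seconds - expected_s) * 1000.0    # Y, in milliseconds

# e.g. downshift if Y stays positive over a few consecutive chunks,
# upshift only when Y is comfortably negative.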

If you have the ability to do something else (something very lightweight, 
ideally UDP based so you don't have TCP retries to deal with) in a separate 
connection while you are downloading the 2s of video, you can detect the delays 
on that side channel instead.
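
Something as simple as a periodic UDP probe against a cooperating responder
would do (a sketch; PROBE_ADDR is hypothetical -- you would have to run such an
echo responder yourself):

# Sketch: lightweight UDP "echo" probe run in parallel with the chunk
# download; a rising probe RTT exposes the queue the download is building.
import socket, struct, time

PROBE_ADDR = ("probe.example.net", 9000)   # hypothetical echo responder

def probe_rtt_ms(timeout=1.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    sent = time.monotonic()
    s.sendto(struct.pack("d", sent), PROBE_ADDR)
    try:
        s.recvfrom(64)
    except socket.timeout:
        return None                        # treat a lost probe as "very late"
    finally:
        s.close()
    return (time.monotonic() - sent) * 1000.0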

David Lang

> Bill Ver Steeg
>
> -----Original Message-----
> From: David Lang [mailto:david at lang.hm]
>
> Re-reading your post for the umpteenth time, here's what I think I may be 
> seeing.
>
>
>
> you are working on developing video streaming software that can adapt the bit 
> rate of the streaming video to have it fit within the available bandwidth. You 
> are trying to see how this interacts with the different queuing options.
>
>
>
> Is this a good summary?
>
>
>
>
>
> If so, then you basically want to do the same thing that the TCP stack is 
> doing: when you see a dropped packet or an ECN-tagged packet, slow down the 
> bit rate of the media you are streaming so that it uses less bandwidth.
>
>
>
> This sounds like an extremely interesting thing to do. It will be worth seeing 
> the response from folks who know the deeper levels of the OS as to what 
> options you have for learning that such events have taken place.
>
>
>
> David Lang
>


