General list for discussing Bufferbloat
* [Bloat] setting queue depth on tail drop configurations of pfifo_fast
@ 2015-03-27 21:45 Bill Ver Steeg (versteb)
  2015-03-27 22:02 ` David Lang
  0 siblings, 1 reply; 7+ messages in thread
From: Bill Ver Steeg (versteb) @ 2015-03-27 21:45 UTC (permalink / raw)
  To: bloat


Bloaters-

I am looking into how Adaptive Bitrate video algorithms interact with the various queue management schemes. I have been using the netperf and netperf-wrapper tools, along with the macros to set the link states (thanks Toke and Dave T). I am using HTB rather than BQL, which may have something to do with the issues below. I am getting some interesting ABR results, which I will share in detail with the group once I write them up.

I need to set the transmit queue length of my Ubuntu ethernet path while running tests against the legacy pfifo_fast (tail drop) algorithm.  The default value is 1000 packets, which boils down to 1.5 MBytes. At 100 Mbps, this gives me a 120ms tail drop buffer, which is big, but somewhat reasonable. When I then run tests at 10 Mbps, the buffer becomes a 1.2 second bloaty buffer. When I run tests at 4 Mbps, the buffer becomes a 3 second extra-bloaty buffer. This gives me some very distinct ABR results, which I am looking into in some detail. I do want to try a few more delay values for tail drop at 4 Mbps.
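The arithmetic behind those figures can be sanity-checked directly (the 1000-packet, 1500-byte numbers are the ones quoted above):

```shell
# Delay of a full 1000-packet FIFO of 1500-byte packets at the three
# link rates discussed above.
for rate_mbps in 100 10 4; do
    awk -v r="$rate_mbps" 'BEGIN {
        bytes = 1000 * 1500                       # queue depth in bytes
        delay_ms = bytes * 8 / (r * 1e6) * 1000   # drain time in ms
        printf "%3d Mbps -> %6.0f ms of buffering\n", r, delay_ms
    }'
done
```

which reproduces the 120 ms / 1.2 s / 3 s figures.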

https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel says to set txqueuelen to the desired size, which makes sense. I have tried several ways to do this on Ubuntu, with no glory. The way that seemed like it should have worked was "ifconfig eth8 txqueuelen 100". When I then check the txqueuelen using ifconfig, it looks correct. However, the delay measurements still stay up near 3 seconds under load. When I check the queue depth using "tc -s -d qdisc ls dev ifb_eth8", it shows the very large backlog in pfifo_fast under load.

So, has anybody recently changed the ethernet/HTB transmit packet queue size for pfifo_fast in Ubuntu? If so, any pointers? I will also try to move over to BQL and see if that works better than HTB...... I am not sure that my ethernet drivers have BQL support though, as they complain when I try to load it as the queue discipline.

Thanks in advance
Bill VerSteeg



^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [Bloat] setting queue depth on tail drop configurations of pfifo_fast
  2015-03-27 21:45 [Bloat] setting queue depth on tail drop configurations of pfifo_fast Bill Ver Steeg (versteb)
@ 2015-03-27 22:02 ` David Lang
  2015-03-27 22:14   ` Bill Ver Steeg (versteb)
  0 siblings, 1 reply; 7+ messages in thread
From: David Lang @ 2015-03-27 22:02 UTC (permalink / raw)
  To: Bill Ver Steeg (versteb); +Cc: bloat


BQL and HTB are not really comparable things.

All BQL does is change the definition of the length of a buffer from X
packets to X bytes.

Using your example, 1000 packets of 1500 bytes is 1.5MB, or 120ms at 100Mb. But
if you aren't transmitting 1500 byte packets, and are transmitting 75 byte
packets instead, it's only 6ms worth of buffering.
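A quick back-of-the-envelope check of that point (same 1000-packet queue, 100 Mbit/s link, the two packet sizes from the example):

```shell
# Packet-count limits translate to very different delays depending on
# packet size; BQL's byte-based accounting avoids this.
for pkt_bytes in 1500 75; do
    awk -v p="$pkt_bytes" 'BEGIN {
        delay_ms = 1000 * p * 8 / 100e6 * 1000
        printf "%4d-byte packets -> %3.0f ms of buffering\n", p, delay_ms
    }'
done
```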

The bottom line is that sizing buffers by packets doesn't work.


HTB creates virtual network interfaces that chop up the available bandwidth of 
the underlying device. I believe that if the underlying device supports BQL, HTB 
is working on byte length allocations, not packet counts.
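For context, an HTB shaping setup of the sort being discussed might look something like this (a sketch only; the eth8 interface name and the 4 Mbit rate are taken from elsewhere in the thread, and the exact commands are not verified against any particular iproute2 version):

```shell
# Hypothetical: shape eth8 to 4 Mbit/s with HTB. Note that the backlog
# then accumulates in HTB's leaf qdisc, not in the device's txqueuelen.
tc qdisc add dev eth8 root handle 1: htb default 10
tc class add dev eth8 parent 1: classid 1:10 htb rate 4mbit ceil 4mbit
tc qdisc add dev eth8 parent 1:10 handle 10: pfifo limit 100  # 100-packet tail drop
```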


fq_codel doesn't have fixed buffer sizes; it takes a completely different 
approach that works much better in practice.

The document that you found is actually out of date. Rather than trying to tune 
each thing for optimum performance and then measuring things, just benchmark 
the stock, untuned setup that you have and the simple fq_codel version without 
any tweaks and see if that does what you want. You can then work on tweaking 
things from there, but the improvements will be minor compared to doing the 
switch in the first place.

A good tool for seeing the performance (throughput and latency) is 
netperf-wrapper. Set it up and just test the two configs. The RRUL test is 
especially good at showing the effects of the switch.

David Lang


On Fri, 27 Mar 2015, Bill Ver Steeg (versteb) wrote:

> Date: Fri, 27 Mar 2015 21:45:11 +0000
> From: "Bill Ver Steeg (versteb)" <versteb@cisco.com>
> To: "bloat@lists.bufferbloat.net" <bloat@lists.bufferbloat.net>
> Subject: [Bloat] setting queue depth on tail drop configurations of	pfifo_fast
> 
> Bloaters-
>
> I am looking into how Adaptive Bitrate video algorithms interact with the 
> various queue management schemes. I have been using the netperf and netperf 
> wrapper tools, along with the macros to set the links states (thanks Toke and 
> Dave T). I am using HTB rather than BQL, which may have something to do with 
> the issues below. I am getting some interesting ABR results, which I will 
> share in detail with the group once I write them up.
>
> I need to set the transmit queue length of my Ubuntu ethernet path while 
> running tests against the legacy pfifo_fast (tail drop) algorithm.  The 
> default value is 1000 packets, which boils down to 1.5 MBytes. At 100 Mbps, 
> this gives me a 120ms tail drop buffer, which is big, but somewhat reasonable. 
> When I then run tests at 10 Mbps, the buffer becomes a 1.2 second bloaty 
> buffer. When I run tests at 4 Mbps, the buffer becomes a 3 second extra-bloaty 
> buffer. This gives me some very distinct ABR results, which I am looking into 
> in some detail. I do want to try a few more delay values for tail drop at 4 
> Mbps.
>
> https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel 
> says to set txqueuelen to the desired size, which makes sense. I have tried 
> several ways to do this on Ubuntu, with no glory. The way that seems it should 
> have worked was "ifconfig eth8 txqueuelen 100". When I then check the 
> txqueuelen using ifconfig, it looks correct. However, the delay measurements 
> still stay up near 3 seconds under load. When I check the queue depth using 
> "tc -s -d qdisc ls dev ifb_eth8", it shows the very large backlog in 
> pfifo_fast under load.
>
> So, has anybody recently changed the ethernet/HTB transmit packet queue size 
> for pfifo_fast in Ubuntu? If so, any pointers? I will also try to move over to 
> BQL and see if that works better than HTB...... I am not sure that my ethernet 
> drivers have BQL support though, as they complain when I try to load it as the 
> queue discipline.
>
> Thanks in advance
> Bill VerSteeg
>
>


_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


* Re: [Bloat] setting queue depth on tail drop configurations of pfifo_fast
  2015-03-27 22:02 ` David Lang
@ 2015-03-27 22:14   ` Bill Ver Steeg (versteb)
  2015-03-27 22:18     ` Toke Høiland-Jørgensen
  2015-03-27 22:46     ` David Lang
  0 siblings, 2 replies; 7+ messages in thread
From: Bill Ver Steeg (versteb) @ 2015-03-27 22:14 UTC (permalink / raw)
  To: David Lang; +Cc: bloat

Dave Lang

Thanks for the quick response

For this very specific test, I am doing one-way netperf-wrapper packet tests that will (almost) always be sending 1500 byte packets. I am then running some ABR cross traffic to see how it responds to FQ_AQM and AQM (where AQM == Codel and PIE). I am using pfifo_fast as a baseline. The Codel, FQ_codel, PIE and FQ_PIE stuff is working fine. I need to tweak the pfifo_fast queue length to do some comparisons.

One of the test scenarios is a 3 Mbps ABR video flow on a 4 Mbps link, with and without cross traffic. I have already done what you suggested, and the ABR traffic drives the pfifo_fast code into severe congestion (even with no cross traffic), with a 3 second bloat. This is a bit surprising until you think about how the ABR code fills its video buffer at startup and then during steady state playout. I will send a detailed note once I get a chance to write it up properly. 

I would like to reduce the tail drop queue size to 100 packets (down from the default of 1000) and see how that impacts the test. 3 seconds of bloat is pretty bad, and I would like to compare how ABR works at 1 second and at 200-300 ms.


Bill Ver Steeg
DISTINGUISHED ENGINEER 
versteb@cisco.com
-----Original Message-----
From: David Lang [mailto:david@lang.hm] 
Sent: Friday, March 27, 2015 6:02 PM
To: Bill Ver Steeg (versteb)
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] setting queue depth on tail drop configurations of pfifo_fast

BQL and HTB are not really comparable things.

All BQL does is change the definition of the length of a buffer from X packets to X bytes.

Using your example, 1000 packets of 1500 bytes is 1.5MB, or 120ms at 100Mb. But if you aren't transmitting 1500 byte packets, and are transmitting 75 byte packets instead, it's only 6ms worth of buffering.

The bottom line is that sizing buffers by packets doesn't work.


HTB creates virtual network interfaces that chop up the available bandwidth of the underlying device. I believe that if the underlying device supports BQL, HTB is working on byte length allocations, not packet counts.


fq_codel doesn't have fixed buffer sizes; it takes a completely different approach that works much better in practice.

The document that you found is actually out of date. Rather than trying to tune each thing for optimum performance and then measuring things, just benchmark the stock, untuned setup that you have and the simple fq_codel version without any tweaks and see if that does what you want. You can then work on tweaking things from there, but the improvements will be minor compared to doing the switch in the first place.

A good tool for seeing the performance (throughput and latency) is netperf-wrapper. Set it up and just test the two configs. The RRUL test is especially good at showing the effects of the switch.

David Lang


On Fri, 27 Mar 2015, Bill Ver Steeg (versteb) wrote:

> Date: Fri, 27 Mar 2015 21:45:11 +0000
> From: "Bill Ver Steeg (versteb)" <versteb@cisco.com>
> To: "bloat@lists.bufferbloat.net" <bloat@lists.bufferbloat.net>
> Subject: [Bloat] setting queue depth on tail drop configurations of	pfifo_fast
> 
> Bloaters-
>
> I am looking into how Adaptive Bitrate video algorithms interact with 
> the various queue management schemes. I have been using the netperf 
> and netperf wrapper tools, along with the macros to set the links 
> states (thanks Toke and Dave T). I am using HTB rather than BQL, which 
> may have something to do with the issues below. I am getting some 
> interesting ABR results, which I will share in detail with the group once I write them up.
>
> I need to set the transmit queue length of my Ubuntu ethernet path 
> while running tests against the legacy pfifo_fast (tail drop) 
> algorithm.  The default value is 1000 packets, which boils down to 1.5 
> MBytes. At 100 Mbps, this gives me a 120ms tail drop buffer, which is big, but somewhat reasonable.
> When I then run tests at 10 Mbps, the buffer becomes a 1.2 second 
> bloaty buffer. When I run tests at 4 Mbps, the buffer becomes a 3 
> second extra-bloaty buffer. This gives me some very distinct ABR 
> results, which I am looking into in some detail. I do want to try a 
> few more delay values for tail drop at 4 Mbps.
>
> https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_ben
> chmarking_Codel_and_FQ_Codel says to set txqueuelen to the desired 
> size, which makes sense. I have tried several ways to do this on 
> Ubuntu, with no glory. The way that seems it should have worked was 
> "ifconfig eth8 txqueuelen 100". When I then check the txqueuelen using 
> ifconfig, it looks correct. However, the delay measurements still stay 
> up near 3 seconds under load. When I check the queue depth using "tc 
> -s -d qdisc ls dev ifb_eth8", it shows the very large backlog in 
> pfifo_fast under load.
>
> So, has anybody recently changed the ethernet/HTB transmit packet 
> queue size for pfifo_fast in Ubuntu? If so, any pointers? I will also 
> try to move over to BQL and see if that works better than HTB...... I 
> am not sure that my ethernet drivers have BQL support though, as they 
> complain when I try to load it as the queue discipline.
>
> Thanks in advance
> Bill VerSteeg
>
>


* Re: [Bloat] setting queue depth on tail drop configurations of pfifo_fast
  2015-03-27 22:14   ` Bill Ver Steeg (versteb)
@ 2015-03-27 22:18     ` Toke Høiland-Jørgensen
  2015-03-27 22:46     ` David Lang
  1 sibling, 0 replies; 7+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-03-27 22:18 UTC (permalink / raw)
  To: Bill Ver Steeg (versteb), David Lang; +Cc: bloat


> I would like to reduce the tail drop queue size to 100 packets (down
> from the default of 1000) and see how that impacts the test. 3 seconds
> of bloat is pretty bad, and I would like to compare how ABR works at
> 1 second and at 200-300 ms.

Did you re-initiate the pfifo_fast qdisc after changing txqlen? IIRC, it picks up the len at init time, and so won't update automatically when you change it...

-Toke
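For reference, Toke's suggestion might look something like the following (a sketch only, using the eth8 interface name from earlier in the thread; `tc qdisc replace` re-creates the root qdisc so that pfifo_fast re-reads the length):

```shell
# Change the queue length, then re-create the root qdisc so that
# pfifo_fast picks up the new value (it only reads txqueuelen at init).
ip link set dev eth8 txqueuelen 100        # or: ifconfig eth8 txqueuelen 100
tc qdisc replace dev eth8 root pfifo_fast  # re-initialize the qdisc
tc -s -d qdisc show dev eth8               # verify, and watch the backlog
```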


* Re: [Bloat] setting queue depth on tail drop configurations of pfifo_fast
  2015-03-27 22:14   ` Bill Ver Steeg (versteb)
  2015-03-27 22:18     ` Toke Høiland-Jørgensen
@ 2015-03-27 22:46     ` David Lang
  2015-03-27 23:18       ` Bill Ver Steeg (versteb)
  1 sibling, 1 reply; 7+ messages in thread
From: David Lang @ 2015-03-27 22:46 UTC (permalink / raw)
  To: Bill Ver Steeg (versteb); +Cc: bloat

On Fri, 27 Mar 2015, Bill Ver Steeg (versteb) wrote:

> For this very specific test, I am doing one-way netperf-wrapper packet tests 
> that will (almost) always be sending 1500 byte packets. I am then running some 
> ABR traffic cross traffic to see how it responds to FQ_AQM and AQM (where AQM 
> == Codel and PIE). I am using the pfifo_fast as a baseline. The Codel, 
> FQ_codel, PIE and FQ_PIE stuff is working fine. I need to tweak the pfifo_fast 
> queue length to do some comparisons.
>
> One of the test scenarios is a 3 Mbps ABR video flow on a 4 Mbps link, with 
> and without cross traffic. I have already done what you suggested, and the ABR 
> traffic drives the pfifo_fast code into severe congestion (even with no cross 
> traffic), with a 3 second bloat. This is a bit surprising until you think 
> about how the ABR code fills its video buffer at startup and then during 
> steady state playout. I will send a detailed note once I get a chance to write 
> it up properly.
>
> I would like to reduce the tail drop queue size to 100 packets (down from the 
> default of 1000) and see how that impacts the test. 3 seconds of bloat is 
> pretty bad, and I would like to compare how ABR works at 1 second and at 
> 200-300 ms.

I think the real question is what are you trying to find out?

No matter how you fiddle with the queue size, we know it's not going to work 
well. Without using BQL, if you have a queue short enough to not cause horrific 
bloat when under load with large packets, it's not going to be long enough to 
keep the link busy with small packets.

If you are trying to do A/B comparisons to show that this doesn't work, that's 
one thing (and it sounds like you have already done so). But if you are trying 
to make fixed size buffers work well, we don't think that it can be done (not 
just that we have better ideas now, but the 'been there, tried that, nothing 
worked' side of things).

Even with 100 packet queue lengths you can easily get bad latencies under load.


re-reading your post for the umpteenth time, here's what I think I may be 
seeing.

you are working on developing video streaming software that can adapt the bit 
rate of the streaming video to have it fit within the available bandwidth. You 
are trying to see how this interacts with the different queuing options.

Is this a good summary?


If so, then you are basically wanting to do the same thing that the TCP stack is 
doing: when you see a dropped packet or ECN-tagged packet, slow down the bit 
rate of the media that you are streaming so that it will use less bandwidth.

This sounds like an extremely interesting thing to do; it will be interesting to 
see the response from folks who know the deeper levels of the OS as to what 
options you have to learn that such events have taken place.

David Lang


* Re: [Bloat] setting queue depth on tail drop configurations of pfifo_fast
  2015-03-27 22:46     ` David Lang
@ 2015-03-27 23:18       ` Bill Ver Steeg (versteb)
  2015-03-27 23:40         ` David Lang
  0 siblings, 1 reply; 7+ messages in thread
From: Bill Ver Steeg (versteb) @ 2015-03-27 23:18 UTC (permalink / raw)
  To: David Lang; +Cc: bloat


Dave Lang-

Yup, you got the intent.

The ABR video delivery stack is actually one level more complex. The application uses plain old HTTP to receive N==2 second chunks of video, which in turn uses TCP to get the data, which in turn interacts with the various queuing mechanisms, yada, yada, yada. So, the application rate adaptation logic uses the HTTP transfer rate to decide whether to upshift to a higher video rate, downshift to a lower video rate, or stay at the current video rate at each chunk boundary.

There are several application layer algorithms in use (Netflix, MPEG DASH, Apple, Microsoft, etc.), and many of them use more than one TCP/HTTP session to get chunks. Lots of moving parts, and IMHO most of these developers are more concerned with getting the best possible throughput than being bloat-friendly. Driving the network at the perceived available line rate for hours at a time is simply not network friendly.....

Clearly, the newer AQM algorithms will handle these types of aggressive ABR algorithms better. There also may be a way to tweak the ABR algorithm to "do the right thing" and make the system work better - both from a "make my video better" standpoint and a "don't impact cross traffic" standpoint. As a start, I am thinking of ways to keep the sending rate between the max video rate and the (perceived) network rate. This does impact how such a flow competes with other flows, and ...

Regarding peeking into the kernel ----- The overall design of the existing systems assumes that they need to run on several OSes/platforms, and therefore they (generally) do not peek into the kernel. I have done some work that does look into the kernel to examine TCP receive queue sizes ---  https://smartech.gatech.edu/bitstream/handle/1853/45059/GT-CS-12-07.pdf -- and it worked pretty well. That scheme would be difficult to productize, and I am thinking about server-based methods in addition to client-based methods to keep out of congestion jail. Perhaps using HTTP pragmas to have the client signal the desired send rate to the HTTP server.
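The chunk-boundary decision described above could be sketched roughly like this (a toy illustration, not any real player's algorithm; the 1.5x/1.1x thresholds are made-up placeholders):

```shell
# Toy ABR decision: compare the HTTP throughput measured for the last
# chunk against the current video rate and pick the next action.
decide_next_rate() {
    local measured_kbps=$1 current_kbps=$2
    awk -v m="$measured_kbps" -v c="$current_kbps" 'BEGIN {
        if      (m > c * 1.5) print "upshift"     # ample headroom
        else if (m < c * 1.1) print "downshift"   # little or no headroom
        else                  print "hold"
    }'
}
decide_next_rate 5000 3000   # prints "upshift"
```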

Bill Ver Steeg

-----Original Message-----
From: David Lang [mailto:david@lang.hm]
Sent: Friday, March 27, 2015 6:46 PM
To: Bill Ver Steeg (versteb)
Cc: bloat@lists.bufferbloat.net
Subject: RE: [Bloat] setting queue depth on tail drop configurations of pfifo_fast

On Fri, 27 Mar 2015, Bill Ver Steeg (versteb) wrote:

> For this very specific test, I am doing one-way netperf-wrapper packet
> tests that will (almost) always be sending 1500 byte packets. I am
> then running some ABR cross traffic to see how it responds to
> FQ_AQM and AQM (where AQM == Codel and PIE). I am using the pfifo_fast
> as a baseline. The Codel, FQ_codel, PIE and FQ_PIE stuff is working
> fine. I need to tweak the pfifo_fast queue length to do some comparisons.
>
> One of the test scenarios is a 3 Mbps ABR video flow on a 4 Mbps link,
> with and without cross traffic. I have already done what you
> suggested, and the ABR traffic drives the pfifo_fast code into severe
> congestion (even with no cross traffic), with a 3 second bloat. This
> is a bit surprising until you think about how the ABR code fills its
> video buffer at startup and then during steady state playout. I will
> send a detailed note once I get a chance to write it up properly.
>
> I would like to reduce the tail drop queue size to 100 packets (down
> from the default of 1000) and see how that impacts the test. 3 seconds
> of bloat is pretty bad, and I would like to compare how ABR works at
> 1 second and at 200-300 ms.

I think the real question is what are you trying to find out?

No matter how you fiddle with the queue size, we know it's not going to work well. Without using BQL, if you have a queue short enough to not cause horrific bloat when under load with large packets, it's not going to be long enough to keep the link busy with small packets.

If you are trying to do A/B comparisons to show that this doesn't work, that's one thing (and it sounds like you have already done so). But if you are trying to make fixed size buffers work well, we don't think that it can be done (not just that we have better ideas now, but the 'been there, tried that, nothing worked' side of things).

Even with 100 packet queue lengths you can easily get bad latencies under load.

re-reading your post for the umpteenth time, here's what I think I may be seeing.

you are working on developing video streaming software that can adapt the bit rate of the streaming video to have it fit within the available bandwidth. You are trying to see how this interacts with the different queuing options.

Is this a good summary?

If so, then you are basically wanting to do the same thing that the TCP stack is doing: when you see a dropped packet or ECN-tagged packet, slow down the bit rate of the media that you are streaming so that it will use less bandwidth.

This sounds like an extremely interesting thing to do; it will be interesting to see the response from folks who know the deeper levels of the OS as to what options you have to learn that such events have taken place.

David Lang



* Re: [Bloat] setting queue depth on tail drop configurations of pfifo_fast
  2015-03-27 23:18       ` Bill Ver Steeg (versteb)
@ 2015-03-27 23:40         ` David Lang
  0 siblings, 0 replies; 7+ messages in thread
From: David Lang @ 2015-03-27 23:40 UTC (permalink / raw)
  To: Bill Ver Steeg (versteb); +Cc: bloat

On Fri, 27 Mar 2015, Bill Ver Steeg (versteb) wrote:

> Dave Lang-
>
>
>
> Yup, you got the intent.
>
>
>
> The ABR video delivery stack is actually one level more complex. The 
> application uses plain old HTTP to receive N==2 second chunks of video, which 
> in turn uses TCP to get the data, which in turn interacts with the various 
> queuing mechanisms, yada, yada, yada. So, the application rate adaptation 
> logic is using the HTTP transfer rate to decide whether to upshift to a higher 
> video rate, downshift to a lower video rate, or stay at the current video rate 
> at each chunk boundary.
>
>
>
> There are several application layer algorithms in use (Netflix, MPEG DASH, 
> Apple, Microsoft, etc), and many of them use more than one TCP/HTTP session to 
> get chunks. Lots of moving parts, and IMHO most of these developers are more 
> concerned with getting the best possible throughput than being bloat-friendly. 
> Driving the network at the perceived available line rate for hours at a time 
> is simply not network friendly.....

although if the user is only using the line for this purpose, it may be exactly 
the right thing to do :-/

> Clearly, the newer AQM algorithms will handle these types of aggressive ABR 
> algorithms better. There also may be a way to tweak the ABR algorithm to "do 
> the right thing" and make the system work better - both from a "make my video 
> better" standpoint and a "don't impact cross traffic" standpoint. As a start, 
> I am thinking of ways to keep the sending rate between the max video rate and 
> the (perceived) network rate. This does impact how such a flow competes with 
> other flows, and

You aren't really going to be able to measure your impact on other traffic 
(unless you can have the client do something else at the same time that would 
show the latency)

We've been working for a long time to directly measure bufferbloat and it's been 
quite a struggle. The best that we've been able to do is to compare the ping 
response time while under load and watch for it to climb (it tends to go up 
_very_ quickly when bufferbloat starts kicking in)

> Regarding peeking into the kernel ----- The overall design of the existing 
> systems assumes that they need to run on several OSes/platforms, and therefore 
> they (generally) do not peek into the kernel. I have done some work that does 
> look into the kernel to examine TCP receive queue sizes --- 
> https://smartech.gatech.edu/bitstream/handle/1853/45059/GT-CS-12-07.pdf -- and 
> it worked pretty well. That scheme would be difficult to productize, and I am 
> thinking about server-based methods in addition to client-based methods to 
> keep out of congestion jail. Perhaps using HTTP pragmas to have the client 
> signal the desired send rate to the HTTP server.

I was thinking in terms of the sender peeking into the kernel; you normally have 
a much more limited set of OSs on your server. But if you are transferring 
things via a standard HTTP server, you can't do this.

Do you really have any better option than saying "I expected it to take X ms to 
send 2 sec worth of data, but it took X + Y ms to finish the HTTP transfer" and 
then taking action based on the value of Y (which could be negative if the 
connection improved)?
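As a concrete version of that heuristic (all numbers are made-up placeholders):

```shell
# A 2 s chunk of 3 Mbps video is 6000 kbit; over a 4 Mbps link it should
# take about 1500 ms. Y is the overshoot of the measured fetch time.
chunk_kbit=6000
link_kbps=4000
expected_ms=$(( chunk_kbit * 1000 / link_kbps ))  # 1500 ms
actual_ms=2100                                    # hypothetical measured time
y=$(( actual_ms - expected_ms ))
echo "Y = ${y} ms"   # positive Y suggests queueing delay is building
```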

If you have the ability to do something else (something very lightweight, 
ideally UDP based so you don't have TCP retries to deal with) in a separate 
connection while you are downloading the 2s of video, you can detect the delays 
from that.

David Lang

> Bill Ver Steeg
>
> -----Original Message-----
> From: David Lang [mailto:david@lang.hm]
>
> re-reading your post for the umpteenth time, here's what I think I may be
> seeing.
>
> you are working on developing video streaming software that can adapt the bit
> rate of the streaming video to have it fit within the available bandwidth. You
> are trying to see how this interacts with the different queuing options.
>
> Is this a good summary?
>
> If so, then you are basically wanting to do the same thing that the TCP stack
> is doing: when you see a dropped packet or ECN-tagged packet, slow down the
> bit rate of the media that you are streaming so that it will use less
> bandwidth.
>
> This sounds like an extremely interesting thing to do; it will be interesting
> to see the response from folks who know the deeper levels of the OS as to what
> options you have to learn that such events have taken place.
>
> David Lang
>


end of thread (newest: 2015-03-27 23:40 UTC)

Thread overview: 7+ messages
-- links below jump to the message on this page --
2015-03-27 21:45 [Bloat] setting queue depth on tail drop configurations of pfifo_fast Bill Ver Steeg (versteb)
2015-03-27 22:02 ` David Lang
2015-03-27 22:14   ` Bill Ver Steeg (versteb)
2015-03-27 22:18     ` Toke Høiland-Jørgensen
2015-03-27 22:46     ` David Lang
2015-03-27 23:18       ` Bill Ver Steeg (versteb)
2015-03-27 23:40         ` David Lang
