* [Bloat] About Stochastic Fair Blue (SFB)
@ 2011-02-04 9:46 Juliusz Chroboczek
2011-02-04 13:56 ` Jim Gettys
2011-02-04 15:12 ` [Bloat] About Stochastic Fair Blue (SFB) Dave Täht
0 siblings, 2 replies; 14+ messages in thread
From: Juliusz Chroboczek @ 2011-02-04 9:46 UTC (permalink / raw)
To: bloat
Hi again,
In his series of articles, Jim has concentrated on router-based
solutions to delay issues. He mentioned AQM policies in routers, and
notably the venerable RED.
AQMs are designed to achieve two different (but not necessarily
contradictory) goals: to improve the behaviour of the traffic (notably
by reducing the amount of buffering, which is what we're concerned about
here), and to improve fairness. For example, RED is mostly concerned
with the former, while CHOKe is only concerned with the latter.
One AQM that attempts both is Stochastic Fair Blue (SFB), a
stochastically-fair variant of BLUE [1]. In addition to reducing buffer
size and enforcing rough inter-flow fairness, SFB will reliably detect
unresponsive flows and rate-limit them.
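For readers who haven't waded into the paper yet, the core of SFB can be
sketched in a few lines (a toy illustration under assumed parameters, not
the actual qdisc code): each packet hashes into one bin per level of a
Bloom-filter-like structure, each bin runs the BLUE probability update, and
a flow's drop/mark probability is the minimum over its bins, so only a flow
that is aggressive in all of its bins (i.e. an unresponsive flow) sees a
probability approaching 1.

```python
import hashlib

LEVELS, BINS = 8, 16   # assumed dimensions, for illustration only
DELTA = 0.02           # BLUE probability increment/decrement
BIN_LIMIT = 25         # per-bin queue threshold, in packets

class Bin:
    def __init__(self):
        self.qlen = 0
        self.p = 0.0

bins = [[Bin() for _ in range(BINS)] for _ in range(LEVELS)]

def flow_bins(flow_key: bytes):
    # One bin per level, chosen by independent hash bytes.
    digest = hashlib.sha256(flow_key).digest()
    return [bins[lvl][digest[lvl] % BINS] for lvl in range(LEVELS)]

def on_enqueue(flow_key: bytes) -> float:
    """Return the flow's drop/mark probability: the min over its bins."""
    p_min = 1.0
    for b in flow_bins(flow_key):
        b.qlen += 1
        if b.qlen > BIN_LIMIT:           # bin over threshold: raise p
            b.p = min(1.0, b.p + DELTA)
        p_min = min(p_min, b.p)
    return p_min

def on_dequeue(flow_key: bytes):
    for b in flow_bins(flow_key):
        b.qlen = max(0, b.qlen - 1)
        if b.qlen == 0:                  # bin idle: lower p
            b.p = max(0.0, b.p - DELTA)
```

An unresponsive flow fills every bin it hashes to, so its minimum climbs
towards 1, while a light flow almost surely has at least one uncongested
bin and keeps a probability near 0.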
In order to experiment with SFB, I implemented it for Linux a couple
of years ago [2]. Unfortunately, I've given up for now on trying to get
it into the mainline kernel, and I'm not sure I want to try again [3].
--Juliusz
[1] W. Feng, D. Kandlur, D. Saha, K. Shin. Blue: A New Class of Active
Queue Management Algorithms. U. Michigan CSE-TR-387-99, April 1999.
http://www.thefengs.com/wuchang/blue/CSE-TR-387-99.pdf
[2] http://www.pps.jussieu.fr/~jch/software/sfb/
[3] http://article.gmane.org/gmane.linux.network/183813
* Re: [Bloat] About Stochastic Fair Blue (SFB)
2011-02-04 9:46 [Bloat] About Stochastic Fair Blue (SFB) Juliusz Chroboczek
@ 2011-02-04 13:56 ` Jim Gettys
2011-02-04 17:33 ` [Bloat] Buffer bloat at the sender [was: About Stochastic Fair Blue (SFB)] Juliusz Chroboczek
2011-02-04 15:12 ` [Bloat] About Stochastic Fair Blue (SFB) Dave Täht
1 sibling, 1 reply; 14+ messages in thread
From: Jim Gettys @ 2011-02-04 13:56 UTC (permalink / raw)
To: bloat
On 02/04/2011 04:46 AM, Juliusz Chroboczek wrote:
> Hi again,
>
> In his series of articles, Jim has concentrated on router-based
> solutions to delay issues. He mentioned AQM policies in routers, and
> notably the venerable RED.
Referred to hereafter as RED 93.
For those of you who have not waded through all the postings, RED is
often all that has been available in most Internet routers, and RED 93
won't handle the case we face most commonly, which is 802.11.
(This is the opinion of Van Jacobson, who with Sally Floyd invented RED 93).
The reason for this is that RED 93 can't handle the highly variable
"goodput" we see in wireless networks (and some other systems) due to
its static configuration, and Van says its stability given the volatile
traffic mix we have in these networks also makes RED 93 hopeless.
I just want to make clear we face a problem here we can't solve with RED 93.
We have to explore our alternatives in detail.
>
> AQMs are designed to achieve two different (but not necessarily
> contradictory) goals: to improve the behaviour of the traffic (notably
> by reducing the amount of buffering, which is what we're concerned about
> here), and to improve fairness. For example, RED is mostly concerned
> with the former, while CHOKe is only concerned with the latter.
If we don't manage these insane buffers, we've lost.
>
> One AQM that attempts both is Stochastic Fair Blue (SFB), a
> stochastically-fair variant of BLUE [1]. In addition to reducing buffer
> size and enforcing rough inter-flow fairness, SFB will reliably detect
> unresponsive flows and rate-limit them.
>
> In order to experiment with SFB, I implemented it for Linux a couple
> of years ago [2]. Unfortunately, I've given up for now on trying to get
> it into the mainline kernel, and I'm not sure I want to try again [3].
>
> --Juliusz
>
> [1] W. Feng, D. Kandlur, D. Saha, K. Shin. Blue: A New Class of Active
> Queue Management Algorithms. U. Michigan CSE-TR-387-99, April 1999.
> http://www.thefengs.com/wuchang/blue/CSE-TR-387-99.pdf
> [2] http://www.pps.jussieu.fr/~jch/software/sfb/
> [3] http://article.gmane.org/gmane.linux.network/183813
Juliusz, have you thought about the host case at all? One of the
places we're getting insane buffering is in the operating systems
themselves (e.g. the experiment I did with a 100Mbps switch). My
intuition is that we have to do AQM in hosts, not just routers.
- Jim
* Re: [Bloat] About Stochastic Fair Blue (SFB)
2011-02-04 9:46 [Bloat] About Stochastic Fair Blue (SFB) Juliusz Chroboczek
2011-02-04 13:56 ` Jim Gettys
@ 2011-02-04 15:12 ` Dave Täht
2011-02-04 17:41 ` Juliusz Chroboczek
1 sibling, 1 reply; 14+ messages in thread
From: Dave Täht @ 2011-02-04 15:12 UTC (permalink / raw)
To: Juliusz Chroboczek; +Cc: bloat
Juliusz Chroboczek <jch@pps.jussieu.fr> writes:
> Hi again,
> One AQM that attempts both is Stochastic Fair Blue (SFB), a
> stochastically-fair variant of BLUE [1]. In addition to reducing buffer
> size and enforcing rough inter-flow fairness, SFB will reliably detect
> unresponsive flows and rate-limit them.
>
> In order to experiment with SFB, I implemented it for Linux a couple
> of years ago [2]. Unfortunately, I've given up for now on trying to get
> it into the mainline kernel, and I'm not sure I want to try again [3].
I incorporated Juliusz's patches to the tc utility into my git repo a
few weeks back.
https://github.com/dtaht/iproute2bufferbloat
(I would be very interested in accumulating the other patches to tc for
the other new AQMs, too, if anyone knows what or where they are)
So a patched tc + SFB modules gives you a new AQM to work with.
The SFB code seems to compile as modules and insert into modern kernels
just fine. I've built it on arm, x86_64, and x86, with mips on my list
soon.
However, the usage is a little vague. Could you make some suggestions as
to a "shaping stack" to fiddle with, for example, a wireless environment
or home gateway scenario? [1]
Also, would you mind if I also put the SFB code into git somewhere
on github? It could use a few minor tweaks (make install).
Things I like about SFB:
1) You can hash against multiple combinations of things. For example, in
the home gateway scenario, you could hash against IP addresses only, not
IP/port numbers - to give a per-device level of fairness.
2) It has a brilliant idea in the bloom filter - it looks like that
concept can scale.
3) It does packet marking... (So has to be used in combination with
something else)
4) it works with ipv6.
Things that I don't "get" about SFB:
1) I don't understand how the penalty box concept works.
2) I don't understand how it would interact with shaping above and below it[2]
--
Dave Taht
http://nex-6.taht.net
[1] and in my case my driver set is so bloated that traffic shaping at
the moment does little good - but I'm getting there.
[2] Yes, I've read the paper. And the code. Twice.
* [Bloat] Buffer bloat at the sender [was: About Stochastic Fair Blue (SFB)]
2011-02-04 13:56 ` Jim Gettys
@ 2011-02-04 17:33 ` Juliusz Chroboczek
2011-02-04 18:24 ` Jim Gettys
2011-02-04 18:43 ` Dave Täht
0 siblings, 2 replies; 14+ messages in thread
From: Juliusz Chroboczek @ 2011-02-04 17:33 UTC (permalink / raw)
To: Jim Gettys; +Cc: bloat
> Juliusz, have you thought about the host case at all?
> One of the places we're getting insane buffering is in the operating
> systems themselves (e.g. the experiment I did with a 100Mbps
> switch).
Yes. You have three to four layers of buffering:
(1) the device driver's buffer;
(2) the packet scheduler's buffer;
(3) TCP's buffer;
(4) the application's buffer.
It will come as no surprise to the readers of this list that (1) and (2)
are usually too large. For example, (1) the ath9k driver has a buffer
of 200 packets; and (2) the default scheduler queue is 1000 packets (!).
> My intuition is that we have to do AQM in hosts, not just routers.
Hmm... I would argue that the sending host is somewhat easier than the
intermediate router. In the sender, the driver/packet scheduler can
apply backpressure to the transport layer, to cause it to slow down
without the need for the lengthy feedback loop that dropping/delaying
a packet in an intermediate router has to rely on [1].
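That backpressure idea can be sketched in a few lines (a schematic
illustration with made-up names, not kernel code): a driver queue with a
hard limit refuses new packets rather than buffering them, and the refusal
is an immediate signal that the local transport can act on, in the spirit
of netif_stop_queue, with no round trip involved.

```python
from collections import deque

class DriverQueue:
    """A bounded tx queue that pushes back instead of buffering."""

    def __init__(self, limit: int):
        self.q = deque()
        self.limit = limit

    def enqueue(self, pkt) -> bool:
        # Returning False is the backpressure signal: the transport
        # layer must stop sending immediately, rather than learning
        # about congestion one RTT later via a drop.
        if len(self.q) >= self.limit:
            return False
        self.q.append(pkt)
        return True

def send_burst(drv: DriverQueue, n: int):
    """Count how many packets are accepted vs. deferred by backpressure."""
    sent = deferred = 0
    for i in range(n):
        if drv.enqueue(i):
            sent += 1
        else:
            deferred += 1   # a real stack would requeue and wait
    return sent, deferred
```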
Unfortunately, at least under Linux, most drivers do not apply
backpressure correctly. Markus Kittenberger has recently determined [2]
that among b43-legacy, ath5k, ath9k and madwifi, only the former two do
the right thing.
--Juliusz
[1] Now why did we give up on source quench again?
[2] http://article.gmane.org/gmane.network.olsr.user/4264
* Re: [Bloat] About Stochastic Fair Blue (SFB)
2011-02-04 15:12 ` [Bloat] About Stochastic Fair Blue (SFB) Dave Täht
@ 2011-02-04 17:41 ` Juliusz Chroboczek
2011-02-04 18:54 ` Dave Täht
0 siblings, 1 reply; 14+ messages in thread
From: Juliusz Chroboczek @ 2011-02-04 17:41 UTC (permalink / raw)
To: Dave Täht; +Cc: bloat
> Also, would you mind if I also put the SFB code into git somewhere
> on github? It could use a few minor tweaks (make install).
If that's okay with you, I'd like to commit it myself. If you suggest
the right repo to clone, I'll be glad to do it, and you'll be able to
merge it afterwards.
> 1) You can hash against multiple combinations of things. For example, in
> the home gateway scenario, you could hash against IP addresses only, not
> IP/port numbers - to give a per-device level of fairness.
Stolen from esfq.
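As a rough illustration of the two keying policies (the helper names here
are hypothetical, not sfb's actual interface): hashing on the source
address alone puts all of a device's flows in one bucket, giving
per-device fairness, while hashing the full 5-tuple separates them.

```python
import hashlib

def bucket(key: tuple, buckets: int = 16) -> int:
    """Map a flow key to a fairness bucket via a hash."""
    digest = hashlib.sha256(repr(key).encode()).digest()
    return digest[0] % buckets

def per_device_key(src, dst, sport, dport, proto):
    # Per-device fairness: only the source address matters.
    return (src,)

def per_flow_key(src, dst, sport, dport, proto):
    # Per-flow fairness: the full 5-tuple.
    return (src, dst, sport, dport, proto)
```

Two different connections from the same host get identical per-device
keys, so they share a bucket and share that device's fair allocation.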
> 3) It does packet marking... (So has to be used in combination with
> something else)
I think you're confused -- it does ECN marking, not netfilter marking.
> 1) I don't understand how the penalty box concept works.
For each aggregate, sfb maintains a variable, called pdrop, which is the
drop probability for this aggregate; the more aggressive a flow, the
higher pdrop.
If the flow is inelastic, i.e. it doesn't slow down in reaction to
dropped packets, pdrop reaches 1. At that point, sfb should in
principle drop all the packets of this aggregate -- we say that this
aggregate has been put in a penalty box.
(In my implementation of sfb, I'm not actually dropping all the packets
of inelastic flows, I'm just rate-limiting them drastically.)
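A schematic sketch of that behaviour (the constants and names are
assumptions for illustration, not the actual sfb code): once pdrop
saturates at 1, the aggregate is in the penalty box, and rather than
dropping everything we let one packet through per interval.

```python
import random

PENALTY_INTERVAL = 0.5   # hypothetical trickle rate for boxed aggregates

class Aggregate:
    def __init__(self):
        self.pdrop = 0.0                  # BLUE drop probability
        self.last_allowed = float("-inf")

def admit(agg: Aggregate, now: float) -> bool:
    """Decide whether to accept a packet from this aggregate."""
    if agg.pdrop >= 1.0:
        # Penalty box: drastic rate limit instead of a total drop,
        # so a responsive flow can still discover that it may recover.
        if now - agg.last_allowed >= PENALTY_INTERVAL:
            agg.last_allowed = now
            return True
        return False
    # Ordinary case: probabilistic dropping driven by pdrop.
    return random.random() >= agg.pdrop
```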
> 2) I don't understand how it would interact with shaping above and
> below it
There's nothing below sfb, since it's a classless discipline.
You may put anything you wish above sfb. If you're the bottleneck and
use a driver that performs proper backpressure, it's okay to put
nothing; otherwise, you need to put something to simulate backpressure,
typically tbf or htb.
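For what it's worth, a minimal stack along those lines might look like the
following; the sfb option syntax depends on the patched tc, so treat every
name and number here as a placeholder rather than working configuration:

```shell
# Hypothetical example: htb pins the rate slightly below the real uplink
# (simulating backpressure), with sfb as its leaf qdisc.
DEV=eth0
UPLINK=900kbit        # a bit below the actual uplink rate

tc qdisc add dev $DEV root handle 1: htb default 10
tc class add dev $DEV parent 1: classid 1:10 htb rate $UPLINK
tc qdisc add dev $DEV parent 1:10 handle 10: sfb
```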
--Juliusz
* Re: [Bloat] Buffer bloat at the sender [was: About Stochastic Fair Blue (SFB)]
2011-02-04 17:33 ` [Bloat] Buffer bloat at the sender [was: About Stochastic Fair Blue (SFB)] Juliusz Chroboczek
@ 2011-02-04 18:24 ` Jim Gettys
2011-02-04 18:58 ` [Bloat] Buffer bloat at the sender Juliusz Chroboczek
2011-02-04 18:43 ` Dave Täht
1 sibling, 1 reply; 14+ messages in thread
From: Jim Gettys @ 2011-02-04 18:24 UTC (permalink / raw)
To: Juliusz Chroboczek; +Cc: bloat
On 02/04/2011 12:33 PM, Juliusz Chroboczek wrote:
>> Juliusz, have you thought about the host case at all?
>
>> One of the places we're getting insane buffering is in the operating
>> systems themselves (e.g. the experiment I did with a 100Mbps
>> switch).
>
> Yes. You have three to four layers of buffering:
>
> (1) the device driver's buffer;
> (2) the packet scheduler's buffer;
> (3) TCP's buffer;
> (4) the application's buffer.
There are a few more than the four you identify:
5) Sometimes device drivers have some private buffers, independent of
the device itself; for example the Marvell driver for the wireless
module used on OLPC has such an internal buffer; dwmw2 used it to
simplify his locking problems with USB.
6) The devices themselves may have buffers. Again, on the OLPC wireless
module (which does a form of mesh routing internally, without needing
host support), there are 4 packet buffers out in the device (which is
itself an ARM processor with hundreds of kilobytes of code).
I also hypothesise that there could be busses that support multiple
outstanding transactions, that could add yet more buffering. I have no
idea if this hypothesis is true.
>
> It will come as no surprise to the readers of this list that (1) and (2)
> are usually too large. For example, (1) the ath9k driver has a buffer
> of 200 packets; and (2) the default scheduler queue is 1000 packets (!).
>
>> My intuition is that we have to do AQM in hosts, not just routers.
>
> Hmm... I would argue that the sending host is somewhat easier than the
> intermediate router. In the sender, the driver/packet scheduler can
> apply backpressure to the transport layer, to cause it to slow down
> without the need for the lengthy feedback loop that dropping/delaying
> a packet in an intermediate router has to rely on [1].
Yup. I was noting, however, that with 1-4 above we have a major
problem here in practice today.
And I don't know if there is any way to signal back pressure to UDP
based protocols; I've never worked on one.
>
> Unfortunately, at least under Linux, most drivers do not apply
> backpressure correctly. Markus Kittenberger has recently determined [2]
> that among b43-legacy, ath5k, ath9k and madwifi, only the former two do
> the right thing.
Yes, the host side can and should apply backpressure correctly, and it
may be an easier case. It sounds like we have work to do in that area.
>
> --Juliusz
>
> [1] Now why did we give up on source quench again?
> [2] http://article.gmane.org/gmane.network.olsr.user/4264
>
* Re: [Bloat] Buffer bloat at the sender
2011-02-04 17:33 ` [Bloat] Buffer bloat at the sender [was: About Stochastic Fair Blue (SFB)] Juliusz Chroboczek
2011-02-04 18:24 ` Jim Gettys
@ 2011-02-04 18:43 ` Dave Täht
2011-02-04 18:56 ` Juliusz Chroboczek
1 sibling, 1 reply; 14+ messages in thread
From: Dave Täht @ 2011-02-04 18:43 UTC (permalink / raw)
To: Juliusz Chroboczek; +Cc: bloat
Juliusz Chroboczek <jch@pps.jussieu.fr> writes:
>> Juliusz, have you thought about the host case at all?
>
>> One of the places we're getting insane buffering is in the operating
>> systems themselves (e.g. the experiment I did with a 100Mbps
>> switch).
>
> Yes. You have three to four layers of buffering:
>
> (1) the device driver's buffer;
> (2) the packet scheduler's buffer;
> (3) TCP's buffer;
> (4) the application's buffer.
>
> It will come as no surprise to the readers of this list that (1) and (2)
> are usually too large. For example, (1) the ath9k driver has a buffer
> of 200 packets; and (2) the default scheduler queue is 1000 packets (!).
The ath9k driver I have has 512 buffers, organized into 10 queues for
various usages, which I don't think are actually being used for much
(we need a way to gain insight into the queue usage and qdisc
interaction), with TX_RETRIES set to 13. Fairly current openwrt head.
I've put a patch out there to reduce this to (I think) effectively a
queue depth of 3, retries of 4, throughout, and the results are thus far
amazing.
>
>> My intuition is that we have to do AQM in hosts, not just routers.
>
> Hmm... I would argue that the sending host is somewhat easier than the
> intermediate router. In the sender, the driver/packet scheduler can
> apply backpressure to the transport layer, to cause it to slow down
> without the need for the lengthy feedback loop that dropping/delaying
> a packet in an intermediate router has to rely on [1].
>
> Unfortunately, at least under Linux, most drivers do not apply
> backpressure correctly. Markus Kittenberger has recently determined [2]
> that among b43-legacy, ath5k, ath9k and madwifi, only the former two do
> the right thing.
I was wondering about madwifi in the context of the mesh potato. Thank you.
I have been looking over the mq qdisc, it's not clear how well it's
being used.
>
> --Juliusz
>
> [1] Now why did we give up on source quench again?
> [2] http://article.gmane.org/gmane.network.olsr.user/4264
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Taht
http://nex-6.taht.net
* Re: [Bloat] About Stochastic Fair Blue (SFB)
2011-02-04 17:41 ` Juliusz Chroboczek
@ 2011-02-04 18:54 ` Dave Täht
2011-02-04 19:02 ` Juliusz Chroboczek
0 siblings, 1 reply; 14+ messages in thread
From: Dave Täht @ 2011-02-04 18:54 UTC (permalink / raw)
To: Juliusz Chroboczek; +Cc: bloat
Juliusz Chroboczek <jch@pps.jussieu.fr> writes:
>> Also, would you mind if I also put the SFB code into git somewhere
>> on github? It could use a few minor tweaks (make install).
>
> If that's okay with you, I'd like to commit it myself.
Go right ahead! Just tell us where.
> If you suggest the right repo to clone, I'll be glad to do it, and
> you'll be able to merge it afterwards.
My thought was, since there is an upcoming flurry of AQM modules, that
it would be good to have a structure that allowed for them all to be
maintained in a shared repository. So something like a Linux-AQM repo
with a tree for each qdisc. They are highly modular so forking a Linux
branch like Linux-wireless seems like overkill, at the moment.
Thoughts?
>
>> 1) You can hash against multiple combinations of things. For example, in
>> the home gateway scenario, you could hash against IP addresses only, not
>> IP/port numbers - to give a per-device level of fairness.
>
> Stolen from esfq.
Great artists steal!
>> 3) It does packet marking... (So has to be used in combination with
>> something else)
>
> I think you're confused -- it does ECN marking, not netfilter marking.
The need for multiple levels of qdisc is that protocols other than TCP
do not have ECN.
>
>> 1) I don't understand how the penalty box concept works.
>
> For each aggregate, sfb maintains a variable, called pdrop, which is the
> drop probability for this aggregate; the more aggressive a flow, the
> higher pdrop.
>
> If the flow is inelastic, i.e. it doesn't slow down in reaction to
> dropped packets, pdrop reaches 1. At that point, sfb should in
> principle drop all the packets of this aggregate -- we say that this
> aggregate has been put in a penalty box.
>
> (In my implementation of sfb, I'm not actually dropping all the packets
> of inelastic flows, I'm just rate-limiting them drastically.)
Thank you, that clears it up.
>> 2) I don't understand how it would interact with shaping above and
>> below it
>
> There's nothing below sfb, since it's a classless discipline.
>
> You may put anything you wish above sfb. If you're the bottleneck and
> use a driver that performs proper backpressure, it's okay to put
> nothing; otherwise, you need to put something to simulate backpressure,
> typically tbf or htb.
Except that other flows can be non-TCP - UDP, SCTP... Another
"interesting" qdisc is hfsc. I know firsthand how badly different qdiscs
can interact with each other....
>
> --Juliusz
--
Dave Taht
http://nex-6.taht.net
* Re: [Bloat] Buffer bloat at the sender
2011-02-04 18:43 ` Dave Täht
@ 2011-02-04 18:56 ` Juliusz Chroboczek
2011-02-04 19:58 ` Dave Täht
0 siblings, 1 reply; 14+ messages in thread
From: Juliusz Chroboczek @ 2011-02-04 18:56 UTC (permalink / raw)
To: Dave Täht; +Cc: bloat
> The ath9k driver I have has 512 buffers,
You're right, I stand corrected.
> I have been looking over the mq qdisc,
Do you understand what it's supposed to do?
--Juliusz
* Re: [Bloat] Buffer bloat at the sender
2011-02-04 18:24 ` Jim Gettys
@ 2011-02-04 18:58 ` Juliusz Chroboczek
2011-02-04 19:26 ` Dave Täht
0 siblings, 1 reply; 14+ messages in thread
From: Juliusz Chroboczek @ 2011-02-04 18:58 UTC (permalink / raw)
To: Jim Gettys; +Cc: bloat
> And I don't know if there is any way to signal back pressure to UDP
> based protocols
Sendmsg returns EAGAIN or blocks, and it's up to the application to deal
with it. Or am I missing something?
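Concretely, that mechanism looks like this from an application's point of
view (a minimal sketch; the port number is arbitrary): on a non-blocking
UDP socket, send raises EAGAIN (BlockingIOError in Python) when the kernel
cannot accept more data, and the application decides how to slow down.

```python
import socket

def try_send(sock, data: bytes, addr) -> bool:
    """Return True if the datagram was queued, False on backpressure."""
    try:
        sock.sendto(data, addr)
        return True
    except BlockingIOError:   # EAGAIN/EWOULDBLOCK: the kernel says slow down
        return False

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setblocking(False)
```

Whether the application actually honours that signal, instead of just
spinning and retrying, is of course up to the application.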
--Juliusz
* Re: [Bloat] About Stochastic Fair Blue (SFB)
2011-02-04 18:54 ` Dave Täht
@ 2011-02-04 19:02 ` Juliusz Chroboczek
0 siblings, 0 replies; 14+ messages in thread
From: Juliusz Chroboczek @ 2011-02-04 19:02 UTC (permalink / raw)
To: Dave Täht; +Cc: bloat
> Thoughts?
Why don't you create a clone of the kernel that we can fork?
Note, by the way, that given how quickly Github complied with
https://github.com/github/dmca/blob/master/2011-01-27-sony.markdown
I suggest keeping a personal backup of anything you put there.
--Juliusz
* Re: [Bloat] Buffer bloat at the sender
2011-02-04 18:58 ` [Bloat] Buffer bloat at the sender Juliusz Chroboczek
@ 2011-02-04 19:26 ` Dave Täht
0 siblings, 0 replies; 14+ messages in thread
From: Dave Täht @ 2011-02-04 19:26 UTC (permalink / raw)
To: Juliusz Chroboczek; +Cc: bloat
Juliusz Chroboczek <Juliusz.Chroboczek@pps.jussieu.fr> writes:
>> And I don't know if there is any way to signal back pressure to UDP
>> based protocols
>
> Sendmsg returns EAGAIN or blocks, and it's up to the application to deal
> with it. Or am I missing something?
Works at the host, not at the router.
>
> --Juliusz
--
Dave Taht
http://nex-6.taht.net
* Re: [Bloat] Buffer bloat at the sender
2011-02-04 18:56 ` Juliusz Chroboczek
@ 2011-02-04 19:58 ` Dave Täht
2011-02-04 20:14 ` Juliusz Chroboczek
0 siblings, 1 reply; 14+ messages in thread
From: Dave Täht @ 2011-02-04 19:58 UTC (permalink / raw)
To: Juliusz Chroboczek; +Cc: bloat
Juliusz Chroboczek <Juliusz.Chroboczek@pps.jussieu.fr> writes:
>> I have been looking over the mq qdisc,
>
> Do you understand what it's supposed to do?
The documentation for it is in the Linux tree under
Documentation/networking/multiqueue.txt
My understanding is that it is a general purpose mechanism to get
traffic into a driver (notably a wireless one) that has multiple queues
implemented, in the hope that classified traffic can then line up with
one of the more appropriate traffic classifications in the 802.11*
standards. [1][2]
It does not seem to be widely used at present, and the example in
the documentation is overgeneralized and does not hint at this use.
There are also all sorts of other interesting hacks^H^H^H^H^H attempts
for a reliable delivery/timeliness compromise in the mac layer in the
802.11 standards, including frame aggregation and the like.
The latest 802.11* draft is now open for review to IEEE members. [3]
How all this stuff interacts in the real world is somewhat undefined.
I'm way too far below this level of the stack to care at present, but
seeing some scripts that actually tried to wedge wireless udp/tcpip
traffic into more appropriate queues at the driver level would be
interesting.
--
Dave Taht
http://nex-6.taht.net
[1] http://en.wikipedia.org/wiki/IEEE_802.11e-2005
[2] http://www.intel.com/network/connectivity/resources/doc_library/white_papers/30376201.pdf
[3] http://www.ieee802.org/11/
* Re: [Bloat] Buffer bloat at the sender
2011-02-04 19:58 ` Dave Täht
@ 2011-02-04 20:14 ` Juliusz Chroboczek
0 siblings, 0 replies; 14+ messages in thread
From: Juliusz Chroboczek @ 2011-02-04 20:14 UTC (permalink / raw)
To: Dave Täht; +Cc: bloat
>>> I have been looking over the mq qdisc,
> The documentation for it is in the Linux tree under
> Documentation/networking/multiqueue.txt
That's the "multiq" qdisc. Is the "mq" qdisc the same thing?
--Juliusz