Development issues regarding the cerowrt test router project
* [Cerowrt-devel] happy 4th!
@ 2013-07-03 19:33 Dave Taht
  2013-07-04  5:57 ` Mikael Abrahamsson
  0 siblings, 1 reply; 26+ messages in thread
From: Dave Taht @ 2013-07-03 19:33 UTC (permalink / raw)
  To: cerowrt-devel

I will be taking off for a few days. I've been terribly distracted
with getting the yurtlab fully up, some details here:

https://plus.google.com/u/0/107942175615993706558/posts/XFGgMTUUDWC

Suggestions as to things to test and code to test them welcomed. In
particular I'd like to find something more robust than apache's
benchmarks to more closely emulate the cablelabs tests of realistic
web sites and more realistic RTTs.

It has generally been my hope to sit down and focus on cerowrt again
starting next week, but I just took a contract (money trumps research)
which is going to push that back another week.

The present dev build seems pretty stable, but there are some busted
packages, and I will be revising the default ahcp setup to be more
useful, and adding some more polishing touches on the aqm thing. I
also need to get around to pushing up the 6in4 fq_codel fix into the
kernel mainline at the very least.

Has anyone looked at ACC from gargoyle? It's pretty interesting...

known broken stuff in the present build:

bind-latest: Presently I plan to abandon the bind-latest code, as much
as I liked the xinetd based launcher, and revert to openwrt's mainline
version of bind as an installable option. Doing that primarily to get
"nsupdate", actually...

bloat-test-scripts: keep meaning to package these up (from the ietf demo)
unbound: will add to next build
miniupnp: I saw the conversation, have no means to duplicate the
problem (should I get an xbox for the lab?)
minissdpd: no clue
mosh-server: dies on trying to create a pty terminal
quagga: I am toying with switching to babeld for the next build as
there is exciting stuff going on in the RTT and source routing
branches. I have some very beta homenet code for quagga that I'd like
to be able to test separately...

Kernel 3.10 is out and as the rest of the lab is moving towards
running that, I'd like to move cero to 3.10 when patches become
available rather than sticking with 3.8.13 for the next stable
release.

I would like to take a better stab at addressing

https://www.bufferbloat.net/issues/433

than merely saying: use ahcp and babel.


-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-03 19:33 [Cerowrt-devel] happy 4th! Dave Taht
@ 2013-07-04  5:57 ` Mikael Abrahamsson
  2013-07-04 13:51   ` Michael Richardson
  2013-07-07 18:52   ` dpreed
  0 siblings, 2 replies; 26+ messages in thread
From: Mikael Abrahamsson @ 2013-07-04  5:57 UTC (permalink / raw)
  To: Dave Taht; +Cc: cerowrt-devel

On Wed, 3 Jul 2013, Dave Taht wrote:

> Suggestions as to things to test and code to test them welcomed. In

I'm wondering a bit what the shallow buffering depth means to higher-RTT 
connections. When I advocate bufferbloat solutions, I usually have it thrown 
in my face that shallow buffering means around-the-world TCP connections will 
behave worse than with a lot of buffers (the traditional wisdom being that 
you need to be able to buffer RTT*2).

It would be very interesting to see what an added 100ms 
(<http://stackoverflow.com/questions/614795/simulate-delayed-and-dropped-packets-on-linux>) 
and some packet loss/PDV would result in. If it still works well, at least 
it would mean that people concerned about this could rest easy.
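
Roughly something like this on the box adding the impairment (interface
name and numbers purely illustrative, not from any real setup):

  # 100ms delay with 10ms jitter (PDV) and 0.1% loss on egress
  tc qdisc add dev eth0 root netem delay 100ms 10ms loss 0.1%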

Also, it would be interesting to see if Google's proposed QUIC interacts well 
with the bufferbloat solutions. I imagine it will, since it itself 
measures RTT and FQ_CODEL is all about controlling delay, so I imagine 
QUIC will see a quite constant view of the world through FQ_CODEL.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-04  5:57 ` Mikael Abrahamsson
@ 2013-07-04 13:51   ` Michael Richardson
  2013-07-04 15:48     ` Mikael Abrahamsson
  2013-07-07 18:52   ` dpreed
  1 sibling, 1 reply; 26+ messages in thread
From: Michael Richardson @ 2013-07-04 13:51 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: cerowrt-devel


Mikael Abrahamsson <swmike@swm.pp.se> wrote:

    > On Wed, 3 Jul 2013, Dave Taht wrote:

    >> Suggestions as to things to test and code to test them welcomed. In

    > I'm wondering a bit what the shallow buffering depth means to
    > higher-RTT connections. When I advocate bufferbloat solutions I usually
    > get thrown in my face that shallow buffering means around-the-world
    > TCP-connections will behave worse than with a lot of buffers
    > (traditional truth being that you need to be able to buffer RTT*2).

huh?  The end points might need more buffers to receive more packets (some of
which might be out of order), but the intermediate routers need nothing.
None of the bufferbloat stuff reduces the receive buffers of an end-point.

On long-latency links (the worst being geosynchronous satellite), the link
*itself* stores data.
(a historical exploitation of this: https://en.wikipedia.org/wiki/Delay_line_memory )

So, who is saying this to you, and what exactly do they think bufferbloat is
about?

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        | network architect  [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-04 13:51   ` Michael Richardson
@ 2013-07-04 15:48     ` Mikael Abrahamsson
  0 siblings, 0 replies; 26+ messages in thread
From: Mikael Abrahamsson @ 2013-07-04 15:48 UTC (permalink / raw)
  To: Michael Richardson; +Cc: cerowrt-devel

On Thu, 4 Jul 2013, Michael Richardson wrote:

> huh?  The end points might need more buffers to receive more packets (some of
> which might be out of order), but the intermediate routers need nothing.
> None of the bufferbloat stuff reduces the receive buffers of an end-point.

I never talked about the end-points' receive buffers. I was talking about 
intermediate routers' buffering. With long-latency links, TCP traditionally 
sends packets in fairly big bursts at the wire speed of the end system, 
and when such a burst arrives at a router doing speed conversion 
(1gige->1megabit/s WAN link), that router will need to buffer some of the burst. 
This is why buffers on routers are large: because of the traditional 
thinking that TCP works best when the routers in between never drop packets.

> So, who is saying this to you, and what exactly do they think 
> bufferbloat is about?

They say that if you reduce the buffer size in routers, TCP performance on 
long-latency paths will suffer from packet loss and long recovery times 
caused by the high end-to-end latency.

Traditional thinking (I've had this quoted to me by several old-timers who 
have been involved in core router design since the 90s) is that you 
need buffers that can absorb 2*RTT worth of packets at the speed you're 
trying to send.
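
To put rough numbers on that rule of thumb (purely illustrative):

  2 * RTT * link rate = 2 * 0.2 s * 1,000,000 bit/s = 400,000 bits ~= 50 kilobytes

of buffer for a 1 Mbit/s link at 200 ms RTT.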

So if we want to reduce the buffers, we need to prove that TCP goodput on 
a 200ms link doesn't suffer because there are now fewer buffers.

So my suggestion is the following:

When doing the tests, set up the following:

[host1] - (GE/FE/E) - [fq_codel router] - [100ms adding latency router] (GE) [host2]

The link between CODEL-router and the router adding latency should be 1/10 
and 1/100 of the [host1] - [CODEL-router] link speed.
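
A minimal sketch of the two middle boxes (interface names and rates are
illustrative only, not from an actual setup):

  # on the fq_codel router: shape egress towards host2 to the bottleneck rate
  tc qdisc add dev eth1 root handle 1: htb default 10
  tc class add dev eth1 parent 1: classid 1:10 htb rate 10mbit
  tc qdisc add dev eth1 parent 1:10 fq_codel
  # FIFO baseline for comparison: swap the leaf for a large pfifo
  # tc qdisc replace dev eth1 parent 1:10 pfifo limit 1000

  # on the latency-adding router: delay everything by 100ms in each direction
  tc qdisc add dev eth1 root netem delay 100ms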

If it can be shown that fq_codel keeps the goodput performance the same as 
FIFO with large buffers, then we hopefully could answer people with the 
concerns I mentioned before.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-04  5:57 ` Mikael Abrahamsson
  2013-07-04 13:51   ` Michael Richardson
@ 2013-07-07 18:52   ` dpreed
  2013-07-08  0:24     ` Mikael Abrahamsson
  1 sibling, 1 reply; 26+ messages in thread
From: dpreed @ 2013-07-07 18:52 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: cerowrt-devel

[-- Attachment #1: Type: text/plain, Size: 3951 bytes --]


Wherever the idea came from that you "had to buffer RTT*2" in a midpath node, it is categorically wrong.
 
What is possibly relevant is that you will have RTT * bottleneck-bit-rate bits "in flight" from end-to-end in order not to be constrained by the acknowledgement time.   That is: TCP's outstanding "window" should be RTT*bottleneck-bit-rate to maximize throughput.   Making the window *larger* than that is not helpful.
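
As a purely illustrative example: for a 10 Mbit/s bottleneck and a 200 ms RTT,

  window = RTT * bottleneck-bit-rate = 0.2 s * 10,000,000 bit/s = 2,000,000 bits = 250 kilobytes

of data in flight is enough to keep the link busy; anything beyond that just sits in a queue.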
 
So when somebody "throws that in your face", just confidently use the words "Bullshit, show me evidence", and ignore the ignorant person who is repeating an urban legend similar to the one about the size of crocodiles in New York's sewers that are supposedly there because people throw pet crocodiles down there.
 
If you need a simplified explanation of why having 2*RTT-in-the-worst-case-around-the-world * maximum-bit-rate-on-the-path of buffering is a bad idea, all you need to think about is what happens when some intermediate huge bottleneck buffer fills up (which it certainly will, very quickly, since by definition the paths feeding it have much higher delivery rates than it can handle).
 
What will happen?  A packet will be silently discarded from the "tail" of the queue.  But that packet's loss will not be discovered by the endpoints until the "bottleneck-bit-rate" * the worst-case-RTT * 2 (or maybe 4 if the reverse path is similarly clogged) seconds later.  Meanwhile the sources would have happily *sustained* the size of the bottleneck's buffer, by putting out that many bits past the lost packet's position (thinking all is well).
 
And so what will happen?  Most of the following packets behind the lost packet will be retransmitted by the source again.   This of course *doubles* the packet rate into the bottleneck.
 
And there is an infinite regression - all the while there being a solidly maintained extremely long queue of packets that are waiting for the bottleneck link.  Many, many seconds of end-to-end latency on that link, perhaps.
 
Only if all users "give up and go home" for the day on that link will the bottleneck link's send queue ever drain.  New TCP connections will open, and if lucky, they will see a link with delays from earth-to-pluto as its norm on their SYN/SYN-ACK.  But they won't get better service than that, while continuing to congest the node.
 
What you need is a message from the bottleneck link to say "WHOA - I can't process all this traffic".  And that happens *only* when that link actually drops packets after about 50 msec. or less of traffic is queued.
 
 
 
 


On Thursday, July 4, 2013 1:57am, "Mikael Abrahamsson" <swmike@swm.pp.se> said:



> On Wed, 3 Jul 2013, Dave Taht wrote:
> 
> > Suggestions as to things to test and code to test them welcomed. In
> 
> I'm wondering a bit what the shallow buffering depth means to higher-RTT
> connections. When I advocate bufferbloat solutions I usually get thrown in
> my face that shallow buffering means around-the-world TCP-connections will
> behave worse than with a lot of buffers (traditional truth being that you
> need to be able to buffer RTT*2).
> 
> It would be very interesting to see what an added 100ms
> (<http://stackoverflow.com/questions/614795/simulate-delayed-and-dropped-packets-on-linux>)
> and some packet loss/PDV would result in. If it still works well, at least
> it would mean that people concerned about this could go back to rest.
> 
> Also, would be interesting to see is Googles proposed QUIC interacts well
> with the bufferbloat solutions. I imagine it will since it in itself
> measures RTT and FQ_CODEL is all about controlling delay, so I imagine
> QUIC will see a quite constant view of the world through FQ_CODEL.
> 
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>

[-- Attachment #2: Type: text/html, Size: 5250 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-07 18:52   ` dpreed
@ 2013-07-08  0:24     ` Mikael Abrahamsson
  2013-07-08 17:03       ` Toke Høiland-Jørgensen
  2013-07-08 20:50       ` [Cerowrt-devel] " dpreed
  0 siblings, 2 replies; 26+ messages in thread
From: Mikael Abrahamsson @ 2013-07-08  0:24 UTC (permalink / raw)
  To: dpreed; +Cc: cerowrt-devel

On Sun, 7 Jul 2013, dpreed@reed.com wrote:

> So when somebody "throws that in your face", just confidently use the 
> words "Bullshit, show me evidence", and ignore the ignorant person who

Oh, the people that have told me this are definitely not ignorant. Quite 
the contrary.

... and by the way, they're optimising for the case where a single TCP 
flow from a 10GE connected host is traversing a 10G based backbone, and 
they want this single TCP session to use all the spare capacity the network 
has to give. Not 90% of available capacity, but 100%.

These are the kind of people that have a lot of influence and cause core 
routers to be designed with 600 ms of buffering (well, the latest generation 
is down to 50ms of buffering). We're talking billion-dollar investments 
by hardware manufacturers. We're talking latest-generation core routers 
that are still being put into production as we speak.

Calling them ignorant and trying to wave them off with that kind of 
reasoning isn't productive. Why not just implement the high-RTT testing 
part and prove that you're right instead of just saying you're right?

The bufferbloat initiative is trying to change how things are done. The burden 
of proof is on us. When I participate in the IETF TCP WG, they talk goodput. 
They're not talking about latency or interacting well with UDP-based 
interactive streams. They're optimising goodput. If we want buffers to be 
smaller, we need to convince people that this doesn't hugely affect goodput.

I have not so far seen tests with FQ_CODEL with a simulated 100ms extra 
latency one-way (200ms RTT). They might be out there, but I have not seen 
them. I encourage these tests to be done.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-08  0:24     ` Mikael Abrahamsson
@ 2013-07-08 17:03       ` Toke Høiland-Jørgensen
  2013-07-09  3:24         ` Dave Taht
  2013-07-09  6:04         ` Mikael Abrahamsson
  2013-07-08 20:50       ` [Cerowrt-devel] " dpreed
  1 sibling, 2 replies; 26+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-07-08 17:03 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: cerowrt-devel

[-- Attachment #1: Type: text/plain, Size: 5489 bytes --]

Mikael Abrahamsson <swmike@swm.pp.se> writes:

> I have not so far seen tests with FQ_CODEL with a simulated 100ms
> extra latency one-way (200ms RTT). They might be out there, but I have
> not seen them. I encourage these tests to be done.

Did a few test runs on my setup. Here are some figures (can't go higher
than 100mbit with the hardware I have, sorry).

Note that I haven't done tests at 100mbit on this setup before, so can't
say whether something weird is going on there. I'm a little bit puzzled
as to why the flows don't seem to get going at all in one direction for
the rrul test. I'm guessing it has something to do with TSQ.

Attaching graphs makes the listserv bounce my mail, so instead they're
here: http://archive.tohojo.dk/bufferbloat-data/long-rtt/ with
throughput data below. Overall, it looks pretty good for fq_codel I'd
say :)

I can put up the data files as well if you'd like.

-Toke


Throughput data:


10mbit:

rrul test (4 flows each way), pfifo_fast qdisc:
 TCP download sum:
  Data points: 299
  Total:       375.443728 Mbits
  Mean:        6.278323 Mbits/s
  Median:      6.175466 Mbits/s
  Min:         0.120000 Mbits/s
  Max:         9.436373 Mbits/s
  Std dev:     1.149514
  Variance:    1.321382
--
 TCP upload sum:
  Data points: 300
  Total:       401.740454 Mbits
  Mean:        6.695674 Mbits/s
  Median:      6.637576 Mbits/s
  Min:         2.122827 Mbits/s
  Max:         16.892302 Mbits/s
  Std dev:     1.758319
  Variance:    3.091687

  
rrul test (4 flows each way), fq_codel qdisc:
 TCP download sum:
  Data points: 301
  Total:       492.824346 Mbits
  Mean:        8.186451 Mbits/s
  Median:      8.416901 Mbits/s
  Min:         0.120000 Mbits/s
  Max:         9.965051 Mbits/s
  Std dev:     1.244959
  Variance:    1.549924
--
 TCP upload sum:
  Data points: 305
  Total:       717.499994 Mbits
  Mean:        11.762295 Mbits/s
  Median:      8.630924 Mbits/s
  Min:         2.513799 Mbits/s
  Max:         323.180000 Mbits/s
  Std dev:     31.056047
  Variance:    964.478066


TCP test (one flow each way), pfifo_fast qdisc:
 TCP download:
  Data points: 301
  Total:       263.445418 Mbits
  Mean:        4.376170 Mbits/s
  Median:      4.797729 Mbits/s
  Min:         0.030000 Mbits/s
  Max:         5.757982 Mbits/s
  Std dev:     1.135209
  Variance:    1.288699
---
 TCP upload:
  Data points: 302
  Total:       321.090853 Mbits
  Mean:        5.316074 Mbits/s
  Median:      5.090142 Mbits/s
  Min:         0.641123 Mbits/s
  Max:         24.390000 Mbits/s
  Std dev:     2.126472
  Variance:    4.521882


TCP test (one flow each way), fq_codel qdisc:
 TCP download:
  Data points: 302
  Total:       365.357123 Mbits
  Mean:        6.048959 Mbits/s
  Median:      6.550488 Mbits/s
  Min:         0.030000 Mbits/s
  Max:         9.090000 Mbits/s
  Std dev:     1.316275
  Variance:    1.732579
---
 TCP upload:
  Data points: 303
  Total:       466.550695 Mbits
  Mean:        7.698856 Mbits/s
  Median:      6.144435 Mbits/s
  Min:         0.641154 Mbits/s
  Max:         127.690000 Mbits/s
  Std dev:     12.075298
  Variance:    145.812812


100 mbit:

rrul test (4 flows each way), pfifo_fast qdisc:
 TCP download sum:
  Data points: 301
  Total:       291.718140 Mbits
  Mean:        4.845816 Mbits/s
  Median:      4.695355 Mbits/s
  Min:         0.120000 Mbits/s
  Max:         10.774475 Mbits/s
  Std dev:     1.818852
  Variance:    3.308222
--
 TCP upload sum:
  Data points: 305
  Total:       5468.339961 Mbits
  Mean:        89.644917 Mbits/s
  Median:      90.731214 Mbits/s
  Min:         2.600000 Mbits/s
  Max:         186.362429 Mbits/s
  Std dev:     21.782436
  Variance:    474.474532


rrul test (4 flows each way), fq_codel qdisc:
 TCP download sum:
  Data points: 304
  Total:       427.064699 Mbits
  Mean:        7.024090 Mbits/s
  Median:      7.074768 Mbits/s
  Min:         0.150000 Mbits/s
  Max:         17.870000 Mbits/s
  Std dev:     2.079303
  Variance:    4.323501
--
 TCP upload sum:
  Data points: 305
  Total:       5036.774674 Mbits
  Mean:        82.570077 Mbits/s
  Median:      82.782532 Mbits/s
  Min:         2.600000 Mbits/s
  Max:         243.990000 Mbits/s
  Std dev:     22.566052
  Variance:    509.226709


TCP test (one flow each way), pfifo_fast qdisc:
 TCP download:
  Data points: 160
  Total:       38.477172 Mbits
  Mean:        1.202412 Mbits/s
  Median:      1.205256 Mbits/s
  Min:         0.020000 Mbits/s
  Max:         4.012585 Mbits/s
  Std dev:     0.728299
  Variance:    0.530419
 TCP upload:
  Data points: 165
  Total:       2595.453489 Mbits
  Mean:        78.650106 Mbits/s
  Median:      92.387832 Mbits/s
  Min:         0.650000 Mbits/s
  Max:         102.610000 Mbits/s
  Std dev:     30.432215
  Variance:    926.119728



TCP test (one flow each way), fq_codel qdisc:
  Data points: 301
  Total:       396.307606 Mbits
  Mean:        6.583183 Mbits/s
  Median:      7.786816 Mbits/s
  Min:         0.030000 Mbits/s
  Max:         15.270000 Mbits/s
  Std dev:     3.034477
  Variance:    9.208053
 TCP upload:
  Data points: 302
  Total:       4238.768131 Mbits
  Mean:        70.178280 Mbits/s
  Median:      74.722554 Mbits/s
  Min:         0.650000 Mbits/s
  Max:         91.901862 Mbits/s
  Std dev:     17.860375
  Variance:    318.993001


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 489 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-08  0:24     ` Mikael Abrahamsson
  2013-07-08 17:03       ` Toke Høiland-Jørgensen
@ 2013-07-08 20:50       ` dpreed
  2013-07-08 21:04         ` Jim Gettys
  2013-07-09  5:48         ` Mikael Abrahamsson
  1 sibling, 2 replies; 26+ messages in thread
From: dpreed @ 2013-07-08 20:50 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: cerowrt-devel

[-- Attachment #1: Type: text/plain, Size: 4148 bytes --]


I was suggesting that there is no reason to be intimidated.
 
And yes, according to the dictionary definition, they are ignorant - as in they don't know what they are talking about, and don't care to.
 
They may be influential, and they may have a great opinion of themselves.  And others may view them as "knowledgeable".   The folks who told Galileo that he was wrong were all of that.  But they remained ignorant.
 
As to being constructive, I'm not convinced that these people can be convinced that their dismissal of bufferbloat and their idea that "goodput" is a useful Internet concept are incorrect.
 
If they are curious, experimental evidence might be useful.  But have they done their own experiments to validate what they "accept as true"?   I've been told by more than 50% of practicing professional EEs that "Shannon's Law" places a limit on all radio communications capacity.  But none of these EEs can even explain the Shannon-Hartley AWGN channel capacity theorem, its derivation, or its premises and range of applicability.  They just "think they know" what it means.  And they are incredibly arrogant and dismissive, while being totally *incurious*.
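(For reference, the theorem in question gives the capacity of an AWGN channel as
C = B * log2(1 + S/N), with B the bandwidth in Hz and S/N the linear
signal-to-noise ratio; its premises are exactly the interesting part.)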
 
The same is true of most "networking professionals".  Few understand queueing theory, its range of applicability, etc. *or even exactly how TCP works*.  But that doesn't stop them from ignoring evidence, evidence that is right in front of their eyes - every day.  It took Jim Gettys' curiosity about why his home network performance *sucked* to get him to actually dig into the problem.  And yet much of the IETF still tries to claim that the problem doesn't exist!  They dismiss evidence - out of hand.
 
That's not science, it's not curiosity.  It's *dogmatism* - the opposite of science.  And those people are rarely going to change their minds.  After 45 years in advanced computing and communications, I can tell you they will probably go to their graves spouting their "old-wives-tales".
 
Spend your time on people who don't "throw things in your face".  On the people who are actually curious enough to test your claims themselves (which is quite easy for anyone who can do simple measurements).  RRUL is a nice simple test.  Let them try it!
 
 


On Sunday, July 7, 2013 8:24pm, "Mikael Abrahamsson" <swmike@swm.pp.se> said:



> On Sun, 7 Jul 2013, dpreed@reed.com wrote:
> 
> > So when somebody "throws that in your face", just confidently use the
> > words "Bullshit, show me evidence", and ignore the ignorant person who
> 
> Oh, the people that have told me this are definitely not ignorant. Quite
> the contrary.
> 
> ... and by the way, they're optimising for the case where a single TCP
> flow from a 10GE connected host is traversing a 10G based backbone, and
> they want this single TCP session to use every spare capacity the network
> has to give. Not 90% of available capcity, but 100%.
> 
> This is the kind of people that have a lot of influence and causes core
> routers to get designed with 600 ms of buffering (well, latest generation
> ones are down to 50ms buffering). We're talking billion dollar investments
> by hardware manufacturers. We're talking core routers of latest generation
> that are still being put into production as we speak.
> 
> Calling them ignorant and trying to wave them off by that kind of
> reasonsing isn't productive. Why not just implement the high RTT testing
> part and prove that you're right instead of just saying you're right?
> 
> THe bufferbloat initiative is trying to change how things are done. Burden
> of proof is here. When I participate in IETF TCP WG, they talk goodput.
> They're not talking latency and interacting well with UDP based
> interactive streams. They're optimising goodput. If we want buffers to be
> lower, we need to convince people that this doesn't hugely affect goodput.
> 
> I have not so far seen tests with FQ_CODEL with a simulated 100ms extra
> latency one-way (200ms RTT). They might be out there, but I have not seen
> them. I encourage these tests to be done.
> 
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
>

[-- Attachment #2: Type: text/html, Size: 5377 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-08 20:50       ` [Cerowrt-devel] " dpreed
@ 2013-07-08 21:04         ` Jim Gettys
  2013-07-09  5:48         ` Mikael Abrahamsson
  1 sibling, 0 replies; 26+ messages in thread
From: Jim Gettys @ 2013-07-08 21:04 UTC (permalink / raw)
  To: David P Reed; +Cc: cerowrt-devel

[-- Attachment #1: Type: text/plain, Size: 5771 bytes --]

On Mon, Jul 8, 2013 at 4:50 PM, <dpreed@reed.com> wrote:

> I was suggesting that there is no reason to be intimidated.
>
>
>
> And yes, according to the dictionary definition, they are ignorant - as in
> they don't know what they are talking about, and don't care to.
>
>
>
> They may be influential, and they may have a great opinion of themselves.
>  And others may view them as "knowledgeable".   The folks who told Galileo
> that he was wrong were all of that.  But they remained ignorant.
>
>
>
> As to being constructive, I'm not convinced that these people can be
> convinced that their dismissal of bufferbloat and their idea that "goodput"
> is a useful Internet concept are incorrect.
>
>
>
> If they are curious, experimental evidence might be useful.  But have they
> done their own experiments to validate what they "accept as true"?   I've
> been told by more than 50% of professional EE's practicing that "Shannon's
> Law" places a limit on all radio communications capacity.  But none of
> these EE's can even explain the Shannon-Hartley AWGN channel capacity
> theorem, its derivation, and its premises and range of applicability.  They
> just "think they know" what it means.  And they are incredibly arrogant and
> dismissive, while being totally *incurious*.
>
>
>
> The same is true about most "networking professionals".  Few understand
> queueing theory, its range of applicability, etc. *or even exactly how TCP
> works*.  But that doesn't stop them from ignoring evidence, evidence that
> is right in front of their eyes - every day.  It took Jim Gettys' curiosity
> of why his home network performance *sucked* to get him to actually dig
> into the problem.  And yet much of IETF still tries to claim that the
> problem doesn't exist!  They dismiss evidence - out of hand.
>

Actually, I haven't faced much disbelief in the IETF in recent memory.  The
remaining problems there are mostly around how common/severe the problem is,
the fact that buffers hide everywhere, and that people aren't yet paranoid
enough to go find them.

More common than IETF disbelief is, ironically, disbelief among the network
measurement research community, where some would like to dismiss the problem
as not common, or not severe enough to be worth bothering with.  The net
result is a number of papers with conclusions that are suspect at best and
bogus at worst. I suspect some of them are embarrassed that they overlooked
the bufferbloat problem in the data they were taking...

The other major problem I've seen (and am writing about as I compose this)
is that networking people seem to worship the 100ms number as a "given"
from "heaven", when in fact human factors and the speed of light make it
easy to demonstrate that *any* unnecessary latency hurts many/most
applications.


>
> That's not science, it's not curiosity.  It's *dogmatism* - the opposite
> of science.  And those people are rarely going to change their minds.
>  After 45 years in advanced computing and communications, I can tell you
> they will probably go to their graves spouting their "old-wives-tales".
>
>
>
> Spend your time on people who don't "throw things in your face".  On the
> people who are actually curious enough to test your claims themselves
> (which is quite easy for anyone who can do simple measurements).  RRUL is a
> nice simple test.  Let them try it!
>

Yup.  Simple tests, and simple results.  Which is why I started reporting
bufferbloat in my blog with extremely simple tests any networking person
should be able to perform themselves.  RRUL is one step above that
(though still pretty simple).
                                  - Jim


>
>
>
>
>
> On Sunday, July 7, 2013 8:24pm, "Mikael Abrahamsson" <swmike@swm.pp.se>
> said:
>
>  > On Sun, 7 Jul 2013, dpreed@reed.com wrote:
> >
> > > So when somebody "throws that in your face", just confidently use the
> > > words "Bullshit, show me evidence", and ignore the ignorant person who
> >
> > Oh, the people that have told me this are definitely not ignorant. Quite
> > the contrary.
> >
> > ... and by the way, they're optimising for the case where a single TCP
> > flow from a 10GE connected host is traversing a 10G based backbone, and
> > they want this single TCP session to use every spare capacity the network
> > has to give. Not 90% of available capcity, but 100%.
> >
> > This is the kind of people that have a lot of influence and causes core
> > routers to get designed with 600 ms of buffering (well, latest generation
> > ones are down to 50ms buffering). We're talking billion dollar
> investments
> > by hardware manufacturers. We're talking core routers of latest
> generation
> > that are still being put into production as we speak.
> >
> > Calling them ignorant and trying to wave them off by that kind of
> > reasonsing isn't productive. Why not just implement the high RTT testing
> > part and prove that you're right instead of just saying you're right?
> >
> > THe bufferbloat initiative is trying to change how things are done.
> Burden
> > of proof is here. When I participate in IETF TCP WG, they talk goodput.
> > They're not talking latency and interacting well with UDP based
> > interactive streams. They're optimising goodput. If we want buffers to be
> > lower, we need to convince people that this doesn't hugely affect
> goodput.
> >
> > I have not so far seen tests with FQ_CODEL with a simulated 100ms extra
> > latency one-way (200ms RTT). They might be out there, but I have not seen
> > them. I encourage these tests to be done.
> >
> > --
> > Mikael Abrahamsson email: swmike@swm.pp.se
> >
>
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>
>

[-- Attachment #2: Type: text/html, Size: 8384 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-08 17:03       ` Toke Høiland-Jørgensen
@ 2013-07-09  3:24         ` Dave Taht
  2013-07-09  6:04         ` Mikael Abrahamsson
  1 sibling, 0 replies; 26+ messages in thread
From: Dave Taht @ 2013-07-09  3:24 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: cerowrt-devel

On Mon, Jul 8, 2013 at 10:03 AM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> Mikael Abrahamsson <swmike@swm.pp.se> writes:
>
>> I have not so far seen tests with FQ_CODEL with a simulated 100ms
>> extra latency one-way (200ms RTT). They might be out there, but I have
>> not seen them. I encourage these tests to be done.
>
> Did a few test runs on my setup. Here are some figures (can't go higher
> than 100mbit with the hardware I have, sorry).
>
> Note that I haven't done tests at 100mbit on this setup before, so can't
> say whether something weird is going on there.

It looks to me as though one direction on the path is running at
10Mbit, and the other at 100Mbit. So I think you typoed an ethtool or
netem line....

Incidentally, I'd like to know if accidental results like that are
repeatable. I'm not a big fan of asymmetric links in the first place
(6x1 being about the worst I ever thought semi-sane), and if behavior
like this:

http://archive.tohojo.dk/bufferbloat-data/long-rtt/rrul-100mbit-pfifo_fast.png

and particularly this:

http://archive.tohojo.dk/bufferbloat-data/long-rtt/tcp_bidirectional-100mbit-pfifo_fast.png

holds up over these longer (200ms) RTT links, you are onto something.


> I'm a little bit puzzled
> as to why the flows don't seem to get going at all in one direction for
> the rrul test.

At high levels of utilization, it is certainly possible to so saturate
the queues that other flows cannot start at all...

>I'm guessing it has something to do with TSQ.

Don't think so. I have, incidentally, been tuning that way up so as to
get pre-Linux-3.6 behavior on several tests. On the other hand, the
advent of TSQ makes Linux hosts almost have a pure pull-through stack.
If UDP had the same behavior we could almost get rid of the txqueue
entirely (on hosts) and apply fq and codel techniques directly to the
highest levels of the kernel stack.

TSQ might be more effective if it was capped at (current BQL limit *
2)/(number of flows active)... this would start reducing the amount of
data that floods the tso/gso offloads at higher numbers of streams.
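
(Hypothetical numbers, just to illustrate the shape of that cap: with a BQL
limit of 128 KB and 8 active bulk flows, (128 KB * 2) / 8 = 32 KB of
outstanding data allowed per flow.)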

> Attaching graphs makes the listserv bounce my mail, so instead they're
> here: http://archive.tohojo.dk/bufferbloat-data/long-rtt/ with
> throughput data below. Overall, it looks pretty good for fq_codel I'd
> say :)

One of your results for fq_codel is impossible, as you get 11Mbit of
throughput out of a 10Mbit link.

>
> I can put up the data files as well if you'd like.
>
> -Toke
>
>
> Throughput data:
>
>
> 10mbit:
>
> rrul test (4 flows each way), pfifo_fast qdisc:
>  TCP download sum:
>   Data points: 299
>   Total:       375.443728 Mbits
>   Mean:        6.278323 Mbits/s
>   Median:      6.175466 Mbits/s
>   Min:         0.120000 Mbits/s
>   Max:         9.436373 Mbits/s
>   Std dev:     1.149514
>   Variance:    1.321382
> --
>  TCP upload sum:
>   Data points: 300
>   Total:       401.740454 Mbits
>   Mean:        6.695674 Mbits/s
>   Median:      6.637576 Mbits/s
>   Min:         2.122827 Mbits/s
>   Max:         16.892302 Mbits/s
>   Std dev:     1.758319
>   Variance:    3.091687
>
>
> rrul test (4 flows each way), fq_codel qdisc:
>  TCP download sum:
>   Data points: 301
>   Total:       492.824346 Mbits
>   Mean:        8.186451 Mbits/s
>   Median:      8.416901 Mbits/s
>   Min:         0.120000 Mbits/s
>   Max:         9.965051 Mbits/s
>   Std dev:     1.244959
>   Variance:    1.549924
> --
>  TCP upload sum:
>   Data points: 305
>   Total:       717.499994 Mbits
>   Mean:        11.762295 Mbits/s
>   Median:      8.630924 Mbits/s
>   Min:         2.513799 Mbits/s
>   Max:         323.180000 Mbits/s
>   Std dev:     31.056047
>   Variance:    964.478066
>
>
> TCP test (one flow each way), pfifo_fast qdisc:
>  TCP download:
>   Data points: 301
>   Total:       263.445418 Mbits
>   Mean:        4.376170 Mbits/s
>   Median:      4.797729 Mbits/s
>   Min:         0.030000 Mbits/s
>   Max:         5.757982 Mbits/s
>   Std dev:     1.135209
>   Variance:    1.288699
> ---
>  TCP upload:
>   Data points: 302
>   Total:       321.090853 Mbits
>   Mean:        5.316074 Mbits/s
>   Median:      5.090142 Mbits/s
>   Min:         0.641123 Mbits/s
>   Max:         24.390000 Mbits/s
>   Std dev:     2.126472
>   Variance:    4.521882
>
>
> TCP test (one flow each way), fq_codel qdisc:
>  TCP download:
>   Data points: 302
>   Total:       365.357123 Mbits
>   Mean:        6.048959 Mbits/s
>   Median:      6.550488 Mbits/s
>   Min:         0.030000 Mbits/s
>   Max:         9.090000 Mbits/s
>   Std dev:     1.316275
>   Variance:    1.732579
> ---
>  TCP upload:
>   Data points: 303
>   Total:       466.550695 Mbits
>   Mean:        7.698856 Mbits/s
>   Median:      6.144435 Mbits/s
>   Min:         0.641154 Mbits/s
>   Max:         127.690000 Mbits/s
>   Std dev:     12.075298
>   Variance:    145.812812
>
>
> 100 mbit:
>
> rrul test (4 flows each way), pfifo_fast qdisc:
>  TCP download sum:
>   Data points: 301
>   Total:       291.718140 Mbits
>   Mean:        4.845816 Mbits/s
>   Median:      4.695355 Mbits/s
>   Min:         0.120000 Mbits/s
>   Max:         10.774475 Mbits/s
>   Std dev:     1.818852
>   Variance:    3.308222
> --
>  TCP upload sum:
>   Data points: 305
>   Total:       5468.339961 Mbits
>   Mean:        89.644917 Mbits/s
>   Median:      90.731214 Mbits/s
>   Min:         2.600000 Mbits/s
>   Max:         186.362429 Mbits/s
>   Std dev:     21.782436
>   Variance:    474.474532
>
>
> rrul test (4 flows each way), fq_codel qdisc:
>  TCP download sum:
>   Data points: 304
>   Total:       427.064699 Mbits
>   Mean:        7.024090 Mbits/s
>   Median:      7.074768 Mbits/s
>   Min:         0.150000 Mbits/s
>   Max:         17.870000 Mbits/s
>   Std dev:     2.079303
>   Variance:    4.323501
> --
>  TCP upload sum:
>   Data points: 305
>   Total:       5036.774674 Mbits
>   Mean:        82.570077 Mbits/s
>   Median:      82.782532 Mbits/s
>   Min:         2.600000 Mbits/s
>   Max:         243.990000 Mbits/s
>   Std dev:     22.566052
>   Variance:    509.226709
>
>
> TCP test (one flow each way), pfifo_fast qdisc:
>  TCP download:
>   Data points: 160
>   Total:       38.477172 Mbits
>   Mean:        1.202412 Mbits/s
>   Median:      1.205256 Mbits/s
>   Min:         0.020000 Mbits/s
>   Max:         4.012585 Mbits/s
>   Std dev:     0.728299
>   Variance:    0.530419
>  TCP upload:
>   Data points: 165
>   Total:       2595.453489 Mbits
>   Mean:        78.650106 Mbits/s
>   Median:      92.387832 Mbits/s
>   Min:         0.650000 Mbits/s
>   Max:         102.610000 Mbits/s
>   Std dev:     30.432215
>   Variance:    926.119728
>
>
>
> TCP test (one flow each way), fq_codel qdisc:
>   Data points: 301
>   Total:       396.307606 Mbits
>   Mean:        6.583183 Mbits/s
>   Median:      7.786816 Mbits/s
>   Min:         0.030000 Mbits/s
>   Max:         15.270000 Mbits/s
>   Std dev:     3.034477
>   Variance:    9.208053
>  TCP upload:
>   Data points: 302
>   Total:       4238.768131 Mbits
>   Mean:        70.178280 Mbits/s
>   Median:      74.722554 Mbits/s
>   Min:         0.650000 Mbits/s
>   Max:         91.901862 Mbits/s
>   Std dev:     17.860375
>   Variance:    318.993001
>
>
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>



-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-08 20:50       ` [Cerowrt-devel] " dpreed
  2013-07-08 21:04         ` Jim Gettys
@ 2013-07-09  5:48         ` Mikael Abrahamsson
  2013-07-09  5:58           ` dpreed
  1 sibling, 1 reply; 26+ messages in thread
From: Mikael Abrahamsson @ 2013-07-09  5:48 UTC (permalink / raw)
  To: dpreed; +Cc: cerowrt-devel

On Mon, 8 Jul 2013, dpreed@reed.com wrote:

> I was suggesting that there is no reason to be intimidated.

I was not intimidated, I just lacked data to actually reply to the 
statement made.

> And yes, according to the dictionary definition, they are ignorant - as 
> in they don't know what they are talking about, and don't care to.

I object to the last part of that statement. If you're a person who has 
been involved in winning an Internet Land Speed Record you probably do care, 
but you have knowledge for a certain application and a certain purpose, 
which might not be applicable to the common type of home connection usage 
today. It doesn't mean that use case is not important or that such a person 
is opposed to solving the bufferbloat problem.

> As to being constructive, I'm not convinced that these people can be 
> convinced that their dismissal of bufferbloat and their idea that 
> "goodput" is a useful Internet concept are incorrect.

I haven't heard any dismissal of the problem, only that they optimize for 
a different use case, and they're concerned that their use case will 
suffer if buffers are smaller. This is the reason I want data: if 
FQ_CODEL gets similar results, then their use case is not hugely negatively 
affected, and since there is data showing it helps a lot for many other use 
cases, they shouldn't have much to worry about and can stop arguing.

Thinking of Galileo, he didn't walk around saying "the earth revolves 
around the sun" and when people questioned him, he said "check it out for 
yourself, prove your point, I don't need to prove mine!", right?

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-09  5:48         ` Mikael Abrahamsson
@ 2013-07-09  5:58           ` dpreed
  0 siblings, 0 replies; 26+ messages in thread
From: dpreed @ 2013-07-09  5:58 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: cerowrt-devel

[-- Attachment #1: Type: text/plain, Size: 2915 bytes --]


Regarding Galileo, I think he did not bother trying to convince his enemies (who wanted to burn him at the stake, but had to turn to one of his followers to carry out that revenge).  I think he devoted time to explaining his ideas to people who were interested in learning about them.
 
He wrote a book making his scientific case, and did not spend (waste) time trying to figure out how to "convert" those that were trying to get the Pope to do him in, by using logic.
 
I don't think it is useful to try to convince James Inhofe that global warming has a scientific basis.  He is convinced that it is a "fraud perpetrated by scientists", and *nothing* will change his mind.  If anything, trying to convince him makes him appear to be important beyond his importance in the matter.
 
And yes, the folks who set "Internet Land Speed Records" are just as important to the Internet as people who drive Indycars (a fun thing to watch) are to automobile engineering.
 
I respect their extremely narrow talents, but not necessarily their wisdom outside their narrow field.
 


On Tuesday, July 9, 2013 1:48am, "Mikael Abrahamsson" <swmike@swm.pp.se> said:



> On Mon, 8 Jul 2013, dpreed@reed.com wrote:
> 
> > I was suggesting that there is no reason to be intimidated.
> 
> I was not intimidated, I just lacked data to actually reply to the
> statement made.
> 
> > And yes, according to the dictionary definition, they are ignorant - as
> > in they don't know what they are talking about, and don't care to.
> 
> I object to the last part of the statement. If you're a person who has
> been involved in winning an Internet Land Speed Record you probably care,
> but you're have knowledge for a certain application and a certain purpose,
> which might not be applicable to the common type of home connection usage
> today. It doesn't mean the use case is not important or that person is
> opposing solving bufferbloat problem.
> 
> > As to being constructive, I'm not convinced that these people can be
> > convinced that their dismissal of bufferbloat and their idea that
> > "goodput" is a useful Internet concept are incorrect.
> 
> I haven't heard any dismissal of the problem, only that they optimize for
> a different use case, and they're concerned that their use case will
> suffer if buffers are smaller. This is the reason I want data because if
> FQ_CODEL gets similar results then their use case is not hugely negatively
> affected, and there is data showing it helps a lot for a lot of other use
> cases, then they shouldn't have much to worry about and can stop arguing.
> 
> Thinking of Galileo, he didn't walk around saying "the earth revolves
> around the sun" and when people questioned him, he said "check it out for
> yourself, prove your point, I don't need to prove mine!", right?
> 
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
>

[-- Attachment #2: Type: text/html, Size: 3804 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-08 17:03       ` Toke Høiland-Jørgensen
  2013-07-09  3:24         ` Dave Taht
@ 2013-07-09  6:04         ` Mikael Abrahamsson
  2013-07-09  6:32           ` Dave Taht
  2013-07-09  7:57           ` [Cerowrt-devel] " Toke Høiland-Jørgensen
  1 sibling, 2 replies; 26+ messages in thread
From: Mikael Abrahamsson @ 2013-07-09  6:04 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: cerowrt-devel

[-- Attachment #1: Type: TEXT/PLAIN, Size: 1118 bytes --]

On Mon, 8 Jul 2013, Toke Høiland-Jørgensen wrote:

> Did a few test runs on my setup. Here are some figures (can't go higher
> than 100mbit with the hardware I have, sorry).

Thanks, much appreciated!

> Note that I haven't done tests at 100mbit on this setup before, so can't
> say whether something weird is going on there. I'm a little bit puzzled
> as to why the flows don't seem to get going at all in one direction for
> the rrul test. I'm guessing it has something to do with TSQ.

For me, it shows that FQ_CODEL indeed affects TCP performance negatively 
for long links; however, it looks like the impact is only about 20-30%.

What's stranger is that latency only goes up to around 230ms from its 
200ms "floor" with FIFO; I had expected a bigger increase in buffering 
with FIFO. Have you done any TCP tuning?

Would it be easy for you to do tests with the streams that "load up the 
link" having 200ms RTT, and the realtime flows only having 30-40ms RTT, 
simulating downloads from a high-RTT server while doing interactive things 
against a more local web server?

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-09  6:04         ` Mikael Abrahamsson
@ 2013-07-09  6:32           ` Dave Taht
  2013-07-09  7:30             ` [Cerowrt-devel] [Codel] " Andrew McGregor
  2013-07-09 13:09             ` Eric Dumazet
  2013-07-09  7:57           ` [Cerowrt-devel] " Toke Høiland-Jørgensen
  1 sibling, 2 replies; 26+ messages in thread
From: Dave Taht @ 2013-07-09  6:32 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: Toke Høiland-Jørgensen, codel, cerowrt-devel

this really, really, really is the wrong list for this dialog. cc-ing codel

On Mon, Jul 8, 2013 at 11:04 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Mon, 8 Jul 2013, Toke Høiland-Jørgensen wrote:
>
>> Did a few test runs on my setup. Here are some figures (can't go higher
>> than 100mbit with the hardware I have, sorry).
>
>
> Thanks, much appreciated!
>
>
>> Note that I haven't done tests at 100mbit on this setup before, so can't
>> say whether something weird is going on there. I'm a little bit puzzled
>> as to why the flows don't seem to get going at all in one direction for
>> the rrul test. I'm guessing it has something to do with TSQ.
>
>
> For me, it shows that FQ_CODEL indeed affects TCP performance negatively for
> long links, however it looks like the impact is only about 20-30%.

I would be extremely reluctant to draw any conclusions from any test
derived from netem's results at this point. (netem is a qdisc that can
insert delay and loss into a stream) I did a lot of netem testing in
the beginning of the bufferbloat effort and the results differed so
much from what I'd got in the "real world" that I gave up and stuck
with the real world for most of the past couple of years. There were, in
particular, major problems with combining netem with any other
qdisc...

https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel

One of the simplest problems with netem is that by default it delays
all packets, including things like arp and nd, which are kind of
needed in ethernet...
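
One way around that (a sketch only; interface name and delay are
illustrative): hang netem off one band of a prio qdisc and steer only IP
traffic into it, so ARP falls through to the undelayed default bands:

  tc qdisc add dev eth0 root handle 1: prio
  tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 100ms
  tc filter add dev eth0 parent 1: protocol ip u32 match u32 0 0 flowid 1:3
  # add a matching 'protocol ipv6' filter to delay IPv6 too, though that
  # would also delay ND unless ICMPv6 is exempted separately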

That said, now that more problems are understood, toke and I, and
maybe matt mathis are trying to take it on...

The simulated results with ns2 codel were very good in the range
2-300ms, but that's not the version of codel in linux. It worked well
up to about 1sec, actually, but fell off afterwards. I have a set of
more ns2-like patches for the ns2 model in cerowrt and as part of my
3.10 builds that I should release as a deb soon.

Recently a few major bugs in htb have come to light and been fixed in
the 3.10 series.

There have also been so many changes to the TCP stack that I'd
distrust comparing tcp results across kernel versions. The
TSQ addition is not well understood, and I think, but am not sure,
it's both too big for low bandwidths and not big enough for larger
ones...

and... unlike in the past where tcp was being optimized for
supercomputer center to supercomputer center, the vast majority of tcp
related work is now coming out of google, who are optimizing for short
transfers over short rtts.

It would be nice to have access to internet2 for more real world testing.

>
> What's stranger is that latency only goes up to around 230ms from its 200ms
> "floor" with FIFO, I had expected a bigger increase in buffering with FIFO.

TSQ, here, probably.

> Have you done any TCP tuning?

Not recently, aside from turning up tsq to higher defaults and lower
defaults without definitive results.

> Would it be easy for you to do tests with the streams that "loads up the
> link" being 200ms RTT, and the realtime flows only having 30-40ms RTT,
> simulating downloads from a high RTT server and doing interactive things to
> a more local web server.

It would be a useful workload. Higher on my list is emulating
cablelab's latest tests, which is about the same thing only closer
statistically to what a real web page might look like - except
cablelabs tests don't have the redirects or dns lookups most web pages
do.


>
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
>
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>



-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] [Codel]  happy 4th!
  2013-07-09  6:32           ` Dave Taht
@ 2013-07-09  7:30             ` Andrew McGregor
  2013-07-09 13:09             ` Eric Dumazet
  1 sibling, 0 replies; 26+ messages in thread
From: Andrew McGregor @ 2013-07-09  7:30 UTC (permalink / raw)
  To: Dave Taht; +Cc: Toke Høiland-Jørgensen, codel, cerowrt-devel

[-- Attachment #1: Type: text/plain, Size: 4980 bytes --]

Possibly a better simulation environment than netem would be ns-3's NSC
(network simulation cradle), which lets you connect up multiple VMs over an
emulated network in userspace... obviously, you'd better have a multicore
system with plenty of resources available, but it works very nicely and
needs no physical network at all.  ns-3 virtual network nodes also speak
real protocols, so you can talk to them with real tools as well (netcat to
a ns-3 virtual node, for example, or ping them).  I suppose it would be
possible also to bridge one of the TAP devices ns-3 is talking on with a
real interface.


On Tue, Jul 9, 2013 at 4:32 PM, Dave Taht <dave.taht@gmail.com> wrote:

> this really, really, really is the wrong list for this dialog. cc-ing codel
>
> On Mon, Jul 8, 2013 at 11:04 PM, Mikael Abrahamsson <swmike@swm.pp.se>
> wrote:
> > On Mon, 8 Jul 2013, Toke Høiland-Jørgensen wrote:
> >
> >> Did a few test runs on my setup. Here are some figures (can't go higher
> >> than 100mbit with the hardware I have, sorry).
> >
> >
> > Thanks, much appreciated!
> >
> >
> >> Note that I haven't done tests at 100mbit on this setup before, so can't
> >> say whether something weird is going on there. I'm a little bit puzzled
> >> as to why the flows don't seem to get going at all in one direction for
> >> the rrul test. I'm guessing it has something to do with TSQ.
> >
> >
> > For me, it shows that FQ_CODEL indeed affects TCP performance negatively
> for
> > long links, however it looks like the impact is only about 20-30%.
>
> I would be extremely reluctant to draw any conclusions from any test
> derived from netem's results at this point. (netem is a qdisc that can
> insert delay and loss into a stream) I did a lot of netem testing in
> the beginning of the bufferbloat effort and the results differed so
> much from what I'd got in the "real world" that I gave up and stuck
> with the real world for most of the past couple years. There were in
> particular, major problems with combining netem with any other
> qdisc...
>
>
> https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel
>
> One of the simplest problems with netem is that by default it delays
> all packets, including things like arp and nd, which are kind of
> needed in ethernet...
>
> That said, now that more problems are understood, toke and I, and
> maybe matt mathis are trying to take it on...
>
> The simulated results with ns2 codel were very good in the range
> 2-300ms, but that's not the version of codel in linux. It worked well
> up to about 1sec, actually, but fell off afterwards. I have a set of
> more ns2-like patches for the ns2 model in cerowrt and as part of my
> 3.10 builds that I should release as a deb soon.
>
> Recently a few major bugs in htb have come to light and been fixed in
> the 3.10 series.
>
> There have also been so many changes to the TCP stack that I'd
> distrust comparing tcp results between any given kernel version. The
> TSQ addition is not well understood, and I think, but am not sure,
> it's both too big for low bandwidths and not big enough for larger
> ones...
>
> and... unlike in the past where tcp was being optimized for
> supercomputer center to supercomputer center, the vast majority of tcp
> related work is now coming out of google, who are optimizing for short
> transfers over short rtts.
>
> It would be nice to have access to internet2 for more real world testing.
>
> >
> > What's stranger is that latency only goes up to around 230ms from its
> 200ms
> > "floor" with FIFO, I had expected a bigger increase in buffering with
> FIFO.
>
> TSQ, here, probably.
>
> > Have you done any TCP tuning?
>
> Not recently, aside from turning up tsq to higher defaults and lower
> defaults without definitive results.
>
> > Would it be easy for you to do tests with the streams that "loads up the
> > link" being 200ms RTT, and the realtime flows only having 30-40ms RTT,
> > simulating downloads from a high RTT server and doing interactive things
> to
> > a more local web server.
>
> It would be a useful workload. Higher on my list is emulating
> cablelab's latest tests, which is about the same thing only closer
> statistically to what a real web page might look like - except
> cablelabs tests don't have the redirects or dns lookups most web pages
> do.
>
>
> >
> >
> > --
> > Mikael Abrahamsson    email: swmike@swm.pp.se
> >
> > _______________________________________________
> > Cerowrt-devel mailing list
> > Cerowrt-devel@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cerowrt-devel
> >
>
>
>
> --
> Dave Täht
>
> Fixing bufferbloat with cerowrt:
> http://www.teklibre.com/cerowrt/subscribe.html
> _______________________________________________
> Codel mailing list
> Codel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/codel
>

[-- Attachment #2: Type: text/html, Size: 6300 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] happy 4th!
  2013-07-09  6:04         ` Mikael Abrahamsson
  2013-07-09  6:32           ` Dave Taht
@ 2013-07-09  7:57           ` Toke Høiland-Jørgensen
  2013-07-09 12:56             ` [Cerowrt-devel] [Codel] " Eric Dumazet
  1 sibling, 1 reply; 26+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-07-09  7:57 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: codel, cerowrt-devel

[-- Attachment #1: Type: text/plain, Size: 1245 bytes --]

Mikael Abrahamsson <swmike@swm.pp.se> writes:

> For me, it shows that FQ_CODEL indeed affects TCP performance
> negatively for long links, however it looks like the impact is only
> about 20-30%.

As far as I can tell, fq_codel's throughput is about 10% lower on
100mbit in one direction, while being higher in the other. For 10mbit
fq_codel shows higher throughput throughout?

> What's stranger is that latency only goes up to around 230ms from its
> 200ms "floor" with FIFO, I had expected a bigger increase in buffering
> with FIFO. Have you done any TCP tuning?

Not apart from what's in mainline (3.9.9 kernel). The latency-inducing
box is after the bottleneck, though, so perhaps it has something to do
with that? Some interaction between netem and the ethernet link?

> Would it be easy for you to do tests with the streams that "loads up
> the link" being 200ms RTT, and the realtime flows only having 30-40ms
> RTT, simulating downloads from a high RTT server and doing interactive
> things to a more local web server.

Not on my current setup, sorry. Also, I only did these tests because I
happened to be at my lab anyway yesterday. Not going back again for a
while, so further tests are out for the time being, I'm afraid...

-Toke

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 489 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] [Codel] happy 4th!
  2013-07-09  7:57           ` [Cerowrt-devel] " Toke Høiland-Jørgensen
@ 2013-07-09 12:56             ` Eric Dumazet
  2013-07-09 13:13               ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 26+ messages in thread
From: Eric Dumazet @ 2013-07-09 12:56 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: codel, cerowrt-devel

On Tue, 2013-07-09 at 09:57 +0200, Toke Høiland-Jørgensen wrote:
> Mikael Abrahamsson <swmike@swm.pp.se> writes:
> 
> > For me, it shows that FQ_CODEL indeed affects TCP performance
> > negatively for long links; however, it looks like the impact is only
> > about 20-30%.
> 
> As far as I can tell, fq_codel's throughput is about 10% lower on
> 100mbit in one direction, while being higher in the other. For 10mbit
> fq_codel shows higher throughput throughout?

What do you mean? This makes little sense to me.

> 
> > What's stranger is that latency only goes up to around 230ms from its
> > 200ms "floor" with FIFO, I had expected a bigger increase in buffering
> > with FIFO. Have you done any TCP tuning?
> 
> Not apart from what's in mainline (3.9.9 kernel). The latency-inducing
> box is after the bottleneck, though, so perhaps it has something to do
> with that? Some interaction between netem and the ethernet link?

I did not receive a copy of your setup, so it's hard to tell. But using
netem correctly is tricky.

My current testbed uses the following script, meant to exercise TCP
flows with a random RTT between 49.9 and 50.1 ms, to check how the TCP
stack reacts to reordering. (The answer is: pretty badly.)

Note that using this setup forced me to send two netem patches,
currently in net-next, because otherwise netem used too many CPU cycles
on its own.

http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=aec0a40a6f78843c0ce73f7398230ee5184f896d

Followed by a fix :

http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=36b7bfe09b6deb71bf387852465245783c9a6208

The script :

# netem based setup, installed at receiver side only
ETH=eth4
IFB=ifb0
EST="est 1sec 4sec" # Optional rate estimator

modprobe ifb            # ifb provides a virtual device we can redirect ingress traffic to
ip link set dev $IFB up

tc qdisc add dev $ETH ingress 2>/dev/null

# redirect all ingress IP traffic to the ifb device so a root qdisc (netem) can shape it
tc filter add dev $ETH parent ffff: \
   protocol ip u32 match u32 0 0 flowid 1:1 action mirred egress \
   redirect dev $IFB

ethtool -K $ETH gro off lro off   # disable GRO/LRO so netem sees individual packets

tc qdisc del dev $IFB root 2>/dev/null
# Use netem at ingress to delay packets by 25 ms +/- 100us (to get reorders)
tc qdisc add dev $IFB root $EST netem limit 100000 delay 25ms 100us # loss 0.1 

tc qdisc del dev $ETH root 2>/dev/null
# Use netem at egress to delay packets by 25 ms (no reorders)
tc qdisc add dev $ETH root $EST netem delay 25ms limit 100000

And the results for a single tcp flow :

lpq84:~# ./netperf -H 10.7.7.83 -l 10
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.7.7.83 () port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.20      37.60   
lpq84:~# ./netperf -H 10.7.7.83 -l 10
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.7.7.83 () port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.06     116.94   


See the rates at the receiver side (check whether packets were dropped
because of too-low qdisc limits, and verify the rates):

lpq83:~# tc -s -d qd
qdisc netem 800e: dev eth4 root refcnt 257 limit 100000 delay 25.0ms
 Sent 10791616 bytes 115916 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 7701Kbit 10318pps backlog 47470b 509p requeues 0 
qdisc ingress ffff: dev eth4 parent ffff:fff1 ---------------- 
 Sent 8867475174 bytes 5914081 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc netem 800d: dev ifb0 root refcnt 2 limit 100000 delay 25.0ms  99us
 Sent 176209244 bytes 116430 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 123481Kbit 10198pps backlog 0b 0p requeues 0 



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] [Codel]  happy 4th!
  2013-07-09  6:32           ` Dave Taht
  2013-07-09  7:30             ` [Cerowrt-devel] [Codel] " Andrew McGregor
@ 2013-07-09 13:09             ` Eric Dumazet
  1 sibling, 0 replies; 26+ messages in thread
From: Eric Dumazet @ 2013-07-09 13:09 UTC (permalink / raw)
  To: Dave Taht; +Cc: Toke Høiland-Jørgensen, codel, cerowrt-devel

On Mon, 2013-07-08 at 23:32 -0700, Dave Taht wrote:

> and... unlike in the past, when TCP was being optimized for
> supercomputer-center-to-supercomputer-center transfers, the vast
> majority of TCP-related work is now coming out of Google, which is
> optimizing for short transfers over short RTTs.

That's not really true; we work on many issues, including long transfers
and long RTTs.

Beware of tools that reproduce latencies, reordering and drops, because
they often add unexpected bugs. One has to be extra careful and check
tcpdumps, or things like that, to double-check that the tools are not
buggy.




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] [Codel] happy 4th!
  2013-07-09 12:56             ` [Cerowrt-devel] [Codel] " Eric Dumazet
@ 2013-07-09 13:13               ` Toke Høiland-Jørgensen
  2013-07-09 13:23                 ` Eric Dumazet
  2013-07-09 13:36                 ` Eric Dumazet
  0 siblings, 2 replies; 26+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-07-09 13:13 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: codel, cerowrt-devel

[-- Attachment #1: Type: text/plain, Size: 995 bytes --]

Eric Dumazet <eric.dumazet@gmail.com> writes:

> What do you mean? This makes little sense to me.

The data from my previous post
(http://archive.tohojo.dk/bufferbloat-data/long-rtt/throughput.txt)
shows fq_codel achieving higher aggregate throughput in some cases than
pfifo_fast does.

> I did not receive a copy of your setup, so it's hard to tell. But
> using netem correctly is tricky.

The setup is this:

Client <--100mbit--> Gateway <--10mbit--> netem box <--10mbit--> Server

The netem box adds 100ms of latency to each of its interfaces (with no
other qdisc applied). The gateway and the server both have their Ethernet
speed negotiation set to 10mbit or 100mbit (respectively for each of the
tests) on the interfaces facing the netem box.
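
Roughly what that amounts to on the netem box, as a sketch (eth0/eth1 are
assumed interface names):

tc qdisc add dev eth0 root netem delay 100ms    # toward the gateway/client
tc qdisc add dev eth1 root netem delay 100ms    # toward the server

netem's default limit is 1000 packets; at 100mbit and 100ms one way that is
only just enough (~830 full-size packets in flight), so it is worth checking
"tc -s qdisc" for drops, as Eric asks below.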

> My current testbed uses the following script, meant to exercise TCP
> flows with a random RTT between 49.9 and 50.1 ms, to check how the TCP
> stack reacts to reordering. (The answer is: pretty badly.)

Doesn't netem have an option to simulate reordering?

-Toke

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 489 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] [Codel] happy 4th!
  2013-07-09 13:13               ` Toke Høiland-Jørgensen
@ 2013-07-09 13:23                 ` Eric Dumazet
  2013-07-09 13:25                   ` Toke Høiland-Jørgensen
  2013-07-09 13:36                 ` Eric Dumazet
  1 sibling, 1 reply; 26+ messages in thread
From: Eric Dumazet @ 2013-07-09 13:23 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: codel, cerowrt-devel

On Tue, 2013-07-09 at 15:13 +0200, Toke Høiland-Jørgensen wrote:

> 
> Doesn't netem have an option to simulate reordering?

It's really too basic for my needs.

It decides to put the new packet at the front of the transmit queue.

If you use netem to add a delay, then adding reordering is only a matter
of using a variable/randomized delay.
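
For comparison, a sketch of netem's built-in reordering (eth0 and the
percentages are illustrative):

tc qdisc add dev eth0 root netem delay 10ms reorder 25% 50%

Here 25% of packets (with 50% correlation) skip the delay and are sent
immediately, i.e. jump to the front of the queue, while the rest are delayed
by 10ms; a jittered delay such as "delay 25ms 100us" instead lets packets
overtake each other more gradually.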




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] [Codel] happy 4th!
  2013-07-09 13:23                 ` Eric Dumazet
@ 2013-07-09 13:25                   ` Toke Høiland-Jørgensen
  0 siblings, 0 replies; 26+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-07-09 13:25 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: codel, cerowrt-devel

[-- Attachment #1: Type: text/plain, Size: 396 bytes --]

Eric Dumazet <eric.dumazet@gmail.com> writes:

> It's really too basic for my needs.
>
> It decides to put the new packet at the front of the transmit queue.

Right, I see.

> If you use netem to add a delay, then adding reordering is only a
> matter of using a variable/randomized delay.

Yeah, realised that; was just wondering why you found the built-in
reordering mechanism insufficient. :)

-Toke

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 489 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] [Codel] happy 4th!
  2013-07-09 13:13               ` Toke Høiland-Jørgensen
  2013-07-09 13:23                 ` Eric Dumazet
@ 2013-07-09 13:36                 ` Eric Dumazet
  2013-07-09 13:45                   ` Toke Høiland-Jørgensen
  1 sibling, 1 reply; 26+ messages in thread
From: Eric Dumazet @ 2013-07-09 13:36 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: codel, cerowrt-devel

On Tue, 2013-07-09 at 15:13 +0200, Toke Høiland-Jørgensen wrote:
> Eric Dumazet <eric.dumazet@gmail.com> writes:
> 
> > What do you mean? This makes little sense to me.
> 
> The data from my previous post
> (http://archive.tohojo.dk/bufferbloat-data/long-rtt/throughput.txt)
> shows fq_codel achieving higher aggregate throughput in some cases than
> pfifo_fast does.
> 
> > I did not receive a copy of your setup, so it's hard to tell. But
> > using netem correctly is tricky.
> 
> The setup is this:
> 
> Client <--100mbit--> Gateway <--10mbit--> netem box <--10mbit--> Server
> 
> The netem box adds 100ms of latency to each of its interfaces (with no
> other qdisc applied).

OK, that's a total of 200 ms RTT. It's a pretty high value :(
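
(For reference, the bandwidth-delay product at these settings: 10 Mbit/s x
0.2 s = 2 Mbit, i.e. roughly 250 KB or about 165 full-size packets in flight
just to keep the pipe full.)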

Could you send "tc -s qdisc" taken at netem box ?




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] [Codel] happy 4th!
  2013-07-09 13:36                 ` Eric Dumazet
@ 2013-07-09 13:45                   ` Toke Høiland-Jørgensen
  2013-07-09 13:49                     ` Eric Dumazet
  0 siblings, 1 reply; 26+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-07-09 13:45 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: codel, cerowrt-devel

[-- Attachment #1: Type: text/plain, Size: 574 bytes --]

Eric Dumazet <eric.dumazet@gmail.com> writes:

> OK, that's a total of 200 ms RTT. It's a pretty high value :(

Yeah, that was the point; Mikael requested such a test be run, and I
happened to be near my lab setup yesterday, so I thought I'd run it.

> Could you send "tc -s qdisc" taken at netem box ?

Not really, no; sorry. Shut the whole thing down and I'm going on
holiday tomorrow, so won't have a chance to go back for at least a
couple of weeks. Will keep it in mind for the next time I get there;
anything else I should make sure to collect while I'm at it? :)


-Toke

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 489 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] [Codel] happy 4th!
  2013-07-09 13:45                   ` Toke Høiland-Jørgensen
@ 2013-07-09 13:49                     ` Eric Dumazet
  2013-07-09 13:53                       ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 26+ messages in thread
From: Eric Dumazet @ 2013-07-09 13:49 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: codel, cerowrt-devel

On Tue, 2013-07-09 at 15:45 +0200, Toke Høiland-Jørgensen wrote:
> Eric Dumazet <eric.dumazet@gmail.com> writes:
> 
> > OK, that's a total of 200 ms RTT. It's a pretty high value :(
> 
> Yeah, that was the point; Mikael requested such a test be run, and I
> happened to be near my lab setup yesterday, so I thought I'd run it.
> 
> > Could you send "tc -s qdisc" taken at netem box ?
> 
> Not really, no; sorry. Shut the whole thing down and I'm going on
> holiday tomorrow, so won't have a chance to go back for at least a
> couple of weeks. Will keep it in mind for the next time I get there;
> anything else I should make sure to collect while I'm at it? :)
> 

It would be nice if the rrul results could include an nstat snapshot

nstat >/dev/null ; rrul_tests ; nstat
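
Expanded slightly into a sketch (rrul_tests stands in for however the rrul
run is actually invoked on the client; the output file name is illustrative):

nstat > /dev/null                 # reset nstat's baseline of TCP/IP counters
rrul_tests                        # run the rrul test(s)
nstat | tee nstat-after-rrul.txt  # counter deltas accumulated during the run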




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] [Codel] happy 4th!
  2013-07-09 13:49                     ` Eric Dumazet
@ 2013-07-09 13:53                       ` Toke Høiland-Jørgensen
  2013-07-09 14:07                         ` Eric Dumazet
  0 siblings, 1 reply; 26+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-07-09 13:53 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: codel, cerowrt-devel

[-- Attachment #1: Type: text/plain, Size: 231 bytes --]

Eric Dumazet <eric.dumazet@gmail.com> writes:

> It would be nice if the rrul results could include an nstat snapshot
>
> nstat >/dev/null ; rrul_tests ; nstat

Sure, can do. Is that from the client machine or the netem box?

-Toke

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 489 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Cerowrt-devel] [Codel] happy 4th!
  2013-07-09 13:53                       ` Toke Høiland-Jørgensen
@ 2013-07-09 14:07                         ` Eric Dumazet
  0 siblings, 0 replies; 26+ messages in thread
From: Eric Dumazet @ 2013-07-09 14:07 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: codel, cerowrt-devel

On Tue, 2013-07-09 at 15:53 +0200, Toke Høiland-Jørgensen wrote:
> Eric Dumazet <eric.dumazet@gmail.com> writes:
> 
> > It would be nice if the rrul results could include an nstat snapshot
> >
> > nstat >/dev/null ; rrul_tests ; nstat
> 
> Sure, can do. Is that from the client machine or the netem box?

Client machine, as I am interested in TCP metrics ;)



^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2013-07-09 14:07 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-07-03 19:33 [Cerowrt-devel] happy 4th! Dave Taht
2013-07-04  5:57 ` Mikael Abrahamsson
2013-07-04 13:51   ` Michael Richardson
2013-07-04 15:48     ` Mikael Abrahamsson
2013-07-07 18:52   ` dpreed
2013-07-08  0:24     ` Mikael Abrahamsson
2013-07-08 17:03       ` Toke Høiland-Jørgensen
2013-07-09  3:24         ` Dave Taht
2013-07-09  6:04         ` Mikael Abrahamsson
2013-07-09  6:32           ` Dave Taht
2013-07-09  7:30             ` [Cerowrt-devel] [Codel] " Andrew McGregor
2013-07-09 13:09             ` Eric Dumazet
2013-07-09  7:57           ` [Cerowrt-devel] " Toke Høiland-Jørgensen
2013-07-09 12:56             ` [Cerowrt-devel] [Codel] " Eric Dumazet
2013-07-09 13:13               ` Toke Høiland-Jørgensen
2013-07-09 13:23                 ` Eric Dumazet
2013-07-09 13:25                   ` Toke Høiland-Jørgensen
2013-07-09 13:36                 ` Eric Dumazet
2013-07-09 13:45                   ` Toke Høiland-Jørgensen
2013-07-09 13:49                     ` Eric Dumazet
2013-07-09 13:53                       ` Toke Høiland-Jørgensen
2013-07-09 14:07                         ` Eric Dumazet
2013-07-08 20:50       ` [Cerowrt-devel] " dpreed
2013-07-08 21:04         ` Jim Gettys
2013-07-09  5:48         ` Mikael Abrahamsson
2013-07-09  5:58           ` dpreed

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox