* Re: [Codel] [Cerowrt-devel] happy 4th!
[not found] ` <alpine.DEB.2.00.1307090749480.8891@uplift.swm.pp.se>
@ 2013-07-09 6:32 ` Dave Taht
2013-07-09 7:30 ` Andrew McGregor
2013-07-09 13:09 ` Eric Dumazet
2013-07-09 7:57 ` [Codel] " Toke Høiland-Jørgensen
1 sibling, 2 replies; 13+ messages in thread
From: Dave Taht @ 2013-07-09 6:32 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: Toke Høiland-Jørgensen, codel, cerowrt-devel
this really, really, really is the wrong list for this dialog. cc-ing codel
On Mon, Jul 8, 2013 at 11:04 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Mon, 8 Jul 2013, Toke Høiland-Jørgensen wrote:
>
>> Did a few test runs on my setup. Here are some figures (can't go higher
>> than 100mbit with the hardware I have, sorry).
>
>
> Thanks, much appreciated!
>
>
>> Note that I haven't done tests at 100mbit on this setup before, so can't
>> say whether something weird is going on there. I'm a little bit puzzled
>> as to why the flows don't seem to get going at all in one direction for
>> the rrul test. I'm guessing it has something to do with TSQ.
>
>
> For me, it shows that FQ_CODEL indeed affects TCP performance negatively for
> long links, however it looks like the impact is only about 20-30%.
I would be extremely reluctant to draw any conclusions from any
netem-derived test at this point. (netem is a qdisc that can insert
delay and loss into a stream.) I did a lot of netem testing at the
beginning of the bufferbloat effort, and the results differed so much
from what I got in the "real world" that I gave up and stuck with the
real world for most of the past couple of years. There were, in
particular, major problems with combining netem with any other
qdisc...
https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel
One of the simplest problems with netem is that by default it delays
all packets, including things like ARP and ND, which are kind of
needed on Ethernet...
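A common workaround, sketched here from the usual netem recipes (the device name and delay are placeholders, and this exact incantation is untested), is to hang netem under one band of a prio qdisc and steer only IP traffic into it, so ARP bypasses the delay:

```shell
# Hang netem under band 3 of a prio qdisc and steer only IPv4 into it,
# so ARP (and anything else not matched) bypasses the delay entirely.
tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 100ms
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match u32 0 0 flowid 1:3
# IPv6 would need its own filter - minus ICMPv6, which carries ND -
# to get the same effect.
```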
That said, now that more of the problems are understood, Toke and I,
and maybe Matt Mathis, are trying to take it on...
The simulated results with the ns2 CoDel were very good in the
2-300ms range, but that's not the version of CoDel in Linux. It
actually worked well up to about 1 second, but fell off after that. I
have a set of more ns2-like patches for the CoDel implementation in
cerowrt and in my 3.10 builds that I should release as a deb soon.
Recently, a few major bugs in HTB have come to light and been fixed in
the 3.10 series.
There have also been so many changes to the TCP stack that I'd
distrust comparing TCP results between any two kernel versions. The
TSQ addition is not well understood, and I think, but am not sure,
that it's both too big for low bandwidths and not big enough for
higher ones...
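For anyone who wants to experiment: TSQ's per-socket write limit is exposed as a sysctl on 3.6+ kernels; the doubled value below is purely illustrative, not a recommendation.

```shell
# Inspect the current TSQ per-socket write limit
# (131072 bytes by default on these kernels).
sysctl net.ipv4.tcp_limit_output_bytes
# Try a larger limit for a high-BDP test run.
sysctl -w net.ipv4.tcp_limit_output_bytes=262144
```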
And, unlike in the past, when TCP was being optimized for
supercomputer-center-to-supercomputer-center transfers, the vast
majority of TCP-related work is now coming out of Google, which is
optimizing for short transfers over short RTTs.
It would be nice to have access to Internet2 for more real-world testing.
>
> What's stranger is that latency only goes up to around 230ms from its 200ms
> "floor" with FIFO, I had expected a bigger increase in buffering with FIFO.
TSQ, here, probably.
> Have you done any TCP tuning?
Not recently, aside from turning TSQ up to higher defaults and down to
lower defaults, without definitive results.
> Would it be easy for you to do tests with the streams that "loads up the
> link" being 200ms RTT, and the realtime flows only having 30-40ms RTT,
> simulating downloads from a high RTT server and doing interactive things to
> a more local web server.
It would be a useful workload. Higher on my list is emulating
CableLabs' latest tests, which are about the same thing, only
statistically closer to what a real web page might look like - except
the CableLabs tests don't have the redirects or DNS lookups most web
pages do.
>
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
>
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [Codel] [Cerowrt-devel] happy 4th!
2013-07-09 6:32 ` [Codel] [Cerowrt-devel] happy 4th! Dave Taht
@ 2013-07-09 7:30 ` Andrew McGregor
2013-07-09 13:09 ` Eric Dumazet
1 sibling, 0 replies; 13+ messages in thread
From: Andrew McGregor @ 2013-07-09 7:30 UTC (permalink / raw)
To: Dave Taht
Cc: Toke Høiland-Jørgensen, codel, cerowrt-devel,
Mikael Abrahamsson
A possibly better simulation environment than netem would be ns-3's NSC
(Network Simulation Cradle), which lets you connect multiple VMs over an
emulated network in userspace... obviously you'd better have a multicore
system with plenty of resources available, but it works very nicely and
needs no physical network at all. ns-3 virtual network nodes also speak
real protocols, so you can talk to them with real tools as well (netcat
to an ns-3 virtual node, for example, or ping them). I suppose it would
also be possible to bridge one of the TAP devices ns-3 is talking on
with a real interface.
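For that last point, a rough sketch with the standard bridge tools (assuming a tap named tap0 for ns-3 to attach to and a NIC named eth0; untested here):

```shell
# Create a tap device for ns-3 to attach to, then bridge it with the
# real NIC so external hosts can exchange packets with simulated nodes.
ip tuntap add dev tap0 mode tap
brctl addbr br0
brctl addif br0 tap0
brctl addif br0 eth0
ip link set br0 up
ip link set tap0 up
```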
* Re: [Codel] happy 4th!
[not found] ` <alpine.DEB.2.00.1307090749480.8891@uplift.swm.pp.se>
2013-07-09 6:32 ` [Codel] [Cerowrt-devel] happy 4th! Dave Taht
@ 2013-07-09 7:57 ` Toke Høiland-Jørgensen
2013-07-09 12:56 ` Eric Dumazet
1 sibling, 1 reply; 13+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-07-09 7:57 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: codel, cerowrt-devel
Mikael Abrahamsson <swmike@swm.pp.se> writes:
> For me, it shows that FQ_CODEL indeed affects TCP performance
> negatively for long links, however it looks like the impact is only
> about 20-30%.
As far as I can tell, fq_codel's throughput is about 10% lower at
100mbit in one direction, while being higher in the other. At 10mbit,
fq_codel shows higher throughput throughout?
> What's stranger is that latency only goes up to around 230ms from its
> 200ms "floor" with FIFO, I had expected a bigger increase in buffering
> with FIFO. Have you done any TCP tuning?
Not apart from what's in mainline (3.9.9 kernel). The latency-inducing
box is after the bottleneck, though, so perhaps it has something to do
with that? Some interaction between netem and the ethernet link?
> Would it be easy for you to do tests with the streams that "loads up
> the link" being 200ms RTT, and the realtime flows only having 30-40ms
> RTT, simulating downloads from a high RTT server and doing interactive
> things to a more local web server.
Not on my current setup, sorry. Also, I only did these tests because I
happened to be at my lab anyway yesterday. Not going back again for a
while, so further tests are out for the time being, I'm afraid...
-Toke
* Re: [Codel] happy 4th!
2013-07-09 7:57 ` [Codel] " Toke Høiland-Jørgensen
@ 2013-07-09 12:56 ` Eric Dumazet
2013-07-09 13:13 ` Toke Høiland-Jørgensen
0 siblings, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2013-07-09 12:56 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: codel, cerowrt-devel, Mikael Abrahamsson
On Tue, 2013-07-09 at 09:57 +0200, Toke Høiland-Jørgensen wrote:
> Mikael Abrahamsson <swmike@swm.pp.se> writes:
>
> > For me, it shows that FQ_CODEL indeed affects TCP performance
> > negatively for long links, however it looks like the impact is only
> > about 20-30%.
>
> As far as I can tell, fq_codel's throughput is about 10% lower on
> 100mbit in one direction, while being higher in the other. For 10mbit
> fq_codel shows higher throughput throughout?
What do you mean? This makes little sense to me.
>
> > What's stranger is that latency only goes up to around 230ms from its
> > 200ms "floor" with FIFO, I had expected a bigger increase in buffering
> > with FIFO. Have you done any TCP tuning?
>
> Not apart from what's in mainline (3.9.9 kernel). The latency-inducing
> box is after the bottleneck, though, so perhaps it has something to do
> with that? Some interaction between netem and the ethernet link?
I did not receive a copy of your setup, so it's hard to tell. But using
netem correctly is tricky.
My current testbed uses the following script, meant to exercise TCP
flows with random RTTs between 49.9 and 50.1 ms, to check how the TCP
stack reacts to reorders. (The answer is: pretty badly.)
Note that using this setup forced me to send two netem patches,
currently in net-next; otherwise netem used too many CPU cycles on its
own.
http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=aec0a40a6f78843c0ce73f7398230ee5184f896d
Followed by a fix:
http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=36b7bfe09b6deb71bf387852465245783c9a6208
The script:
# netem based setup, installed at receiver side only
ETH=eth4
IFB=ifb0
EST="est 1sec 4sec" # Optional rate estimator
modprobe ifb
ip link set dev $IFB up
tc qdisc add dev $ETH ingress 2>/dev/null
tc filter add dev $ETH parent ffff: \
protocol ip u32 match u32 0 0 flowid 1:1 action mirred egress \
redirect dev $IFB
ethtool -K $ETH gro off lro off
tc qdisc del dev $IFB root 2>/dev/null
# Use netem at ingress to delay packets by 25 ms +/- 100us (to get reorders)
tc qdisc add dev $IFB root $EST netem limit 100000 delay 25ms 100us # loss 0.1
tc qdisc del dev $ETH root 2>/dev/null
# Use netem at egress to delay packets by 25 ms (no reorders)
tc qd add dev $ETH root $EST netem delay 25ms limit 100000
And the results for a single TCP flow:
lpq84:~# ./netperf -H 10.7.7.83 -l 10
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.7.7.83 () port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
87380 16384 16384 10.20 37.60
lpq84:~# ./netperf -H 10.7.7.83 -l 10
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.7.7.83 () port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
87380 16384 16384 10.06 116.94
See the rates at the receiver side (check whether packets were dropped
because of too-low qdisc limits, and check the rates):
lpq83:~# tc -s -d qd
qdisc netem 800e: dev eth4 root refcnt 257 limit 100000 delay 25.0ms
Sent 10791616 bytes 115916 pkt (dropped 0, overlimits 0 requeues 0)
rate 7701Kbit 10318pps backlog 47470b 509p requeues 0
qdisc ingress ffff: dev eth4 parent ffff:fff1 ----------------
Sent 8867475174 bytes 5914081 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc netem 800d: dev ifb0 root refcnt 2 limit 100000 delay 25.0ms 99us
Sent 176209244 bytes 116430 pkt (dropped 0, overlimits 0 requeues 0)
rate 123481Kbit 10198pps backlog 0b 0p requeues 0
* Re: [Codel] [Cerowrt-devel] happy 4th!
2013-07-09 6:32 ` [Codel] [Cerowrt-devel] happy 4th! Dave Taht
2013-07-09 7:30 ` Andrew McGregor
@ 2013-07-09 13:09 ` Eric Dumazet
1 sibling, 0 replies; 13+ messages in thread
From: Eric Dumazet @ 2013-07-09 13:09 UTC (permalink / raw)
To: Dave Taht
Cc: Toke Høiland-Jørgensen, codel, cerowrt-devel,
Mikael Abrahamsson
On Mon, 2013-07-08 at 23:32 -0700, Dave Taht wrote:
> and... unlike in the past where tcp was being optimized for
> supercomputer center to supercomputer center, the vast majority of tcp
> related work is now coming out of google, who are optimizing for short
> transfers over short rtts.
That's not really true; we work on many issues, including long
transfers and long RTTs.
Beware of tools reproducing latencies, reorders, and drops, because
they often add unexpected bugs. One has to be extra careful and check
tcpdumps or the like to double-check that the tools are not buggy.
* Re: [Codel] happy 4th!
2013-07-09 12:56 ` Eric Dumazet
@ 2013-07-09 13:13 ` Toke Høiland-Jørgensen
2013-07-09 13:23 ` Eric Dumazet
2013-07-09 13:36 ` Eric Dumazet
0 siblings, 2 replies; 13+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-07-09 13:13 UTC (permalink / raw)
To: Eric Dumazet; +Cc: codel, cerowrt-devel, Mikael Abrahamsson
Eric Dumazet <eric.dumazet@gmail.com> writes:
> What do you mean ? This makes little sense to me.
The data from my previous post
(http://archive.tohojo.dk/bufferbloat-data/long-rtt/throughput.txt)
shows fq_codel achieving higher aggregate throughput in some cases than
pfifo_fast does.
> I did not received a copy of your setup, so its hard to tell. But
> using netem correctly is tricky.
The setup is this:
Client <--100mbit--> Gateway <--10mbit--> netem box <--10mbit--> Server
The netem box adds 100ms of latency to each of its interfaces (with no
other qdisc applied). Gateway and server both have ethernet speed
negotiation set to 10mbit or 100mbit (respectively for each of the
tests) on the interfaces facing the netem box.
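A minimal reconstruction of what the netem box is presumably running (interface names are assumed, and the limit value is a guess; the speed pinning happens on the gateway and server, not here):

```shell
# 100 ms of egress delay on each interface: each direction of a flow is
# delayed once, giving ~200 ms RTT end to end.
# A generous limit avoids netem itself dropping packets at 100mbit.
tc qdisc add dev eth0 root netem delay 100ms limit 10000
tc qdisc add dev eth1 root netem delay 100ms limit 10000
```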
> My current testbed uses the following script, meant to exercise tcp
> flows with random RTT between 49.9 and 50.1 ms, to check how TCP stack
> reacts to reorders (The answer is : pretty badly.)
Doesn't netem have an option to simulate reordering?
-Toke
* Re: [Codel] happy 4th!
2013-07-09 13:13 ` Toke Høiland-Jørgensen
@ 2013-07-09 13:23 ` Eric Dumazet
2013-07-09 13:25 ` Toke Høiland-Jørgensen
2013-07-09 13:36 ` Eric Dumazet
1 sibling, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2013-07-09 13:23 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: codel, cerowrt-devel, Mikael Abrahamsson
On Tue, 2013-07-09 at 15:13 +0200, Toke Høiland-Jørgensen wrote:
>
> Doesn't netem have an option to simulate reordering?
It's really too basic for my needs.
It decides to put the new packet at the front of the transmit queue.
If you use netem to add a delay, then adding reordering is only a matter
of using a variable/randomized delay.
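In other words (the device name is a placeholder), something like:

```shell
# Each packet gets an independent delay drawn from 50ms +/- 100us, so a
# packet drawn near 49.9ms can overtake one drawn near 50.1ms: reordering.
tc qdisc add dev eth0 root netem delay 50ms 100us limit 100000
```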
* Re: [Codel] happy 4th!
2013-07-09 13:23 ` Eric Dumazet
@ 2013-07-09 13:25 ` Toke Høiland-Jørgensen
0 siblings, 0 replies; 13+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-07-09 13:25 UTC (permalink / raw)
To: Eric Dumazet; +Cc: codel, cerowrt-devel, Mikael Abrahamsson
Eric Dumazet <eric.dumazet@gmail.com> writes:
> Its really too basic for my needs.
>
> It decides to put the new packet at the front of transmit queue.
Right, I see.
> If you use netem to add a delay, then adding reordering is only a
> matter of using a variable/randomized delay.
Yeah, I realised that; I was just wondering why you found the built-in
reordering mechanism insufficient. :)
-Toke
* Re: [Codel] happy 4th!
2013-07-09 13:13 ` Toke Høiland-Jørgensen
2013-07-09 13:23 ` Eric Dumazet
@ 2013-07-09 13:36 ` Eric Dumazet
2013-07-09 13:45 ` Toke Høiland-Jørgensen
1 sibling, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2013-07-09 13:36 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: codel, cerowrt-devel, Mikael Abrahamsson
On Tue, 2013-07-09 at 15:13 +0200, Toke Høiland-Jørgensen wrote:
> Eric Dumazet <eric.dumazet@gmail.com> writes:
>
> > What do you mean ? This makes little sense to me.
>
> The data from my previous post
> (http://archive.tohojo.dk/bufferbloat-data/long-rtt/throughput.txt)
> shows fq_codel achieving higher aggregate throughput in some cases than
> pfifo_fast does.
>
> > I did not received a copy of your setup, so its hard to tell. But
> > using netem correctly is tricky.
>
> The setup is this:
>
> Client <--100mbit--> Gateway <--10mbit--> netem box <--10mbit--> Server
>
> The netem box adds 100ms of latency to each of its interfaces (with no
> other qdisc applied).
OK, that's a total of 200 ms RTT. That's a pretty high value :(
Could you send "tc -s qdisc" taken at netem box ?
* Re: [Codel] happy 4th!
2013-07-09 13:36 ` Eric Dumazet
@ 2013-07-09 13:45 ` Toke Høiland-Jørgensen
2013-07-09 13:49 ` Eric Dumazet
0 siblings, 1 reply; 13+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-07-09 13:45 UTC (permalink / raw)
To: Eric Dumazet; +Cc: codel, cerowrt-devel, Mikael Abrahamsson
Eric Dumazet <eric.dumazet@gmail.com> writes:
> OK, thats a total of 200 ms RTT. Its a pretty high value :(
Yeah, that was the point; Mikael requested such a test be run, and I
happened to be near my lab setup yesterday, so thought I'd run it.
> Could you send "tc -s qdisc" taken at netem box ?
Not really, no; sorry. I shut the whole thing down, and I'm going on
holiday tomorrow, so I won't have a chance to go back for at least a
couple of weeks. I'll keep it in mind for the next time I get there;
anything else I should make sure to collect while I'm at it? :)
-Toke
* Re: [Codel] happy 4th!
2013-07-09 13:45 ` Toke Høiland-Jørgensen
@ 2013-07-09 13:49 ` Eric Dumazet
2013-07-09 13:53 ` Toke Høiland-Jørgensen
0 siblings, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2013-07-09 13:49 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: codel, cerowrt-devel, Mikael Abrahamsson
On Tue, 2013-07-09 at 15:45 +0200, Toke Høiland-Jørgensen wrote:
> Eric Dumazet <eric.dumazet@gmail.com> writes:
>
> > OK, thats a total of 200 ms RTT. Its a pretty high value :(
>
> Yeah, that was the point; Mikael requested such a test be run, and I
> happened to be near my lab setup yesterday, so thought I'd run it.
>
> > Could you send "tc -s qdisc" taken at netem box ?
>
> Not really, no; sorry. Shut the whole thing down and I'm going on
> holiday tomorrow, so won't have a chance to go back for at least a
> couple of weeks. Will keep it in mind for the next time I get there;
> anything else I should make sure to collect while I'm at it? :)
>
It would be nice if the rrul results could include an nstat snapshot:
nstat >/dev/null ; rrul_tests ; nstat
* Re: [Codel] happy 4th!
2013-07-09 13:49 ` Eric Dumazet
@ 2013-07-09 13:53 ` Toke Høiland-Jørgensen
2013-07-09 14:07 ` Eric Dumazet
0 siblings, 1 reply; 13+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-07-09 13:53 UTC (permalink / raw)
To: Eric Dumazet; +Cc: codel, cerowrt-devel, Mikael Abrahamsson
Eric Dumazet <eric.dumazet@gmail.com> writes:
> It would be nice it the rrul results could include a nstat snapshot
>
> nstat >/dev/null ; rrul_tests ; nstat
Sure, can do. Is that from the client machine or the netem box?
-Toke
* Re: [Codel] happy 4th!
2013-07-09 13:53 ` Toke Høiland-Jørgensen
@ 2013-07-09 14:07 ` Eric Dumazet
0 siblings, 0 replies; 13+ messages in thread
From: Eric Dumazet @ 2013-07-09 14:07 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: codel, cerowrt-devel, Mikael Abrahamsson
On Tue, 2013-07-09 at 15:53 +0200, Toke Høiland-Jørgensen wrote:
> Eric Dumazet <eric.dumazet@gmail.com> writes:
>
> > It would be nice it the rrul results could include a nstat snapshot
> >
> > nstat >/dev/null ; rrul_tests ; nstat
>
> Sure, can do. Is that from the client machine or the netem box?
Client machine, as I'm interested in TCP metrics ;)