* debloat-testing loadlatency test
From: Dave Täht @ 2011-03-21 14:45 UTC (permalink / raw)
To: bloat-devel
Moving this conversation to bloat-devel to reduce the noise level. I got my
somewhat debloated network up late last night and fired off loadlatency over
the path: openrd <-> wndr3700 <-> davepc.
The wndr3700 has the buffer size in the ath9k driver reduced to a depth of 3,
TX_RETRIES of 4, and a txqueuelen of 2; davepc is running debloat-testing with
the iwlagn driver and a txqueuelen of 4. The two devices connect at 36 Mbit,
however.
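For reference, the txqueuelen values above are normally set with
"ifconfig <dev> txqueuelen N" or "ip link set <dev> txqueuelen N"; the same
change can also be made from C via the SIOCSIFTXQLEN ioctl. A minimal sketch
follows; the interface name and queue length are illustrative, not Dave's
exact setup.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>          /* struct ifreq, ifr_qlen */
#include <linux/sockios.h>   /* SIOCSIFTXQLEN */

/* Set a small txqueuelen on a device (interface name is an assumption). */
int main(void)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "wlan0", IFNAMSIZ - 1);   /* assumed interface */
    ifr.ifr_qlen = 4;                               /* new queue length  */

    if (ioctl(fd, SIOCSIFTXQLEN, &ifr) < 0) {
        perror("SIOCSIFTXQLEN");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}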
The conditions of this test are (purposely) poor: the two wireless devices are
separated by 3 floors and 2 concrete walls. In other (iperf-related) testing
I've seen 13% packet loss on ping.
Scenario 1: 0 uploads, 1 downloads... 2101 KiB/s down, 4.07 Hz smoothness
Scenario 2: 1 uploads, 0 downloads... 1614 KiB/s up, 4.50 Hz smoothness
Scenario 3: 0 uploads, 2 downloads... 2106 KiB/s down, 0.04 Hz smoothness
Scenario 4: 1 uploads, 1 downloads... 345 KiB/s up, 1604 KiB/s down, 0.07 Hz smoothness
Scenario 5: 2 uploads, 0 downloads... 1661 KiB/s up, 0.14 Hz smoothness
Scenario 6: 0 uploads, 3 downloads... 2100 KiB/s down, 0.04 Hz smoothness
Scenario 7: 1 uploads, 2 downloads... 244 KiB/s up, 1822 KiB/s down, 0.02 Hz smoothness
Scenario 8: 2 uploads, 1 downloads... 553 KiB/s up, 1419 KiB/s down, 0.04 Hz smoothness
Scenario 9: 3 uploads, 0 downloads... 1672 KiB/s up, 0.15 Hz smoothness
Scenario 10: 0 uploads, 4 downloads... 1845 KiB/s down, 0.01 Hz smoothness
Scenario 11: 1 uploads, 3 downloads... 186 KiB/s up, 1861 KiB/s down, 0.05 Hz smoothness
Scenario 12: 2 uploads, 2 downloads... 388 KiB/s up, 1630 KiB/s down, 0.04 Hz smoothness
Scenario 13: 3 uploads, 1 downloads... 744 KiB/s up, 1234 KiB/s down, 0.09 Hz smoothness
Scenario 14: 4 uploads, 0 downloads... 1697 KiB/s up, 0.07 Hz smoothness
Scenario 15: 0 uploads, 32 downloads... 1982 KiB/s down, 0.00 Hz smoothness
Scenario 16: 1 uploads, 31 downloads... 51 KiB/s up, 1810 KiB/s down, 0.00 Hz smoothness
Scenario 17: 16 uploads, 16 downloads...
^C
real 416m30.222s
user 3m3.230s
sys 25m41.070s
I aborted the test when I got up this morning.
--
Dave Taht
http://nex-6.taht.net
* Re: debloat-testing loadlatency test
From: Jonathan Morton @ 2011-03-21 19:43 UTC (permalink / raw)
To: Dave Täht; +Cc: bloat-devel
On 21 Mar, 2011, at 4:45 pm, Dave Täht wrote:
> I aborted the test when I got up this morning.
Yes, it looks as though it may have got stuck, possibly on a partly-mitigated race condition I found. A full test run should never take that long, as there is an override condition which forces a scenario to complete after about 10 minutes.
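A minimal sketch of the kind of per-scenario wall-clock override described
above; the names here are hypothetical and this is not the actual loadlatency
code, just the general mechanism:

#include <time.h>
#include <stdbool.h>

#define SCENARIO_LIMIT_SECS (10 * 60)   /* force completion after ~10 minutes */

/* Hypothetical helper: worker threads poll this and wind down once the
   wall-clock deadline for the current scenario has passed. */
static bool scenario_expired(const struct timespec *start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start->tv_sec) >= SCENARIO_LIMIT_SECS;
}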
It would probably be worth re-running this with better propagation conditions. We've already established that even with high-quality physical links, the statistics are usually remarkably bad - I'd like to establish that high smoothness and responsiveness figures are actually possible.
At least you don't seem to have any actual dropped connections. Here's a run between two Macs on GigE:
MinRTT: 0.0ms
Scenario 1: 0 uploads, 1 downloads... 107198 KiB/s down, 22.82 Hz smoothness
Scenario 2: 1 uploads, 0 downloads... 105388 KiB/s up, 31.00 Hz smoothness
Scenario 3: 0 uploads, 2 downloads... 110783 KiB/s down, 7.52 Hz smoothness
Scenario 4: 1 uploads, 1 downloads... 50998 KiB/s up, 85859 KiB/s down, 2.90 Hz smoothness
Scenario 5: 2 uploads, 0 downloads... 108230 KiB/s up, 2.73 Hz smoothness
Scenario 6: 0 uploads, 3 downloads... 112491 KiB/s down, 7.29 Hz smoothness
Scenario 7: 1 uploads, 2 downloads... 33296 KiB/s up, 100526 KiB/s down, 3.11 Hz smoothness
Scenario 8: 2 uploads, 1 downloads... 69995 KiB/s up, 66111 KiB/s down, 2.53 Hz smoothness
Scenario 9: 3 uploads, 0 downloads... 102502 KiB/s up, 4.96 Hz smoothness
Scenario 10: 0 uploads, 4 downloads... 111745 KiB/s down, 2.55 Hz smoothness
Scenario 11: 1 uploads, 3 downloads... 21816 KiB/s up, 104424 KiB/s down, 6.00 Hz smoothness
Scenario 12: 2 uploads, 2 downloads... 52828 KiB/s up, 93344 KiB/s down, 2.96 Hz smoothness
Scenario 13: 3 uploads, 1 downloads... 82564 KiB/s up, 52255 KiB/s down, 6.89 Hz smoothness
Scenario 14: 4 uploads, 0 downloads... 110715 KiB/s up, 2.57 Hz smoothness
Scenario 15: 0 uploads, 32 downloads... 0 KiB/s down, 0.00 Hz smoothness
Scenario 16: 1 uploads, 31 downloads... 14712 KiB/s up, 0 KiB/s down, 0.00 Hz smoothness
Scenario 17: 16 uploads, 16 downloads... 76252 KiB/s up, 85105 KiB/s down, 0.86 Hz smoothness
Scenario 18: 31 uploads, 1 downloads... 113567 KiB/s up, 5775 KiB/s down, 2.64 Hz smoothness
Scenario 19: 32 uploads, 0 downloads... 107671 KiB/s up, 2.55 Hz smoothness
OVERALL:
Upload Capacity: 50450 KiB/s
Download Capacity: 0 KiB/s
Link Responsiveness: 0 Hz
Flow Smoothness: 0 Hz
In the above, during scenarios 15 and 16, the G5 (acting as the server) reported hard shutdowns of several spew() threads, indicating that the TCP/IP stack had cancelled the connections. The MBP (as client) didn't hard-shutdown any connections. Shutdown connections are deliberately recorded as zeroes because they constitute a serious failure of the network, and the way the stats are combined ensures that this is reflected in the overall results.
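One aggregation with the property Jonathan describes is a harmonic mean, where
a single zero sample forces the whole figure to zero. This is only an
illustration of that property; it is not necessarily loadlatency's actual
combining formula:

#include <stddef.h>

/* Harmonic mean: any zero sample forces the aggregate to zero, so a single
   failed (shut-down) flow shows up in the overall figure. Illustration only;
   not necessarily the combination loadlatency actually uses. */
static double harmonic_mean(const double *v, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (v[i] <= 0.0)
            return 0.0;            /* a zeroed (failed) flow dominates */
        sum += 1.0 / v[i];
    }
    return n ? (double)n / sum : 0.0;
}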
It seems that OSX uses a rather aggressive TCP which can actually saturate GigE even with one connection. The tradeoff is that with multiple flows in contention, they can be mutually unfair enough for some flows to be completely starved out by packet losses. When that happens, the TCP retransmits, but the retransmissions are also lost with rather high probability. Eventually, the stack decides the flow is dead and cancels it.
It's not clear whether both ends are using ECN here (the G5 is restricted to OSX 10.5.x because 10.6 requires an Intel CPU), but either way it is clearly ineffective because there is no AQM on the GigE ports (or at least not the one in the G5). ECN requires AQM to function.
- Jonathan
* Re: debloat-testing loadlatency test
From: Dave Täht @ 2011-03-22 1:13 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat-devel
Jonathan Morton <chromatix99@gmail.com> writes:
> On 21 Mar, 2011, at 4:45 pm, Dave Täht wrote:
>
>> I aborted the test when I got up this morning.
>
> Yes, it looks as though it may have got stuck, possibly on a
> partly-mitigated race condition I found. A full test run should never
> take that long, as there is an override condition which forces a
> scenario to complete after about 10 minutes.
I was looking over RNGs a little today; it looked like drand48_r would be
thread-safe...
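For reference, drand48_r is reentrant precisely because the caller supplies
the state; a minimal per-thread sketch (not loadlatency's actual code) looks
like this:

#define _GNU_SOURCE            /* drand48_r / srand48_r are glibc extensions */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

/* Each thread owns its own drand48 state, so no locking is required. */
static void *worker(void *arg)
{
    long id = (long)arg;
    struct drand48_data state;
    double r;

    srand48_r(0x5eed + id, &state);        /* per-thread seed */
    for (int i = 0; i < 4; i++) {
        drand48_r(&state, &r);             /* reentrant: result returned via &r */
        printf("thread %ld: %f\n", id, r);
    }
    return NULL;
}

int main(void)                             /* build: gcc -pthread drand48_demo.c */
{
    pthread_t t[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}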
--
Dave Taht
http://nex-6.taht.net
* Re: debloat-testing loadlatency test
From: Jonathan Morton @ 2011-03-22 7:35 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat-devel
On 21 Mar, 2011, at 9:43 pm, Jonathan Morton wrote:
> I'd like to establish that high smoothness and responsiveness figures are actually possible.
And here is the beginning of that proof: a run between my MBP and my firewall (an old PowerBook G3 running Linux) over Ethernet. The G3 only supports 100base-TX, but has SFQ turned on for that port, and is using my Blackpool mod (designed for 3G) in its TCP stack. The MBP is stock Snow Leopard fare.
MinRTT: 0.0ms
Scenario 1: 0 uploads, 1 downloads... 11486 KiB/s down, 30.19 Hz smoothness
Scenario 2: 1 uploads, 0 downloads... 11439 KiB/s up, 68.72 Hz smoothness
Scenario 3: 0 uploads, 2 downloads... 11506 KiB/s down, 28.20 Hz smoothness
Scenario 4: 1 uploads, 1 downloads... 11293 KiB/s up, 5024 KiB/s down, 26.74 Hz smoothness
Scenario 5: 2 uploads, 0 downloads... 11474 KiB/s up, 37.20 Hz smoothness
Scenario 6: 0 uploads, 3 downloads... 11514 KiB/s down, 20.26 Hz smoothness
Scenario 7: 1 uploads, 2 downloads... 10946 KiB/s up, 7253 KiB/s down, 20.76 Hz smoothness
Scenario 8: 2 uploads, 1 downloads... 11361 KiB/s up, 4428 KiB/s down, 21.33 Hz smoothness
Scenario 9: 3 uploads, 0 downloads... 11471 KiB/s up, 21.91 Hz smoothness
Scenario 10: 0 uploads, 4 downloads... 11515 KiB/s down, 19.66 Hz smoothness
Scenario 11: 1 uploads, 3 downloads... 9911 KiB/s up, 8196 KiB/s down, 23.96 Hz smoothness
Scenario 12: 2 uploads, 2 downloads... 11319 KiB/s up, 4705 KiB/s down, 9.80 Hz smoothness
Scenario 13: 3 uploads, 1 downloads... 11355 KiB/s up, 4230 KiB/s down, 19.53 Hz smoothness
Scenario 14: 4 uploads, 0 downloads... 11482 KiB/s up, 15.77 Hz smoothness
Scenario 15: 0 uploads, 32 downloads... 10673 KiB/s down, 0.68 Hz smoothness
Scenario 16: 1 uploads, 31 downloads... 1441 KiB/s up, 11462 KiB/s down, 1.97 Hz smoothness
Scenario 17: 16 uploads, 16 downloads... 11477 KiB/s up, 5043 KiB/s down, 2.48 Hz smoothness
Scenario 18: 31 uploads, 1 downloads... 11640 KiB/s up, 1561 KiB/s down, 0.48 Hz smoothness
Scenario 19: 32 uploads, 0 downloads... 11775 KiB/s up, 6.70 Hz smoothness
OVERALL:
Upload Capacity: 7584 KiB/s
Download Capacity: 5598 KiB/s
Link Responsiveness: 0 Hz
Flow Smoothness: 0 Hz
The overall stats are still dragged down by poor 32-flow results, but notice that even with 32 uploads (towards the G3), the smoothness is considerably improved from the untweaked GigE situation. For up to 4 flows, the smoothness remains high even with excellent link utilisation.
And this is with CUBIC being used on the G3, with nothing to trigger ECN. That's probably the combination that drags down the 32-flow results.
- Jonathan
* Re: debloat-testing loadlatency test
From: Pedro Tumusok @ 2011-03-22 8:50 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat-devel
Hi,
I compiled this and started a test last night before going to bed.
This is on my internal network, which is a mess in this regard, but I
had to test first.
PC with client - Router/AP - Switch - Homeplug - Homeplug - PC with server
The PC with the client is a laptop running Ubuntu 10.10.
The client PC connects to the router/AP via wireless.
The router/AP is connected to the switch; the router/AP's switch ports are GE,
but the external switch is FE.
The connection between the router/AP and the switch is 100Mb.
The connection between the switch and the homeplug is 100Mb.
The connection speed/sync between the homeplugs is unknown.
Homeplug to the PC with the server is 100Mb.
The PC with the server is running Ubuntu 10.10.
jpt@jpt-laptop:~/Bloat/bufferbloat$ ./loadlatency 172.18.18.59
Selected client ID 6374F8B526CC5866
Connected to 172.18.18.59, waiting for response...
Server responding, beginning test...
MinRTT: 1.8ms
Scenario 1: 0 uploads, 1 downloads... 1903 KiB/s down, 0.81 Hz smoothness
Scenario 2: 1 uploads, 0 downloads... 2022 KiB/s up, 6.87 Hz smoothness
Scenario 3: 0 uploads, 2 downloads... 2204 KiB/s down, 0.78 Hz smoothness
Scenario 4: 1 uploads, 1 downloads... 981 KiB/s up, 1130 KiB/s down, 0.53 Hz smoothness
Scenario 5: 2 uploads, 0 downloads... 1981 KiB/s up, 4.86 Hz smoothness
Scenario 6: 0 uploads, 3 downloads... 2203 KiB/s down, 1.07 Hz smoothness
Scenario 7: 1 uploads, 2 downloads... 748 KiB/s up, 1283 KiB/s down, 0.39 Hz smoothness
Scenario 8: 2 uploads, 1 downloads... 1032 KiB/s up, 936 KiB/s down, 0.33 Hz smoothness
Scenario 9: 3 uploads, 0 downloads... 2039 KiB/s up, 5.25 Hz smoothness
Scenario 10: 0 uploads, 4 downloads... 2275 KiB/s down, 1.03 Hz smoothness
Scenario 11: 1 uploads, 3 downloads... 840 KiB/s up, 1278 KiB/s down, 0.40 Hz smoothness
Scenario 12: 2 uploads, 2 downloads... 911 KiB/s up, 1221 KiB/s down, 0.36 Hz smoothness
Scenario 13: 3 uploads, 1 downloads... 1068 KiB/s up, 929 KiB/s down, 0.54 Hz smoothness
Scenario 14: 4 uploads, 0 downloads... 2055 KiB/s up, 3.23 Hz smoothness
Scenario 15: 0 uploads, 32 downloads... 2251 KiB/s down, 0.44 Hz smoothness
Scenario 16: 1 uploads, 31 downloads... 534 KiB/s up, 1578 KiB/s down, 0.07 Hz smoothness
Scenario 17: 16 uploads, 16 downloads... 695 KiB/s up, 1329 KiB/s down, 0.07 Hz smoothness
Scenario 18: 31 uploads, 1 downloads... 1073 KiB/s up, 1007 KiB/s down, 0.36 Hz smoothness
Scenario 19: 32 uploads, 0 downloads... 2119 KiB/s up, 1.08 Hz smoothness
OVERALL:
Upload Capacity: 1058 KiB/s
Download Capacity: 1385 KiB/s
Link Responsiveness: 0 Hz
Flow Smoothness: 0 Hz
I'll do an elimination of units to get some comparison numbers, i.e. PC to PC
directly connected, then PC to PC through a switch, and so forth. After that,
I'll start looking into debloating the network.
Pedro
* Re: debloat-testing loadlatency test
From: Jonathan Morton @ 2011-03-22 14:15 UTC (permalink / raw)
To: Pedro Tumusok; +Cc: bloat-devel
On 22 Mar 2011, at 10:50, Pedro Tumusok <pedro.tumusok@gmail.com> wrote:
> PC with client - Router/AP - Switch - Homeplug - Homeplug - PC with server
Based on the results, I would guess that the Homeplugs are running at about 20Mbps. You would therefore want to put some AQM into those for best results, since they control the bottleneck.
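As a rough check: the best download figure above, 2275 KiB/s, works out to
2275 x 1024 x 8, or about 18.6 Mbit/s of TCP goodput, which is consistent with
a raw Homeplug rate of roughly 20 Mbit/s once protocol overhead is allowed for.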
The key to knowledge is not to rely on others to teach you it.
* Re: debloat-testing loadlatency test
From: Felix Fietkau @ 2011-03-22 23:59 UTC (permalink / raw)
To: Dave Täht; +Cc: bloat-devel
On 2011-03-22 2:13 AM, Dave Täht wrote:
> Jonathan Morton <chromatix99@gmail.com> writes:
>
>> On 21 Mar, 2011, at 4:45 pm, Dave Täht wrote:
>>
>>> I aborted the test when I got up this morning.
>>
>> Yes, it looks as though it may have got stuck, possibly on a
>> partly-mitigated race condition I found. A full test run should never
>> take that long, as there is an override condition which forces a
>> scenario to complete after about 10 minutes.
>
> I was looking over rngs a little today, it looked like drand48_r would
> be threadsafe...
I made a version using a userspace port of FreeBSD's arc4random instead of
GSL.
http://nbd.name/gitweb.cgi?p=loadlatency.git;a=summary
git://nbd.name/loadlatency.git
If any of you want write access to that repository, just send me an SSH key.
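For illustration, assuming the port keeps the usual BSD interface
(arc4random(void) returning a uint32_t), drawing a uniform value then needs no
seeding or per-thread state at all; this is a sketch, not necessarily how the
tree above does it:

#include <stdint.h>
#include <stdio.h>

/* Assumed to be provided by the userspace arc4random port (BSD-style API). */
extern uint32_t arc4random(void);

/* Uniform double in [0, 1) from one 32-bit draw; unlike gsl_rng or
   drand48_r there is no generator state for the caller to manage. */
static double uniform01(void)
{
    return arc4random() / 4294967296.0;   /* divide by 2^32 */
}

int main(void)
{
    for (int i = 0; i < 4; i++)
        printf("%f\n", uniform01());
    return 0;
}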
- Felix
* Re: debloat-testing loadlatency test
From: Felix Fietkau @ 2011-03-23 3:07 UTC (permalink / raw)
To: bloat-devel
On 2011-03-23 12:59 AM, Felix Fietkau wrote:
> On 2011-03-22 2:13 AM, Dave Täht wrote:
>> Jonathan Morton <chromatix99@gmail.com> writes:
>>
>>> On 21 Mar, 2011, at 4:45 pm, Dave Täht wrote:
>>>
>>>> I aborted the test when I got up this morning.
>>>
>>> Yes, it looks as though it may have got stuck, possibly on a
>>> partly-mitigated race condition I found. A full test run should never
>>> take that long, as there is an override condition which forces a
>>> scenario to complete after about 10 minutes.
>>
>> I was looking over rngs a little today, it looked like drand48_r would
>> be threadsafe...
> Made a version using a user space port of freebsd's arc4random instead
> of gsl.
> http://nbd.name/gitweb.cgi?p=loadlatency.git;a=summary
> git://nbd.name/loadlatency.git
> If any of you want write access to that repository, just send me an SSH key.
Update: I used Bob Jenkins' public-domain rand.c instead; on my machine the
result is about twice as fast as the GSL random function.
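Which of Jenkins' generators that rand.c port contains isn't stated here, so
purely as an illustration of the kind of small, fast, public-domain PRNG being
swapped in, this is his well-known "small noncryptographic PRNG" (not
necessarily the code Felix used):

#include <stdint.h>

/* Bob Jenkins' public-domain small noncryptographic PRNG, shown for
   illustration; not necessarily the generator in the rand.c port above. */
typedef struct { uint32_t a, b, c, d; } ranctx;

#define rot(x, k) (((x) << (k)) | ((x) >> (32 - (k))))

static uint32_t ranval(ranctx *x)
{
    uint32_t e = x->a - rot(x->b, 27);
    x->a = x->b ^ rot(x->c, 17);
    x->b = x->c + x->d;
    x->c = x->d + e;
    x->d = e + x->a;
    return x->d;
}

static void raninit(ranctx *x, uint32_t seed)
{
    x->a = 0xf1ea5eed;
    x->b = x->c = x->d = seed;
    for (int i = 0; i < 20; i++)
        (void)ranval(x);                   /* warm up the state */
}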
- Felix