Hi Fred,
On Aug 14, 2013, at 16:28 , Fred Stratton <fredstratton@imap.cc> wrote:
On 14 Aug 2013, at 15:18, Sebastian Moeller <moeller0@gmx.de> wrote:
Hi Fred,
On Aug 14, 2013, at 15:37 , Fred Stratton <fredstratton@imap.cc> wrote:
On 14 Aug 2013, at 14:31, Sebastian Moeller <moeller0@gmx.de> wrote:
Hi Fred,
On Aug 14, 2013, at 14:01 , Fred Stratton <fredstratton@imap.cc> wrote:
On 14 Aug 2013, at 12:42, Sebastian Moeller <moeller0@gmx.de> wrote:
Hi Fred,
On Aug 13, 2013, at 21:40 , Fred Stratton <fredstratton@imap.cc> wrote:
(apologies for wrecking the list, and introducing email addresses in error)
Begin forwarded message
On 13 Aug 2013, at 19:53, Sebastian Moeller <moeller0@gmx.de> wrote:
Hi Fred,
On Aug 13, 2013, at 17:28 , Fred Stratton <fredstratton@imap.cc> wrote:
I have been experimenting with the two sets of modified scripts and AQM panels. Thank you for constructing them.
Thanks for testing...
To mention one thing: the string 'for ATM choose' is repeated erroneously in the extended panel.
Fixed… I will try to test whether it actually works before sending the next version...
The scripts work.
The link layer giving best results is ethernet.
What and how did you measure? Using "use HTB's private mechanism for linklayer and overhead" or "Use tc's stab mechanism for linklayer and overhead"? A little browsing of the kernel source makes me believe that the HTB version is fully busted and will not do anything at all (so I would have imagined ADSL/ATM and ethernet to behave the same). I am thinking about how to test whether a link layer adjustment works or not.
An error on my part. I had both chosen. They are mutually exclusive options. 2 days of testing lost. Shall restart.
I will try to fix the AQM scripts to make these two mutually exclusive. That said, the HTB internal implementation does not seem to work at all, so enabling both should be equivalent to just enabling stab. In my quick and dirty testing (using netperf-wrapper, which I got working on macosx 10.8) it looks like activating both actually should work. BTW I am looking for an open netperf server in Europe, anybody have any ideas?
I am actually getting better results from htb than tc-stab at present.
Then I will have to test and compare the RRUL performance for stab link layer adjustments (lla), htb-lla, no-lla, and no shaping at all, at 50% of link rate and at say 80% of link rate, and see which performs best. Alas I need a closer netperf 2.6.0 netserver binary than the ones in NY and CA. So far I am failing to find a windows binary I could run on one of the machines in the lab…
How do you measure currently? I would love to run the same tests to figure out what is up with the two lla methods.
Netalyzr, for all its deficiencies.
Ah, so you are basically testing the maximum depth of the ge00 uplink fq_codels, which is set at 600 at the moment. You could test this by changing the value in egress() in simple.qos
$TC qdisc add dev $IFACE parent 1:11 handle 110: $QDISC limit 600 $NOECN `get_quantum 300` `get_flows ${PRIO_RATE}`
$TC qdisc add dev $IFACE parent 1:12 handle 120: $QDISC limit 600 $NOECN `get_quantum 300` `get_flows ${BE_RATE}`
$TC qdisc add dev $IFACE parent 1:13 handle 130: $QDISC limit 600 $NOECN `get_quantum 300` `get_flows ${BK_RATE}`
to
$TC qdisc add dev $IFACE parent 1:11 handle 110: $QDISC limit 300 $NOECN `get_quantum 300` `get_flows ${PRIO_RATE}`
$TC qdisc add dev $IFACE parent 1:12 handle 120: $QDISC limit 300 $NOECN `get_quantum 300` `get_flows ${BE_RATE}`
$TC qdisc add dev $IFACE parent 1:13 handle 130: $QDISC limit 300 $NOECN `get_quantum 300` `get_flows ${BK_RATE}`
and the reported buffering will get lower… I assume that netalyzr only complains about the uplink and that the downlink is okay? Setting limit higher will make netalyzr more unhappy (up to a certain number, over which netalyzr will get happy again). I assume netalyzr simply uses an inelastic UDP load of fixed bandwidth to fill the buffers until dropping occurs, and deduces the size of the buffer from the amount of data that passed before the first drop; if everything fits into the buffer nothing needs dropping, and hence netalyzr will be happy even though the buffers just got more bloated.
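That assumed method can be turned into quick arithmetic. A small sketch (this is only my guess at how netalyzr estimates buffering, not its actual code, and the rates used below are made up for illustration):

```shell
#!/bin/sh
# Rough buffer-size estimate under the assumed netalyzr method: a UDP probe
# at send_rate kbps over a link_rate kbps uplink fills the queue with the
# rate excess until the first drop at t_drop seconds.
estimate_buffer_kbytes() {
    send_rate_kbps=$1; link_rate_kbps=$2; t_drop_s=$3
    # queue fill rate is the excess over the link rate; kbit -> kbyte via /8
    awk -v s="$send_rate_kbps" -v l="$link_rate_kbps" -v t="$t_drop_s" \
        'BEGIN { printf "%.1f\n", (s - l) * t / 8 }'
}

# e.g. a 2000 kbps probe on a 1000 kbps uplink, first drop after 2 s:
estimate_buffer_kbytes 2000 1000 2   # -> 250.0 (kbytes of buffering)
```

If the probe ends before the first drop, the queue was never filled, which would explain netalyzr reporting a small buffer even when the limit is large.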
You could also try http://loki10.mpi-sws.mpg.de/bb/bb.php for that kind of a test which IIRC uses higher bandwidth probes.
But we have been there in the past, see https://lists.bufferbloat.net/pipermail/cerowrt-devel/2012-August/000418.html: the critical test is to see whether a concurrent ping or ssh session stays responsive in spite of the netalyzr probe… And interestingly, in the old e-mail cited, no AQM defaulted to 500ms uplink delay; the cable modem's buffer was simply not enormously oversized to begin with…
Could you share the netalyzr numbers for no AQM, AQM with HTB, and AQM with stab? And in case you can run this, how do the ping statistics (example:
--- www.heise.de ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 5.813/6.120/6.619/0.213 ms
of a concurrent "ping -c 1000 IPADDRESS.OF.NEAREST.ISPHOST" look? Especially the max and stddev values could be interesting…
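For pulling just those two numbers out of a long ping run, something like this works on the summary line format shown above (BSD/OS X format; Linux iputils labels the same fields min/avg/max/mdev, which the pattern also matches):

```shell
#!/bin/sh
# Extract the max and stddev/mdev fields from ping's summary line, e.g.
#   round-trip min/avg/max/stddev = 5.813/6.120/6.619/0.213 ms
ping_max_stddev() {
    awk -F'[/ ]' '/round-trip|rtt/ { print "max " $(NF-2) " ms, stddev " $(NF-1) " ms" }'
}

# usage: ping -c 1000 some.host | ping_max_stddev
echo "round-trip min/avg/max/stddev = 5.813/6.120/6.619/0.213 ms" | ping_max_stddev
# -> max 6.619 ms, stddev 0.213 ms
```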
There will be a delay.
I do not think that these measurements will be too revealing anyway; netalyzr is a good test for simple pfifo or pfifo_fast qdiscs, but not too interesting for fq_codel. In case you stick to using netalyzr, concurrent ping data might be nice, but better to switch to a different latency probe.
have installed netperf-wrapper etc, but am having difficulty with the switches.
Ah, so far I have simply used
./netperf-wrapper -l 60 -H snapon.lab.bufferbloat.net rrul -p all_scaled
to at least get pretty pictures for my quick and dirty testing. The -l NN controls the duration of the test, and -H the host. I found that both snapon.lab.bufferbloat.net and icei.org seem to be open netperf servers (as Dave announced earlier this year); I do not know whether these are supposed to be used by everyone or not (and it would be nice to have a netserver closer by, in Europe, anyway).
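For the four-way comparison mentioned earlier, the same invocation just gets repeated once per shaper configuration. A sketch that only prints the commands to run (the configuration labels and run order are my own convention; the AQM itself still has to be switched by hand in the GUI between runs):

```shell
#!/bin/sh
# Print one netperf-wrapper rrul invocation per shaper configuration to
# compare: no shaping, no link layer adjustment, htb-lla, and stab-lla.
build_rrul_cmds() {
    host=$1
    for cfg in no-shaping no-lla htb-lla stab-lla; do
        echo "./netperf-wrapper -l 60 -H $host rrul -p all_scaled  # AQM set to: $cfg"
    done
}

build_rrul_cmds snapon.lab.bufferbloat.net
```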
I hope this gets you going…
Thank you. The snapon server errored with 'no config file found'.