[Cerowrt-devel] Equivocal results with using 3.10.28-14
Dave Taht
dave.taht at gmail.com
Tue Feb 25 10:54:58 EST 2014
On Tue, Feb 25, 2014 at 5:37 AM, Sebastian Moeller <moeller0 at gmx.de> wrote:
> Hi Rich,
>
>
> On Feb 25, 2014, at 14:09 , Rich Brown <richb.hanover at gmail.com> wrote:
>
>> Thanks everyone for all the good advice. I will summarize my responses to all your notes now, then I'll go away and run more tests.
>>
>> - Yes, I am using netperf 2.6.0 and netperf-wrapper from Toke's github repo.
>>
>> - The "sync rate" is the speed with which the DSL modem sends bits to/from my house. I got this by going into the modem's admin interface and poking around. (It turns out that I have a very clean line, high SNR, low attenuation. I'm much less than a km from the central office.) So actual speed should approach this, except...
>
> I would think of this as the theoretical upper limit ;)
>
>>
>> - Of course, I have to subtract all those overheads that Sebastian described (ATM 48-in-53, which knocks off ~10%; ATM frame overhead, which can add up to 47 bytes of padding to any packet; etc.)
>>
>> - I looked at the target calculation in Dave's Home Gateway best practices. (http://snapon.lab.bufferbloat.net/~d/draft-taht-home-gateway-best-practices-00.html) Am I correct that it sets the target to the transmission time of a 1500-byte packet or 5 msec, whichever is greater?
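On the ATM 48-in-53 point above, the cell tax is easy to sanity-check. A minimal sketch, using Rich's downstream sync rate (the numbers are his; the script is mine):

```shell
#!/bin/sh
# Every 48 bytes of payload ride inside a 53-byte ATM cell, so the usable
# rate is sync_rate * 48/53, before per-packet padding and headers.
SYNC_DOWN_KBPS=7616
EFFECTIVE_KBPS=$(( SYNC_DOWN_KBPS * 48 / 53 ))
echo "$EFFECTIVE_KBPS"   # ~6.9 Mbit/s, i.e. roughly 10% gone already
```

Per-packet padding to a whole number of cells (up to 47 bytes per packet) then takes a further, packet-size-dependent bite.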
>
> Note, the auto target implementation in ceropackages-3.10 sqm-scripts uses the following:
>
> adapt_target_to_slow_link() {
>     CUR_LINK_KBPS=$1
>     CUR_EXTENDED_TARGET_US=
>     MAX_PAKET_DELAY_IN_US_AT_1KBPS=$(( 1000 * 1000 * 1540 * 8 / 1000 ))
>     CUR_EXTENDED_TARGET_US=$(( ${MAX_PAKET_DELAY_IN_US_AT_1KBPS} / ${CUR_LINK_KBPS} ))  # note: this truncates the decimals
>     # do not change anything for fast links
>     [ "$CUR_EXTENDED_TARGET_US" -lt 5000 ] && CUR_EXTENDED_TARGET_US=5000
>     case ${QDISC} in
>         *codel|pie)
>             echo "${CUR_EXTENDED_TARGET_US}"
>             ;;
>     esac
> }
>
> This is modeled after the shell code Dave sent around, and does not exactly match the free version, because I could not make heads or tails of the free version. (Happy to discuss changing this in SQM if anybody has a better idea.)
Really the target calculation doesn't matter much, so long as the target
is larger than the transmission time of one MTU-sized packet. This is
somewhat an artifact of htb, which buffers up an extra packet, and of
the cpu scheduler...
It is becoming clearer (with the recent description of the pie + rate
limiter + MAC compensation work in future cable modems) that there is
much we can do to improve the rate limiters, and that we should probably
intertwine them with the AQM, as the cable folk just did, to get the
best performance.
>>
>> - I was astonished by the calculation of the bandwidth consumed by acks in the reverse direction. In a 7mbps/768kbps setting, I'm going to lose one quarter of the reverse bandwidth? Wow!
Yes. TCP's need to send acks was actually the main driver for having
any upstream bandwidth at all. Back in the 90s, all the cable providers
wanted to supply was enough upstream for a "buy" button. But, because of
this behavior of TCP, a down/up ratio somewhere between 6:1 and 12:1 was
what was needed to make TCP work well. (And even then they tried hard to
make it worse; ack compression is still part of many cable modems'
provisioning.)
IPv6 has much larger acks than IPv4...
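Rich's "one quarter of the reverse bandwidth" figure is easy to reproduce as a back-of-envelope calculation. A sketch, under stated assumptions (a saturated 7 Mbit/s downstream of 1500-byte segments, one ACK per two segments, and each 40-byte ACK padded out to two 53-byte ATM cells = 106 bytes on the wire):

```shell
#!/bin/sh
DOWN_KBPS=7000
PKTS_PER_SEC=$(( DOWN_KBPS * 1000 / (1500 * 8) ))   # ~583 segments/s down
ACKS_PER_SEC=$(( PKTS_PER_SEC / 2 ))                # ~291 ACKs/s back up
ACK_KBPS=$(( ACKS_PER_SEC * 106 * 8 / 1000 ))       # wire bits consumed
echo "$ACK_KBPS"   # ~246 kbps of a 768 kbps uplink
```

That lands in the same ballpark as Rich's one-quarter estimate; the exact fraction depends on the ACK ratio and the per-ACK encapsulation overhead.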
I did a lot of work on various forms of hybrid network compression
back in the day (early 90s), back when we had a single 10Mbit radio
feeding hundreds of subscribers, and a 14.4 modem sending back acks...
and data. It turned out that a substantial percentage of subscribers
actually wanted to upload stuff... and that we couldn't achieve 10Mbit
in the lab with a 14.4 return... and that you can only slice up 10Mbit
so many ways before you run out of bandwidth, and you run out of
bandwidth long before you can turn a profit...
(and while I remember details of the radio setup and all the crazy
stuff we did to get more data through the modems, I can't remember the
name of the company)
At the time I was happy, sorta, in that we'd proven that future ISPs
HAD to provide some level of upstream bandwidth bigger than a buy
button, and the original e2e internet was going to be at least
somewhat preserved...
I didn't grok at the time that NAT was going to be widely deployed...
I mean, at the time, you sold a connection and asked how big a network
the customer wanted; we defaulted to a /28, and we were still handing
out class Cs to everyone who asked.
> Well, so was I (first time I did that calculation), but here is the kicker: with classical non-delayed ACKs this actually doubles, since each data packet gets acknowledged (I assume this puts a lower bound on how asymmetric a line an ISP can sell ;) ). But I figured out that macosx seems to default to 1 ACK for every 4 packets, so only half the traffic. And note that any truly bi-directional traffic should be able to piggyback many of those ACKs onto normal data packets.
I doubt your '4'. Take a capture for a few hundred ms.
My understanding of how macos works is that after a stream has been
sustained for a while, it switches from one ack every 2 packets to
"stretch acks" - one every 6 (or so).
There are some interesting bugs associated with stretch acks, and also
with TSO - in one case we observed a full TSO burst of TCP RSTs being
sent instead of a single RST.
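The "take a capture" suggestion can be sketched as follows. The tcpdump commands, interface name, and host address here are illustrative placeholders, not from the thread:

```shell
#!/bin/sh
# Capture briefly, then compare large data segments received with small
# pure-ACK frames sent back, e.g.:
#
#   tcpdump -i eth0 -c 2000 -w stream.pcap tcp
#   DATA=$(tcpdump -r stream.pcap 'src host 192.0.2.1 and greater 1000' | wc -l)
#   ACKS=$(tcpdump -r stream.pcap 'dst host 192.0.2.1 and less 100' | wc -l)
#
# Then the observed ACK ratio is simply:
ack_ratio() {    # args: data-segments pure-acks -> data packets per ACK
    echo $(( $1 / $2 ))
}
ack_ratio 1200 600   # classic delayed ACKs -> 2
ack_ratio 1200 200   # stretch ACKs -> 6
```

A few hundred ms of a sustained transfer is enough to see whether the ratio is really 4, or 2 ramping up to stretch acks.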
>>
>> - I wasn't entirely clear how to set the target in the SQM GUI. I believe that "target ##msec" is an acceptable format. Is that correct?
>
> In the new version with the dedicated target field, "40ms" will work, as will "40000us" and "40 ms". In the most recent version "auto" will enable auto-adjustment of target (it will also extend interval by new-target minus 5ms) to avoid the situation where target gets larger than interval.
> In the older versions you would put "target 40ms" into the egress advanced option string. Note: I used double quotes in my examples for clarity; the GUI does not want those...
>
>
>>
>> - There's also a discussion of setting the target with "auto", but I'm not sure I understand the syntax.
>
> Just type in auto. You can check with logread and "tc -d qdisc".
>
> Best Regards
> Sebastian
>
>>
>> Now to find some time to go back into the measurement lab! I'll report again when I have more data. Thanks again.
>>
>> Rich
>>
>>
>>
>> On Feb 24, 2014, at 9:56 AM, Aaron Wood <woody77 at gmail.com> wrote:
>>
>>> Do you have the latest (head) version of netperf and netperf-wrapper? some changes were made to both that give better UDP results.
>>>
>>> -Aaron
>>>
>>>
>>> On Mon, Feb 24, 2014 at 3:36 PM, Rich Brown <richb.hanover at gmail.com> wrote:
>>>
>>> CeroWrt 3.10.28-14 is doing a good job of keeping latency low. But... it has two other effects:
>>>
>>> - I don't get the full "7 mbps down, 768 kbps up" as touted by my DSL provider (Fairpoint). In fact, CeroWrt struggles to get above 6.0/0.6 mbps.
>>>
>>> - When I adjust the SQM parameters to get close to those numbers, I get increasing levels of packet loss (5-8%) during a concurrent ping test.
>>>
>>> So my question to the group is whether this behavior makes sense: that we can have low latency while losing ~10% of the link capacity, or that getting close to the link capacity should induce large packet loss...
>>>
>>> Experimental setup:
>>>
>>> I'm using a Comtrend 583-U DSL modem, which has a sync rate of 7616 kbps down, 864 kbps up. Theoretically, I should be able to tell SQM to use numbers a bit lower than those values, with the ATM link layer and the default per-packet overhead settings.
>>>
>>> I have posted the results of my netperf-wrapper trials at http://richb-hanover.com - There are a number of RRUL charts, taken with different link rates configured, and with different link layers.
>>>
>>> I welcome people's thoughts for other tests/adjustments/etc.
>>>
>>> Rich Brown
>>> Hanover, NH USA
>>>
>>> PS I did try the 3.10.28-16, but ran into troubles with wifi and ethernet connectivity. I must have screwed up my local configuration - I was doing it quickly - so I rolled back to 3.10.28-14.
>>> _______________________________________________
>>> Cerowrt-devel mailing list
>>> Cerowrt-devel at lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>>
>>
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html