From: Sebastian Moeller
Date: Mon, 24 Feb 2014 22:54:55 +0100
To: Rich Brown
Cc: cerowrt-devel
Subject: Re: [Cerowrt-devel] Equivocal results with using 3.10.28-14

Hi Rich,

On Feb 24, 2014, at 15:36, Rich Brown wrote:

> CeroWrt 3.10.28-14 is doing a good job of keeping latency low. But... it has two other effects:
>
> - I don't get the full "7 mbps down, 768 kbps up" as touted by my DSL provider (Fairpoint). In fact, CeroWrt struggles to get above 6.0/0.6 mbps.

Okay, that sounds like a rather large bandwidth sacrifice, but let's work through what your link can actually deliver, to get a better hypothesis of what we can expect to see.

0) The raw line rates as reported by your modem:

    DOWN [kbps]: 7616
    UP [kbps]:   864

1) Let's start with the reported sync rates: the sync rates of the modem (that Rich graciously sent me privately) also cover the bytes used for forward error correction, and that capacity is not available for ATM payload, so it reduces the usable sync rate. It looks like K reports the number of data bytes per DMT frame, while R denotes the number of FEC bytes per DMT frame. From my current understanding K is the usable part of the K+R total, so with K(down) = 239 and R(down) = 16 (and K(up) = 28, R(up) = 0) you seem to lose 100*16/(239+16) = 6.27% to forward error correction on your downlink (and nothing on your uplink). In other words the usable DSL rates are:

    DOWN [kbps]: 7616 * (1-(16/(239+16))) = 7138.13
    UP [kbps]:   864 * (1-(0/(28+0)))     = 864
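In case anybody wants to plug in the K and R values from their own modem, here is a quick python sketch of this step (the function name is just my own shorthand, nothing official):

    def fec_adjusted_kbps(sync_kbps, K, R):
        # K data bytes and R FEC bytes per DMT frame; under the assumption
        # that K is the usable part of K+R, only K/(K+R) of the sync rate
        # actually carries payload
        return sync_kbps * K / float(K + R)

    print(fec_adjusted_kbps(7616, 239, 16))  # downlink: ~7138.13 kbps
    print(fec_adjusted_kbps(864, 28, 0))     # uplink: 864.0 kbps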
2) ATM framing: for the broader group it is worth remembering that the ATM cell train the packets get transferred over uses a 48-byte-payload-in-53-byte-cells encapsulation, so even if ATM encapsulation had no further quirks (it does) you could at best expect 100*48/53 = 90.57% of the sync rate to show up as IP throughput. So in your case:

    downlink: 7616 * (1-(16/(239+16))) * (48/53) = 6464.72
    uplink:   864 * (1-(0/(28+0))) * (48/53)     = 782.49

3) Per-packet fixed overhead: each packet also drags in some overhead for all the headers (some, like the ATM and ethernet headers, sit on top of the MTU; others, like the PPPoE header or potential VLAN tags, reduce your usable MTU). I assume that on your PPPoE link the MTU is 1492 (the PPPoE header is 8 bytes) and you have a total of 40 bytes of overhead, so packets are maximally 1492+40 = 1532 bytes on the wire, and that is the reference size: you lose (1-(1492/1532)) * 100 = 2.61% just to the headers. Since this overhead is fixed, it weighs more heavily on small packets; say, a 64-byte packet only reaches 100*64/(64+40) = 61.54% of the expected rate. This is not specific to DSL (you have fixed headers with ethernet too); it is just that with most DSL encapsulation schemes the overhead mushrooms... Let's assume that netperf tries to use maximally full packets for its TCP streams, so we get:

    downlink: 7616 * (1-(16/(239+16))) * (48/53) * (1492/1532) = 6295.93
    uplink:   864 * (1-(0/(28+0))) * (48/53) * (1492/1532)     = 762.06

4) Per-packet variable overhead: now the dark horse comes in ;): the variable padding caused by each IP packet being sent in a full integer number of ATM cells. Worst case is 47 bytes of padding in the last cell (actually the padding gets spread over the last two cells, but the principle remains the same; did I mention quirky in connection with ATM already? ;)). So for large packets, depending on size, we have an additional 0 to 47 bytes of overhead, roughly 47/1500 = 3% worst case. For your link with 1492-byte MTU packets (required to make room for the 8-byte PPPoE header) we have (1492+40)/48 = 31.92, so we need 32 ATM cells, resulting in 48*32 - (1492+40) = 4 bytes of padding:

    downlink: 7616 * (1-(16/(239+16))) * (48/53) * (1492/1536) = 6279.54
    uplink:   864 * (1-(0/(28+0))) * (48/53) * (1492/1536)     = 760.08
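Since the cell padding depends on the exact packet size, here is a small python sketch folding 2) to 4) into one step, for anybody who wants to try other packet sizes. The 40 bytes of per-packet overhead are just my assumption from above, adjust to your own encapsulation; note that 48/53 * 1492/1536 equals 1492/(32*53), so this agrees with the factored form above:

    import math

    ATM_PAYLOAD, ATM_CELL = 48, 53

    def wire_bytes(payload, overhead=40):
        # payload plus per-packet overhead, padded up to a whole number of
        # ATM cells; every 48 payload bytes cost 53 bytes on the wire
        cells = int(math.ceil((payload + overhead) / float(ATM_PAYLOAD)))
        return cells * ATM_CELL

    def goodput_kbps(line_kbps, payload=1492, overhead=40):
        # share of the (FEC-adjusted) line rate left over as IP payload
        return line_kbps * payload / float(wire_bytes(payload, overhead))

    down = 7616 * 239 / 255.0    # FEC-adjusted downlink rate from 1)
    print(goodput_kbps(down))    # ~6279.54 kbps, matching 4)
    print(wire_bytes(40))        # a bare ACK: 2 cells = 106 wire bytes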
5) Stuff that netperf does not report: netperf will not see any ACK packets, but we can try to estimate those (if anybody sees a flaw in this reasoning please holler). I assume that typically one ACK is sent per two data packets, so let's first estimate the number of MTU-sized packets we could maximally send per second, using the rates from 4):

    downlink: 6279.54 / (1536*8/1000) = 511.03 packets per second
    uplink:   760.08 / (1536*8/1000)  = 61.86 packets per second

Now an ACK packet is rather small (40 bytes without options, 52 with timestamps?), but with overhead and cell padding 40+40 = 80 bytes take two cells worth 96 payload bytes (52+40 = 92: also two cells, just less padding), so the relevant size of our ACKs is 96 bytes. I do not know about your system, but mine sends one ACK per two data packets (I think), so let's fold this into our calculations by pretending each data packet already carries its share of the ACK data, i.e. each packet is 96/2 = 48 bytes longer:

    downlink: 7616 * (1-(16/(239+16))) * (48/53) * (1492/(1536+48)) = 6089.25
    uplink:   864 * (1-(0/(28+0))) * (48/53) * (1492/(1536+48))     = 737.04

6) More stuff that does not show up in netperf-wrapper's TCP averages: all the ICMP and UDP packets of the latency probes are not accounted for, yet consume bandwidth as well. The UDP probes in your experiments all stop pretty quickly, if they start at all, so we can ignore those. The ICMP pings come at 5 per second and cost 56 bytes of default ping payload plus 8 bytes of ICMP header plus 20 bytes of IPv4 header plus 40 bytes of overhead, so 56+8+20+40 = 124 bytes, or 3 ATM cells carrying 3*48 = 144 bytes; 144*8*5/1000 = 5.76 kbps, which we probably can ignore here.

Overall it looks like your actual measured results are pretty close to the maximum we can expect, at least in the download direction; looking at the upstream plots it is really not clear what the cumulative rate actually is, but the order of magnitude looks about right. I really wish we all could switch to ethernet or fiber optics soon, so the calculation of the expected maximum becomes much easier...

Note: if you shape down to below the rates calculated in 1), use the shaped rates as inputs for the further calculations. Also note that activating the ATM link-layer option in SQM will take care of 2), 3) and 4) independently of whether your link actually suffers from ATM in the first place, so activating these options on, say, a fiber link will cause the same apparent bandwidth sacrifice...

Best Regards
    Sebastian

> - When I adjust the SQM parameters to get close to those numbers, I get increasing levels of packet loss (5-8%) during a concurrent ping test.
>
> So my question to the group is whether this behavior makes sense: that we can have low latency while losing ~10% of the link capacity, or that getting close to the link capacity should induce large packet loss...
>
> Experimental setup:
>
> I'm using a Comtrend 583-U DSL modem, that has a sync rate of 7616 kbps down, 864 kbps up. Theoretically, I should be able to tell SQM to use numbers a bit lower than those values, with an ATM plus header overhead with default settings.
>
> I have posted the results of my netperf-wrapper trials at http://richb-hanover.com - There are a number of RRUL charts, taken with different link rates configured, and with different link layers.

From your website: "Note: I don't know why the upload charts show such fragmentary data." This is because netperf-wrapper works with a fixed step size (from netperf-wrapper --help: "-s STEP_SIZE, --step-size=STEP_SIZE: Measurement data point step size."), which works okay for high enough bandwidths; your uplink, however, is too slow for the default (0.1, as far as I remember), so "-s 1.0" or even 2.0 would look reasonable. Unfortunately netperf-wrapper does not seem to allow setting different -s options for up and down...
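So as a first approximation something like the following should help (written from memory, so please double-check against --help; the host name is just a placeholder for whatever netperf server you test against):

    netperf-wrapper -H your.netserver.example -s 1.0 rrul

One step size for both directions is still a compromise, but at one sample per second the upload curves should at least stop looking so fragmentary.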
> I welcome people's thoughts for other tests/adjustments/etc.
>
> Rich Brown
> Hanover, NH USA
>
> PS I did try the 3.10.28-16, but ran into troubles with wifi and ethernet connectivity. I must have screwed up my local configuration - I was doing it quickly - so I rolled back to 3.10.28-14.