[Cerowrt-devel] Equivocal results with using 3.10.28-14
Sebastian Moeller
sebastian.moeller at gmail.com
Mon Feb 24 17:40:23 EST 2014
Hi Rich,
On Feb 24, 2014, at 22:54 , Sebastian Moeller <moeller0 at gmx.de> wrote:
> Hi Rich,
>
>
> On Feb 24, 2014, at 15:36 , Rich Brown <richb.hanover at gmail.com> wrote:
>
>>
>> CeroWrt 3.10.28-14 is doing a good job of keeping latency low. But... it has two other effects:
>>
>> - I don't get the full "7 mbps down, 768 kbps up" as touted by my DSL provider (Fairpoint). In fact, CeroWrt struggles to get above 6.0/0.6 mbps.
>
> Okay, that sounds like a rather large bandwidth sacrifice, but let's work out what we can realistically expect to see on your link, to get a better hypothesis.
>
> 0) the raw line rates as presented by your modem:
> DOWN [kbps]: 7616
> UP [kbps]: 864
>
>
> 1) let's start with the reported sync rates: the sync rates of the modem (that Rich graciously sent me privately) also contain bytes used for forward error correction (these bytes are not available for ATM payload and so reduce the useable sync rate). It looks like K reports the number of data bytes per dmt-frame, while R denotes the number of FEC bytes per dmt-frame. From my current understanding K is the useable part of the K+R total, so with K(down) = 239 and R(down) = 16 (and K(up) = 28 and R(up) = 0) we get:
> From the numbers you sent it looks like 16 of every 239+16 = 255 bytes on the downlink are FEC bytes (and zero on your uplink), so you lose 100*16/(239+16) = 6.27% to forward error correction on your downlink. In other words the useable DSL rate is 7616 * (1-(16/(239+16))) = 7138.13 kbps:
>
> DOWN [kbps]: 7616 * (1-(16/(239+16))) = 7138.13333333
> UP [kbps]: 864 * (1-(0/(28+0))) = 864
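For the numerically inclined, here is a tiny Python sketch of this step (assuming my reading of K and R above is correct):

    # usable sync rate after subtracting the FEC bytes (R) from each K+R dmt-frame
    def fec_adjusted(sync_kbps, K, R):
        return sync_kbps * (1 - R / (K + R))

    print(fec_adjusted(7616, 239, 16))  # downlink: ~7138.13 kbps
    print(fec_adjusted(864, 28, 0))     # uplink:   864.0 kbps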
>
> 2) ATM framing 1: for the greater group I think it is worth remembering that the ATM cell train the packets get transferred over uses 48 payload bytes per 53-byte cell, so even if the ATM encapsulation had no further quirks (it does) you could at best expect 100*48/53 = 90.57% of the sync rate to show up as IP throughput.
> So in your case:
> downlink: 7616* (1-(16/(239+16))) * (48/53) = 6464.7245283
> uplink: 864* (1-(0/(28+0))) * (48/53) = 782.490566038
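Or as a quick Python sketch (nothing new, just the 48/53 cell tax applied to the FEC-adjusted rates from 1):

    # every 53-byte ATM cell carries only 48 bytes of payload
    atm_factor = 48 / 53
    print(7138.13 * atm_factor)  # downlink: ~6464.7 kbps
    print(864.0 * atm_factor)    # uplink:   ~782.5 kbps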
>
> 3) per packet fixed overhead: each packet also drags in some overhead for all the headers (some, like ATM and ethernet headers, sit on top of the MTU; some, like the PPPoE headers or potential VLAN tags, reduce your useable MTU). I assume that on your PPPoE link the MTU is 1492 (the PPPoE headers are 8 bytes) and that you have a total of 40 bytes of per-packet overhead, so packets are at most 1492+40 = 1532 bytes on the wire. Taking that as the reference size, (1 - (1492/1532)) * 100 = 2.61%, i.e. you lose 2.6% just for the overheads. Since this overhead is fixed it hits small packets harder: a 64-byte packet, say, only reaches 100*64/(64+40) = 61.54% of the expected rate. This is not specific to DSL, you have fixed headers with ethernet as well, it is just that with most DSL encapsulation schemes the overhead mushrooms… Let's assume that netperf tries to use maximally full packets for its TCP streams, so we get:
> downlink: 7616 * (1-(16/(239+16))) * (48/53) * (1492/1532) = 6295.93276516
> uplink: 864 * (1-(0/(28+0))) * (48/53) * (1492/1532) = 762.060002956
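Again as a small Python sketch (the 40 bytes of per-packet overhead are my assumption about your encapsulation, not a measured value):

    # fixed per-packet overhead: 1492 byte MTU (PPPoE) plus an assumed 40 bytes of headers
    mtu, overhead = 1492, 40
    goodput_fraction = mtu / (mtu + overhead)   # 1492/1532 ~ 0.974
    print(6464.72 * goodput_fraction)  # downlink: ~6295.9 kbps
    print(782.49 * goodput_fraction)   # uplink:   ~762.1 kbps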
>
> 4) per packet variable overhead: now the dark horse comes in ;), the variable padding caused by each IP packet being sent in a full integer number of ATM cells. Worst case is 47 bytes of padding in the last cell (actually the padding gets spread over the last two cells, but the principle remains the same; did I mention quirky in connection with ATM already? ;) ). So for large packets, depending on size, we have an additional 0 to 47 bytes of overhead, roughly 47/1500 = 3% in the worst case.
> For your link with 1492-byte MTU packets (required to make room for the 8 byte PPPoE header) we have (1492+40)/48 = 31.92, so we need 32 ATM cells, resulting in (48*32) - (1492+40) = 4 bytes of padding:
> downlink: 7616 * (1-(16/(239+16))) * (48/53) * (1492/1536) = 6279.53710692
> uplink: 864 * (1-(0/(28+0))) * (48/53) * (1492/1536) = 760.075471698
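And the same in Python, with the cell rounding made explicit (same assumptions: 1492 byte MTU, 40 bytes of per-packet overhead):

    import math

    # each packet occupies an integer number of 48-byte ATM cell payloads
    mtu, overhead = 1492, 40
    cells = math.ceil((mtu + overhead) / 48)   # 32 cells
    padding = cells * 48 - (mtu + overhead)    # 4 bytes of padding
    print(6464.72 * mtu / (cells * 48))        # downlink: ~6279.5 kbps
    print(782.49 * mtu / (cells * 48))         # uplink:   ~760.1 kbps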
>
> 5) stuff that netperf does not report: netperf will not see any ACK packets, but we can try to estimate those (if anybody sees a flaw in this reasoning please holler). I assume that typically we send one ACK per two data packets, so let's estimate the number of MTU-sized packets we could maximally send per second:
> downlink: 7616 * (1-(16/(239+16))) * (48/53) * (1492/1536) = 6279.53710692; 6279.53710692 / (1536*8/1000) = 511.030037998 packets per second
> uplink: 864 * (1-(0/(28+0))) * (48/53) * (1492/1536) = 760.075471698; 760.075471698 / (1536*8/1000) = 61.8551002358 packets per second
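In Python, mirroring the same rough arithmetic (note this divides the goodput estimate by the on-the-wire packet size, just like above, so treat it as an order-of-magnitude figure):

    # maximum number of full-MTU packets per second in each direction
    wire_kbit_per_packet = 32 * 48 * 8 / 1000   # 32 ATM cells = 1536 bytes per packet
    print(6279.54 / wire_kbit_per_packet)       # downlink: ~511 packets/s
    print(760.08 / wire_kbit_per_packet)        # uplink:   ~62 packets/s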
What I failed to mention/realize in the initial post is that sending the ACKs for the downstream TCP transfers is going to affect the upstream much more than the other way around, since the upstream has less capacity and the rrul test loads both directions at the same time; so the hidden ACK traffic has a stronger impact on the observed upload than on the observed download speed. In numbers:
download-induced upload ACK traffic [kbps]: 511 data packets per second / 2 (assume we only ACK every second packet) * 96 (2 ATM cells) * 8/1000 = 196.224
upload-induced download ACK traffic [kbps]: 62 data packets per second / 2 (assume we only ACK every second packet) * 96 (2 ATM cells) * 8/1000 = 23.808
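Or as a small Python sketch (the one-ACK-per-two-data-packets ratio and the 96 byte, 2-cell ACK size are assumptions):

    # hidden ACK bandwidth in the opposite direction of each TCP transfer
    ack_bytes = 2 * 48                        # one ACK fits into 2 ATM cells = 96 bytes
    print(511 / 2 * ack_bytes * 8 / 1000)     # download-induced upload ACKs: ~196.2 kbps
    print(62 / 2 * ack_bytes * 8 / 1000)      # upload-induced download ACKs: ~23.8 kbps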
So maxing out your download bandwidth with an ACK every other packet eats 100*196/737 = 26.59% of your uplink bandwidth right there. The A in ADSL really is quite extreme in most cases, as if to persuade people not to actually serve any data… (my pet theory is that this is caused by paid-peering practices, in which you pay for the data you transfer to another network; a low uplink means most customers of an ISP cannot send too much, which keeps peering costs in check ;) )
> Now an ACK packet is rather small (40 bytes without timestamps, 52 with?), but with overhead and cell-padding we get 40+40 = 80 bytes, which fills two cells worth 96 bytes of payload (52+40 = 92, so also two cells, just less padding), so the relevant size of our ACKs is 96 bytes. I do not know about your system but mine sends one ACK per two data packets (I think), so let's fold this into our calculations by assuming each data packet already carries the ACK data, i.e. by simply assuming each packet is 48 bytes longer:
> downlink: 7616 * (1-(16/(239+16))) * (48/53) * (1492/(1536+48)) = 6089.24810368
> uplink: 864 * (1-(0/(28+0))) * (48/53) * (1492/(1536+48)) = 737.042881647
Here is a small mix-up: due to the asymmetry of the ACK traffic between the two directions the approximation above does not work out; the effects are more like:
down: 6279.53710692 - 23.808 = 6255.72910692
up: 760.075471698 - 196.224 = 563.851471698
Which obviously is also wrong, as it assumes a number of ACKs matching a full bandwidth's worth of data packets in each direction, but the order of magnitude should be about right: upload-induced ACK traffic has a marginal effect on the download, while the reverse is not true; a saturated download has a considerable hidden uplink bandwidth cost.
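To pull the whole estimate together, here is a rough end-to-end Python sketch (all the assumptions from above apply: my reading of K/R, 40 bytes of per-packet overhead, 1492 MTU, one ACK per two data packets, no piggy-backing):

    import math

    def expected_goodput(sync_kbps, K, R, mtu=1492, overhead=40):
        usable = sync_kbps * (1 - R / (K + R))    # 1) FEC-adjusted sync rate
        atm = usable * 48 / 53                    # 2) ATM cell tax
        cells = math.ceil((mtu + overhead) / 48)  # 3)+4) fixed overhead and cell padding
        return atm * mtu / (cells * 48)           # goodput before ACK traffic

    down = expected_goodput(7616, 239, 16)        # ~6279.5 kbps
    up = expected_goodput(864, 28, 0)             # ~760.1 kbps
    pkt_kbit = 32 * 48 * 8 / 1000                 # one full-MTU packet on the wire
    ack_kbit = 2 * 48 * 8 / 1000                  # one ACK, 2 ATM cells
    print(down - (up / pkt_kbit) / 2 * ack_kbit)  # expected download: ~6256 kbps
    print(up - (down / pkt_kbit) / 2 * ack_kbit)  # expected upload:   ~564 kbps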
I assume that different ACK strategies can have lower costs, and I assume that the netperf flows are independent per direction, so the ACKs cannot simply be piggy-backed onto packets of a return flow. (It would be a great test of my ramblings above if netperf could do both, to estimate the cost of the ACK channel.)
Best Regards
Sebastian
>
>
>
> 6) more stuff that does not show up in netperf-wrapper's TCP averages: all the ICMP and UDP packets for the latency probes are not accounted for, yet consume bandwidth as well. The UDP probes in your experiments all stop pretty quickly, if they start at all, so we can ignore those. The ICMP pings come at 5 per second and cost 56 bytes default ping size plus 8 bytes ICMP header plus 20 bytes IPv4 header plus 40 bytes overhead, so 56+8+20+40 = 124 bytes, resulting in 3 ATM cells or 3*48 = 144 bytes; 144*8*5/1000 = 5.76 kbps, which we probably can ignore here.
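As a quick Python check of the ping arithmetic (the 40 bytes of per-packet overhead are the same assumption as above):

    import math

    # bandwidth cost of the 5 Hz ICMP latency probe
    ping_bytes = 56 + 8 + 20 + 40        # payload + ICMP + IPv4 + assumed overhead = 124
    cells = math.ceil(ping_bytes / 48)   # 3 ATM cells
    print(cells * 48 * 8 * 5 / 1000)     # ~5.76 kbps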
>
> Overall it looks like your actual measured results are pretty close to the maximum we can expect, at least for the download direction; looking at the upstream plots it is really not clear what the cumulative rate actually is, but the order of magnitude looks about right. I really wish we all could switch to ethernet or fiber optics soon, so the calculation of the expected maximum would be much easier…
> Note: if you shape down to below the rates calculated in 1), use the shaped rates as inputs for the further calculations. Also note that activating the ATM link-layer option in SQM will take care of 2), 3) and 4) independent of whether your link actually suffers from ATM in the first place, so activating these options on a fiber link will cause the same apparent bandwidth waste…
>
> Best Regards
> Sebastian
>
>
>
>>
>> - When I adjust the SQM parameters to get close to those numbers, I get increasing levels of packet loss (5-8%) during a concurrent ping test.
>>
>> So my question to the group is whether this behavior makes sense: that we can have low latency while losing ~10% of the link capacity, or that getting close to the link capacity should induce large packet loss...
>>
>> Experimental setup:
>>
>> I'm using a Comtrend 583-U DSL modem, that has a sync rate of 7616 kbps down, 864 kbps up. Theoretically, I should be able to tell SQM to use numbers a bit lower than those values, with an ATM plus header overhead with default settings.
>>
>> I have posted the results of my netperf-wrapper trials at http://richb-hanover.com - There are a number of RRUL charts, taken with different link rates configured, and with different link layers.
>
> From your website:
> Note: I don’t know why the upload charts show such fragmentary data.
>
> This is because netperf-wrapper works with a fixed step size (from netperf-wrapper --help: -s STEP_SIZE, --step-size=STEP_SIZE: Measurement data point step size.), which works okay for high enough bandwidths; your uplink however is too slow, so "-s 1.0" or even 2.0 would look reasonable (the default is, as far as I remember, 0.1). Unfortunately netperf-wrapper does not seem to allow setting different -s options for up and down...
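For example, something along these lines should work (the host name here is just a placeholder for whatever netperf server you test against):

    netperf-wrapper -H netperf-server.example.org -s 1.0 rrul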
>
>
>>
>> I welcome people's thoughts for other tests/adjustments/etc.
>>
>> Rich Brown
>> Hanover, NH USA
>>
>> PS I did try the 3.10.28-16, but ran into trouble with wifi and ethernet connectivity. I must have screwed up my local configuration - I was doing it quickly - so I rolled back to 3.10.28-14.
>
--
Sandra, Okko, Joris, & Sebastian Moeller
Telefon: +49 7071 96 49 783, +49 7071 96 49 784, +49 7071 96 49 785
GSM: +49-1577-190 31 41
GSM: +49-1517-00 70 355
Moltkestrasse 6
72072 Tuebingen
Deutschland