From: Sebastian Moeller
Date: Sun, 25 Aug 2013 23:50:21 +0200
To: Fred Stratton
Cc: cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] some kernel updates

Hi Fred,

On Aug 25, 2013, at 20:30, Fred Stratton wrote:

> On 25 Aug 2013, at 18:53, Sebastian Moeller wrote:
>
>> Hi Fred,
>>
>> On Aug 25, 2013, at 16:26, Fred Stratton wrote:
>>
>>> Thank you.
>>>
>>> This is an initial response.
>>>
>>> Am using 3.10.2-1 currently, with the standard AQM interface. This does not have the pull-down menu of your interface, which is why I ask if both are active.
>>
>> I have seen in your follow-up mail that you actually used 3.10.9-2. I think that has the first cut of the script modifications that still allows selecting both. Since I have not tested it any other way, I would recommend enabling just one of them at a time. Since the two implementations are somewhat orthogonal and htb_private actually works in 3.10.9, best case you might get the link layer adjustments (LLA) and the overhead applied twice, wasting bandwidth. So please either use the last set of modified files I sent around, or wait for Dave to include them in ceropackages…
>
> I have retained the unmodified script. I shall return to that.

Let me know how you fare (but expect no replies for a week due to holiday). Further down I have put a minimal sketch of the 50% test and of how each of the two link layer mechanisms is specified, exactly once.

>>> On 25 Aug 2013, at 14:59, Sebastian Moeller wrote:
>>>
>>>> Hi Fred,
>>>>
>>>> On Aug 25, 2013, at 12:17, Fred Stratton wrote:
>>>>
>>>>> On 25 Aug 2013, at 10:21, Fred Stratton wrote:
>>>>>
>>>>>> As the person with the most flaky ADSL link, I point out that none of these recent, welcome changes are having any effect here, with an uplink speed of circa 950 kbit/s.
>>>>
>>>> Okay, how flaky is your link? What rate of errors do you see while testing? I am especially interested in CRC errors and the ES, SES and HEC counts, just to get an idea of how flaky the line is...
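As an aside, on the Broadcom-based HG612 you mention below, these counters can usually be read over its telnet interface. From memory, something along the following lines dumps them, but please treat the exact command as an assumption on my part and check your firmware's help output:

xdslcmd info --stats    # per-line FEC, CRC, HEC, ES and SES counters since the last resync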
>>>>>> The reason I mention this is that it is still impossible to watch iPlayer Flash streaming video and download at the same time; the iPlayer stream fails. The point of the exercise was to achieve this.
>>>>>>
>>>>>> The uplink delay is consistently around 650ms, which appears to be too high for effective streaming. In addition, the uplink stream has multiple breaks, presumably outages, if the uplink rate is capped at, say, 700 kbit/s.
>>>>
>>>> Well, watching video is going to stress your downlink; the uplink should not saturate from the ACKs alone, and the concurrent downloads also do not stress your uplink except for their ACKs, so this points to downlink errors as far as I can tell from the data you have given. If the uplink has repeated outages, however, your problems might be unfixable, because these, if long enough, will cause lost ACKs and will probably trigger retransmissions, independent of whether the link layer adjustments work or not. (You could test this by shaping your up- and downlink to <= 50% of the link rates and disabling all link layer adjustments; 50% is larger than the ATM worst case, so that should have you covered, unless your DSL link has an excessive number of tones reserved for forward error correction (FEC).)
>>>
>>> Uptime 100655
>>> downstream 12162 kbit/s
>>> CRC errors 10154
>>> FEC errors 464
>>> HEC errors 758
>>>
>>> upstream 1122 kbit/s
>>> no errors in period.
>>
>> Ah, I think you told me in the past that "Target SNR upped to 12 dB. Line can sustain 10 megabits/s with repeated loss of sync at lower SNR.", so sync at 12162 might be too aggressive, no? But the point is that, as I understand it, iPlayer works fine without competing download traffic? To my eye the error numbers look small enough not to be concerned about. Do you know how long the error correction period is?
>
> The correction period is probably circa 28 hours.

Okay, if these errors are logged over 28 hours they are not the cause of your troubles...

> Have moved to using the HG612. This uses the Broadcom 6368 SoC. Like most of the devices I use, it fell out of a BT van and on to ebay. It is the standard device used for connecting FTTC installations in the UK. With a simple modification, it will work stably with ADSL2+.
>
> The sync rate has gone up considerably, not because I have changed the target SNR from 12 dB, but because I am now using a Broadcom chipset and software blob with a DSLAM which returns BDCM when interrogated.

Ah, good then...

>>>> Could you perform the following test by any chance: start iPlayer and your typical downloads and then have a look at http://gw.home.lan:81 and the tab chain Status -> Realtime Graphs -> Traffic -> Realtime Traffic. If during your test the outbound rate stays well below your shaped limit and you still encounter the stream failure, I would say it is safe to ignore the link layer adjustments as the cause of your issues.
>>>
>>> Am happy reducing rate to fifty per cent, but the uplink appears to have difficulty operating below circa 500 kbit/s. This should not be so. I shall try a fourth time.
>>
>> That sounds weird: if you shape to below 500, does upload stop working altogether or does it just get choppier? Looking at your sync data, 561 kbit/s would satisfy both the ~50% and the above-500 requirements.
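As an aside, the "shape to 50%, no LLA" test I keep referring to needs nothing more than something along these lines on the egress side (ingress analogously, e.g. via the simple.qos/simplest.qos scripts); the interface name and the rate here are only illustrative, and the AQM GUI gets you the same effect by entering the reduced rate and setting link layer adaptation to "none":

# rough sketch: plain shaper at roughly half the uplink sync rate, no stab, no linklayer/overhead options
tc qdisc del dev ge00 root 2>/dev/null
tc qdisc add dev ge00 root handle 1: htb default 10
tc class add dev ge00 parent 1: classid 1:10 htb rate 561kbit ceil 561kbit
tc qdisc add dev ge00 parent 1:10 handle 110: fq_codel

Once that behaves, the ATM accounting should go back in exactly once: either as "stab linklayer atm overhead NN" on the root qdisc (the tc_stab path) or as "linklayer atm overhead NN" on the HTB class (the htb_private path), never both at the same time.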
> I was basing the judgment on Netalyzr data. DT and you now say this is suspect. However, the netperf-wrapper traces are discontinuous. The actual real-time trace looks perfectly normal.
>
> iPlayer is a Flash-based player which is embedded in a web page. The IPv4 user address is parsed to see if it is in the UK. It plays BBC TV programmes. It most likely is badly designed and written. It is the way I watch TV. Like all UK residents, I pay the bloated bureaucracy of the BBC a yearly fee of about 200 euro. If I do not pay, I will be fined. You will be surprised that I am not a fan of the BBC. iPlayer starts and runs fine, but if a download is commenced whilst it is running, so I can watch the propaganda put out as national news, the video will stall and then continue, but most commonly will stop.

So, not being in the UK, this is something I cannot really test. But if you have a single iPlayer instance running and watch something, how much traffic shows up in cerowrt's realtime traffic display? Or, asked differently, how much of your link is eaten up by iPlayer in default mode?

>>>>>> YouTube has no problems.
>>>>>>
>>>>>> I remain unclear whether the use of tc-stab and htb are mutually exclusive options, using the present stock interface.
>>>>
>>>> Well, it depends on the version of cerowrt you use; <3.10.9-1, I believe, lacks a functional HTB link layer adjustment mechanism, so there you should select tc_stab. My most recent modifications to Toke and Dave's AQM package only allow you to select one or the other. In any case, selecting BOTH is not a reasonable thing to do, because best case it will only apply the overhead twice, worst case it would also do the link layer adjustments (LLA) twice.
>>>
>>>> See initial comments.
>>>>
>>>>>> The current ISP connection is IPoA LLC.
>>>>>
>>>>> Correction - Bridged LLC.
>>>>
>>>> Well, I think you should try to figure out your overhead empirically and check the encapsulation. I would recommend you run the following script on your link over night and send me the log file it produces:
>>>>
>>>> #! /bin/bash
>>>> # TODO: use seq or bash to generate a list of the requested sizes (to allow for non-equidistantly spaced sizes)
>>>>
>>>> # Telekom Tuebingen Moltkestrasse 6
>>>> TECH=ADSL2
>>>> # finding a proper target IP is somewhat of an art, just traceroute a remote site
>>>> # and find the nearest host reliably responding to pings showing the smallest variation of ping times
>>>> TARGET=87.186.197.70		# T
>>>> DATESTR=`date +%Y%m%d_%H%M%S`	# to allow multiple sequential records
>>>> LOG=ping_sweep_${TECH}_${DATESTR}.txt
>>>>
>>>> # by default non-root ping will only send one packet per second, so work around that by calling ping independently for each packet
>>>> # empirically figure out the shortest period still giving the standard ping time (to avoid being slow-pathed by our host)
>>>> PINGPERIOD=0.01		# in seconds
>>>> PINGSPERSIZE=10000
>>>>
>>>> # Start, needed to find the per-packet overhead dependent on the ATM encapsulation
>>>> # to reliably show ATM quantization one would like to see at least two steps, so cover a range > 2 ATM cells (so > 96 bytes)
>>>> SWEEPMINSIZE=16		# 64bit systems seem to require 16 bytes of payload to include a timestamp...
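>>>> # note: each ATM cell carries 48 bytes of payload, so sweeping 16..116 bytes crosses
>>>> # at least two 48-byte cell boundaries and the RTT-versus-size staircase shows at least two steps;
>>>> # at 10000 repetitions x 101 sizes x 0.01 s per ping the sweep takes roughly 2.8 hours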
>>>> SWEEPMAXSIZE=116
>>>>
>>>> n_SWEEPS=`expr ${SWEEPMAXSIZE} - ${SWEEPMINSIZE}`
>>>>
>>>> i_sweep=0
>>>> i_size=0
>>>>
>>>> while [ ${i_sweep} -lt ${PINGSPERSIZE} ]
>>>> do
>>>>     (( i_sweep++ ))
>>>>     echo "Current iteration: ${i_sweep}"
>>>>     # now loop from sweepmin to sweepmax
>>>>     i_size=${SWEEPMINSIZE}
>>>>     while [ ${i_size} -le ${SWEEPMAXSIZE} ]
>>>>     do
>>>>         echo "${i_sweep}. repetition of ping size ${i_size}"
>>>>         ping -c 1 -s ${i_size} ${TARGET} >> ${LOG} &
>>>>         (( i_size++ ))
>>>>         # we need a sleep binary that allows non-integer times (GNU sleep is fine, as is sleep on macosx 10.8.4)
>>>>         sleep ${PINGPERIOD}
>>>>     done
>>>> done
>>>>
>>>> #tail -f ${LOG}
>>>>
>>>> echo "Done... ($0)"
>>>>
>>>> Please set TARGET to the closest IP host on the ISP side of your link that gives reliable ping RTTs (using "ping -c 100 -s 16 your.best.host.ip"). Also test whether the RTTs are in the same ballpark when you reduce the ping period to 0.01 (you might have to increase the period until the RTTs are close to the standard 1-ping-per-second case). I can then run this through my matlab code to detect the actual overhead. (I am happy to share the code as well, if you have matlab available; it might even run under octave, but I have not tested that since the last major changes.)
>>>
>>> To follow at some point.
>>
>> Oh, I failed to mention that with the given parameters the script takes almost 3 hours, during which the link should be otherwise idle...
>>
>>>>>> Whatever byte value is used for tc-stab makes no change.
>>>>
>>>> I assume you are talking about the overhead? A missing link layer adjustment will eat between 50% and 10% of your link bandwidth, while missing overhead values are more benign. The only advice I can give is to pick the overhead that actually describes your link. I am willing to help you figure this out.
>>>
>>> The link is bridged LLC. Have been using 18 and 32 for test purposes. I shall move to PPPoA VC-MUX in 4 months.
>>
>> I guess figuring out your exact overhead empirically is going to be fun.
>>
>>>>>> I have applied the ingress modification to simple.qos, keeping the original version, and tested both.
>>>>
>>>> For which cerowrt version? It is only expected to do something for 3.10.9-1 and upwards; before that, the HTB link layer adjustment did NOT work.
>>>
>>> Using 3.10.9-2
>>
>> Yeah, as stated above, I would recommend to use either one or the other, not both. If you took RRUL data you might be able to compare the three conditions. I would estimate that the most interesting part would be the sustained average up- and download rates.
>
> How do you obtain an average, i.e. mean, rate from the RRUL graph?

So far I am eyeballing it, similarly to Dave (except I do not bother to multiply by 4 most of the time). Experience has taught me that this is often good enough, especially as I can easily ignore some periodic events caused by macosx that should not be included in any statistics. But I am thinking about looking into netperf-wrapper to get numerical outputs and, ideally, less choppy upload graphs…
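(To illustrate the eyeballing with made-up numbers: RRUL runs four TCP flows in each direction, so if each upload flow levels off at roughly 150 kbit/s in the steady-state part of the plot, the aggregate upload rate is about 4 x 150 = 600 kbit/s; the same multiply-by-four applies to the download flows.)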
Best
	Sebastian

>>>>>> I have changed the Powerline adaptors I use to ones with known smaller buffers, though this is unlikely to be a rate-limiting step.
>>>>>>
>>>>>> I have changed the 2Wire gateway, known to be heavily buffered, for a bridged Huawei HG612, with a Broadcom 6368 SoC.
>>>>>>
>>>>>> This device has a permanently-on telnet interface, with a simple password, which cannot be changed other than by firmware recompilation…
>>>>>>
>>>>>> Telnet, however, allows txqueuelen to be reduced from 1000 to 0.
>>>>>>
>>>>>> None of these changes affect the problematic uplink delay.
>>>>
>>>> So how did you measure the uplink delay? The RRUL plots you sent me show an increase in ping RTT from around 50ms to 80ms with tc_stab and fq_codel on simplest.qos; how does that reconcile with a 650ms uplink delay, Netalyzr?
>>>
>>> Max Planck and Netalyzr produce the same figure. I use both, but Max Planck gives you circa 3 tries per IP address per 24 hours.
>>
>> Well, both use the same method, which is not too meaningful if you use fq_codel on a shaped link (unless you want to optimize your system for UDP floods :) )
>>
>> [snipp]
>>
>> Best Regards
>> Sebastian
>
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel