From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <52BF3652.7080806@imap.cc>
Date: Sat, 28 Dec 2013 20:36:34 +0000
From: Fred Stratton
To: Sebastian Moeller, cerowrt-devel@lists.bufferbloat.net
In-Reply-To: <68913B01-6272-4A45-BBFE-06FDE18FA5C7@gmx.de>
Subject: Re: [Cerowrt-devel] Update to "Setting up SQM for CeroWrt 3.10" web page. Comments needed.

On 28/12/13 20:29, Sebastian Moeller wrote:
> Hi Fred,
>
> On Dec 28, 2013, at 21:09 , Fred Stratton wrote:
>
>> On 28/12/13 19:54, Sebastian Moeller wrote:
>>> Hi Fred,
>>>
>>> On Dec 28, 2013, at 15:27 , Fred Stratton wrote:
>>>
>>>> On 28/12/13 13:42, Sebastian Moeller wrote:
>>>>> Hi Fred,
>>>>>
>>>>> On Dec 28, 2013, at 12:09 , Fred Stratton wrote:
>>>>>
>>>>>> The UK consensus fudge factor has always been 85 per cent of the rate achieved, not 95 or 99 per cent.
>>>>> I know that the recommendations have been lower in the past; I think this is partly because, before Jesper Brouer's and Russell Stuart's work to properly account for ATM "quantization", people typically had to deal with a ~10% rate tax for the 5 byte per cell overhead (48 byte payload in 53 byte cells, 90.57% usable rate) plus an additional 5% to stochastically account for the padding of the last cell and the per packet overhead, both of which affect the effective goodput far more for small than for large packets, so the 85% never worked well for all packet sizes. My hypothesis now is that since we can and do properly account for these effects of ATM framing, we can afford to start with a fudge factor of 90% or even 95%. As far as I know, the recommended fudge factors are never explained by more than "this works empirically"...
>>>> The fudge factors are totally empirical. If you are proposing a more formal approach, I shall try a 90 per cent fudge factor, although 'current rate' varies here.
>>> My hypothesis is that we can get away with less fudge as we have a better handle on the actual wire size. Personally, I do start at 95% to figure out the trade-off between bandwidth loss and latency increase.
>> You are now saying something slightly different. You are implying now that you are starting at 95 per cent, and then reducing the nominal download speed until you achieve an unspecified endpoint.
> So I typically start with 95%, run RRUL and look at the ping latency increase under load. I try to go as high with the bandwidth as I can and still keep the latency increase close to 10ms (the default fq_codel target of 5ms will allow RTT increases of 5ms in both directions, so it adds up to 10). The last time I tried this I ended up at 97% of link rate.

I see the rationale. I have tried something similar, but found it very time consuming. I did not arrive at a clear reproducible end point. I hope it works for others.

>>>>>> Devices express 2 values: the sync rate - or 'maximum rate attainable' - and the dynamic value of 'current rate'.
>>>>> The actual data rate is the relevant information for shaping; often DSL modems report the link capacity as "maximum rate attainable" or some such, while the actual bandwidth is limited to a rate below what the line would support by contract (often this bandwidth reduction is performed on the PPPoE link to the BRAS).
>>>>>> As the sync rate is fairly stable for any given installation - ADSL or Fibre - this could be used as a starting value, decremented by the traditional 15 per cent of 'overhead', and the 85 per cent fudge factor applied to that.
>>>>> I would like to propose to use the "current rate" as starting point, as 'maximum rate attainable' >= 'current rate'.
>>>> 'current rate' is still a sync rate, and so is conventionally viewed as 15 per cent above the unmeasurable actual rate.
>>> No no, the current rate really is the current link capacity between modem and DSLAM (or CPE and CTS), only this rate typically is for the raw ATM stream, so we have to subtract all the additional layers until we reach the IP layer...
>> You are saying the same thing as I am.
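A minimal sketch of that subtraction, assuming the modem reports the raw ATM sync rate in kbit/s (the example rate and the 95% starting fudge factor are placeholders, not recommendations):

#! /bin/bash
# usable IP-layer rate from a raw ATM sync rate: every 53 byte cell
# carries only 48 bytes of payload, so ~9.4% of the sync rate is framing.
# per-packet overhead and last-cell padding are left to the link layer
# adaptation (tc stab), which handles them per packet.
SYNC_KBPS=${1:-16402}   # raw downstream sync rate reported by the modem
FUDGE_PCT=95            # starting point for the empirical fudge factor
USABLE_KBPS=$(( SYNC_KBPS * 48 / 53 ))
SHAPED_KBPS=$(( USABLE_KBPS * FUDGE_PCT / 100 ))
echo "sync ${SYNC_KBPS} kbit/s -> post-framing ${USABLE_KBPS} kbit/s -> shaper ${SHAPED_KBPS} kbit/s"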
> I guess the point I want to make is that we are able to measure the unmeasurable actual rate; that is what the link layer adaptation does for us, if configured properly :)
>
> Best Regards
> Sebastian
>
>>>> As you are proposing a new approach, I shall take 90 per cent of 'current rate' as a starting point.
>>> I would love to learn how that works out for you. Because for all my theories about why 85% was used, the proof still is in the (plum-) pudding...
>>>> No one in the UK uses SRA currently. One small ISP used to.
>>> That is sad, because on paper SRA looks like a good feature to have (lower bandwidth sure beats synchronization loss).
>>>> The ISP I currently use has Dynamic Line Management, which changes target SNR constantly.
>>> Now that is much better, as we should neither notice nor care; I assume that this happens on layers below ATM even.
>>>> The DSLAM is made by Infineon.
>>>>>> Fibre - FTTC - connections can suffer quite large download speed fluctuations over the 200 - 500 metre link to the MSAN. This phenomenon is not confined to ADSL links.
>>>>> On the actual xDSL link? As far as I know no telco actually uses SRA (seamless rate adaptation or so), so the current link speed will only get lower, not higher, so I would expect a relatively stable current rate (it might take a while, a few days, to actually slowly degrade to the highest link speed supported under all conditions, but I hope you still get my point).
>>>> I understand the point, but do not think it is the case, from data I have seen, but cannot find now, unfortunately.
>>> I see, maybe my assumption here is wrong; I would love to see data though before changing my hypothesis.
>>>>>> An alternative speed test is something like this:
>>>>>>
>>>>>> http://download.bethere.co.uk/downloadMeter.html
>>>>>>
>>>>>> which, as Be has been bought by Sky, may not exist after the end of April 2014.
>>>>> But, if we recommend to run speed tests, we really need to advise our users to start several concurrent up- and downloads to independent servers to actually measure the bandwidth of our bottleneck link; often a single server connection will not saturate a link (I seem to recall that with TCP it is guaranteed to only reach 75% or so averaged over time, is that correct?).
>>>>> But I think this is not the proper way to set the bandwidth for the shaper, because upstream of our link to the ISP we have no guaranteed bandwidth at all and just can hope the ISP is doing the right thing AQM-wise.
>>>> I quote the Be site as an alternative to a java based approach. I would be very happy to see your suggestion adopted.
>>>>>> • [What is the proper description here?] If you use PPPoE (but not over ADSL/DSL link), PPPoATM, or bridging that isn't Ethernet, you should choose [what?] and set the Per-packet Overhead to [what?]
>>>>>>
>>>>>> For a PPPoA service, the PPPoA link is treated as PPPoE on the second device, here running ceroWRT.
>>>>> This still means you should specify the PPPoA overhead, not PPPoE.
>>>> I shall try the PPPoA overhead.
>>> Great, let me know how that works.
>>>>>> The packet overhead values are written in the dubious man page for tc_stab.
>>>>> The only real flaw in that man page, as far as I know, is the fact that it indicates that the kernel will account for the 18 byte ethernet header automatically, while the kernel does no such thing (which I hope to change).
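For concreteness, a hypothetical invocation showing where those stab parameters end up (the device name ge00, the 2000 kbit/s rate and the 40 byte overhead are placeholders; note the overhead value must include the ethernet header yourself, since the kernel does not add those 18 bytes):

# sketch only: shape egress with HTB while telling the kernel the link
# is ATM-framed with 40 bytes of per-packet overhead
tc qdisc add dev ge00 root handle 1: stab linklayer atm overhead 40 htb default 1
tc class add dev ge00 parent 1: classid 1:1 htb rate 2000kbit
tc qdisc add dev ge00 parent 1:1 fq_codel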
>>>> It mentions link layer types as 'atm', 'ethernet' and 'adsl'. There is no reference anywhere to the last. I do not see its relevance.
>>> If you have a look inside the source code for tc and the kernel, you will notice that atm and adsl are aliases for the same thing. I just think that we should keep naming the thing ATM, since that is the problematic layer in the stack that causes most of the usable link rate misjudgements; adsl just happens to use ATM exclusively.
>> I have reviewed the source. I see what you mean.
>>>>>> Sebastian has a potential alternative method of formal calculation.
>>>>> So, I have no formal calculation method available, but an empirical way of detecting ATM quantization as well as measuring the per packet overhead of an ATM link.
>>>>> The idea is to measure the RTT of ICMP packets of increasing length and then display the distribution of RTTs by ICMP packet length; on an ATM carrier we expect to see a step function with steps 48 bytes apart, while for a non-ATM carrier we expect to rather see a smooth ramp. We then compare the residuals of a linear fit of the data with the residuals of the best step function fit to the data; the fit with the lower residuals "wins". Attached you will find an example of this approach: ping data in red (median of NNN repetitions for each ICMP packet size), linear fit in blue, and best staircase fit in green. You notice that the data starts somewhere in a 48 byte ATM cell. Since the ATM encapsulation overhead is maximally 44 bytes, and we know the IP and ICMP overhead of the ping probe, we can calculate the overhead preceding the IP header, which is what needs to be put in the overhead field in the GUI. (Note where the green line intersects the y-axis at 0 bytes packet size? This is where the IP header starts; the "missing" part of this ATM cell is the overhead.)
>>>> You are curve fitting. This is calculation.
>>> I see, that is certainly a valid way to look at it, just one that had not occurred to me.
>>>>> Believe it or not, this method works reasonably well (I tested successfully with one Bridged, LLC/SNAP RFC-1483/2684 connection (overhead 32 bytes), and several PPPoE, LLC (overhead 40) connections (from ADSL1 @ 3008/512 to ADSL2+ @ 16402/2558)). But it takes a relatively long time to measure the ping train, especially at the higher rates… and it requires ping time stamps with decent resolution (which rules out windows), and my naive data acquisition script creates really large raw data files. I guess I should post the code somewhere so others can test and improve it.
>>>>> Fred, I would be delighted to get a data set from your connection, to test a known different encapsulation.
>>>> I shall try this. If successful, I shall initially pass you the raw data.
>>> Great, but be warned this will be hundreds of megabytes. (For production use the measurement script would need to prune the generated log file down to the essential values… and potentially store the data in binary.)
>>>> I have not used MatLab since the 1980s.
>>> Lucky you, I sort of have to use matlab in my day job and hence am most "fluent" in matlabese, but the code should also work with octave (I tested version 3.6.4) so it should be relatively easy to run the analysis yourself.
That said, I would love to get a copy of the ping sweep :)
>>>>>> TYPICAL OVERHEADS
>>>>>> The following values are typical for different adsl scenarios (based on [1] and [2]):
>>>>>>
>>>>>> LLC based:
>>>>>>     PPPoA - 14 (PPP - 2, ATM - 12)
>>>>>>     PPPoE - 40+ (PPPoE - 8, ATM - 18, ethernet 14, possibly FCS - 4+padding)
>>>>>>     Bridged - 32 (ATM - 18, ethernet 14, possibly FCS - 4+padding)
>>>>>>     IPoA - 16 (ATM - 16)
>>>>>>
>>>>>> VC Mux based:
>>>>>>     PPPoA - 10 (PPP - 2, ATM - 8)
>>>>>>     PPPoE - 32+ (PPPoE - 8, ATM - 10, ethernet 14, possibly FCS - 4+padding)
>>>>>>     Bridged - 24+ (ATM - 10, ethernet 14, possibly FCS - 4+padding)
>>>>>>     IPoA - 8 (ATM - 8)
>>>>>>
>>>>>> For VC Mux based PPPoA, I am currently using an overhead of 18 for the PPPoE setting in ceroWRT.
>>>>> Yeah, we could put this list into the wiki, but how shall a typical user figure out which encapsulation is used? And good luck in figuring out whether the frame check sequence (FCS) is included or not…
>>>>> BTW, 18: I predict that if PPPoE is only used between cerowrt and the "modem" or gateway, your effective overhead should be 10 bytes; I would love it if you could run the following against your link at night (also attached):
>>>>>
>>>>> #! /bin/bash
>>>>> # TODO use seq or bash to generate a list of the requested sizes (to allow for non-equidistantly spaced sizes)
>>>>>
>>>>> TECH=ADSL2    # just to give some meaning to the ping trace file name
>>>>> # finding a proper target IP is somewhat of an art: just traceroute a remote site
>>>>> # and find the nearest host reliably responding to pings showing the smallest variation of ping times
>>>>> TARGET=${1}    # the IP against which to run the ICMP pings
>>>>> DATESTR=`date +%Y%m%d_%H%M%S`    # to allow multiple sequential records
>>>>> LOG=ping_sweep_${TECH}_${DATESTR}.txt
>>>>>
>>>>> # by default non-root ping will only send one packet per second, so work around that by calling ping independently for each packet
>>>>> # empirically figure out the shortest period still giving the standard ping time (to avoid being slow-pathed by our target)
>>>>> PINGPERIOD=0.01    # in seconds
>>>>> PINGSPERSIZE=10000
>>>>>
>>>>> # Start, needed to find the per packet overhead dependent on the ATM encapsulation
>>>>> # to reliably show ATM quantization one would like to see at least two steps, so cover a range > 2 ATM cells (so > 96 bytes)
>>>>> SWEEPMINSIZE=16    # 64bit systems seem to require 16 bytes of payload to include a timestamp...
>>>>> SWEEPMAXSIZE=116
>>>>>
>>>>> n_SWEEPS=`expr ${SWEEPMAXSIZE} - ${SWEEPMINSIZE}`
>>>>>
>>>>> i_sweep=0
>>>>> i_size=0
>>>>>
>>>>> echo "Running ICMP RTT measurement against: ${TARGET}"
>>>>> while [ ${i_sweep} -lt ${PINGSPERSIZE} ]
>>>>> do
>>>>>     (( i_sweep++ ))
>>>>>     echo "Current iteration: ${i_sweep}"
>>>>>     # now loop from sweepmin to sweepmax
>>>>>     i_size=${SWEEPMINSIZE}
>>>>>     while [ ${i_size} -le ${SWEEPMAXSIZE} ]
>>>>>     do
>>>>>         echo "${i_sweep}. repetition of ping size ${i_size}"
>>>>>         ping -c 1 -s ${i_size} ${TARGET} >> ${LOG} &
>>>>>         (( i_size++ ))
>>>>>         # we need a sleep binary that allows non-integer times (GNU sleep is fine, as is sleep of macosx 10.8.4)
>>>>>         sleep ${PINGPERIOD}
>>>>>     done
>>>>> done
>>>>> echo "Done... ($0)"
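A possible pruning step for those huge logs, as a sketch only: it assumes Linux iputils ping output of the form "24 bytes from ...: icmp_seq=1 ttl=64 time=12.3 ms" and keeps just payload-size/RTT pairs, which is all the staircase fit needs.

# reduce the raw ping log to "payload_bytes rtt_ms" pairs; the reported
# byte count includes the 8 byte ICMP header, hence the "- 8"
awk '/bytes from/ { n = split($0, a, "time="); if (n > 1) { split(a[2], b, " "); print $1 - 8, b[1] } }' ping_sweep_*.txt > rtt_by_size.txt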
>>>>> This will try to run 10000 repetitions for ICMP packet sizes from 16 to 116 bytes, running (10000 * 101 * 0.01 / 60 =) 168 minutes, but you should be able to stop it with ctrl-c if you are not patient enough; with your link I would estimate that 3000 should be plenty, but if you could run it over night that would be great, and then ~3 hours should not matter much.
>>>>> And then run the following attached code in octave or matlab. Invoke with "tc_stab_parameter_guide_03('path/to/the/data/file/you/created/name_of_said_file')". The parser will run on the first invocation and is really, really slow, but further invocations should be faster. If issues arise, let me know, I am happy to help.
>>>>>> Were I to use a single directly connected gateway, I would input a suitable value for PPPoA in that openWRT firmware.
>>>>> I think you should do that right now.
>>>> The firmware has not yet been released.
>>>>>> In theory, I might need to use a negative value, but the current kernel does not support that.
>>>>> If you use tc_stab, negative overheads are fully supported; only htb_private has overhead defined as an unsigned integer and hence does not allow negative values.
>>>> Jesper Brouer posted about this. I thought he was referring to tc_stab.
>>> I recall having a discussion with Jesper about this topic, where he agreed that tc_stab was not affected, only htb_private.
>> Reading what was said on 23rd August, you corrected his error in interpretation.
>>
>>>>>> I have used many different arbitrary values for overhead. All appear to have little effect.
>>>>> So the issue here is that only at small packet sizes does the overhead and last cell padding eat a disproportionate amount of your bandwidth (64 byte packet plus 44 byte overhead plus 47 byte worst case cell padding: 100 * (44+47+64)/64 = 242% effective packet size relative to what the shaper estimated); at typical packet sizes the max error (44 bytes missing overhead and potentially misjudged cell padding of 47 bytes) adds up to a theoretical 100 * (44+47+1500)/1500 = 106% effective packet size relative to what the shaper estimated. It is obvious that at 1500 byte packets the whole ATM issue can be easily dismissed by just reducing the link rate by ~10% for the 48-in-53 framing and an additional ~6% for overhead and cell padding. But once you mix smaller packets into your traffic, for say VoIP, the effective wire size misjudgment will kill your ability to control the queueing. Note that the common wisdom of "shape down to 85%" might stem from the ~15% ATM "tax" on 1500 byte traffic size...
>>>>>> As I understand it, the current recommendation is to use tc_stab in preference to htb_private. I do not know the basis for this value judgement.
>>>>> In short: tc_stab allows negative overheads, and tc_stab works with HTB, TBF, and HFSC, while htb_private only works with HTB. Currently htb_private has two advantages: it will estimate the per packet overhead correctly if GSO (generic segmentation offload) is enabled, and it will produce exact ATM link layer estimates for all possible packet sizes. In practice almost everyone uses an MTU of 1500 or less for their internet access, making both htb_private advantages effectively moot. (Plus, if no one beats me to it, I intend to address both theoretical shortcomings of tc_stab next year.)
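The cell-padding arithmetic above is easy to check with a few lines of shell. A sketch, with the 64 byte packet and 44 byte overhead mirroring the worst case just described (this one also counts the 5 byte cell headers, so its percentage comes out slightly above the 242% figure, which covers overhead and padding only):

#! /bin/bash
# true ATM wire size of an IP packet: payload plus per-packet overhead,
# rounded up to whole 48 byte cell payloads, each carried in a 53 byte cell
PKT=${1:-64}         # IP packet size in bytes
OVERHEAD=${2:-44}    # per-packet encapsulation overhead (worst case here)
CELLS=$(( (PKT + OVERHEAD + 47) / 48 ))    # ceiling division
WIRE=$(( CELLS * 53 ))
echo "${PKT} byte packet -> ${CELLS} cells = ${WIRE} bytes on the wire"
echo "that is $(( 100 * WIRE / PKT ))% of the bare packet size a naive shaper would assume"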
>>>>> Best Regards
>>>>> Sebastian
>>>>>
>>>>>> On 28/12/13 10:01, Sebastian Moeller wrote:
>>>>>>> Hi Rich,
>>>>>>>
>>>>>>> great! A few comments:
>>>>>>>
>>>>>>> Basic Settings:
>>>>>>> [Is 95% the right fudge factor?] I think that ideally, if we can precisely measure the usable link rate, even 99% of that should work out well, to keep the queue in our device. I assume that due to the difficulties in measuring and accounting for the link properties, such as link layer and overhead, people typically rely on setting the shaped rate a bit lower than required, to stochastically/empirically account for the link properties. I predict that if we get a correct description of the link properties to the shaper, we should be fine with 95% shaping. Note though, it is not trivial on an ADSL link to get the actually usable bit rate from the modem, so 95% of what can be deduced from the modem or the ISP's invoice might be a decent proxy…
>>>>>>>
>>>>>>> [Do we have a recommendation for an easy way to tell if it's working? Perhaps a link to a new Quick Test for Bufferbloat page.] The linked page looks like a decent probe for bufferbloat.
>>>>>>>
>>>>>>>> Basic Settings - the details...
>>>>>>>>
>>>>>>>> CeroWrt is designed to manage the queues of packets waiting to be sent across the slowest (bottleneck) link, which is usually your connection to the Internet.
>>>>>>> I think we can only actually control the first link to the ISP, which often happens to be the bottleneck. At a typical DSLAM (xDSL head end station) the cumulative sold bandwidth to the customers is larger than the backbone connection (which is called over-subscription and is almost guaranteed to be the case in every DSLAM); this typically is not a problem, as typically people do not use their internet that much. My point being: we can not really control congestion in the DSLAM's uplink (as we have no idea what the reserved rate per customer is in the worst case, if there is any).
>>>>>>>
>>>>>>>> CeroWrt can automatically adapt to network conditions to improve the delay/latency of data without any settings.
>>>>>>> Does this describe the default fq_codels on each interface (except ifb?)?
>>>>>>>
>>>>>>>> However, it can do a better job if it knows more about the actual link speeds available. You can adjust this setting by entering link speeds that are a few percent below the actual speeds.
>>>>>>>>
>>>>>>>> Note: it can be difficult to get an accurate measurement of the link speeds. The speed advertised by your provider is a starting point, but your experience often won't meet their published specs. You can also use a speed test program or web site like http://speedtest.net to estimate actual operating speeds.
>>>>>>> While this approach is commonly recommended on the internet, I do not believe that it is that useful. Between a user and the speedtest site there are a number of potential congestion points that can affect (reduce) the throughput, like bad peering.
>>>>>>> Now that said, the speedtests will report something <= the actual link speed and hence be conservative (interactivity stays great at 90% of link rate as well as at 80%, so underestimating the bandwidth within reason does not affect the latency gains from traffic shaping, it just sacrifices a bit more bandwidth; and given the difficulty of actually measuring the attainable bandwidth, this might have been effectively a decent recommendation, even though the theory of it seems flawed).
>>>>>>>
>>>>>>>> Be sure to make your measurement when the network is quiet, and others in your home aren't generating traffic.
>>>>>>> This is great advice.
>>>>>>>
>>>>>>> I would love to comment further, but after reloading,
>>>>>>>
>>>>>>> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
>>>>>>>
>>>>>>> just returns a blank page, and I can not get back to the page as of yesterday evening… I will have a look later to see whether the page resurfaces…
>>>>>>>
>>>>>>> Best
>>>>>>> Sebastian
>>>>>>>
>>>>>>> On Dec 27, 2013, at 23:09 , Rich Brown wrote:
>>>>>>>>> You are a very good writer and I am on a tablet.
>>>>>>>> Thanks!
>>>>>>>>> Ill take a pass at the wiki tomorrow.
>>>>>>>>>
>>>>>>>>> The shaper does up and down was my first thought...
>>>>>>>> Everyone else… Don't let Dave hog all the fun! Read the tech note and give feedback!
>>>>>>>>
>>>>>>>> Rich
>>>>>>>>
>>>>>>>>> On Dec 27, 2013 10:48 AM, "Rich Brown" wrote:
>>>>>>>>> I updated the page to reflect the 3.10.24-8 build, and its new GUI pages.
>>>>>>>>>
>>>>>>>>> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
>>>>>>>>>
>>>>>>>>> There are still lots of open questions. Comments, please.
>>>>>>>>>
>>>>>>>>> Rich
_______________________________________________
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel