From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <52BF3009.5080800@imap.cc>
Date: Sat, 28 Dec 2013 20:09:45 +0000
From: Fred Stratton
To: Sebastian Moeller, cerowrt-devel@lists.bufferbloat.net
References: <75A7B6AE-8ECA-4FAC-B4D3-08FD14078DA2@gmail.com> <52BEB166.903@imap.cc> <48F50AF1-018A-400F-BBA4-D6F6B95B8AD2@gmx.de> <52BEDFC8.6000301@imap.cc> <3E4BB4E0-66CC-4026-AF8C-41D3158BCD3B@gmx.de>
In-Reply-To: <3E4BB4E0-66CC-4026-AF8C-41D3158BCD3B@gmx.de>
Subject: Re: [Cerowrt-devel] Update to "Setting up SQM for CeroWrt 3.10" web page. Comments needed.

On 28/12/13 19:54, Sebastian Moeller wrote:
> Hi Fred,
>
> On Dec 28, 2013, at 15:27 , Fred Stratton wrote:
>
>> On 28/12/13 13:42, Sebastian Moeller wrote:
>>> Hi Fred,
>>>
>>> On Dec 28, 2013, at 12:09 , Fred Stratton wrote:
>>>
>>>> The UK consensus fudge factor has always been 85 per cent of the rate achieved, not 95 or 99 per cent.
>>>>
>>> I know that the recommendations have been lower in the past; I think this is partly because before Jesper Brouer's and Russell Stuart's work to properly account for ATM "quantization", people typically had to deal with a ~10% rate tax for the 5 byte per cell overhead (48 byte payload in 53 byte cells, 90.57% useable rate) plus an additional 5% to stochastically account for the padding of the last cell and the per packet overhead, both of which affect the effective goodput far more for small than for large packets, so the 85% never worked well for all packet sizes. My hypothesis now is that since we can and do properly account for these effects of ATM framing, we can afford to start with a fudge factor of 90% or even 95%. As far as I know the recommended fudge factors are never explained by more than "this works empirically"...
>> The fudge factors are totally empirical. If you are proposing a more formal approach, I shall try a 90 per cent fudge factor, although 'current rate' varies here.
> My hypothesis is that we can get away with less fudge as we have a better handle on the actual wire size. Personally, I do start at 95% to figure out the trade-off between bandwidth loss and latency increase.

You are now saying something slightly different. You are implying now that you are starting at 95 per cent, and then reducing the nominal download speed until you achieve an unspecified endpoint.

>
>>>> Devices express 2 values: the sync rate - or 'maximum rate attainable' - and the dynamic value of 'current rate'.
>>>>
>>> The actual data rate is the relevant information for shaping; often DSL modems report the link capacity as "maximum rate attainable" or some such, while the actual bandwidth is limited to a rate below what the line would support by contract (often this bandwidth reduction is performed on the PPPoE link to the BRAS).
>>>
>>>> As the sync rate is fairly stable for any given installation - ADSL or Fibre - this could be used as a starting value, decremented by the traditional 15 per cent of 'overhead', and the 85 per cent fudge factor applied to that.
>>>>
>>> I would like to propose to use the "current rate" as starting point, as 'maximum rate attainable' >= 'current rate'.
>> 'current rate' is still a sync rate, and so is conventionally viewed as 15 per cent above the unmeasurable actual rate.
> No no, the current rate really is the current link capacity between modem and DSLAM (or CPE and CMTS), only this rate typically is for the raw ATM stream, so we have to subtract all the additional layers until we reach the IP layer...

You are saying the same thing as I am.

>
>> As you are proposing a new approach, I shall take 90 per cent of 'current rate' as a starting point.
> I would love to learn how that works out for you. Because for all my theories about why 85% was used, the proof still is in the (plum-) pudding...
>
>> No one in the UK uses SRA currently. One small ISP used to.
> That is sad, because on paper SRA looks like a good feature to have (lower bandwidth sure beats synchronization loss).
>
>> The ISP I currently use has Dynamic Line Management, which changes target SNR constantly.
> Now that is much better, as we should neither notice nor care; I assume that this happens on layers below ATM even.
>
>> The DSLAM is made by Infineon.
>>
>>>> Fibre - FTTC - connections can suffer quite large download speed fluctuations over the 200 - 500 metre link to the MSAN.
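To make the framing arithmetic above concrete, here is a minimal sketch; the 44 byte overhead and the packet sizes are illustrative assumptions, not measurements from any particular link:

#! /bin/bash
# Sketch of the ATM cell arithmetic discussed above (illustrative only).
# Assumed: 44 bytes of per-packet overhead ahead of the IP header, ATM
# cells carrying 48 payload bytes in 53 bytes on the wire.
OVERHEAD=44
for IP_LEN in 64 576 1500
do
    # cells needed = ceil((IP_LEN + OVERHEAD) / 48); the last cell is padded
    CELLS=$(( (IP_LEN + OVERHEAD + 47) / 48 ))
    WIRE=$(( CELLS * 53 ))
    awk -v ip=${IP_LEN} -v wire=${WIRE} 'BEGIN {
        printf "IP %4d bytes -> %4d bytes on the wire (%.0f%% of what a link-layer-unaware shaper assumes)\n", ip, wire, 100 * wire / ip
    }'
done

For a 64 byte packet the wire footprint comes out at roughly two and a half times what a link-layer-unaware shaper would assume, which is why no single flat fudge factor fits all packet sizes.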
>>>> This phenomenon is not confined to ADSL links.
>>>>
>>> On the actual xDSL link? As far as I know no telco actually uses SRA (seamless rate adaptation or so), so the current link speed will only get lower, not higher, so I would expect a relatively stable current rate (it might take a while, a few days, to actually slowly degrade to the highest link speed supported under all conditions, but I hope you still get my point)
>> I understand the point, but do not think it is the case, from data I have seen, but cannot find now, unfortunately.
> I see, maybe my assumption here is wrong, I would love to see data though before changing my hypothesis.
>
>>>> An alternative speed test is something like this
>>>>
>>>> http://download.bethere.co.uk/downloadMeter.html
>>>>
>>>> which, as Be has been bought by Sky, may not exist after the end of April 2014.
>>>>
>>> But, if we recommend to run speed tests we really need to advise our users to start several concurrent up- and downloads to independent servers to actually measure the bandwidth of our bottleneck link; often a single server connection will not saturate a link (I seem to recall that with TCP it is guaranteed to only reach 75% or so averaged over time, is that correct?).
>>> But I think this is not the proper way to set the bandwidth for the shaper, because upstream of our link to the ISP we have no guaranteed bandwidth at all and can just hope the ISP is doing the right thing AQM-wise.
>>>
>> I quote the Be site as an alternative to a java based approach. I would be very happy to see your suggestion adopted.
>>>
>>>> • [What is the proper description here?] If you use PPPoE (but not over ADSL/DSL link), PPPoATM, or bridging that isn't Ethernet, you should choose [what?] and set the Per-packet Overhead to [what?]
>>>>
>>>> For a PPPoA service, the PPPoA link is treated as PPPoE on the second device, here running ceroWRT.
>>>>
>>> This still means you should specify the PPPoA overhead, not PPPoE.
>> I shall try the PPPoA overhead.
> Great, let me know how that works.
>
>>>> The packet overhead values are written in the dubious man page for tc_stab.
>>>>
>>> The only real flaw in that man page, as far as I know, is the fact that it indicates that the kernel will account for the 18 byte ethernet header automatically, while the kernel does no such thing (which I hope to change).
>> It mentions link layer types as 'atm', 'ethernet' and 'adsl'. There is no reference anywhere to the last. I do not see its relevance.
> If you have a look inside the source code for tc and the kernel, you will notice that atm and adsl are aliases for the same thing. I just think that we should keep naming the thing ATM, since that is the problematic layer in the stack that causes most of the useable link rate misjudgements; adsl just happens to use ATM exclusively.

I have reviewed the source. I see what you mean.

>
>>>> Sebastian has a potential alternative method of formal calculation.
>>>>
>>> So, I have no formal calculation method available, but an empirical way of detecting ATM quantization as well as measuring the per packet overhead of an ATM link.
>>> The idea is to measure the RTT of ICMP packets of increasing length and then display the distribution of RTTs by ICMP packet length; on an ATM carrier we expect to see a step function with steps 48 bytes apart. For a non-ATM carrier we expect to rather see a smooth ramp.
>>> The residuals of a linear fit of the data are then compared with the residuals of the best step-function fit; the fit with the lower residuals "wins". Attached you will find an example of this approach: ping data in red (median of NNN repetitions for each ICMP packet size), linear fit in blue, and best staircase fit in green. You notice that the data starts somewhere in a 48 byte ATM cell. Since the ATM encapsulation overhead is maximally 44 bytes and we know the IP and ICMP overhead of the ping probe, we can calculate the overhead preceding the IP header, which is what needs to be put in the overhead field in the GUI. (Note where the green line intersects the y-axis at 0 bytes packet size? This is where the IP header starts; the "missing" part of this ATM cell is the overhead.)
>>>
>> You are curve fitting. This is calculation.
> I see, that is certainly a valid way to look at it, just one that had not occurred to me.
>
>>> Believe it or not, this method works reasonably well (I tested successfully with one Bridged, LLC/SNAP RFC-1483/2684 connection (overhead 32 bytes), and several PPPoE, LLC (overhead 40) connections (from ADSL1 @ 3008/512 to ADSL2+ @ 16402/2558)). But it takes a relatively long time to measure the ping train, especially at the higher rates… and it requires ping time stamps with decent resolution (which rules out windows), and my naive data acquisition scripts create really large raw data files. I guess I should post the code somewhere so others can test and improve it.
>>> Fred, I would be delighted to get a data set from your connection, to test a known different encapsulation.
>>>
>> I shall try this. If successful, I shall initially pass you the raw data.
> Great, but be warned this will be hundreds of megabytes. (For production use the measurement script would need to prune the generated log file down to the essential values… and potentially store the data in binary)
>
>> I have not used MatLab since the 1980s.
> Lucky you, I sort of have to use matlab in my day job and hence am most "fluent" in matlabese, but the code should also work with octave (I tested version 3.6.4) so it should be relatively easy to run the analysis yourself. That said, I would love to get a copy of the ping sweep :)
>
>>>> TYPICAL OVERHEADS
>>>> The following values are typical for different adsl scenarios (based on [1] and [2]):
>>>>
>>>> LLC based:
>>>> PPPoA - 14 (PPP - 2, ATM - 12)
>>>> PPPoE - 40+ (PPPoE - 8, ATM - 18, ethernet 14, possibly FCS - 4+padding)
>>>> Bridged - 32 (ATM - 18, ethernet 14, possibly FCS - 4+padding)
>>>> IPoA - 16 (ATM - 16)
>>>>
>>>> VC Mux based:
>>>> PPPoA - 10 (PPP - 2, ATM - 8)
>>>> PPPoE - 32+ (PPPoE - 8, ATM - 10, ethernet 14, possibly FCS - 4+padding)
>>>> Bridged - 24+ (ATM - 10, ethernet 14, possibly FCS - 4+padding)
>>>> IPoA - 8 (ATM - 8)
>>>>
>>>> For VC Mux based PPPoA, I am currently using an overhead of 18 for the PPPoE setting in ceroWRT.
>>>>
>>> Yeah, we could put this list into the wiki, but how shall a typical user figure out which encapsulation is used? And good luck in figuring out whether the frame check sequence (FCS) is included or not…
>>> BTW, regarding the 18: I predict that if PPPoE is only used between cerowrt and the "modem" or gateway, your effective overhead should be 10 bytes; I would love it if you could run the following against your link at night (also attached):
>>>
>>> #! /bin/bash
>>> # TODO use seq or bash to generate a list of the requested sizes (to allow for non-equidistantly spaced sizes)
>>>
>>> TECH=ADSL2 # just to give some meaning to the ping trace file name
>>> # finding a proper target IP is somewhat of an art, just traceroute a remote site
>>> # and find the nearest host reliably responding to pings showing the smallest variation of ping times
>>> TARGET=${1} # the IP against which to run the ICMP pings
>>> DATESTR=`date +%Y%m%d_%H%M%S` # to allow multiple sequential records
>>> LOG=ping_sweep_${TECH}_${DATESTR}.txt
>>>
>>> # by default non-root ping will only send one packet per second, so work around that by calling ping independently for each packet
>>> # empirically figure out the shortest period still giving the standard ping time (to avoid being slow-pathed by our target)
>>> PINGPERIOD=0.01 # in seconds
>>> PINGSPERSIZE=10000
>>>
>>> # Start, needed to find the per packet overhead dependent on the ATM encapsulation
>>> # to reliably show ATM quantization one would like to see at least two steps, so cover a range > 2 ATM cells (so > 96 bytes)
>>> SWEEPMINSIZE=16 # 64bit systems seem to require 16 bytes of payload to include a timestamp...
>>> SWEEPMAXSIZE=116
>>>
>>> n_SWEEPS=`expr ${SWEEPMAXSIZE} - ${SWEEPMINSIZE}`
>>>
>>> i_sweep=0
>>> i_size=0
>>>
>>> echo "Running ICMP RTT measurement against: ${TARGET}"
>>> while [ ${i_sweep} -lt ${PINGSPERSIZE} ]
>>> do
>>>     (( i_sweep++ ))
>>>     echo "Current iteration: ${i_sweep}"
>>>     # now loop from sweepmin to sweepmax
>>>     i_size=${SWEEPMINSIZE}
>>>     while [ ${i_size} -le ${SWEEPMAXSIZE} ]
>>>     do
>>>         echo "${i_sweep}. repetition of ping size ${i_size}"
>>>         ping -c 1 -s ${i_size} ${TARGET} >> ${LOG} &
>>>         (( i_size++ ))
>>>         # we need a sleep binary that allows non-integer times (GNU sleep is fine, as is sleep of macosx 10.8.4)
>>>         sleep ${PINGPERIOD}
>>>     done
>>> done
>>> echo "Done... ($0)"
>>>
>>> This will try to run 10000 repetitions for ICMP packet sizes from 16 to 116 bytes, running (10000 * 101 * 0.01 / 60 =) 168 minutes, but you should be able to stop it with ctrl-c if you are not patient enough; with your link I would estimate that 3000 should be plenty, but if you could run it over night that would be great, and then ~3 hours should not matter much.
>>> And then run the following attached code in octave or matlab. Invoke with "tc_stab_parameter_guide_03('path/to/the/data/file/you/created/name_of_said_file')". The parser will run on the first invocation and is really, really slow, but further invocations should be faster. If issues arise, let me know, I am happy to help.
>>>
>>>> Were I to use a single directly connected gateway, I would input a suitable value for PPPoA in that openWRT firmware.
>>>>
>>> I think you should do that right now.
>> The firmware has not yet been released.
>>>> In theory, I might need to use a negative value, but the current kernel does not support that.
>>>>
>>> If you use tc_stab, negative overheads are fully supported; only htb_private has overhead defined as an unsigned integer and hence does not allow negative values.
>> Jesper Brouer posted about this. I thought he was referring to tc_stab.
> I recall having a discussion with Jesper about this topic, where he agreed that tc_stab was not affected, only htb_private.

Reading what was said on 23rd August, you corrected his error in interpretation.
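For reference, the tc_stab mechanism being discussed is driven from the tc command line; a sketch along the following lines shows where the linklayer and overhead values would go. The interface name, the rate and the 10 byte VC-Mux PPPoA overhead are assumptions for illustration, not settings verified on this link:

#! /bin/bash
# Sketch only: shape egress with HTB + fq_codel, letting the kernel's
# stab size table account for ATM cell framing and a 10 byte per-packet
# overhead (VC-Mux PPPoA, per the table earlier in the thread).
IFACE=ge00              # assumed WAN interface name
UPLINK_KBIT=2400        # assumed, roughly 95% of a 2558 kbit/s sync rate
tc qdisc add dev ${IFACE} root handle 1: \
        stab linklayer atm overhead 10 \
        htb default 10
tc class add dev ${IFACE} parent 1: classid 1:10 \
        htb rate ${UPLINK_KBIT}kbit ceil ${UPLINK_KBIT}kbit
tc qdisc add dev ${IFACE} parent 1:10 fq_codel

With tc_stab the overhead argument also accepts negative values, which is the point made about htb_private above.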
>>>> I have used many different arbitrary values for overhead. All appear to have little effect.
>>>>
>>> So the issue here is that only at small packet sizes does the overhead and last cell padding eat a disproportionate amount of your bandwidth (64 byte packet plus 44 byte overhead plus 47 byte worst case cell padding: 100 * (44+47+64)/64 = 242% effective packet size compared to what the shaper estimated); at typical packet sizes the max error (44 bytes missing overhead and potentially misjudged cell padding of 47 bytes) adds up to a theoretical 100 * (44+47+1500)/1500 = 106% effective packet size compared to what the shaper estimated. It is obvious that at 1500 byte packets the whole ATM issue can be easily dismissed by just reducing the link rate by ~10% for the 48 in 53 framing and an additional ~6% for overhead and cell padding. But once you mix smaller packets into your traffic, say for VoIP, the effective wire size misjudgment will kill your ability to control the queueing. Note that the common wisdom of shaping down to 85% might stem from the ~15% ATM "tax" on 1500 byte traffic...
>>>
>>>> As I understand it, the current recommendation is to use tc_stab in preference to htb_private. I do not know the basis for this value judgement.
>>>>
>>> In short: tc_stab allows negative overheads, and tc_stab works with HTB, TBF and HFSC, while htb_private only works with HTB. Currently htb_private has two advantages: it will estimate the per packet overhead correctly if GSO (generic segmentation offload) is enabled, and it will produce exact ATM link layer estimates for all possible packet sizes. In practice almost everyone uses an MTU of 1500 or less for their internet access, making both htb_private advantages effectively moot. (Plus, if no one beats me to it, I intend to address both theoretical shortcomings of tc_stab next year.)
>>>
>>> Best Regards
>>> Sebastian
>>>
>>>> On 28/12/13 10:01, Sebastian Moeller wrote:
>>>>
>>>>> Hi Rich,
>>>>>
>>>>> great! A few comments:
>>>>>
>>>>> Basic Settings:
>>>>> [Is 95% the right fudge factor?] I think that ideally, if we can precisely measure the useable link rate, even 99% of that should work out well, to keep the queue in our device. I assume that due to the difficulties in measuring and accounting for the link properties, such as link layer and overhead, people typically rely on setting the shaped rate a bit lower than required to stochastically/empirically account for the link properties. I predict that if we get a correct description of the link properties to the shaper we should be fine with 95% shaping. Note though, it is not trivial on an adsl link to get the actually useable bit rate from the modem, so 95% of what can be deduced from the modem or the ISP's invoice might be a decent proxy…
>>>>>
>>>>> [Do we have a recommendation for an easy way to tell if it's working? Perhaps a link to a new Quick Test for Bufferbloat page.] The linked page looks like a decent probe for buffer bloat.
>>>>>
>>>>>> Basic Settings - the details...
>>>>>>
>>>>>> CeroWrt is designed to manage the queues of packets waiting to be sent across the slowest (bottleneck) link, which is usually your connection to the Internet.
>>>>>>
>>>>> I think we can only actually control the first link to the ISP, which often happens to be the bottleneck.
>>>>> At a typical DSLAM (xDSL head end station) the cumulative sold bandwidth to the customers is larger than the backbone connection (which is called over-subscription and is almost guaranteed to be the case in every DSLAM), which typically is not a problem, as typically people do not use their internet that much. My point being, we can not really control congestion in the DSLAM's uplink (as we have no idea what the reserved rate per customer is in the worst case, if there is any).
>>>>>
>>>>>> CeroWrt can automatically adapt to network conditions to improve the delay/latency of data without any settings.
>>>>>>
>>>>> Does this describe the default fq_codels on each interface (except ifb?)?
>>>>>
>>>>>> However, it can do a better job if it knows more about the actual link speeds available. You can adjust this setting by entering link speeds that are a few percent below the actual speeds.
>>>>>>
>>>>>> Note: it can be difficult to get an accurate measurement of the link speeds. The speed advertised by your provider is a starting point, but your experience often won't meet their published specs. You can also use a speed test program or web site like
>>>>>>
>>>>>> http://speedtest.net
>>>>>>
>>>>>> to estimate actual operating speeds.
>>>>>>
>>>>> While this approach is commonly recommended on the internet, I do not believe that it is that useful. Between a user and the speedtest site there are a number of potential congestion points that can affect (reduce) the throughput, like bad peering. Now that said, the speedtests will report something <= the actual link speed and hence be conservative (interactivity stays great at 90% of link rate as well as 80%, so underestimating the bandwidth within reason does not affect the latency gains from traffic shaping, it just sacrifices a bit more bandwidth; and given the difficulty of actually measuring the attainable bandwidth, this might have been effectively a decent recommendation even though the theory of it seems flawed)
>>>>>
>>>>>> Be sure to make your measurement when the network is quiet, and others in your home aren't generating traffic.
>>>>>>
>>>>> This is great advice.
>>>>>
>>>>> I would love to comment further, but after reloading
>>>>>
>>>>> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
>>>>>
>>>>> just returns a blank page and I can not get back to the page as of yesterday evening… I will have a look later to see whether the page resurfaces…
>>>>>
>>>>> Best
>>>>> Sebastian
>>>>>
>>>>> On Dec 27, 2013, at 23:09 , Rich Brown wrote:
>>>>>
>>>>>>> You are a very good writer and I am on a tablet.
>>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>>> I'll take a pass at the wiki tomorrow.
>>>>>>>
>>>>>>> The shaper does up and down was my first thought...
>>>>>>>
>>>>>> Everyone else… Don't let Dave hog all the fun! Read the tech note and give feedback!
>>>>>>
>>>>>> Rich
>>>>>>
>>>>>>> On Dec 27, 2013 10:48 AM, "Rich Brown" wrote:
>>>>>>> I updated the page to reflect the 3.10.24-8 build, and its new GUI pages.
>>>>>>>
>>>>>>> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
>>>>>>>
>>>>>>> There are still lots of open questions. Comments, please.
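As a concrete illustration of "entering link speeds that are a few percent below the actual speeds", the arithmetic is simply a percentage of the measured rate; the rates below are invented for the example, not figures from this link:

#! /bin/bash
# Illustrative arithmetic only: turn a measured (or quoted) rate into a
# conservative shaper setting. The 16000/1300 kbit/s figures are invented.
MEASURED_DOWN_KBIT=16000
MEASURED_UP_KBIT=1300
FUDGE=95   # per cent of the measured rate handed to the shaper
echo "shaped download: $(( MEASURED_DOWN_KBIT * FUDGE / 100 )) kbit/s"
echo "shaped upload:   $(( MEASURED_UP_KBIT * FUDGE / 100 )) kbit/s"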
>>>>>>>
>>>>>>> Rich
>>>>>>> _______________________________________________
>>>>>>> Cerowrt-devel mailing list
>>>>>>> Cerowrt-devel@lists.bufferbloat.net
>>>>>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel