From: Fred Stratton <fredstratton@imap.cc>
Date: Sun, 25 Aug 2013 19:00:03 +0100
To: cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] some kernel updates

On 25 Aug 2013, at 18:55, Dave Taht <dave.taht@gmail.com> wrote:
Netanalyzer is not useful in a fq_codel'ed system.

Thank you. I = shall stop using it.




On Sun, Aug 25, 2013 at 10:53 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
Hi Fred,


On Aug 25, 2013, at 16:26, Fred Stratton <fredstratton@imap.cc> wrote:

> Thank you.
>
> This is an initial response.
>
> Am using 3.10.2-1 currently, with the standard AQM interface. This does not have the pull down menu of your interface, which is why I ask if both are active.

        I have seen your follow-up mail that you actually used 3.10.9-2. I think that has the first cut of the script modifications, which still allows selecting both. Since I have not tested it any other way, I would recommend enabling just one of them at a time. Since the implementations of the two are somewhat orthogonal and htb_private actually works in 3.10.9, best case you might actually get the link layer adjustments (LLA) and the overhead applied twice, wasting bandwidth. So please either use the last set of modified files I sent around or wait for Dave to include them in ceropackages...
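
        (Illustration only, not the AQM scripts verbatim; the interface name, the rates and the 40 byte overhead are placeholders. tc_stab hangs a size table off the root qdisc, while htb_private passes the link layer keywords to the HTB classes themselves, so enabling both runs every packet through both corrections:)

# tc_stab flavour: the stab size table on the root qdisc does the ATM cell accounting
tc qdisc add dev ge00 root handle 1: stab linklayer atm overhead 40 htb default 12

# htb_private flavour: the same accounting done by HTB's own rate tables
tc class add dev ge00 parent 1: classid 1:1 htb rate 950kbit ceil 950kbit linklayer atm overhead 40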

> On 25 Aug 2013, at 14:59, Sebastian Moeller <moeller0@gmx.de> wrote:
>
>> Hi Fred,
>>
>>
>> On Aug 25, 2013, at 12:17, Fred Stratton <fredstratton@imap.cc> wrote:
>>
>>>
>>> On 25 Aug 2013, at 10:21, Fred Stratton <fredstratton@imap.cc> wrote:
>>>
>>>> As the person with the most flaky ADSL link, I point out that none of these recent, welcome, changes are having any effect here, with an uplink speed of circa 950 kbits/s.
>>
>>      Okay, how flaky is your link? What rate of errors do you have while testing? I am especially interested in CRC errors and the ES, SES and HEC counts, just to get an idea how flaky the line is...
>>
>>>>
>>>> The reason I mention this is that it is still impossible to watch iPlayer Flash streaming video and download at the same time; the iPlayer stream fails. The point of the exercise was to achieve this.
>>>>
>>>> The uplink delay is consistently around 650ms, which appears to be too high for effective streaming. In addition, the uplink stream has multiple breaks, presumably outages, if the uplink rate is capped at, say, 700 kbits/s.
>>
>>      Well, watching video is going to stress your downlink, so the uplink should not be saturated by the ACKs, and the concurrent downloads also do not stress your uplink except for the ACKs, so this points to downlink errors as far as I can tell from the data you have given. If the uplink has repeated outages, however, your problems might be unfixable, because these, if long enough, will cause lost ACKs and will probably trigger retransmissions, independent of whether the link layer adjustments work or not. (You could test this by shaping your up- and downlink to <= 50% of the link rates and disabling all link layer adjustments; 50% is larger than the ATM worst case, so that should have you covered, unless your DSL link has an excessive number of tones reserved for forward error correction (FEC).)
>
> Uptime 100655
> downstream 12162 kbits/s
> CRC errors 10154
> FEC Errors 464
> HEC Errors 758
>
> upstream 1122 kbits/s
> no errors in period.

        Ah, I think you told me in the past that "Target SNR upped to 12 deciBel. Line can sustain 10 megabits/s with repeated loss of sync at lower SNR.", so sync at 12162 might be too aggressive, no? But the point is that, as I understand it, iPlayer works fine without competing download traffic? To my eye the error numbers look small enough not to be concerned about. Do you know how long the error correction period is?


>
>>      Could you perform the following test by any chance: start iPlayer and your typical downloads and then have a look at http://gw.home.lan:81 and the following tab chain Status -> Realtime Graphs -> Traffic -> Realtime Traffic. If during your test the outbound rate stays well below your shaped limit and you still encounter the stream failure, I would say it is safe to ignore the link layer adjustments as the cause of your issues.
>
> Am happy reducing rate to fifty per cent, but the uplink appears to have difficulty operating below circa 500 kbits/s. This should not be so. I shall try a fourth time.

        That sounds weird; if you shape to below 500, does upload stop working or just get choppier? Looking at your sync data, 561 kbit/s would fit both the ~50% and the above-500 requirements.
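
        (If you want to try that 50% test outside the AQM GUI, here is a bare-bones sketch with plain tc and no link layer adjustments at all; ge00/ifb0 are placeholders and the rates are roughly half of the 1122/12162 kbit/s sync figures above:)

# egress: shape uploads to ~50% of the upstream sync, no stab/linklayer options anywhere
tc qdisc add dev ge00 root handle 1: htb default 10
tc class add dev ge00 parent 1: classid 1:10 htb rate 561kbit

# ingress: redirect incoming traffic to an ifb device and shape it to ~50% of the downstream sync
modprobe ifb
ip link set ifb0 up
tc qdisc add dev ge00 handle ffff: ingress
tc filter add dev ge00 parent ffff: protocol all u32 match u32 0 0 action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate 6081kbit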


>>
>>
>>>>
>>>> YouTube has no problems.
>>>>
>>>> I remain unclear whether the use of tc-stab and htb is mutually exclusive, using the present stock interface.
>>
>>      Well, it depends on the version of cerowrt you use: anything below 3.10.9-1, I believe, lacks a functional HTB link layer adjustment mechanism, so you should select tc_stab. My most recent modifications to Toke and Dave's AQM package only allow you to select one or the other. In any case, selecting BOTH is not a reasonable thing to do, because best case it will only apply the overhead twice, and worst case it would also do the link layer adjustments (LLA) twice.
>
>
>> See initial comments.
>>
>>>>
>>>> The current ISP connection is IPoA LLC.
>>>
>>> Correction - Bridged LLC.
>>
>>      Well, I think you should try to figure out your overhead empirically and check the encapsulation. I would recommend you run the following script on your link overnight and send me the log file it produces:
>>
>> #! /bin/bash
>> # TODO use seq or bash to generate a list of the requested sizes (to allow for non-equidistantly spaced sizes)
>>
>> # Telekom Tuebingen Moltkestrasse 6
>> TECH=ADSL2
>> # finding a proper target IP is somewhat of an art, just = traceroute a remote site
>> # and find the nearest host reliably responding to pings showing the smallest variation of ping times
>> TARGET=87.186.197.70         # T
>> DATESTR=`date +%Y%m%d_%H%M%S`        # to allow multiple sequential records
>> LOG=ping_sweep_${TECH}_${DATESTR}.txt
>>
>>
>> # by default non-root ping will only send one packet per second, so work around that by calling ping independently for each packet
>> # empirically figure out the shortest period still giving the standard ping time (to avoid being slow-pathed by our host)
>> PINGPERIOD=0.01              # in seconds
>> PINGSPERSIZE=10000
>>
>> # Start, needed to find the per packet overhead dependent on the ATM encapsulation
>> # to reliably show ATM quantization one would like to see at least two steps, so cover a range > 2 ATM cells (so > 96 bytes)
>> SWEEPMINSIZE=16              # 64bit systems seem to require 16 bytes of payload to include a timestamp...
>> SWEEPMAXSIZE=116
>>
>>
>> n_SWEEPS=`expr ${SWEEPMAXSIZE} - ${SWEEPMINSIZE}`
>>
>>
>> i_sweep=0
>> i_size=0
>>
>> while [ ${i_sweep} -lt ${PINGSPERSIZE} ]
>> do
>>    (( i_sweep++ ))
>>    echo "Current iteration: ${i_sweep}"
>>    # now loop from sweepmin to sweepmax
>>    i_size=${SWEEPMINSIZE}
>>    while [ ${i_size} -le ${SWEEPMAXSIZE} ]
>>    do
>>      echo "${i_sweep}. repetition of ping size = ${i_size}"
>>      ping -c 1 -s ${i_size} ${TARGET} >> ${LOG} &
>>      (( i_size++ ))
>>      # we need a sleep binary that allows non-integer times (GNU sleep is fine, as is sleep on macosx 10.8.4)
>>      sleep ${PINGPERIOD}
>>    done
>> done
>>
>> #tail -f ${LOG}
>>
>> echo "Done... ($0)"
>>
>>
>> Please set TARGET to the closest IP host on the ISP side of your link that gives reliable ping RTTs (using "ping -c 100 -s 16 your.best.host.ip"). Also test whether the RTTs are in the same ballpark when you reduce the ping period to 0.01 (you might have to increase the period until the RTTs are close to the standard 1 ping per second case). I can then run this through my matlab code to detect the actual overhead. (I am happy to share the code as well, if you have matlab available; it might even run under octave but I have not tested that since the last major changes).
>
> To follow at some point.

        Oh, I failed to mention that at the given parameters the script takes almost 3 hours, during which the link should be otherwise idle...
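
        In case matlab/octave is not at hand, the ATM quantization already shows up if you just pull the reply size and RTT out of the log and average per size; a rough post-processing sketch (not my matlab code, and assuming the usual Linux ping reply format "24 bytes from ...: icmp_seq=1 ttl=54 time=23.4 ms"):

# prints "reply_size_bytes mean_RTT_ms"; the RTT-vs-size staircase should step
# roughly every 48 bytes once the payload plus overhead spills into another ATM cell
awk '/bytes from/ {
        size = $1                      # ICMP payload + 8 byte ICMP header
        for (i = 1; i <= NF; i++)
            if ($i ~ /^time=/) { sub(/^time=/, "", $i); sum[size] += $i; n[size]++ }
    }
    END { for (s in sum) printf "%s %.2f\n", s, sum[s] / n[s] }' ping_sweep_ADSL2_*.txt | sort -n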

>>
>>
>>>
>>>> Whatever byte value is used for tc-stab makes no change.
>>
>>      I assume you are talking about the overhead? A missing link layer adjustment will eat between 10% and 50% of your link bandwidth, while missing overhead values will be more benign. The only advice I can give is to pick the overhead that actually describes your link. I am willing to help you figure this out.
>
> The link is bridged LLC. Have been using 18 and 32 for test purposes. I shall move to PPPoA VC-MUX in 4 months.

        I guess figuring out your exact overhead empirically is going to be fun.
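
        (For orientation: the per-packet overheads usually quoted are roughly 32 bytes for bridged LLC/SNAP and 10 bytes for PPPoA VC-Mux, although details like whether the ethernet FCS is carried can shift those numbers, which is exactly why the empirical check is worth doing. Once you have a figure it is just the overhead argument of the size table; ge00 is a placeholder:)

# e.g. bridged LLC/SNAP with the commonly quoted 32 bytes of per-packet overhead
tc qdisc replace dev ge00 root handle 1: stab linklayer atm overhead 32 htb default 12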

>>
>>>>
>>>> I have applied the ingress modification to simple.qos, keeping the original version, and tested both.
>>
>>      For which cerowrt version? It is only expected to do something for 3.10.9-1 and upwards; before that, the HTB link layer adjustment did NOT work.
>
> Using 3.10.9-2

        Yeah, as stated above, I would recommend using either one or the other, not both. If you took RRUL data you might be able to compare the three conditions. I would estimate that the most interesting part would be the sustained average up- and download rates here.
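
        (Roughly one netperf-wrapper run per condition, something like the line below; the flags are from memory, so check --help, and pick whichever netperf server is closest to you:)

# 60 second RRUL run, labelled so the three conditions can be told apart later
netperf-wrapper rrul -l 60 -H netperf-eu.bufferbloat.net -t "htb_private-only"
# repeat with tc_stab only and with both enabled, then compare the resulting plots/data files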


>
>>
>>>>
>>>> I have changed the Powerline adaptors I use to ones with known smaller buffers, though this is unlikely to be a rate-limiting step.
>>>>
>>>> I have replaced the 2Wire gateway, known to be heavily buffered, with a bridged Huawei HG612, which has a Broadcom 6368 SoC.
>>>>
>>>> This device has a permanently-on telnet interface, with a simple password, which cannot be changed other than by firmware recompilation...
>>>>
>>>> Telnet, however, allows txqueuelen to be reduced from 1000 to 0.
>>>>
>>>> None of these changes affect the problematic uplink delay.
>>
>>      So how did you measure the uplink delay? The RRUL plots you sent me show an increase in ping RTT from around 50ms to 80ms with tc_stab and fq_codel on simplest.qos; how does that reconcile with the 650ms uplink delay from netalyzr?
>
> Max Planck and Netalyzr produce the same figure. I use both, but Max Planck gives you circa 3 tries per IP address per 24 hours.

        Well, both use the same method, which is not too meaningful if you use fq_codel on a shaped link (unless you want to optimize your system for UDP floods :) )

[snipp]


Best Regards
        Sebastian
_______________________________________________
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel



--
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
