From: Sebastian Moeller
Date: Mon, 17 Jun 2013 12:40:59 +0200
To: Toke Høiland-Jørgensen
Cc: cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] trivial 6in4 fix(?)
In-Reply-To: <87hagxnkad.fsf@toke.dk>
Message-Id: <2AD5FF25-5388-4AE7-8BE2-837B7F94250F@gmx.de>

Hi Toke,

On Jun 17, 2013, at 11:44, Toke Høiland-Jørgensen wrote:

> Sebastian Moeller writes:
>
>> Honestly, I think the best thing to do is not so much assume ATM or
>> lack of ATM, but simply measure it :)
>
> Right, doing the ping test with payload sizes from 16 to 113 bytes
> gives me an almost completely flat ping time distribution ranging from
> 20.3 to 21.3 ms (see attached graphic).
> So probably I'm on PTM…

	I fully believe you that it is flat (the graph did not make it into my inbox…), so that does look like PTM. Good! But beware: the expected step size depends on your downlink and uplink speeds. On VDSL I would only expect a very tiny increase, basically the time it takes to send an additional ATM cell back and forth:

	RTT step per ATM cell in milliseconds = (53*8 / line.down.bit + 53*8 / line.up.bit) * 1000

This means that a potentially large sample size per ping packet size is required to be reasonably sure that there is no step...

>
>> Easy to figure out empirically by hand, by finding the largest ping
>> packet size that still passes without fragmentation (see
>> http://www.debian.org/doc/manuals/debian-reference/ch05.en.html#_finding_optimal_mtu)
>
> $ ping -c 1 -s $((1500-28)) -M do www.debian.org
> PING www.debian.org (128.31.0.51) 1472(1500) bytes of data.
> 1480 bytes from senfl.debian.org (128.31.0.51): icmp_seq=1 ttl=45 time=114 ms
>
> --- www.debian.org ping statistics ---
> 1 packets transmitted, 1 received, 0% packet loss, time 0ms
> rtt min/avg/max/mdev = 114.522/114.522/114.522/0.000 ms
>
> $ ping -c 1 -s $((1500-27)) -M do www.debian.org
> PING www.debian.org (128.31.0.51) 1473(1501) bytes of data.
> From 10.42.3.5 icmp_seq=1 Frag needed and DF set (mtu = 1500)
>
> --- www.debian.org ping statistics ---
> 0 packets transmitted, 0 received, +1 errors
>
> So the MTU seems to be 1500 bytes.

	That is great!

>
> Now, how do I figure out what the PTM overhead is and feed it to HTB? :)

	All I know so far is that PTM does not drag in ATM's quite baroque encapsulation options. Googling for VDSL2 makes me hope that maybe there is no additional user-visible overhead; if you use PPP, that would still need handling. It would be quite interesting to determine the overhead empirically.
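To make the step-size formula above concrete, here is a minimal sketch; the 16/2.5 Mbit/s line rates are illustrative, not anyone's actual link:

```python
# RTT step per ATM cell, from the formula above:
#   (53*8 / line.down.bit + 53*8 / line.up.bit) * 1000  [ms]
# The line rates used in the example call are made up for illustration.

ATM_CELL_BITS = 53 * 8  # one ATM cell is 53 bytes on the wire


def rtt_step_ms(down_bps, up_bps):
    """Expected per-cell RTT increase on an ATM-carried link, in ms."""
    return (ATM_CELL_BITS / down_bps + ATM_CELL_BITS / up_bps) * 1000


# e.g. a 16/2.5 Mbit/s VDSL-like line:
print(round(rtt_step_ms(16e6, 2.5e6), 3))  # → 0.196 ms per cell
```

At these rates the step is below 0.2 ms per cell, which is why many ping samples per packet size are needed before the absence of a step means anything.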
	ATM's quantization makes overhead detection on ATM-based DSL lines conceptually easy; for VDSL I am not so sure. In principle we expect to see bufferbloat, with its signature latency increase on saturated links, if we shape at too high a rate. So too small an overhead value should fill the modem's buffers and increase latency (depending on the modem's configuration; but assuming a pfifo, the buffer should just fill up slowly until latencies are noticeably affected, no?). Hence, in theory, applying a saturating load and measuring the latencies for different overhead values should still work. I wonder whether rrul might be just the right probe? If you go that route I would be delighted to learn the outcome :). Sorry to be of no more help here.

Best
	Sebastian

>
> -Toke
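The buffer-filling argument above can be sketched as a toy model: if the shaper's assumed per-packet overhead is smaller than the real one, packets are admitted faster than the wire can carry them and the modem's FIFO (and with it, latency) grows. All numbers below are illustrative assumptions, not measurements of any real line:

```python
# Toy model of the overhead-detection reasoning: a shaper that assumes
# `assumed_overhead` bytes of per-packet overhead versus a link that
# really adds `true_overhead` bytes.  The 2.5 Mbit/s rate and the
# 22-byte true overhead are made-up example values.

def modem_backlog(seconds, payload=1500, link_bps=2.5e6,
                  true_overhead=22, assumed_overhead=0):
    """Packets queued in the modem after `seconds` of saturating load."""
    admit_pps = link_bps / ((payload + assumed_overhead) * 8)  # shaper rate
    drain_pps = link_bps / ((payload + true_overhead) * 8)     # wire rate
    return max(0.0, (admit_pps - drain_pps) * seconds)


print(modem_backlog(10, assumed_overhead=0) > 0)    # under-estimate: queue grows
print(modem_backlog(10, assumed_overhead=22) == 0)  # matching overhead: no queue
```

Sweeping the shaper's overhead setting under a saturating load (rrul would do) and watching for the latency knee is exactly the empirical procedure the paragraph above proposes.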