> Hello,
>
> Could you share the two output plots somewhere, so I can have a look at
> those? (Also I might want to ask for the text file that was actually
> generated by the ping collector script, just so I can run and
> confirm/debug things myself.)

Sure thing. The plot images: http://imgur.com/a/qDtA0
And the output text file:
https://drive.google.com/open?id=0B7vEuplJWEIkc1ozbUZRSGstajQ

On 24 April 2017 at 13:34, Sebastian Moeller wrote:
> Hello,
>
>
> > On Apr 24, 2017, at 10:41, Dendari Marini wrote:
> >
> > Hello,
> >
> > Probably correct, but you do not have to resort to believing, you can
> > actually try to measure that ;) In case I have been too subtle before,
> > have a look at https://github.com/moeller0/ATM_overhead_detector and
> > follow the instructions there...
> >
> > I just used your script and it estimated an overhead of 20 bytes, so
> > should I use "overhead 20 atm" or am I missing something? In the last
> > few days I've been using "pppoe-llcsnap" ("overhead 40 atm") without
> > any evident issue, should I change it?
>
> Hmm, 20 seems rather interesting and something I never saw before.
> Could you share the two output plots somewhere, so I can have a look at
> those? (Also I might want to ask for the text file that was actually
> generated by the ping collector script, just so I can run and
> confirm/debug things myself.) I am not saying 20 is impossible, just
> that it is improbable enough to require more scrutiny.
>
>
> Best Regards
>       Sebastian
>
> >
> > FWIW here's a quick example on ingress ppp that I tested using
> > connmark, the connmarks (1 or 2 or unmarked) being set by iptables
> > rules on outbound connections/traffic classes.
> >
> > Unfortunately I'm really not sure how to apply those settings to my
> > case, it's something I've never done, so some hand-holding is probably
> > needed, sorry. At the moment I've limited the Steam bandwidth using
> > the built-in Basic Queue and DPI features of the ER-X. They're easy to
> > set up but aren't really ideal; I would rather have Cake take care of
> > it more dynamically.
> >
> > Anyway, about the Steam IP addresses: I've noticed, over almost three
> > weeks of testing, that they're almost always the same IP blocks (most
> > of which can be found on the Steam Support website,
> > https://support.steampowered.com/kb_article.php?ref=8571-GLVN-8711).
> > I believe it would be a good starting point for limiting Steam, what
> > do you think?
> >
> > On 24 April 2017 at 09:55, Sebastian Moeller wrote:
> > Hi David,
> >
> > > On Apr 23, 2017, at 14:32, David Lang wrote:
> > >
> > > On Sun, 23 Apr 2017, Sebastian Moeller wrote:
> > >
> > >>> About the per-host fairness download issue: while it's kinda
> > >>> resolved, I still feel like it's mainly related to Steam, as
> > >>> normally downloading files from PC1 and PC2 halved the speed as
> > >>> expected even at full bandwidth (so no overhead, no -15%).
> > >>
> > >> This might be true, but for cake to meaningfully resolve
> > >> bufferbloat you absolutely _must_ take care to account for
> > >> encapsulation and overhead one way or another.
> > >
> > > well, one way to account for this overhead is to set the allowed
> > > bandwidth low enough. Being precise on this overhead lets you get
> > > closer to the actual line rate, but if you have enough bandwidth, it
> > > may not really matter (i.e. if you have a 100Mb connection and only
> > > get 70Mb out of it, you probably won't notice unless you go looking)
> >
> > Violent agreement. But note that with AAL5's rule to always use an
> > integer number of ATM cells per user packet, the required bandwidth
> > sacrifice to statically cover the worst case gets ludicrous
> > (theoretical worst case: requiring two 53-byte ATM cells for a 49-byte
> > data packet: 100 * 49 / (53 * 2) = 46.2% efficiency, and this is on
> > top of any potential unaccounted overhead inside the 49-byte packet).
> > Luckily the ATM padding issue is not as severe for bigger packets…
> > but still, to statically fully solve modem/dslam bufferbloat the
> > required bandwidth sacrifice seems excessive… But again you are right,
> > there might be users who do not mind going to this length. For this
> > reason I occasionally recommend starting the bandwidth at 50% to
> > certainly rule out overhead/encapsulation accounting issues (mind you,
> > take 50% as a starting point from which to ramp up…)
> >
> > Best Regards
> >       Sebastian.
> >
> >
> > > David Lang
> > >
> > >
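PS: So I don't mess anything up once the overhead question is settled,
this is roughly how I understand the two cake settings being compared
would look from the command line. Only a sketch, to check my
understanding: the interface name and shaper bandwidth below are
placeholders, not my real values.

    # Current setting: the pppoe-llcsnap preset, which as I understand it
    # is the same as "overhead 40 atm" (ATM cell accounting plus 40 bytes
    # of per-packet overhead).
    tc qdisc replace dev eth0 root cake bandwidth 15mbit pppoe-llcsnap

    # Alternative, using the value the ATM_overhead_detector script
    # estimated for my line:
    tc qdisc replace dev eth0 root cake bandwidth 15mbit overhead 20 atm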
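PPS: regarding the connmark idea for Steam, is this roughly the kind of
marking you meant? Again just a rough sketch so you can correct me: ppp0
and the address block are placeholders, and I would substitute the actual
blocks listed on the Steam Support page above.

    # Mark connections going out to a Steam block (placeholder prefix,
    # to be replaced with the real blocks from the Steam Support page).
    iptables -t mangle -A POSTROUTING -o ppp0 -d 198.51.100.0/24 \
        -j CONNMARK --set-mark 2

    # Copy the connection mark back onto packets arriving on ingress, so
    # a tc filter can match fwmark 2 and treat that download traffic
    # separately.
    iptables -t mangle -A PREROUTING -i ppp0 -j CONNMARK --restore-mark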