From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 26 Jul 2014 15:24:10 -0700 (PDT)
From: David Lang
To: Sebastian Moeller
Cc: cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] Ideas on how to simplify and popularize bufferbloat control for consideration.
References: <13144.1406313454@turing-police.cc.vt.edu> <36889fad276c5cdd1cd083d1c83f2265@lang.hm> <2483CF77-EE7D-4D76-ACC8-5CBC75D093A7@gmx.de>
List-Id: Development issues regarding the cerowrt test router project

On Sat, 26 Jul 2014, David Lang wrote:

> On Sat, 26 Jul 2014, Sebastian Moeller wrote:
>
>> On Jul 26, 2014, at 22:39 , David Lang wrote:
>>
>>> By how much tuning is required, I wasn't meaning how frequently to
>>> tune, but how close default settings can come to the performance of an
>>> expertly tuned setup.
>>
>> Good question.
>>
>>> Ideally the tuning takes into account the characteristics of the
>>> hardware of the link layer.
>>> If it's IP encapsulated in something else (ATM, PPPoE, VPN, VLAN
>>> tagging, or Ethernet with jumbo-packet support, for example), then you
>>> have overhead from the encapsulation that you would ideally take into
>>> account when tuning things.
>>>
>>> The question I'm talking about below is how much you lose compared to
>>> the ideal if you ignore this sort of thing and just assume that the
>>> wire is dumb and puts the bits on it as you send them. By dumb I mean
>>> don't even allow for inter-packet gaps, don't measure the bandwidth,
>>> don't try to pace inbound connections by the timing of your acks, etc.
>>> Just run BQL and fq_codel, start the BQL sizes based on the wire speed
>>> of your link (Gig-E on the 3800), and shrink them based on long-term
>>> passive observation of the sender.
>>
>> As data talks, I just did a quick experiment with my ADSL2+ line at
>> home. The solid lines in the attached plot show the results for proper
>> shaping with SQM (shaping to 95% of the link rates of downstream and
>> upstream while taking the link-layer properties, that is ATM
>> encapsulation and per-packet overhead, into account); the broken lines
>> show the same system with the link-layer and per-packet overhead
>> adjustments disabled, but still shaping to 95% of link rate (this is
>> roughly equivalent to a 15% underestimation of the packet size). The
>> actual test is netperf-wrapper's RRUL (4 TCP streams up, 4 TCP streams
>> down, while measuring latency with ping and UDP probes). As you can see
>> from the plot, just getting the link-layer encapsulation wrong destroys
>> latency under load badly. The host is ~52ms RTT away, and with fq_codel
>> the ping time per leg is increased by just one codel target of 5ms
>> each, resulting in a modest latency increase of ~10ms with proper
>> shaping, for a total of ~65ms; with improper shaping RTTs increase to
>> ~95ms (they almost double), so RTT increases by ~43ms.
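[Editor's note: a minimal sketch of where the ~15% figure quoted above comes from. ATM carries everything in 53-byte cells with 48 usable payload bytes, and the last cell is padded; the 40-byte per-packet overhead used here is an assumed example value (e.g. PPPoE/LLC headers plus the 8-byte AAL5 trailer), not the exact figure from Sebastian's link.]

```python
import math

ATM_CELL = 53     # bytes on the wire per ATM cell
ATM_PAYLOAD = 48  # usable payload bytes per cell (AAL5)

def atm_wire_size(ip_packet_len, per_packet_overhead=40):
    """Bytes actually transmitted on an ATM link for one IP packet.

    per_packet_overhead is an assumed example value; the real number
    depends on the encapsulation in use (PPPoA, PPPoE, LLC/SNAP, ...).
    """
    payload = ip_packet_len + per_packet_overhead
    cells = math.ceil(payload / ATM_PAYLOAD)  # last cell padded to 48 bytes
    return cells * ATM_CELL

# A 1500-byte packet needs 33 cells = 1749 wire bytes, ~17% more than
# the naive estimate -- so ignoring the encapsulation when shaping
# underestimates packet size on the order of 15%, as described above.
for size in (64, 576, 1500):
    wire = atm_wire_size(size)
    print(f"{size:5d} -> {wire:5d} bytes on the wire "
          f"({(wire - size) / size:+.1%})")
```

Small packets fare even worse than the 15% average, since a 64-byte packet still occupies three full cells.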
>> Also note how the extremes for the broken lines are much worse than
>> for the solid lines. In short, I would estimate that a slight
>> misjudgment (15%) results in an almost 80% increase of latency under
>> load. In other words, getting the rates right matters a lot. (I should
>> also note that in my setup there is a secondary router that limits RTT
>> to a max of 300ms, otherwise the broken lines might look even worse...)

Is this with BQL/fq_codel in both directions, or only in one direction?

David Lang

> What is the latency like without BQL and codel, the pre-bufferbloat
> version (without any traffic shaping)?
>
> I agree that going from 65ms to 95ms seems significant, but if the
> stock version goes up above 1000ms, then I think we are talking about
> things that are 'close'.
>
> Assuming that latency under load without the improvements got >1000ms:
>
> fast-slow (in ms)
> ideal = 10
> untuned = 43
> bloated > 1000
>
> fast/slow
> ideal = 1.25
> untuned = 1.83
> bloated > 19
>
> slow/fast
> ideal = 0.8
> untuned = 0.55
> bloated = 0.05
>
> Rather than looking at how much worse it is than the ideal, look at how
> much closer it is to the ideal than to the bloated version.
>
> David Lang
>
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
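[Editor's note: the back-of-the-envelope table in the quoted message can be reproduced from the RTTs given in the thread. The 52 ms base RTT and the 65/95 ms loaded figures come from Sebastian's measurements; the 1052 ms "bloated" value is an assumed example consistent with the ">1000 ms" estimate, not a measurement.]

```python
BASE_RTT = 52.0  # unloaded RTT to the test host, in ms (from the thread)

def ratios(loaded_rtt):
    """Return (extra latency in ms, loaded/base ratio, base/loaded ratio)."""
    return (loaded_rtt - BASE_RTT,
            loaded_rtt / BASE_RTT,
            BASE_RTT / loaded_rtt)

scenarios = [("ideal (proper shaping)",          65),
             ("untuned (no link-layer adjust)",  95),
             ("bloated (no shaping, assumed)", 1052)]

for name, rtt in scenarios:
    extra, fast_slow, slow_fast = ratios(rtt)
    print(f"{name}: +{extra:.0f} ms, "
          f"fast/slow {fast_slow:.2f}, slow/fast {slow_fast:.2f}")
```

Running this yields the 10/43/>1000 ms deltas and the 1.25/1.83/>19 ratios from the table, which is the point of the argument: the untuned case sits far closer to the ideal than to the bloated one.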