From justin at althea.net  Thu Oct  3 13:52:15 2019
From: justin at althea.net (Justin Kilpatrick)
Date: Thu, 03 Oct 2019 13:52:15 -0400
Subject: [Cake] Fighting bloat in the face of uncertainty
In-Reply-To:
References: <2825CE14-2109-4580-A086-9701F4D3ADF0@gmail.com>
	<18b1c174-b88d-4664-9aa8-9c42925fc14c@www.fastmail.com>
	<9a90111b-2389-4dc6-8409-18c40f895540@www.fastmail.com>
	<43F02160-E691-4393-A0C0-8AB4AD962700@gmail.com>
Message-ID:

I've developed a rough version of this and put it into production Monday. After a few tweaks we're seeing a ~10x reduction in the magnitude of latency spikes at high-usage times.

https://github.com/althea-net/althea_rs/blob/master/rita/src/rita_common/network_monitor/mod.rs#L288

The average and standard deviation of latency to a given neighbor are scraped from Babel, and when the standard deviation exceeds 10x the average we reduce the throughput of the connection by 20%.

It's not theoretically sound yet, because I still need to expose single-direction latency in Babel rather than only round trip. Bloat caused by the other side of the link currently causes connections to be reduced all the way down to the throughput minimum unnecessarily.

It would also be advantageous to observe what throughput we've recorded over the last 5 seconds and put a threshold there. Rather than doing any probing ourselves, we can simply observe whether the user was saturating the connection or whether it was a transient radio problem.

If anyone else is interested in using this, I can split it off from our application into a standalone (if somewhat bulky) binary without much trouble.

-- 
Justin Kilpatrick
justin at althea.net

On Sun, Sep 8, 2019, at 1:27 PM, Jonathan Morton wrote:
> >> You could also set it back to 'internet' and progressively reduce the
> >> bandwidth parameter, making the Cake shaper into the actual bottleneck.
> >> This is the correct fix for the problem, and you should notice an
> >> instant improvement as soon as the bandwidth parameter is correct.
> >
> > Hand tuning this one link is not a problem. I'm searching for a set of settings that will provide generally good performance across a wide range of devices, links, and situations.
> >
> > From what you've indicated so far, there's nothing as effective as a correct bandwidth estimate if we consider the antenna (link) a black box. Expecting the user to input expected throughput for every link and then managing that information is essentially a non-starter.
> >
> > Radio tuning provides some improvement, but until Ubiquiti starts shipping with Codel on non-router devices I don't think there's a good solution here.
> >
> > Any way to have the receiving device detect bloat and insert an ECN mark?
>
> That's what the qdisc itself is supposed to do.
>
> > I don't think the time spent in the intermediate device is detectable at the kernel level, but we keep track of latency for routing decisions and could detect bloat with some accuracy; the problem is how to respond.
>
> As long as you can detect which link the bloat is on (and in which
> direction), you can respond by reducing the bandwidth parameter on that
> half-link by a small amount. Since you have a cooperating network,
> maintaining a time standard on each node sufficient to observe one-way
> delays seems feasible, as is establishing a normal baseline latency for
> each link.
>
> The characteristics of the bandwidth parameter being too high are easy
> to observe. Not only will the one-way delay go up, but the received
> throughput in the same direction at the same time will be lower than
> configured. You might use the latter as a hint as to how far you need
> to reduce the shaped bandwidth.
>
> Deciding when and by how much to *increase* bandwidth, which is
> presumably desirable when link conditions improve, is a more difficult
> problem when the link hardware doesn't cooperate by informing you of
> its status. (This is something you could reasonably ask Ubiquiti to
> address.)
>
> I would assume that link characteristics will change slowly, and run an
> occasional explicit bandwidth probe to see if spare bandwidth is
> available. If that probe comes through without exhibiting bloat, *and*
> the link is otherwise loaded to capacity, then increase the shaper by
> an amount within the probe's capacity of measurement - and schedule a
> repeat.
>
> A suitable probe might be 100x 1500B packets paced out over a second,
> bypassing the shaper. This will occupy just over 1Mbps of bandwidth,
> and can be expected to induce about 10ms of delay if injected into a
> saturated 100Mbps link. Observe the delay experienced by each packet
> *and* the quantity of other traffic that appears between them. Only if
> both are favourable can you safely open the shaper, by 1Mbps.
>
> Since wireless links can be expected to change their capacity over
> time, due to e.g. weather and tree growth, this seems more generally
> useful than a static guess. You could deploy a new link with a
> conservative "guess" of say 10Mbps, and just probe from there.
>
>  - Jonathan Morton

From dave.taht at gmail.com  Thu Oct  3 14:41:33 2019
From: dave.taht at gmail.com (Dave Taht)
Date: Thu, 3 Oct 2019 11:41:33 -0700
Subject: [Cake] Fighting bloat in the face of uncertainty
In-Reply-To:
References: <2825CE14-2109-4580-A086-9701F4D3ADF0@gmail.com>
	<18b1c174-b88d-4664-9aa8-9c42925fc14c@www.fastmail.com>
	<9a90111b-2389-4dc6-8409-18c40f895540@www.fastmail.com>
	<43F02160-E691-4393-A0C0-8AB4AD962700@gmail.com>
Message-ID:

Heh. We need a t-shirt...

from TunnelManager::from_registry().do_send(GotBloat {
..
GotBloat() ? more_fq_codel : fq_codel;

On Thu, Oct 3, 2019 at 10:52 AM Justin Kilpatrick wrote:
>
> I've developed a rough version of this and put it into production Monday. After a few tweaks we're seeing a ~10x reduction in the magnitude of latency spikes at high usage times.
> [...]
>
> --
> Justin Kilpatrick
> justin at althea.net
> _______________________________________________
> Cake mailing list
> Cake at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake

-- 
Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740

From chromatix99 at gmail.com  Thu Oct  3 15:04:04 2019
From: chromatix99 at gmail.com (Jonathan Morton)
Date: Thu, 3 Oct 2019 22:04:04 +0300
Subject: [Cake] Fighting bloat in the face of uncertainty
In-Reply-To:
References: <2825CE14-2109-4580-A086-9701F4D3ADF0@gmail.com>
	<18b1c174-b88d-4664-9aa8-9c42925fc14c@www.fastmail.com>
	<9a90111b-2389-4dc6-8409-18c40f895540@www.fastmail.com>
	<43F02160-E691-4393-A0C0-8AB4AD962700@gmail.com>
Message-ID:

> On 3 Oct, 2019, at 8:52 pm, Justin Kilpatrick wrote:
>
> I've developed a rough version of this and put it into production Monday. After a few tweaks we're seeing a ~10x reduction in the magnitude of latency spikes at high usage times.

Sounds promising.  Keep it up!

 - Jonathan Morton
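[Editor's note: the detection-and-backoff rule Justin describes earlier in the thread — back off by 20% when the standard deviation of a neighbor's latency exceeds 10x its average, never dropping below a throughput floor — can be sketched in Rust as below. This is a hedged simplification, not the actual althea_rs code at the linked mod.rs#L288; the function names, constants, and the 1 Mbps floor are illustrative assumptions.]

```rust
/// Bloat trips when latency jitter dwarfs the mean: stddev > 10x average.
const STDDEV_FACTOR: f64 = 10.0;
/// On a trip, cut the shaped throughput by 20% (keep 80%).
const BACKOFF: f64 = 0.8;
/// Assumed floor so repeated trips can't shape the link to zero (bps).
const MIN_THROUGHPUT_BPS: u64 = 1_000_000;

/// Babel-reported latency statistics for one neighbor, in milliseconds.
fn is_bloated(avg_latency_ms: f64, stddev_ms: f64) -> bool {
    stddev_ms > STDDEV_FACTOR * avg_latency_ms
}

/// One control step: reduce the shaper if the link looks bloated,
/// otherwise leave it alone (growth probing is a separate problem).
fn next_throughput(current_bps: u64, avg_latency_ms: f64, stddev_ms: f64) -> u64 {
    if is_bloated(avg_latency_ms, stddev_ms) {
        ((current_bps as f64 * BACKOFF) as u64).max(MIN_THROUGHPUT_BPS)
    } else {
        current_bps
    }
}

fn main() {
    // 2 ms average with 25 ms deviation trips the detector: 25 > 10 * 2.
    println!("{}", next_throughput(50_000_000, 2.0, 25.0)); // one 20% step down
    // 2 ms average with 5 ms deviation does not: the shaper is untouched.
    println!("{}", next_throughput(50_000_000, 2.0, 5.0));
}
```

Note that, as Justin points out, with only round-trip latency available this rule also backs off when the bloat is on the far side of the link, which is why it can walk a connection down to the floor unnecessarily until one-way delay is exposed in Babel.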