[Ecn-sane] [Bloat] [iccrg] Fwd: [tcpPrague] Implementation and experimentation of TCP Prague/L4S hackaton at IETF104

David P. Reed dpreed at deepplum.com
Fri Mar 15 19:43:58 EDT 2019


My point is that the argument for doing such balancing assumes that ISPs at the entry points (somehow representing the goals of the source and destination of each flow) will classify their flows correctly based on some criterion, and will not simply select whichever option lets them "beat" all the others, even though that is exactly what they are game-theoretically incented to do, which screws up the labeling.
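 
To make that incentive concrete, here is a toy sketch (my numbers, purely hypothetical payoffs, not anyone's actual model): if claiming "priority" beats an honestly-marked competitor at the bottleneck, then mislabeling is a dominant strategy, and honest labeling cannot be an equilibrium unless something like pricing or policing changes the payoffs.

# Toy game-theoretic sketch with made-up payoffs: mislabeling dominates
# honest marking whenever the misleading label wins at the bottleneck.
STRATEGIES = ("honest", "mislabel")

# payoff[(my_choice, peer_choice)] = my utility; numbers are purely illustrative.
payoff = {
    ("honest",   "honest"):   3,  # both labeled correctly, both served well
    ("honest",   "mislabel"): 1,  # peer's "priority" traffic crowds me out
    ("mislabel", "honest"):   4,  # my "priority" traffic beats the honest peer
    ("mislabel", "mislabel"): 2,  # everyone claims priority; the label means nothing
}

def best_response(peer_choice):
    return max(STRATEGIES, key=lambda mine: payoff[(mine, peer_choice)])

for peer in STRATEGIES:
    print(f"peer plays {peer!r:12} -> my best response: {best_response(peer)!r}")
# 'mislabel' is the best response either way, i.e. a dominant strategy.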
 
The business argument that the users at both ends will choose the right labels is that they are sensitive enough to price signals that those signals will get them to mark things correctly. (That includes, by the way, the Internet Access Providers, if they take over the labeling job and force their choice on their users, because then they become the "endpoint".)
 
So if pricing mechanisms don't work to incent correct labeling, it does not matter that there exists an optimum that could be achieved by an Oracle who fully decides the settings on every packet of every protocol ever to be invented.
 
And that's why I brought up the issue of pricing and economics, which sadly really affect any kind of queue management.
 
That's why pricing becomes a practical issue, and issues of "utility" to the users become important.
 
Now the other crucial thing is that the optimal state of almost every link in the network, almost all of the time, is a utilization far below max capacity. The reason is that Internet traffic (like traffic in almost all networks) is bursty and fractal. The law of large numbers doesn't smooth traffic volume over any timescale (that's the sense of "fractal" here). There is no statistical smoothing of load: there are rare high peaks on some links, but most links are underutilized, *if you want all the protocols currently used in the Internet to make users happy with minimal time-to-task-completion* (response time at the scale that matters for the particular user's needs at that moment).
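 
A minimal illustration of that non-smoothing (a toy simulation, not a measurement of any real link, with hypothetical parameters chosen only to make the contrast visible): traffic made of independent per-slot arrivals smooths out quickly as you aggregate over longer windows, while traffic with heavy-tailed ON/OFF periods, the classic model of self-similar network traffic, stays bursty at every timescale shown.

import random
import statistics

random.seed(42)
SLOTS = 200_000

def coeff_var(xs):
    """Coefficient of variation: stdev / mean."""
    m = statistics.mean(xs)
    return statistics.pstdev(xs) / m if m else 0.0

def aggregate(trace, window):
    """Sum the trace over non-overlapping windows of the given size."""
    return [sum(trace[i:i + window])
            for i in range(0, len(trace) - window + 1, window)]

# Independent traffic: each slot busy with probability 0.5, independently.
independent = [1.0 if random.random() < 0.5 else 0.0 for _ in range(SLOTS)]

# Heavy-tailed ON/OFF traffic: alternating busy/idle runs with Pareto-distributed
# lengths (alpha < 2 means infinite variance); same long-run average load of 0.5.
def onoff_trace(slots, alpha=1.2):
    trace, on = [], True
    while len(trace) < slots:
        run = int(random.paretovariate(alpha))
        trace.extend([1.0 if on else 0.0] * run)
        on = not on
    return trace[:slots]

bursty = onoff_trace(SLOTS)

for window in (1, 10, 100, 1000):
    print(f"window={window:5d}  "
          f"CV independent={coeff_var(aggregate(independent, window)):.3f}  "
          f"CV heavy-tailed on/off={coeff_var(aggregate(bursty, window)):.3f}")

The independent source's coefficient of variation falls roughly as 1/sqrt(window); the heavy-tailed ON/OFF source's barely falls at all over these window sizes, which is why a link provisioned for the average still sees rare, large peaks.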
 
So if most links are uncongested most of the time (and they should be, if the folks maintaining the subnets are doing their job by upgrading congested links to handle more traffic), then congestion management is only a peak-load problem, and only affects things a small percentage of the time.
 
This is very, very different from the typical "benchmark" case, which focuses only on dealing with peak loads, which are transient in the real world. Most "benchmarks" make the strange and unrealistic assumption that overload is steady state, and that users themselves don't give up and stop using an overloaded system.
 
 
-----Original Message-----
From: "Jonathan Morton" <chromatix99 at gmail.com>
Sent: Friday, March 15, 2019 4:13pm
To: "David P. Reed" <dpreed at deepplum.com>
Cc: "Mikael Abrahamsson" <swmike at swm.pp.se>, ecn-sane at lists.bufferbloat.net, "bloat" <bloat at lists.bufferbloat.net>
Subject: Re: [Bloat] [Ecn-sane] [iccrg] Fwd: [tcpPrague] Implementation and experimentation of TCP Prague/L4S hackaton at IETF104



> On 15 Mar, 2019, at 9:44 pm, David P. Reed <dpreed at deepplum.com> wrote:
> 
> pricing (even dynamic pricing) of different qualities of service is unstable

An interesting result, but I should note that the four-way optimisation system I described doesn't rely on pricing, only a sufficiently correct implementation of those optimisations at enough bottlenecks to make it worthwhile for applications to mark their traffic appropriately. The technology exists to do so, but is not standardised in a way that makes it usable.

 - Jonathan Morton
