[Ecn-sane] paper idea: praising smaller packets

Jonathan Morton chromatix99 at gmail.com
Wed Sep 29 06:36:18 EDT 2021


> On 29 Sep, 2021, at 1:15 am, David P. Reed <dpreed at deepplum.com> wrote:
> 
> Now, it is important as hell to avoid bullshit research programs that try to "optimize" utilization of link capacity at 100%. Those research programs focus on the absolute wrong measure - a proxy for "network capital cost" that is in fact the wrong measure of any real network operator's cost structure. The cost of media (wires, airtime, ...) is a tiny fraction of most network operations' cost in any real business or institution. We don't optimize highways by maximizing the number of cars on every stretch of highway, for obvious reasons, but also for non-obvious reasons.

I think it is important to distinguish between core/access networks and last-mile links.  The technical distinction is in the level of statistical multiplexing - high in the former, low in the latter.  The cost structure to the relevant user is also significantly different.

I agree with your analysis when it comes to core/access networks with a high degree of statistical multiplexing.  These networks should be built with enough capacity to service their expected load.  When the actual load exceeds installed capacity for whatever reason, keeping latency low maintains network stability and, with a reasonable AQM, should not result in appreciably reduced goodput in practice.

The relevant user's costs are primarily in the hardware installed at each end of the link (hence minimising complexity in this high-speed hardware is often seen as an important goal), and possibly in the actual volume of traffic transferred, not in the raw capacity of the medium.  All the same, if the medium were cheap, why not just install more of it, rather than spending big on the hardware at each end?  There's probably a good explanation for this that I'm not quite aware of.  Perhaps it has to do with capital versus operational costs.

On a last-mile link, the relevant user is a member of the household that the link leads to.  He is rather likely to be *very* interested in getting the most goodput out of the capacity available to him, on those occasions when he happens to have a heavy workload for it.  He's just bought a game on Steam, for example, and wants to minimise the time spent waiting for multiple gigabytes to download before he can enjoy his purchase.  Assuming his ISP and the Steam CDN have built their networks wisely, his last-mile link will be the bottleneck for this task - and optimising goodput over it becomes *more* important the lower the link capacity is.

A lot of people, for one reason or another, still have links below 50Mbps, and sometimes *much* less than that.  It's worth reminding the gigabit fibre crowd of that, once in a while.

But he may not be the only member of the household interested in this particular link.  My landlord, for example, may commonly have his wife, sister, mother, and four children at home at any given time, depending on the time of year.  Some of the things they wish to do may be latency-sensitive, and they are also likely to be annoyed if throughput-sensitive tasks are unreasonably impaired.  So the goodput of the Steam download is not the only metric of relevance, taken holistically.  And it is certainly not correct to maximise utilisation of the link, as you can "utilise" the link with a whole lot of useless junk, yet make no progress whatsoever.

Maximising an overall measure of network power, however, probably *does* make sense - in both contexts.  The method of doing so naturally differs between them:

1: In core/access networks, ensuring that demand is always met by capacity maximises useful throughput and minimises latency.  This is the natural optimum for network power.

2: It is reasonable to assume that installing more capacity has an associated cost, which exerts downward pressure on how much capacity actually gets installed.  In core/access networks where demand exceeds capacity, throughput is fixed at capacity, so network power is maximised by minimising delays.  This assumes that no individual flow's throughput is unreasonably impaired relative to the others in the process; the "linear product-based fairness index" can be used to detect that (a rough sketch of both metrics follows this list):

	https://en.wikipedia.org/wiki/Fairness_measure#:~:text=Product-based%20Fairness%20Indices

3: In a last-mile link, network power is maximised by maximising the goodput of useful applications, ensuring that all applications have a "fair" share of available capacity (for some reasonable definition of "fair"), and keeping latency as low as reasonably practical while doing so.  This is likely to be associated with high link utilisation when demand is heavy.
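For concreteness, here is a rough Python sketch of the two metrics as I think of them - the formulas are my own shorthand, not anything normative.  Network power is taken as aggregate goodput divided by delay (Kleinrock's classic ratio), and the product-based index is taken as the product of each flow's throughput over its nominal fair share, clamped at 1 so that an over-achieving flow can't mask a starved one.  The numbers are purely illustrative.

    # Rough sketch only; the exact form of the product-based index here
    # (per-flow throughput / fair share, clamped at 1) is my assumption.

    def network_power(goodput_bps, delay_s):
        """Higher is better: more useful throughput per unit of delay."""
        return goodput_bps / delay_s

    def product_fairness(throughputs, fair_shares):
        """Approaches 1 when every flow gets at least its fair share;
        collapses towards 0 if any single flow is starved."""
        index = 1.0
        for got, fair in zip(throughputs, fair_shares):
            index *= min(got / fair, 1.0)
        return index

    # Example: a 50 Mbit/s last-mile link shared by four flows.
    flows = [20e6, 15e6, 10e6, 5e6]           # achieved goodput, bit/s
    fair = [12.5e6] * 4                       # equal shares of 50 Mbit/s
    print(network_power(sum(flows), 0.025))   # 50 Mbit/s of goodput over 25 ms of delay
    print(product_fairness(flows, fair))      # ~0.32: the 5 Mbit/s flow drags it down

The clamp is a deliberate choice: the index only rewards bringing every flow up to its share, not exceeding it.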

> Operating at fully congested state - or designing TCP to essentially come close to DDoS behaviour on a bottleneck to get a publishable paper - is missing the point.

When writing a statement like that, it's probably important to indicate what a "fully congested state" actually means.  Some might take it to mean merely 100% link utilisation, which could actually be part of an optimal network power solution.  From context, I assume you actually mean that the queues are driven to maximum depth and to the point of overflow - or beyond.

 - Jonathan Morton

