[Ecn-sane] per-flow scheduling

David P. Reed dpreed at deepplum.com
Wed Jul 17 18:18:37 EDT 2019


I do want to toss in my personal observations about the "end-to-end argument" related to per-flow-scheduling. (Such arguments are, of course, a class of arguments to which my name is attached. Not that I am a judge/jury of such questions...)

A core principle of the Internet design is to move function out of the network, including routers and middleboxes, if those functions

a) can be properly accomplished by the endpoints, and 
b) are not relevant to all uses of the Internet transport fabric being used by the ends.

The rationale here has always seemed obvious to me. Like Bob Briscoe suggests, we were very wary of throwing features into the network that would preclude unanticipated future interoperability needs, new applications, and new technology in the infrastructure of the Internet as a whole. 

So what are we talking about here? (I am ignoring the fine points of SCE, some of which I think are debatable - especially the focus on TCP alone, since much traffic will likely move away from TCP in the near future.)

A second technical requirement (necessary invariant) of the Internet's transport is that the entire Internet depends on rigorously stopping queueing delay from building up anywhere except at the endpoints, where the ends can manage it. This is absolutely critical, though it is peculiar in that many engineers, especially those who work at the IP layer and below, have a mental model of routing as essentially being about building up queueing delay (in order to manage priority in some trivial way by building up the queue on purpose, apparently).

This second technical requirement cannot be resolved merely by the endpoints.
The reason is that the endpoints cannot know accurately which host-host paths share common queues.

This lack of a way to "cooperate" among independent users of a queue cannot be solved by a purely end-to-end solution. (well, I suppose some genius might invent a way, but I have not seen one in my 36 years closely watching the Internet in operation since it went live in 1983.)

So, what the end-to-end argument would tend to do here, in my opinion, is to provide the most minimal mechanism in the devices that are capable of building up a queue in order to allow all the ends sharing that queue to do their job - which is to stop filling up the queue!

Only the endpoints can prevent filling up queues. And depending on the protocol, they may need to make very different, yet compatible choices.

This is a question of design at the architectural level. And the future matters.

So there is an end-to-end argument to be made here, but it is a subtle one.

The basic mechanism for controlling queue depth has been, and remains, quite simple: dropping packets. This has two impacts: 1) immediately reducing queueing delay, and 2) signalling to endpoints that are paying attention that they have contributed to an overfull queue.
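To make the two impacts concrete, here is a minimal sketch (hypothetical, for illustration only) of a bounded drop-tail queue: refusing an arrival when the queue is full both caps the queueing delay in the device and, via the missing packet, signals the sender that it contributed to an overfull queue.

```python
from collections import deque

class DropTailQueue:
    """Illustrative sketch: a bounded FIFO that drops arrivals when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.packets = deque()
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.packets) >= self.capacity:
            self.drops += 1   # impact 2: the attentive sender observes a loss
            return False      # impact 1: delay in this device cannot grow further
        self.packets.append(pkt)
        return True

    def dequeue(self):
        return self.packets.popleft() if self.packets else None

q = DropTailQueue(capacity=3)
results = [q.enqueue(i) for i in range(5)]
# the first three arrivals are accepted, the last two are dropped
```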

The optimum queueing delay in a steady state would always be one packet or less. Kleinrock has shown this in the last few years. Of course there aren't steady states. But we don't want a mechanism that can't converge to that steady state *quickly*, for all queues in the network.

Another issue is that endpoints are not aware of the fact that packets can take multiple paths to any destination. In the future, alternate path choices can be made by routers (when we get smarter routing algorithms based on traffic engineering).

So again, some minimal kind of information must be exposed to endpoints that will continue to communicate. Again, the routers must be able to help a wide variety of endpoints with different use cases to decide how to move queue buildup out of the network itself.

Now the decision made by the endpoints must be made in the context of information about fairness. Maybe this is what is not obvious.

The most obvious notion of fairness is equal shares among (source host, destination host) pairs. There are drawbacks to that, but its benefit is that it involves the IP layer alone, and it deals with many boundary cases - such as a single host opening a zillion TCP connections, or using lots of UDP source ports or destinations, to "cheat" by appearing to have lots of flows.
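The "zillion flows" cheat can be seen in a few lines. The sketch below (hypothetical field names, for illustration) compares bucketing traffic by the usual 5-tuple against bucketing by the host pair alone: ten connections from one host to one destination look like ten flows in the first scheme but one fair share in the second.

```python
from collections import Counter

def bucket_5tuple(pkt):
    # per-connection flow identity: easy to multiply by opening connections
    return (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["proto"])

def bucket_host_pair(pkt):
    # IP-layer-only identity: one share per (source host, dest host) pair
    return (pkt["src"], pkt["dst"])

# one host opens ten connections to the same destination
packets = [{"src": "10.0.0.1", "sport": p, "dst": "10.0.0.2",
            "dport": 80, "proto": "tcp"} for p in range(1000, 1010)]

five_tuple_buckets = len(Counter(bucket_5tuple(p) for p in packets))
host_pair_buckets = len(Counter(bucket_host_pair(p) for p in packets))
```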

Another way to deal with dividing up flows is to ignore higher-level protocol information entirely, and put the flow identification in the IP layer. A 32-bit or 64-bit random number could be added as an "option" to IP to somehow extend the flow space.

But that is not the most important thing today.

I write this to say:
1) some kind of per-flow queueing, during the transient state where a queue is overloaded before packets are dropped, would provide much-needed information to the ends of every flow sharing a common queue.
2) per-flow queueing, minimized to a very low level, using IP envelope address information (plus maybe UDP and TCP addresses for those protocols in an extended address-based flow definition) is totally compatible with end-to-end arguments, but ONLY if the decisions made are certain to drive queueing delay out of the router to the endpoints.
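Point 2 can be sketched minimally as follows (a hypothetical illustration, not any particular deployed scheduler): per-flow FIFOs keyed on address information, served round-robin, with a very small per-flow cap so that any buildup is dropped - pushing the queue back to the endpoint that caused it rather than holding it in the router, and without letting one flooding flow starve its neighbours.

```python
from collections import OrderedDict, deque

class MinimalFQ:
    """Illustrative sketch: per-flow FIFOs keyed on (src, dst [, ports]),
    round-robin service, and a tiny per-flow cap that drives queue
    buildup back to the endpoints."""

    def __init__(self, per_flow_cap=2):
        self.per_flow_cap = per_flow_cap
        self.flows = OrderedDict()   # flow key -> deque of packets

    def enqueue(self, key, pkt):
        q = self.flows.setdefault(key, deque())
        if len(q) >= self.per_flow_cap:
            return False             # drop: signals this flow, not its neighbours
        q.append(pkt)
        return True

    def dequeue(self):
        # serve flows round-robin: take the head flow, rotate it to the tail
        while self.flows:
            key, q = self.flows.popitem(last=False)
            if q:
                pkt = q.popleft()
                if q:
                    self.flows[key] = q   # still backlogged: back of rotation
                return pkt
        return None

fq = MinimalFQ(per_flow_cap=2)
for i in range(4):
    fq.enqueue(("A", "X"), f"a{i}")   # flow A floods: only 2 packets accepted
fq.enqueue(("B", "X"), "b0")          # flow B sends a single packet
order = [fq.dequeue() for _ in range(3)]
# round-robin interleaves: flow B is served before A's second packet
```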




On Wednesday, July 17, 2019 5:33pm, "Sebastian Moeller" <moeller0 at gmx.de> said:

> Dear Bob, dear IETF team,
> 
> 
>> On Jun 19, 2019, at 16:12, Bob Briscoe <ietf at bobbriscoe.net> wrote:
>>
>> Jake, all,
>>
>> You may not be aware of my long history of concern about how per-flow scheduling
>> within endpoints and networks will limit the Internet in future. I find per-flow
>> scheduling a violation of the e2e principle in such a profound way - the dynamic
>> choice of the spacing between packets - that most people don't even associate it
>> with the e2e principle.
> 
> 	This does not rhyme well with the L4S stated advantage of allowing packet
> reordering (due to mandating RACK for all L4S tcp endpoints). Because surely
> changing the order of packets messes up the "the dynamic choice of the spacing
> between packets" in a significant way. IMHO it is either L4S is great because it
> will give intermediate hops more leeway to re-order packets, or "a sender's
> packet spacing" is sacred, please make up your mind which it is.
> 
>>
>> I detected that you were talking about FQ in a way that might have assumed my
>> concern with it was just about implementation complexity. If you (or anyone
>> watching) is not aware of the architectural concerns with per-flow scheduling, I
>> can enumerate them.
> 
> 	Please do not hesitate to do so after your deserved holiday, and please state a
> superior alternative.
> 
> Best Regards
> 	Sebastian
> 
> 
>>
>> I originally started working on what became L4S to prove that it was possible to
>> separate out reducing queuing delay from throughput scheduling. When Koen and I
>> started working together on this, we discovered we had identical concerns on
>> this.
>>
>>
>>
>> Bob
>>
>>
>> --
>> ________________________________________________________________
>> Bob Briscoe                               http://bobbriscoe.net/
>>
>> _______________________________________________
>> Ecn-sane mailing list
>> Ecn-sane at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/ecn-sane
> 
> _______________________________________________
> Ecn-sane mailing list
> Ecn-sane at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/ecn-sane
> 



