[Ecn-sane] per-flow scheduling
David P. Reed
dpreed at deepplum.com
Wed Jul 17 18:34:15 EDT 2019
A follow-up point that I think needs to be made is one more end-to-end argument:
It is NOT the job of the IP transport layer to provide free storage for low-priority packets. The end-to-end argument here says: the ends can and must hold packets until they are either delivered or no longer relevant (in RTP, for example, packets become irrelevant once they are older than their desired delivery time). So the network should not provide the function of storage beyond the minimum needed to deal with transients.
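To make the endpoint-storage point concrete, here is a minimal sketch of a sender-side buffer that holds packets only until they are acknowledged or have aged past their desired delivery time, RTP-style. This is my own illustration; the class and method names are hypothetical, not from any protocol specification:

```python
import time
from collections import deque

class SenderBuffer:
    """Hold undelivered packets at the endpoint; drop any that have
    outlived their usefulness (e.g. past an RTP-style playout deadline)."""

    def __init__(self):
        self.pending = deque()  # entries: (seq, payload, deadline)

    def enqueue(self, seq, payload, lifetime_s):
        # The endpoint, not the network, stores the packet until it is
        # delivered or becomes irrelevant.
        self.pending.append((seq, payload, time.monotonic() + lifetime_s))

    def ack(self, seq):
        # Delivered: no reason to hold it any longer.
        self.pending = deque(p for p in self.pending if p[0] != seq)

    def retransmit_candidates(self):
        now = time.monotonic()
        # Packets past their deadline are irrelevant; stop holding them.
        self.pending = deque(p for p in self.pending if p[2] > now)
        return [p[0] for p in self.pending]
```

The storage and the relevance decision both live at the end host; the network's only job is to move what is still worth moving.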
That means, unfortunately, that the dream of some kind of "background" path that stores "low priority" packets in the network fails the end-to-end argument test.
If you think about this, it even applies to some imaginary interplanetary IP layer network. Queueing delay is not a feature of any end-to-end requirement.
What may be desired at the router/link level in an interplanetary IP network is holding packets because a link is actually down, or using link-level error-correction coding or retransmission to bring the error rate down to an acceptable level before declaring it down. But that's quite different: it's the link-level protocol, which aims to deliver minimum queueing delay under tough conditions, without buffering more than needed for that (the number of bits in flight at the transmission rate over the link's light-speed propagation delay).
So, the main reason I'm saying this is that, again, there are those who want to implement the TCP function of reliable per-packet delivery at the link level. That's a very bad idea.
On Wednesday, July 17, 2019 6:18pm, "David P. Reed" <dpreed at deepplum.com> said:
> I do want to toss in my personal observations about the "end-to-end argument"
> related to per-flow-scheduling. (Such arguments are, of course, a class of
> arguments to which my name is attached. Not that I am a judge/jury of such
> questions...)
>
> A core principle of the Internet design is to move function out of the network,
> including routers and middleboxes, if those functions
>
> a) can be properly accomplished by the endpoints, and
> b) are not relevant to all uses of the Internet transport fabric being used by the
> ends.
>
> The rationale here has always seemed obvious to me. Like Bob Briscoe suggests, we
> were very wary of throwing features into the network that would preclude
> unanticipated future interoperability needs, new applications, and new technology
> in the infrastructure of the Internet as a whole.
>
> So what are we talking about here? (I am ignoring the fine points of SCE, some of
> which I think are debatable - especially the focus on TCP alone, since much traffic
> will likely move away from TCP in the near future.)
>
> A second technical requirement (necessary invariant) of the Internet's transport
> is that the entire Internet depends on rigorously stopping queueing delay from
> building up anywhere except at the endpoints, where the ends can manage it. This is
> absolutely critical, though it is peculiar in that many engineers, especially
> those who work at the IP layer and below, have a mental model of routing as
> essentially being about building up queueing delay (in order to manage priority in
> some trivial way by building up the queue on purpose, apparently).
>
> This second technical requirement cannot be resolved merely by the endpoints.
> The reason is that the endpoints cannot know accurately what host-host paths share
> common queues.
>
> This lack of a way to "cooperate" among independent users of a queue cannot be
> solved by a purely end-to-end solution. (Well, I suppose some genius might invent
> a way, but I have not seen one in my 36 years closely watching the Internet in
> operation since it went live in 1983.)
>
> So, what the end-to-end argument would tend to do here, in my opinion, is to
> provide the most minimal mechanism in the devices that are capable of building up
> a queue in order to allow all the ends sharing that queue to do their job - which
> is to stop filling up the queue!
>
> Only the endpoints can prevent filling up queues. And depending on the protocol,
> they may need to make very different, yet compatible choices.
>
> This is a question of design at the architectural level. And the future matters.
>
> So there is an end-to-end argument to be made here, but it is a subtle one.
>
> The basic mechanism for controlling queue depth has been, and remains, quite
> simple: dropping packets. This has two impacts: 1) immediately reducing queueing
> delay, and 2) signalling to endpoints that are paying attention that they have
> contributed to an overfull queue.
>
> The optimum queueing delay in a steady state would always be one packet or less.
> Kleinrock has shown this in the last few years. Of course there aren't steady
> states. But we don't want a mechanism that can't converge to that steady state
> *quickly*, for all queues in the network.
>
> Another issue is that endpoints are not aware of the fact that packets can take
> multiple paths to any destination. In the future, alternate path choices can be
> made by routers (when we get smarter routing algorithms based on traffic
> engineering).
>
> So again, some minimal kind of information must be exposed to endpoints that will
> continue to communicate. Again, the routers must be able to help a wide variety of
> endpoints with different use cases to decide how to move queue buildup out of the
> network itself.
>
> Now the decision made by the endpoints must be made in the context of information
> about fairness. Maybe this is what is not obvious.
>
> The most obvious notion of fairness is equal shares among (source host, destination
> host) pairs. There are drawbacks to that, but the benefit of it is that it affects the
> IP layer alone, and deals with lots of boundary cases like the case where a single
> host opens a zillion TCP connections or uses lots of UDP source ports or
> destinations to somehow "cheat" by appearing to have "lots of flows".
>
> Another way to deal with dividing up flows is to ignore higher level protocol
> information entirely, and put the flow identification in the IP layer. A 32-bit or
> 64-bit random number could be added as an "option" to IP to somehow extend the
> flow space.
>
> But that is not the most important thing today.
>
> I write this to say:
> 1) some kind of per-flow queueing, during the transient state where a queue is
> overloaded before packets are dropped, would provide much-needed information to the
> ends of every flow sharing a common queue.
> 2) per-flow queueing, minimized to a very low level, using IP envelope address
> information (plus maybe UDP and TCP addresses for those protocols in an extended
> address-based flow definition) is totally compatible with end-to-end arguments,
> but ONLY if the decisions made are certain to drive queueing delay out of the
> router to the endpoints.
>
>
>
>
> On Wednesday, July 17, 2019 5:33pm, "Sebastian Moeller" <moeller0 at gmx.de> said:
>
>> Dear Bob, dear IETF team,
>>
>>
>>> On Jun 19, 2019, at 16:12, Bob Briscoe <ietf at bobbriscoe.net> wrote:
>>>
>>> Jake, all,
>>>
>>> You may not be aware of my long history of concern about how per-flow scheduling
>>> within endpoints and networks will limit the Internet in future. I find per-flow
>>> scheduling a violation of the e2e principle in such a profound way - the dynamic
>>> choice of the spacing between packets - that most people don't even associate it
>>> with the e2e principle.
>>
>> This does not square well with L4S's stated advantage of allowing packet
>> reordering (due to mandating RACK for all L4S TCP endpoints), because surely
>> changing the order of packets messes up "the dynamic choice of the spacing
>> between packets" in a significant way. IMHO, either L4S is great because it
>> gives intermediate hops more leeway to re-order packets, or a sender's packet
>> spacing is sacred; please make up your mind which it is.
>>
>>>
>>> I detected that you were talking about FQ in a way that might have assumed my
>>> concern with it was just about implementation complexity. If you (or anyone
>>> watching) is not aware of the architectural concerns with per-flow scheduling, I
>>> can enumerate them.
>>
>> Please do not hesitate to do so after your deserved holiday, and please state a
>> superior alternative.
>>
>> Best Regards
>> Sebastian
>>
>>
>>>
>>> I originally started working on what became L4S to prove that it was possible to
>>> separate out reducing queuing delay from throughput scheduling. When Koen and I
>>> started working together on this, we discovered we had identical concerns on
>>> this.
>>>
>>>
>>>
>>> Bob
>>>
>>>
>>> --
>>> ________________________________________________________________
>>> Bob Briscoe http://bobbriscoe.net/
>>>
>>> _______________________________________________
>>> Ecn-sane mailing list
>>> Ecn-sane at lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/ecn-sane
>>
>>
>
>
>