Date: Sat, 22 Jun 2019 15:50:09 -0400 (EDT)
From: "David P. Reed" <dpreed@deepplum.com>
To: "Brian E Carpenter" <brian.e.carpenter@gmail.com>
Cc: "Luca Muscariello", "Sebastian Moeller", ecn-sane@lists.bufferbloat.net, "tsvwg IETF list"
Subject: Re: [Ecn-sane] [tsvwg] per-flow scheduling

Two points:

- Jerry Saltzer and I were the primary authors of the End-to-end argument paper, and the motivation was based on *my* work on the original TCP and IP protocols. Dave Clark got involved significantly later, after those decisions were basically complete. (Jerry was my thesis supervisor, I was his student, and I operated largely independently, taking input from various others at MIT.) I mention this because Dave understands the end-to-end arguments, but he also understands (as we all did) that it was a design *principle* and not a perfectly strict rule.
That said, it's a rule that has a strong foundational argument from modularity and evolvability in a context where the system has to work on a wide range of infrastructures (not all knowable in advance) and support a wide range of usage and application areas (not all knowable in advance). Treating the paper as if it were "DDC" declaring a law is just wrong. He wasn't Moses, and it is not written on tablets. Dave did have some "power" in his role of trying to achieve interoperability across diverse implementations, but his focus was primarily on interoperability, not other things. So ideas in the IP protocol like "TOS", which were largely placeholders for concepts that had not been completely worked out, were deferred and left till later.

- It is clear (at least to me) that, from the point of view of the source of an IP datagram, the "handling" of that datagram within the network of networks can vary, and that is why there is a TOS field - to specify an interoperable, meaningfully described per-packet indicator of differential handling. With regard to the end-to-end argument, that handling choice is a network function, *to the extent that it can be implemented completely in the network itself*.

Congestion management, however, is not achievable entirely and only within the network. That's completely obvious: congestion happens when the source-destination flows exceed the capacity of the network of networks to satisfy all demands.

The network can only implement *certain* general kinds of mechanisms that may be used by the endpoints to resolve congestion:

1) Admission controls. These are implemented at the interface between the source entity and the network of networks. They tend to be impractical in the Internet context because there is, by a fundamental and irreversible design choice made by Cerf and Kahn (and the rest of us), no central controller of the entire network of networks. That choice is what makes evolvability and scalability work. 5G (not an Internet system) implies a central controller, as do SNA, LTE, and many other networks. The Internet is an overlay on top of such networks.

2) Signalling congestion to the endpoints, which respond by slowing their transmission rate (or explicitly re-routing transmission, or compressing their content) through the network to match capacity. This response is done *above* the IP layer, and has proven very practical. The function in the network is reduced to "congestion signalling", via universally understood, meaningful mechanisms: packet drops, ECN, packet-pair separation in arrival time, and so on. This limited function is essential within the network, because it is the state of the path(s) that is needed to implement the full function at the endpoints. So congestion signalling, like ECN, is implemented according to the end-to-end argument by carefully defining the network function to be the minimum necessary mechanism so that endpoints can control their rates. (A toy sketch of the endpoint side of this division of labour follows this list.)

3) Automatic selection of routes for flows. It's perfectly fine to select different routes based on information in the IP header (the part that is intended to be read and understood by the network of networks). This is currently *rarely* done, due to the complexity of tracking more detailed routing information at the router level, but we had expected that eventually the Internet would be so well connected that there would be diverse routes with diverse capabilities. For example, the "Interplanetary Internet" works with datagrams, which can be carried over IP, but not with TCP, which requires very low end-to-end latency. Thus, one would expect that TCP would not want any packets transferred over a path via Mars, or for that matter a geosynchronous satellite, even if the throughput would be higher.
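To make the division of labour in point 2 concrete, here is a toy sketch in Python (purely illustrative; the once-per-RTT update model and the step/factor values are my own simplifications, not anything standardized). The network's only job is to emit the signal - a drop or an ECN mark - and the endpoint owns the response, here the familiar AIMD rule:

def aimd_update(cwnd: float, congestion_signalled: bool,
                add_step: float = 1.0, multiply_factor: float = 0.5,
                floor: float = 1.0) -> float:
    """Return the sender's next congestion window, in packets."""
    if congestion_signalled:                       # drop, ECN CE mark, delay inference, ...
        return max(floor, cwnd * multiply_factor)  # multiplicative decrease
    return cwnd + add_step                         # additive increase, once per RTT

if __name__ == "__main__":
    cwnd = 10.0
    # Toy trace: the network signals congestion on the 5th and 9th "RTTs" only.
    signals = [False, False, False, False, True, False, False, False, True, False]
    for rtt, signalled in enumerate(signals, start=1):
        cwnd = aimd_update(cwnd, signalled)
        print(f"RTT {rtt:2d}: {('signal' if signalled else 'ok'):6s}  cwnd = {cwnd:4.1f}")

Note how little the network needs to know: it never sees the congestion window, RTT estimates, or application goals; it only marks or drops.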
Coming back to route selection, one can imagine that eventually a "TOS" might say "send this packet preferably along a path that has at most 200 ms RTT, *even if that leads to congestion signalling*", while another TOS might say "send this packet over the most capacious set of paths, ignoring RTT entirely". (These are just for illustration, but obviously something like this would work.)

Note that TOS is really aimed at *route selection* preferences, and not at queueing management of individual routers.
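The plumbing for an endpoint to express such a per-packet preference has existed all along. Here is a minimal sketch (again illustrative only: IP_TOS is exposed by the socket API on most Unix-like systems, the DSCP value 46/EF is just an example, and nothing obliges any network to honour the mark) of an application marking its packets:

import socket

def make_marked_udp_socket(dscp: int) -> socket.socket:
    """Create a UDP socket whose outgoing packets carry the given DSCP codepoint."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tos_byte = (dscp & 0x3F) << 2  # DSCP is the upper six bits of the old TOS byte; the low two are ECN
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
    return sock

if __name__ == "__main__":
    latency_sensitive = make_marked_udp_socket(dscp=46)  # EF - roughly "prefer low delay"
    bulk_transfer = make_marked_udp_socket(dscp=0)       # default best effort
    latency_sensitive.sendto(b"ping", ("127.0.0.1", 9999))
    bulk_transfer.sendto(b"bulk chunk", ("127.0.0.1", 9999))

Whether a router on the path treats such a mark as a route-selection preference, as a queueing hint, or as nothing at all is exactly the interoperability question discussed above.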
Queueing management to share a single queue on a path among multiple priorities of traffic is not very compatible with the end-to-end arguments. There are any number of reasons why this doesn't work well; I can go into them. Mainly, these reasons are why "diffserv" has never been adopted - it's NOT interoperable, because the diversity of traffic between endpoints is hard to specify in a way that translates into the network mechanisms. Of course any queue can be managed in some algorithmic way with parameters, but the endpoints that want to specify an end-to-end goal don't have a way to understand the impact of those parameters on a specific queue that is currently congested.

Instead, the history of the Internet (and for that matter *all* networks, even Bell's voice systems) has focused on minimizing queueing delay to near zero throughout the network, by whatever means it has at the endpoints or in the design. This is why we have AIMD's multiplicative decrease as the response to detection of congestion.

Pragmatic networks (those that operate in the real world) do not choose to operate with shared links in a saturated state. That's known in the phone business as the Mother's Day problem. You want enough capacity that the rare near-overload never results in congestion. Which means that the normal state of the network is very lightly loaded indeed, in order to minimize RTT. Consequently, focusing on somehow trying to optimize the utilization of the network to 100% is a purely academic exercise. Since "priority" at the packet level within a queue only improves that case, it's just a focus of (bad) Ph.D. theses. (Good Ph.D. theses focus on actual real problems, like getting the queues down to 1 packet or less by signalling the endpoints with information that allows them to do their job.)

So, what goes in the IP layer - both its header and the mechanics of the network of networks - is those things that actually have implementable meaning in the network of networks when processing the IP datagram. The rest is "content", because the network of networks doesn't need to see it.

Thus, don't put anything in the IP header that belongs in the "content" part, merely being a signal between endpoints. (Some information used in the network of networks is also, logically, carried between endpoints.)


On Friday, June 21, 2019 4:37pm, "Brian E Carpenter" <brian.e.carpenter@gmail.com> said:

> Below...
> On 21-Jun-19 21:33, Luca Muscariello wrote:
> > + David Reed, as I'm not sure he's on the ecn-sane list.
> >
> > To me, it seems like a very religious position against per-flow queueing.
> > BTW, I fail to see how this would violate (in a "profound" way) the e2e principle.
> >
> > When I read it (the e2e principle)
> >
> > Saltzer, J. H., D. P. Reed, and D. D. Clark (1981) "End-to-End Arguments in System Design".
> > In: Proceedings of the Second International Conference on Distributed Computing Systems. Paris, France.
> > April 8-10, 1981. IEEE Computer Society, pp. 509-512.
> > (available on line for free).
> >
> > It seems very much like the application of Occam's razor to function placement in communication networks back in the 80s.
> > I see no conflict between what is written in that paper and per-flow queueing today, even after almost 40 years.
> >
> > If that was the case, then all service differentiation techniques would violate the e2e principle in a "profound" way too,
> > and dualQ too. A policer? A shaper? A priority queue?
> >
> > Luca
>
> Quoting RFC 2638 (the "two-bit" RFC):
>
> >>> Both these
> >>> proposals seek to define a single common mechanism that is used by
> >>> interior network routers, pushing most of the complexity and state of
> >>> differentiated services to the network edges.
>
> I can't help thinking that if DDC had felt this was against the E2E principle,
> he would have kicked up a fuss when it was written.
>
> Bob's right, however, that there might be a tussle here. If end-points are
> attempting to pace their packets to suit their own needs, and the network is
> policing packets to support both service differentiation and fairness,
> these may well be competing rather than collaborating behaviours. And there
> probably isn't anything we can do about it by twiddling with algorithms.
>
> Brian
>
> > On Fri, Jun 21, 2019 at 9:00 AM Sebastian Moeller <moeller0@gmx.de> wrote:
> >
> > > On Jun 19, 2019, at 16:12, Bob Briscoe <ietf@bobbriscoe.net> wrote:
> > >
> > > Jake, all,
> > >
> > > You may not be aware of my long history of concern about how per-flow
> > > scheduling within endpoints and networks will limit the Internet in future.
> > > I find per-flow scheduling a violation of the e2e principle in such a
> > > profound way - the dynamic choice of the spacing between packets - that
> > > most people don't even associate it with the e2e principle.
> >
> > Maybe because it is not a violation of the e2e principle at all? My point is
> > that with shared resources between the endpoints, the endpoints simply should
> > have no expectation that their choice of spacing between packets will be
> > conserved, for the simple reason that it seems generally impossible to
> > guarantee that inter-packet spacing is conserved (think "cross-traffic" at the
> > bottleneck hop along the path, and general bunching up of packets in the queue
> > of a fast-to-slow transition*). I also would claim that the way L4S works (if
> > it works) is to synchronize all active flows at the bottleneck, which in turn
> > means each sender has only a very small time window in which to transmit a
> > packet for it to hit its "slot" in the bottleneck L4S scheduler; otherwise,
> > L4S's low queueing delay guarantees will not work. In other words, the senders
> > have basically no say in the "spacing between packets"; I fail to see how L4S
> > improves upon FQ in that regard.
> >
> > IMHO having per-flow fairness as the default seems quite reasonable; endpoints
> > can still throttle flows to their liking. Now per-flow fairness still can be
> > "abused", so by itself it might not be sufficient, but neither is L4S, as it
> > has at best stochastic guarantees: as a single-queue AQM (let's ignore the
> > RFC 3168 part of the AQM) there is the probability of sending a throttling
> > signal to a low-bandwidth flow (fair enough, it is only a mild throttling
> > signal, but still).
> > But enough about my opinion: what is the ideal fairness measure in your mind,
> > and what is realistically achievable over the internet?
> >
> > Best Regards
> >         Sebastian
> >
> > > I detected that you were talking about FQ in a way that might have assumed
> > > my concern with it was just about implementation complexity. If you (or
> > > anyone watching) is not aware of the architectural concerns with per-flow
> > > scheduling, I can enumerate them.
> > >
> > > I originally started working on what became L4S to prove that it was possible
> > > to separate out reducing queuing delay from throughput scheduling. When Koen
> > > and I started working together on this, we discovered we had identical
> > > concerns on this.
> > >
> > > Bob
> > >
> > > --
> > > Bob Briscoe                               http://bobbriscoe.net/