Discussion of explicit congestion notification's impact on the Internet
From: "David P. Reed" <dpreed@deepplum.com>
To: "Jonathan Morton" <chromatix99@gmail.com>
Cc: "Sebastian Moeller" <moeller0@gmx.de>,
	"ecn-sane@lists.bufferbloat.net" <ecn-sane@lists.bufferbloat.net>,
	"Bob Briscoe" <ietf@bobbriscoe.net>,
	"tsvwg IETF list" <tsvwg@ietf.org>
Subject: Re: [Ecn-sane] per-flow scheduling
Date: Thu, 18 Jul 2019 11:52:22 -0400 (EDT)	[thread overview]
Message-ID: <1563465142.686423080@apps.rackspace.com> (raw)
In-Reply-To: <368B308A-C17C-42C6-AA37-F48DF6BC06AA@gmail.com>



On Thursday, July 18, 2019 12:31am, "Jonathan Morton" <chromatix99@gmail.com> said:

>> On 18 Jul, 2019, at 1:18 am, David P. Reed <dpreed@deepplum.com> wrote:
>>
>> So what are we talking about here (ignoring the fine points of SCE, some of which
>> I think are debatable - especially the focus on TCP alone, since much traffic
>> will likely move away from TCP in the near future.
> 
> As a point of order, SCE is not specific to TCP.  TCP is merely the most
> convenient congestion-aware protocol to experiment with, and therefore the one we
> have adapted first.  Other protocols which already are (or aspire to be) TCP
> friendly, especially QUIC, should also be straightforward to adapt to SCE.
>
I agree that it is important to show that SCE in the IP layer can be interpreted by each congestion management system, and TCP is a major one. Ideally there would be a general theory that is protocol- and use-case-agnostic, so that the functions in IP are helpful both within particular protocols and also in the very important case of interactions between different coexisting protocols. I believe SCE can be structured to serve that purpose as more and more UDP-based protocols generate more and more traffic.
When we designed UDP, we deliberately decided NOT to put congestion management into it, for two reasons:
1) we didn't yet have a good congestion management approach for TCP, and
2) the major use cases that argued for UDP (packet speech in particular, but also many computer-to-computer interactions on LANs, such as Remote Procedure Calls) were known to require approaches to congestion management beyond basic packet drop (such as rate management via compression variation).
UDP was part of the design choice to allow end-to-end agreement about congestion management implementation.
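To make the second reason concrete, here is a toy sketch (not from the original message, and with purely hypothetical names and thresholds) of the kind of "rate management via compression variation" a UDP media sender can do: instead of retransmitting or backing off a window, it steps its codec to a harsher compression level when loss indicates congestion, and steps back up when the path is clean.

```python
# Illustrative sketch only: a UDP speech sender adapting its codec bitrate
# to observed loss, rather than relying on packet-drop-driven retransmission.
# The rate ladder and loss thresholds are invented for illustration.

BITRATES = [64_000, 32_000, 16_000, 8_000]  # codec rates in bits/s, best first

def adapt_bitrate(current_index: int, loss_fraction: float) -> int:
    """Return the next index into BITRATES given the recent loss fraction."""
    if loss_fraction > 0.05 and current_index < len(BITRATES) - 1:
        return current_index + 1   # congested: compress harder, send fewer bits
    if loss_fraction < 0.01 and current_index > 0:
        return current_index - 1   # path clean: spend more bits on quality
    return current_index           # in between: hold steady
```

The point is that the congestion response lives entirely in the application, which is exactly the end-to-end agreement UDP was designed to permit.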
We now have a very complex protocol, thanks to the WWW, that works imperfectly over TCP. Thus a new UDP-based protocol framework has been proposed, and it will be quite heavily used in access networks that need congestion management, at both the server end and the client end.
And we have heavy use of media streaming (though it matches TCP adequately well, being file-transfer-like due to buffering at the receiving end). 

Google and others are working hard to transition entirely away from HTTP/TCP to HTTP/QUIC/UDP. This transition will be concurrent with, if not prior to, SCE integration into IP. I would hope that QUIC could use SCE to great advantage, especially in helping two competing uses of the same bottleneck path coexist without queueing delay.

That's the case that matters to me, along with RTP and other uses. From browser-level monitoring, we already see many landing web pages open HTTP requests to hundreds of different server hosts concurrently. Yes, that is hundreds of requests for one - count it, one - click.

This is not a bad thing. The designers of the Internet should not be allowed to say that it is wrong, because it isn't wrong - it's exactly what the Internet is supposed to be able to do! However, the browser or its host must have the information needed to avoid queue overflow in this new protocol. That requires a useful signal like SCE.
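For readers unfamiliar with the proposal: SCE reuses the two-bit ECN field in the IP header, reinterpreting the ECT(1) codepoint as "Some Congestion Experienced" while leaving CE with its classic meaning. A hedged sketch of how a receiver might classify that field (function and label names are illustrative, not from any implementation):

```python
# Sketch of interpreting the IP header's 2-bit ECN field under the SCE
# proposal. Binary 01 (classically ECT(1)) becomes the "some congestion"
# signal; 11 (CE) remains the hard congestion mark.

def classify_ecn(ecn_bits: int) -> str:
    return {
        0b00: "Not-ECT",  # sender not ECN-capable
        0b10: "ECT(0)",   # ECN-capable, no congestion signal
        0b01: "SCE",      # some congestion experienced (SCE's reuse of ECT(1))
        0b11: "CE",       # congestion experienced (classic ECN mark)
    }[ecn_bits & 0b11]
```

The attraction for QUIC and other UDP-based protocols is that this signal is carried in IP, so any transport that can read it gains an early, high-resolution congestion indication.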

It also, I believe, means that arbitration based on "flows" matters. So per-flow interactions matter. I don't know for certain, but I believe that when lots of browsers end up sharing a bottleneck link/queue, per-flow scheduling may help a reasonable amount, primarily by preventing starvation of resources. (In the scheduling of parallel computing, we call that "prevention of livelock". And when you have a hundred processors on a computer - which is what my day job supports - you get livelock ALL the time if you don't guarantee that every contender on a resource gets a chance.) What does NOT matter, or at least not much, is some complex (intserv/diffserv) differentiation at the router level.
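The starvation-prevention point above can be sketched with a minimal round-robin per-flow scheduler: every backlogged flow gets a turn before any flow gets a second one, so a single heavy flow cannot lock out the others. This is a toy illustration, not any particular qdisc or the behavior of a real router.

```python
# Toy per-flow round-robin scheduler: one queue per flow, served in
# rotation. A flow with remaining packets goes to the back of the line
# after each turn, so no flow is starved however large its backlog.
from collections import OrderedDict, deque

class PerFlowScheduler:
    def __init__(self):
        self.queues = OrderedDict()  # flow_id -> deque of packets

    def enqueue(self, flow_id, packet):
        self.queues.setdefault(flow_id, deque()).append(packet)

    def dequeue(self):
        """Serve the flow at the head of the rotation, then rotate it."""
        if not self.queues:
            return None
        flow_id, q = next(iter(self.queues.items()))
        packet = q.popleft()
        del self.queues[flow_id]
        if q:                        # still backlogged: requeue at the tail
            self.queues[flow_id] = q
        return packet
```

With three packets queued for flow A and one for flow B, the service order is a1, b1, a2, a3: B's lone packet is not stuck behind A's backlog, which is the "every contender gets a chance" guarantee in miniature.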


> I should also note that TCP is the de-facto gold standard, by which all other
> congestion control is measured, for better or worse.  SCE is included in this,
> insofar as competing reasonably with standard TCP flows under all reasonable
> network conditions is necessary to introduce a new congestion control paradigm. 
> This, I think, is also part of the end-to-end principle.
> 
>  - Jonathan Morton



