Discussion of explicit congestion notification's impact on the Internet
From: Luca Muscariello <luca.muscariello@gmail.com>
To: Jonathan Morton <chromatix99@gmail.com>
Cc: Brian E Carpenter <brian.e.carpenter@gmail.com>,
	"David P. Reed" <dpreed@deepplum.com>,
	 "ecn-sane@lists.bufferbloat.net"
	<ecn-sane@lists.bufferbloat.net>,
	tsvwg IETF list <tsvwg@ietf.org>
Subject: Re: [Ecn-sane] [tsvwg] per-flow scheduling
Date: Sun, 23 Jun 2019 00:03:35 +0200
Message-ID: <CAHx=1M68JBFhwETW5TE0s20tCHVayrw6j2x0kJh6PH18JDUSUA@mail.gmail.com>
In-Reply-To: <71EF351D-AFBF-4C92-B6B9-7FD695B68815@gmail.com>


On Sat 22 Jun 2019 at 22:48, Jonathan Morton <chromatix99@gmail.com> wrote:

> > On 22 Jun, 2019, at 10:50 pm, David P. Reed <dpreed@deepplum.com> wrote:
> >
> > Pragmatic networks (those that operate in the real world) do not choose
> to operate with shared links in a saturated state. That's known in the
> phone business as the Mother's Day problem. You want to have enough
> capacity for the rare near-overload to never result in congestion.
>
> This is most likely true for core networks.  However, I know of several
> real-world networks and individual links which, in practice, are regularly
> in a saturated and/or congested state.
>
> Indeed, the average Internet consumer's ADSL or VDSL last-mile link
> becomes saturated for a noticeable interval, every time his operating
> system or game vendor releases an update.  In my case, I share a 3G/4G
> tower's airtime with whatever variable number of subscribers to the same
> network happen to be in the area on any given day; today, during midsummer
> weekend, that number is considerably inflated compared to normal, and my
> available link bandwidth is substantially impacted as a result, indicating
> congestion.
>
> I did not see anything in your argument specifically about per-flow
> scheduling for the simple purpose of fairly sharing capacity between flows
> and/or between subscribers, and minimising the impact of elephants on
> mice.  Empirical evidence suggests that it makes the network run more
> smoothly.  Does anyone have a concrete refutation?
>
>  - Jonathan Morton



I don’t think you would be able to find a refutation.
Going back for a second to what David and also Brian have said about
diffserv: QoS has proved to be an intractable problem, and I won’t blame
those who have tried to propose solutions that, today, work only under
very special circumstances.  Things have not changed to make that problem
simpler, quite the opposite, mostly because the mix of applications is far
more diverse today, with less predictable traffic patterns.

If I apply the same mindset used in David’s paper, i.e. Occam’s razor,
to derive design principles for a solution that is simple and tractable,
then flow queuing on your DSL link looks like a perfectly acceptable
solution.
And I say that without taking any religious position.
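
For illustration, here is a minimal sketch in Python of the
deficit-round-robin idea behind that kind of flow queuing. It is a toy
model, not the fq_codel code; the quantum value and the flow labels are
assumptions made up for the example. The point is only that a bulk flow
with a deep backlog cannot push a sparse flow's packet to the back of
the line.

    # Toy deficit round robin over per-flow queues (a sketch, not fq_codel itself).
    # Each active flow is granted one quantum of bytes per round, so an
    # "elephant" with a deep backlog cannot starve a "mouse".
    from collections import defaultdict, deque

    QUANTUM = 1514  # bytes granted per flow per round (assumed MTU-sized quantum)

    class DRRScheduler:
        def __init__(self, quantum=QUANTUM):
            self.quantum = quantum
            self.queues = defaultdict(deque)   # flow id -> queued packet sizes
            self.deficit = defaultdict(int)    # flow id -> remaining byte credit
            self.active = deque()              # flows with packets queued, in round order

        def enqueue(self, flow, size):
            if not self.queues[flow]:
                self.active.append(flow)
            self.queues[flow].append(size)

        def dequeue(self):
            """Return (flow, size) for the next packet to send, or None when idle."""
            while self.active:
                flow = self.active[0]
                if self.deficit[flow] <= 0:
                    # credit exhausted: grant a fresh quantum, move on to the next flow
                    self.deficit[flow] += self.quantum
                    self.active.rotate(-1)
                    continue
                if not self.queues[flow]:
                    # flow went idle: drop it from the rotation and reset its credit
                    self.active.popleft()
                    self.deficit[flow] = 0
                    continue
                size = self.queues[flow].popleft()
                self.deficit[flow] -= size
                return flow, size
            return None

    sched = DRRScheduler()
    for _ in range(50):
        sched.enqueue("elephant", 1500)  # bulk flow with 50 packets backlogged
    sched.enqueue("mouse", 100)          # sparse flow with a single small packet
    print([sched.dequeue() for _ in range(4)])
    # the mouse's packet goes out within the first round, not behind the backlog

Nothing here depends on the elephant's congestion control: its backlog
queues against itself, which is where the incentive to behave well
comes from.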

The fact that flow isolation gives sources an incentive to behave well
is good evidence to me.
So is the fact that, even in situations that may look like the law of
the jungle, flow isolation gives me predictable performance.
That is further evidence that the solution is a good one.
In this respect FQ-CoDel (RFC 8290) looks like a simple, useful tool.
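
To make the RFC 8290 reference concrete: fq_codel runs the CoDel
control law (RFC 8289) independently on each flow queue. Below is a
heavily simplified Python sketch of that law, assuming the default
5 ms target and 100 ms interval; it omits details of the RFC pseudocode
(e.g. how the drop count is carried over between episodes) and is only
meant to show that the signal is standing queue delay, not queue
length.

    # Simplified per-queue CoDel state; a sketch, not the RFC 8289 pseudocode.
    import math

    TARGET = 0.005    # 5 ms: acceptable standing-queue (sojourn) delay
    INTERVAL = 0.100  # 100 ms: roughly a worst-case RTT

    class CoDelSketch:
        def __init__(self):
            self.first_above_time = None  # when sojourn delay first stayed above TARGET
            self.dropping = False         # are we in a dropping episode?
            self.drop_next = 0.0          # time of the next scheduled drop
            self.count = 0                # drops so far in this episode

        def should_drop(self, sojourn, now):
            """Decide whether to drop (or CE-mark) the packet dequeued at time
            `now` that spent `sojourn` seconds in this flow's queue."""
            if sojourn < TARGET:
                # queue is draining below target: leave the dropping state
                self.first_above_time = None
                self.dropping = False
                return False
            if self.first_above_time is None:
                # above target, but wait a full INTERVAL before reacting
                self.first_above_time = now + INTERVAL
                return False
            if not self.dropping:
                if now >= self.first_above_time:
                    # delay stayed above TARGET for an INTERVAL: start dropping
                    self.dropping = True
                    self.count = 1
                    self.drop_next = now + INTERVAL / math.sqrt(self.count)
                    return True
                return False
            if now >= self.drop_next:
                # still too much standing delay: drop again, progressively sooner
                self.count += 1
                self.drop_next = now + INTERVAL / math.sqrt(self.count)
                return True
            return False

Because the signal is per-queue sojourn time, a flow queue that only
ever holds a packet or two never sees a drop, while a flow that keeps a
standing queue is asked, progressively harder, to back off.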

