From: Michael Welzl <michawe@ifi.uio.no>
To: Sebastian Moeller <moeller0@gmx.de>
Cc: Dave Taht <dave.taht@gmail.com>, bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] [iccrg] Musings on the future of Internet Congestion Control
Date: Sun, 10 Jul 2022 22:01:17 +0200
Message-ID: <95FB54F9-973F-40DE-84BF-90D05A642D6B@ifi.uio.no>
In-Reply-To: <F5C9EFF0-9DEB-4843-A21E-2DB3E9E44483@gmx.de>

Hi !


> On Jul 10, 2022, at 7:27 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
> 
> Hi Michael,
> 
> so I reread your paper and stewed a bit on it.

Many thanks for doing that!  :)


> I believe that I do not buy some of your premises.

you say so, but I don’t really see much disagreement here. Let’s see:


> e.g. you write:
> 
> "We will now examine two factors that make the the present situation particularly worrisome. First, the way the infrastructure has been evolving gives TCP an increasingly large operational space in which it does not see any feedback at all. Second, most TCP connections are extremely short. As a result, it is quite rare for a TCP connection to even see a single congestion notification during its lifetime."
> 
> And you seem to see a problem in the fact that flows might be able to finish their data transfer business while still in slow start. I see the same data, but no problem. Unless we have an oracle that tells each sender (over a shared bottleneck) exactly how much to send at any given point in time, different control loops will interact on those intermediary nodes.

You say that you don’t see the problem; but it is there: capacities are underutilized, which means that flows take longer (sometimes, much longer!) to finish than they theoretically could if we had a better solution.


> I might be limited in my depth of thought here, but having each flow probe for capacity seems exactly the right approach... and doubling CWND or rate every RTT is pretty aggressive already (making slow start shorter by reaching capacity faster within the slow-start framework requires either starting with a higher initial value (which is what increasing IW tries to achieve?) or using a larger increase factor than 2 per RTT). I consider increased IW a milder approach than the alternative. And once one accepts that a gradual rate increase is the way forward, it follows logically that some flows will finish before they reach steady-state capacity, especially if a flow’s available capacity is large. So what exactly is the problem with short flows not reaching capacity, and what alternative exists that does not lead to carnage if more-aggressive start-up phases drive the bottleneck load into emergency-drop territory?

There are various ways to do this; one is to cache information and re-use it, assuming that - at least sometimes - new flows will see the same path again. Another is to let parallel flows share information. Yet another is to just be blindly more aggressive. Yet another is chirping.
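
To put numbers on why short flows so often end inside slow start (a back-of-the-envelope sketch with made-up example values, not something from our paper): with initial window IW and doubling every RTT, a flow needs about log2(BDP/IW) RTTs before it can fill the pipe, and everything it sends before that point is below capacity:

    import math

    def rtts_to_fill(bdp_segments, iw=10):
        # RTTs of slow-start doubling before cwnd reaches the BDP
        return max(0, math.ceil(math.log2(bdp_segments / iw)))

    def volume_before_full(bdp_segments, iw=10):
        # segments sent while still below capacity (geometric series)
        return iw * (2 ** rtts_to_fill(bdp_segments, iw) - 1)

    # hypothetical path: 100 Mbit/s, 50 ms RTT, 1500-byte segments
    bdp = int(100e6 * 0.05 / (1500 * 8))  # ~416 segments
    print(rtts_to_fill(bdp))              # 6 RTTs just to reach capacity
    print(volume_before_full(bdp))        # 630 segments (~0.9 MB) sent below it

On this example path, any flow shorter than roughly a megabyte is done before it ever sees the capacity - which is why caching, sharing between flows, or chirping are on the table at all.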


> And as an aside, a PEP (performance-enhancing proxy) that does not enhance performance is useless at best and likely harmful (rather a PDP, a performance-degrading proxy).

You’ve made it sound worse by changing the term, for whatever that’s worth. If they never help, why has anyone ever called them PEPs in the first place? Why do people buy these boxes?


> The network so far has been doing reasonably well with putting more protocol smarts at the ends than in the parts in between.

Truth is, PEPs are used a lot: at cellular edges, at satellite links…   because the network is *not* always doing reasonably well without them.


> I have witnessed the arguments in the "L4S wars" about how little processing one can ask the more central network nodes to perform; e.g., flow queueing, which would solve a lot of the issues (a hyper-aggressive slow-start flow would mostly hurt itself if it overshoots the capacity), seems to be a complete no-go.

That’s to do with scalability, which depends on how close to the network’s edge one is.
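
The isolation property itself is real enough, though; here is a toy sketch (my own illustration, not fq_codel or any deployed scheduler) of why an overshooting flow under flow queueing mostly hurts itself:

    from collections import deque

    class ToyFQ:
        # one FIFO per flow, served round-robin, with a per-flow limit
        def __init__(self, per_flow_limit=4):
            self.queues = {}          # flow_id -> deque of packets
            self.active = deque()     # round-robin order of backlogged flows
            self.limit = per_flow_limit

        def enqueue(self, flow_id, pkt):
            q = self.queues.setdefault(flow_id, deque())
            if len(q) >= self.limit:  # only the overshooting flow's own
                return False          # packets are dropped
            if not q:
                self.active.append(flow_id)
            q.append(pkt)
            return True

        def dequeue(self):
            if not self.active:
                return None
            flow_id = self.active.popleft()
            pkt = self.queues[flow_id].popleft()
            if self.queues[flow_id]:  # still backlogged: back of the line
                self.active.append(flow_id)
            return flow_id, pkt

    fq = ToyFQ()
    drops = sum(not fq.enqueue("greedy", i) for i in range(10))
    fq.enqueue("polite", "p0")
    print(drops)                                # 6: greedy's overshoot, greedy's loss
    print([fq.dequeue()[0] for _ in range(3)])  # ['greedy', 'polite', 'greedy']

The polite flow still gets served promptly; what is contested is only whether keeping that per-flow state scales at nodes far from the edge.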


> I personally think what we should do is have the network supply more information to the end points so they can control their behavior better. E.g., if we mandated a max_queue-fill-percentage field in a protocol header and had each node write max(current_value_of_the_field, queue-filling_percentage_of_the_current_node) into every packet, end points could estimate how close to congestion the path is (e.g., by looking at the rate at which the queueing percentage changes) and tailor their growth/shrinkage rates accordingly, both during slow start and during congestion avoidance.

That could well be one way to go. Nice if we provoked you to think!
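
To make the mechanism concrete, a minimal sketch (the field name, its width, and the endpoint policy are all hypothetical, purely to illustrate what you describe):

    QUEUE_FILL_MAX = 255                  # assume an 8-bit header field

    def forward(pkt, local_queue_fill_pct):
        # per-hop rule: field = max(field, this node's own queue fill)
        local = int(local_queue_fill_pct / 100 * QUEUE_FILL_MAX)
        pkt["queue_fill"] = max(pkt.get("queue_fill", 0), local)
        return pkt

    def growth_factor(prev_fill, curr_fill):
        # toy endpoint policy: double (slow start) while the path is empty,
        # shrink growth as the bottleneck fill and its rate of change rise
        fill = curr_fill / QUEUE_FILL_MAX
        trend = max(0.0, (curr_fill - prev_fill) / QUEUE_FILL_MAX)
        return max(1.0, 2.0 - fill - trend)

    pkt = {"payload": b"..."}
    for hop_fill in (5.0, 60.0, 30.0):    # three hops; bottleneck at 60%
        pkt = forward(pkt, hop_fill)
    print(pkt["queue_fill"])              # 153: the 60% bottleneck dominates
    print(growth_factor(40, 153))         # 1.0: fill rising fast, stop growing

The sender sees the path maximum, not just a bit, so it can modulate its growth rate per RTT instead of merely reacting after the fact.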


> But alas, we seem to be going down the path of a relatively dumb 1-bit signal that gives us an under-defined queue-filling state instead, and to estimate relative queue-filling dynamics from that we need many samples (so literally too little, too late, or L3T2), but I digress.

Yeah you do  :-)
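
Though, to concede your point about samples: with a 1-bit mark, an endpoint can only recover a queue estimate by averaging, DCTCP-style, so tracking queue *dynamics* really does take many packets. A little sketch (made-up numbers):

    def ewma_mark_fraction(per_round_mark_fracs, g=1/16):
        # DCTCP-style EWMA of the fraction of CE-marked packets;
        # with gain g it takes on the order of 1/g rounds to track a change
        alpha, history = 0.0, []
        for frac in per_round_mark_fracs:
            alpha = (1 - g) * alpha + g * frac
            history.append(alpha)
        return history

    # queue jumps from unmarked to 50% marked at round 10
    est = ewma_mark_fraction([0.0] * 10 + [0.5] * 30)
    print(round(est[10], 3))   # 0.031: one round after the jump, barely visible
    print(round(est[25], 3))   # 0.322: sixteen rounds later, still converging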

Cheers,
Michael

