General list for discussing Bufferbloat
* [Bloat] Fwd: [ih] Installed base momentum (was Re: Design choices in SMTP)
       [not found]   ` <5bb4686d-b6b7-62e1-305d-e06e4568c374@3kitty.org>
@ 2023-02-14  1:07     ` Dave Taht
  2023-02-14  2:58       ` [Bloat] " Jonathan Morton
  0 siblings, 1 reply; 2+ messages in thread
From: Dave Taht @ 2023-02-14  1:07 UTC (permalink / raw)
  To: bloat

early arpanet reports on congestion control.

---------- Forwarded message ---------
From: Jack Haverty via Internet-history <internet-history@elists.isoc.org>
Date: Mon, Feb 13, 2023 at 12:44 PM
Subject: Re: [ih] Installed base momentum (was Re: Design choices in SMTP)
To: <internet-history@elists.isoc.org>


It seems that I didn't receive some messages over the weekend... sorry
if anyone has already noted what I say below.

Re the ARPANET and Congestion Control:   This was definitely a hot
topic, in particular after DCA took over operations and the network grew
in size.   There were DCA-managed contracts to rework the internal
mechanisms of the ARPANET to handle the much larger and diverse networks
of IMPs that evolved into the multiple IMP-based networks called the
DDN.   Congestion control was just one issue of several that interacted,
e.g., routing, flow control, retransmission, buffer management, etc.
The IMP design, although a "packet network", in effect had a "serial
byte stream" mechanism internally to make sure all data got from source
host to destination.  The ARPANET had the equivalent of parts of a TCP
built inside the IMPs to guarantee the delivery of a data stream.

I'm not sure how much historical detail you'll find in traditionally
published papers and journals.   Outside of academia that wasn't a
priority.  But there were extensive and detailed reports prepared as
part of the ARPANET "operations" contracts and delivered to DCA. Here's
one 3-volume, multi-year example that discusses a lot of the work in the
early 80s on "congestion control" and new internal IMP mechanisms in
general:

https://apps.dtic.mil/sti/citations/ADA053450
https://apps.dtic.mil/sti/citations/ADA086338
https://apps.dtic.mil/sti/citations/ADA121350

There are hundreds of pages of detail in those reports, and there are
others available through DTIC.   I was listed as author on some of
these, because at the time that contract was one of "my" contracts --
which meant that I had to make sure that the report got written and
delivered so we would get paid.   I didn't personally work on the
ARPANET technical research, but I did absorb some understanding of the
issues and details.  The "IMP Group" was literally just down the hall.

At the time (early 1980s), I was involved in the early Internet work,
when TCP/IP V4 was being created and the various flow and congestion
control mechanisms were being defined.  From the ARPANET experience, it
was clear to me that the IMP gurus "down the hall" at BBN viewed
congestion control as a major issue, and that sometimes surfaced as
statements such as "TCP will never work".  TCP didn't address any of the
issues of congestion, except by the rudimentary and unproven mechanism
of "Source Quench".

The expectation was that the Internet would work if congestion was
avoided rather than controlled, which could be attempted by keeping
network capacity above traffic demands, at least long enough that TCP's
retransmission and backoff mechanisms in the hosts would throttle down
as expected to match what the network substrate was capable of carrying
at the time.   Of course those mechanisms were now distributed among the
several hosts and network switches (e.g., IMPs, Packet Radios, computer
OS, gateways) involved, designed, built, and managed by different
organizations, which made it challenging to predict how it would all behave.

Even today, as an end user, I can't tell if "congestion control" is
implemented and working well, or if congestion is just mostly being
avoided by deployment of lots of fiber and lots of buffer memory in all
the switching locations where congestion might be expected. That of
course results in the phenomenon of "buffer bloat".   That's another
question for the Historians.  Has "Congestion Control" in the Internet
been solved?  Or avoided?

Jack Haverty



On 2/13/23 08:19, Craig Partridge via Internet-history wrote:
> On Sat, Feb 11, 2023 at 7:48 AM Noel Chiappa via Internet-history <
> internet-history@elists.isoc.org> wrote:
>
>>
>>      > From: Craig Partridge
>>
>>      > We figured out congestion collapse well enough for the time
>>
>> It should be remembered that the ARPANET people (hi!) had perhaps solved
>> this
>> problem a long time before. I'm trying to remember how explicitly they saw
>> this as a separate problem from the issue of running out of buffer space
>> for
>> message re-assembly at the destination IMP, but I seem to recall that RFNMs
>> were seen as a needed throttle to prevent the network as a whole from being
>> overrun (i.e. what we now think of as 'congestion', although IIRC that term
>> wasn't used then), as well as flow control to the source host (as we would
>> now call it).
>>
>> I don't recall exactly where I saw that, but I'd try the BBN proposal to
>> DARPA's RFP, and the first AFIPS paper ("The interface message processor
>> for
>> the ARPA computer network").
>>
> I don't recall the details either, though I remember stories of Bob Kahn
> going to LA to beat up on the first few ARPANET nodes
> because he anticipated various issues, I think including congestion.  And
> he found them and fixes were made.
>
> But remember ARPANET was homogeneous -- same speed for each link and a
> single control mechanism.  I think John Nagle was
> the first to point out ("On packet switches with infinite storage") that
> connecting very different networks had its own challenges.
> And to my point, not something that a person working with X.25 would have
> understood terribly well (yes X.75 gateways existed but
> they typically throttled the window size to 2 packets, which hid a lot of
> issues).
>
> Craig

--
Internet-history mailing list
Internet-history@elists.isoc.org
https://elists.isoc.org/mailman/listinfo/internet-history


-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC


* Re: [Bloat] [ih] Installed base momentum (was Re: Design choices in SMTP)
  2023-02-14  1:07     ` [Bloat] Fwd: [ih] Installed base momentum (was Re: Design choices in SMTP) Dave Taht
@ 2023-02-14  2:58       ` Jonathan Morton
  0 siblings, 0 replies; 2+ messages in thread
From: Jonathan Morton @ 2023-02-14  2:58 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

> ---------- Forwarded message ---------
> From: Jack Haverty via Internet-history <internet-history@elists.isoc.org>
> 
> Even today, as an end user, I can't tell if "congestion control" is
> implemented and working well, or if congestion is just mostly being
> avoided by deployment of lots of fiber and lots of buffer memory in all
> the switching locations where congestion might be expected. That of
> course results in the phenomenon of "buffer bloat".   That's another
> question for the Historians.  Has "Congestion Control" in the Internet
> been solved?  Or avoided?

It's a good question, and one that shows understanding of the underlying problem.

TCP has implemented a workable congestion control system since the introduction of Reno, and has continued to take congestion control seriously with the newer flavours of Reno (e.g. NewReno, SACK, etc.) and with CUBIC.  Each of these schemes reacts to congestion *signals* from the network: they repeatedly probe gradually for capacity, then back off rapidly when that capacity is evidently exceeded.
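
To make the shape of that behaviour concrete, here is a deliberately minimal sketch of the additive-increase / multiplicative-decrease idea behind Reno, in Python.  The class name and constants are purely illustrative, not taken from any real TCP stack, and real implementations work in bytes and add fast recovery, RTO handling, SACK and so on:

class RenoLikeSender:
    """Minimal AIMD sketch, with the window counted in segments."""
    def __init__(self):
        self.cwnd = 1.0        # congestion window
        self.ssthresh = 64.0   # slow-start threshold

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0              # slow start: roughly doubles every RTT
        else:
            self.cwnd += 1.0 / self.cwnd  # congestion avoidance: ~+1 segment per RTT

    def on_congestion_signal(self):
        # Packet loss or ECN mark: back off multiplicatively.
        self.ssthresh = max(self.cwnd / 2.0, 2.0)
        self.cwnd = self.ssthresh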

Confusingly, this process is called the "congestion avoidance" phase of TCP, to distinguish it from the "slow start" phase which is, equally confusingly, a rapid initial probe for path capacity.  CUBIC's main refinement is that it spends more time near the capacity limit it has found than Reno does, and therefore scales better to modern high-capacity networks.
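
The curve CUBIC uses for that is easy to write down.  The sketch below follows the published CUBIC growth function (RFC 8312) with its default constants, simplified to ignore the TCP-friendly region and fast convergence; the function name and the worked numbers are just for illustration:

C = 0.4      # CUBIC scaling constant (RFC 8312 default)
BETA = 0.7   # multiplicative-decrease factor

def cubic_window(t, w_max):
    """Congestion window (in segments) t seconds after the last loss event."""
    k = ((w_max * (1 - BETA)) / C) ** (1.0 / 3.0)   # time to climb back to w_max
    return C * (t - k) ** 3 + w_max

# With w_max = 100 segments the curve starts at 70 and then gives roughly
# 87, 96, 99, 100, 100, 102, 109 over the following seconds: it regains the
# lost ground quickly, plateaus near the old limit, then probes cautiously.
samples = [round(cubic_window(t, 100)) for t in range(8)]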

In the simplest and most widespread case, a buffer overflows and packets are lost; that loss is interpreted as a congestion signal, and it also triggers the "reliable stream" function of retransmission.  Congestion signals can also be encoded explicitly by the network onto IP packets, in the form of ECN, without requiring packet losses and the consequent retransmissions.
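
Concretely, the explicit signal lives in the two-bit ECN field of the IP header (RFC 3168).  A rough sketch of reading it; the constant and function names here are mine, not from any particular stack:

NOT_ECT = 0b00   # sender not ECN-capable
ECT_1   = 0b01   # ECN-capable transport, codepoint 1
ECT_0   = 0b10   # ECN-capable transport, codepoint 0
CE      = 0b11   # "Congestion Experienced": set by the network instead of dropping

def ecn_codepoint(tos_byte):
    # The ECN field is the two low-order bits of the IPv4 TOS /
    # IPv6 traffic-class byte.
    return tos_byte & 0b11

def congestion_signalled(tos_byte):
    # A CE mark is echoed back to the sender (the ECE flag in TCP),
    # which then backs off as if a loss had occurred - but nothing
    # needs to be retransmitted.
    return ecn_codepoint(tos_byte) == CE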

My take is that *if* networks focus only on increasing link and buffer capacity, then they are "avoiding" congestion - a strategy that only works so long as capacity consistently exceeds load.  However, it has repeatedly been shown in many contexts (not just networking) that increased capacity *stimulates* increased load; the phenomenon is called "induced demand".  In particular, many TCP-based Internet applications are "capacity seeking" by nature, and will *immediately* expand to fill whatever path capacity is made available to them.  If this causes the path latency to exceed about 2 seconds, DNS timeouts can be expected and the user experience will suffer dramatically.
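
The arithmetic behind that latency figure is simple: the standing delay of a full, unmanaged bottleneck buffer is just its size divided by the link rate.  The numbers below are purely illustrative:

def standing_delay_seconds(buffer_bytes, link_bits_per_second):
    # Delay added by a bottleneck buffer that a capacity-seeking flow keeps full.
    return buffer_bytes * 8 / link_bits_per_second

# e.g. a 256 KiB buffer ahead of a 1 Mbit/s uplink adds about 2.1 seconds
# of queuing delay - enough to push DNS into timeout territory.
print(standing_delay_seconds(256 * 1024, 1_000_000))   # ~2.097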

Fortunately, many networks and, more importantly, equipment providers are now learning the value of implementing AQM (to apply congestion signals explicitly, before the buffers are full), or failing that, of sizing the buffers appropriately so that path latency doesn't increase unreasonably before congestion signals are naturally produced.  This allows TCP's sophisticated congestion control algorithms to work as intended.
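
As a flavour of what an AQM actually does, here is a heavily reduced sketch of the CoDel idea: signal congestion only when queue delay stays above a small target for longer than an interval.  The target and interval values follow CoDel's published defaults, but the control law is simplified (real CoDel also spaces successive drops by interval/sqrt(count)) and the class name is mine:

TARGET   = 0.005   # 5 ms: acceptable standing queue delay
INTERVAL = 0.100   # 100 ms: how long delay may exceed TARGET before acting

class CoDelLike:
    def __init__(self):
        self.first_above_time = None

    def should_signal(self, sojourn_time, now):
        """True if this packet should be ECN-marked (or dropped)."""
        if sojourn_time < TARGET:
            self.first_above_time = None            # queue drained: "good queue"
            return False
        if self.first_above_time is None:
            self.first_above_time = now + INTERVAL  # start the grace period
            return False
        return now >= self.first_above_time         # persistent queue: signal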

 - Jonathan Morton



