From: Aaron Wood
Date: Sun, 16 May 2021 14:32:49 -0700
To: Michael Richardson
Cc: Jonathan Morton, bloat
Subject: Re: [Bloat] Terminology for Laypeople

I think the "I Love Lucy" chocolate factory scene is perhaps a good analogy: https://www.youtube.com/watch?v=WmAwcMNxGqM

The chocolates start to come in too fast, and the workers can't keep up, but because they aren't telling the kitchen to slow down, the chocolates keep piling up until it all collapses into a mess.

Except with networks, many of the senders keep sending packets until the receiver says that they've missed one (or three, or whatever it is), and then the sender slows down again. But if a buffer along the path is hoarding packets, that signal to slow down is delayed. And that delay is what creates bufferbloat.

I also like to think of buffers as time. The buffer in front of a link is basically a bucket of time: the size of the buffer divided by the speed of the link.
1MB of buffer, in front of a 10Mbps link, is 800ms: 1,000,000 bytes * 8 bits/byte / 10,000,000 bits/sec => 0.8 seconds.

And so the sender is going to keep sending faster and faster until they go over 10Mbps and start to fill that buffer, and then when they do fill it, they have to resend the missing packets AND cut their sending rate.

If the buffer is large enough (and therefore the delay long enough), the sender "overshoots" by so far that they have to just sit and deal with all the "hey, I missed packets after X" messages from the receiver until everything's caught up, and then they can start going faster again. (We call this congestion collapse, because the sender can't send anything new at all, and only once they've sorted out the state of things with the receiver can they start again, slowly.)

Congestion collapse is the candy factory from the above clip: the mess that needs to be cleaned up before things can start over again (slowly).
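
To make the buffers-as-time arithmetic concrete, here's a quick Python sketch of the same calculation (purely illustrative; the function name is mine):

    def buffer_delay_seconds(buffer_bytes: float, link_bits_per_sec: float) -> float:
        """Worst-case time to drain a completely full buffer through the link."""
        return buffer_bytes * 8 / link_bits_per_sec

    # The example from above: 1 MB of buffer in front of a 10 Mbps link.
    delay = buffer_delay_seconds(1_000_000, 10_000_000)
    print(f"1 MB buffer @ 10 Mbps -> {delay * 1000:.0f} ms of queueing delay")   # 800 ms

    # The same buffer in front of a 1 Gbps link adds almost nothing.
    print(f"1 MB buffer @ 1 Gbps  -> {buffer_delay_seconds(1_000_000, 1e9) * 1000:.0f} ms")  # 8 ms

Same buffer, very different amount of "time", which is why a buffer that's harmless at one link speed becomes bloat at another.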
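
And here's a very rough toy model of that ramp-up / overflow / back-off cycle. This is not real TCP, just a hypothetical sender that speeds up every round trip until drops force it to cut its rate in half; the 100ms base RTT and the 1Mbps-per-RTT ramp are made-up numbers chosen only to keep the printout readable:

    LINK_BPS = 10_000_000      # 10 Mbps link, as in the example above
    BUFFER_BYTES = 1_000_000   # 1 MB of buffer in front of it
    RTT = 0.1                  # assumed base round-trip time, in seconds

    rate_bps = 1_000_000       # the sender starts slow...
    queue_bytes = 0.0

    for step in range(40):
        # How much arrives vs. how much the link can drain during one RTT.
        arrived = rate_bps / 8 * RTT
        drained = LINK_BPS / 8 * RTT
        queue_bytes = max(0.0, queue_bytes + arrived - drained)

        if queue_bytes > BUFFER_BYTES:
            # The buffer overflowed: packets were dropped, so the sender
            # (immediately, in this toy model) cuts its rate in half.
            queue_bytes = BUFFER_BYTES
            rate_bps /= 2
        else:
            # No loss signal yet, so keep speeding up: +1 Mbps per RTT.
            rate_bps += 1_000_000

        delay_ms = queue_bytes * 8 / LINK_BPS * 1000
        print(f"t={step * RTT:4.1f}s  rate={rate_bps / 1e6:6.2f} Mbps  "
              f"queueing delay={delay_ms:6.1f} ms")

Run it and you can watch the queueing delay climb to the full 800ms once the sender overshoots 10Mbps, and because the buffer is so large it never really drains afterwards: the rate saws up and down while the latency stays pinned near the top. That standing delay is the bufferbloat.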
On Sun, May 16, 2021 at 1:44 PM Michael Richardson <mcr@sandelman.ca> wrote:
>
> Jonathan Morton <chromatix99@gmail.com> wrote:
>     > So instead of just loading ready-made bags of firewood into my trailer,
>     > I have to wait for the trimming team to get around to taking the
>     > branches off "my" tree which is waiting behind a dozen others.  The
>     > branches then go into a big stack of branches waiting for the chopping
>     > machine.  When they eventually get around to chopping those, the
>     > firewood is carefully put in a separate pile, waiting for the weighing
>     > and bagging.
>
> Your analogy is definitely the result of optimizing for batches rather than latency.
> (JIT manufacturing in general and much of _The Goal_ talks about the
> business side of this, btw)
>
> But, I don't think that it's a great explanation for grandma.
> The fetching milk analogy is a bit better, but still not great.
>
> John@matrix8, how did it work for you?
>
> Explaining this is pretty important.
>
> (Thanks for the slide Jonathan)
>
> --
> ]               Never tell me the odds!                 | ipv6 mesh networks [
> ]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
> ]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat