I think the "I Love Lucy" chocolate factory scene is perhaps a good analogy:

https://www.youtube.com/watch?v=WmAwcMNxGqM

The chocolates start coming in too fast and they can't keep up, but because they aren't telling the kitchen to slow down, the chocolates keep piling up until it all collapses into a mess.

Except with networks, many of the senders keep sending packets until the receiver says that they've missed one (or three, or whatever the threshold is), and then the sender slows down again. But if you're hoarding packets in a big buffer, that signal to slow down is delayed. That delay is bufferbloat.

I also like to think of buffers as time. The buffer in front of a link is basically a bucket of time: the size of the buffer divided by the speed of the link. 1 MB of buffer in front of a 10 Mbps link is 800 ms: (1,000,000 bytes) * (8 bits/byte) / (10,000,000 bits/sec) => 0.8 seconds. (There's a small code sketch of this arithmetic at the end of this note.)

And so the sender keeps sending faster and faster until they go over 10 Mbps and start to fill that buffer, and when they do fill it, they have to resend the missing packets AND cut their sending rate.

If the buffer is large enough (and therefore the delay long enough), the sender "overshoots" by so far that they have to just sit and deal with all the "hey, I missed packets after X" messages from the receiver until everything's caught up, and only then can they start going faster again. We call this congestion collapse, because the sender can't send anything new at all, and only once they've sorted out the state of things with the receiver can they start again (slowly).

Congestion collapse is the chocolate factory from the clip above: the mess that has to be cleaned up before things can start over again (slowly).
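And if it helps to see the "buffers are time" arithmetic as code, here's a tiny back-of-envelope sketch in Python. The function name and the extra numbers are just mine, for illustration; the only real formula is bytes * 8 / bits-per-second.

    # Back-of-envelope sketch: how long a full buffer takes to drain,
    # i.e. roughly how much queueing delay it can add.

    def buffer_drain_seconds(buffer_bytes: int, link_bits_per_sec: float) -> float:
        """Time to drain a full buffer: (bytes * 8 bits/byte) / (link bits/sec)."""
        return (buffer_bytes * 8) / link_bits_per_sec

    if __name__ == "__main__":
        # The example above: 1 MB of buffer in front of a 10 Mbps link.
        print(buffer_drain_seconds(1_000_000, 10_000_000))   # 0.8 s = 800 ms

        # Same buffer, faster link: the bucket of time shrinks.
        print(buffer_drain_seconds(1_000_000, 100_000_000))  # 0.08 s = 80 ms

        # Bigger buffer, same slow link: the bucket of time grows.
        print(buffer_drain_seconds(8_000_000, 10_000_000))   # 6.4 s (!)

The point being that the same buffer is a very different amount of time depending on the speed of the link it sits in front of.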
On Sun, May 16, 2021 at 1:44 PM Michael Richardson <mcr@sandelman.ca> wrote:

> Jonathan Morton <chromatix99@gmail.com> wrote:
> > So instead of just loading ready-made bags of firewood into my trailer,
> > I have to wait for the trimming team to get around to taking the
> > branches off "my" tree which is waiting behind a dozen others. The
> > branches then go into a big stack of branches waiting for the chopping
> > machine. When they eventually get around to chopping those, the
> > firewood is carefully put in a separate pile, waiting for the weighing
> > and bagging.
>
> Your analogy is definitely the result of optimizing for batches rather than latency.
> (JIT manufacturing in general and much of _The Goal_ talks about the business side of
> this, btw)
>
> But, I don't think that it's a great explanation for grandma.
> The fetching milk analogy is a bit better, but still not great.
>
> John@matrix8, how did it work for you?
>
> Explaining this is pretty important.
>
> (Thanks for the slide Jonathan)
>
> --
> ] Never tell me the odds!                      | ipv6 mesh networks [
> ] Michael Richardson, Sandelman Software Works | IoT architect      [
> ] mcr@sandelman.ca  http://www.sandelman.ca/   | ruby on rails      [