* [Bloat] First draft of complete "Bufferbloat And You" enclosed.
@ 2011-02-05 13:23 Eric Raymond
2011-02-05 13:42 ` Jim Gettys
` (5 more replies)
0 siblings, 6 replies; 22+ messages in thread
From: Eric Raymond @ 2011-02-05 13:23 UTC (permalink / raw)
To: bloat
[-- Attachment #1: Type: text/plain, Size: 2466 bytes --]
I consider this draft coverage-complete for the basic introduction I was
aiming at. Suggestions from dtaht5 and jg have been incorporated where
appropriate. Critique and correct, but try not to make it longer. I'm a
bit unhappy about the length and may actually try to cut it.
You will note that the description of network failure modes is
somewhat broader than in jg's talk. So is the section on why QoS
fails to address the problem. This is me putting on my
system-architect head and doing original analysis; if you think I have
misunderstood the premises or reasoned about them incorrectly, tell
me.
Please fix typos and outright grammatical errors. If you think you have spotted
a higher-level usage problem or awkwardness, check with me before changing it.
What you think is technically erroneous may be expressive voice.
Explanation: Style is the contrast between expectation and surprise.
Poets writing metric poetry learn to introduce small breaks in
scansion in order to induce tension-and-release cycles at a higher
level that will hold the reader's interest. The corresponding prose
trick is to bend usage rules or change the register of the writing
slightly away from what the reader unconsciously expects. If you try
to "fix" these you will probably be stepping on an intended effect.
So check first.
(I will also observe that unless you are already an unusually skilled
writer, you should *not* try to replicate this technique; the risk of
sounding affected or just teeth-jarringly bad is high. As Penn &
Teller puts it, "These stunts are being performed by trained,
*professional* idiots.")
Future directions: unless somebody stops me, I'm going to reorganize
what wiki docs there are around this thing. The basic idea is to make this
the page new visitors naturally land on *first*, with embedded
hotlinks to the more specialized stuff.
Explanation: Outlines and bulleted lists of stuff are deadly. They're
great for reference, but they scream "too much; don't read" to people
first trying to wrap their heads around a topic. Narrative
introductions with hotlinks are both less threatening and more
effective. The main reason they're not used more is that most people
find them quite hard to write. I don't.
If I decide I need to cut the length, I'll push some subsections down
to linked subpages.
I haven't learned Textile yet. I'll probably get to that this weekend.
--
<a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
[-- Attachment #2: fordummies.txt --]
[-- Type: text/plain, Size: 11069 bytes --]
Bufferbloat is a huge drag on Internet performance created,
ironically, by previous attempts to make it work better.
The bad news is that bufferbloat is everywhere, in more devices and
programs than you can shake a stick at. The good news is, bufferbloat
is relatively easy to fix. The even better news is that fixing it may
solve a lot of the service problems now addressed by bandwidth caps
and metering, making the Internet faster and less expensive for both
consumers and providers.
== Packets on the Highway ==
To fix bufferbloat, you first have to understand it. Start by
imagining cars traveling down an imaginary road. They're trying to get
from one end to the other as fast as possible, so they travel nearly
bumper to bumper at the road's highest safe speed.
Our "cars" are standing in for Internet packets, of course, and our
road is a network link. The 'bandwidth' of the link is like the total
amount of stuff the cars can carry from one end to the other per
second; the 'latency' is like the amount of time it takes any given
car to get from one end to the other.
One of the problems road networks have to cope with is traffic
congestion. If too many cars try to use the road at once, bad things
happen. One of those bad things is cars running off the road and
crashing. The Internet analog of this is called 'packet loss'. We
want to hold it to a minimum.
There's an easy way to attack a road congestion problem that's not
actually used much because human drivers hate it. That's to interrupt
the road with a parking lot. A car drives in, waits to be told when it
can leave, and then drives out. By controlling the timing and rate at
which you tell cars they can leave, you can hold the number of cars on
the road downstream of the lot to a safe level.
For this technique to work, cars must enter the parking lot without
slowing down; otherwise you'd cause a backup on the upstream side of the
lot. Real cars can't do that, but Internet packets can, so please
think of this as a minor bug in the analogy and then ignore it.
The other thing that has to be true is that the lot doesn't exceed its
maximum capacity. That is, cars leave often enough relative to the
speed at which they come in that there's always space in the lot for
incoming cars.
In the real world, this is a serious problem. On the Internet,
extremely large parking lots are so cheap to build that it's difficult
to fill them to capacity. So we can (mostly) ignore this problem with the
analogy as well. We'll explain later what happens when we can't.
The Internet analog of our parking lot is a packet buffer. People who
build network hardware and software have been raised up to hate losing
packets the same way highway engineers hate auto crashes. So they put
lots of huge buffers everywhere on the network.
In network jargon, this optimizes for bandwidth. That is, it
maximizes the amount of stuff you can bulk-ship through the network
without loss. The problem is that it does horrible things to latency.
To see why, let's go back to our cars on the road.
Suppose your rule for when a car gets to leave the parking lot is the
simplest possible: it fills up until it overflows, then cars are let
out the downstream side as fast as they can go. This is not a very
smart rule, and human beings wouldn't use it, but many Internet
devices actually do and it's a good place to start in understanding
bufferbloat. (We'll call this rule Simple Overflow Queuing, or SOQU
for short. Pronounce it "sock-you" or "soak-you"; you'll see why in a
moment.)
Now, here's how the flow of cars will look if the lot starts empty and
the road is in light use. Cars will arrive at the parking lot, fill it
up, and then proceed out the other side and nobody will go off the
road. But - each car will be delayed by the time required to initially
fill up the parking lot.
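To put a number on that delay: the wait is simply the buffer's size divided by the rate at which the link drains it. A tiny sketch (the 256 KB buffer and 1 Mbit/s link below are illustrative figures, not from the text):

```python
def buffer_delay_seconds(buffer_bytes, link_bits_per_sec):
    """Worst-case extra latency from a full FIFO buffer:
    the time the link needs to drain it completely."""
    return (buffer_bytes * 8) / link_bits_per_sec

# An (illustrative) 256 KB buffer ahead of a 1 Mbit/s uplink holds
# about 2 seconds of traffic -- every packet behind it waits that long.
delay = buffer_delay_seconds(256 * 1024, 1_000_000)
```

Note that the delay depends only on buffer size and link speed, which is why it grows as memory gets cheaper and buffers get deeper.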
There's another effect, too. The parking lot turns smooth traffic
into clumpy traffic. A constantly spaced string of cars coming in
tends to turn into a series of clumps coming out, with the size of each
clump controlled by the width of the exit from the parking lot.
This is a problem, because car clumps tend to cause car crashes.
When this happens on the Internet, the buffer adds latency to the
connection. Packets still arrive where they're supposed to go, but
after large time delays. Smooth network traffic turns into a
herky-jerky stuttering thing; as a result, packet loss rises.
Performance is worse than if the buffer weren't there at all. And -
this is an important point - the larger the buffer is, the worse the
problems are.
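For readers who want to see that claim concretely, here is a toy simulation of the SOQU rule described above: a tail-drop FIFO fed faster than it drains. The tick counts, rates, and buffer sizes are invented for illustration:

```python
from collections import deque

def soqu(buffer_size, ticks, arrivals_per_tick, departures_per_tick):
    """Simple Overflow Queuing: a tail-drop FIFO.
    Returns (per-packet queueing delays, packets dropped)."""
    q = deque()                   # each entry: the tick it arrived
    delays, dropped = [], 0
    for t in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(q) < buffer_size:
                q.append(t)       # room in the "parking lot"
            else:
                dropped += 1      # lot full: the packet is lost
        for _ in range(departures_per_tick):
            if q:
                delays.append(t - q.popleft())
    return delays, dropped

# Feed the queue twice as fast as it drains: once the buffer fills,
# every packet that survives waits roughly buffer_size ticks.
small, _ = soqu(buffer_size=10, ticks=500,
                arrivals_per_tick=2, departures_per_tick=1)
big, _ = soqu(buffer_size=100, ticks=500,
              arrivals_per_tick=2, departures_per_tick=1)
```

The standing delay scales with the buffer size, not the traffic rate: the "larger buffer, worse problem" effect in miniature.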
== From Highway to Network ==
Now imagine a whole network of highways, each with parking lots
scattered randomly along them and at their intersections. Cars trying
to get through it will experience multiple delays, and initially
smooth traffic will become clumpy and chaotic. Clumps from upstream
buffers will clog downstream buffers that might have handled the same
volume of traffic as a smooth flow, leading to serious and sometimes
unrecoverable packet loss.
As the total traffic becomes heavier, network traffic patterns will
grow burstier and more chaotic. Usage of individual links will swing
rapidly and crazily between emptiness and overload. Latency, and total
packet times, will zig from instantaneous to
check-again-next-week-please and zag back again in no predictable
pattern.
Packet losses - the problem all those buffers were put in to prevent -
will begin to increase once all the buffers are full, because the
occasional crash is the only thing that can currently tell Internet
routers to slow down their sending. It doesn't take too long before
you start getting the Internet equivalent of 60-car pileups.
Bad consequences of this are legion. One of the most obvious is what
latency spikes do to the service that converts things like website names
to actual network addresses - DNS lookups get painfully slow.
Voice-over-IP services like Skype and video streamers like YouTube
become stuttery, prone to dropouts, and painful to use. Gamers get
fragged more.
For the more technically-inclined reader, there are several other
important Internet service protocols that degrade badly in an
environment with serious latency spikes: NTP, ARP, DHCP, and various
routing protocols. Yes, things as basic as your system clock time can
get messed up!
And - this is the key point - the larger and more numerous the buffers
on the network are, the worse these problems get. This is the bufferbloat
problem in a nutshell.
One of the most insidious things about bufferbloat is that it easily
masquerades as something else: underprovisioning of the network. But buying
fatter pipes doesn't fix the bufferbloat cascades, and buying larger
buffers actually makes them worse!
Those of us who have been studying bufferbloat believe that many of
the problems now attributed to under-capacity and bandwidth hogging
are actually symptoms of bufferbloat. We think fixing the bufferbloat
problem may well make many contentious arguments about usage metering,
bandwidth caps, and tiered pricing unnecessary. At the very least, we
think networks should be systematically audited for bufferbloat before
more resources are plowed into fixing problems that may be completely
misdiagnosed.
== Three Cures and a Blind Alley ==
Now that we understand it, what can we do about it?
We can start by understanding how we got into this mess; mainly, by
equating "The data must get through!" with zero packet loss.
Hating packet loss enough to want to stamp it out completely is
actually a bad mental habit. Unlike real cars on real highways, the
Internet is designed to respond to crashes by resending an identical
copy when a packet send is not acknowledged. In fact, the Internet's
normal mechanisms for avoiding congestion rely on the occasional
packet loss to trigger them. Thus, the perfect is the enemy of the
good; some packet loss is essential.
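The resend mechanism referred to here can be sketched in a few lines. This is a toy stop-and-wait model, not real TCP; the loss rate and retry limit are invented for illustration:

```python
import random

def send_with_retries(send_once, max_tries=10):
    """Resend an identical copy until the receiver acknowledges it,
    which is how the Internet turns packet loss into mere delay."""
    for attempt in range(1, max_tries + 1):
        if send_once():           # True means an ACK came back
            return attempt
    raise TimeoutError("too much packet loss; giving up")

# A link losing 30% of packets still delivers -- loss is survivable,
# which is why driving loss to zero with huge buffers is overkill.
random.seed(42)
tries = send_with_retries(lambda: random.random() > 0.3)
```

The point of the sketch: the occasional lost packet costs one retransmission, while the sender's reaction to that loss is exactly the congestion signal the network needs.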
But, historically, the designers of network hardware and software have
tended to break in the other direction, bloating buffers in order to
drive packet losses to zero. Undoing this mistake will pay off hugely
in improved network performance.
There are three main tactics:
First, we can *pay attention*! Bufferbloat is easy to test for once
you know how to spot it. Watching networks for bufferbloat cascades
and fixing them needs to be part of the normal job duties of every
network administrator.
Second, we can decrease buffer sizes. This cuts the delay due to
latency and decreases the clumping effect on the traffic. It can
increase packet loss, but that problem is coped with pretty well by the
Internet's normal congestion-avoidance methods. As long as packet
losses remain unusual events (below the levels produced by bufferbloat
cascades), resends will happen as needed and the data will get through.
Third, we can use smarter rules than SOQU for when and by how much a
buffer should try to empty itself. That is, we need buffer-management
rules that we can expect to statistically smooth network traffic
rather than clumpify it. The reasons smarter rules have not been
universally deployed already are mainly historical; now, this can and
should be fixed.
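One long-studied family of such rules is active queue management; the classic example is Random Early Detection (RED, mentioned later in this thread), which drops a small, growing fraction of packets before the buffer is full so that senders slow down early. A bare-bones sketch with illustrative thresholds:

```python
import random

def red_should_drop(avg_queue, min_th, max_th, max_p=0.1):
    """RED in miniature: below min_th never drop; above max_th always
    drop; in between, drop with probability ramping up to max_p."""
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p

# Early, gentle drops signal congestion while the queue is still short,
# instead of SOQU's silent fill-up followed by a burst of losses.
```

Real deployments tune the thresholds against an averaged queue length, but the shape of the rule is this simple.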
Next we need to point out one tactic that won't work.
Some people think the answer to Internet congestion is to turn each link
into a multi-lane highway, with fast lanes and slow lanes. The theory
of QoS ("Quality Of Service") is that you can put priority traffic in
fast lanes and bulk traffic in slow ones.
This approach has historical roots in things telephone companies used to
do. It works well for analog traffic that doesn't use buffering, only
switching. It doesn't work for Internet traffic, because all the lanes
have to use the same buffers.
If you try to implement QoS on a digital packet network, what you end
up with is extremely complicated buffer-management rules with so many
brittle assumptions baked into them that they harm performance when
the shape of network demand is even slightly different than the
rule-designer expected.
Really smart buffer-management rules are simple enough not to have
strange corner cases where they break down and jam up the traffic.
Complicated ones break down and jam up the traffic. QoS rules
are complicated.
== Less Hard ==
We started by asserting that bufferbloat is easy to fix. Here
are the reasons for optimism:
First, it's easy to detect once you understand it - and verifying
that you've fixed it is easy, too.
Second, the fixes are cheap and give direct benefits as soon as
they're applied. You don't have to wait for other people to fix
bufferbloat in their devices to improve the performance of your own.
Third, you usually only have to fix it once per device; continual
tuning isn't necessary.
Fourth, it's basically all software fixes. No expensive hardware
upgrades are required.
Finally (and importantly!), trying to fix it won't significantly
increase your risk of a network failure. If you fumble the first time,
it's reversible.
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-05 13:23 [Bloat] First draft of complete "Bufferbloat And You" enclosed Eric Raymond
@ 2011-02-05 13:42 ` Jim Gettys
2011-02-05 15:12 ` Dave Täht
2011-02-05 15:46 ` Dave Täht
` (4 subsequent siblings)
5 siblings, 1 reply; 22+ messages in thread
From: Jim Gettys @ 2011-02-05 13:42 UTC (permalink / raw)
To: bloat
Several reactions:
1) the latency-speed connection needs to be emphasised, along with
bandwidth != speed. You can at least start to attack the conflation of
"speed" and "bandwidth" that exists in the market.
Some highways "meter" car entrance exactly to provide smooth flow on the
on ramps and avoid having parking lots (stationary highways).
In fact, a highway *is* a parking lot. It has a capacity of stationary
cars.
But memory is sooo cheap we've paved Texas over with extra road, just in
case.
2) classification itself is not evil; but if you don't fix the bloat
first, you end up with way too much complexity, and still have problems
anyway. At best, you've limited who suffers; at
worst, you've made sure the "right" cars haven't suffered here, but when
they get to a different part of the network, they'll still suffer
(QOS isn't universal, much less your complicated rules). Fix the bloat:
then classify.
To use the telephone analogy, classification will still result in
dropped calls or fast busy, if you aren't on a privileged phone.
To use the highway analogy, only a few areas have carpool lanes, and
most of the highway is still jammed. Once the carpool lanes are gone
(you get to another part of the network), you're still stuck behind
miles of traffic.
At the end, you've got some problems:
First, we don't yet have good solutions for the wireless/variable
bandwidth case; we have hopeful avenues we're exploring. Claiming we can
fix all of it right now is overstating things. We can reduce pain
right now, and I hope within a year or three fix it for real.
Many people will need to replace their routers, and will believe that is
expensive; and to them, buying a $100 router *is* expensive. Remember
your audience. And various ISPs aren't going to like the bottom line
cost of replacing/upgrading all the broken equipment.
- Jim
On 02/05/2011 08:23 AM, Eric Raymond wrote:
> I consider this draft coverage-complete for the basic introduction I was
> aiming at. Suggestions from dtaht5 and jg have been incorporated where
> appropriate. Critique and correct, but try not to make it longer. I'm a
> bit unhappy about the length and may actually try to cut it.
>
> You will note that the description of network failure modes is
> somewhat broader than in jg's talk. So is the section on why QoS
> fails to address the problem. This is me putting on my
> system-architect head and doing original analysis; if you think I have
> misunderstood the premises or reasoned about them incorrectly, tell
> me.
>
> Please fix typos and outright grammatical errors. If you think you have spotted
> a higher-level usage problem or awkwardness, check with me before changing it.
> What you think is technically erroneous may be expressive voice.
>
> Explanation: Style is the contrast between expectation and surprise.
> Poets writing metric poetry learn to introduce small breaks in
> scansion in order to induce tension-and-release cycles at a higher
> level that will hold the reader's interest. The corresponding prose
> trick is to bend usage rules or change the register of the writing
> slightly away from what the reader unconsciously expects. If you try
> to "fix" these you will probably be stepping on an intended effect.
> So check first.
>
> (I will also observe that unless you are already an unusually skilled
> writer, you should *not* try to replicate this technique; the risk of
> sounding affected or just teeth-jarringly bad is high. As Penn &
> Teller puts it, "These stunts are being performed by trained,
> *professional* idiots.")
>
> Future directions: unless somebody stops me, I'm going to reorganize
> what wiki docs there are around this thing. The basic idea is to make this
> the page new visitors naturally land on *first*, with embedded
> hotlinks to the more specialized stuff.
>
> Explanation: Outlines and bulleted lists of stuff are deadly. They're
> great for reference, but they scream "too much; don't read" to people
> first trying to wrap their heads around a topic. Narrative
> introductions with hotlinks are both less threatening and more
> effective. The main reason they're not used more is that most people
> find them quite hard to write. I don't.
>
> If I decide I need to cut the length, I'll push some subsections down
> to linked subpages.
>
> I haven't learned Textile yet. I'll probably get to that this weekend.
>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-05 13:42 ` Jim Gettys
@ 2011-02-05 15:12 ` Dave Täht
0 siblings, 0 replies; 22+ messages in thread
From: Dave Täht @ 2011-02-05 15:12 UTC (permalink / raw)
To: Jim Gettys; +Cc: bloat
Jim Gettys <jg@freedesktop.org> writes:
> Several reactions:
> [elided]
> But memory is sooo cheap we've paved Texas over with extra road, just
> in case.
Why pick on Texas? The maximum latency yet reported was 40 seconds,
which is like 31 lunar distances (40/1.28), or half that if you are
measuring latency as RTT.
It's one of those mind-bogglingly big numbers that Douglas Adams warned
us about. I wouldn't be surprised if someone reported RTT times as large
as between here and Venus.
Texas has enough problems.
> (QOS isn't universal, much less your complicated rules). Fix the
> bloat: then classify.
For years a very simple classification scheme has existed by
default. Most UDP packets actually used the TOS field sanely and the
OS would prioritize those packets appropriately.
It worked, mostly. It's been devilish with SIP, however.
A few other classification schemes have worked well in the field - the
wondershaper started a trend to prioritize interactive ack packets,
which helps interactive traffic (ssh, x11, stuff like that) a lot,
improving latency under load for latency-dependent TCP streams.
Most of the others... Not so much. Interesting edge cases. Maybe a
diamond in the rough here and there.
Lastly, I make a distinction between QoS and AQM - one that's kind of
hard to define. To me AQM is about trying to ensure overall fairness and
goodput (techniques like RED and SFB) by managing queues sanely, and QoS
is about providing high speed lanes with special properties for certain
kinds of traffic.
Both ARE useful, but can be addressed in order of reducing unmanaged
buffers, applying AQM, and then QoS.
> Many people will need to replace their routers, and will believe that
> is expensive; and to them, buying a $100 router *is*
> expensive. Remember your audience. And various ISP's aren't going to
> like the bottom line cost of replacing/upgrading all the broken
> equipment.
A lot of them can just get new firmware. Although it's likely that
dd-wrt and openwrt are worse, out of the box, at present.
My concern, after observing several reviews of new wireless kit in the
press, is that the most modern gear is exhibiting bufferbloat problems,
as yet undiagnosed.
Possibly here:
http://online.wsj.com/article/SB10001424052748704774604576035691589888786.html
Certainly here:
http://www.dd-wrt.com/phpBB2/viewtopic.php?p=416640&sid=635145ce6d7ee3bb695b39ace6b9c101
--
Dave Taht
http://nex-6.taht.net
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-05 13:23 [Bloat] First draft of complete "Bufferbloat And You" enclosed Eric Raymond
2011-02-05 13:42 ` Jim Gettys
@ 2011-02-05 15:46 ` Dave Täht
2011-02-06 13:37 ` Eric Raymond
2011-02-05 17:56 ` richard
` (3 subsequent siblings)
5 siblings, 1 reply; 22+ messages in thread
From: Dave Täht @ 2011-02-05 15:46 UTC (permalink / raw)
To: esr; +Cc: bloat
Eric Raymond <esr@thyrsus.com> writes:
> I consider this draft coverage-complete for the basic introduction I was
> aiming at. Suggestions from dtaht5 and jg have been incorporated where
> appropriate. Critique and correct, but try not to make it longer. I'm a
> bit unhappy about the length and may actually try to cut it.
The only paragraph that stood out as a cut target was the one on NN.
A sentence, a passing reference, would suffice. NN, like sex, tends to
jolt a limbic system in the wrong direction from rationality.
(See for example the controversial talk at LCA.)
Aside from that I agree that the last section needs to be slightly more,
well, bleak. There is plenty of work left to do. A lot of it is tedious.
A lot of it is simple. Some of it requires theoretical breakthroughs.
The fourth item simply isn't true (enough). Work is being done. (Lots)
More people working on the problems identified so far would be great.
A goal for me (at least) for these projects is to see typical Internet
latencies drop from seconds - as measured in the US; worse elsewhere -
to something closer to the speed of light in cable - milliseconds - a
two-orders-of-magnitude improvement. It will be a better Internet
experience for everyone.
(I did enjoy the virtual prozac, however. When I think of the hundreds
of millions of devices that have bufferbloat issues, I find it hard
to sleep.)
Also I note the "less hard" section can stand alone - as a call to
action - with pointers to specifics (bulleted list! Agg!)
> What you think is technically erroneous may be expressive voice.
Heh.
> (I will also observe that unless you are already an unusually skilled
> writer, you should *not* try to replicate this technique; the risk of
> sounding affected or just teeth-jarringly bad is high. As Penn &
> Teller puts it, "These stunts are being performed by trained,
> *professional* idiots.")
You don't need to lecture. It's a useful technique.
I will note, however, that some pieces will need to be translated into
other languages and in that case clarity is essential.
I also note that making people laugh - especially at themselves - is
crucial. We're all bozos on this bus. More shared belly laughs would
help.
> Future directions: unless somebody stops me, I'm going to reorganize
> what wiki docs there are around this thing. The basic idea is to make this
> the page new visitors naturally land on *first*, with embedded
> hotlinks to the more specialized stuff.
My thought is that this piece is still WAY too long. And it could use
some graphics. (And PSA music)
What's the elevator pitch?
>
> Explanation: Outlines and bulleted lists of stuff are deadly. They're
> great for reference, but they scream "too much; don't read" to people
> first trying to wrap their heads around a topic. Narrative
> introductions with hotlinks are both less threatening and more
> effective.
Agreed. A narrative structure frees one from bullet point paralysis.
(The wiki format is flexible enough for multiple means of navigation.
We still very much want it to be a resource, but also very much want to
ease people into the concepts.)
> The main reason they're not used more is that most people
> find them quite hard to write. I don't.
Have at it!
:me takes cover:
--
Dave Taht
http://nex-6.taht.net
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-05 15:46 ` Dave Täht
@ 2011-02-06 13:37 ` Eric Raymond
0 siblings, 0 replies; 22+ messages in thread
From: Eric Raymond @ 2011-02-06 13:37 UTC (permalink / raw)
To: Dave Täht; +Cc: bloat
Dave Täht <d@taht.net>:
> The only paragraph that stood out as a cut target was the one on NN.
> A sentence, a passing reference, would suffice. NN, like sex, tends to
> jolt a limbic system in the wrong direction from rationality.
Sorry, which one is that? I want to be sure we're talking about the
same thing, as I'm not currently using the phrase "network neutrality"
anywhere.
> Aside from that I agree that the last section needs to be slightly more,
> well, bleak. There is plenty of work left to do. A lot of it is tedious.
> A lot of it is simple. Some of it requires theoretical breakthroughs.
Specify, please. Some such specification needs to be part of our
narrative overview, even if it doesn't stay in the main overview
document.
> The fourth item simply isn't true (enough). Work is being done. (Lots)
> More people working on the problems identified so far would be great.
"Fourth item"? You mean the assertion that it's all software? If that's it,
what sorts of hardware need to change? I was counting router firmware
as software because it can be upgraded; is that wrong?
> A goal for me (at least) for these projects is to see typical Internet
> latencies move from seconds - as measured in the US - worse elsewhere -
> drop closer to the speed of light in cable - ms - two orders of
> magnitude improvement. It will be a better internet experience for
> everyone.
Should this goal be in the overview?
> Also I note the "less hard" section can stand alone - as a call to
> action - with pointers to specifics (bulleted list! Agg!)
That's true. I'm not going to break it out yet, though, as I think
it's valuable to have the whole overview document achieve coherence and topic
completeness before I explode it to subpages.
(One obvious failure mode if I don't do that is that the document
could bloat without it being easy to notice.)
> My thought is that this piece is still WAY too long. And it could use
> some graphics. (And PSA music)
I took out your image cookies because I consider them an instance of the
better being an enemy of the good. When we have an artist/animator, I'll work
with him enthusiastically. Until then, makes no sense to optimize the
document design for a capability we don't have.
As for way too long...I have mixed feelings. On the one hand, I do
intend to edit for conciseness. On the other hand, there needs to be
*some* overview that is topic-complete, and that implies letting it be
as long as the content requires. If I don't write that here and now,
I'll just have to do it another time under another guise.
> What's the elevator pitch?
The first two paragraphs. I'm going to add a third that says "Here's
the one-sentence version of the problem...", but that has to be
*very* carefully crafted.
--
<a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-05 13:23 [Bloat] First draft of complete "Bufferbloat And You" enclosed Eric Raymond
2011-02-05 13:42 ` Jim Gettys
2011-02-05 15:46 ` Dave Täht
@ 2011-02-05 17:56 ` richard
2011-02-05 19:48 ` richard
` (2 subsequent siblings)
5 siblings, 0 replies; 22+ messages in thread
From: richard @ 2011-02-05 17:56 UTC (permalink / raw)
To: esr; +Cc: bloat
systematically adited <> systematically audited
Might add a note that if equipment does need to be changed out, isn't it
nice that we also have to change it out for the IPv4->IPv6 transition
too, and/or that we're hoping the device manufacturers will address the
problem in the IPv6 equipment rollout.
Here in Canada the metered billing is very high visibility at the
moment. Would love to be able to point to this from some of the
discussion areas ASAP :)
Thanks
richard
On Sat, 2011-02-05 at 08:23 -0500, Eric Raymond wrote:
> I consider this draft coverage-complete for the basic introduction I was
> aiming at. Suggestions from dtaht5 and jg have been incorporated where
> appropriate. Critique and correct, but try not to make it longer. I'm a
> bit unhappy about the length and may actually try to cut it.
>
> You will note that the description of network failure modes is
> somewhat broader than in jg's talk. So is the section on why QoS
> fails to address the problem. This is me putting on my
> system-architect head and doing original analysis; if you think I have
> misunderstood the premises or reasoned about them incorrectly, tell
> me.
>
> Please fix typos and outright grammatical errors. If you think you have spotted
> a higher-level usage problem or awkwardness, check with me before changing it.
> What you think is technically erroneous may be expressive voice.
>
> Explanation: Style is the contrast between expectation and surprise.
> Poets writing metric poetry learn to introduce small breaks in
> scansion in order to induce tension-and-release cycles at a higher
> level that will hold the reader's interest. The corresponding prose
> trick is to bend usage rules or change the register of the writing
> slightly away from what the reader unconsciously expects. If you try
> to "fix" these you will probably be stepping on an intended effect.
> So check first.
>
> (I will also observe that unless you are already an unusually skilled
> writer, you should *not* try to replicate this technique; the risk of
> sounding affected or just teeth-jarringly bad is high. As Penn &
> Teller puts it, "These stunts are being performed by trained,
> *professional* idiots.")
>
> Future directions: unless somebody stops me, I'm going to reorganize
> what wiki docs there are around this thing. The basic idea is to make this
> the page new visitors naturally land on *first*, with embedded
> hotlinks to the more specialized stuff.
>
> Explanation: Outlines and bulleted lists of stuff are deadly. They're
> great for reference, but they scream "too much; don't read" to people
> first trying to wrap their heads around a topic. Narrative
> introductions with hotlinks are both less threatening and more
> effective. The main reason they're not used more is that most people
> find them quite hard to write. I don't.
>
> If I decide I need to cut the length, I'll push some subsections down
> to linked subpages.
>
> I haven't learned Textile yet. I'll probably get to that this weekend.
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Richard C. Pitt Pacific Data Capture
rcpitt@pacdat.net 604-644-9265
http://digital-rag.com www.pacdat.net
PGP Fingerprint: FCEF 167D 151B 64C4 3333 57F0 4F18 AF98 9F59 DD73
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-05 13:23 [Bloat] First draft of complete "Bufferbloat And You" enclosed Eric Raymond
` (2 preceding siblings ...)
2011-02-05 17:56 ` richard
@ 2011-02-05 19:48 ` richard
2011-02-05 22:12 ` Dave Täht
2011-02-08 15:17 ` Justin McCann
2011-02-08 19:43 ` Juliusz Chroboczek
5 siblings, 1 reply; 22+ messages in thread
From: richard @ 2011-02-05 19:48 UTC (permalink / raw)
To: esr; +Cc: bloat
Having been one of the commercial internet pioneers here in Canada, I've
spent a lot of time dealing with the end users.
I like to write, and one of the things I try to do is explain computer
problems to my dumb relatives, customers, friends etc.
I like your vehicle analogy but I think today's network-using public can
relate better to a real-world situation in the internet so I've put
together my own article on the problem. I'd already started the article
last week and finally got to finish it today.
http://digital-rag.com/article.php/Buffer-Bloat-Packet-Loss
It could have some more technical terms (latency for example) added to
it but I limited it to the concept of window and ACK for now.
It takes the content of a recent ad from a local ISP and talks about
what is actually going on "under the hood"
richard
On Sat, 2011-02-05 at 08:23 -0500, Eric Raymond wrote:
> I consider this draft coverage-complete for the basic introduction I was
> aiming at. Suggestions from dtaht5 and jg have been incorporated where
> appropriate. Critique and correct, but try not to make it longer. I'm a
> bit unhappy about the length and may actually try to cut it.
>
> You will note that the description of network failure modes is
> somewhat broader than in jg's talk. So is the section on why QoS
> fails to address the problem. This is me putting on my
> system-architect head and doing original analysis; if you think I have
> misunderstood the premises or reasoned about them incorrectly, tell
> me.
>
> Please fix typos and outright grammatical errors. If you think you have spotted
> a higher-level usage problem or awkwardness, check with me before changing it.
> What you think is technically erroneous may be expressive voice.
>
> Explanation: Style is the contrast between expectation and surprise.
> Poets writing metric poetry learn to introduce small breaks in
> scansion in order to induce tension-and-release cycles at a higher
> level that will hold the reader's interest. The corresponding prose
> trick is to bend usage rules or change the register of the writing
> slightly away from what the reader unconsciously expects. If you try
> to "fix" these you will probably be stepping on an intended effect.
> So check first.
>
> (I will also observe that unless you are already an unusually skilled
> writer, you should *not* try to replicate this technique; the risk of
> sounding affected or just teeth-jarringly bad is high. As Penn &
> Teller puts it, "These stunts are being performed by trained,
> *professional* idiots.")
>
> Future directions: unless somebody stops me, I'm going to reorganize
> what wiki docs there are around this thing. The basic idea is to make this
> the page new visitors naturally land on *first*, with embedded
> hotlinks to the more specialized stuff.
>
> Explanation: Outlines and bulleted lists of stuff are deadly. They're
> great for reference, but they scream "too much; don't read" to people
> first trying to wrap their heads around a topic. Narrative
> introductions with hotlinks are both less threatening and more
> effective. The main reason they're not used more is that most people
> find them quite hard to write. I don't.
>
> If I decide I need to cut the length, I'll push some subsections down
> to linked subpages.
>
> I haven't learned Textile yet. I'll probably get to that this weekend.
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Richard C. Pitt Pacific Data Capture
rcpitt@pacdat.net 604-644-9265
http://digital-rag.com www.pacdat.net
PGP Fingerprint: FCEF 167D 151B 64C4 3333 57F0 4F18 AF98 9F59 DD73
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-05 19:48 ` richard
@ 2011-02-05 22:12 ` Dave Täht
2011-02-06 1:29 ` richard
0 siblings, 1 reply; 22+ messages in thread
From: Dave Täht @ 2011-02-05 22:12 UTC (permalink / raw)
To: richard; +Cc: esr, bloat
richard <richard@pacdat.net> writes:
> I like your vehicle analogy but I think today's network-using public can
> relate better to a real-world situation in the internet so I've put
> together my own article on the problem. I'd already started the article
> last week and finally got to finish it today.
> http://digital-rag.com/article.php/Buffer-Bloat-Packet-Loss
Wonderful! That's 3 non-jg pieces in a row that "get" it, and explain
specific bits of it well.
There are three excellent animations of how TCP/IP actually works here:
http://www.kehlet.cx/articles/99.html
Perhaps that would help your piece somewhat.
I keep hoping that someone graphically talented will show up that can do
animations similar to those above, that clearly illustrate
bufferbloat. Anyone? Anyone know anyone?
Another analogy that was kicked around yesterday on the #bufferbloat irc
channel was the plumbing one - where more and more stuff is poured into
a boiling kettle (a still perhaps) until it overflows, or explodes.
Here's a title of a piece that *I* daren't write:
“Draino for the Intertubes”
I've struggled mightily to explain bufferbloat to so many people. For
example I spent 3 hours talking with an artist who understood Pro Tools
- and thought the internet was all slaved to a master clock.
I'm very glad to see y'all helping out. There's still lots left to do,
not just in communication but in actually getting some work done on both
the easy and hard engineering problems.
But staying on the communication front:
If a little kid asked you, in a small thin voice,
“Why is the internet slow today?”
How would you explain it?
How'd you explain it to a doctor? A lawyer? Your mom? Your boss?
As it happens I have studio time this weekend, if anyone is into script
writing I can fake up a few voices. I have two ideas that I might be
able to fit into 2 minutes each, but kind of have to tear myself away
from email to work on...
> It could have some more technical terms (latency for example) added to
> it but I limited it to the concept of window and ACK for now.
>
This, though dated, is a good reference on latency.
http://rescomp.stanford.edu/~cheshire/rants/Latency.html
A modernized one would be great... There was some good stuff on one of
the audio lists/web sites that I saw, I'll look for it.
Audio guys *get* latency. So do the real time guys. Few others.
> It takes the content of a recent ad from a local ISP and talks about
> what is actually going on "under the hood"
Lots of public confusion to counter. The nice thing is - we have
mitigations that *work*. What do they have?
--
Dave Taht
http://nex-6.taht.net
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-05 22:12 ` Dave Täht
@ 2011-02-06 1:29 ` richard
2011-02-06 2:35 ` Dave Täht
0 siblings, 1 reply; 22+ messages in thread
From: richard @ 2011-02-06 1:29 UTC (permalink / raw)
To: Dave Täht; +Cc: esr, bloat
Hi Dave
I saw the animations, but I need to ask to use them, as there is no CC
license on that site and I have ads on my Digital-Rag site.
I'm sorry - not an artist :(
On Sat, 2011-02-05 at 15:12 -0700, Dave Täht wrote:
> richard <richard@pacdat.net> writes:
>
> > I like your vehicle analogy but I think today's network-using public can
> > relate better to a real-world situation in the internet so I've put
> > together my own article on the problem. I'd already started the article
> > last week and finally got to finish it today.
> > http://digital-rag.com/article.php/Buffer-Bloat-Packet-Loss
>
> Wonderful! That's 3 non-jg pieces in a row that "get" it, and explain
> specific bits of it well.
>
> There are three excellent animations of how TCP/IP actually works here:
>
> http://www.kehlet.cx/articles/99.html
>
> Perhaps that would help your piece somewhat.
>
> I keep hoping that someone graphically talented will show up that can do
> animations similar to those above, that clearly illustrate
> bufferbloat. Anyone? Anyone know anyone?
>
> Another analogy that was kicked around yesterday on the #bufferbloat irc
> channel was the plumbing one - where more and more stuff is poured into
> a boiling kettle (a still perhaps) until it overflows, or explodes.
>
> Here's a title of a piece that *I* daren't write:
>
> “Draino for the Intertubes”
>
>
> I've struggled mightily to explain bufferbloat to so many people. For
> example I spent 3 hours talking with an artist who understood Pro Tools
> - and thought the internet was all slaved to a master clock.
>
If you believe the telcos, it was (and still should be) as that was the
way things like T1s and T3s and ATM all worked.
> I'm very glad to see y'all helping out. There's still lots left to do,
> not just in communication but in actually getting some work done on both
> the easy and hard engineering problems.
>
I'm not a programmer any more - last major "bare metal" programming I
did was on an IBM 360 in assembler.
Today I'm mostly a sysadmin and systems designer and programmer manager,
but I'm doing a lot of streaming video, which is why I'm interested so
much in the buffer bloat.
> But staying on the communication front:
>
> If a little kid asked you, in a small thin voice,
>
> “Why is the internet slow today?”
>
> How would you explain it?
>
> How'd you explain it to a doctor? A lawyer? Your mom? Your boss?
strangely enough - this has come up with me recently. I think I
passed :)
>
> As it happens I have studio time this weekend, if anyone is into script
> writing I can fake up a few voices. I have two ideas that I might be
> able to fit into 2 minutes each, but kind of have to tear myself away
> from email to work on...
>
> > It could have some more technical terms (latency for example) added to
> > it but I limited it to the concept of window and ACK for now.
> >
>
> This, though dated, is a good reference on latency.
>
> http://rescomp.stanford.edu/~cheshire/rants/Latency.html
>
yes - that's a good one
> A modernized one would be great... There was some good stuff on one of
> the audio lists/web sites that I saw, I'll look for it.
>
I'll think about it.
> Audio guys *get* latency. So do the real time guys. Few others.
>
So do video people - try to explain to a bunch of ageing "eagleholics"
that there is a good reason why the audio and video from two cameras in
the same eagle nest are out of sync by minutes by the time they've gone
through the internet a couple of times and various server systems that
are otherwise supposed to be identical, on the way to being viewed by
them on a web page side by side.
> > It takes the content of a recent ad from a local ISP and talks about
> > what is actually going on "under the hood"
>
> Lots of public confusion to counter. The nice thing is - we have
> mitigations that *work*. What do they have?
>
>
talk to you soon
richard
>
--
Richard C. Pitt Pacific Data Capture
rcpitt@pacdat.net 604-644-9265
http://digital-rag.com www.pacdat.net
PGP Fingerprint: FCEF 167D 151B 64C4 3333 57F0 4F18 AF98 9F59 DD73
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-06 1:29 ` richard
@ 2011-02-06 2:35 ` Dave Täht
2011-02-06 2:50 ` richard
0 siblings, 1 reply; 22+ messages in thread
From: Dave Täht @ 2011-02-06 2:35 UTC (permalink / raw)
To: richard; +Cc: esr, bloat
richard <richard@pacdat.net> writes:
> If you believe the telcos, it was (and still should be) as that was the
> way things like T1s and T3s and ATM all worked.
http://www.wired.com/wired/archive/4.10/atm_pr.html
I'd like to see a paper on the debate over the self-similar nature of
the internet; until then, repeating this study sounds interesting:
http://eeweb.poly.edu/el933/papers/Willinger.pdf
The plots they have of traffic are rather different from what I've been
seeing lately.
--
Dave Taht
http://nex-6.taht.net
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-06 2:35 ` Dave Täht
@ 2011-02-06 2:50 ` richard
0 siblings, 0 replies; 22+ messages in thread
From: richard @ 2011-02-06 2:50 UTC (permalink / raw)
To: Dave Täht; +Cc: esr, bloat
Back in 1996, shortly after iStar (Canada's first national ISP) bought
our Wimsey.COM regional, I got a chance to speak in front of a bunch of
people who were both customers and suppliers of the long distance
backbone (alternative) supplier that had spawned iStar.
As the lone internet speaker of the 3 keynotes, I talked about the
future, including things like $0.01/minute long distance using VOIP and
such.
Got my hand slapped hard right after and never got the opportunity to
speak for that company in public again. The company has gone bust.
Telus - the local ILEC here - has been using VOIP technology for their
long distance trunks for the past 10+ years - a leader in the technology,
and the dominant player in the field here other than Bell.
The next company my buddies and I formed did a bunch of stuff that
included ATM interface equipment and add/drop multiplexers, etc., all to
interface to IP.
Lately I have not heard much about ATM - it seems to have pretty much
disappeared :)
I lived this fight - from being told we could no longer use metered
phone lines for our modems, through being screwed over using Centrex
lines, US Robotics modems velcro'd on the wall, all of it.
I even started out climbing telephone poles and installing Strowger
(electro-mechanical) phone switches back in the late 60's - as the sig
on Slashdot says - "been there, done that..." :)
richard
On Sat, 2011-02-05 at 19:35 -0700, Dave Täht wrote:
> richard <richard@pacdat.net> writes:
>
> > If you believe the telcos, it was (and still should be) as that was the
> > way things like T1s and T3s and ATM all worked.
>
> http://www.wired.com/wired/archive/4.10/atm_pr.html
>
> On the self-similar nature of the internet debate, I'd like to see a
> paper on, until then, repeating this study sounds interesting:
>
> http://eeweb.poly.edu/el933/papers/Willinger.pdf
>
> The plots they have of traffic are rather different that what I've been
> seeing lately.
>
--
Richard C. Pitt Pacific Data Capture
rcpitt@pacdat.net 604-644-9265
http://digital-rag.com www.pacdat.net
PGP Fingerprint: FCEF 167D 151B 64C4 3333 57F0 4F18 AF98 9F59 DD73
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-05 13:23 [Bloat] First draft of complete "Bufferbloat And You" enclosed Eric Raymond
` (3 preceding siblings ...)
2011-02-05 19:48 ` richard
@ 2011-02-08 15:17 ` Justin McCann
2011-02-08 18:18 ` Eric Raymond
2011-02-08 19:43 ` Juliusz Chroboczek
5 siblings, 1 reply; 22+ messages in thread
From: Justin McCann @ 2011-02-08 15:17 UTC (permalink / raw)
To: esr; +Cc: bloat
On Sat, Feb 5, 2011 at 8:23 AM, Eric Raymond <esr@thyrsus.com> wrote:
> Please fix typos and outright grammatical errors. If you think you have spotted
> a higher-level usage problem or awkwardness, check with me before changing it.
> What you think is technically erroneous may be expressive voice.
This may be intentional, but the text launches into an explanation of
why bufferbloat is bad without concisely explaining what it is--- you
have to read the whole first two sections before it's very clear.
Maybe fitting Jim's phrase "the existence of excessively large
(bloated) buffers" (from
http://www.bufferbloat.net/projects/bloat/wiki/Bufferbloat) toward the
beginning would help, but I guess your new third paragraph will have
that.
The second of the three main tactics states, "Second, we can decrease
buffer sizes. This cuts the delay due to latency and decreases the
clumping effect on the traffic." Latency *is* delay; perhaps "cuts the
delay due to buffering" or "due to queueing" would be better, if more
tech-ese.
I've re-read through the Bell Labs talk, and some of the earlier
posts, but could someone explain the "clumping" effect? I understand
the wild variations in congestion windows ("swing[ing] rapidly and
crazily between emptiness and overload"), but clumping makes me think
of closely spaced packet intervals.
This statement is one I find problematic: "A constantly spaced string
of cars coming in tends to turn into a series of clumps coming out,
with size of each clump controlled by the width of the exit from
the parking lot." If the bottleneck bandwidth is a constant 10 Mbps,
then the outgoing packets will be spaced at the 10 Mbps rate (the ACK
clocking effect). They aren't really more clumped going out than they
came in-- in fact, with more traffic joining at the choke point, a
given flow's packets will be spaced out even more than they were
before. This isn't quite so true on a wireless link, where it's not
the buffering so much as the variation in actual layer-2 goodput due
to retransmissions and rate changes that cause clumping.
The essential problem is that the increase in RTT slows the feedback
loop, so if a queue is creating an 8 second delay, there can be 8
seconds of badness and changes before *any* connection slows down (or
speeds up). The problem isn't clumping so much as it is the delay in
feedback.
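That delayed-feedback point can be put in numbers with a toy back-of-the-envelope model (an illustrative sketch with made-up figures, not anything from the thread): a drop-tail FIFO fed faster than it drains gives the sender no loss signal until the buffer fills, and once full it adds its whole drain time to the RTT.

```python
def feedback_delay(buffer_pkts, bottleneck_pps, send_pps):
    """Drop-tail FIFO fed faster than it drains: the first loss (the
    only congestion signal) comes only after the buffer fills, and a
    packet at the tail then waits the queue's full drain time."""
    time_until_full = buffer_pkts / (send_pps - bottleneck_pps)
    tail_delay = buffer_pkts / bottleneck_pps
    return time_until_full, tail_delay

# e.g. a 1000-packet buffer on a 125-packet/s link, fed at 250 packets/s:
# the queue takes 8 s to fill, and a full queue adds 8 s of delay --
# the "8 seconds of badness" before any connection reacts.
```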
Justin
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-08 15:17 ` Justin McCann
@ 2011-02-08 18:18 ` Eric Raymond
2011-02-08 18:31 ` richard
` (3 more replies)
0 siblings, 4 replies; 22+ messages in thread
From: Eric Raymond @ 2011-02-08 18:18 UTC (permalink / raw)
To: Justin McCann; +Cc: bloat
Justin McCann <jneilm@gmail.com>:
> This may be intentional, but the text launches into an explanation of
> why bufferbloat is bad without concisely explaining what it is--- you
> have to read the whole first two sections before it's very clear.
Not intentional, exactly, but it's inherent. The reader *can't* get what
bufferbloat is.
> The second of the three main tactics states, "Second, we can decrease
> buffer sizes. This cuts the delay due to latency and decreases the
> clumping effect on the traffic." Latency *is* delay; perhaps "cuts the
> delay due to buffering" or "due to queueing" would be better, if more
> tech-ese.
Good catch, I'll fix.
> I've re-read through the Bell Labs talk, and some of the earlier
> posts, but could someone explain the "clumping" effect? I understand
> the wild variations in congestion windows ("swing[ing] rapidly and
> crazily between emptiness and overload"), but clumping makes me think
> of closely spaced packet intervals.
It's intended to. This is what I got from jg's talk, and I wrote the
SOQU scenario to illustrate it. If my understanding is incorrect (and
I see that you are saying it is) one of the real networking people
here needs to whack me with the enlightenment stick.
The underlying image in my just-so stories about roads and parking lots
is that packet flow coming in smooth on the upstream side of a buffer
gets turned into a buffer fill, followed by a burst of packets as it
overflows, followed by more data coming into the buffer, followed by
overflow...repeat.
> This statement is one I find problematic: "A constantly spaced string
> of cars coming in tends to turn into a series of clumps coming out,
> with size of each clump controlled by the width of the exit from
> the parking lot." If the bottleneck bandwidth is a constant 10 Mbps,
> then the outgoing packets will be spaced at the 10 Mbps rate (the ACK
> clocking effect). They aren't really more clumped going out than they
> came in-- in fact, with more traffic joining at the choke point, a
> given flow's packets will be spaced out even more than they were
> before. This isn't quite so true on a wireless link, where it's not
> the buffering so much as the variation in actual layer-2 goodput due
> to retransmissions and rate changes that cause clumping.
>
> The essential problem is that the increase in RTT slows the feedback
> loop, so if a queue is creating an 8 second delay, there can be 8
> seconds of badness and changes before *any* connection slows down (or
> speeds up). The problem isn't clumping so much as it is the delay in
> feedback.
I don't understand "ack clocking". Alas, my grasp of networking becomes
sketchy below the level of socket APIs. I know what's in a TCP packet,
roughly, but have no precise feel for what happens with bits on the wire.
--
<a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-08 18:18 ` Eric Raymond
@ 2011-02-08 18:31 ` richard
2011-02-08 18:50 ` Bill Sommerfeld
` (2 subsequent siblings)
3 siblings, 0 replies; 22+ messages in thread
From: richard @ 2011-02-08 18:31 UTC (permalink / raw)
To: esr; +Cc: bloat
On Tue, 2011-02-08 at 13:18 -0500, Eric Raymond wrote:
> Justin McCann <jneilm@gmail.com>:
> > This may be intentional, but the text launches into an explanation of
> > why bufferbloat is bad without concisely explaining what it is--- you
> > have to read the whole first two sections before it's very clear.
>
> Not intentional, exactly, but it's inherent. The reader *can't* get what
> bufferbloat is.
>
> > The second of the three main tactics states, "Second, we can decrease
> > buffer sizes. This cuts the delay due to latency and decreases the
> > clumping effect on the traffic." Latency *is* delay; perhaps "cuts the
> > delay due to buffering" or "due to queueing" would be better, if more
> > tech-ese.
>
> Good catch, I'll fix.
>
> > I've re-read through the Bell Labs talk, and some of the earlier
> > posts, but could someone explain the "clumping" effect? I understand
> > the wild variations in congestion windows ("swing[ing] rapidly and
> > crazily between emptiness and overload"), but clumping makes me think
> > of closely spaced packet intervals.
>
> It's intended to. This is what I got from jg's talk, and I wrote the
> SOQU scenario to illustrate it. If my understanding is incorrect (and
> I see that you are saying it is) one of the real networking people
> here needs to whack me with the enlightenment stick.
>
> The underlying image in my just-so stories about roads and parking lots
> is that packet flow coming in smooth on the upstream side of a buffer
> gets turned into a buffer fill, followed by a burst of packets as it
> overflows, followed by more data coming into the buffer, followed by
> overflow...repeat.
My electronics (analog, tube, etc.) background makes me view a lot of
this as "tuned" circuits - capacitors, resistors, coils, etc.
If I read things correctly, there are a number of different ways buffers
are used/abused. They're all FIFO (I hope - somebody disabuse me of this
idea if they have evidence) but how they deal with high/low water marks
seems to make a difference.
Actual processor capabilities (and overall system load) may also play a
role - the embedded stack processor and/or the system's CPU including
things like interrupt load, bus mastering, DMA, etc.
If the interface is capable of full bandwidth in and out at the same
time, and high/low water mark detection is quick, then I'd think this is
a circuit that has little tendency to oscillate at any low, detectable
frequency.
If the interface is not capable of full bandwidth in and out at the same
time, and/or the detection of or settings for high/low water mark in the
buffer are screwy, then the system will oscillate at a low frequency and
you'll get clumping.
I'd expect to see this on cheap Gbit Ethernet cards on PCI bus (lots of
interrupts to the main CPU) as the system's load rises for example; one
of the reasons I've stopped using them, even on lightly loaded links.
richard
--
Richard C. Pitt Pacific Data Capture
rcpitt@pacdat.net 604-644-9265
http://digital-rag.com www.pacdat.net
PGP Fingerprint: FCEF 167D 151B 64C4 3333 57F0 4F18 AF98 9F59 DD73
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-08 18:18 ` Eric Raymond
2011-02-08 18:31 ` richard
@ 2011-02-08 18:50 ` Bill Sommerfeld
2011-02-09 15:50 ` Eric Raymond
2011-02-08 20:10 ` Sean Conner
2011-02-09 4:24 ` Justin McCann
3 siblings, 1 reply; 22+ messages in thread
From: Bill Sommerfeld @ 2011-02-08 18:50 UTC (permalink / raw)
To: esr; +Cc: bloat
On Tue, Feb 8, 2011 at 10:18, Eric Raymond <esr@thyrsus.com> wrote:
> I don't understand "ack clocking".
before you go any further, download and read the Van Jacobson/Karels
paper "Congestion Avoidance and Control":
ftp://ftp.ee.lbl.gov/papers/congavoid.ps.Z
in a sliding-window protocol like TCP the arrival of an ack lets the
sender know that the receiver has buffer space for the next packet and
permits the sender to transmit more data; the acks form a kind of
"clock" that signals that data has left the network and that there is
room for more.
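That clock is easy to see in a toy discrete-time model (a hypothetical illustration, not from the paper): the sender transmits only while the window has room, and each returning ack frees one slot; because the bottleneck spaces acks apart, the sends inherit the bottleneck's pacing after the initial burst.

```python
from collections import deque

def ack_clocked_transfer(total_pkts, window, rtt, bottleneck_gap):
    """Toy sliding-window sender: a new packet may leave only when an
    ack frees a window slot; the bottleneck spaces acks apart, so
    after the opening burst the sends settle onto its pacing."""
    acks = deque()            # scheduled ack arrival times, in order
    in_flight, sent, t = 0, 0, 0
    send_times = []
    while sent < total_pkts:
        while acks and acks[0] <= t:   # an ack returns...
            acks.popleft()
            in_flight -= 1             # ...freeing one window slot
        while in_flight < window and sent < total_pkts:
            send_times.append(t)
            sent += 1
            in_flight += 1
            prev = acks[-1] if acks else 0
            # an ack can't return before one RTT, nor closer than one
            # bottleneck interval behind the previous ack
            acks.append(max(t + rtt, prev + bottleneck_gap))
        t += 1
    return send_times
```

With a window of 4, an RTT of 10 ticks, and a bottleneck gap of 3 ticks, the first four packets go out back to back; every later packet leaves exactly one bottleneck interval after the last, which is the "clock" Bill describes.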
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-08 18:50 ` Bill Sommerfeld
@ 2011-02-09 15:50 ` Eric Raymond
0 siblings, 0 replies; 22+ messages in thread
From: Eric Raymond @ 2011-02-09 15:50 UTC (permalink / raw)
To: Bill Sommerfeld; +Cc: bloat
Bill Sommerfeld <wsommerfeld@google.com>:
> On Tue, Feb 8, 2011 at 10:18, Eric Raymond <esr@thyrsus.com> wrote:
> > I don't understand "ack clocking".
>
> before you go any further, download and read the Van Jacobson/Karels
> paper "Congestion Avoidance and Control":
>
> ftp://ftp.ee.lbl.gov/papers/congavoid.ps.Z
>
> in a sliding-window protocol like TCP the arrival of an ack lets the
> sender know that the receiver has buffer space for the next packet and
> permits the sender to transmit more data; the acks form a kind of
> "clock" that signals that data has left the network and that there is
> room for more.
Your summary is actually a more concise explanation of "ack clocking" than
the paper offers, but thanks. Reading that was quite useful. I think I
shall have to reread it a couple of times to grok in fullness, and expect
the effort to be well worthwhile.
--
<a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-08 18:18 ` Eric Raymond
2011-02-08 18:31 ` richard
2011-02-08 18:50 ` Bill Sommerfeld
@ 2011-02-08 20:10 ` Sean Conner
2011-02-09 4:24 ` Justin McCann
3 siblings, 0 replies; 22+ messages in thread
From: Sean Conner @ 2011-02-08 20:10 UTC (permalink / raw)
To: bloat
It was thus said that the Great Eric Raymond once stated:
> Justin McCann <jneilm@gmail.com>:
> > This may be intentional, but the text launches into an explanation of
> > why bufferbloat is bad without concisely explaining what it is--- you
> > have to read the whole first two sections before it's very clear.
>
> Not intentional, exactly, but it's inherent. The reader *can't* get what
> bufferbloat is.
>
> > The second of the three main tactics states, "Second, we can decrease
> > buffer sizes. This cuts the delay due to latency and decreases the
> > clumping effect on the traffic." Latency *is* delay; perhaps "cuts the
> > delay due to buffering" or "due to queueing" would be better, if more
> > tech-ese.
>
> Good catch, I'll fix.
>
> > I've re-read through the Bell Labs talk, and some of the earlier
> > posts, but could someone explain the "clumping" effect? I understand
> > the wild variations in congestion windows ("swing[ing] rapidly and
> > crazily between emptiness and overload"), but clumping makes me think
> > of closely spaced packet intervals.
>
> It's intended to. This is what I got from jg's talk, and I wrote the
> SOQU scenario to illustrate it. If my understanding is incorrect (and
> I see that you are saying it is) one of the real networking people
> here needs to whack me with the enlightenment stick.
>
> The underlying image in my just-so stories about roads and parking lots
> > is that packet flow coming in smooth on the upstream side of a buffer
> gets turned into a buffer fill, followed by a burst of packets as it
> overflows, followed by more data coming into the buffer, followed by
> overflow...repeat.
I didn't care for that analogy, roads and parking lots. Better might be
freeways and interchanges, because the buffers are where traffic moves from
one freeway (the path between router A and B) to another (the path between
router B and C). If the interchange is small (say, a lane that's only a few
hundred yards long) then any delay becomes immediately apparent. Buffer
bloat is analogous to either increasing the number of lanes in the
interchange, or making the interchange longer (here in South Florida, the
interchange between I-595 W and I-95 N is two lanes two miles long---I am
not making this up).
The analogy also reminded me of traffic physics
(http://amasci.com/amateur/traffic/traffic1.html). I'm not sure if that has
any bearing and is one reason why I tend to dislike analogies of computer
topics to real-world phenomena---there are so many problems and exceptions
that it tends to cloud the issue.
I took buffer bloat to be that the inflow of packets exceeds the outflow
of packets, and that the buffer does exactly that---buffers the excess,
which delays the dropping of packets that would cause congestion control
to kick in.
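That reading fits in a few lines of toy simulation (an illustrative model with made-up numbers, not from the thread): a drop-tail queue only signals congestion once it has absorbed the entire excess, so a bigger buffer pushes the loss signal further and further out.

```python
def first_drop_tick(buffer_pkts, in_per_tick, out_per_tick):
    """Tick-based drop-tail FIFO: returns the tick of the first drop,
    i.e. the first moment congestion control gets any loss signal."""
    q, t = 0, 0
    while True:
        q += in_per_tick                 # offered load arrives
        if q > buffer_pkts:
            return t                     # buffer overflows: first loss
        q = max(0, q - out_per_tick)     # bottleneck drains what it can
        t += 1

# Doubling the buffer roughly doubles how long the overload goes unsignaled.
```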
-spc
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-08 18:18 ` Eric Raymond
` (2 preceding siblings ...)
2011-02-08 20:10 ` Sean Conner
@ 2011-02-09 4:24 ` Justin McCann
2011-02-10 14:55 ` Jim Gettys
3 siblings, 1 reply; 22+ messages in thread
From: Justin McCann @ 2011-02-09 4:24 UTC (permalink / raw)
To: esr; +Cc: bloat
On Tue, Feb 8, 2011 at 1:18 PM, Eric Raymond <esr@thyrsus.com> wrote:
>...
> The underlying image in my just-so stories about roads and parking lots
> is that packet flow coming in smooth on the upstream side of a buffer
> gets turned into a buffer fill, followed by a burst of packets as it
> overflows, followed by more data coming into the buffer, followed by
> overflow...repeat.
I think this image isn't quite right for wired networks, but happens a
lot in wireless networks.
>...
> I don't understand "ack clocking". Alas, my grasp of networking becomes
> sketchy below the level of socket APIs. I know what's in a TCP packet,
> roughly, but have no precise feel for what happens with bits on the wire.
The first figure in the paper Bill Sommerfeld linked to is the best
explanation; here's another decent picture of it:
http://sd.wareonearth.com/woe/Briefings/tcptune/sld038.htm
I can explain in more detail, but I figure it will be less precise
than what's in the paper. In the analogy, the parking lot has a
metered-rate exit--- cars coming in from ten different entrances at
different rates can *never* exit faster than the fixed outgoing rate.
They can exit slower than the metered rate, but not faster, so any
"clumping" has to be caused by something other than the size of the
parking lot (at least at this link).
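The metered-exit behavior is easy to check with a toy FIFO (a hypothetical sketch, not from the message): however clumped the arrivals, departures are never spaced more tightly than the service interval.

```python
def departure_times(arrivals, service_interval):
    """FIFO with a fixed-rate exit: each departure waits for both its
    own arrival and the previous departure plus one service interval."""
    deps, next_free = [], 0.0
    for t in arrivals:          # arrivals: sorted arrival times
        next_free = max(t, next_free) + service_interval
        deps.append(next_free)
    return deps

# A burst of four at t=0 leaves evenly spaced; a straggler at t=10 just
# restarts the clock -- no extra clumping is added on the way out.
```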
Considering Richard's email, I think the VJ paper assumes that the
scheduling in the OS and hardware are not bursty. That is, that the OS
doesn't send clumps of packets to the interface hardware to transmit,
so the hardware buffer doesn't oscillate between full and empty. If
the OS *does* send in bursts due to interrupt latency, scheduling, bus
contention, or a weird application, then you have a different nasty
problem. That sort of clumpy/flighty/bursty behavior is problematic in
general, but I think bufferbloat is only an indirect cause (glad to be
corrected here).
For example, if you're trying to transmit on a wireless link, you may
have to wait a while before you can use it (noise, contention,
scheduling). If you have a buffer full of packets, you'll jam them out
at a much higher rate than you were doing just moments ago. That
really screws with ACK clocking/pacing, and leads to wild swings in
what TCP sends out--- slides 48 and 49 from that link
(http://sd.wareonearth.com/woe/Briefings/tcptune/sld048.htm) are
examples of what we don't want.
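The stall-then-burst behavior on a wireless link can be sketched the same way (again an editorial illustration with invented numbers, not a model of any real radio):

```python
# Toy sketch of a wireless stall: arrivals are nicely paced, but the
# link is unavailable for a while (noise, contention, scheduling).
# When it comes back, the queued packets leave back-to-back at line
# rate -- clumped far more tightly than they arrived, which is exactly
# the kind of burst that wrecks ACK pacing.

def drain_after_stall(arrivals, service_time, link_free_at):
    """link_free_at: time at which the link first becomes usable."""
    departures = []
    for t in arrivals:
        start = max(t, link_free_at)   # wait out the stall / busy link
        link_free_at = start + service_time
        departures.append(link_free_at)
    return departures

# Packets arrive paced 10 ms apart, but the link is down until t=0.1 s;
# they then exit only 1 ms apart.
paced = [i * 0.010 for i in range(10)]
print(drain_after_stall(paced, 0.001, 0.100))
```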
Justin
[1] This skips all sorts of details about varying packet sizes and so
on, but you get the idea.
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-09 4:24 ` Justin McCann
@ 2011-02-10 14:55 ` Jim Gettys
2011-02-10 17:50 ` Dave Täht
0 siblings, 1 reply; 22+ messages in thread
From: Jim Gettys @ 2011-02-10 14:55 UTC (permalink / raw)
To: bloat
On 02/08/2011 11:24 PM, Justin McCann wrote:
> On Tue, Feb 8, 2011 at 1:18 PM, Eric Raymond<esr@thyrsus.com> wrote:
>> ...
>> The underlying image in my just-so stories about roads and parking lots
>> is that packet flow coming in smooth on the upstream side of a buffer
>> gets turned into a buffer fill, followed by a burst of packets as it
>> overflows, followed by more data coming into the buffer, followed by
>> overflow...repeat.
>
> I think this image isn't quite right for wired networks, but it happens
> a lot in wireless networks.
>
>> ...
>> I don't understand "ack clocking". Alas, my grasp of networking becomes
>> sketchy below the level of socket APIs. I know what's in a TCP packet,
>> roughly, but have no precise feel for what happens with bits on the wire.
>
> The first figure in the paper Bill Sommerfeld linked to is the best
> explanation; here's another decent picture of it:
>
> http://sd.wareonearth.com/woe/Briefings/tcptune/sld038.htm
>
> I can explain in more detail, but I figure it will be less precise
> than what's in the paper. In the analogy, the parking lot has a
> metered-rate exit--- cars coming in from ten different entrances at
> different rates can *never* exit faster than the fixed outgoing rate.
> They can exit slower than the metered rate, but not faster, so any
> "clumping" has to be caused by something other than the size of the
> parking lot (at least at this link).
>
> Considering Richard's email, I think the VJ paper assumes that the
> scheduling in the OS and hardware is not bursty. That is, that the OS
> doesn't send clumps of packets to the interface hardware to transmit,
> so the hardware buffer doesn't oscillate between full and empty. If
> the OS *does* send in bursts due to interrupt latency, scheduling, bus
> contention, or a weird application, then you have a different nasty
> problem. That sort of clumpy/flighty/bursty behavior is problematic in
> general, but I think bufferbloat is only an indirect cause (glad to be
> corrected here).
>
>
Well, another way to think about transport protocols is as servo
systems that apply feedback to control the rates.
If you look at the TCP traces that set me off on this merry chase, you
see quite violent periodic behaviour, where the periods are quite long
(of order 10 seconds).
Injecting delays way beyond the natural RTT is hazardous to the
stability of transport protocols.
You can see TCP slowly losing its mind as its RTT estimation gets longer
and longer while the buffer fills. Eventually, it goes ballistic.
The servo system's stability has been destroyed...
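The RTT-estimation side of this can be made concrete with the standard smoothed-RTT calculation from RFC 6298 (the queue-delay numbers below are invented for illustration, not taken from Jim's traces):

```python
# Sketch of RFC 6298-style RTT smoothing, showing how a filling buffer
# drags the smoothed RTT estimate -- and with it the retransmit timer
# (RTO) -- upward, destabilizing the feedback loop.

ALPHA, BETA = 1 / 8, 1 / 4          # standard smoothing gains

def update(srtt, rttvar, sample):
    # Per RFC 6298, RTTVAR is updated using the *old* SRTT.
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = srtt + 4 * rttvar
    return srtt, rttvar, rto

srtt, rttvar = 0.050, 0.025          # start near a 50 ms path RTT
for queue_delay in [0.0, 0.2, 0.5, 1.0, 2.0]:   # buffer filling up
    srtt, rttvar, rto = update(srtt, rttvar, 0.050 + queue_delay)
    print(f"sample={0.050 + queue_delay:.3f}s  srtt={srtt:.3f}s  rto={rto:.3f}s")
```

A few seconds of queueing delay on a 50 ms path pushes the RTO from fractions of a second to several seconds: the servo is now reacting to conditions that are many round trips stale.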
- Jim
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-10 14:55 ` Jim Gettys
@ 2011-02-10 17:50 ` Dave Täht
0 siblings, 0 replies; 22+ messages in thread
From: Dave Täht @ 2011-02-10 17:50 UTC (permalink / raw)
To: Jim Gettys; +Cc: bloat
Jim Gettys <jg@freedesktop.org> writes:
> Well, another way to think about transport protocols is as servo
> systems that apply feedback to control the rates.
>
> If you look at the TCP traces that set me off on this merry chase, you
> see quite violent periodic behaviour, where the periods are quite long
> (of order 10 seconds).
>
> Injecting delays way beyond the natural RTT is hazardous to the
> stability of transport protocols.
>
> You can see TCP slowly losing its mind as its RTT estimation gets longer
> and longer while the buffer fills. Eventually, it goes ballistic.
>
> The servo system's stability has been destroyed...
I LIKE the idea of trying to think about this as a complex servo system.
I also like many of the other analogies that have gone by.
At the moment, the lower levels of plumbing in the internet's servos
more closely resemble a Rube Goldberg machine.
Everybody here could use a belly laugh. Try this:
http://www.youtube.com/watch?v=qybUFnY7Y8w
We've also been overcomplicating this discussion, getting overwhelmed by
detail.
There is no perfect analogy; all we can do - as blind men feeling up
this elephant from trunk, hip, and tail - is to keep trying to describe
its shape in as many different ways as possible until it's more than a
shadow on the wall... [1]
If we can take a step back, not go for one-size-fits-all, and think of
each audience that we need to address, perhaps we'll keep finding
analogies that work for each audience in smaller, more digestible pieces.
> - Jim
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
[1] Plato and the Elephant - coming soon to a writers workshop near you!
--
Dave Taht
http://nex-6.taht.net
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-05 13:23 [Bloat] First draft of complete "Bufferbloat And You" enclosed Eric Raymond
` (4 preceding siblings ...)
2011-02-08 15:17 ` Justin McCann
@ 2011-02-08 19:43 ` Juliusz Chroboczek
2011-02-08 19:52 ` richard
5 siblings, 1 reply; 22+ messages in thread
From: Juliusz Chroboczek @ 2011-02-08 19:43 UTC (permalink / raw)
To: esr; +Cc: bloat
> If too many cars try to use the road at once, bad things happen. One
> of those bad things is cars running off the road and crashing. The
> Internet analog of this is called 'packet loss'. We want to hold it to
> a minimum.
I object to that, since it compares packet loss to a catastrophic
failure. (I would personally argue against a car analogy in the first
place, but that's just stylistic preference.)
> Suppose your rule for when a car gets to leave the parking lot is the
> simplest possible: it fills up until it overflows, then cars are let
> out the downstream side as fast as they can go.
I'm not sure what kind of buffer you're trying to illustrate. That's
certainly not how I understand tail-drop buffers.
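One way to see Juliusz's point: a tail-drop queue drains continuously at the link rate and drops only the packets that arrive to find it full; it does not hold everything and then release a burst. A minimal sketch (an editorial illustration, not a model of any particular device):

```python
from collections import deque

# Minimal tail-drop FIFO: packets arriving to a full queue are dropped
# (the "tail" is cut off); everything already queued keeps draining in
# order, one packet per link transmit slot.
class TailDropQueue:
    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            self.dropped += 1          # tail drop: newest arrival is lost
        else:
            self.q.append(pkt)

    def dequeue(self):                 # called once per transmit slot
        return self.q.popleft() if self.q else None

# Ten packets arrive at once into a 4-packet queue: 6 are dropped, and
# the 4 survivors leave in FIFO order as the link services them.
q = TailDropQueue(capacity=4)
for i in range(10):
    q.enqueue(i)
print(q.dropped, [q.dequeue() for _ in range(4)])
```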
Finally, I'd argue that you're being too harsh on QoS techniques.
--Juliusz
* Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
2011-02-08 19:43 ` Juliusz Chroboczek
@ 2011-02-08 19:52 ` richard
0 siblings, 0 replies; 22+ messages in thread
From: richard @ 2011-02-08 19:52 UTC (permalink / raw)
To: Juliusz Chroboczek; +Cc: bloat
On Tue, 2011-02-08 at 20:43 +0100, Juliusz Chroboczek wrote:
> > If too many cars try to use the road at once, bad things happen. One
> > of those bad things is cars running off the road and crashing. The
> > Internet analog of this is called 'packet loss'. We want to hold it to
> > a minimum.
>
> I object to that, since it compares packet loss to a catastrophic
> failure. (I would personally argue against a car analogy in the first
> place, but that's just stylistic preference.)
>
This bothers me too. You almost have to inject a "MacGuffin" in the form
of a magic transporter at the entrance to all parking lots such that any
car that does not fit gets magically transported back to its origination
to try again - packet loss and retry - not catastrophic but very much a
penalty.
> > Suppose your rule for when a car gets to leave the parking lot is the
> > simplest possible: it fills up until it overflows, then cars are let
> > out the downstream side as fast as they can go.
>
> I'm not sure what kind of buffer you're trying to illustrate. That's
> certainly not how I understand tail-drop buffers.
>
> Finally, I'd argue that you're being too harsh on QoS techniques.
>
> --Juliusz
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Richard C. Pitt Pacific Data Capture
rcpitt@pacdat.net 604-644-9265
http://digital-rag.com www.pacdat.net
PGP Fingerprint: FCEF 167D 151B 64C4 3333 57F0 4F18 AF98 9F59 DD73
end of thread, other threads:[~2011-02-10 17:50 UTC | newest]
Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-02-05 13:23 [Bloat] First draft of complete "Bufferbloat And You" enclosed Eric Raymond
2011-02-05 13:42 ` Jim Gettys
2011-02-05 15:12 ` Dave Täht
2011-02-05 15:46 ` Dave Täht
2011-02-06 13:37 ` Eric Raymond
2011-02-05 17:56 ` richard
2011-02-05 19:48 ` richard
2011-02-05 22:12 ` Dave Täht
2011-02-06 1:29 ` richard
2011-02-06 2:35 ` Dave Täht
2011-02-06 2:50 ` richard
2011-02-08 15:17 ` Justin McCann
2011-02-08 18:18 ` Eric Raymond
2011-02-08 18:31 ` richard
2011-02-08 18:50 ` Bill Sommerfeld
2011-02-09 15:50 ` Eric Raymond
2011-02-08 20:10 ` Sean Conner
2011-02-09 4:24 ` Justin McCann
2011-02-10 14:55 ` Jim Gettys
2011-02-10 17:50 ` Dave Täht
2011-02-08 19:43 ` Juliusz Chroboczek
2011-02-08 19:52 ` richard