From: grenville armitage <garmitage@swin.edu.au>
To: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Solving bufferbloat with TCP using packet delay
Date: Thu, 21 Mar 2013 07:21:52 +1100 [thread overview]
Message-ID: <514A1A60.2090006@swin.edu.au> (raw)
In-Reply-To: <CADC3P9nHcmb214w7edufkkdqhS14EG9AigUqzC_4cLhcY7fDzQ@mail.gmail.com>
On 03/21/2013 02:36, Steffan Norberhuis wrote:
> Hello Everyone,
>
> For a project for the Delft Technical University myself and 3
> students are writing a review paper on the buffer bloat problem and
> its possible solutions.
My colleagues have been dabbling with delay-based CC algorithms,
with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
if that's of any interest.
Some thoughts:
- When delay-based TCPs share bottlenecks with loss-based TCPs,
the delay-based TCPs are punished. Hard. They back off
as queuing delay builds, while the loss-based flow(s)
blissfully continue to push the queue to full (drop).
Everyone sharing the bottleneck sees latency fluctuations,
bounded by the bottleneck queue's effective 'length' (set
by physical RAM limits or an operator-configurable threshold).
(A rough sketch of this delay-based back-off appears after
this list.)
- The previous point suggests perhaps a hybrid TCP which uses
delay-based control, but switches (briefly) to loss-based
control if it detects the bottleneck queue is being
hammered by other, loss-based TCP flows. Challenging
questions arise as to what triggers switching between
delay-based and loss-based modes (one hypothetical trigger
is sketched after this list).
- Reducing a buffer's length requires meddling with the
bottleneck(s) (new firmware or new devices). Deploying
delay-based TCPs requires meddling with endpoints (OS
upgrade/patch). Many more of the latter than the former.
- But yes, in self-contained networks where the end hosts can all
be told to run a delay-based CC algorithm, delay-based CC
can mitigate the impact of bloated buffers in your bottleneck
network devices. Such homogeneous environments do exist, but
the Internet is quite different.
- Alternatively, if one could classify delay-based CC flows into one
queue and loss-based CC flows into another queue at each
bottleneck, the first point above might not be such a problem.
I also want a pink pony ;) (Of course, once we're considering
tweaking the bottlenecks with classifiers and multiple queues, we
might as well continue the surgery and reduce the bloated buffers
too.) A toy sketch of such a two-queue bottleneck also follows below.
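
To make the first point concrete, here is a rough, Vegas-flavoured
sketch (in Python) of how a delay-based sender backs off as queuing
delay builds. It is illustrative only, not the CAIA FreeBSD code;
the function name, parameters and thresholds are made up for this
example.

  # Vegas-style cwnd adjustment, applied roughly once per RTT.
  # Illustrative only; not taken from the caia.swin.edu.au modules.
  def update_cwnd(cwnd, base_rtt, current_rtt, alpha=2, beta=4):
      expected_rate = cwnd / base_rtt       # rate if the queue were empty
      actual_rate = cwnd / current_rtt      # rate actually achieved
      backlog = (expected_rate - actual_rate) * base_rtt  # est. packets we queued
      if backlog < alpha:
          return cwnd + 1   # little queuing delay: probe for more bandwidth
      if backlog > beta:
          return cwnd - 1   # queuing delay building: back off early
      return cwnd           # backlog in the target band: hold steady

A loss-based flow sharing the same bottleneck never treats a growing
backlog as a reason to slow down, so it keeps pushing until the queue
overflows -- which is exactly why the delay-based flow above ends up
starved.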
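The hybrid idea needs some trigger for "the queue is being hammered
by loss-based flows". One plausible (and entirely hypothetical)
heuristic: if we have backed off several times in a row and the
measured queuing delay still refuses to drop, assume loss-based
competition and temporarily stop yielding. Names and thresholds here
are invented for illustration.

  # Hypothetical mode-switch trigger for a hybrid delay/loss CC.
  def competing_loss_based_flows(recent_qdelay_ms, consecutive_backoffs,
                                 qdelay_floor_ms=50, backoff_limit=3):
      # We kept yielding, yet queuing delay stayed high: someone else
      # is keeping the bottleneck queue full, so yielding only hurts us.
      return (consecutive_backoffs >= backoff_limit and
              min(recent_qdelay_ms[-backoff_limit:]) > qdelay_floor_ms)

When the trigger fires, the sender could run Reno-like loss-based
control until a loss event or until queuing delay drains, then return
to delay-based mode. Getting those two transitions right is the hard
part.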
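And for the pink pony: a toy sketch of a bottleneck that keeps
delay-based and loss-based traffic in separate FIFOs and serves them
round-robin, so loss-based flows filling their own queue can't
inflate the delay-based flows' queuing delay. How a packet gets
marked as delay-based (DSCP, a flow table, whatever) is waved away
here; this is not a real qdisc.

  from collections import deque

  class TwoClassBottleneck:
      # Toy two-queue bottleneck: one FIFO per CC class, round-robin service.
      def __init__(self, limit=100):
          self.queues = [deque(), deque()]  # 0 = delay-based, 1 = loss-based
          self.limit = limit                # per-class queue length (packets)
          self.turn = 0                     # round-robin pointer

      def enqueue(self, pkt, is_delay_based):
          q = self.queues[0 if is_delay_based else 1]
          if len(q) < self.limit:
              q.append(pkt)
          # else tail drop: only the overfilling class sees the loss

      def dequeue(self):
          for _ in range(2):                # try both classes in turn
              q = self.queues[self.turn]
              self.turn ^= 1
              if q:
                  return q.popleft()
          return None                       # both queues empty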
cheers,
gja