Cake - FQ_codel the next generation
From: Jonathan Morton <chromatix99@gmail.com>
To: moeller0 <moeller0@gmx.de>
Cc: cake@lists.bufferbloat.net
Subject: Re: [Cake] Master branch updated
Date: Tue, 4 Oct 2016 14:18:54 +0300	[thread overview]
Message-ID: <D15CF04F-19DE-4AE2-9B56-C2D5A670A123@gmail.com> (raw)
In-Reply-To: <66438228-D13A-42C4-8B42-11C49E0B2587@gmx.de>


> On 4 Oct, 2016, at 11:46, moeller0 <moeller0@gmx.de> wrote:
> 
> About that PTM accounting, could you explain why you want to perform the adjustment as a “virtual” size increase per packet instead of a “virtual” rate reduction?

The shaper works by calculating the time occupied by each packet on the wire, and advancing a virtual clock in step with a continuous stream of packets.

The time occupation, in turn, is calculated as the number of bytes which appear on the wire divided by the number of bytes that wire can pass per second.  As an optimisation, the division is turned into a multiplication by the reciprocal.
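As a sketch of the mechanism described above (not Cake's actual code; the struct and field names here are illustrative, and the reciprocal is expressed simply as nanoseconds per byte):

```c
#include <stdint.h>

/* Illustrative shaper state: a virtual clock advanced by each packet's
 * wire time.  time_ns_per_byte is the precomputed reciprocal of the
 * link's byte rate, so the per-packet division becomes a multiply. */
struct shaper {
    uint64_t virtual_clock_ns;  /* time at which the link next becomes idle */
    uint64_t time_ns_per_byte;  /* reciprocal of the bytes-per-second rate */
};

/* Advance the virtual clock by the time one packet occupies the wire. */
static void shaper_send(struct shaper *s, uint32_t wire_len_bytes)
{
    s->virtual_clock_ns += (uint64_t)wire_len_bytes * s->time_ns_per_byte;
}
```

At 1 Gbit/s, for instance, the reciprocal is 8 ns per byte, so a 1500-byte packet advances the clock by 12 µs.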

I’m quite keen to keep the “bytes per second” purely derived from the raw bitrate of the link, because that is the value widely advertised by ISPs and network equipment manufacturers everywhere.  Hence, overhead compensation is implemented purely by increasing the accounted size of the packets.

I have been careful to calculate ceil(len * 65/64) here, so that the overhead is never underestimated.  For example, a 1500-byte IP packet becomes 1519 with bridged PTM or 1527 with PPPoE over PTM, before the PTM calculation itself.  These both round up to 1536 before division, so 24 more bytes will be added in both cases.
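A sketch of that 64b/65b calculation (the function name is mine, not Cake's): since ceil(len * 65/64) = len + ceil(len/64), and 64 is a power of two, the whole thing reduces to a shift and an add.

```c
#include <stdint.h>

/* Account for PTM 64b/65b framing: each 64 bytes of payload costs one
 * extra sync byte on the wire.  Rounding up via (len + 63) >> 6 is
 * exactly ceil(len / 64), so the overhead is never underestimated. */
static uint32_t ptm_len(uint32_t len)
{
    return len + ((len + 63) >> 6);  /* len + ceil(len / 64) */
}
```

For the examples above: ptm_len(1519) adds ceil(1519/64) = 24 bytes, giving 1543, and ptm_len(1527) likewise adds 24, giving 1551.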

This is less than 2 bits more than actually required (on average), so wastes less than 1/6200 of the bandwidth when full-sized packets dominate the link (as is the usual case).  Users are unlikely to notice this in practice.

Next to all the other stuff Cake does for each packet, the overhead compensation is extremely quick.  And, although the code looks very similar, the PTM compensation is faster than the ATM compensation, because the factor involved is a power of two (which GCC is very good at optimising into shifts and masks).  This is fortunate, since PTM is typically used on higher-bandwidth links than ATM.
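For contrast, a sketch of the classic ATM cell-framing calculation (again an illustrative function, not Cake's source): each cell carries 48 payload bytes in 53 wire bytes, and because 48 is not a power of two the compiler cannot reduce the division to a plain shift the way it can for the PTM /64 case.

```c
#include <stdint.h>

/* ATM AAL5 cell framing: the payload is split into ceil(len / 48)
 * cells, each occupying 53 bytes on the wire. */
static uint32_t atm_len(uint32_t len)
{
    return ((len + 47) / 48) * 53;  /* ceil(len / 48) cells of 53 bytes */
}
```

A 1500-byte packet needs 32 cells, i.e. 1696 wire bytes, which is why ATM overhead is so much lumpier than PTM's.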

Now, if you can show me that the above is in fact incorrect - that significant bandwidth is wasted on some real traffic profile, or that cake_overhead() figures highly in a CPU profile on real hardware - then I will reconsider.

 - Jonathan Morton



Thread overview: 9+ messages
2016-10-04  7:22 Jonathan Morton
2016-10-04  8:46 ` moeller0
2016-10-04 11:18   ` Jonathan Morton [this message]
2016-10-04 11:54     ` moeller0
2016-10-04 16:23       ` Loganaden Velvindron
2016-10-04 15:22 ` Kevin Darbyshire-Bryant
2016-10-04 16:28   ` Jonathan Morton
2016-10-11  5:41     ` Jonathan Morton
2016-10-11 12:09       ` Luis E. Garcia
