From: Hans-Kristian Bakke <hkbakke@gmail.com>
To: bloat <bloat@lists.bufferbloat.net>
Subject: [Bloat] Fwd:  Recommendations for fq_codel and tso/gso in 2017
Date: Fri, 27 Jan 2017 20:56:06 +0100
Message-ID: <CAD_cGvG0PCOb-iJwkFL2GoXW2fUiwshwr1Wv__Y1hCru0PJ7ew@mail.gmail.com>
In-Reply-To: <CAD_cGvFSmmFOAyArqCzjhSZAwDYnBqpAvCjKAyBi+PJS5Ofm3A@mail.gmail.com>


Thank you for answering!

On 27 January 2017 at 08:55, Dave Täht <dave@taht.net> wrote:

>
>
> On 1/26/17 11:21 PM, Hans-Kristian Bakke wrote:
> > Hi
> >
> > After having had some issues with inconsistent tso/gso configuration
> > causing performance issues for sch_fq with pacing in one of my systems,
> > I wonder if it is still recommended to disable gso/tso for interfaces
> > used with fq_codel qdiscs and shaping using HTB etc.
>
> At lower bandwidths GRO can do terrible things. Say you have a 1 Mbit
> uplink, and IW10. (At least one device (mvneta) will synthesise 64k of
> GRO packets.)
>
> A single IW10 burst from one flow injects 130 ms of latency.
>
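(That matches the back-of-the-envelope arithmetic as I understand it: an IW10
burst is roughly 10 x 1500 bytes = 120 kbit, which takes on the order of
120-130 ms just to serialize on a 1 Mbit/s link, so a single burst occupies
the link for that long before anything else gets through.)
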
> >
> > If there is a trade-off, at which bandwidth does it generally make more
> > sense to enable tso/gso than to have it disabled when doing HTB shaped
> > fq_codel qdiscs?
>
> I stopped caring about tuning params in the > 40 Mbit, < 10 gbit range,
> or rather, I only care about trying to get below 200 usec of
> jitter|latency there. (Others care.)
>
> And: My expectation was generally that people would ignore our
> recommendations on disabling offloads!
>
> Yes, we should revise the sample sqm code and recommendations for a
> post-gigabit era to not bother with changing network offloads. Were you
> modifying the old debloat script?
>

I just picked it up from just about every bufferbloat script or introduction
I have seen in the last 4 years. In addition, disabling the offloads seemed
to bring the measured bandwidth of the shaped stream a little closer to the
bandwidth I had actually configured in HTB in my own testing, which, if I
remember correctly, was done on a symmetrical link shaped to around
25 mbit/s, so I just took it for granted.

However, the fq pacing issue I had, with tso and gso disabled on a bond
interface on top of physical NICs that had tso and gso enabled, made me
think that disabling tso and gso is perhaps not really expected behaviour
for newer implementations in the Linux network stack. Perhaps it works
nicely for my shaping needs, but also gives me other, not so obvious,
issues in other ways.
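
To make the inconsistency concrete, it looked roughly like this (illustrative
only; bond0, eth0 and eth1 stand in for my actual interfaces):

    # offloads had been turned off on the bond device itself...
    ethtool -K bond0 tso off gso off
    # ...while the physical slaves underneath still had them on,
    # which ethtool -k makes easy to spot
    ethtool -k eth0 | grep segmentation-offload
    ethtool -k eth1 | grep segmentation-offload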


> TBF & sch_cake do peeling of gro/tso/gso back into packets, and then
> interleave their scheduling, so GRO is both helpful (transiting the
> stack faster) and harmless, at all bandwidths.
>
> HTB doesn't peel. We also just ripped out hfsc from sqm-scripts (too
> buggy), leaving the tbf + fq_codel, htb + fq_codel, and cake models there.
>
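If I understand that correctly, the tbf + fq_codel model would be something
along these lines, with tbf doing the peeling at the shaped rate (a sketch
only; the rate, burst and interface name are placeholders):

    tc qdisc replace dev eth0 root handle 1: tbf rate 25mbit burst 32kb latency 300ms
    tc qdisc add dev eth0 parent 1:1 handle 110: fq_codel
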
> ...
>
> Cake is coming along nicely. I'd love a test in your 2Gbit bonding
> scenario, particularly a per-host fairness test, at line or shaped
> rates. We recently got cake working well with NAT.
>
>

Is this something I can do for you? This is a system in production:
non-critical enough to play with some qdiscs and generate some bandwidth
usage, but still in production. It is not really possible for me to remove
all other traffic and factors that may interfere with the results (or is a
real-life scenario perhaps the point?). But running a few scripts is no
problem if that is what is required!
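
If you can point me to exactly what you want run, I imagine it is something
along these lines (a sketch only; bond0, the 2gbit rate and the server name
are placeholders, not a statement of what you actually want tested):

    # cake at a shaped rate, NAT-aware, with per-host fairness
    tc qdisc replace dev bond0 root cake bandwidth 2gbit nat dual-dsthost

    # or unshaped, at line rate
    # tc qdisc replace dev bond0 root cake unlimited nat dual-dsthost

    # then a 60 second flent rrul run against a netperf server
    flent rrul -l 60 -H netperf.example.net -t cake-bond0-test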



> http://blog.cerowrt.org/flent/steam/down_working.svg (ignore the latency
> figure, the 6 flows were to spots all over the world)
>
> > Regards,
> > Hans-Kristian
> >
> >

