[Bloat] virtio_net: BQL?
Stephen Hemminger
stephen at networkplumber.org
Mon May 17 19:00:36 EDT 2021
On Mon, 17 May 2021 14:48:46 -0700
Dave Taht <dave.taht at gmail.com> wrote:
> On Mon, May 17, 2021 at 1:23 PM Willem de Bruijn
> <willemdebruijn.kernel at gmail.com> wrote:
> >
> > On Mon, May 17, 2021 at 2:44 PM Dave Taht <dave.taht at gmail.com> wrote:
> > >
> > > Not really related to this patch, but is there some reason why virtio
> > > has no support for BQL?
> >
> > There have been a few attempts to add it over the years.
> >
> > Most recently, https://lore.kernel.org/lkml/20181205225323.12555-2-mst@redhat.com/
> >
> > That thread has a long discussion. I think the key open issue remains
> >
> > "The tricky part is the mode switching between napi and no napi."
>
> Oy, vey.
>
> I didn't pay any attention to that discussion, sadly enough.
>
> It's been about that long (2018) since I last paid any attention to
> bufferbloat in the cloud, and my cloudy provider (Linode) switched to
> using virtio when I wasn't looking. For over a year now I've been
> getting reports that Comcast's PIE rollout wasn't working as well as
> expected, that evenroute's implementation of sch_cake and SQM on
> inbound wasn't working right, nor pfSense's, along with numerous
> other issues at Internet scale.
>
> Last week I ran a string of benchmarks against Starlink's new services
> and was really aghast at what I found there, too. But the problem
> seemed deeper than just the dishy...
>
> Without BQL, there's no backpressure for fq_codel to do its thing.
> None. My measurement servers aren't FQ-codeling no matter how much
> load I put on them. Since that qdisc is now the default in most Linux
> distributions, I imagine that the bulk of the cloud is behaving as
> erratically as Linux was in 2011, with enormous swings in throughput
> and latency from GSO/TSO hitting overlarge rx/tx rings [1], breaking
> various rate estimators in CoDel, PIE, and the TCP stack itself.
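
For reference, the driver side of BQL is just a pair of accounting
calls: report bytes queued in the xmit path, report bytes reaped in the
completion path, and the stack grows and shrinks a per-queue in-flight
byte limit, stopping and waking the queue accordingly. That limit is
what lets a backlog form up at the qdisc where fq_codel can see it. A
minimal sketch, assuming a hypothetical single-queue driver
(my_dev_xmit and my_dev_tx_poll are illustrative names, not
virtio_net's actual functions):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* xmit path: count bytes handed to the tx ring */
static netdev_tx_t my_dev_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);
        unsigned int len = skb->len;    /* save before skb can be freed */

        /* ... post skb to the device tx ring ... */

        netdev_tx_sent_queue(txq, len); /* BQL: bytes now in flight */
        return NETDEV_TX_OK;
}

/* completion path (e.g. napi poll): count bytes the device finished */
static void my_dev_tx_poll(struct net_device *dev)
{
        struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);
        unsigned int pkts = 0, bytes = 0;

        /* ... reap completed descriptors, accumulating pkts/bytes ... */

        netdev_tx_completed_queue(txq, pkts, bytes);
}

Every byte passed to netdev_tx_sent_queue() must eventually be matched
by a netdev_tx_completed_queue() on the same queue, whichever path
reaps it; that invariant is exactly what virtio_net's two completion
modes make awkward.
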
>
> See:
>
> http://fremont.starlink.taht.net/~d/virtio_nobql/rrul_-_evenroute_v3_server_fq_codel.png
>
> See the swings in latency there? That's symptomatic of tx/rx rings
> filling and emptying.
>
> It wasn't until I switched my measurement server temporarily over to
> sch_fq that I got an rrul result close to what we used to get from
> the virtualized e1000e drivers we were using in 2014.
>
> http://fremont.starlink.taht.net/~d/virtio_nobql/rrul_-_evenroute_v3_server_fq.png
>
> While I have long supported the use of sch_fq for TCP-heavy workloads,
> it still behaves better with BQL in place, and fq_codel is better for
> generic workloads... but it needs BQL-based backpressure to kick in.
>
> [1] I really hope I'm overreacting but, um, er, could someone(s) spin
> up a new patch that does BQL in some way even half right for this
> driver and help test it? I haven't built a kernel in a while.
>
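
The "mode switching" that stalled the earlier patch is precisely that
invariant: virtio_net can reap tx completions either from its napi
handler or opportunistically from the xmit path, and BQL's per-queue
counters go stale if completions migrate between paths. A hedged
sketch of the reset step such a switch would need, using the real
netdev_tx_reset_queue() helper (the function name and the surrounding
quiesce logic here are illustrative, not from an actual patch):

#include <linux/netdevice.h>

/* hypothetical helper: called while flipping between completion modes */
static void example_tx_mode_switch(struct net_device *dev, unsigned int q)
{
        struct netdev_queue *txq = netdev_get_tx_queue(dev, q);

        __netif_tx_lock_bh(txq);        /* quiesce the xmit path */
        /* ... reap or flush all outstanding tx completions ... */
        netdev_tx_reset_queue(txq);     /* discard stale BQL state */
        __netif_tx_unlock_bh(txq);
}
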
The Azure network driver (netvsc) also does not have BQL. Several years
ago I tried adding it, but it benchmarked worse, and there is the added
complexity of handling the accelerated networking VF path.