General list for discussing Bufferbloat
* [Bloat] sweeping up the bloat
From: Dave Taht @ 2015-06-16 17:10 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: bloat

In a larger context, I was also trying to find ways to describe and
offload a set of simple tasks (like adding BQL and this), and to find
ways to enthuse folk about tackling them at scale. We shouldn't have
to sweat the small stuff so much.

So this is my preliminary list:

http://www.bufferbloat.net/projects/bloat/wiki/Janitorial_Tasks

Any additions or suggestions for text are welcome. What other easy
stuff can be hit across the board using the bufferbloat-fighting
facilities that have landed in Linux?

As one example: I don't have a firm guideline for how, why, or when to
enable pacing - as far as I know, the patch(es) for xvnc are still out
of tree?


On Tue, Jun 16, 2015 at 9:33 AM, Jonathan Morton <chromatix99@gmail.com> wrote:
>
>> On 16 Jun, 2015, at 19:18, Steinar H. Gunderson <sgunderson@bigfoot.com> wrote:
>>
>> On Tue, Jun 16, 2015 at 09:11:08AM -0700, Dave Taht wrote:
>>> I just tossed off a quick patch for rsync, not that I have a clue as
>>> to whether it would make any difference there.
>>
>> For bulk applications (like rsync), how would this make sense at all?
>> I thought the entire point of this option was for when you know what data
>> to send now but might want to change your mind later if it takes some time
>> to send it. The latter doesn't apply to rsync.
>
> Actually, it does.  Rsync is designed to be used to update an existing set of files, so the protocol interleaves control and data information asynchronously.
>
> More generally, I think it's worth setting TCP_NOTSENT_LOWAT in *any* application that uses select() or poll() to service readable and writable sockets simultaneously.
>
> - Jonathan Morton
>
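For the archives, a minimal sketch of what Jonathan suggests, given a
connected TCP socket fd (the 16 KB threshold is illustrative, not a
recommendation):

#include <stdio.h>          /* perror */
#include <sys/socket.h>     /* setsockopt */
#include <netinet/in.h>     /* IPPROTO_TCP */
#include <netinet/tcp.h>    /* TCP_NOTSENT_LOWAT */

int lowat = 16384;  /* cap on unsent bytes queued in the socket */
if (setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
               &lowat, sizeof(lowat)) < 0)
    perror("setsockopt(TCP_NOTSENT_LOWAT)");

With this set, select()/poll() report the socket writable only while
less than ~16 KB of unsent data sits in the send buffer, so the
application can change its mind about what to send next.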



-- 
Dave Täht
What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast


* Re: [Bloat] sweeping up the bloat
From: Steinar H. Gunderson @ 2015-06-16 17:33 UTC (permalink / raw)
  To: bloat

On Tue, Jun 16, 2015 at 10:10:18AM -0700, Dave Taht wrote:
> As one example: I don't have a firm guideline for how, why, or when to
> enable pacing

My guideline is “always”.
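(In practice that means running sch_fq on the sending host's egress
interface, e.g. "tc qdisc replace dev eth0 root fq", since sch_fq is
what enforces the pacing rate the TCP stack computes; substitute the
actual interface for "eth0".)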

/* Steinar */
-- 
Homepage: http://www.sesse.net/


* Re: [Bloat] sweeping up the bloat
From: Dave Taht @ 2015-06-16 17:53 UTC (permalink / raw)
  To: Steinar H. Gunderson; +Cc: bloat

On Tue, Jun 16, 2015 at 10:33 AM, Steinar H. Gunderson
<sgunderson@bigfoot.com> wrote:
> On Tue, Jun 16, 2015 at 10:10:18AM -0700, Dave Taht wrote:
>> As one example: I don't have a firm guideline for how, why, or when to
>> enable pacing
>
> My guideline is “always”.

As you are the original champion of the idea, sure! :) Certainly,
having seen the original paper on pacing thoroughly refuted [1], and
having observed the effects on tons of traffic now, I too am mostly a
fan. (Aggregation bothers me, but it's hard to measure; and fq_codel
remains the right thing for routers, bare-metal servers hosting VMs,
and anything that gets hardware flow control, IMHO. I would love to be
able to turn on the right things more automagically in all these
cases.)

But guidelines on how to configure it in applications are missing. As
are guidelines on when, where, and how to deploy it in DCs, handheld
clients, internal servers and hosts, home routers, slow networks, VMs,
and bare-metal servers.

QUIC does pacing, so far as I know, entirely in userspace; or does it
rely on sch_fq to do so? Should a VoIP app or server like FreeSWITCH
use it?

I see support in the kernel for sk_pacing_rate and max_pacing_rate,
but it is unclear how and when those options help and how to set them.
I have never seen the patches for vlc (not that I recall, anyway), and
I certainly think that pacing and TCP_NOTSENT_LOWAT would help things
like X11 tunneling. And I'd like to add correct support for both to
netperf and flent, so as to better observe the effects, on and off, at
various rates, bandwidths, and link-layer technologies.

So, as you are the expert on this, can I ask you to write up where and
how to use it? Needn't be one big chunk...

[1] https://reproducingnetworkresearch.wordpress.com/2013/03/13/cs244-13-tcp-pacing-and-buffer-sizing/




-- 
Dave Täht
What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast


* Re: [Bloat] sweeping up the bloat
From: Eric Dumazet @ 2015-06-16 18:32 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

On Tue, 2015-06-16 at 10:53 -0700, Dave Taht wrote:

> But guidelines on how to configure it in applications are missing.

Well, if you know the optimal rate, set it. It is that simple.

If not, leave this to the TCP stack.

>  As
> are guidelines on when, where, and how to deploy it in DCs, handheld
> clients, internal servers and hosts, home routers, slow networks, VMs,
> and bare-metal servers.
> 
> QUIC does pacing, so far as I know, entirely in userspace; or does it
> rely on sch_fq to do so? Should a VoIP app or server like FreeSWITCH
> use it?

A QUIC server handles thousands (millions?) of flows. Having one
kernel socket per flow would be way too expensive.

(And BTW, the UDP rx path is not optimized for 4-tuple hashing.)

There are two hash tables: one looked up by destination_IP:destination_port,
and one by *:destination_port.

For a QUIC server, all sockets would share the same keys.
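
Which is presumably why a userspace QUIC implementation keeps a single
unconnected UDP socket and demultiplexes flows itself. A sketch of
that receive pattern, given a bound UDP socket fd, with the per-flow
dispatch left as a hypothetical handle_flow() stub:

#include <sys/socket.h>
#include <netinet/in.h>

char buf[65536];
struct sockaddr_in peer;
socklen_t peerlen;
ssize_t n;

for (;;) {
    peerlen = sizeof(peer);
    n = recvfrom(fd, buf, sizeof(buf), 0,
                 (struct sockaddr *)&peer, &peerlen);
    if (n < 0)
        break;
    /* demux in userspace, keyed on peer address or connection ID,
     * rather than paying for one kernel socket per flow */
    handle_flow(&peer, buf, n);
}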

> 
> I see in the kernel support for sk_pacing_rate, and max_pacing_rate
> and it is unclear how/when those options can be of aid and set.

Really?

You haven't tried hard, and this is quite upsetting given your concerns.

http://www.spinics.net/lists/netdev/msg251368.html

unsigned int val = 1000000;  /* pacing rate cap, in bytes per second */
setsockopt(sockfd, SOL_SOCKET, SO_MAX_PACING_RATE, &val, sizeof(val));
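
(Note: SO_MAX_PACING_RATE is 47 in asm-generic/socket.h, in case your
libc headers predate it; the rate is in bytes per second, so the
1000000 above works out to about 8 Mbit/s.)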


Can it be simpler than that? (In C, I mean...)





* Re: [Bloat] sweeping up the bloat
From: Dave Taht @ 2015-06-16 19:33 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

On Tue, Jun 16, 2015 at 11:32 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Tue, 2015-06-16 at 10:53 -0700, Dave Taht wrote:
>
>> But guidelines on how to configure it in applications are missing.
>
> Well, if you know the optimal rate, set it. It is that simple.
>
> If not, leave this to the TCP stack.
>
>>  As
>> are guidelines on when, where, and how to deploy it in DCs, handheld
>> clients, internal servers and hosts, home routers, slow networks, VMs,
>> and bare-metal servers.
>>
>> QUIC does pacing, so far as I know, entirely in userspace; or does it
>> rely on sch_fq to do so? Should a VoIP app or server like FreeSWITCH
>> use it?
>
> A QUIC server handles thousands (millions?) of flows. Having one
> kernel socket per flow would be way too expensive.
>
> (And BTW, the UDP rx path is not optimized for 4-tuple hashing.)
>
> There are two hash tables: one looked up by destination_IP:destination_port,
> and one by *:destination_port.
>
> For a QUIC server, all sockets would share the same keys.

Thank you. I am watching the progress on the GitHub libs for QUIC with
great anticipation.

>>
>> I see support in the kernel for sk_pacing_rate and max_pacing_rate,
>> but it is unclear how and when those options help and how to set them.
>
> Really?
>
> You haven't tried hard, and this is quite upsetting given your concerns.

Consider the last few weeks as "trying harder".

While I have great enthusiasm for all the improvements in e2e stuff,
my time for any of it shrank considerably while keeping a roof over my
head with largely irrelevant work, teasing the long tail of folk that
still believed in Reno, bringing up new devices that were 2.6-based
(sigh), and stabilizing OpenWrt stuff for a new redeployment on 3.18,
which had significant breakages in babel (1.6.1 is due tonight) and
dnsmasq (2.73 just released), plus ongoing issues in hnetd and dhcp-pd.

Toke's testbed remains at 3.14. My principal server is still on 3.9.

In recent months I have established 3.19 servers all over the world to
test with, but have not gotten around to doing anything truly coherent
with them. The next step was to add them to the flent pool; I have
been distracted by "cake".

But my ongoing, massive step has been to try to assemble sufficient
solutions and resources to tackle make-wifi-fast at all layers 0-10,
which has eaten most of my "spare" time for many, many months now.

Jim is focused on other things, and I long ago hit "overwhelm" with
the resources I had.

Only now am I trying to catch up on all the good stuff that has
happened e2e since I could last pay attention, and it seemed like a
better idea to instead try to find people willing to step up on all
the other fronts in the bufferbloat battle.

I gotta admit, bufferbloat.net was a lot more fun 4+ years ago. I
stopped scaling shortly before we finalized cerowrt.

I am all in favor of a top-down perspective from you and all others on
this effort as to what we (and I) should be prioritizing in the
future. Closing up shop entirely is also an option.

For fun, these days, I hack on FPGAs.

> http://www.spinics.net/lists/netdev/msg251368.html

Although *I* read all your commits every month (it used to be daily),
I only grepped the kernel for "pacing" just now. There is, for
example, nothing on it in Documentation/networking. I can find no
patches in open source code actually using SO_MAX_PACING_RATE. That
constant IS in my glibc headers, but TCP_NOTSENT_LOWAT is not, which
is sort of what sparked this thread: what good had happened since I
last paid attention, and how far had it spread?
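
For anyone else whose headers lack it: the constant can be supplied by
hand until libc catches up. In the kernel's linux/tcp.h it is 25, so a
guard like this tides an application over:

#ifndef TCP_NOTSENT_LOWAT
#define TCP_NOTSENT_LOWAT 25  /* value from include/uapi/linux/tcp.h */
#endif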

So I think a little more outreach and some example patches for various
common applications are needed. From everybody.

> unsigned int val = 1000000;  /* pacing rate cap, in bytes per second */
> setsockopt(sockfd, SOL_SOCKET, SO_MAX_PACING_RATE, &val, sizeof(val));

Thx. flent and/or netperf could use a fixed-rate sender test exercising this.
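
A rough sketch of such a sender, assuming a Linux box with sch_fq on
the egress path; the address and port are made up for illustration,
and error handling is abbreviated:

/* paced_sender.c: open a TCP connection, cap its pacing rate, and
 * write continuously, letting the kernel space out the packets. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47          /* asm-generic/socket.h value */
#endif

int main(void)
{
    unsigned int rate = 1000000;       /* bytes/sec, ~8 Mbit/s */
    char buf[16384] = { 0 };
    struct sockaddr_in sin = { .sin_family = AF_INET,
                               .sin_port   = htons(5001) };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    inet_pton(AF_INET, "192.0.2.1", &sin.sin_addr);  /* example sink */
    if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        perror("connect");
        return 1;
    }
    if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                   &rate, sizeof(rate)) < 0)
        perror("setsockopt(SO_MAX_PACING_RATE)");

    for (;;)                           /* the qdisc paces these writes */
        if (write(fd, buf, sizeof(buf)) < 0) {
            perror("write");
            break;
        }
    close(fd);
    return 0;
}

Point it at a discard-style sink and compare packet interarrival times
with the option on and off.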

> Can it be simpler than that? (In C, I mean...)

Heh. Nope.




-- 
Dave Täht
What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast

