* [Bloat] generic tcp - window scaling limits
@ 2017-11-04 13:36 Matthias Tafelmeier
From: Matthias Tafelmeier @ 2017-11-04 13:36 UTC (permalink / raw)
To: bloat
Hello,
Before bringing this to the Linux netdev list, and at the risk of restating
something already known, I wanted to ask for this list's take first, since it
is also bufferbloat-related when looked at from the link congestion/flow-smoothness
point of view.
I first surmised that introducing a form of DQL (dynamic queue limits, as BQL
does for driver rings) on the TCP memory hard limit could improve perceived
flow latency, though the current hard limit is perfectly adequate in
conjunction with window advertisement/scaling.
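For context, the hard limit meant here is the third (max) field of the TCP
autotuning sysctls; the values below are illustrative defaults, not a
recommendation:

    $ sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
    net.ipv4.tcp_rmem = 4096 131072 6291456
    net.ipv4.tcp_wmem = 4096 16384 4194304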
Going by what I measured here:

https://matthias0tafelmeier.wordpress.com/2017/08/24/linux-tcp-window-scaling-quantification-rmemwmem/

it rather appears that making the advertised hard limit settable per socket,
perhaps organized per link or per receive-side realm (be it a subnet or a
flow group), could improve the performance picture.
Sure, it is not always possible to align both ends (receiver and sender), but
for the back-end world, and therefore as a general statement, it should hold
true.
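To make the idea concrete, a minimal user-space sketch; the helper name is
mine, and SO_RCVBUF is only an approximation of the per-socket advertised-limit
knob proposed above (a real per-subnet or per-flow-group policy would need
kernel support):

    /* Sketch only: cap one socket's receive buffer, and thereby the
     * window it can advertise, via the existing SO_RCVBUF option. */
    #include <stdio.h>
    #include <sys/socket.h>

    static int cap_rcvbuf(int fd, int bytes)
    {
            /* The kernel doubles this value for bookkeeping (see
             * socket(7)); setting SO_RCVBUF also disables receive-
             * buffer autotuning for this socket. */
            if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                           &bytes, sizeof(bytes)) < 0) {
                    perror("setsockopt(SO_RCVBUF)");
                    return -1;
            }
            return 0;
    }

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) {
                    perror("socket");
                    return 1;
            }
            /* ~256 KB: roughly the BDP of a 1 Gbps, 2 ms RTT path. */
            return cap_rcvbuf(fd, 256 * 1024) ? 1 : 0;
    }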
Let me know if I'm totally going astray.
--
Best regards
Matthias Tafelmeier
* Re: [Bloat] generic tcp - window scaling limits
From: Toke Høiland-Jørgensen @ 2017-11-04 22:05 UTC (permalink / raw)
To: Matthias Tafelmeier, bloat
Matthias Tafelmeier <matthias.tafelmeier@gmx.net> writes:
> Hello,
>
> Before bringing this to the Linux netdev list, and at the risk of
> restating something already known, I wanted to ask for this list's
> take first, since it is also bufferbloat-related when looked at from
> the link congestion/flow-smoothness point of view.
>
> I first surmised that introducing a form of DQL (dynamic queue limits,
> as BQL does for driver rings) on the TCP memory hard limit could
> improve perceived flow latency, though the current hard limit is
> perfectly adequate in conjunction with window advertisement/scaling.
>
> Going by what I measured here:
>
> https://matthias0tafelmeier.wordpress.com/2017/08/24/linux-tcp-window-scaling-quantification-rmemwmem/
Erm, what exactly are you trying to show here? As far as I can tell from
the last (1-flow) plot, you are saturating the link in all the tests
(and indeed the BDP for a 1 Gbps link with a 2 ms RTT is around 250 KB),
which means that the TCP flow is limited by cwnd and not rwnd; so I'm
not sure you are really testing what you say you are.
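For reference, the back-of-the-envelope calculation behind that figure:

    BDP = bandwidth x RTT = 1 Gbit/s x 0.002 s = 2 Mbit = 250 KB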
I'm not sure why the latency varies with the different tests, though;
you sure there's not something else varying? Have you tried running
Flent with the --socket-stats option and taking a look at the actual
window each flow is using?
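A minimal invocation of that kind might look like this (hostname, test
length, and file names are placeholders):

    flent tcp_upload -H <netserver-host> -l 60 --socket-stats \
          -t rwin-check -o rwin-check.png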
-Toke
* Re: [Bloat] generic tcp - window scaling limits
From: Matthias Tafelmeier @ 2017-11-06 21:02 UTC (permalink / raw)
To: Toke Høiland-Jørgensen, bloat
> Erm, what exactly are you trying to show here?
I wanted to isolate, in a generic way, the 'geometry' to expect when scaling
the TCP advertised memory limits. And I must say, it looks a little
askew. Might be worth a look at the data structures there. Parallelizing
socket buffers would be rather application-specific, I guess. Hm ...
> As far as I can tell from
> the last (1-flow) plot, you are saturating the link in all the tests
> (and indeed the BDP for a 1 Gbps link with a 2 ms RTT is around 250 KB),
> which means that the TCP flow is limited by cwnd and not rwnd; so I'm
> not sure you are really testing what you say you are.
That's correct. Apologies for that one: I was deceived by the latency
plots, where I thought I saw the (Reno ~ CUBIC) sawtooth, and the
virtualisation was adding a tbf instance on one of the links. All of that
perfectly explains why I wasn't influencing bandwidth at all.
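For anyone retracing this, such a hypervisor-added shaper shows up in the
qdisc listing; the device name and numbers here are purely illustrative:

    $ tc qdisc show dev eth0
    qdisc tbf 8001: root refcnt 2 rate 1Gbit burst 1600b lat 50.0ms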
> I'm not sure why the latency varies with the different tests, though;
> you sure there's not something else varying? Have you tried running
> Flent with the --socket-stats option and taking a look at the actual
> window each flow is using?
I reran/updated everything (including --socket-stats), and the result is
now much more comforting. The performance rifts perceived beforehand are
gone; no low-hanging fruit there. As for the per-link/per-peering tuning
I mentioned, that appears to be more a matter of application-specific,
syscall-based logic than something I would amend in the stack.
Thanks for having had a look.
--
Best regards
Matthias Tafelmeier