General list for discussing Bufferbloat
* [Bloat] generic tcp - window scaling limits
@ 2017-11-04 13:36 Matthias Tafelmeier
  2017-11-04 22:05 ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 3+ messages in thread
From: Matthias Tafelmeier @ 2017-11-04 13:36 UTC
  To: bloat



Hello,

Before bringing this forward to the Linux netdev list, and at the risk of
restating something, I wanted to hear the take on it here, since it is also
bufferbloat related when looked at from the link clogging/flow smoothness
point of view.

I first surmised that introducing a form of DQL at the TCP hard limit (as
BQL does for driver rings) could improve perceived flow latency, though
the current hard limit turns out to be perfectly adequate in conjunction
with window advertisement/scaling.
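
For clarity, by "hard limit" I mean the system-wide tcp_rmem/tcp_wmem
triples ("min default max", in bytes) that bound receive/send buffer
autotuning. A minimal sketch that just prints them (plain C, nothing
beyond the standard /proc sysctl files):

/* Sketch: print the global TCP buffer hard limits ("min default max",
 * in bytes) that receive/send buffer autotuning is bounded by. */
#include <stdio.h>

static void dump_limits(const char *path)
{
	FILE *f = fopen(path, "r");
	char line[128];

	if (!f) {
		perror(path);
		return;
	}
	if (fgets(line, sizeof(line), f))
		printf("%s: %s", path, line);
	fclose(f);
}

int main(void)
{
	dump_limits("/proc/sys/net/ipv4/tcp_rmem");
	dump_limits("/proc/sys/net/ipv4/tcp_wmem");
	return 0;
}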

Going by what I measured here:

https://matthias0tafelmeier.wordpress.com/2017/08/24/linux-tcp-window-scaling-quantification-rmemwmem/

it rather appears that making the advertised hard limit settable per
socket, perhaps organized by link/receive-side realm (be it a subnet or
a flow group), could improve the performance picture. Sure, it is not
always possible to align both ends (RCV/SND), but in the back-end world,
and therefore as a general statement, it should hold true.
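
To make the granularity I am after concrete against today's API: the
nearest existing per-socket knob is SO_RCVBUF, which caps the receive
buffer (and thereby what can be advertised) and disables autotuning for
that socket, but has to be set by each application individually rather
than per link or flow group. A sketch only, with an arbitrary 256 KiB
figure:

/* Sketch: per-socket cap on the receive buffer via SO_RCVBUF.
 * The kernel doubles the value internally for bookkeeping overhead
 * and stops autotuning the buffer for this socket. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int rcvbuf = 256 * 1024;	/* arbitrary example cap */
	socklen_t len = sizeof(rcvbuf);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
		perror("setsockopt(SO_RCVBUF)");
	if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
		printf("effective receive buffer: %d bytes\n", rcvbuf);
	close(fd);
	return 0;
}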

Let me know if I'm totally going astray.

-- 
Best regards

Matthias Tafelmeier




Thread overview: 3+ messages
2017-11-04 13:36 [Bloat] generic tcp - window scaling limits Matthias Tafelmeier
2017-11-04 22:05 ` Toke Høiland-Jørgensen
2017-11-06 21:02   ` Matthias Tafelmeier
