From: Matthias Tafelmeier <matthias.tafelmeier@gmx.net>
To: "bloat@lists.bufferbloat.net" <bloat@lists.bufferbloat.net>
Subject: [Bloat] qdisc traversing flows
Date: Sat, 27 Jan 2018 14:40:22 +0100
Message-ID: <ca9f3dbc-7c14-b743-635b-a88bcd329a25@gmx.net>
Hello,
since this ML has a strong qdisc-oriented spin, I'd like to share something
I did at the end of last year. I was playing a little with BCC/eBPF and
kernel flow interfacing [1] ... apologies for the code quality, it was only
prototyping ...
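For illustration, a minimal BCC sketch in the same spirit (this is not
fleutan itself; the probe point, the per-pid map layout and the 5 s sampling
window are just my assumptions here) could look like this:

#!/usr/bin/env python3
# Minimal BCC sketch (illustrative, not fleutan): account the bytes handed
# to tcp_sendmsg() per process, as a crude stand-in for per-flow volume
# tracking from user land.
from time import sleep
from bcc import BPF

bpf_text = r"""
#include <uapi/linux/ptrace.h>
#include <net/sock.h>

BPF_HASH(sent_bytes, u32, u64);   // pid -> bytes passed to tcp_sendmsg()

int kprobe__tcp_sendmsg(struct pt_regs *ctx, struct sock *sk,
                        struct msghdr *msg, size_t size)
{
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    sent_bytes.increment(pid, size);   // add this call's size to the pid's total
    return 0;
}
"""

b = BPF(text=bpf_text)
print("Tracing tcp_sendmsg() for 5s ...")
sleep(5)

# Dump the totals, largest senders first.
for pid, nbytes in sorted(b["sent_bytes"].items(),
                          key=lambda kv: kv[1].value, reverse=True):
    print("pid %-8d %d bytes" % (pid.value, nbytes.value))

Run as root; it prints per-pid byte counts over the window, roughly the kind
of raw material a tool like this would then aggregate per flow and per queue.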
I can imagine reviving the efforts around fleutan, since I perceive that many
useful aspects of flows and their associations are simply not covered yet, at
least not conveniently accessible from user land and not from a per-node
angle. That is especially the case for backend scenarios with thousands of
flows or more. Quite some acre to be plowed, if you ask me. Feel free to
prove me wrong. Mostly, I perceive that the quicker leaps come down to
mundane things like efficient, convenient digestibility of low-level kernel
interfacing output ... e.g. iproute2. A lot of vibrant dynamics have been
perceivable in this corner lately, though - I'm applauding [2][3] (see the
small pyroute2 sketch below the output). Until further progress, it might be
to the merit of some.
$ sudo ./fleutan flows -q -i 5

qdisc queues #> load (bytes) per qu
####################################################################################################
███████████████████████  0.3K   0
██████████████████████████████████████████████████████████████████████████  1.00K  1

---- flowing volumes per qu ##> 0
#######################################################################################################################################################
████      66.00  192.168.10.50#47956                           91.1.49.97#80
█████     78.00  ::#58                                         ::#0
██████    86.00  2003:62:4625:d1a4:a166:cf47:30a6:e612#51358   2a00:1450:4001:80b::200a#80
██████    86.00  2003:62:4625:d1a4:a166:cf47:30a6:e612#51360   2a00:1450:4001:80b::200a#80

---- flowing volumes per qu ##> 1
#######################################################################################################################################################
██████    86.00  2002:22:4625:d1a4:a166:cf47:30a6:e612#51360   2a00:1450:4001:80b::200a#80
███████  112.00  192.168.10.50#43660                           192.111.249.9#443
████████████  172.00  2003:62:4625:d1a4:a166:cf47:30a6:e612#55834  2a02:26f0:fc::5c7a:317c#80
██████████████████████████████████████████████████  710.00  2003:62:4625:d1a4:a166:cf47:30a6:e612#54292  2a00:1450:4001:819::200e#443
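As for the digestibility point and [2]: pyroute2 already makes qdisc state
quite approachable from Python. A minimal sketch, with "eth0" as a mere
placeholder for whatever interface you care about:

#!/usr/bin/env python3
# Minimal pyroute2 sketch: dump the qdiscs attached to one interface.
# "eth0" is only a placeholder; substitute your own interface name.
from pyroute2 import IPRoute

with IPRoute() as ipr:
    idx = ipr.link_lookup(ifname="eth0")[0]   # resolve interface index
    for q in ipr.get_qdiscs(idx):             # netlink dump of that dev's qdiscs
        kind = q.get_attr("TCA_KIND")
        handle = q["handle"]
        print("%-12s handle %x:%x" % (kind, handle >> 16, handle & 0xffff))

Something in that vein is roughly what I mean by convenient user-land access
to the low-level kernel interfacing output.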
[1] https://github.com/cherusk/fleutan
[2] https://github.com/svinota/pyroute2
[3] https://git.kernel.org/pub/scm/linux/kernel/git/dborkman/iproute2.git/commit/?id=43bc20ae736c943a7202fef07104eb1b5800b7f8
--
Best regards
Matthias Tafelmeier