<div dir="ltr"><div>Thanks again for this link. Super interesting.</div><div><br></div><div>Regarding monitoring socket performance in Kubernetes clusters, I've actually been working on deploying the xTCP socket monitoring tool into Kubernetes clusters. The intention would be to stream the tcp_diag data out of all the PoDs on a regular basis.<br></div><div><br></div><div>The challenge is that, ideally, a single process could open a netlink socket into every PoD on the Kubernetes node. As far as I understand, this is not possible, because a process can only live in a single namespace at any given time.</div><div><br></div><div>e.g. You can't do this:<br></div><div><br></div><div><img src="cid:ii_m4ddvnr40" alt="image.png" width="578" height="322"></div><div><br></div><div>The simple solution would be to run many instances of xTCP as a "sidecar" in each PoD, like this:</div><div><br></div><div><img src="cid:ii_m4ddwm3x1" alt="image.png" width="578" height="322"></div><div><br></div><div>This isn't great, because xTCP would be duplicated many times, wasting RAM, and you would have many Kafka sockets streaming the socket data out of each PoD.<br></div><div><br></div><div>An alternative I was thinking of would be to have a small unix domain socket (UDS) to netlink proxy in each PoD. Over the UDS, the xTCP daemonset (single instance per node) could read and write the tcp_diag data.</div><div><br></div><div>(e.g. I was thinking of a little Rust binary that would open a UDS socket and a netlink socket, and then copy from one to the other.)<br></div><div><br></div><div><br></div><div><img src="cid:ii_m4ddxpo32" alt="image.png" width="578" height="322"><br><br></div><div><br></div><div>I don't really know if this is a good idea, or if I'm missing some other way to extract the socket data from many PoDs. Happy to hear ideas, please! 
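For concreteness, here is a rough sketch of the kind of message the proxy would shuttle: the sock_diag request that asks the kernel to dump all TCP sockets. The node-level daemonset would write these bytes over the UDS, and the in-PoD proxy would forward them verbatim to its netlink socket (and relay the replies back). The constants and struct sizes come from the Linux UAPI headers (linux/netlink.h, linux/sock_diag.h, linux/inet_diag.h); this is just an illustrative sketch, not xTCP's actual wire handling.

```rust
// Sketch: build the netlink request that asks sock_diag to dump all TCP
// sockets (what `ss -t` sends). Constants are from the Linux UAPI headers.
const NLMSG_HDRLEN: usize = 16; // sizeof(struct nlmsghdr)
const INET_DIAG_REQ_V2_LEN: usize = 56; // sizeof(struct inet_diag_req_v2)
const SOCK_DIAG_BY_FAMILY: u16 = 20;
const NLM_F_REQUEST: u16 = 0x0001;
const NLM_F_DUMP: u16 = 0x0300; // NLM_F_ROOT | NLM_F_MATCH
const AF_INET: u8 = 2;
const IPPROTO_TCP: u8 = 6;

/// Serialize struct nlmsghdr + struct inet_diag_req_v2 as one byte buffer.
/// This is the request the proxy would copy from the UDS to netlink.
fn build_tcp_diag_dump_request() -> Vec<u8> {
    let total = (NLMSG_HDRLEN + INET_DIAG_REQ_V2_LEN) as u32;
    let mut msg = Vec::with_capacity(total as usize);

    // struct nlmsghdr { len, type, flags, seq, pid } -- host byte order.
    msg.extend_from_slice(&total.to_ne_bytes());
    msg.extend_from_slice(&SOCK_DIAG_BY_FAMILY.to_ne_bytes());
    msg.extend_from_slice(&(NLM_F_REQUEST | NLM_F_DUMP).to_ne_bytes());
    msg.extend_from_slice(&1u32.to_ne_bytes()); // nlmsg_seq
    msg.extend_from_slice(&0u32.to_ne_bytes()); // nlmsg_pid (kernel fills in)

    // struct inet_diag_req_v2
    msg.push(AF_INET); // sdiag_family
    msg.push(IPPROTO_TCP); // sdiag_protocol
    msg.push(0); // idiag_ext (no extensions requested in this sketch)
    msg.push(0); // pad
    msg.extend_from_slice(&u32::MAX.to_ne_bytes()); // idiag_states: all states
    // struct inet_diag_sockid, zeroed = wildcard (match every socket)
    msg.extend_from_slice(&[0u8; 48]);

    assert_eq!(msg.len(), total as usize);
    msg
}

fn main() {
    let req = build_tcp_diag_dump_request();
    println!("tcp_diag dump request: {} bytes", req.len());
}
```

The replies (one inet_diag_msg per socket) are just netlink byte streams too, which is why a dumb copy loop between the UDS and the netlink socket could plausibly work without the proxy parsing anything.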
:)<br></div><div><br></div><div>Full xTCP slides, including these "xTCP for Kubernetes" slides, here:<br></div><div><a href="https://docs.google.com/presentation/d/11rixKNfIBCdofUpPL2wOuiWJXa40x4I0A1wuTMMv4Uo/edit#slide=id.g31cb19b2f14_0_87">https://docs.google.com/presentation/d/11rixKNfIBCdofUpPL2wOuiWJXa40x4I0A1wuTMMv4Uo/edit#slide=id.g31cb19b2f14_0_87</a></div><div><br></div><div>Thanks,</div><div>Dave Seddon<br></div></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Tue, Nov 19, 2024 at 8:07 AM Dave Taht via Cake <<a href="mailto:cake@lists.bufferbloat.net">cake@lists.bufferbloat.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><a href="https://github.com/cilium/cilium/issues/29083#issuecomment-2485142294" rel="noreferrer" target="_blank">https://github.com/cilium/cilium/issues/29083#issuecomment-2485142294</a><br>
<br>
-- <br>
Dave Täht CSO, LibreQos<br>
_______________________________________________<br>
Cake mailing list<br>
<a href="mailto:Cake@lists.bufferbloat.net" target="_blank">Cake@lists.bufferbloat.net</a><br>
<a href="https://lists.bufferbloat.net/listinfo/cake" rel="noreferrer" target="_blank">https://lists.bufferbloat.net/listinfo/cake</a><br>
</blockquote></div><div><br clear="all"></div><br><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>Regards,<br></div>Dave Seddon<br>+1 415 857 5102<br></div></div></div></div></div></div>