Hi All,
Happy New Year! My apologies up front if this email is not of interest and hence spam to you.
I was hoping to get some technical feedback from industry networking and queuing experts with respect to latency fault detection and forwarding path decisions.
The iperf 2 flows code produces dendrograms (Ward clustering) over e2e latencies. (The latencies can be per video frame, per write-to-read, per packet, etc.) The metric used to populate the distance matrices is the Kolmogorov-Smirnov two-sample statistic, which supports comparing distributions non-parametrically.
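For context, here's a minimal sketch of that pipeline in Python/SciPy. The flow names and latency distributions below are made up for illustration; this is not iperf 2's actual code. (Strictly, Ward linkage assumes Euclidean distances, so feeding it a KS-statistic matrix is a heuristic grouping rather than a rigorous one.)

```python
# Build a distance matrix from pairwise two-sample KS statistics over
# per-flow e2e latency samples, then Ward-link it into a dendrogram.
import numpy as np
from scipy.stats import ks_2samp
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
# hypothetical per-flow latency samples (ms): two fast flows, two slow ones
flows = {
    "flow_a": rng.normal(5.0, 0.5, 500),
    "flow_b": rng.normal(5.1, 0.5, 500),
    "flow_c": rng.normal(20.0, 2.0, 500),
    "flow_d": rng.normal(21.0, 2.0, 500),
}
names = list(flows)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        # KS statistic = sup distance between the two empirical CDFs
        stat, _ = ks_2samp(flows[names[i]], flows[names[j]])
        dist[i, j] = dist[j, i] = stat

# condensed distance matrix -> Ward linkage -> cut into two clusters
Z = linkage(squareform(dist), method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(names, labels)))
```

With these synthetic inputs the two fast flows land in one cluster and the two slow flows in the other, which is the grouping the dendrogram would show.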
WiFi 6 plans to use MU (multi-user) transmissions to reduce latency, since one TXOP or scheduled trigger can transmit multiple AP/STA packets, i.e. kind of a form of multicast (it just doesn't use L2 MCAST DAs but rather does it in the RF domain.)
I'm wondering about real-time detection of "latency timing violations," and possibly using ML to identify better low-latency path trees (LLPT), analogous to IP multicast RPT and SPT. The idea is to cluster the graphs in a way that packets get probabilistically bunched into microbursts matched to the final AP's MU groups (since it's assumed the last hop is WiFi.)
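For the real-time violation-detection piece, one minimal sketch would be a sliding-window two-sample KS test of recent latency samples against a baseline distribution. The detector, window size, and threshold below are my assumptions for illustration, not anything that exists in iperf 2:

```python
# Flag a "latency timing violation" when a sliding window of recent
# samples diverges from a baseline distribution per the two-sample KS test.
from collections import deque
import numpy as np
from scipy.stats import ks_2samp

def violation_detector(baseline, window=200, alpha=0.01):
    buf = deque(maxlen=window)
    def feed(sample):
        buf.append(sample)
        if len(buf) < window:
            return False  # not enough samples yet
        _, p = ks_2samp(np.fromiter(buf, float), baseline)
        return p < alpha  # distribution shift => timing violation
    return feed

rng = np.random.default_rng(0)
baseline = rng.normal(5.0, 0.5, 2000)   # nominal latency (ms)
detect = violation_detector(baseline)
warmup = [detect(x) for x in rng.normal(5.0, 0.5, 199)]  # buffer still filling
spike = [detect(x) for x in rng.normal(15.0, 0.5, 400)]  # sustained latency spike
print(any(warmup), spike[-1])
```

A real implementation would need to bound the per-sample cost (the KS test is O(n log n) per window) and pick alpha to control the false-alarm rate, but the shape of the detector would be roughly this.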
Does this seem technically reasonable and, if so, is there a reasonable return on the engineering? I'd like to prototype it with iperf 2 and python (pyflows) on some test rigs, but it's a lot of work and I don't want to take it on if the return on engineering (ROE) is near zero.
Thanks in advance for your thoughts and opinions,
Bob