[Make-wifi-fast] QoS and test setups
bob.mcmahon at broadcom.com
Sun May 8 15:07:03 EDT 2016
On the statistician front - I've been learning from Shashi Sathyanarayana
of Numeric Insight <http://www.numericinsight.com/Home.html>, with the
intention of applying machine learning techniques (PCA
<https://en.wikipedia.org/wiki/Principal_component_analysis>, etc.) to
both network traffic and wi-fi traffic.
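To make the PCA idea concrete, here's a toy sketch (hypothetical per-interval features, plain numpy, nothing shipping): an anomalous test interval separates out along the first principal component.

```python
import numpy as np

# Hypothetical per-interval feature vectors for one test run:
# [throughput_mbps, mean_latency_ms, jitter_ms, loss_pct]
samples = np.array([
    [230.1, 2.3, 0.5, 4.3],
    [228.7, 2.4, 0.6, 4.1],
    [231.5, 2.2, 0.5, 4.4],
    [120.3, 8.9, 2.1, 9.8],   # an anomalous interval
    [229.9, 2.3, 0.5, 4.2],
])

# Center the data, then use the SVD to get the principal components.
centered = samples - samples.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Project each interval onto the first principal component; outliers
# (like the anomalous interval above) land far from the main cluster.
pc1_scores = centered @ Vt[0]
explained = S**2 / np.sum(S**2)  # fraction of variance per component
```

The same projection works on much wider feature vectors (per-subcarrier stats, per-queue latencies), which is where PCA starts to pay off.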
*Shashi Sathyanarayana, Ph.D., the founder of Numeric Insight, Inc., has
spent more than a decade accumulating expertise in scientific programming,
algorithm development and teaching.*
Things are still in the early stages of prototyping, so if there are
specific needs not mentioned in the current threads, it would be
interesting to know them.
(A current project is clustering rig results per the frequency responses
and spatial-stream eigenmodes, i.e. learning the phy characteristics of
multiple test rigs, which should allow for scaling. Though this has to be
done in controlled PHY environments.)
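A rough sketch of the clustering side, with made-up per-rig PHY fingerprints and a tiny hand-rolled k-means (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-rig PHY fingerprints: e.g. a couple of numbers
# summarizing frequency response and dominant-eigenmode strength.
rig_features = np.vstack([
    rng.normal([0.0, 0.0], 0.1, size=(4, 2)),   # rigs sharing one chamber
    rng.normal([3.0, 1.5], 0.1, size=(4, 2)),   # rigs in a different chamber
])

def kmeans(x, k=2, iters=20):
    """Tiny k-means: group rigs so results can be compared per-cluster."""
    centers = x[[0, -1]].copy()   # seed with one point from each extreme
    for _ in range(iters):
        # Assign each rig to its nearest center, then recompute centers.
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels

labels = kmeans(rig_features)
```

With real rigs the features would come from calibration sweeps rather than a random generator, but the grouping step is the same.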
With respect to per-UDP-packet latency, the end/end measurement is
already in 2.0.8. Realtime scheduling and kernel RX timestamping are used
if the host supports them (well, except for Mac OS X, where that part of
the code is still not done).
Something others might find helpful is the ability to insert microsecond
timestamps inside a UDP payload as packets move through a subsystem. It
might be a good idea to standardize this if it's of interest to the larger
group. Here's an example where there is the end/end timestamp and five
contributing timestamps. The client inserts a tag on write to trigger
timestamp insertion, and the server produces the subgrouped
mean/min/max/stdev and a PDF per report. A higher-level tool can then
plot them for either human visualization or machine analysis.
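As a sketch of what such an in-payload scheme could look like (this is an illustrative layout and slot convention, not iperf's actual wire format):

```python
import struct
import time
import statistics

SLOTS = 6  # end/end plus five contributing stages (e.g. DHD, FW1..FW4)

def make_payload(seq, size=1470):
    """Client side: a sequence number followed by SLOTS microsecond
    timestamp slots; slot 0 is stamped at send time, the rest zeroed."""
    now_us = time.monotonic_ns() // 1000
    header = struct.pack("!Iq" + "q" * (SLOTS - 1), seq, now_us,
                         *([0] * (SLOTS - 1)))
    return header + b"\x00" * (size - len(header))

def stamp(payload, slot):
    """A subsystem on the path overwrites its slot with its own clock
    (which only makes sense if the clocks are shared or synchronized)."""
    now_us = time.monotonic_ns() // 1000
    off = 4 + 8 * slot
    return payload[:off] + struct.pack("!q", now_us) + payload[off + 8:]

def read_stamps(payload):
    seq, *stamps = struct.unpack_from("!Iq" + "q" * (SLOTS - 1), payload)
    return seq, stamps

def summarize(deltas_ms):
    """Server side: mean/min/max/stdev per subgroup, as in the report."""
    return (statistics.fmean(deltas_ms), min(deltas_ms), max(deltas_ms),
            statistics.stdev(deltas_ms))

payload = stamp(make_payload(7), 1)   # client stamps slot 0, stage 1 stamps slot 1
seq, stamps = read_stamps(payload)
```

The server would then difference adjacent slots to get per-stage deltas and feed each subgroup through something like summarize().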
11:12:35.158 HNDLR UDP-rx [ 3] 0.10-0.15 sec 1467060 Bytes
234729600 bits/sec 1.972 ms 45/ 1043 (4.3%) 2.322/ 1.224/ 3.664/
0.534 DHD: 2.317/ 1.219/ 3.656/ 0.534 FW1: 1.900/ 1.209/ 2.994/ 0.310
FW2: 4.772/ 4.109/ 8.120/ 0.304 FW3:32.626/ 0.169/64.681/18.486 FW4:
1.389/ 0.946/ 2.075/ 0.215 ms 19787 pps (4357Bif,file8)
11:12:35.161 HNDLR UDP-rx [ 3] 0.10-0.15 sec
PDF:ToT(bins/size=10k/10us,10/90=170/308)=123:1 124:1 127:2 128:1
129:1 130:1 132:6 133:2 134:2 135:2 136:2 137:6 138:3 139:2 140:1
141:3 142:6 143:3 144:4 146:6 147:5 148:4 149:3 150:2 151:6 152:5
153:5 154:2 155:9 156:5 157:8 158:5 160:4 161:6 162:4 163:3 164:2
165:4 166:6 167:4 168:1 169:4 170:5 171:4 172:4 173:4 174:3 175:4
176:5 177:3 178:5 179:2 180:6 181:6 182:3 183:5 184:5 185:6 186:4
187:1 188:5 189:4 190:5 191:6 192:2 193:6 194:6 195:7 196:5 197:4
198:5 199:5 200:9 201:2 202:6 203:4 204:5 205:9 206:4 207:8 208:5
209:5 210:6 211:5 212:7 213:6 214:8 215:4 216:8 217:5 218:8 219:6
220:4 221:8 222:4 223:6 224:8 225:6 226:7 227:3 228:10 229:3 230:7
231:6 232:7 233:7 234:3 235:12 236:5 237:6 238:7 239:6 240:9 241:4
242:12 243:3 244:12 245:4 246:4 247:7 248:4 249:6 250:6 251:10 252:6
253:4 254:6 255:6 256:7 257:7 258:7 259:8 260:3 261:6 262:3 263:8
264:3 265:10 266:6 267:4 268:9 269:3 270:8 271:6 272:10 273:4 274:3
275:10 276:3 277:7 278:4 279:6 280:9 281:4 282:6 283:3 284:7 285:5
286:5 287:7 288:4 289:7 290:2 291:6 292:5 293:4 294:7 295:5 296:6
297:4 298:4 299:4 300:4 301:4 302:2 303:5 304:4 305:5 306:3 307:5
308:9 309:3 310:5 311:2 312:2 313:3 314:3 315:3 316:1 317:3 318:3
319:1 320:5 321:2 322:5 324:1 325:1 326:3 327:2 329:2 331:1 332:2
333:1 334:1 336:2 338:1 341:2 343:2 345:1 347:1 348:1 350:1 353:1
355:1 357:1 359:1 362:1 364:1 367:1 (4357Bif,file8)
[snipped secondary histograms]
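The PDF lines above are simple bin:count pairs over 10 us bins, so a small parser can recover percentiles from them; a sketch:

```python
def parse_pdf(text):
    """Parse 'bin:count' pairs from a PDF report line into a dict."""
    hist = {}
    for tok in text.split():
        b, c = tok.split(":")
        hist[int(b)] = int(c)
    return hist

def percentile_bin(hist, pct):
    """Walk bins in order until the cumulative count crosses pct%."""
    total = sum(hist.values())
    target = pct / 100.0 * total
    cum = 0
    for b in sorted(hist):
        cum += hist[b]
        if cum >= target:
            return b
    return max(hist)

# A made-up miniature histogram in the same format:
hist = parse_pdf("123:1 132:6 170:5 308:9 367:1")
p10_us = percentile_bin(hist, 10) * 10   # 10 us bins -> microseconds
```

This is how the 10/90=170/308 annotation in the report can be reproduced (or replaced by arbitrary percentiles) by a higher-level tool.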
On Sat, May 7, 2016 at 2:49 PM, Dave Taht <dave.taht at gmail.com> wrote:
> On Sat, May 7, 2016 at 9:50 AM, Dave Taht <dave.taht at gmail.com> wrote:
> > On Thu, May 5, 2016 at 7:08 PM, Aaron Wood <woody77 at gmail.com> wrote:
> >> I saw Dave's tests on WMM vs. without, and started thinking about test
> >> setups for systems when QoS is in use (using classification, not just
> >> SQM/AQM).
> >> There are a LOT of assumptions made when QoS systems based on marking
> >> are used:
> >> - That traffic X can starve others
> >> - That traffic X is more/most important
> >> Our test tools are not particularly good at anything other than loading
> >> the network (UDP or TCP). At least TCP has built-in congestion control.
> >> I've seen many UDP (or even raw IP) test setups that didn't look
> >> like "real" traffic.
> > I sat back on this in the hope that someone else would jump forward...
> > but you asked...
> > I ran across this distribution today:
> > https://en.wikipedia.org/wiki/Rayleigh_distribution which looks closer
> > to reflecting the latency/bandwidth problem we're always looking at.
> > I found this via this thread:
> > https://news.ycombinator.com/item?id=11644845 which was fascinating.
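(A quick numpy sketch of why Rayleigh is attractive for latency-like data - non-negative and right-skewed - and how its scale parameter can be estimated from samples:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0  # scale parameter, e.g. in milliseconds

# Rayleigh-distributed "latency" samples: never negative, skewed right,
# which matches observed latency data better than a Gaussian does.
samples = rng.rayleigh(scale=sigma, size=100_000)

# Maximum-likelihood estimate of the scale from the data:
sigma_hat = np.sqrt(np.sum(samples**2) / (2 * len(samples)))

# For Rayleigh, mean = sigma * sqrt(pi/2); the tail is heavier than a
# "mean +/- stdev" Gaussian summary would imply.
theoretical_mean = sigma * np.sqrt(np.pi / 2)
```

Fitting real latency traces this way, instead of assuming normality, is exactly the kind of single-number reduction that stays honest about the tail.)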
> > I have to admit I have learnt most of my knowledge of statistics
> > through osmosis and by looking at (largely realtime) data that does
> > not yield to "normal" distributions like gaussian. So, rather than
> > coming up with useful methods to reduce stuff to single numbers, I
> > rely on curves and graphs and being always painfully aware of how
> > sampling intervals can smooth out real spikes and problems, and try to
> > convey intuition... and the wifi industry is wedded to charts of "rate
> > over range for tcp and udp". Getting to rate+latency over range for
> > those variables would be nice to see happen in their test tools....
> > There is another distribution that andrew was very hot on a few years
> > ago: https://en.wikipedia.org/wiki/Tracy%E2%80%93Widom_distribution
> > I thought something like it could be used to look at basic problems in
> > factoring in (or factoring out) header overheads, for example.
> > It would be good if we had a good statistician(s) "on staff"... or
> > there must be a whole set of mathematician's mailing lists somewhere,
> > all aching to dive into a more real-world problem?
> >> I know Dave has wanted an isochronous traffic tool that could simulate
> >> traffic (with in-band one-way latency/jitter/loss measurements).
> > d-itg, for which flent has some support, "does that", but it's a
> > pita to set up and not exactly safe to use over the open internet.
> > *Yes*, the fact that the current rrul test suite and most others in
> > flent do not have an isochronous baseline measurement - and use an
> > rtt-bound measurement instead - leads to very misleading comparison
> > results when the measurement traffic gets a huge latency reduction.
> > Measurement traffic thus becomes larger - and the corresponding
> > observed bandwidth in most flent tests drops, as we are only measuring
> > the bulk flows, not the measurement traffic, nor the acks.
> > Using ping-like traffic was "good enough" when we started, when we
> > were cutting latencies by orders of magnitude on a regular basis, but, for
> > example, I just showed a long term 5x latency reduction for stock wifi
> > vs michal's patches at 100mbit - from 100ms to 20ms or so, and I have
> > no idea how the corresponding bandwidth loss is correlated. In a
> > couple tests the measurement flows also drop into another wifi hw
> > queue entirely (and I'm pretty convinced that we should always fold
> > stuff into the nearest queue when we're busy, no matter the marking)
> > Anyway, I'm digesting a ton of the short term results we got from the
> > last week of testing michal's patches...
> > (see the cerowrt blog github repo and compare the stock vs fqmac35
> > results on the short tests). I *think* that most of the difference in
> > performance is due to noise on the test (the 120ms burps downward in
> > bandwidth caused by something else) , and some of the rest can be
> > accounted for by more measurement traffic, and probably all the rest
> > due to dql taking too long to ramp up.
> > The long term result of the fq_codel wifi patch at the mac80211 layer
> > was *better* all round, bandwidth stayed the same, latency and jitter
> > got tons better. (if I figure out what was causing the burps - they
> > don't happen on OSX, just linux) - anyway comparing the baseline
> > patches to the patch here on the second plot...
> > http://blog.cerowrt.org/post/predictive_codeling/
> > Lovely stuff.
> > But the short term results were noisy and the 10s of seconds long dql
> > ramp was visible on some of those tests (sorry, no link for those yet,
> > it was in one of michal's mails)
> > Also (in flent) I increasingly dislike sampling at 200ms intervals,
> > and would prefer to be getting insights at 10-20ms intervals. Or
> > lower! 1ms would be *perfect*. :) I can get --step-size in flent down
> > to about 40ms before starting to see things like fping "get behind" -
> > fixing that would require changing fping to use fdtimers to fire stuff
> > off more precisely than it does, or finding/writing another ping tool.
> > Linux fdtimers are *amazing*; we use those in tc_iterate.c.
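(For what it's worth, the drift-free idea behind the fdtimer approach - firing on absolute deadlines rather than sleeping for relative intervals - can be sketched portably; this is an illustration of the scheduling discipline, not of timerfd itself:

```python
import time

def isochronous_ticks(interval_ms, count):
    """Fire at absolute deadlines on the monotonic clock, like a
    timerfd-based loop: late wakeups do not accumulate into drift."""
    interval_ns = interval_ms * 1_000_000
    start = time.monotonic_ns()
    for i in range(1, count + 1):
        deadline = start + i * interval_ns   # absolute, not relative
        while (now := time.monotonic_ns()) < deadline:
            # Sleep in small slices so we land close to the deadline.
            time.sleep(min((deadline - now) / 1e9, 0.001))
        yield time.monotonic_ns() - start

ticks = list(isochronous_ticks(10, 5))   # 10 ms probe interval, 5 probes
```

A relative-sleep loop would add its per-iteration overshoot to every subsequent probe; the absolute-deadline version keeps the k-th probe anchored to start + k*interval.)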
> > Only way I can think about getting down below 5ms would be to have
> > better tools for looking at packet captures. I have not had much
> > chance to look at "teacup" as yet. tcptrace -G + xplot.org and
> > wireshark's tools are as far as I go. Any other tools for taking apart
> > captures out there? In particular, aircaps of wifi traffic,
> > retransmits, and rate changes have been giving me enough of a headache
> > to want to sit down and tear them apart with wireshark's lua stuff...
> > or something.
> > It would be nice to measure latencies in bulk flows, directly.
> > ...
> > I've long figured if we ever got to the basic isochronous test on the
> > 10ms interval I originally specified, that we'd either revise the rrul
> > related tests to suit (the rrul2016 "standard"), or create a new set
> > called "crrul" - "correct rrul".
> > We have a few isochronous tests to choose from. There are d-itg tests
> > in the suite that emulate voip fairly well. The show-stopper thus far
> > has been that doing things like that (or iperf/netperfs udp flooding
> > tests) are unsafe to use on the general internet, and I wanted some
> > form of test that negotiated a 3 way handshake, at least, and also
> > enforced a time limit on how long it sent traffic.
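(A minimal sketch of such a handshake-gated, time-limited UDP test - hypothetical token protocol over loopback, not a proposal for the actual wire format:

```python
import socket
import threading
import time

def responder(sock):
    """Receiver side: echo the SYN token to authorize traffic, then
    count payload packets until the sender signals FIN."""
    count = 0
    while True:
        data, addr = sock.recvfrom(2048)
        if data.startswith(b"SYN:"):
            sock.sendto(b"ACK:" + data[4:], addr)   # handshake step 2
        elif data == b"FIN":
            return count
        else:
            count += 1

def run_test(server_addr, duration_s=0.2, interval_s=0.01):
    """Sender side: no traffic until the receiver echoes the token,
    and a hard deadline on how long traffic is sent."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(2.0)
    token = str(time.monotonic_ns()).encode()
    s.sendto(b"SYN:" + token, server_addr)          # handshake step 1
    reply, _ = s.recvfrom(2048)
    if reply != b"ACK:" + token:
        raise RuntimeError("receiver did not authorize the test")
    deadline = time.monotonic() + duration_s        # enforced time limit
    sent = 0
    while time.monotonic() < deadline:
        s.sendto(b"\x00" * 64, server_addr)         # handshake step 3+
        sent += 1
        time.sleep(interval_s)
    s.sendto(b"FIN", server_addr)
    return sent

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
srv.settimeout(5.0)
result = {}
t = threading.Thread(target=lambda: result.setdefault("rx", responder(srv)))
t.start()
sent = run_test(srv.getsockname(), duration_s=0.2)
t.join()
```

The point is only the shape: no flood without an explicit grant, and a sender that stops itself even if the receiver goes away.)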
> > That said, to heck with it for internal tests as we are doing now.
> > We have a few simpler tools than d-itg that could be built upon. Avery
> > has the isoping tests which I'd forked mildly at one point, but never
> > got around to doing much with. There are also things like what I was
> > calling "twd" that I gave up on, and there's a very precise
> "owamp" thing, part of the internet2 project. I had used it
> for a while (had a parser for it, even, because I preferred the raw
> data). I had basically forgotten about it because it is not packaged
> up for debian, and
> has a few issues with 64 bit platforms that I meant to poke into.
> I did enjoy trying to get it running a few minutes ago.
> root at dancer:/usr/local/etc# owampd
> owampd: WARNING: No limits specified.
> owampd: Running owampd as root is folly!
> owampd: Use the -U option! (or allow root with the -f option)
> I think I'd (or stephen walker) had packaged it up for cerowrt, but as
> we never got gps's into the hands of enough users (and my testbeds
> were largely divorced from the internet) I let the idea slide. Toke's
> testbed has fully synced time.
> now that gpses are even cheaper (like a raspberry pi hat), hmm....
> Grump, it doesn't compile on aarch64....
> There is something of a great divide between us and the perfsonar project.
> They use iperf, not netperf, they work on fedora, not ubuntu...
> and last I recall they were still stuck at linux 2.6.32 or somesuch.
> anyone booted that up lately?
> >> What other tools do we need, for replicating traffic types that match
> >> how these QoS types in wifi are meant to be used? I think we're doing
> >> an excellent job of showing how they can be abused. Abusing is pretty
> >> easy, at this point (rrul, iPerf, etc).
> > :) Solving for abuse is useful, I think, also.
> > Solving for real traffic types like HAS and videoconferencing would
> > be good, too.
> > Having a steady, non-greedy flow (like a basic music or video stream)
> > test would be good.
> > I'd love to have a 3-5 flow HAS-like test to fold into the others.
> > I was unaware that iperf3 can output json, am not sure what else can
> > be done with it.
> > We had tried to use the web10g stuff at one point but the kernel
> > patches were too invasive. A lot of what was in web10g has probably
> > made it into the kernel by now, perhaps we can start pulling out more
> > complete stats with things like netstat -ss or TCP_INFO?
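(On TCP_INFO: on Linux you can pull the raw struct straight off a live socket. The field offsets below assume the long-stable start of struct tcp_info from <linux/tcp.h>; treat them as an illustration rather than a portable parser:

```python
import socket
import struct

# Make a loopback TCP connection so there is a live socket to query.
lst = socket.socket()
lst.bind(("127.0.0.1", 0))
lst.listen(1)
cli = socket.create_connection(lst.getsockname())
srv_conn, _ = lst.accept()

# Fetch the kernel's per-connection stats (Linux-only).
raw = cli.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)

# Offset 0 is tcpi_state (u8); tcpi_rtt/tcpi_rttvar are u32s that have
# historically sat at byte offset 68 in native byte order.
tcpi_state = raw[0]                                    # 1 == ESTABLISHED
tcpi_rtt, tcpi_rttvar = struct.unpack_from("=II", raw, 68)  # microseconds
```

A periodic TCP_INFO poll on the bulk flows would give in-band rtt/rttvar without any extra measurement traffic, which gets at the "measure latency in bulk flows directly" wish above.)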
> > Incidentally - I don't trust d-itg very far. Could use fdtimers, could
> > use realtime privs.
> >> -Aaron Wood
> >> _______________________________________________
> >> Make-wifi-fast mailing list
> >> Make-wifi-fast at lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
> > --
> > Dave Täht
> > Let's go make home routers and wifi faster! With better software!
> > http://blog.cerowrt.org