* [Make-wifi-fast] QoS and test setups
From: Aaron Wood @ 2016-05-06 2:08 UTC
To: make-wifi-fast

I saw Dave's tests on WMM vs. without, and started thinking about test setups for systems where QoS is in use (using classification, not just SQM/AQM).

There are a LOT of assumptions made when QoS systems based on marked packets are used:

- That traffic X can starve others
- That traffic X is more/most important

Our test tools are not particularly good at anything other than hammering the network (UDP or TCP). At least TCP has built-in congestion control. I've seen many UDP (or even raw IP) test setups that didn't look anything like "real" traffic.

I know Dave has wanted an isochronous traffic tool that could simulate voip traffic (with in-band one-way latency/jitter/loss measurement capabilities).

What other tools do we need for replicating the traffic types that these QoS classes in wifi are meant to carry? I think we're doing an excellent job of showing how they can be abused. Abusing them is pretty easy at this point (rrul, iPerf, etc.).
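For concreteness, the marking in question is just the DSCP bits in the IP header, which any test tool can set with a single socket option; a minimal sketch (DSCP EF shown; the wifi mapping noted in the comment is the usual Linux behavior):

    import socket

    # The "marking" these QoS systems key on is the DSCP field; a test
    # tool opts a flow into an access category with one socket option.
    # DSCP EF (46) is the conventional choice for voip-style probes; on
    # wifi, the top three bits map to the 802.1d priority that WMM uses
    # to pick the access category.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)  # EF -> TOS 0xb8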
-Aaron Wood

* Re: [Make-wifi-fast] QoS and test setups
From: Bob McMahon @ 2016-05-06 2:24 UTC
To: Aaron Wood; Cc: make-wifi-fast

iperf 2.0.8 <https://sourceforge.net/projects/iperf2/> reports microsecond end/end latencies (mean/min/max/stdev) per report interval; use -e for enhanced reports. Histograms/PDFs are in the prototype stage (see below). Isochronous traffic is also being prototyped. (The server will merely indicate a jitter-buffer overrun/underrun to flag a problem.) All of this does require synchronized clocks. A quality oscillator to act as a PTP grandmaster can be found on the open market using various time sources, or an OCXO can run in free-wheeling mode. I use a GPS-disciplined oscillator from Spectracom and it's been working great.

[ 3] 0.05-0.10 sec 1558200 Bytes 249312000 bits/sec 0.157 ms 0/ 1060 (0%) *1.985/ 1.098/ 3.219/ 0.473 ms* 21280 pps

The line above gives the mean/min/max/stdev computed over every packet in the interval. Per the central limit theorem, though, those summary statistics wash out the underlying distribution, and sometimes the underlying distribution is what's needed. Hence a proposal to support histograms, something like:

[ 3] 0.05-0.10 sec PDF(bins/binsize=10k/10us, 10/90=137/261)=110:1 111:2 113:1 114:4 115:2 116:4 118:2 119:6 120:3 121:6 122:2 123:1 124:5 125:6 126:7 127:3 128:4 129:3 130:9 131:4 132:7 133:6 134:5 135:7 136:1 137:6 138:10 139:9 140:9 141:8 142:9 143:9 144:14 145:7 146:6 147:6 148:10 149:7 150:7 151:4 152:12 153:8 154:6 155:7 156:10 157:5 158:9 159:4 160:2 161:7 162:6 163:9 164:3 165:5 166:11 167:5 168:8 169:4 170:6 171:8 172:8 173:4 174:5 175:11 176:6 177:8 178:2 179:6 180:10 181:10 182:7 183:4 184:7 185:9 186:11 187:5 188:4 189:8 190:7 191:8 192:2 193:4 194:10 195:12 196:7 197:3 198:4 199:10 200:8 201:5 202:3 203:9 204:8 205:8 206:2 207:5 208:10 209:9 210:6 211:4 212:5 213:13 214:9 215:5 216:5 217:10 218:10 219:4 220:4 221:6 222:15 223:6 224:5 225:5 226:5 227:5 228:13 229:5 230:5 231:9 232:7 233:7 234:1 235:9 236:6 237:7 238:8 239:4 240:2 241:8 242:11 243:4 244:3 245:5 246:7 247:7 248:3 249:3 250:11 251:8 252:5 253:3 254:2 255:12 256:7 257:7 258:2 259:8 260:9 261:9 262:3 263:3 264:8 265:7 266:4 267:4 268:4 269:7 270:5 271:3 272:2 273:3 274:4 275:4 276:1 277:1 278:4 279:2 280:2 281:2 282:2 283:2 284:1 285:2 287:2 288:3 289:2 290:1 292:1 293:2 297:1 298:2 302:2 303:1 307:1 308:1 312:2 317:2 321:1 322:1

It's assumed a higher-level tool will parse and handle the histograms. Example plots might look like:

[two example histogram plots were attached as images]
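A sketch of what such a higher-level parser might look like, assuming the prototype PDF format above (helper names are illustrative, not part of any shipped iperf):

    import re

    def parse_pdf_line(line, binsize_us=10):
        """Return {latency_us: count} from a '...PDF(...)=110:1 111:2 ...' line."""
        body = line.split(")=", 1)[1]
        bins = {}
        for tok in body.split():
            b, c = tok.split(":")
            bins[int(b) * binsize_us] = int(c)   # bin index * 10us = latency bin edge
        return bins

    def percentile(bins, pct):
        """Approximate a percentile from binned counts."""
        total = sum(bins.values())
        target = total * pct / 100.0
        running = 0
        for latency in sorted(bins):
            running += bins[latency]
            if running >= target:
                return latency
        return max(bins)

With 10us bins, bin 137 is about 1.37 ms, which matches the 10/90=137/261 summary in the header of the line above.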
Bob

On Thu, May 5, 2016 at 7:08 PM, Aaron Wood <woody77@gmail.com> wrote:
> [full quote of Aaron's message snipped]
* Re: [Make-wifi-fast] QoS and test setups
From: Jonathan Morton @ 2016-05-06 4:41 UTC
To: Aaron Wood; Cc: make-wifi-fast

> On 6 May, 2016, at 05:08, Aaron Wood <woody77@gmail.com> wrote:
>
> There are a LOT of assumptions made when QoS systems based on marked packets is used:
>
> - That traffic X can starve others
> - That traffic X is more/most important

Cake tries to take a more nuanced approach than the above, though it is designed for a wired link. However, there are many systems in the wild which do implement strict priority. It is, after all, one of the easiest schemes to implement within a single transmitter unit.
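That last point is easy to see in code; a strict-priority dequeue is only a few lines, and the starvation comes along for free. A minimal sketch (hypothetical structure, not taken from any particular driver):

    from collections import deque

    # Minimal strict-priority transmit scheduler: always drain the
    # highest-priority non-empty queue. This is essentially the whole
    # scheme.
    class StrictPriority:
        def __init__(self, levels=4):        # e.g. WMM's four access categories
            self.queues = [deque() for _ in range(levels)]

        def enqueue(self, pkt, level):
            self.queues[level].append(pkt)   # level 0 is highest priority

        def dequeue(self):
            for q in self.queues:
                if q:
                    return q.popleft()       # lower levels starve while this one has traffic
            return None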
- Jonathan Morton

* Re: [Make-wifi-fast] QoS and test setups
From: Dave Taht @ 2016-05-07 16:50 UTC
To: Aaron Wood, flent-devel; Cc: make-wifi-fast, bloat, Michal Kazior

On Thu, May 5, 2016 at 7:08 PM, Aaron Wood <woody77@gmail.com> wrote:
> I saw Dave's tests on WMM vs. without, and started thinking about test
> setups for systems when QoS is in use (using classification, not just
> SQM/AQM).
>
> There are a LOT of assumptions made when QoS systems based on marked
> packets is used:
>
> - That traffic X can starve others
> - That traffic X is more/most important
>
> Our test tools are not particularly good at anything other than hammering
> the network (UDP or TCP). At least TCP has a built-in congestion control.
> I've seen many UDP (or even raw IP) test setups that didn't look anything
> like "real" traffic.

I sat back on this in the hope that someone else would jump forward... but you asked...

I ran across this distribution today: https://en.wikipedia.org/wiki/Rayleigh_distribution which looks closer to reflecting the latency/bandwidth problem we're always looking at.

I found this via this thread: https://news.ycombinator.com/item?id=11644845 which was fascinating.

I have to admit I have learnt most of my knowledge of statistics through osmosis, and by looking at (largely realtime) data that does not yield to "normal" distributions like the gaussian. So, rather than coming up with useful methods to reduce stuff to single numbers, I rely on curves and graphs, stay painfully aware of how sampling intervals can smooth out real spikes and problems, and try to convey intuition... and the wifi industry is wedded to charts of "rate over range for tcp and udp". Getting to rate+latency over range for those variables would be nice to see happen in their test tools....

There is another distribution that andrew was very hot on a few years ago: https://en.wikipedia.org/wiki/Tracy%E2%80%93Widom_distribution

I thought something like it could be used to look at basic problems in factoring in (or factoring out) header overheads, for example.

It would be good if we had a good statistician (or several) "on staff"... or there must be a whole set of mathematicians' mailing lists somewhere, all aching to dive into a more real-world problem?

> I know Dave has wanted an isochronous traffic tool that could simulate voip
> traffic (with in-band one-way latency/jitter/loss measurement capabilities).

d-itg, for which flent has some support, "does that", but it's a pita to set up and not exactly safe to use over the open internet.

*Yes*, the fact that the current rrul test suite and most others in flent do not have an isochronous baseline measurement - using an rtt-bound measurement instead - leads to very misleading comparison results when the measurement traffic gets a huge latency reduction. Measurement traffic thus becomes larger, and the corresponding observed bandwidth in most flent tests drops, as we are only measuring the bulk flows, not the measurement traffic, nor the acks.
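A sketch of the kind of isochronous baseline probe meant here: fixed-interval UDP packets carrying a sequence number and a send timestamp, so the receiver can compute one-way delay (given synced clocks), jitter, and loss in-band. The wire format and names are illustrative only:

    import socket, struct, time

    PKT = struct.Struct("!Id")   # sequence number, send time (unix seconds)
    INTERVAL = 0.010             # the 10ms isochronous interval

    def probe_sender(host, port, count=1000):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        deadline = time.monotonic()
        for seq in range(count):
            sock.sendto(PKT.pack(seq, time.time()), (host, port))
            deadline += INTERVAL                    # absolute deadlines, so no drift
            time.sleep(max(0.0, deadline - time.monotonic()))

    def probe_receiver(port):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        expected = 0
        while True:
            data, _ = sock.recvfrom(64)
            seq, sent = PKT.unpack(data)
            owd_ms = (time.time() - sent) * 1000.0  # one-way delay; needs synced clocks
            gap = seq - expected                    # > 0 means loss (ignores reordering)
            expected = seq + 1
            print("seq=%d owd=%.3fms lost=%d" % (seq, owd_ms, gap))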
Using ping-like traffic was "good enough" when we started, and we were cutting latencies by orders of magnitude on a regular basis. But, for example, I just showed a long-term 5x latency reduction for stock wifi vs michal's patches at 100mbit - from 100ms to 20ms or so - and I have no idea how the corresponding bandwidth loss is correlated. In a couple of tests the measurement flows also drop into another wifi hw queue entirely (and I'm pretty convinced that we should always fold stuff into the nearest queue when we're busy, no matter the marking).

Anyway, I'm digesting a ton of the short-term results we got from the last week of testing michal's patches...

(See the cerowrt blog github repo and compare the stock vs fqmac35 results on the short tests.) I *think* that most of the difference in performance is due to noise on the test (the 120ms burps downward in bandwidth caused by something else), some of the rest can be accounted for by more measurement traffic, and probably all the rest by dql taking too long to ramp up.

The long-term result of the fq_codel wifi patch at the mac80211 layer was *better* all round: bandwidth stayed the same, latency and jitter got tons better (if I figure out what was causing the burps - they don't happen on OSX, just linux). Anyway, compare the baseline patches to the patch here, on the second plot:

http://blog.cerowrt.org/post/predictive_codeling/

Lovely stuff.

But the short-term results were noisy, and the tens-of-seconds-long dql ramp was visible on some of those tests (sorry, no link for those yet, it was in one of michal's mails).

Also (in flent) I increasingly dislike sampling at 200ms intervals, and would prefer to be getting insights at 10-20ms intervals. Or lower! 1ms would be *perfect*. :) I can get --step-size in flent down to about 40ms before starting to see things like fping "get behind" - fixing that would require changing fping to use fdtimers to fire stuff off more precisely than it does, or finding/writing another ping tool.

Linux fdtimers are *amazing*; we use those in tc_iterate.c.
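For reference, a sketch of the timerfd pattern in question, here via the os.timerfd_* wrappers that landed in Python 3.13 (Linux only; tc_iterate.c uses the equivalent C calls, timerfd_create/timerfd_settime, directly):

    import os, time

    # Drift-free 10ms ticker on a Linux timerfd.
    fd = os.timerfd_create(time.CLOCK_MONOTONIC)
    os.timerfd_settime(fd, initial=0.010, interval=0.010)
    try:
        for _ in range(1000):
            # read() blocks until the next tick; the 8-byte counter
            # (host byte order, little-endian assumed here) says how
            # many ticks fired since the last read - anything > 1 means
            # we fell behind.
            expirations = int.from_bytes(os.read(fd, 8), "little")
            if expirations > 1:
                print("missed %d tick(s)" % (expirations - 1))
            # ... fire off the probe/sample here ...
    finally:
        os.close(fd)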
The only way I can think of to get down below 5ms would be to have better tools for looking at packet captures. I have not had much chance to look at "teacup" as yet. tcptrace -G + xplot.org and wireshark's tools are as far as I go. Any other tools for taking apart captures out there? In particular, aircaps of wifi traffic, retransmits, and rate changes have been giving me enough of a headache to want to sit down and tear them apart with wireshark's lua stuff... or something.

It would be nice to measure latencies in bulk flows, directly.

...

I've long figured that if we ever got to the basic isochronous test on the 10ms interval I originally specified, we'd either revise the rrul-related tests to suit (the rrul2016 "standard"), or create a new set called "crrul" - "correct rrul".

We have a few isochronous tests to choose from. There are d-itg tests in the suite that emulate voip fairly well. The show-stopper thus far has been that doing things like that (or iperf/netperf's udp flooding tests) is unsafe on the general internet, and I wanted some form of test that negotiated a 3-way handshake, at least, and also enforced a time limit on how long it sent traffic.

That said, to heck with it for internal tests as we are doing now.

We have a few simpler tools than d-itg that could be built upon. Avery has the isoping tests, which I'd forked mildly at one point but never got around to doing much with. There's also things like what I was calling "twd" that I gave up on, and there's a very precise

> What other tools do we need, for replicating traffic types that match how
> these QoS types in wifi are meant to be used? I think we're doing an
> excellent job of showing how they can be abused. Abusing is pretty easy, at
> this point (rrul, iPerf, etc).

:) Solving for abuse is useful, I think, also.

Solving for real traffic types like HAS and videoconferencing would be better.

Having a steady, non-greedy flow test (like a basic music or video stream) would be good.

I'd love to have a 3-5 flow HAS-like test to fold into the others.

I was unaware that iperf3 can output json; I'm not sure what else can be done with it.

We had tried to use the web10g stuff at one point, but the kernel patches were too invasive. A lot of what was in web10g has probably made it into the kernel by now; perhaps we can start pulling out more complete stats with things like netstat -ss or TCP_INFO?
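A sketch of the TCP_INFO route, which needs no kernel patches at all; the field offsets below follow struct tcp_info in linux/tcp.h and should be double-checked against your kernel's headers:

    import socket, struct

    def tcp_info(sock):
        # Pull TCP_INFO off a connected socket. The struct starts with
        # 7 u8 fields (state, ca_state, retransmits, probes, backoff,
        # options, wscales) plus a pad byte, then a run of u32s.
        buf = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 192)
        u32 = struct.unpack_from("=21I", buf, 8)
        return {
            "rto_us":    u32[0],    # retransmission timeout
            "snd_mss":   u32[2],
            "lost":      u32[6],
            "retrans":   u32[7],
            "rtt_us":    u32[15],   # smoothed RTT, microseconds
            "rttvar_us": u32[16],
            "snd_cwnd":  u32[18],   # congestion window, in packets
        }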
Incidentally - I don't trust d-itg very far. Could use fdtimers, could use realtime privs.

> -Aaron Wood

--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org

* Re: [Make-wifi-fast] QoS and test setups
From: Dave Taht @ 2016-05-07 21:49 UTC
To: Aaron Wood, flent-devel; Cc: make-wifi-fast, bloat, Michal Kazior

On Sat, May 7, 2016 at 9:50 AM, Dave Taht <dave.taht@gmail.com> wrote:

> [most of the quoted message snipped]
> We have a few simpler tools than d-itg that could be built upon. Avery
> has the isoping tests which I'd forked mildly at one point, but never
> got around to doing much with. There's also things like what I was
> calling "twd" that I gave up on, and there's a very precise

"owamp" thing, part of the internet2 project. I had used it for a while (had a parser for it, even, because I preferred the raw data). I had basically forgotten about it because it is not packaged up for debian, and it has a few issues with 64-bit platforms that I meant to poke into.

I did enjoy trying to get it running a few minutes ago.

root@dancer:/usr/local/etc# owampd
owampd[19846]: WARNING: No limits specified.
owampd[19846]: Running owampd as root is folly!
owampd[19846]: Use the -U option! (or allow root with the -f option)

http://software.internet2.edu/owamp/details.html

I think I (or stephen walker) had packaged it up for cerowrt, but as we never got GPSes into the hands of enough users (and my testbeds were largely divorced from the internet) I let the idea slide. Toke's testbed has fully synced time.

Now that GPSes are even cheaper (like a raspberry pi hat), hmm....

Grump, it doesn't compile on aarch64....

...

There is something of a great divide between us and the perfsonar project. They use iperf, not netperf; they work on fedora, not ubuntu... and last I recall they were still stuck at linux 2.6.32 or somesuch. Anyone booted that up lately?
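All of this one-way measurement is only as good as the clock sync underneath it; for reference, a sketch of the classic four-timestamp offset/delay estimate (standard NTP arithmetic) that such tools build on:

    def ntp_offset_delay(t1, t2, t3, t4):
        # Classic four-timestamp exchange: client sends at t1, server
        # receives at t2 and replies at t3, client receives at t4.
        offset = ((t2 - t1) + (t3 - t4)) / 2.0   # remote clock minus local
        delay = (t4 - t1) - (t3 - t2)            # round trip, minus server hold time
        return offset, delay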
> [rest of quote snipped]

--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org

* Re: [Make-wifi-fast] QoS and test setups
From: Bob McMahon @ 2016-05-08 19:07 UTC
To: Dave Taht; Cc: Aaron Wood, flent-devel, make-wifi-fast, bloat

On a statistician - I've been learning from Shashi Sathyanarayana of Numeric Insight <http://www.numericinsight.com/Home.html>, with the intention of applying machine learning techniques (PCA <https://en.wikipedia.org/wiki/Principal_component_analysis>, etc.) to both network traffic and wi-fi traffic.

*Shashi Sathyanarayana, Ph.D., the founder of Numeric Insight, Inc., has spent more than a decade accumulating expertise in scientific programming, algorithm development and teaching.*

Things are still in the early stages of prototyping, so if there are specific needs not mentioned in the current threads it would be interesting to know them. (A current project is clustering rig results by their frequency responses and spatial-stream eigenmodes, i.e. learning the PHY characteristics of multiple test rigs, which should allow for scaling. Though this has to be done in controlled PHY environments.)
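As a sketch of the PCA step (numpy only; the feature matrix shown is stand-in data, not real rig results):

    import numpy as np

    def pca(X, k=2):
        # Toy PCA via SVD: rows are test runs, columns are per-rig
        # numeric features (per-subcarrier frequency response,
        # per-stream eigenmode gains, ...). Feature choice is illustrative.
        Xc = X - X.mean(axis=0)                   # center each feature
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:k].T                      # coordinates on top-k components

    runs = np.random.default_rng(1).normal(size=(200, 56))  # stand-in data
    coords = pca(runs)                            # 2-D embedding to cluster/plot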
With respect to per-UDP-packet latency, the end/end measurement is already in 2.0.8. Realtime scheduling and kernel RX timestamping are used if the host supports them (well, except for Mac OS X, where that part of the code is still not complete).

Something others might find helpful is the ability to insert microsecond timestamps inside a UDP payload as packets move through a subsystem. It might be a good idea to standardize this if it's of interest to the larger group. Here's an example with the end/end timestamp and five contributing timestamps. The client inserts a tag on write to trigger timestamp insertion, and the server produces the subgrouped mean/min/max/stdev and a PDF per report. A higher-level tool can then plot them for either human visualization or machine analysis.

11:12:35.158 HNDLR UDP-rx [ 3] 0.10-0.15 sec 1467060 Bytes 234729600 bits/sec 1.972 ms 45/ 1043 (4.3%) 2.322/ 1.224/ 3.664/ 0.534 DHD: 2.317/ 1.219/ 3.656/ 0.534 FW1: 1.900/ 1.209/ 2.994/ 0.310 FW2: 4.772/ 4.109/ 8.120/ 0.304 FW3:32.626/ 0.169/64.681/18.486 FW4: 1.389/ 0.946/ 2.075/ 0.215 ms 19787 pps (4357Bif,file8)

11:12:35.161 HNDLR UDP-rx [ 3] 0.10-0.15 sec PDF:ToT(bins/size=10k/10us,10/90=170/308)=123:1 124:1 127:2 128:1 129:1 130:1 132:6 133:2 134:2 135:2 136:2 137:6 138:3 139:2 140:1 141:3 142:6 143:3 144:4 146:6 147:5 148:4 149:3 150:2 151:6 152:5 153:5 154:2 155:9 156:5 157:8 158:5 160:4 161:6 162:4 163:3 164:2 165:4 166:6 167:4 168:1 169:4 170:5 171:4 172:4 173:4 174:3 175:4 176:5 177:3 178:5 179:2 180:6 181:6 182:3 183:5 184:5 185:6 186:4 187:1 188:5 189:4 190:5 191:6 192:2 193:6 194:6 195:7 196:5 197:4 198:5 199:5 200:9 201:2 202:6 203:4 204:5 205:9 206:4 207:8 208:5 209:5 210:6 211:5 212:7 213:6 214:8 215:4 216:8 217:5 218:8 219:6 220:4 221:8 222:4 223:6 224:8 225:6 226:7 227:3 228:10 229:3 230:7 231:6 232:7 233:7 234:3 235:12 236:5 237:6 238:7 239:6 240:9 241:4 242:12 243:3 244:12 245:4 246:4 247:7 248:4 249:6 250:6 251:10 252:6 253:4 254:6 255:6 256:7 257:7 258:7 259:8 260:3 261:6 262:3 263:8 264:3 265:10 266:6 267:4 268:9 269:3 270:8 271:6 272:10 273:4 274:3 275:10 276:3 277:7 278:4 279:6 280:9 281:4 282:6 283:3 284:7 285:5 286:5 287:7 288:4 289:7 290:2 291:6 292:5 293:4 294:7 295:5 296:6 297:4 298:4 299:4 300:4 301:4 302:2 303:5 304:4 305:5 306:3 307:5 308:9 309:3 310:5 311:2 312:2 313:3 314:3 315:3 316:1 317:3 318:3 319:1 320:5 321:2 322:5 324:1 325:1 326:3 327:2 329:2 331:1 332:2 333:1 334:1 336:2 338:1 341:2 343:2 345:1 347:1 348:1 350:1 353:1 355:1 357:1 359:1 362:1 364:1 367:1 (4357Bif,file8)

... [snipped secondary histograms]
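A sketch of what a standardized in-payload timestamp block might look like: a slot count followed by (subsystem id, microsecond timestamp) pairs appended at each stage. The layout is purely illustrative, not iperf's actual wire format:

    import struct, time

    SLOT = struct.Struct("!IQ")   # subsystem id, microseconds since the epoch
    HDR  = struct.Struct("!I")    # count of slots present

    def new_payload():
        return HDR.pack(0)

    def stamp(payload, subsystem_id):
        # Append this subsystem's microsecond timestamp as the packet
        # passes through (client, driver, firmware stage, ...).
        n, = HDR.unpack_from(payload)
        now_us = int(time.time() * 1e6)
        return HDR.pack(n + 1) + payload[HDR.size:] + SLOT.pack(subsystem_id, now_us)

    def deltas(payload):
        # Per-stage latencies in microseconds between consecutive slots.
        n, = HDR.unpack_from(payload)
        stamps = [SLOT.unpack_from(payload, HDR.size + i * SLOT.size)
                  for i in range(n)]
        return [(b[0], b[1] - a[1]) for a, b in zip(stamps, stamps[1:])]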
Bob

On Sat, May 7, 2016 at 2:49 PM, Dave Taht <dave.taht@gmail.com> wrote:
> [full quote of Dave's message snipped]