* [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems
@ 2015-05-14 19:13 Dave Taht
  2015-05-15  2:48 ` Greg White
  0 siblings, 1 reply; 30+ messages in thread
From: Dave Taht @ 2015-05-14 19:13 UTC (permalink / raw)
  To: cake, cerowrt-devel, bloat; +Cc: Greg White, Klatsky, Carl

One thing I find remarkable is that my isochronous 10ms ping flow test (trying to measure the accuracy of the dslreports test) totally heisenbugs the cable 16 (used to be 24) flow dslreports test.

Without it, using cake as the inbound and outbound shaper, I get a grade of C to F, due to inbound latencies measured in seconds.

http://www.dslreports.com/speedtest/479096

With that measurement flow running, I do much better (max observed latency of 210ms or so), with grades ranging from B to A...

http://www.dslreports.com/speedtest/478950

I am only sending and receiving an extra ~10000 bytes/sec (100 ping packets/sec) to get this difference between results. The uplink is 11Mbit, the downlink 110 (configured for 100).

The only things I can think of are:

* ack prioritization on the modem
* hitting packet limits on the CMTS forcing drops upstream (there was a paper on this idea, can't remember the name (?) )
* always-on media access reducing grant latency
* cake misbehavior (it is well understood that codel does not react fast enough here)
* cake goodness (fq of the ping making for less ack prioritization?)
* ????

I am pretty sure the cable makers would not approve of someone continuously pinging their stuff in order to get lower latency on downloads (but it would certainly be one way to add continuous tuning of the real rate to cake!)

Ideas?

The simple test:

# you need to be root to ping on a 10ms interval
# and please pick your own server!

$ sudo fping -c 10000 -i 10 -p 10 snapon.lab.bufferbloat.net > vscabletest_cable.out

Start a dslreports "cable" test in your browser.

Abort the ping (CTRL-C) when done. Post-processing of fping's format:

$ cat vscabletest_cable.out | cut -f3- -d, | awk '{ print $1 }' > vscabletest-cable.txt

Import into your favorite spreadsheet and plot.

^ permalink raw reply	[flat|nested] 30+ messages in thread
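If you would rather skip the spreadsheet step, gnuplot (assumed to be installed) can render the same one-column file that the awk step above produces; the filenames here are just the ones used in the example:

$ gnuplot -e "set terminal png size 1024,480; set output 'vscabletest-cable.png'; set ylabel 'RTT (ms)'; set xlabel 'ping number (10ms apart)'; plot 'vscabletest-cable.txt' with lines title 'fping RTT'"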
* Re: [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems
  2015-05-14 19:13 [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems Dave Taht
@ 2015-05-15  2:48 ` Greg White
  2015-05-15  4:44   ` Aaron Wood
  2015-05-15 17:47   ` [Cerowrt-devel] " Dave Taht
  0 siblings, 2 replies; 30+ messages in thread
From: Greg White @ 2015-05-15 2:48 UTC (permalink / raw)
  To: Dave Taht, cake, cerowrt-devel, bloat; +Cc: Klatsky, Carl

I don't have any ideas, but I can try to cross some of yours off the list... :)

"* always on media access reducing grant latency"

During the download test you are receiving 95 Mbps; that works out to what, an upstream ACK every 0.25ms? The CM will get an opportunity to send a piggybacked request approx every 2ms. It seems like it will always be piggybacking in order to send the 8 new ACKs that have arrived in the last 2ms interval. I can't see how adding a ping packet every 10ms would influence the behavior.

"* ack prioritization on the modem"

ACK prioritization would mean that the modem would potentially delay the pings (yours and dslreports') in order to service the ACKs. Not sure why this wouldn't delay dslreports' ACKs when yours are present.

"* hitting packet limits on the CMTS..."

I'd have to see the paper, but I don't see a significant difference in DS throughput between the two tests. Are you thinking that there are just enough packet drops happening to reduce bufferbloat, but not affect throughput? T'would be lucky.

On 5/14/15, 1:13 PM, "Dave Taht" <dave.taht@gmail.com> wrote:

> [original message quoted in full; snipped, see the start of the thread]

^ permalink raw reply	[flat|nested] 30+ messages in thread
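For reference, a back-of-the-envelope check of Greg's figures, assuming 1500-byte downstream frames and delayed ACKs (one ACK per two data segments):

$ echo "95000000 / (1500 * 8)" | bc -l           # ~7917 data packets/s downstream
$ echo "95000000 / (1500 * 8) / 2" | bc -l       # ~3958 ACKs/s upstream, i.e. one every ~0.25ms
$ echo "0.002 * 95000000 / (1500 * 8) / 2" | bc -l   # ~8 ACKs accumulate per 2ms request opportunity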
* Re: [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-15 2:48 ` Greg White @ 2015-05-15 4:44 ` Aaron Wood 2015-05-15 8:18 ` [Cerowrt-devel] [Bloat] " Eggert, Lars 2015-05-15 17:47 ` [Cerowrt-devel] " Dave Taht 1 sibling, 1 reply; 30+ messages in thread From: Aaron Wood @ 2015-05-15 4:44 UTC (permalink / raw) To: Greg White; +Cc: cake, Klatsky, Carl, cerowrt-devel, bloat [-- Attachment #1: Type: text/plain, Size: 44 bytes --] ICMP prioritization over TCP? > >Ideas? > [-- Attachment #2: Type: text/html, Size: 284 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-15 4:44 ` Aaron Wood @ 2015-05-15 8:18 ` Eggert, Lars 2015-05-15 8:55 ` Sebastian Moeller 2015-05-15 11:27 ` [Cerowrt-devel] " Bill Ver Steeg (versteb) 0 siblings, 2 replies; 30+ messages in thread From: Eggert, Lars @ 2015-05-15 8:18 UTC (permalink / raw) To: Aaron Wood; +Cc: cake, Greg White, Klatsky, Carl, cerowrt-devel, bloat [-- Attachment #1: Type: text/plain, Size: 482 bytes --] On 2015-5-15, at 06:44, Aaron Wood <woody77@gmail.com<mailto:woody77@gmail.com>> wrote: ICMP prioritization over TCP? Probably. Ping in parallel to TCP is a hacky way to measure latencies; not only because of prioritization, but also because you don't measure TCP send/receive buffer latencies (and they can be large, auto-tuning is not so great.) You really need to embed timestamps in the TCP bytestream and echo them back. See the recent netperf patch I sent. Lars [-- Attachment #2: Type: text/html, Size: 1447 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
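Lars's netperf patch is not reproduced here, but as a stopgap the stock TCP_RR test already reports application-level round-trip latency carried inside a TCP bytestream (on its own sparse connection, rather than inside the bulk transfer itself). The server name is just the one used earlier in this thread, and the omni output selectors are assumed to be available (netperf 2.6 or later):

$ netperf -H snapon.lab.bufferbloat.net -t TCP_RR -l 30 -- -o min_latency,mean_latency,p99_latency,max_latency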
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems
  2015-05-15  8:18 ` [Cerowrt-devel] [Bloat] " Eggert, Lars
@ 2015-05-15  8:55   ` Sebastian Moeller
  2015-05-15 11:10     ` [Cerowrt-devel] [Cake] " Alan Jenkins
  2015-05-15 11:27   ` [Cerowrt-devel] " Bill Ver Steeg (versteb)
  1 sibling, 1 reply; 30+ messages in thread
From: Sebastian Moeller @ 2015-05-15 8:55 UTC (permalink / raw)
  To: Eggert, Lars; +Cc: Klatsky, Carl, cake, cerowrt-devel, bloat, Greg White

Hi Lars,

On May 15, 2015, at 10:18 , Eggert, Lars <lars@netapp.com> wrote:

> On 2015-5-15, at 06:44, Aaron Wood <woody77@gmail.com> wrote:
>> ICMP prioritization over TCP?
>
> Probably.

	Interesting; so far I have often heard that ICMP echo requests are bad, as they are often rate-limited and/or processed in a slow path in routers...

> Ping in parallel to TCP is a hacky way to measure latencies; not only because of prioritization, but also because you don't measure TCP send/receive buffer latencies (and they can be large, auto-tuning is not so great.)

	I guess the concurrent ICMP echo requests are a better measure of flow separation and sparse-flow-boosting than of intra-flow latency. TCP embedded timestamps would be a hacky way to measure those ;) .

> You really need to embed timestamps in the TCP bytestream and echo them back. See the recent netperf patch I sent.

	I hope this makes it into the main netperf branch…

Best Regards
	Sebastian

> Lars
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Cake] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-15 8:55 ` Sebastian Moeller @ 2015-05-15 11:10 ` Alan Jenkins 0 siblings, 0 replies; 30+ messages in thread From: Alan Jenkins @ 2015-05-15 11:10 UTC (permalink / raw) To: Sebastian Moeller, Eggert, Lars; +Cc: cake, cerowrt-devel, bloat On 15/05/15 09:55, Sebastian Moeller wrote: > Hi Lars, > > > On May 15, 2015, at 10:18 , Eggert, Lars <lars@netapp.com> wrote: > >> On 2015-5-15, at 06:44, Aaron Wood <woody77@gmail.com> wrote: >>> ICMP prioritization over TCP? >> Probably. > Interesting so far I often heard ICMP echo requests are bad as they are often rate-limited and/or processed in a slow path in routers... Yes, if you ping an ISP router itself. You can avoid that by pinging an end-host. Then you'll reveal silly QoS implementations at the edge of the network which prioritize ping. Or hit one like SQM (simple.qos) that de-prioritises it. So you can get biased results in either direction :). Need to test very carefully. I like that rrul includes udp probes as well. The betterspeedtest and netperfrunner.sh scripts let you ping a router if you want, which is what I started off my testing with. You can get a nice low minimum but I don't really trust that now. By default they ping a local google IP, which might give more consistent results. >> Ping in parallel to TCP is a hacky way to measure latencies; not only because of prioritization, but also because you don't measure TCP send/receive buffer latencies (and they can be large, auto-tuning is not so great.) > I guess the concurrent ICMP echo requests are a better measure for flow separation and sparse-flow-boostiing than inter-flow latency. TCP embedded timestamps would be a jacky way to measure those ;) . +1 > >> You really need to embed timestamps in the TCP bytestream and echo them back. See the recent netperf patch I sent. > I hope this makes into the main netperf branch… > > Best Regards > Sebastian > ^ permalink raw reply [flat|nested] 30+ messages in thread
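The rrul test Alan mentions comes with flent (the tool formerly known as netperf-wrapper); a typical run against a netperf server, with the hostname here only as an example, looks roughly like this:

$ flent rrul -l 60 -H snapon.lab.bufferbloat.net -t "cable, cake in+out"
$ flent --gui *.flent.gz    # inspect latency under load alongside the TCP flows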
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-15 8:18 ` [Cerowrt-devel] [Bloat] " Eggert, Lars 2015-05-15 8:55 ` Sebastian Moeller @ 2015-05-15 11:27 ` Bill Ver Steeg (versteb) 2015-05-15 12:19 ` Jonathan Morton 2015-05-15 12:44 ` Eggert, Lars 1 sibling, 2 replies; 30+ messages in thread From: Bill Ver Steeg (versteb) @ 2015-05-15 11:27 UTC (permalink / raw) To: Eggert, Lars, Aaron Wood; +Cc: cake, Klatsky, Carl, cerowrt-devel, bloat [-- Attachment #1: Type: text/plain, Size: 1681 bytes --] But the TCP timestamps are impacted by packet loss. You will sometimes get an accurate RTT reading, and you will sometimes get multiples of the RTT due to packet loss and retransmissions. I would hate to see a line classified as bloated when the real problem is simple packet loss. Head of line blocking, cumulative acks, yada, yada, yada. You really need to use a packet oriented protocol (ICMP/UDP) to get a true measure of RTT at the application layer. If you can instrument TCP in the kernel to make instantaneous RTT available to the application, that might work. I am not sure how you would roll that out in a timely manner, though. I think I actually wrote some code to do this on BSD many years ago, and it gave pretty good results. I was building a terminal server (remember those?) and needed to have ~50ms +- 20ms echo times. Bvs From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Eggert, Lars Sent: Friday, May 15, 2015 4:18 AM To: Aaron Wood Cc: cake@lists.bufferbloat.net; Klatsky, Carl; cerowrt-devel@lists.bufferbloat.net; bloat Subject: Re: [Bloat] [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems On 2015-5-15, at 06:44, Aaron Wood <woody77@gmail.com<mailto:woody77@gmail.com>> wrote: ICMP prioritization over TCP? Probably. Ping in parallel to TCP is a hacky way to measure latencies; not only because of prioritization, but also because you don't measure TCP send/receive buffer latencies (and they can be large, auto-tuning is not so great.) You really need to embed timestamps in the TCP bytestream and echo them back. See the recent netperf patch I sent. Lars [-- Attachment #2: Type: text/html, Size: 5513 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-15 11:27 ` [Cerowrt-devel] " Bill Ver Steeg (versteb) @ 2015-05-15 12:19 ` Jonathan Morton 2015-05-15 12:44 ` Eggert, Lars 1 sibling, 0 replies; 30+ messages in thread From: Jonathan Morton @ 2015-05-15 12:19 UTC (permalink / raw) To: Bill Ver Steeg (versteb) Cc: Klatsky, Carl, cake, cerowrt-devel, bloat, Eggert, Lars > On 15 May, 2015, at 14:27, Bill Ver Steeg (versteb) <versteb@cisco.com> wrote: > > But the TCP timestamps are impacted by packet loss. You will sometimes get an accurate RTT reading, and you will sometimes get multiples of the RTT due to packet loss and retransmissions. I would hate to see a line classified as bloated when the real problem is simple packet loss. Head of line blocking, cumulative acks, yada, yada, yada. TCP stacks supporting Timestamps already implement an algorithm to get a relatively reliable RTT measurement out of them. The algorithm is described in the relevant RFC. That’s the entire point of having Timestamps, and it wouldn’t be difficult to replicate that externally by observing both directions of traffic past an intermediate point; you’d get the partial RTTs to each endpoint of the flow, the sum of which is the total RTT. But what you’d get is the RTT of that particular TCP flow. This is likely to be longer than the RTT of a competing sparse flow, if the bottleneck queue uses any kind of competent flow isolation. - Jonathan Morton ^ permalink raw reply [flat|nested] 30+ messages in thread
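A rough way to apply the "observe both directions from an intermediate point" idea: capture at the router and let tcptrace compute per-connection RTT estimates from the timestamps and ACK clocking it sees. The flags are from memory, so treat this as a sketch:

$ tcpdump -i eth0 -s 96 -w flows.pcap 'tcp'
$ tcptrace -l -r flows.pcap    # long per-connection output including RTT min/avg/max samples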
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-15 11:27 ` [Cerowrt-devel] " Bill Ver Steeg (versteb) 2015-05-15 12:19 ` Jonathan Morton @ 2015-05-15 12:44 ` Eggert, Lars 2015-05-15 13:09 ` Bill Ver Steeg (versteb) 1 sibling, 1 reply; 30+ messages in thread From: Eggert, Lars @ 2015-05-15 12:44 UTC (permalink / raw) To: Bill Ver Steeg (versteb); +Cc: cake, Klatsky, Carl, cerowrt-devel, bloat [-- Attachment #1: Type: text/plain, Size: 2123 bytes --] Hi, On 2015-5-15, at 13:27, Bill Ver Steeg (versteb) <versteb@cisco.com> wrote: > But the TCP timestamps are impacted by packet loss. You will sometimes get an accurate RTT reading, and you will sometimes get multiples of the RTT due to packet loss and retransmissions. right. But those will be transient, and so you can identify them against the baseline. > You really need to use a packet oriented protocol (ICMP/UDP) to get a true measure of RTT at the application layer. I disagree. You can use them to establish a lower bound on the delay an application over TCP will see, but not get an accurate estimate of that (because socket buffers are not included in the measurement.) And you rely on the network to not prioritize ICMP/UDP but otherwise leave it in the same queues. > If you can instrument TCP in the kernel to make instantaneous RTT available to the application, that might work. I am not sure how you would roll that out in a timely manner, though. That's already part of the TCP info struct, I think. At least in Linux. Lars > I think I actually wrote some code to do this on BSD many years ago, and it gave pretty good results. I was building a terminal server (remember those?) and needed to have ~50ms +- 20ms echo times. > > > Bvs > > From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Eggert, Lars > Sent: Friday, May 15, 2015 4:18 AM > To: Aaron Wood > Cc: cake@lists.bufferbloat.net; Klatsky, Carl; cerowrt-devel@lists.bufferbloat.net; bloat > Subject: Re: [Bloat] [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems > > On 2015-5-15, at 06:44, Aaron Wood <woody77@gmail.com> wrote: > ICMP prioritization over TCP? > > Probably. > > Ping in parallel to TCP is a hacky way to measure latencies; not only because of prioritization, but also because you don't measure TCP send/receive buffer latencies (and they can be large, auto-tuning is not so great.) > > You really need to embed timestamps in the TCP bytestream and echo them back. See the recent netperf patch I sent. > > Lars [-- Attachment #2: Message signed with OpenPGP using GPGMail --] [-- Type: application/pgp-signature, Size: 163 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
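On Linux, the kernel's per-connection RTT estimate (from struct tcp_info) is already visible without touching the application, which is handy for sanity-checking what ping reports:

$ ss -ti
# each established connection prints something like
#   ... rtt:23.5/4.2 ato:40 cwnd:10 ...
# where rtt is the smoothed RTT / RTT variance in milliseconds for that flow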
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-15 12:44 ` Eggert, Lars @ 2015-05-15 13:09 ` Bill Ver Steeg (versteb) 2015-05-15 13:35 ` Jim Gettys 0 siblings, 1 reply; 30+ messages in thread From: Bill Ver Steeg (versteb) @ 2015-05-15 13:09 UTC (permalink / raw) To: Eggert, Lars; +Cc: cake, Klatsky, Carl, cerowrt-devel, bloat Lars- You make some good points. It boils down to the fact that there are several things that you can measure, and they mean different things. Bvs -----Original Message----- From: Eggert, Lars [mailto:lars@netapp.com] Sent: Friday, May 15, 2015 8:44 AM To: Bill Ver Steeg (versteb) Cc: Aaron Wood; cake@lists.bufferbloat.net; Klatsky, Carl; cerowrt-devel@lists.bufferbloat.net; bloat Subject: Re: [Bloat] [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems Hi, On 2015-5-15, at 13:27, Bill Ver Steeg (versteb) <versteb@cisco.com> wrote: > But the TCP timestamps are impacted by packet loss. You will sometimes get an accurate RTT reading, and you will sometimes get multiples of the RTT due to packet loss and retransmissions. right. But those will be transient, and so you can identify them against the baseline. > You really need to use a packet oriented protocol (ICMP/UDP) to get a true measure of RTT at the application layer. I disagree. You can use them to establish a lower bound on the delay an application over TCP will see, but not get an accurate estimate of that (because socket buffers are not included in the measurement.) And you rely on the network to not prioritize ICMP/UDP but otherwise leave it in the same queues. > If you can instrument TCP in the kernel to make instantaneous RTT available to the application, that might work. I am not sure how you would roll that out in a timely manner, though. That's already part of the TCP info struct, I think. At least in Linux. Lars > I think I actually wrote some code to do this on BSD many years ago, and it gave pretty good results. I was building a terminal server (remember those?) and needed to have ~50ms +- 20ms echo times. > > > Bvs > > From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Eggert, Lars > Sent: Friday, May 15, 2015 4:18 AM > To: Aaron Wood > Cc: cake@lists.bufferbloat.net; Klatsky, Carl; cerowrt-devel@lists.bufferbloat.net; bloat > Subject: Re: [Bloat] [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems > > On 2015-5-15, at 06:44, Aaron Wood <woody77@gmail.com> wrote: > ICMP prioritization over TCP? > > Probably. > > Ping in parallel to TCP is a hacky way to measure latencies; not only because of prioritization, but also because you don't measure TCP send/receive buffer latencies (and they can be large, auto-tuning is not so great.) > > You really need to embed timestamps in the TCP bytestream and echo them back. See the recent netperf patch I sent. > > Lars ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-15 13:09 ` Bill Ver Steeg (versteb) @ 2015-05-15 13:35 ` Jim Gettys 2015-05-15 14:36 ` Simon Barber 2015-05-15 16:59 ` [Cerowrt-devel] " Dave Taht 0 siblings, 2 replies; 30+ messages in thread From: Jim Gettys @ 2015-05-15 13:35 UTC (permalink / raw) To: Bill Ver Steeg (versteb) Cc: cake, Klatsky, Carl, Eggert, Lars, cerowrt-devel, bloat [-- Attachment #1: Type: text/plain, Size: 1653 bytes --] On Fri, May 15, 2015 at 9:09 AM, Bill Ver Steeg (versteb) <versteb@cisco.com > wrote: > Lars- > > You make some good points. It boils down to the fact that there are > several things that you can measure, and they mean different things. > > Bvs > > > -----Original Message----- > From: Eggert, Lars [mailto:lars@netapp.com] > Sent: Friday, May 15, 2015 8:44 AM > To: Bill Ver Steeg (versteb) > Cc: Aaron Wood; cake@lists.bufferbloat.net; Klatsky, Carl; > cerowrt-devel@lists.bufferbloat.net; bloat > Subject: Re: [Bloat] [Cerowrt-devel] heisenbug: dslreports 16 flow test vs > cablemodems > > > I disagree. You can use them to establish a lower bound on the delay an > application over TCP will see, but not get an accurate estimate of that > (because socket buffers are not included in the measurement.) And you rely > on the network to not prioritize ICMP/UDP but otherwise leave it in the > same queues. > On recent versions of Linux and Mac, you can get most of the socket buffers to "go away". I forget the socket option offhand. And TCP small queues in Linux means that Linux no longer gratuitously generates packets just to dump them into the queue discipline system where they will rot. How accurate this now can be is still an interesting question: but has clearly improved the situation a lot over 3-4 years ago. > > If you can instrument TCP in the kernel to make instantaneous RTT > available to the application, that might work. I am not sure how you would > roll that out in a timely manner, though. > > Well, the sooner one starts, the sooner it gets deployed. Jim [-- Attachment #2: Type: text/html, Size: 2936 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
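The socket option Jim presumably has in mind is TCP_NOTSENT_LOWAT (Linux 3.12+, also available on OS X); on Linux there is a matching system-wide sysctl, so the effect is easy to experiment with:

$ sysctl net.ipv4.tcp_notsent_lowat            # a huge default means effectively unlimited unsent data in the socket buffer
$ sudo sysctl -w net.ipv4.tcp_notsent_lowat=16384   # cap unsent data queued per socket to 16KB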
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-15 13:35 ` Jim Gettys @ 2015-05-15 14:36 ` Simon Barber 2015-05-18 3:30 ` dpreed 2015-05-15 16:59 ` [Cerowrt-devel] " Dave Taht 1 sibling, 1 reply; 30+ messages in thread From: Simon Barber @ 2015-05-15 14:36 UTC (permalink / raw) To: Jim Gettys, Bill Ver Steeg (versteb) Cc: cake, Klatsky, Carl, cerowrt-devel, bloat [-- Attachment #1: Type: text/plain, Size: 2309 bytes --] One question about TCP small queues (which I don't think is a good solution to the problem). For 802.11 to be able to perform well it needs to form maximum size aggregates. This means that it needs to maintain a minimum queue size of at least 64 packets, and sometimes more. Will TCP small queues prevent this? Simon Sent with AquaMail for Android http://www.aqua-mail.com On May 15, 2015 6:44:21 AM Jim Gettys <jg@freedesktop.org> wrote: > On Fri, May 15, 2015 at 9:09 AM, Bill Ver Steeg (versteb) <versteb@cisco.com > > wrote: > > > Lars- > > > > You make some good points. It boils down to the fact that there are > > several things that you can measure, and they mean different things. > > > > Bvs > > > > > > -----Original Message----- > > From: Eggert, Lars [mailto:lars@netapp.com] > > Sent: Friday, May 15, 2015 8:44 AM > > To: Bill Ver Steeg (versteb) > > Cc: Aaron Wood; cake@lists.bufferbloat.net; Klatsky, Carl; > > cerowrt-devel@lists.bufferbloat.net; bloat > > Subject: Re: [Bloat] [Cerowrt-devel] heisenbug: dslreports 16 flow test vs > > cablemodems > > > > > > I disagree. You can use them to establish a lower bound on the delay an > > application over TCP will see, but not get an accurate estimate of that > > (because socket buffers are not included in the measurement.) And you rely > > on the network to not prioritize ICMP/UDP but otherwise leave it in the > > same queues. > > > > On recent versions of Linux and Mac, you can get most of the socket > buffers to "go away". I forget the socket option offhand. > > And TCP small queues in Linux means that Linux no longer gratuitously > generates packets just to dump them into the queue discipline system where > they will rot. > > How accurate this now can be is still an interesting question: but has > clearly improved the situation a lot over 3-4 years ago. > > > > > If you can instrument TCP in the kernel to make instantaneous RTT > > available to the application, that might work. I am not sure how you would > > roll that out in a timely manner, though. > > > > Well, the sooner one starts, the sooner it gets deployed. > > Jim > > > > ---------- > _______________________________________________ > Bloat mailing list > Bloat@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/bloat > [-- Attachment #2: Type: text/html, Size: 4195 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
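The TSQ limit Simon is asking about is per flow and tunable; with the default shown below a single bulk flow can still have on the order of 87 full-size packets queued toward the driver, and multiple flows add up, so aggregation is not necessarily starved. A quick way to experiment (not a recommendation):

$ sysctl net.ipv4.tcp_limit_output_bytes       # 131072 by default, i.e. ~87 x 1500-byte packets per flow
$ sudo sysctl -w net.ipv4.tcp_limit_output_bytes=262144   # hypothetical bump if wifi aggregation looks starved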
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems
  2015-05-15 14:36 ` Simon Barber
@ 2015-05-18  3:30   ` dpreed
  2015-05-18  5:06     ` Simon Barber
  0 siblings, 1 reply; 30+ messages in thread
From: dpreed @ 2015-05-18 3:30 UTC (permalink / raw)
  To: Simon Barber
  Cc: Bill Ver Steeg (versteb), Klatsky, Carl, cake, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 3421 bytes --]

What's your definition of 802.11 performing well? Just curious. Maximizing throughput at all costs, or maintaining minimal latency for multiple users sharing an access point?

Of course, if all you are doing is trying to do point-to-point outdoor links using 802.11 gear, the issue is different - similar to "dallying" to piggyback acks in TCP, which is great when you have two-way flows, but lousy if each packet has a latency requirement that is small.

To me this is hardly so obvious. Maximizing packet sizes is actually counterproductive for many end-to-end requirements. But of course for "hot rod benchmarkers" applications don't matter at all - just the link performance numbers.

One important use of networking is multiplexing multiple users. Otherwise, bufferbloat would never matter.

Which is why I think actual numbers rather than "hand waving claims" matter.

On Friday, May 15, 2015 10:36am, "Simon Barber" <simon@superduper.net> said:

> One question about TCP small queues (which I don't think is a good solution to the problem). For 802.11 to be able to perform well it needs to form maximum size aggregates. This means that it needs to maintain a minimum queue size of at least 64 packets, and sometimes more. Will TCP small queues prevent this?
>
> Simon
>
> [rest of the quoted thread snipped]

[-- Attachment #2: Type: text/html, Size: 6125 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems
  2015-05-18  3:30 ` dpreed
@ 2015-05-18  5:06   ` Simon Barber
  2015-05-18  9:06     ` Bill Ver Steeg (versteb)
  2015-05-18 11:42     ` Eggert, Lars
  0 siblings, 2 replies; 30+ messages in thread
From: Simon Barber @ 2015-05-18 5:06 UTC (permalink / raw)
  To: dpreed
  Cc: Bill Ver Steeg (versteb), Klatsky, Carl, cake, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 4000 bytes --]

Even for a single user, bufferbloat matters: Windows update will kill your Skype call. Without large aggregates, the per-packet physical-layer overheads imposed by the RF medium kill your efficiency and performance. Fairness between users is another issue as well.

Simon

On 5/17/2015 8:30 PM, dpreed@reed.com wrote:
>
> What's your definition of 802.11 performing well? Just curious.
> Maximizing throughput at all costs or maintaining minimal latency for
> multiple users sharing an access point?
>
> [rest of the quoted thread snipped]

[-- Attachment #2: Type: text/html, Size: 10272 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-18 5:06 ` Simon Barber @ 2015-05-18 9:06 ` Bill Ver Steeg (versteb) 2015-05-18 11:42 ` Eggert, Lars 1 sibling, 0 replies; 30+ messages in thread From: Bill Ver Steeg (versteb) @ 2015-05-18 9:06 UTC (permalink / raw) To: Simon Barber, dpreed; +Cc: cake, Klatsky, Carl, cerowrt-devel, bloat [-- Attachment #1: Type: text/plain, Size: 4600 bytes --] There are conditions when even a single application will suffer from bloat. For instance, several ABR video players use multiple TCP/HTTP sessions to fetch data. Some of the data boils down to large video chunks, and some of the data boils down to small pieces of control information. Think of a video player in part of the screen and data about the event (racing statistics, let’s say). On a bloaty network, the bulk data builds the network buffer and delays the control traffic. This can impact the user experience….. Bvs From: Simon Barber [mailto:simon@superduper.net] Sent: Monday, May 18, 2015 7:06 AM To: dpreed@reed.com Cc: Jim Gettys; Bill Ver Steeg (versteb); cake@lists.bufferbloat.net; Klatsky, Carl; cerowrt-devel@lists.bufferbloat.net; bloat Subject: Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems Even single user, bufferbloat matters. Windows update will kill your Skype call. Without large size aggregates the necessary physical layer per packet overheads caused by the RF medium kill your efficiency and performance. Fairness between users is another issue as well. Simon On 5/17/2015 8:30 PM, dpreed@reed.com<mailto:dpreed@reed.com> wrote: What's your definition of 802.11 performing well? Just curious. Maximizing throughput at all costs or maintaing minimal latency for multiple users sharing an access point? Of course, if all you are doing is trying to do point-to-point outdoor links using 802.11 gear, the issue is different - similar to "dallying" to piggyback acks in TCP, which is great when you have two dimensional flows, but lousy if each packet has a latency requirement that is small. To me this is hardly so obvious. Maximizing packet sizes is actually counterproductive for many end-to-end requirements. But of course for "hot rod benchmarkers" applications don't matter at all - just the link performance numbers. One important use of networking is multiplexing multiple users. Otherwise, bufferbloat would never matter. Which is why I think actual numbers rather than "hand waving claims" matter. On Friday, May 15, 2015 10:36am, "Simon Barber" <simon@superduper.net><mailto:simon@superduper.net> said: One question about TCP small queues (which I don't think is a good solution to the problem). For 802.11 to be able to perform well it needs to form maximum size aggregates. This means that it needs to maintain a minimum queue size of at least 64 packets, and sometimes more. Will TCP small queues prevent this? Simon Sent with AquaMail for Android http://www.aqua-mail.com On May 15, 2015 6:44:21 AM Jim Gettys <jg@freedesktop.org><mailto:jg@freedesktop.org> wrote: On Fri, May 15, 2015 at 9:09 AM, Bill Ver Steeg (versteb) <versteb@cisco.com<mailto:versteb@cisco.com>> wrote: Lars- You make some good points. It boils down to the fact that there are several things that you can measure, and they mean different things. 
Bvs -----Original Message----- From: Eggert, Lars [mailto:lars@netapp.com<mailto:lars@netapp.com>] Sent: Friday, May 15, 2015 8:44 AM To: Bill Ver Steeg (versteb) Cc: Aaron Wood; cake@lists.bufferbloat.net<mailto:cake@lists.bufferbloat.net>; Klatsky, Carl; cerowrt-devel@lists.bufferbloat.net<mailto:cerowrt-devel@lists.bufferbloat.net>; bloat Subject: Re: [Bloat] [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems I disagree. You can use them to establish a lower bound on the delay an application over TCP will see, but not get an accurate estimate of that (because socket buffers are not included in the measurement.) And you rely on the network to not prioritize ICMP/UDP but otherwise leave it in the same queues. On recent versions of Linux and Mac, you can get most of the socket buffers to "go away". I forget the socket option offhand. And TCP small queues in Linux means that Linux no longer gratuitously generates packets just to dump them into the queue discipline system where they will rot. How accurate this now can be is still an interesting question: but has clearly improved the situation a lot over 3-4 years ago. > If you can instrument TCP in the kernel to make instantaneous RTT available to the application, that might work. I am not sure how you would roll that out in a timely manner, though. Well, the sooner one starts, the sooner it gets deployed. Jim _______________________________________________ Bloat mailing list Bloat@lists.bufferbloat.net<mailto:Bloat@lists.bufferbloat.net> https://lists.bufferbloat.net/listinfo/bloat [-- Attachment #2: Type: text/html, Size: 12084 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-18 5:06 ` Simon Barber 2015-05-18 9:06 ` Bill Ver Steeg (versteb) @ 2015-05-18 11:42 ` Eggert, Lars 2015-05-18 11:57 ` luca.muscariello 2015-05-18 12:30 ` Simon Barber 1 sibling, 2 replies; 30+ messages in thread From: Eggert, Lars @ 2015-05-18 11:42 UTC (permalink / raw) To: Simon Barber; +Cc: cake, Klatsky, Carl, cerowrt-devel, bloat [-- Attachment #1: Type: text/plain, Size: 268 bytes --] On 2015-5-18, at 07:06, Simon Barber <simon@superduper.net<mailto:simon@superduper.net>> wrote: Windows update will kill your Skype call. Really? AFAIK Windows Update has been using a LEDBAT-like scavenger-type congestion control algorithm for years now. Lars [-- Attachment #2: Type: text/html, Size: 1121 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems
  2015-05-18 11:42 ` Eggert, Lars
@ 2015-05-18 11:57   ` luca.muscariello
  0 siblings, 0 replies; 30+ messages in thread
From: luca.muscariello @ 2015-05-18 11:57 UTC (permalink / raw)
  To: Eggert, Lars, Simon Barber; +Cc: cake, Klatsky, Carl, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 1899 bytes --]

That would be OK for ledbat-like vs TCP. Real time vs ledbat-like: it isn't that clear to me why real time should be protected. I'd say that Skype would suffer against any protocol probing for bandwidth.

-------- Original message --------
From: "Eggert, Lars" <lars@netapp.com>
Date: 18/05/2015 1:49 PM (GMT+01:00)
To: Simon Barber <simon@superduper.net>
Cc: cake@lists.bufferbloat.net, dpreed@reed.com, "Klatsky, Carl" <carl_klatsky@cable.comcast.com>, cerowrt-devel@lists.bufferbloat.net, bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems

On 2015-5-18, at 07:06, Simon Barber <simon@superduper.net> wrote:
> Windows update will kill your Skype call.

Really? AFAIK Windows Update has been using a LEDBAT-like scavenger-type congestion control algorithm for years now.

Lars

[-- Attachment #2: Type: text/html, Size: 2841 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems
  2015-05-18 11:42 ` Eggert, Lars
  2015-05-18 11:57   ` luca.muscariello
@ 2015-05-18 12:30   ` Simon Barber
  2015-05-18 15:03     ` Jonathan Morton
  2015-05-18 15:09     ` dpreed
  1 sibling, 2 replies; 30+ messages in thread
From: Simon Barber @ 2015-05-18 12:30 UTC (permalink / raw)
  To: Eggert, Lars; +Cc: cake, Klatsky, Carl, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 942 bytes --]

I am likely out of date about Windows Update, but there are many other programs that do background downloads or uploads that don't implement LEDBAT or similar protection. The current AQM recommendation draft in the IETF will make things worse by not drawing attention to the fact that implementing AQM without a low-priority traffic class (such as DSCP 8, CS1) will prevent solutions like LEDBAT from working, and leave no alternative. I would appreciate support on the AQM list on the importance of this.

Simon

Sent with AquaMail for Android
http://www.aqua-mail.com

On May 18, 2015 4:42:43 AM "Eggert, Lars" <lars@netapp.com> wrote:
> On 2015-5-18, at 07:06, Simon Barber <simon@superduper.net> wrote:
> Windows update will kill your Skype call.
>
> Really? AFAIK Windows Update has been using a LEDBAT-like scavenger-type congestion control algorithm for years now.
>
> Lars

[-- Attachment #2: Type: text/html, Size: 2125 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems
  2015-05-18 12:30 ` Simon Barber
@ 2015-05-18 15:03   ` Jonathan Morton
  0 siblings, 0 replies; 30+ messages in thread
From: Jonathan Morton @ 2015-05-18 15:03 UTC (permalink / raw)
  To: Simon Barber; +Cc: Klatsky, Carl, cake, Eggert, Lars, cerowrt-devel, bloat

> On 18 May, 2015, at 15:30, Simon Barber <simon@superduper.net> wrote:
>
> implementing AQM without implementing a low priority traffic class (such as DSCP 8 - CS1) will prevent solutions like LEDBAT from working

I note that the LEDBAT RFC itself points out this fact, and also that an AQM which successfully "defeats" LEDBAT in fact achieves LEDBAT's goal (it's in the name: Low Extra Delay), just in a different way.

There's a *different* reason for having a "background" traffic class, which is that certain applications use multiple flows, and thus tend to outcompete conventional single-flow applications. Some of these multiple-flow applications currently use LEDBAT to mitigate this effect, but in an FQ environment (not with pure AQM!) this particular effect of LEDBAT is frustrated and even reversed.

That is the main reason why cake includes Diffserv support. It allows multiple-flow LEDBAT applications to altruistically move themselves out of the way; it also allows applications which are latency-sensitive to request an appropriate boost over heavy best-effort traffic. The trick is to arrange such boosts so that requesting them doesn't give an overwhelming advantage to bulk applications; this is necessary to avoid abuse of the Diffserv facility. I think Cake does achieve that, but some day I'd like some data confirming it.

A test I happened to run yesterday (involving 50 uploads and 1 download, with available bandwidth heavily in the download's favour) does confirm that the Diffserv mechanism does its job properly when asked to, but that doesn't address the abuse angle.

NB: the abuse angle is separate from the attack angle. It's always possible to flood the system in order to degrade service; that's an attack. Abuse, by contrast, is gaming the system to gain an unfair advantage. The latter is what cake's traffic classes are intended to prevent, by limiting the advantage that misrepresenting traffic classes can obtain. If abuse is inherently discouraged by the system, then it becomes possible to *trust* DSCPs to some extent, making them more useful in practice.

For some reason, I haven't actually subscribed to IETF AQM yet. Perhaps I should catch up.

 - Jonathan Morton

^ permalink raw reply	[flat|nested] 30+ messages in thread
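For concreteness, the two modes Jonathan contrasts map roughly onto cake's setup keywords as follows; this is only a sketch, and the exact tin keywords have varied between cake snapshots:

# single best-effort tin, pure flow isolation (per-flow fairness only)
$ tc qdisc replace dev eth0 root cake bandwidth 11Mbit besteffort
# DSCP-aware tins, so CS1-marked traffic lands in a background tin
$ tc qdisc replace dev eth0 root cake bandwidth 11Mbit diffserv4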
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-18 12:30 ` Simon Barber 2015-05-18 15:03 ` Jonathan Morton @ 2015-05-18 15:09 ` dpreed 2015-05-18 15:32 ` Simon Barber ` (2 more replies) 1 sibling, 3 replies; 30+ messages in thread From: dpreed @ 2015-05-18 15:09 UTC (permalink / raw) To: Simon Barber; +Cc: cake, Klatsky, Carl, Eggert, Lars, cerowrt-devel, bloat [-- Attachment #1: Type: text/plain, Size: 1387 bytes --] I'm curious as to why one would need low priority class if you were using fq_codel? Are the LEDBAT flows indistinguishable? Is there no congestion signalling (no drops, no ECN)? The main reason I ask is that end-to-end flows should share capacity well enough without magical and rarely implemented things like diffserv and intserv. On Monday, May 18, 2015 8:30am, "Simon Barber" <simon@superduper.net> said: I am likely out of date about Windows Update, but there's many other programs that do background downloads or uploads that don't implement LEDBAT or similar protection. The current AQM recommendation draft in the IETF will make things worse, by not drawing attention to the fact that implementing AQM without implementing a low priority traffic class (such as DSCP 8 - CS1) will prevent solutions like LEDBAT from working, or there being any alternative. Would appreciate support on the AQM list in the importance of this. Simon Sent with AquaMail for Android [ http://www.aqua-mail.com ]( http://www.aqua-mail.com ) On May 18, 2015 4:42:43 AM "Eggert, Lars" <lars@netapp.com> wrote:On 2015-5-18, at 07:06, Simon Barber <[ simon@superduper.net ]( mailto:simon@superduper.net )> wrote: Windows update will kill your Skype call. Really? AFAIK Windows Update has been using a LEDBAT-like scavenger-type congestion control algorithm for years now. Lars [-- Attachment #2: Type: text/html, Size: 2909 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-18 15:09 ` dpreed @ 2015-05-18 15:32 ` Simon Barber 2015-05-18 17:21 ` [Cerowrt-devel] [Cake] " Dave Taht 2015-05-18 15:40 ` [Cerowrt-devel] " Jonathan Morton 2015-05-19 16:25 ` [Cerowrt-devel] [Cake] " Sebastian Moeller 2 siblings, 1 reply; 30+ messages in thread From: Simon Barber @ 2015-05-18 15:32 UTC (permalink / raw) To: dpreed; +Cc: cake, Klatsky, Carl, Eggert, Lars, cerowrt-devel, bloat [-- Attachment #1: Type: text/plain, Size: 1823 bytes --] LEDBAT is often used for scavenger traffic - things that should not detract from normal Internet use. There are two effects, latency and bandwidth. While AQM solves the latency problem, it removes the ability of LEDBAT to not impact bandwidth during peak usage. Simon Sent with AquaMail for Android http://www.aqua-mail.com On May 18, 2015 8:09:39 AM dpreed@reed.com wrote: > > I'm curious as to why one would need low priority class if you were using > fq_codel? Are the LEDBAT flows indistinguishable? Is there no congestion > signalling (no drops, no ECN)? The main reason I ask is that end-to-end > flows should share capacity well enough without magical and rarely > implemented things like diffserv and intserv. > > > On Monday, May 18, 2015 8:30am, "Simon Barber" <simon@superduper.net> said: > > > > > > I am likely out of date about Windows Update, but there's many other > programs that do background downloads or uploads that don't implement > LEDBAT or similar protection. The current AQM recommendation draft in the > IETF will make things worse, by not drawing attention to the fact that > implementing AQM without implementing a low priority traffic class (such as > DSCP 8 - CS1) will prevent solutions like LEDBAT from working, or there > being any alternative. Would appreciate support on the AQM list in the > importance of this. > Simon > Sent with AquaMail for Android > [ http://www.aqua-mail.com ]( http://www.aqua-mail.com ) > > On May 18, 2015 4:42:43 AM "Eggert, Lars" <lars@netapp.com> wrote:On > 2015-5-18, at 07:06, Simon Barber <[ simon@superduper.net ]( > mailto:simon@superduper.net )> wrote: > > Windows update will kill your Skype call. > Really? AFAIK Windows Update has been using a LEDBAT-like scavenger-type > congestion control algorithm for years now. > Lars [-- Attachment #2: Type: text/html, Size: 3835 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Cake] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-18 15:32 ` Simon Barber @ 2015-05-18 17:21 ` Dave Taht 0 siblings, 0 replies; 30+ messages in thread From: Dave Taht @ 2015-05-18 17:21 UTC (permalink / raw) To: Simon Barber; +Cc: Klatsky, Carl, cake, Eggert, Lars, cerowrt-devel, bloat On Mon, May 18, 2015 at 8:32 AM, Simon Barber <simon@superduper.net> wrote: > LEDBAT is often used for scavenger traffic - things that should not detract > from normal Internet use. There are two effects, latency and bandwidth. > While AQM solves the latency problem, it removes the ability of LEDBAT to > not impact bandwidth during peak usage. data transport over neutrinos would help! seriously, iw10 and cubic knock utp out of the way to a larger extent that is a start. I would like it if we saw more work in the area of making ledbat lighter in the light of deployment of fq and aqm technologies. > Simon > > Sent with AquaMail for Android > http://www.aqua-mail.com > > On May 18, 2015 8:09:39 AM dpreed@reed.com wrote: >> >> I'm curious as to why one would need low priority class if you were using >> fq_codel? Are the LEDBAT flows indistinguishable? Is there no congestion >> signalling (no drops, no ECN)? The main reason I ask is that end-to-end >> flows should share capacity well enough without magical and rarely >> implemented things like diffserv and intserv. >> >> >> >> On Monday, May 18, 2015 8:30am, "Simon Barber" <simon@superduper.net> >> said: >> >> I am likely out of date about Windows Update, but there's many other >> programs that do background downloads or uploads that don't implement LEDBAT >> or similar protection. The current AQM recommendation draft in the IETF will >> make things worse, by not drawing attention to the fact that implementing >> AQM without implementing a low priority traffic class (such as DSCP 8 - CS1) >> will prevent solutions like LEDBAT from working, or there being any >> alternative. Would appreciate support on the AQM list in the importance of >> this. >> >> Simon >> >> Sent with AquaMail for Android >> http://www.aqua-mail.com >> >> On May 18, 2015 4:42:43 AM "Eggert, Lars" <lars@netapp.com> wrote: >>> >>> On 2015-5-18, at 07:06, Simon Barber <simon@superduper.net> wrote: >>> >>> Windows update will kill your Skype call. >>> >>> >>> Really? AFAIK Windows Update has been using a LEDBAT-like scavenger-type >>> congestion control algorithm for years now. >>> Lars > > > _______________________________________________ > Cake mailing list > Cake@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/cake > -- Dave Täht Open Networking needs **Open Source Hardware** https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67 ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-18 15:09 ` dpreed 2015-05-18 15:32 ` Simon Barber @ 2015-05-18 15:40 ` Jonathan Morton 2015-05-18 17:03 ` Sebastian Moeller 2015-05-19 16:25 ` [Cerowrt-devel] [Cake] " Sebastian Moeller 2 siblings, 1 reply; 30+ messages in thread From: Jonathan Morton @ 2015-05-18 15:40 UTC (permalink / raw) To: dpreed; +Cc: cake, Klatsky, Carl, Simon Barber, cerowrt-devel, bloat > On 18 May, 2015, at 18:09, dpreed@reed.com wrote: > > I'm curious as to why one would need low priority class if you were using fq_codel? Are the LEDBAT flows indistinguishable? Is there no congestion signalling (no drops, no ECN)? The main reason I ask is that end-to-end flows should share capacity well enough without magical and rarely implemented things like diffserv and intserv. The Cloonan paper addresses this question. http://snapon.lab.bufferbloat.net/~d/trimfat/Cloonan_Paper.pdf Let me summarise, with some more up-to-date additions: Consider a situation where a single application is downloading using many (say 50) flows in parallel. It’s rather easy to provoke BitTorrent into doing exactly this. BitTorrent also happens to use LEDBAT by default (via uTP). With a dumb FIFO, LEDBAT will sense the queue depth via the increased latency, and will tend to back off when some other traffic arrives to share that queue. With AQM, the queue depth doesn’t increase much before ECN marks and/or packet drops appear. LEDBAT then behaves like a conventional TCP, since it has lost the delay signal. Hence LEDBAT is indistinguishable from conventional TCP under AQM. With FQ, each flow gets a fair share of the bandwidth. But the *application* using 50 flows gets 50 times as much bandwidth as the application using only 1 flow. If the single-flow application is something elastic like a Web browser or checking e-mail, that might be tolerable. But if the single-flow application is inelastic (as VoIP usually is), and needs more than 2% of the link bandwidth to work properly, that’s a problem if it’s competing against 50 flows. That’s one of the Cloonan paper’s results; what they recommended was to use FQ with a small number of queues, so that this drawback was mitigated by way of hash collisions. Adding Diffserv and recommending that LEDBAT applications use the “background” traffic class (CS1 DSCP) solves this problem more elegantly. The share of bandwidth used by BitTorrent (say) is then independent of the number of flows it uses, and it also makes sense to configure FQ for ideal flow isolation rather than for mitigation. - Jonathan Morton ^ permalink raw reply [flat|nested] 30+ messages in thread
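A local policy of the kind described, marking a BitTorrent client's traffic as background on the way out, can be approximated with iptables; the port range here is an assumption (many clients randomize ports, so matching by uid/cgroup or configuring the client to set CS1 itself is more robust):

$ iptables -t mangle -A POSTROUTING -o eth0 -p tcp --sport 6881:6889 -j DSCP --set-dscp-class CS1
$ iptables -t mangle -A POSTROUTING -o eth0 -p udp --sport 6881:6889 -j DSCP --set-dscp-class CS1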
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-18 15:40 ` [Cerowrt-devel] " Jonathan Morton @ 2015-05-18 17:03 ` Sebastian Moeller 2015-05-18 17:17 ` Jonathan Morton 0 siblings, 1 reply; 30+ messages in thread From: Sebastian Moeller @ 2015-05-18 17:03 UTC (permalink / raw) To: Jonathan Morton, dpreed Cc: cake, Klatsky, Carl, Simon Barber, cerowrt-devel, bloat Hi Jonathan, On May 18, 2015 5:40:30 PM GMT+02:00, Jonathan Morton <chromatix99@gmail.com> wrote: > [...] > >Adding Diffserv and recommending that LEDBAT applications use the >“background” traffic class (CS1 DSCP) solves this problem more >elegantly. The share of bandwidth used by BitTorrent (say) is then >independent of the number of flows it uses, and it also makes sense to >configure FQ for ideal flow isolation rather than for mitigation. I wonder, for this to work well wouldn't we need to allow/honor at least CS1 marks on ingress? I remember there was some discussion about mislabeled traffic on ingress (Comcast I believe), do you see an easy way around that issue? Best Regards Sebastian > > - Jonathan Morton > >_______________________________________________ >Cerowrt-devel mailing list >Cerowrt-devel@lists.bufferbloat.net >https://lists.bufferbloat.net/listinfo/cerowrt-devel -- Sent from my Android device with K-9 Mail. Please excuse my brevity. ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-18 17:03 ` Sebastian Moeller @ 2015-05-18 17:17 ` Jonathan Morton 2015-05-18 18:14 ` Sebastian Moeller 0 siblings, 1 reply; 30+ messages in thread From: Jonathan Morton @ 2015-05-18 17:17 UTC (permalink / raw) To: Sebastian Moeller; +Cc: Klatsky, Carl, Simon Barber, cake, cerowrt-devel, bloat > On 18 May, 2015, at 20:03, Sebastian Moeller <moeller0@gmx.de> wrote: > >> Adding Diffserv and recommending that LEDBAT applications use the >> “background” traffic class (CS1 DSCP) solves this problem more >> elegantly. The share of bandwidth used by BitTorrent (say) is then >> independent of the number of flows it uses, and it also makes sense to >> configure FQ for ideal flow isolation rather than for mitigation. > > I wonder, for this to work well wouldn't we need to allow/honor at least CS1 marks on ingress? I remember there was some discussion about mislabeled traffic on ingress (Comcast I believe), do you see an easy way around that issue? I don’t know much about the characteristics of this mislabelling. Presumably though, Comcast is using DSCP remarking in an attempt to manage internal congestion. If latency-sensitive and/or inelastic traffic is getting marked CS1, that would be a real problem, and Comcast would need leaning on to fix it. It’s slightly less serious if general best-effort traffic gets CS1 markings. One solution would be to re-mark the traffic at the CPE on ingress, using local knowledge of what traffic is important and which ports are associated with BitTorrent. Unfortunately, the ingress qdisc runs before iptables, making that more difficult. I think it would be necessary to do re-marking using an ingress action before passing it to the qdisc. Either that, or a pseudo-qdisc which just does the re-marking before handing the packet up the stack. I’m not sure whether it’s possible to attach two ingress actions to the same interface, though. If not, the re-marking action module would also need to incorporate act_mirred functionality, or a minimal subset thereof. - Jonathan Morton ^ permalink raw reply [flat|nested] 30+ messages in thread
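A rough sketch of what the re-marking-before-the-shaper idea could look like with existing pieces, chaining a pedit action into an act_mirred redirect onto an IFB device that carries the actual inbound qdisc. Everything here is illustrative: the interface names, the single example port, and the choice of shaper on ifb0 are assumptions, and this is untested plumbing rather than a recommended recipe.

$ sudo ip link add ifb0 type ifb
$ sudo ip link set ifb0 up
$ sudo tc qdisc add dev eth0 handle ffff: ingress
# example only: rewrite the TOS byte of traffic from an assumed BitTorrent
# port to CS1 (0x20), fix the IP checksum, then hand it to the IFB
$ sudo tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
      match ip sport 6881 0xffff \
      action pedit munge ip tos set 0x20 pipe \
      action csum ip pipe \
      action mirred egress redirect dev ifb0
# everything else goes to the IFB unmodified
$ sudo tc filter add dev eth0 parent ffff: protocol ip prio 2 u32 \
      match u32 0 0 \
      action mirred egress redirect dev ifb0
# attach the inbound shaper of your choice (cake, or htb + fq_codel) to ifb0

Chaining actions with "pipe" inside a single filter sidesteps the question of attaching two separate ingress actions, though whether this scales to real classification rules is another matter.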
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-18 17:17 ` Jonathan Morton @ 2015-05-18 18:14 ` Sebastian Moeller 2015-05-18 18:37 ` Dave Taht 0 siblings, 1 reply; 30+ messages in thread From: Sebastian Moeller @ 2015-05-18 18:14 UTC (permalink / raw) To: Jonathan Morton; +Cc: Klatsky, Carl, Simon Barber, cake, cerowrt-devel, bloat Hi Jonathan, On May 18, 2015, at 19:17 , Jonathan Morton <chromatix99@gmail.com> wrote: > >> On 18 May, 2015, at 20:03, Sebastian Moeller <moeller0@gmx.de> wrote: >> >>> Adding Diffserv and recommending that LEDBAT applications use the >>> “background” traffic class (CS1 DSCP) solves this problem more >>> elegantly. The share of bandwidth used by BitTorrent (say) is then >>> independent of the number of flows it uses, and it also makes sense to >>> configure FQ for ideal flow isolation rather than for mitigation. >> >> I wonder, for this to work well wouldn't we need to allow/honor at least CS1 marks on ingress? I remember there was some discussion about mislabeled traffic on ingress (Comcast I believe), do you see an easy way around that issue? > > I don’t know much about the characteristics of this mislabelling. Presumably though, Comcast is using DSCP remarking in an attempt to manage internal congestion. If latency-sensitive and/or inelastic traffic is getting marked CS1, that would be a real problem, and Comcast would need leaning on to fix it. It’s slightly less serious if general best-effort traffic gets CS1 markings. I do not know any further details, but I think Dave noted that originally, maybe he knows what was mislabeled. > > One solution would be to re-mark the traffic at the CPE on ingress, using local knowledge of what traffic is important and which ports are associated with BitTorrent. In theory that sounds sweet, in practice this is hard I believe, as there is no simple “mark” of BitTorrent traffic; the TOS bits might be the best we have (if BitTorrent would actually mark itself CS1) and we already discussed how unsatisfactory this solution is. > Unfortunately, the ingress qdisc runs before iptables, making that more difficult. I think it would be necessary to do re-marking using an ingress action before passing it to the qdisc. Either that, or a pseudo-qdisc which just does the re-marking before handing the packet up the stack. > > I’m not sure whether it’s possible to attach two ingress actions to the same interface, though. If not, the re-marking action module would also need to incorporate act_mirred functionality, or a minimal subset thereof. For this to become a practical issue we first need to solve the question of how to detect incoming BitTorrent packets, so that we actually have a need for re-marking facilities ;) If I recall correctly the nf_tables developers are working hard ATM to get nf_tables working on ingress as well. There are a few threads on netdev, e.g. http://marc.info/?l=netfilter-devel&m=143153372615155&w=2 about nf_tables on ingress. (I noticed in that discussion that our need to use traffic shapers (instead of policers) on ingress does seem to be on the developers' radar, but I could be wrong.) Best Regards Sebastian > > - Jonathan Morton > ^ permalink raw reply [flat|nested] 30+ messages in thread
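For reference, the netdev-family ingress hook mentioned here eventually did land, and a re-marking rule on it looks roughly like the sketch below. This assumes a kernel and nft build new enough to support ingress hooks, and the device name and port range are purely illustrative guesses at how one might tag BitTorrent-ish traffic.

$ sudo nft add table netdev filter
$ sudo nft add chain netdev filter ingress '{ type filter hook ingress device eth0 priority 0; }'
# hypothetical rule: re-mark an assumed BitTorrent port range to CS1 on ingress
$ sudo nft add rule netdev filter ingress tcp sport 6881-6889 ip dscp set cs1

The detection problem raised above is of course untouched by this; the hook only provides a place to act once you know what to match.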
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-18 18:14 ` Sebastian Moeller @ 2015-05-18 18:37 ` Dave Taht 0 siblings, 0 replies; 30+ messages in thread From: Dave Taht @ 2015-05-18 18:37 UTC (permalink / raw) To: Sebastian Moeller Cc: Klatsky, Carl, cake, cerowrt-devel, bloat, Jonathan Morton On Mon, May 18, 2015 at 11:14 AM, Sebastian Moeller <moeller0@gmx.de> wrote: > HI Jonathan, > > On May 18, 2015, at 19:17 , Jonathan Morton <chromatix99@gmail.com> wrote: > >> >>> On 18 May, 2015, at 20:03, Sebastian Moeller <moeller0@gmx.de> wrote: >>> >>>> Adding Diffserv and recommending that LEDBAT applications use the >>>> “background” traffic class (CS1 DSCP) solves this problem more >>>> elegantly. The share of bandwidth used by BitTorrent (say) is then >>>> independent of the number of flows it uses, and it also makes sense to >>>> configure FQ for ideal flow isolation rather than for mitigation. >>> >>> I wonder, for this to work well wouldn't we need to allow/honor at least CS1 marks on ingress? I remember there was some discussion about mislabeled traffic on ingress (Comcast I believe), do you see an easy way around that issue? >> >> I don’t know much about the characteristics of this mislabelling. Presumably though, Comcast is using DSCP remarking in an attempt to manage internal congestion. If latency-sensitive and/or inelastic traffic is getting marked CS1, that would be a real problem, and Comcast would need leaning on to fix it. It’s slightly less serious if general best-effort traffic gets CS1 markings. > > I do not know any further details, but I think Dave noted that originally, maybe he knows what was mislabeled. all bits except the CS1 bit are masked out on comcast, it seems. qdisc fq_codel 120: parent 1:12 limit 1001p flows 1024 quantum 1500 target 5.0ms interval 100.0ms ecn Sent 1177316772 bytes 16370274 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 maxpacket 1514 drop_overlimit 0 new_flow_count 2626475 ecn_mark 0 new_flows_len 0 old_flows_len 1 qdisc fq_codel 130: parent 1:13 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn Sent 234497554050 bytes 187934189 pkt (dropped 3368, overlimits 0 requeues 0) backlog 0b 0p requeues 0 maxpacket 1514 drop_overlimit 0 new_flow_count 32445438 ecn_mark 77 new_flows_len 1 old_flows_len 2 >> >> One solution would be to re-mark the traffic at the CPE on ingress, using local knowledge of what traffic is important and which ports are associated with BitTorrent. > > In theory that sounds sweet, in practice this is hard I believe, as there is not simple “mark” of bitttotrrent traffic, the TOS bits might be the best we have (if bittorrent would actually mark itself CS1) and we already discussed how unsatisfactory this solution is. > >> Unfortunately, the ingress qdisc runs before iptables, making that more difficult. I think it would be necessary to do re-marking using an ingress action before passing it to the qdisc. Either that, or a pseudo-qdisc which just does the re-marking before handing the packet up the stack. >> >> I’m not sure whether it’s possible to attach two ingress actions to the same interface, though. If not, the re-marking action module would also need to incorporate act_mirred functionality, or a minimal subset thereof. 
> > For this to become a practical issue we first need to solve the question of how to detect incoming BitTorrent packets, so that we actually have a need for re-marking facilities ;) > If I recall correctly the nf_tables developers are working hard ATM to get nf_tables working on ingress as well. There are a few threads on netdev, e.g. http://marc.info/?l=netfilter-devel&m=143153372615155&w=2 about nf_tables on ingress. (I noticed in that discussion that our need to use traffic shapers (instead of policers) on ingress does seem to be on the developers' radar, but I could be wrong.) At the moment, based on the brutal after-effects of watching the dslreports "fiber" test, I am leaning towards something more policer-like on inbound, so long as dumber rate shapers exist at the ISP. > Best Regards > Sebastian > >> >> - Jonathan Morton >> > > _______________________________________________ > Bloat mailing list > Bloat@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/bloat -- Dave Täht Open Networking needs **Open Source Hardware** https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67 ^ permalink raw reply [flat|nested] 30+ messages in thread
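For comparison with the shaper-based setups discussed earlier in the thread, the policer-like inbound handling hinted at here is, in its crudest form, the classic ingress policer below; the interface name and the 95mbit/200k numbers are placeholders, not tuned values.

$ sudo tc qdisc add dev eth0 handle ffff: ingress
$ sudo tc filter add dev eth0 parent ffff: protocol ip u32 \
      match u32 0 0 \
      police rate 95mbit burst 200k drop flowid :1

The usual trade-off applies: a policer drops hard with no queueing, AQM or flow isolation, which is exactly why most of this thread leans on shaping onto an IFB instead.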
* Re: [Cerowrt-devel] [Cake] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-18 15:09 ` dpreed 2015-05-18 15:32 ` Simon Barber 2015-05-18 15:40 ` [Cerowrt-devel] " Jonathan Morton @ 2015-05-19 16:25 ` Sebastian Moeller 2 siblings, 0 replies; 30+ messages in thread From: Sebastian Moeller @ 2015-05-19 16:25 UTC (permalink / raw) To: dpreed Cc: Klatsky, Carl, Simon Barber, cake, Eggert, Lars, cerowrt-devel, bloat Hi David, On May 18, 2015, at 17:09 , dpreed@reed.com wrote: > I'm curious as to why one would need low priority class if you were using fq_codel? Are the LEDBAT flows indistinguishable? Well, as far as I can tell fq_codel treats all flows the same, but we want LEDBAT flows to basically scavenge the leftovers, not get their fair share at the table ;). Updates, by the way, are not the best example of this kind of problem, as some updates are urgent enough to postpone everything else for. > Is there no congestion signalling (no drops, no ECN)? The main reason I ask is that end-to-end flows should share capacity well enough without magical and rarely implemented things like diffserv and intserv. As far as I can tell BitTorrent tries to do two things: 1) open up quite a number of parallel ingress and egress flows and 2) keep that traffic out of the way of other traffic. fq_codel interferes with how 2) is implemented. Currently, the best of the flawed workarounds is to have BitTorrent tell the network that it should be treated as LEDBAT using TOS bits. This is flawed as we have no guarantee whatsoever on the sanity of TOS bits at our network's ingress (and often networks will re-map the TOS bits anyway, so on ingress the LEDBAT TOS signal might not be in the packets any more, and since one man’s ingress is another man’s egress, basically using TOS bits for keeping BitTorrent in the background is a losing proposition). That said, I watched a RIPE talk by Peter Lothberg where he proposed for the carriers (DTAG in his case) to encode their TOS bits into the IPv6 addresses and simply ignore the IP TOS bits, so they will not need to re-map those, as they are totally neutral for DTAG's planned internal network. (And interestingly, in DTAG’s IPv6 network RRUL test packets from Sweden keep their TOS bits fully intact.) Best Regards Sebastian > > > On Monday, May 18, 2015 8:30am, "Simon Barber" <simon@superduper.net> said: > > I am likely out of date about Windows Update, but there's many other programs that do background downloads or uploads that don't implement LEDBAT or similar protection. The current AQM recommendation draft in the IETF will make things worse, by not drawing attention to the fact that implementing AQM without implementing a low priority traffic class (such as DSCP 8 - CS1) will prevent solutions like LEDBAT from working, or there being any alternative. Would appreciate support on the AQM list in the importance of this. > > Simon > > Sent with AquaMail for Android > http://www.aqua-mail.com > > On May 18, 2015 4:42:43 AM "Eggert, Lars" <lars@netapp.com> wrote: > > On 2015-5-18, at 07:06, Simon Barber <simon@superduper.net> wrote: > Windows update will kill your Skype call. > > Really? AFAIK Windows Update has been using a LEDBAT-like scavenger-type congestion control algorithm for years now. > Lars > _______________________________________________ > Cake mailing list > Cake@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/cake ^ permalink raw reply [flat|nested] 30+ messages in thread
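A quick way to check the point about TOS/DSCP (not) surviving the path into your network is to watch the TOS byte of inbound packets directly; eth0 below is an assumed WAN-facing interface.

# show only inbound IPv4 packets whose DSCP field is non-zero
$ sudo tcpdump -v -n -i eth0 'ip and (ip[1] & 0xfc) != 0'

If nothing but CS1 ever shows up, that matches Dave's earlier observation that all bits except the CS1 bit appear to be masked out on that path.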
* Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-15 13:35 ` Jim Gettys 2015-05-15 14:36 ` Simon Barber @ 2015-05-15 16:59 ` Dave Taht 1 sibling, 0 replies; 30+ messages in thread From: Dave Taht @ 2015-05-15 16:59 UTC (permalink / raw) To: Jim Gettys Cc: Bill Ver Steeg (versteb), Klatsky, Carl, cake, Eggert, Lars, cerowrt-devel, bloat y'all are mostly missing the point of my question, which was: why did it have such a dramatic effect on reducing the induced downstream bloat? I have a new theory but I will get to that after a couple more cups of coffee and a spreadsheet. But on the threads here: I used to use "owamp" to do very accurate one-way delay measurements in each direction, but it is a hassle to configure and mostly required GPS-synced NTP clocks. It was seriously overengineered (authentication, etc), and I found I had to extract the raw stats to get the info I needed. Definitely agree that pulling out the tcp timestamp data in the rrul test would be good, that measuring statistics at a much finer grain within that test would be good (the 200ms sampling interval is way too coarse, but you start heisenbugging it at 20ms), the current rrul latency stats are very misleading when fq is present (for example I mostly just monitor queue length with watch tc) and... patches always appreciated. Wireshark could do a better job with its graphing tools; it makes me crazy to have to compare two graphed wireshark traces in gimp. Web10g is now up to kernel 3.17, and some more stats collection has been continually entering the kernel (TCP_INFO is gaining more fields, and there are also some more SNMP MIBs accessible). I am very big on moving my testbeds to 4.1 due to all the improvements in the FIB handling.... I had generally hoped to start leveraging the quic and/or webrtc codebases to be able to make more progress in userspace. QUIC has selectable reno or cubic cc, for example - but the existing libraries and code are not thread capable.... A lot of things are now at the "mere matter of coding" point, just not enough coders to go around that aren't busy working on the next pets.com. ^ permalink raw reply [flat|nested] 30+ messages in thread
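The crude queue monitoring referred to above ("watch tc") is simply something along these lines, with eth0 as an assumed interface; the backlog and drop counters are the numbers to watch while a test runs.

$ watch -n 0.2 'tc -s qdisc show dev eth0'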
* Re: [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems 2015-05-15 2:48 ` Greg White 2015-05-15 4:44 ` Aaron Wood @ 2015-05-15 17:47 ` Dave Taht 1 sibling, 0 replies; 30+ messages in thread From: Dave Taht @ 2015-05-15 17:47 UTC (permalink / raw) To: Greg White; +Cc: cake, Klatsky, Carl, cerowrt-devel, bloat On Thu, May 14, 2015 at 7:48 PM, Greg White <g.white@cablelabs.com> wrote: > I don't have any ideas, but I can try to cross some of yours off the > list... :) > > "* always on media access reducing grant latency" > During download test, you are receiving 95 Mbps, that works out to what, > an upstream ACK every 0.25ms? The CM will get an opportunity to send a > piggybacked request approx every 2ms. It seems like it will always be > piggybacking in order to send the 8 new ACKs that have arrived in the last > 2ms interval. I can't see how adding a ping packet every 10ms would > influence the behavior. > > "* ack prioritization on the modem" > ACK prioritization would mean that the modem would potentially delay the > pings (yours and dlsreports') in order to service the ACKs. Not sure why > this wouldn't delay dslreports' ACKs when yours are present. > > "* hitting packet limits on the CMTS..." > I'd have to see the paper, but I don't see a significant difference in DS > throughput between the two tests. Are you thinking that there are just > enough packet drops happening to reduce bufferbloat, but not affect > throughput? T'would be lucky. http://www.i-teletraffic.org/fileadmin/ITCBibDatabase/2014/braud2014tcp.pdf was that paper. My thinking was possibly CMTS's also use GRO equivalents and a limit on the number of packets. So we end up with some big flows having 15k or more "packets" queued up counting as 1 packet. So accumulating 20 ping packets can push out a GRO "packet" in that tail drop system. too bad I don't know any CMTS experts.... ;) I guess I will go fiddle with the math. Lord knows offloads have caused us and continue to cause us major headaches on linux. It would not surprise me if they were elsewhere. (I like that slow start behaviors have become more analyzed of late) So some small > > > > On 5/14/15, 1:13 PM, "Dave Taht" <dave.taht@gmail.com> wrote: > >>One thing I find remarkable is that my isochronous 10ms ping flow test >>(trying to measure the accuracy of the dslreports test) totally >>heisenbugs the cable 16 (used to be 24) flow dslreports test. >> >>Without, using cake as the inbound and outbound shaper, I get a grade >>of C to F, due to inbound latencies measured in the seconds. >> >>http://www.dslreports.com/speedtest/479096 >> >>With that measurement flow, I do so much better, (max observed latency >>of 210ms or so) with grades ranging from B to A... >> >>http://www.dslreports.com/speedtest/478950 >> >>I am only sending and receiving an extra ~10000 bytes/sec (100 ping >>packets/sec) to get this difference between results. The uplink is >>11Mbits, downlink 110 (configured for 100) >> >>Only things I can think of are: >> >>* ack prioritization on the modem >>* hitting packet limits on the CMTS forcing drops upstream (there was >>a paper on this idea, can't remember the name (?) ) >>* always on media access reducing grant latency >>* cake misbehavior (it is well understood codel does not react fast >>enough here) >>* cake goodness (fq of the ping making for less ack prioritization?) >>* ???? 
>> >>I am pretty sure the cable makers would not approve of someone >>continuously pinging their stuff in order to get lower latency on >>downloads (but it would certainly be one way to add continuous tuning >>of the real rate to cake!) >> >>Ideas? >> >>The simple test: >># you need to be root to ping on a 10ms interval >># and please pick your own server! >> >>$ sudo fping -c 10000 -i 10 -p 10 snapon.lab.bufferbloat.net > >>vscabletest_cable.out >> >>start a dslreports "cable" test in your browser >> >>abort the (CNTRL-C) ping when done. Post processing of fping's format >> >>$ cat vscabletest_cable.out | cut -f3- -d, | awk '{ print $1 }' > >>vscabletest-cable.txt >> >>import into your favorite spreadsheet and plot. > -- Dave Täht Open Networking needs **Open Source Hardware** https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67 ^ permalink raw reply [flat|nested] 30+ messages in thread
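On the offload speculation in the message above: the CMTS side cannot be poked from here, but the local end of the equation can be, by toggling the NIC offloads off and re-running the same dslreports-plus-fping test. eth0 is an assumed interface name, and which of these features exist depends on the driver.

$ sudo ethtool -K eth0 gro off gso off tso off
# re-run the test, then restore the offloads:
$ sudo ethtool -K eth0 gro on gso on tso on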
end of thread, other threads:[~2015-05-19 16:26 UTC | newest] Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2015-05-14 19:13 [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems Dave Taht 2015-05-15 2:48 ` Greg White 2015-05-15 4:44 ` Aaron Wood 2015-05-15 8:18 ` [Cerowrt-devel] [Bloat] " Eggert, Lars 2015-05-15 8:55 ` Sebastian Moeller 2015-05-15 11:10 ` [Cerowrt-devel] [Cake] " Alan Jenkins 2015-05-15 11:27 ` [Cerowrt-devel] " Bill Ver Steeg (versteb) 2015-05-15 12:19 ` Jonathan Morton 2015-05-15 12:44 ` Eggert, Lars 2015-05-15 13:09 ` Bill Ver Steeg (versteb) 2015-05-15 13:35 ` Jim Gettys 2015-05-15 14:36 ` Simon Barber 2015-05-18 3:30 ` dpreed 2015-05-18 5:06 ` Simon Barber 2015-05-18 9:06 ` Bill Ver Steeg (versteb) 2015-05-18 11:42 ` Eggert, Lars 2015-05-18 11:57 ` luca.muscariello 2015-05-18 12:30 ` Simon Barber 2015-05-18 15:03 ` Jonathan Morton 2015-05-18 15:09 ` dpreed 2015-05-18 15:32 ` Simon Barber 2015-05-18 17:21 ` [Cerowrt-devel] [Cake] " Dave Taht 2015-05-18 15:40 ` [Cerowrt-devel] " Jonathan Morton 2015-05-18 17:03 ` Sebastian Moeller 2015-05-18 17:17 ` Jonathan Morton 2015-05-18 18:14 ` Sebastian Moeller 2015-05-18 18:37 ` Dave Taht 2015-05-19 16:25 ` [Cerowrt-devel] [Cake] " Sebastian Moeller 2015-05-15 16:59 ` [Cerowrt-devel] " Dave Taht 2015-05-15 17:47 ` [Cerowrt-devel] " Dave Taht