* [Bloat] AQM creeping into L2 equipment @ 2014-03-18 14:52 Steinar H. Gunderson 2014-03-18 17:17 ` Dave Taht 2014-03-19 13:11 ` Nikolay Shopik 0 siblings, 2 replies; 30+ messages in thread From: Steinar H. Gunderson @ 2014-03-18 14:52 UTC (permalink / raw) To: bloat Hi, I thought some of you might be interested in a small observation I made today: Cisco 2960-X, their latest low-end (?) L2 access switch offering (well, it can do some L3 as well, especially the 2960-XR, but I don't think it's very commonly used), has WRED on its feature list. They also have something that looks like SFQ. We ordered two a while back but haven't received them yet, so I haven't tested how well it works in practice. I guess this mirrors my desire since a few years back that _any_ congestion point in your network (and a switch that supports both 10gig and 1gig is a prime candidate for becoming a congestion point on downconversion...) should have some form of AQM. Of course, it's no CoDel or PIE, but you take what you get... /* Steinar */ -- Homepage: http://www.sesse.net/ ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-18 14:52 [Bloat] AQM creeping into L2 equipment Steinar H. Gunderson @ 2014-03-18 17:17 ` Dave Taht 2014-03-18 17:53 ` Steinar H. Gunderson 2014-03-18 18:05 ` Fred Baker (fred) 2014-03-19 13:11 ` Nikolay Shopik 1 sibling, 2 replies; 30+ messages in thread From: Dave Taht @ 2014-03-18 17:17 UTC (permalink / raw) To: Steinar H. Gunderson; +Cc: bloat On Tue, Mar 18, 2014 at 10:52 AM, Steinar H. Gunderson <sgunderson@bigfoot.com> wrote: > Hi, > > I thought some of you might be interested in a small observation I made > today: Cisco 2960-X, their latest low-end (?) L2 access switch offering > (well, it can do some L3 as well, especially the 2960-XR, but I don't think > it's very commonly used), has WRED on its feature list. They also have I would certainly like good documentation on how to configure it and results with/without on a two ports into one test. > something that looks like SFQ. DRR was quite common until fairly recently. > We ordered two a while back but haven't > received them yet, so I haven't tested how well it works in practice. > > I guess this mirrors my desire since a few years back that _any_ congestion > point in your network (and a switch that supports both 10gig and 1gig is a > prime candidate for becoming a congestion point on downconversion...) should > have some form of AQM. +1 > Of course, it's no CoDel or PIE, but you take what you > get... I recently spent some time trying to make an edgerouter lite v.1.4.1 work with rate limiting and RED or SFQ. Neither did. (will fiddle some more) Turns out there is an openwrt build for that hw... given that the cerowrt hardware peaks out at about 50mbit I'd really like to find something that got up to 200mbit+. 
> > /* Steinar */ > -- > Homepage: http://www.sesse.net/ > _______________________________________________ > Bloat mailing list > Bloat@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/bloat -- Dave Täht Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-18 17:17 ` Dave Taht @ 2014-03-18 17:53 ` Steinar H. Gunderson 2014-03-19 22:22 ` Steinar H. Gunderson 2014-03-18 18:05 ` Fred Baker (fred) 1 sibling, 1 reply; 30+ messages in thread From: Steinar H. Gunderson @ 2014-03-18 17:53 UTC (permalink / raw) To: Dave Taht; +Cc: bloat On Tue, Mar 18, 2014 at 01:17:03PM -0400, Dave Taht wrote: > I would certainly like good documentation on how to configure it and > results with/without on a two ports into one test. I fear these two will go pretty much directly into prod. :-) But I can probably at least look at configuration options and possibly play around a bit nevertheless. /* Steinar */ -- Homepage: http://www.sesse.net/ ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-18 17:53 ` Steinar H. Gunderson @ 2014-03-19 22:22 ` Steinar H. Gunderson 0 siblings, 0 replies; 30+ messages in thread From: Steinar H. Gunderson @ 2014-03-19 22:22 UTC (permalink / raw) To: Dave Taht; +Cc: bloat On Tue, Mar 18, 2014 at 06:53:55PM +0100, Steinar H. Gunderson wrote: > I fear these two will go pretty much directly into prod. :-) But I can > probably at least look at configuration options and possibly play around > a bit nevertheless. Looks like the feature navigator was just wrong (it often is); I can't find anything in the usual places for WRED. So if so, it's configured differently. /* Steinar */ -- Homepage: http://www.sesse.net/ ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-18 17:17 ` Dave Taht 2014-03-18 17:53 ` Steinar H. Gunderson @ 2014-03-18 18:05 ` Fred Baker (fred) 2014-03-18 18:54 ` Dave Taht 2014-03-18 18:57 ` [Bloat] AQM creeping into L2 equipment Dave Taht 1 sibling, 2 replies; 30+ messages in thread From: Fred Baker (fred) @ 2014-03-18 18:05 UTC (permalink / raw) To: Dave Taht; +Cc: bloat [-- Attachment #1: Type: text/plain, Size: 701 bytes --] On Mar 18, 2014, at 10:17 AM, Dave Taht <dave.taht@gmail.com> wrote: >> I thought some of you might be interested in a small observation I made >> today: Cisco 2960-X, their latest low-end (?) L2 access switch offering >> (well, it can do some L3 as well, especially the 2960-XR, but I don't think >> it's very commonly used), has WRED on its feature list. They also have > > I would certainly like good documentation on how to configure it and > results with/without on a two ports into one test. Thus saith google: http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst2960x/software/15-0_2_EX/qos/configuration_guide/b_qos_152ex_2960-x_cg/b_qos_152ex_2960-x_cg_chapter_010.html [-- Attachment #2: Message signed with OpenPGP using GPGMail --] [-- Type: application/pgp-signature, Size: 195 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment
  2014-03-18 18:05 ` Fred Baker (fred)
@ 2014-03-18 18:54   ` Dave Taht
  2014-03-20 16:20     ` Toke Høiland-Jørgensen
  2014-03-20 18:16     ` [Bloat] AQM creeping into L2 equipment / 10G not-for-profit playground at tetaneutral.net Laurent GUERBY
  2014-03-18 18:57   ` [Bloat] AQM creeping into L2 equipment Dave Taht
  1 sibling, 2 replies; 30+ messages in thread
From: Dave Taht @ 2014-03-18 18:54 UTC (permalink / raw)
  To: Fred Baker (fred); +Cc: bloat

On Tue, Mar 18, 2014 at 2:05 PM, Fred Baker (fred) <fred@cisco.com> wrote:
>
> On Mar 18, 2014, at 10:17 AM, Dave Taht <dave.taht@gmail.com> wrote:
>
>>> I thought some of you might be interested in a small observation I made
>>> today: Cisco 2960-X, their latest low-end (?) L2 access switch offering
>>> (well, it can do some L3 as well, especially the 2960-XR, but I don't think
>>> it's very commonly used), has WRED on its feature list. They also have
>>
>> I would certainly like good documentation on how to configure it and
>> results with/without on a two ports into one test.
>
> Thus saith google: http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst2960x/software/15-0_2_EX/qos/configuration_guide/b_qos_152ex_2960-x_cg/b_qos_152ex_2960-x_cg_chapter_010.html

The standard test that I'm most interested in is the 2 ports into 1
topology:

    SOURCE
      |
    SWITCH
    |    |
  BOX1  BOX2

and finding "optimal" settings for that. All links are gigE for this...
each box runs a copy of the rrul test (attempting to saturate up/down
and measure loss/delay on several differently classified measurement
flows), with various switch configurations (wred, srr, wtd, whatever)
can add in a delay box.

Donations/loans of various cool switches gladly accepted. :)

A topology with a 10Gige source is also interesting. There have been
so many improvements to linux tcp that it's hard to intuit behaviors
since when we started the debloating effort.

--
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

^ permalink raw reply	[flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment
  2014-03-18 18:54 ` Dave Taht
@ 2014-03-20 16:20   ` Toke Høiland-Jørgensen
  2014-03-20 16:29     ` Dave Taht
  2014-03-20 18:16   ` [Bloat] AQM creeping into L2 equipment / 10G not-for-profit playground at tetaneutral.net Laurent GUERBY
  1 sibling, 1 reply; 30+ messages in thread
From: Toke Høiland-Jørgensen @ 2014-03-20 16:20 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 932 bytes --]

Dave Taht <dave.taht@gmail.com> writes:

> The standard test that I'm most interested in is the 2 ports into 1 topology:
>
>     SOURCE
>       |
>     SWITCH
>     |    |
>   BOX1  BOX2

Did a couple of tests of this with the default configuration of a Cisco
WS-C2960X-24TD-L switch. Graphs and data files here:

http://kau.toke.dk/experiments/cisco-switch/cisco-c2960x.html

Conclusion: My test boxes need offloads and quite a bit of driver
queueing to drive the 1Gbps link, which makes it difficult to say
anything about the switch...

Haven't fiddled with the QoS settings, but from what I can see they are
rather limited, and takes a great deal of fiddling to setup (at least
for someone who, like me, has pretty much zero experience with Cisco
gear).

If someone has suggestions for other switch configurations that would be
worthwhile testing (as well as some help on how to configure it), I'll
be happy to run the tests.

-Toke

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 489 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment
  2014-03-20 16:20 ` Toke Høiland-Jørgensen
@ 2014-03-20 16:29   ` Dave Taht
  2014-03-20 16:44     ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 30+ messages in thread
From: Dave Taht @ 2014-03-20 16:29 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: bloat

I find it puzzling that you still lose the measurement flows early on.
Setting some QoS on via WTD might be interesting.

sub 2ms performance under this load is quite good.

If you have a later OS than 3.13 on the sources/sinks you might want
to try sch_fq (and sch_pfifo_fast for reference) - the improvements to
Linux's TCP are such that on a short path like this that the control
loop stays very tight - only two TSO offloads per flow, really
accurate use of tcp timestamps, etc.

See also if you have hardware flow control enabled (via ethtool)

On Thu, Mar 20, 2014 at 9:20 AM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> Dave Taht <dave.taht@gmail.com> writes:
>
>> The standard test that I'm most interested in is the 2 ports into 1 topology:
>>
>>     SOURCE
>>       |
>>     SWITCH
>>     |    |
>>   BOX1  BOX2
>
> Did a couple of tests of this with the default configuration of a Cisco
> WS-C2960X-24TD-L switch. Graphs and data files here:
>
> http://kau.toke.dk/experiments/cisco-switch/cisco-c2960x.html
>
> Conclusion: My test boxes need offloads and quite a bit of driver
> queueing to drive the 1Gbps link, which makes it difficult to say
> anything about the switch...
>
> Haven't fiddled with the QoS settings, but from what I can see they are
> rather limited, and takes a great deal of fiddling to setup (at least
> for someone who, like me, has pretty much zero experience with Cisco
> gear).
>
> If someone has suggestions for other switch configurations that would be
> worthwhile testing (as well as some help on how to configure it), I'll
> be happy to run the tests.
>
> -Toke

--
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

^ permalink raw reply	[flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment
  2014-03-20 16:29 ` Dave Taht
@ 2014-03-20 16:44   ` Toke Høiland-Jørgensen
  2014-03-20 17:14     ` Dave Taht
  0 siblings, 1 reply; 30+ messages in thread
From: Toke Høiland-Jørgensen @ 2014-03-20 16:44 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1693 bytes --]

Dave Taht <dave.taht@gmail.com> writes:

> I find it puzzling that you still lose the measurement flows early on.
> Setting some QoS on via WTD might be interesting.

Well if you can be more specific, I'll be happy to. Was looking for a
way to have per-protocol QoS settings, but there does not seem to be any
from the documentation (only IP-based).

> If you have a later OS than 3.13 on the sources/sinks you might want
> to try sch_fq (and sch_pfifo_fast for reference) - the improvements to
> Linux's TCP are such that on a short path like this that the control
> loop stays very tight - only two TSO offloads per flow, really
> accurate use of tcp timestamps, etc.

Added a result set with sch_fq in place of fq_codel to the bottom of the
same page. Doesn't appear to make much of a difference...

> See also if you have hardware flow control enabled (via ethtool)

If by that you mean pause frames, ethtool seems to think not:

Settings for eth2:
    Supported ports: [ TP ]
    Supported link modes:   10baseT/Half 10baseT/Full
                            100baseT/Half 100baseT/Full
                            1000baseT/Full
    Supported pause frame use: No
    Supports auto-negotiation: Yes
    Advertised link modes:  10baseT/Half 10baseT/Full
                            100baseT/Half 100baseT/Full
                            1000baseT/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: Yes
    Speed: 1000Mb/s
    Duplex: Full
    Port: Twisted Pair
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: on
    MDI-X: off
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000007 (7)
                           drv probe link
    Link detected: yes

-Toke

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 489 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment
  2014-03-20 16:44 ` Toke Høiland-Jørgensen
@ 2014-03-20 17:14   ` Dave Taht
  2014-03-20 19:34     ` Aaron Wood
  2014-03-21 13:41     ` Toke Høiland-Jørgensen
  0 siblings, 2 replies; 30+ messages in thread
From: Dave Taht @ 2014-03-20 17:14 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: bloat

well on the default load, it looks like you want the mls commands

http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst2960/software/release/12-2_37_ey/command/reference/cr.pdf

I imagine with the new tcp's pfifo_fast is going to be sub 8ms also.

however what we are probably seeing with the measurement flows is
slow start causing a whole bunch of packets to be lost in a bunch.

Is your hardware fast enough to run

tcpdump -s 128 -w whatever.cap -i your interface

during an entire rrul test without dropping packets? (on client and server)

On Thu, Mar 20, 2014 at 9:44 AM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> Dave Taht <dave.taht@gmail.com> writes:
>
>> I find it puzzling that you still lose the measurement flows early on.
>> Setting some QoS on via WTD might be interesting.
>
> Well if you can be more specific, I'll be happy to. Was looking for a
> way to have per-protocol QoS settings, but there does not seem to be any
> from the documentation (only IP-based).
>
>> If you have a later OS than 3.13 on the sources/sinks you might want
>> to try sch_fq (and sch_pfifo_fast for reference) - the improvements to
>> Linux's TCP are such that on a short path like this that the control
>> loop stays very tight - only two TSO offloads per flow, really
>> accurate use of tcp timestamps, etc.
>
> Added a result set with sch_fq in place of fq_codel to the bottom of the
> same page. Doesn't appear to make much of a difference...
> >> See also if you have hardware flow control enabled (via ethtool) > > If by that you mean pause frames, ethtool seems to think not: > > Settings for eth2: > Supported ports: [ TP ] > Supported link modes: 10baseT/Half 10baseT/Full > 100baseT/Half 100baseT/Full > 1000baseT/Full > Supported pause frame use: No > Supports auto-negotiation: Yes > Advertised link modes: 10baseT/Half 10baseT/Full > 100baseT/Half 100baseT/Full > 1000baseT/Full > Advertised pause frame use: No > Advertised auto-negotiation: Yes > Speed: 1000Mb/s > Duplex: Full > Port: Twisted Pair > PHYAD: 1 > Transceiver: internal > Auto-negotiation: on > MDI-X: off > Supports Wake-on: d > Wake-on: d > Current message level: 0x00000007 (7) > drv probe link > Link detected: yes > > > -Toke -- Dave Täht Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-20 17:14 ` Dave Taht @ 2014-03-20 19:34 ` Aaron Wood 2014-03-20 20:23 ` Dave Taht 2014-03-21 13:41 ` Toke Høiland-Jørgensen 1 sibling, 1 reply; 30+ messages in thread From: Aaron Wood @ 2014-03-20 19:34 UTC (permalink / raw) To: Dave Taht; +Cc: bloat [-- Attachment #1: Type: text/plain, Size: 344 bytes --] > > however what we are probably seeing with the measurement flows is > slow start causing a whole bunch of packets to be lost in a bunch. > That would line up with the timing, and the periodic drops that I see in the flows when using Toke's newer wrapper (and netperf head), which attempt to work around the failing UDP timing flows. -Aaron [-- Attachment #2: Type: text/html, Size: 601 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-20 19:34 ` Aaron Wood @ 2014-03-20 20:23 ` Dave Taht 2014-03-20 23:41 ` Eric Dumazet 0 siblings, 1 reply; 30+ messages in thread From: Dave Taht @ 2014-03-20 20:23 UTC (permalink / raw) To: Aaron Wood; +Cc: renaud sallantin, bloat On Thu, Mar 20, 2014 at 12:34 PM, Aaron Wood <woody77@gmail.com> wrote: >> however what we are probably seeing with the measurement flows is >> slow start causing a whole bunch of packets to be lost in a bunch. > > > That would line up with the timing, and the periodic drops that I see in the > flows when using Toke's newer wrapper (and netperf head), which attempt to > work around the failing UDP timing flows. Well there is some good work in linux 3.14 and beyond, and there was also some interesting work on "initial spreading" presented at ietf. Hopefully patches for this will be available soon. http://tools.ietf.org/html/draft-sallantin-iccrg-initial-spreading-00 I would certainly like to be able to sanely measure the impact of hundreds or thousands of flows in slow start, rather than/in addition to 8 flows in congestion avoidance. > > -Aaron -- Dave Täht Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-20 20:23 ` Dave Taht @ 2014-03-20 23:41 ` Eric Dumazet 2014-03-20 23:45 ` Steinar H. Gunderson [not found] ` <CAAvOmMtt1RCpBfT1MPNh-2FRhQ1GN4xYbfNPLYJwfP6CaP5vow@mail.gmail.com> 0 siblings, 2 replies; 30+ messages in thread From: Eric Dumazet @ 2014-03-20 23:41 UTC (permalink / raw) To: Dave Taht; +Cc: Steinar H. Gunderson, renaud sallantin, bloat On Thu, 2014-03-20 at 13:23 -0700, Dave Taht wrote: > Well there is some good work in linux 3.14 and beyond, and there was also > some interesting work on "initial spreading" presented at ietf. > > Hopefully patches for this will be available soon. > > http://tools.ietf.org/html/draft-sallantin-iccrg-initial-spreading-00 > > I would certainly like to be able to sanely measure the impact of > hundreds or thousands of flows in slow start, rather than/in addition to 8 flows > in congestion avoidance. As mentioned elsewhere, FQ/pacing does exactly this 'spreading', and not only for the 'initial' burst, but on all the lifetime of tcp flow, for example after recovery or idle period. FQ/pacing is part of linux kernel since 3.12 To play with it, you can set it like that : tc qdisc replace dev eth0 root fq quantum 1514 initial_quantum 1514 http://lwn.net/Articles/564978/ I believe Steinar had success using FQ/pacing lately, presumably using SO_MAX_PACING_RATE as well. ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-20 23:41 ` Eric Dumazet @ 2014-03-20 23:45 ` Steinar H. Gunderson 2014-03-20 23:54 ` Steinar H. Gunderson [not found] ` <CAAvOmMtt1RCpBfT1MPNh-2FRhQ1GN4xYbfNPLYJwfP6CaP5vow@mail.gmail.com> 1 sibling, 1 reply; 30+ messages in thread From: Steinar H. Gunderson @ 2014-03-20 23:45 UTC (permalink / raw) To: Eric Dumazet; +Cc: Steinar H. Gunderson, renaud sallantin, bloat On Thu, Mar 20, 2014 at 04:41:24PM -0700, Eric Dumazet wrote: > I believe Steinar had success using FQ/pacing lately, presumably using > SO_MAX_PACING_RATE as well. Yes, I use SO_MAX_PACING_RATE both for TCP and UDP. The user experience over the Internet is markedly better; I doubt I would have problems picking it out in an A/B test. /* Steinar */ -- Homepage: http://www.sesse.net/ ^ permalink raw reply [flat|nested] 30+ messages in thread
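[Editorial note: SO_MAX_PACING_RATE, as used by Steinar above, is set per socket. A minimal Python sketch follows; the numeric fallback value 47 (the Linux option number) and the 1 MB/s example rate are assumptions for illustration, and on kernels of this era the fq qdisc must be installed on the egress interface for the cap to actually pace packets.]

```python
import socket

# SO_MAX_PACING_RATE is Linux-specific; older Python socket modules do
# not export the constant, so fall back to its numeric value (assumed 47).
SO_MAX_PACING_RATE = getattr(socket, "SO_MAX_PACING_RATE", 47)

def set_pacing_rate(sock, bytes_per_sec):
    """Ask the kernel to cap this socket's pacing rate (bytes/second)."""
    sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, bytes_per_sec)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
set_pacing_rate(s, 1_000_000)  # hypothetical cap: 1 MB/s, i.e. ~8 Mbit/s
s.close()
```

The option applies to UDP sockets as well as TCP, which matches Steinar's report of using it for both.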
* Re: [Bloat] AQM creeping into L2 equipment
  2014-03-20 23:45 ` Steinar H. Gunderson
@ 2014-03-20 23:54   ` Steinar H. Gunderson
  0 siblings, 0 replies; 30+ messages in thread
From: Steinar H. Gunderson @ 2014-03-20 23:54 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Steinar H. Gunderson, renaud sallantin, bloat

On Fri, Mar 21, 2014 at 12:45:36AM +0100, Steinar H. Gunderson wrote:
> Yes, I use SO_MAX_PACING_RATE both for TCP and UDP. The user experience over
> the Internet is markedly better; I doubt I would have problems picking it out
> in an A/B test.

I should point out that this is compared to pfifo_fast, not fq without
SO_MAX_PACING_RATE. Apologies for any confusion.

/* Steinar */
--
Homepage: http://www.sesse.net/

^ permalink raw reply	[flat|nested] 30+ messages in thread
[parent not found: <CAAvOmMtt1RCpBfT1MPNh-2FRhQ1GN4xYbfNPLYJwfP6CaP5vow@mail.gmail.com>]
* Re: [Bloat] AQM creeping into L2 equipment
  [not found] ` <CAAvOmMtt1RCpBfT1MPNh-2FRhQ1GN4xYbfNPLYJwfP6CaP5vow@mail.gmail.com>
@ 2014-03-21 15:06   ` Eric Dumazet
  [not found]     ` <CAAvOmMvPpmuW1chTdX86s5sQv6X_c622k8YrjW+hN8e5JV+dzA@mail.gmail.com>
  0 siblings, 1 reply; 30+ messages in thread
From: Eric Dumazet @ 2014-03-21 15:06 UTC (permalink / raw)
  To: renaud sallantin; +Cc: Steinar H. Gunderson, bloat

On Fri, 2014-03-21 at 09:15 +0100, renaud sallantin wrote:
>
> FQ/pacing enables to do a lot of things,
> and as I already said, it could be used to easily implement the
> Initial Spreading.
> (we did it and it's just a few lines to add, and a couple of
> parameters to change)
>
> But for the moment, FQ/Pacing sends the IW in one burst (up to 10
> segments).

This is not true. This depends on RTT and your qdisc parameters.

Whole point of TSO autosizing is to make all this stuff automatic.

Here is the tcpdump output for a 10ms RTT, which is quite standard.

You can see 5 packets are sent, with a delay of more than 1 ms.

07:58:52.616379 IP 10.246.11.51.39905 > 10.246.11.52.41276: S 2187811646:2187811646(0) win 29200 <mss 1460,nop,nop,sackOK,nop,wscale 6>
07:58:52.626575 IP 10.246.11.52.41276 > 10.246.11.51.39905: S 81785763:81785763(0) ack 2187811647 win 29200 <mss 1460,nop,nop,sackOK,nop,wscale 7>
07:58:52.626642 IP 10.246.11.51.39905 > 10.246.11.52.41276: . ack 1 win 457
07:58:52.626671 IP 10.246.11.51.39905 > 10.246.11.52.41276: . 1:2921(2920) ack 1 win 457
07:58:52.627740 IP 10.246.11.51.39905 > 10.246.11.52.41276: . 2921:5841(2920) ack 1 win 457
07:58:52.628815 IP 10.246.11.51.39905 > 10.246.11.52.41276: . 5841:8761(2920) ack 1 win 457
07:58:52.629946 IP 10.246.11.51.39905 > 10.246.11.52.41276: . 8761:11681(2920) ack 1 win 457
07:58:52.631054 IP 10.246.11.51.39905 > 10.246.11.52.41276: . 11681:14601(2920) ack 1 win 457
07:58:52.637147 IP 10.246.11.52.41276 > 10.246.11.51.39905: . ack 2921 win 274
07:58:52.637207 IP 10.246.11.51.39905 > 10.246.11.52.41276: . 14601:17521(2920) ack 1 win 457
07:58:52.638117 IP 10.246.11.51.39905 > 10.246.11.52.41276: . 17521:20441(2920) ack 1 win 457
07:58:52.638114 IP 10.246.11.52.41276 > 10.246.11.51.39905: . ack 5841 win 320
07:58:52.639011 IP 10.246.11.51.39905 > 10.246.11.52.41276: . 20441:23361(2920) ack 1 win 457

You also can tune /proc/sys/net/ipv4/tcp_min_tso_segs from 2 to 1 if
you really want... No kernel patches needed...

^ permalink raw reply	[flat|nested] 30+ messages in thread
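[Editorial note: the per-packet spacing Eric describes can be checked directly from the five 2920-byte data packets in his trace. A small sketch, using the timestamps verbatim from the tcpdump output above:]

```python
# Timestamps (seconds within 07:58:52) of the five 2920-byte sends in
# the initial window from the trace above; each carries two MSS-sized
# segments (TSO autosizing with tcp_min_tso_segs=2).
ts = [52.626671, 52.627740, 52.628815, 52.629946, 52.631054]

# Inter-packet gaps in milliseconds.
deltas_ms = [(b - a) * 1e3 for a, b in zip(ts, ts[1:])]
for d in deltas_ms:
    print(f"{d:.3f} ms")
```

Each gap comes out a bit over 1 ms, matching Eric's "delay of more than 1 ms" observation rather than a back-to-back IW10 burst.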
[parent not found: <CAAvOmMvPpmuW1chTdX86s5sQv6X_c622k8YrjW+hN8e5JV+dzA@mail.gmail.com>]
* Re: [Bloat] AQM creeping into L2 equipment [not found] ` <CAAvOmMvPpmuW1chTdX86s5sQv6X_c622k8YrjW+hN8e5JV+dzA@mail.gmail.com> @ 2014-03-21 17:51 ` Eric Dumazet 2014-03-21 18:08 ` Dave Taht 0 siblings, 1 reply; 30+ messages in thread From: Eric Dumazet @ 2014-03-21 17:51 UTC (permalink / raw) To: renaud sallantin; +Cc: Steinar H. Gunderson, bloat On Fri, 2014-03-21 at 16:53 +0100, renaud sallantin wrote: > For our tests, we needed to adjust the "tcp_initial_quantum" in the > FQ, > but as you said, it is just a FQ parameter. > Yep, default ones are a compromise between performance and pacing accuracy. At 40Gbps speeds, it is a bit challenging. The consensus is that IW10 is adopted, meaning that we can send 10 MSS at whatever speed we want without knowing anything of the network conditions. If people want to play with other values, they have to change the route settings of their linux box, and fq parameters if they want. ip ro change default via 192.168.1.254 dev eth0 initcwnd 20 (As a matter of fact it seems some providers use higher values than IW10) > The "patch" we added, and once again, it was just a few lines, > enabled to set, via a sysctl parameter, the initial pacing value, > regardless of the RTT. > This can be valuable for different reasons: > o In case of long RTT, not set the pacing value is going to > introduce an un-necessary delay > (we aims to use this mechanism for satcom, so the delay could be > greater than 500ms) If you have a 500ms rtt, then you also want a bigger IW. Sending 10 MSS in the first RTT is going to be slow, no matter how you pace them. The first ACK wont come before 500 ms. > o In case of a wrong RTT measurement, i.e. an RTT measurement > that is higher that the real RTT (because of congestion for example), > you are going to have a wrong pacing evaluation... Well, if you have big rtt because of congestion, you exactly want to reduce the rate... 
rate = cwnd * mss / srtt

And fq/pacing uses srtt, not rtt, so a single wrong rtt doesn't have a
big impact (unless it is the first sample, as it will serve as the ewma
initial value)

You can not predict the network conditions just by studying the
SYN/SYNACK/ACK initial messages. It gives a guess, but it is hard to
send everything you want in a single RTT at 'optimal speed'

That's why it was so hard to decide the IW if you want a universal
value. It depends on the state of the Internet, and it changes every
day or so...

^ permalink raw reply	[flat|nested] 30+ messages in thread
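[Editorial note: plugging illustrative numbers into Eric's pacing formula above makes the units concrete. The values are hypothetical, chosen to match the thread: an IW10 window, a standard Ethernet MSS, and the 10 ms RTT from Eric's trace.]

```python
# rate = cwnd * mss / srtt, in bytes per second.
cwnd = 10      # congestion window, packets (IW10)
mss = 1460     # bytes per segment
srtt = 0.010   # smoothed RTT, seconds

rate_bytes = cwnd * mss / srtt
rate_mbit = rate_bytes * 8 / 1e6
print(f"{rate_bytes:.0f} B/s = {rate_mbit:.1f} Mbit/s")
```

So on a 10 ms path an IW10 sender paced this way runs at roughly 11-12 Mbit/s until ACKs grow the window; halve the srtt and the pacing rate doubles.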
* Re: [Bloat] AQM creeping into L2 equipment
  2014-03-21 17:51 ` Eric Dumazet
@ 2014-03-21 18:08   ` Dave Taht
  2014-03-21 22:00     ` Eric Dumazet
  0 siblings, 1 reply; 30+ messages in thread
From: Dave Taht @ 2014-03-21 18:08 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Steinar H. Gunderson, bloat

On Fri, Mar 21, 2014 at 5:51 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Fri, 2014-03-21 at 16:53 +0100, renaud sallantin wrote:
>> For our tests, we needed to adjust the "tcp_initial_quantum" in the
>> FQ,
>> but as you said, it is just a FQ parameter.
>
> Yep, default ones are a compromise between performance and pacing
> accuracy. At 40Gbps speeds, it is a bit challenging.

At sub 10Mbit speeds it is also challenging.

> The consensus is that IW10 is adopted, meaning that we can send 10 MSS
> at whatever speed we want without knowing anything of the network
> conditions.

And I do wish that research and decision included measurements of the
effects on DSL (sub 1mbit) and cable modem (sub 10mbit) uplinks and
uploads, and tcp using traffic like bittorrent, ssh, rsync, cifs, etc.

> If people want to play with other values, they have to change the route
> settings of their linux box, and fq parameters if they want.
>
> ip ro change default via 192.168.1.254 dev eth0 initcwnd 20

This is likely to have brutal effects on slow uplinks and uploads
without pacing enabled. To get results semi comparable to (for
example) OSX, an initcwnd of 3 should be used.

I DO have very high hopes for pacing in these cases at various iw
settings in the case of slow uplinks and uploads, I am concerned about
the effects on wifi and other burst-prefering macs.

I look forward to more benchmarks. I'm not in a position to do much
with kernels later than 3.10 at the moment...
> (As a matter of fact it seems some providers use higher values than > IW10) > >> The "patch" we added, and once again, it was just a few lines, >> enabled to set, via a sysctl parameter, the initial pacing value, >> regardless of the RTT. >> This can be valuable for different reasons: >> o In case of long RTT, not set the pacing value is going to >> introduce an un-necessary delay >> (we aims to use this mechanism for satcom, so the delay could be >> greater than 500ms) > > If you have a 500ms rtt, then you also want a bigger IW. Sending 10 MSS > in the first RTT is going to be slow, no matter how you pace them. > The first ACK wont come before 500 ms. In the case of a satellite link (>800ms RTT) with competing traffic, it is my hope that (n)fq_codel will further mitigate the large iws common today. A fairly common satellite uplink/downlink is 500kbit/8mbit. >> o In case of a wrong RTT measurement, i.e. an RTT measurement >> that is higher that the real RTT (because of congestion for example), >> you are going to have a wrong pacing evaluation... > > Well, if you have big rtt because of congestion, you exactly want to > reduce the rate... > > rate = cwnd * mss / srtt A nice thing about fq_codel inbetween on the congested link is that your first RTT on a new flow is generally very close to your actual physical RTT between links even when congested. > And fq/pacing uses srtt, not rtt, so a single wrong rtt doesn't have a > big impact (unless it is the first sample, as it will serve as the ewma > initial value) > > You can not predict the network conditions just by studying the > SYN/SYNACK/ACK initial messages. It gives a guess, but it is hard to > send everything you want in a single RTT at 'optimal speed' > > Thats why it was so hard to decide the IW if you want an universal > value. > It depends on the state of the Internet, and it changes every day or > so... 
I kind of wish there was a way to propagate saner IW settings throughout a home network - where you can use a big IW internally, but downgrade to lower values on exit to the real world. Again, if pacing actually works maybe indeed larger iws are truly feasible... (If I sound grumpy - I just wish I had time to play with this new stuff instead of what I'm doing right now - it's promising as heck) > > -- Dave Täht Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html ^ permalink raw reply [flat|nested] 30+ messages in thread
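[Editorial note: rough arithmetic gives a sense of scale for the slow-uplink concern Dave raises above. The numbers are illustrative only: an un-paced IW10 burst against the 500 kbit/s satellite/DSL-class uplink mentioned in this thread.]

```python
# How long does an initial window take just to serialize on a slow
# uplink? While that burst drains, it occupies the link and delays
# everything else sharing it.
iw = 10               # initial window, packets (IW10)
mss = 1460            # bytes per segment
uplink_bps = 500_000  # uplink rate, bits per second (500 kbit/s)

burst_bits = iw * mss * 8
serialize_s = burst_bits / uplink_bps
print(f"IW{iw} burst = {burst_bits} bits, "
      f"{serialize_s * 1e3:.0f} ms to drain at 500 kbit/s")
```

An IW10 burst ties up such an uplink for well over 200 ms, which is why an unpaced large IW "hurts" there while being invisible on gigabit paths.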
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-21 18:08 ` Dave Taht @ 2014-03-21 22:00 ` Eric Dumazet 2014-03-21 22:13 ` Dave Taht 0 siblings, 1 reply; 30+ messages in thread From: Eric Dumazet @ 2014-03-21 22:00 UTC (permalink / raw) To: Dave Taht; +Cc: Steinar H. Gunderson, bloat On Fri, 2014-03-21 at 18:08 +0000, Dave Taht wrote: > This is likely to have brutal effects on slow uplinks and uploads > without pacing enabled. All I said was related to using fq/pacing, maybe it was not clear ? ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-21 22:00 ` Eric Dumazet @ 2014-03-21 22:13 ` Dave Taht 2014-03-23 19:27 ` Eric Dumazet 0 siblings, 1 reply; 30+ messages in thread From: Dave Taht @ 2014-03-21 22:13 UTC (permalink / raw) To: Eric Dumazet; +Cc: Steinar H. Gunderson, bloat On Fri, Mar 21, 2014 at 10:00 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote: > On Fri, 2014-03-21 at 18:08 +0000, Dave Taht wrote: > >> This is likely to have brutal effects on slow uplinks and uploads >> without pacing enabled. > > All I said was related to using fq/pacing, maybe it was not clear ? No, it was clear! Problem I was noting is that turning on sch_fq and setting the iw are not closely coupled so if you run iw20 without ensuring you are also running sch_fq, you are going to hurt a non-fq_codeled net... Apparently for some reason not everyone has made the switch yet. (and there were some other plausible side-effects from that 2000 paper on pacing worth checking up on) Are you ready to make sch_fq the default in 3.15? -- Dave Täht Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-21 22:13 ` Dave Taht @ 2014-03-23 19:27 ` Eric Dumazet 0 siblings, 0 replies; 30+ messages in thread From: Eric Dumazet @ 2014-03-23 19:27 UTC (permalink / raw) To: Dave Taht; +Cc: Steinar H. Gunderson, bloat On Fri, 2014-03-21 at 22:13 +0000, Dave Taht wrote: > Are you ready to make sch_fq the default in 3.15? sch_fq depends on ktime_get(), so it is a no-go if you have a clocksource using hpet. pfifo_fast doesn't have such issues. Another issue is TCP CUBIC Hystart 'ACK TRAIN' detection, which triggers early, since the goal of TSO autosizing + FQ/pacing is to get ACK clocking every ms. By design, it tends to get ACK trains, way before the cwnd might reach the BDP. ^ permalink raw reply [flat|nested] 30+ messages in thread
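Both of the issues Eric raises can be inspected from userspace. A diagnostic sketch; the sysfs paths assume a kernel with CUBIC built as the usual module and the standard clocksource layout:

```shell
# Which clocksource is in use? sch_fq's ktime_get() is cheap with tsc,
# expensive if this reports hpet or acpi_pm:
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# CUBIC's Hystart knobs: hystart enables/disables the mechanism,
# hystart_detect is a bitmask (1 = ACK-train detection, 2 = delay
# detection, 3 = both, the default):
cat /sys/module/tcp_cubic/parameters/hystart
cat /sys/module/tcp_cubic/parameters/hystart_detect

# To experiment with ACK-train detection off (delay detection only),
# which sidesteps the false trigger described above:
echo 2 > /sys/module/tcp_cubic/parameters/hystart_detect
```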
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-20 17:14 ` Dave Taht 2014-03-20 19:34 ` Aaron Wood @ 2014-03-21 13:41 ` Toke Høiland-Jørgensen 2014-03-21 15:39 ` Dave Taht 1 sibling, 1 reply; 30+ messages in thread From: Toke Høiland-Jørgensen @ 2014-03-21 13:41 UTC (permalink / raw) To: Dave Taht; +Cc: bloat [-- Attachment #1: Type: text/plain, Size: 961 bytes --] Dave Taht <dave.taht@gmail.com> writes: > I imagine with the new tcp's pfifo_fast is going to be sub 8ms also. Yeah, turns out I botched the qdisc setup (put it on the wrong interface on one of the servers) for the case with no switch. So the ~6ms was with pfifo_fast in one end. Updated the original graphs for the host-to-host. Data capture files are here: http://kau.toke.dk/experiments/cisco-switch/packet-captures/ -- no idea why the client seems to capture three times as many packets as the server. None of them seem to think they've dropped any (as per tcpdump output). Will add dumps from going through the switch in a bit... > Is your hardware fast enough to run tcpdump -s 128 -w whatever.cap -i > your interface during an entire rrul test without dropping packets? > (on client and server) Well, as above, tcpdump doesn't say anything about dropped packets; but since the client dump is way bigger, perhaps the server-side does anyway? -Toke [-- Attachment #2: signature.asc --] [-- Type: application/pgp-signature, Size: 489 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-21 13:41 ` Toke Høiland-Jørgensen @ 2014-03-21 15:39 ` Dave Taht 2014-03-21 16:42 ` Steinar H. Gunderson 0 siblings, 1 reply; 30+ messages in thread From: Dave Taht @ 2014-03-21 15:39 UTC (permalink / raw) To: Toke Høiland-Jørgensen; +Cc: bloat On Fri, Mar 21, 2014 at 1:41 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: > Dave Taht <dave.taht@gmail.com> writes: > >> I imagine with the new tcp's pfifo_fast is going to be sub 8ms also. > > Yeah, turns out I botched the qdisc setup (put it on the wrong interface > on one of the servers) for the case with no switch. So the ~6ms was with > pfifo_fast in one end. Oh, goodie. I was puzzled as to why the "fast" fq_codel queue was at 6ms instead of under 2ms, given the BQL size and traffic load. You'd think that data centers and distros would be falling over themselves to switch to sch_fq or sch_fq_codel to get 3x less latency than pfifo_fast for sparse flows, at this point. It's just a sysctl away... > Updated the original graphs for the host-to-host. Retaining the pfifo_fast data is important as a baseline. Not a lot of point to graphing it further tho. I think you will find pie's behavior at these speeds bemusing. > Data capture files are > here: http://kau.toke.dk/experiments/cisco-switch/packet-captures/ -- no > idea why the client seems to capture three times as many packets as the > server. None of them seem to think they've dropped any (as per tcpdump > output). > > Will add dumps from going through the switch in a bit... > >> Is your hardware fast enough to run tcpdump -s 128 -w whatever.cap -i >> your interface during an entire rrul test without dropping packets? >> (on client and server) (question to list) Are there any options to tcpdump or the kernel to make it more possible to capture full packet payloads (64k) without loss at these speeds? tshark? (you might be able to get somewhere with port mirroring off the switch and a separate capture device.) 
/me sometimes likes living at 100Mbit and below > Well, as above, tcpdump doesn't say anything about dropped packets; but > since the client dump is way bigger, perhaps the server-side does anyway? > > -Toke -- Dave Täht Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html ^ permalink raw reply [flat|nested] 30+ messages in thread
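Back-of-the-envelope arithmetic suggests that a `-s 128` capture at line-rate gigabit is well within even a spinning disk's write bandwidth; the usual bottleneck is per-packet overhead and capture-buffer drops, not bytes to disk. A rough sketch (the 1538-byte on-wire frame size and 16-byte pcap record header are assumptions for full-MTU traffic):

```shell
# At 1 Gbit/s of 1500-byte-MTU traffic (1538 bytes on the wire including
# preamble and inter-frame gap), estimate packets/sec and the pcap write
# rate when truncating to a 128-byte snaplen:
awk 'BEGIN {
    pps   = 1e9 / 8 / 1538;      # packets per second at line rate
    bytes = pps * (128 + 16);    # snaplen + per-record pcap header
    printf "%.0f pps, %.1f MB/s to disk\n", pps, bytes / 1e6;
}'
# prints: 81274 pps, 11.7 MB/s to disk

# So the practical knob is the kernel-side capture buffer, e.g. with
# tcpdump's -B option (size in KiB), not disk throughput:
#   tcpdump -i eth0 -s 128 -B 65536 -w rrul.pcap
```

Small packets are the hard case: at minimum frame size the packet rate rises toward ~1.5 Mpps, which is where the mmap-ring approaches mentioned below in the thread become necessary.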
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-21 15:39 ` Dave Taht @ 2014-03-21 16:42 ` Steinar H. Gunderson 2014-03-21 18:34 ` Toke Høiland-Jørgensen 2014-03-24 23:16 ` David Lang 0 siblings, 2 replies; 30+ messages in thread From: Steinar H. Gunderson @ 2014-03-21 16:42 UTC (permalink / raw) To: bloat On Fri, Mar 21, 2014 at 03:39:16PM +0000, Dave Taht wrote: >>> Is your hardware fast enough to run tcpdump -s 128 -w whatever.cap -i >>> your interface during an entire rrul test without dropping packets? >>> (on client and server) > (question to list) Are there any options to tcpdump or the kernel to > make it more possible to capture full packet payloads (64k) without > loss at these speeds? tshark? You can capture tens of gigabits of traffic if you use the mmap packet ring stuff. I doubt tcpdump supports it, but it wouldn't be impossible to do. I'd be a bit surprised, though, if a normal machine these days can't easily saturate gigabit (and capture it to SSD without further problems). /* Steinar */ -- Homepage: http://www.sesse.net/ ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-21 16:42 ` Steinar H. Gunderson @ 2014-03-21 18:34 ` Toke Høiland-Jørgensen 2014-03-24 23:16 ` David Lang 1 sibling, 0 replies; 30+ messages in thread From: Toke Høiland-Jørgensen @ 2014-03-21 18:34 UTC (permalink / raw) To: Steinar H. Gunderson; +Cc: bloat [-- Attachment #1: Type: text/plain, Size: 568 bytes --] "Steinar H. Gunderson" <sgunderson@bigfoot.com> writes: > I'm a bit confused if a normal machine these days can't easily > saturate gigabit (and capture it to SSD without further problems), > though. Well, they're not entirely new; the machines I'm using are lab machines that have been replaced and are now recommissioned as my test-bed. /proc/cpuinfo says they're 'Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz'. No SSD, either... They have no issues *saturating* gigabit (as long as the BQL max_limit is not set too low), but capturing, not so much... -Toke [-- Attachment #2: signature.asc --] [-- Type: application/pgp-signature, Size: 489 bytes --] ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-21 16:42 ` Steinar H. Gunderson 2014-03-21 18:34 ` Toke Høiland-Jørgensen @ 2014-03-24 23:16 ` David Lang 1 sibling, 0 replies; 30+ messages in thread From: David Lang @ 2014-03-24 23:16 UTC (permalink / raw) To: Steinar H. Gunderson; +Cc: bloat On Fri, 21 Mar 2014, Steinar H. Gunderson wrote: > On Fri, Mar 21, 2014 at 03:39:16PM +0000, Dave Taht wrote: >>>> Is your hardware fast enough to run tcpdump -s 128 -w whatever.cap -i >>>> your interface during an entire rrul test without dropping packets? >>>> (on client and server) >> (question to list) Are there any options to tcpdump or the kernel to >> make it more possible to capture full packet payloads (64k) without >> loss at these speeds? tshark? > > You can capture tens of gigabits of traffic if you use the mmap packet ring > stuff. Doubt tcpdump supports it, but it wouldn't be impossible to do. > > I'm a bit confused if a normal machine these days can't easily saturate > gigabit (and capture it to SSD without further problems), though. I've been able to capture gigabit traffic with fairly normal CPUs (<3GHz), the key I found was to bypass all tcpdump processing at capture time, just write the raw packets out. I had a disk array on a medium quality FC card for this, but did the same thing with a 3ware 955x RAID card and a handful of SATA drives. David Lang ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment / 10G not-for-profit playground at tetaneutral.net 2014-03-18 18:54 ` Dave Taht 2014-03-20 16:20 ` Toke Høiland-Jørgensen @ 2014-03-20 18:16 ` Laurent GUERBY 1 sibling, 0 replies; 30+ messages in thread From: Laurent GUERBY @ 2014-03-20 18:16 UTC (permalink / raw) To: Dave Taht; +Cc: bloat On Tue, 2014-03-18 at 14:54 -0400, Dave Taht wrote: > On Tue, Mar 18, 2014 at 2:05 PM, Fred Baker (fred) <fred@cisco.com> wrote: > > > > On Mar 18, 2014, at 10:17 AM, Dave Taht <dave.taht@gmail.com> wrote: > > > >>> I thought some of you might be interested in a small observation I made > >>> today: Cisco 2960-X, their latest low-end (?) L2 access switch offering > >>> (well, it can do some L3 as well, especially the 2960-XR, but I don't think > >>> it's very commonly used), has WRED on its feature list. They also have > >> > >> I would certainly like good documentation on how to configure it and > >> results with/without on a two ports into one test. > > > > Thus saith google: http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst2960x/software/15-0_2_EX/qos/configuration_guide/b_qos_152ex_2960-x_cg/b_qos_152ex_2960-x_cg_chapter_010.html > > > The standard test that I'm most interested in is the 2 ports into 1 topology: > > SOURCE > | > SWITCH > | | > BOX1 BOX2 > > and finding "optimal" settings for that. > > All links are gigE for this... each box runs a copy of the rrul test > (attempting to saturate up/down and measure loss/delay on several > differently classified measurement flows), with various switch > configurations (wred, srr, wtd, whatever) > > can add in a delay box. > > Donations/loans of various cool switches gladly accepted. :) > > A topology with a 10Gige source is also interesting. There have been > so many improvements to linux tcp that it's hard to intuit behaviors > since when we started the debloating effort. 
Hi Dave, Our not-for-profit AS197422 http://tetaneutral.net based in Toulouse, France, will be leased (for free) two Force10 S4810P switches (48 10G ports each), and in a few weeks we'll get a 10G Cogent uplink (we can only pay for 1-3 Gbps at 5-minute 95th-percentile billing, so bursts to/from the internet should stay short :). We're looking for advice on 10G PCIe cards for Debian GNU/Linux boxes (Haswell-class CPUs). We have an Ubiquiti EdgeRouter Lite sitting around, and we'll probably get a Mikrotik CCR1036-8G-2S+ too. We also have about 300 Ubiquiti 5 GHz antennas, 200+ subscribers with a fiber uplink, and about 50 hosted servers plus the same number of virtual machines. As our not-for-profit name "tetaNEUTRAL.NET" indicates, we want to be fair in bandwidth use by our users, but without looking at protocols, thus respecting network neutrality principles. Fairness has to be defined, of course; it can be time-period or volume based, and we've played with IP-based tc classifiers for a while. And of course we'll be very happy to give the "Bloat" crowd access to that stuff; don't forget to ping us in a month or so if I forget about our offer :). Sincerely, Laurent ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-18 18:05 ` Fred Baker (fred) 2014-03-18 18:54 ` Dave Taht @ 2014-03-18 18:57 ` Dave Taht 2014-03-18 21:06 ` Steinar H. Gunderson 1 sibling, 1 reply; 30+ messages in thread From: Dave Taht @ 2014-03-18 18:57 UTC (permalink / raw) To: Fred Baker (fred); +Cc: bloat On Tue, Mar 18, 2014 at 2:05 PM, Fred Baker (fred) <fred@cisco.com> wrote: > > On Mar 18, 2014, at 10:17 AM, Dave Taht <dave.taht@gmail.com> wrote: > >>> I thought some of you might be interested in a small observation I made >>> today: Cisco 2960-X, their latest low-end (?) L2 access switch offering >>> (well, it can do some L3 as well, especially the 2960-XR, but I don't think >>> it's very commonly used), has WRED on its feature list. They also have >> >> I would certainly like good documentation on how to configure it and >> results with/without on a two ports into one test. > > Thus saith google: http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst2960x/software/15-0_2_EX/qos/configuration_guide/b_qos_152ex_2960-x_cg/b_qos_152ex_2960-x_cg_chapter_010.html from what I read here, SRR is something that rotates between 4 hardware queues, a far cry from SFQ. And WTD (weighted tail drop) does not look like RED, either. -- Dave Täht Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html ^ permalink raw reply [flat|nested] 30+ messages in thread
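For comparison, the Linux qdiscs the switch features are being measured against are each one `tc` line. A sketch with a hypothetical interface; the RED parameters follow the pattern in the tc-red documentation and are illustrative rather than tuned:

```shell
# Stochastic Fairness Queueing: hashed per-flow round robin, far more
# granular than SRR's rotation over 4 fixed hardware queues:
tc qdisc replace dev eth0 root sfq perturb 10

# Classic RED, the mechanism WRED extends with per-class thresholds.
# Note this *replaces* the sfq above; pick one root qdisc.
tc qdisc replace dev eth0 root red limit 400000 min 30000 max 90000 \
    avpkt 1000 burst 55 ecn bandwidth 1000mbit
```

WTD, by contrast, has no direct Linux analogue: it is still tail drop, just with per-class drop thresholds rather than any averaging of queue length.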
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-18 18:57 ` [Bloat] AQM creeping into L2 equipment Dave Taht @ 2014-03-18 21:06 ` Steinar H. Gunderson 0 siblings, 0 replies; 30+ messages in thread From: Steinar H. Gunderson @ 2014-03-18 21:06 UTC (permalink / raw) To: Dave Taht; +Cc: bloat On Tue, Mar 18, 2014 at 02:57:46PM -0400, Dave Taht wrote: > And WTD (weighted tail drop) does not look like RED, either. I think WTD is more like “taildrop, but drop some types of traffic before others”. /* Steinar */ -- Homepage: http://www.sesse.net/ ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Bloat] AQM creeping into L2 equipment 2014-03-18 14:52 [Bloat] AQM creeping into L2 equipment Steinar H. Gunderson 2014-03-18 17:17 ` Dave Taht @ 2014-03-19 13:11 ` Nikolay Shopik 1 sibling, 0 replies; 30+ messages in thread From: Nikolay Shopik @ 2014-03-19 13:11 UTC (permalink / raw) To: Steinar H. Gunderson, bloat I wonder when we will see special releases with PIE support from them. I'm not even talking about hardware-switched platforms (as these may have their own limitations?), but at least CPU-based ISRs. On 18/03/14 18:52, Steinar H. Gunderson wrote: > it's no CoDel or PIE, but you take what you > get... ^ permalink raw reply [flat|nested] 30+ messages in thread
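On the Linux side, at least, PIE landed as sch_pie in 3.14 (January 2014), so CPU-based routers running a recent kernel can already experiment with it. A sketch, with a hypothetical interface and illustrative parameter values:

```shell
# Attach PIE with its defaults (Linux >= 3.14):
tc qdisc replace dev eth0 root pie

# Or set knobs explicitly; tc-pie exposes target delay, queue limit,
# update interval, and ECN marking, among others (values here are
# examples, not recommendations):
tc qdisc replace dev eth0 root pie target 20ms tupdate 30ms limit 1000 ecn

# Watch drop/mark statistics:
tc -s qdisc show dev eth0
```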
end of thread, other threads:[~2014-03-24 23:16 UTC | newest] Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2014-03-18 14:52 [Bloat] AQM creeping into L2 equipment Steinar H. Gunderson 2014-03-18 17:17 ` Dave Taht 2014-03-18 17:53 ` Steinar H. Gunderson 2014-03-19 22:22 ` Steinar H. Gunderson 2014-03-18 18:05 ` Fred Baker (fred) 2014-03-18 18:54 ` Dave Taht 2014-03-20 16:20 ` Toke Høiland-Jørgensen 2014-03-20 16:29 ` Dave Taht 2014-03-20 16:44 ` Toke Høiland-Jørgensen 2014-03-20 17:14 ` Dave Taht 2014-03-20 19:34 ` Aaron Wood 2014-03-20 20:23 ` Dave Taht 2014-03-20 23:41 ` Eric Dumazet 2014-03-20 23:45 ` Steinar H. Gunderson 2014-03-20 23:54 ` Steinar H. Gunderson [not found] ` <CAAvOmMtt1RCpBfT1MPNh-2FRhQ1GN4xYbfNPLYJwfP6CaP5vow@mail.gmail.com> 2014-03-21 15:06 ` Eric Dumazet [not found] ` <CAAvOmMvPpmuW1chTdX86s5sQv6X_c622k8YrjW+hN8e5JV+dzA@mail.gmail.com> 2014-03-21 17:51 ` Eric Dumazet 2014-03-21 18:08 ` Dave Taht 2014-03-21 22:00 ` Eric Dumazet 2014-03-21 22:13 ` Dave Taht 2014-03-23 19:27 ` Eric Dumazet 2014-03-21 13:41 ` Toke Høiland-Jørgensen 2014-03-21 15:39 ` Dave Taht 2014-03-21 16:42 ` Steinar H. Gunderson 2014-03-21 18:34 ` Toke Høiland-Jørgensen 2014-03-24 23:16 ` David Lang 2014-03-20 18:16 ` [Bloat] AQM creeping into L2 equipment / 10G not-for-profit playground at tetaneutral.net Laurent GUERBY 2014-03-18 18:57 ` [Bloat] AQM creeping into L2 equipment Dave Taht 2014-03-18 21:06 ` Steinar H. Gunderson 2014-03-19 13:11 ` Nikolay Shopik
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox