* [Bloat] What is a good burst? -- AQM evaluation guidelines

From: Naeem Khademi @ 2013-12-15 5:35 UTC
To: aqm, bloat, end2end-interest
(3 replies; 16+ messages in thread)

Hi all

I'm not sure if this has already been covered in any of the other threads, but looking at http://www.ietf.org/proceedings/88/slides/slides-88-aqm-5.pdf and draft-ietf-aqm-recommendation-00, the question remains: "what is a good burst (size) that AQMs should allow?" and/or "how can an AQM have a notion of the right burst size?".

And how can the "naturally-occurring bursts" mentioned in draft-ietf-aqm-recommendation-00 be defined?

Regards,
Naeem
* Re: [Bloat] What is a good burst? -- AQM evaluation guidelines

From: Jonathan Morton @ 2013-12-15 12:26 UTC
To: Naeem Khademi; +Cc: bloat mailing list

On 15 Dec, 2013, at 7:35 am, Naeem Khademi wrote:

> the question remains: "what is a good burst (size) that AQMs should allow?"

The ideal size of a TCP congestion window - which limits the size of a burst on a TCP flow - is equal to the natural bandwidth-delay product for the flow. That involves the available bandwidth and the natural RTT delay - i.e. without added queueing delay.

Codel operates on this basis, making an assumption about typical RTT delays and permitting queue residency to rise temporarily to that value without initiating marking operations. A larger burst would be evidence of a congestion window that is too large, or of an overall sending rate that exceeds the bandwidth of the link the Codel queue controls. A persistent queue is always taken as evidence of the latter.

In a datacentre or on a LAN, natural RTT delays are much shorter (microseconds) than on the general Internet (milliseconds); conversely, available bandwidth is typically much higher (Gbps vs. Mbps). The two factors approximately cancel out, so the bandwidth-delay product remains roughly the same in typical cases - although, of course, atypical cases such as satellite links (seconds of latency) and major backbones (extreme aggregate bandwidth and Internet-scale delays) also exist.

However, RTT is more consistent between installations than bandwidth is (a factor of ten across the typical range of ADSL link speeds, a factor of a hundred in WiFi), so Codel uses a time basis rather than a byte-count basis for regulation, and is by default tuned for typical overland Internet latencies.

Fq_codel, like other FQ-type qdiscs, tends to improve pacing when multiple flows are present, by interleaving packets from different queued bursts. Pacing is the general absence of bursts, and can be implemented at source by a TCP sender that spreads the packets of a congestion window across an interval of time corresponding to the measured RTT. AFAIK, very few TCP implementations actually do this, probably due to a desire to avoid interrupt overheads (the CPU would have to be woken by a timer for each packet). It strikes me as feasible for NIC hardware to take on some of this burden.

 - Jonathan Morton
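Jonathan's observation that LAN and Internet paths end up with roughly the same bandwidth-delay product can be sketched numerically (illustrative figures of my own, not measurements from the thread):

```python
# Sketch of the bandwidth-delay product (BDP) described above: the ideal
# congestion-window / burst size for a flow. The example numbers are
# hypothetical, chosen only to show the two regimes cancelling out.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes 'in flight' on an ideally paced path: bandwidth * RTT."""
    return bandwidth_bps / 8 * rtt_s

# Typical Internet path: ~10 Mbps at ~100 ms RTT.
internet = bdp_bytes(10e6, 0.100)

# LAN/datacentre path: ~10 Gbps at ~100 us RTT -- a thousand times the
# bandwidth at a thousandth of the delay, so the product is unchanged.
lan = bdp_bytes(10e9, 0.000100)

print(internet, lan)  # 125000.0 125000.0
```

The satellite and backbone cases Jonathan mentions are exactly the ones where this cancellation breaks down, which is why Codel's fixed time basis is a compromise rather than a universal answer.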
* Re: [Bloat] What is a good burst? -- AQM evaluation guidelines

From: Scharf, Michael (Michael) @ 2013-12-15 15:16 UTC
To: Jonathan Morton, Naeem Khademi; +Cc: bloat mailing list

There are ongoing discussions in the IETF TCPM working group on sender-side pacing:

http://www.ietf.org/mail-archive/web/tcpm/current/msg08167.html
http://www.ietf.org/proceedings/88/slides/slides-88-tcpm-9.pdf

Insight or contributions from further implementers would be highly welcome.

Michael

________________________________________
From: bloat-bounces@lists.bufferbloat.net on behalf of Jonathan Morton [chromatix99@gmail.com]
Sent: Sunday, 15 December 2013 13:26
To: Naeem Khademi
Cc: bloat mailing list
Subject: Re: [Bloat] What is a good burst? -- AQM evaluation guidelines

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
[parent not found: <655C07320163294895BBADA28372AF5D14C5DF@FR712WXCHMBA15.zeu.alcatel-lucent.com>]
* Re: [Bloat] What is a good burst? -- AQM evaluation guidelines

From: Bob Briscoe @ 2013-12-15 20:56 UTC
To: Naeem Khademi; +Cc: bloat mailing list

Naeem,

You don't need to go through a BDP calculation if calibrating queue length in time units - you can take the burst size directly as the RTT (otherwise you multiply RTT by link bandwidth to get BDP, then just divide by link bandwidth again to get burst size in time).

It's important to use an RTT at the high end of the expected range, otherwise TCP flows with significantly higher RTTs can get very poor performance. Having to pick a compromise RTT value is not ideal, because for the public Internet you will typically need to assume a worst-case RTT of transcontinental proportions (c.200ms), which is why it is recommended to set 'interval' to 100ms in Codel and 'max_burst' to 100ms in PIE. However, most flows nowadays terminate at a CDN with an RTT of ~20ms. So, having configured the AQM to allow for a 100ms RTT, every time the queue fills from idle it will delay any loss signals for about 5 CDN-RTTs.

It's an even tougher compromise if your AQM is within your host, which has to support the full range of RTTs from 200ms transcontinental to <2ms across your LAN (whether campus, enterprise or home - e.g. with your media server). Then you have to configure the AQM to absorb 100ms bursts, so it will delay signals to LAN flows for ~50 of their RTTs. In this time, a multi-round-trip LAN TCP will have pushed the queue into tail-drop, long before the AQM has responded.

The notion of a 'good burst' is only necessary if using drop as the signal, though. For ECN-capable packets, we have been experimenting with shifting absorption of bursts from the network to L4 in the host (which knows its own RTT).
We don't wait at all for a burst to persist before sending ECN signals, and then smooth out RTT-length bursts of ECN signals in the congestion avoidance algorithm in the transport. Also, during slow-start the transport doesn't need to smooth out the bursts at all, so it gets the signal within 1 RTT and can respond immediately.

See <http://www.ietf.org/proceedings/88/slides/slides-88-tsvwg-20.pdf> and the thread that has just been discussing this: "[aqm] Text for aqm-recommendation on independent ECN config"

Bob

At 15:16 15/12/2013, Scharf, Michael (Michael) wrote:
>There are ongoing discussions in the IETF TCPM working group on
>sender-side pacing:
>
>http://www.ietf.org/mail-archive/web/tcpm/current/msg08167.html
>http://www.ietf.org/proceedings/88/slides/slides-88-tcpm-9.pdf
>
>Insight or contributions from further implementers would be highly welcome.
>
>Michael

________________________________________________________________
Bob Briscoe, BT
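Bob's arithmetic on the cost of a compromise RTT setting can be expressed directly: with a time-based burst allowance there is no BDP to compute, and the penalty for a short-RTT flow is simply the allowance divided by the flow's own RTT. A minimal sketch (my illustration, using the 100ms figure he cites for Codel's 'interval' and PIE's 'max_burst'):

```python
# Sketch of the compromise Bob describes: a fixed time-based burst
# allowance delays congestion signals for a number of round trips that
# depends on each flow's own RTT.

def signal_delay_in_rtts(burst_allowance_s: float, flow_rtt_s: float) -> float:
    """How many of the flow's RTTs pass before the AQM starts signalling."""
    return burst_allowance_s / flow_rtt_s

BURST = 0.100  # 100 ms, per the codel 'interval' / PIE 'max_burst' advice

print(signal_delay_in_rtts(BURST, 0.200))  # 0.5  -- transcontinental flow
print(signal_delay_in_rtts(BURST, 0.020))  # 5.0  -- typical CDN flow
print(signal_delay_in_rtts(BURST, 0.002))  # 50.0 -- LAN flow
```

The 5x and ~50x figures match the "5 CDN-RTTs" and "~50 of their RTTs" in the message, which is what motivates moving burst absorption into the host for ECN-capable traffic.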
* Re: [Bloat] [aqm] What is a good burst? -- AQM evaluation guidelines

From: Curtis Villamizar @ 2013-12-15 18:56 UTC
To: Naeem Khademi; +Cc: end2end-interest, aqm, bloat

In message <CAEjQQ5XPiqL9ywD3zXGKvUb_FoWmgJ3Zq_g_b8ONfHieHaRWzg@mail.gmail.com> Naeem Khademi writes:

> Hi all
>
> I'm not sure if this has already been covered in any of the other
> threads, but looking at
> http://www.ietf.org/proceedings/88/slides/slides-88-aqm-5.pdf and
> draft-ietf-aqm-recommendation-00, the question remains: "what is a
> good burst (size) that AQMs should allow?" and/or "how an AQM can have
> a notion of the right burst size?".
>
> and how "naturally-occurring bursts" mentioned in
> draft-ietf-aqm-recommendation-00 can be defined?
>
> Regards,
> Naeem

It is probably best not to try to define "naturally-occurring bursts", since these depend on the type of traffic on the Internet, or on the target network. This will vary with the type of target network, and will vary as services evolve on the Internet. Therefore it may not be definable beyond the words that make up the term, and it may be a disservice to try to define it.

If the draft is to attempt to define a target for burst tolerance (not implying that a leaky bucket is used, even though it shares that term), then the definition should be in terms of end results, and it should not be specific to any particular type of service. Across all services, perhaps most important is fairness among flows. Beyond that, the criteria for "good end results" differ by traffic type. For example, for non-interactive bulk transfer, high goodput and fairness are desirable.
At the other end of the continuum, for interactive traffic bounded delay is important, though throughput still matters, as does fairness.

<slightly-off-topic>

The whole fairness thing is a sticky point and goes beyond AQM alone. Today, fairness on the Internet and most networks relies on the good behavior of end-system protocols such as TCP. There are hyper-aggressive TCP variants, and also real-time applications that don't reduce load when loss is detected. For there to be enforceable fairness, something like SFQ, or a variant such as cascaded SFQ, is needed to better isolate flows. SFQ just breaks the queue up into groups of flows, and many unlucky ones will end up in a queue with a flow behaving badly. In cascaded SFQ, if any specific queue is growing, that queue is broken down further. The depth and/or total number of queues in cascaded SFQ is generally bounded, but the end result is to isolate poorly behaving flows very well, with far fewer queues than would be needed for one queue per flow. No one, AFAIK, has tried to allow flows to pick a type of queue (small queue vs. deep).

For SFQ or cascaded SFQ it might be best if, before loss occurs, high variation in delay and/or growing delay causes the delay-sensitive application to back off. This way, if fairness is achieved, "bounded delay" may be better achieved without forcing a small queue. This may also be related but slightly off topic.

</slightly-off-topic>

Back to AQM. Some forms of AQM do better at specific criteria, or do better at finding a good tradeoff among the commonly cited criteria: fairness, bulk transfer goodput, and bounded delay. Where the tradeoffs should be set is maybe not a good thing for the AQM WG to try to define, as the optimal point will depend on perspective - on what mix of services is being used, and on which services are of greater importance to a specific individual or organization. So briefly, my answer is: as a WG, I don't think we want to go there.
If we do go there at all, then we should define "good AQM" in terms of achieving a "good" tradeoff between fairness, bulk transfer goodput, and bounded delay. IMHO sometimes vague is better.

Curtis
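The SFQ behaviour Curtis describes - flows hashed into a fixed set of queues, so an unlucky flow can share a queue with a badly behaving one - can be illustrated with a toy sketch (my own illustration; real SFQ uses far more queues plus a periodically perturbed hash):

```python
# Toy sketch of SFQ-style flow hashing (not real qdisc code): flows are
# hashed into a small fixed set of queues, so collisions are inevitable.

import zlib

NUM_QUEUES = 8  # deliberately tiny; Linux SFQ defaults to 128 buckets

def queue_index(src: str, dst: str, sport: int, dport: int) -> int:
    """Hash a flow 4-tuple into one of NUM_QUEUES queue bins."""
    key = f"{src}:{sport}-{dst}:{dport}".encode()
    return zlib.crc32(key) % NUM_QUEUES

# 20 hypothetical flows from one host: more flows than queues, so by the
# pigeonhole principle at least one queue holds several flows -- the
# "unlucky" flows sharing with a possibly misbehaving neighbour.
flows = [("10.0.0.1", "10.0.0.2", 5000 + i, 80) for i in range(20)]
bins: dict[int, list] = {}
for f in flows:
    bins.setdefault(queue_index(*f), []).append(f)

print(max(len(v) for v in bins.values()) >= 2)  # True
```

Cascaded SFQ, as Curtis sketches it, would react to a growing bin by re-hashing just that bin's flows into further sub-queues, isolating the offender without paying for one queue per flow up front.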
* Re: [Bloat] [aqm] What is a good burst? -- AQM evaluation guidelines

From: Fred Baker (fred) @ 2014-01-02 6:31 UTC
To: <curtis@ipv6.occnc.com>; +Cc: bloat, aqm, <end2end-interest@postel.org>

On Dec 15, 2013, at 10:56 AM, Curtis Villamizar <curtis@ipv6.occnc.com> wrote:

> So briefly, my answer is: as a WG, I don't think we want to go there.
> If we do go there at all, then we should define "good AQM" in terms of
> achieving a "good" tradeoff between fairness, bulk transfer goodput,
> and bounded delay. IMHO sometimes vague is better.

As you may have worked out from my previous comments in these threads, I agree with you. I don't think this can be nailed down in a universal sense. What can be described is the result in the network: delays build up that persist, as opposed to coming and going, and as a result applications don't work as well as they might - and at that point, it is appropriate for the network to inform the transport.
* Re: [Bloat] [e2e] [aqm] What is a good burst? -- AQM evaluation guidelines

From: dpreed @ 2014-01-03 18:17 UTC
To: Fred Baker (fred); +Cc: bloat, aqm, <curtis@ipv6.occnc.com>

End-to-end queueing delay (the aggregate of delays in all queues except the queues in the endpoints themselves) should typically never (never means <99.9% of any hour-long period) exceed 200 msec in the worst case, and if at all possible never exceed 100 msec, in networks capable of carrying more than 1 Mbit/sec to and from endpoints (I would call those high-bitrate nets, the stage up from "dialup" networks).

There are two reasons for this:

1) Round-trip "RPC" response times for interactive applications become unreasonable above 100 msec.

2) Flow control at the source that stanches the entry of data into the network (which can be either switching media codecs or just pushing back on the application rate - whether it is driven by the receiver or the sender, both of which are common) must respond quickly, lest more packets be dumped into the network that sustain congestion.

Fairness is a different axis, but I strongly suggest that there are other ways to achieve approximate fairness of any desired type without building up queues in routers. It's perfectly reasonable to remember (in all the memory that *would otherwise have caused trouble by holding packets rather than discarding them*) the source/dest information and sizes of recently processed (forwarded or discarded) packets. This information takes less space than the packets themselves, of course! It can even be further compressed by "coding or hashing" techniques. Such live data about *recent behavior* is all you need for fairness in balancing signaling back to the source.
If all of the brainpower on this list cannot take that previous paragraph and expand it to implement the solution I am talking about, I would be happy (at my consulting rates, which are quite high) to write the code for you. But I have a day job that involves low-level scheduling and queueing work in a different domain of application.

Can we please get rid of the nonsense that implies that the only information one can have at a router/switch is the set of packets that are clogging its outbound queues? Study some computer algorithms that provide memory of recent history... and please, please, please stop insisting that intra-network queues should build up for any reason whatsoever other than instantaneous transient burstiness of convergent traffic. They should persist as briefly as possible, and not be sustained for some kind of "optimum" throughput that can be gained by reframing the problem.

On Thursday, January 2, 2014 1:31am, "Fred Baker (fred)" <fred@cisco.com> said:

> On Dec 15, 2013, at 10:56 AM, Curtis Villamizar <curtis@ipv6.occnc.com> wrote:
>
> > So briefly, my answer is: as a WG, I don't think we want to go there.
> > If we do go there at all, then we should define "good AQM" in terms of
> > achieving a "good" tradeoff between fairness, bulk transfer goodput,
> > and bounded delay. IMHO sometimes vague is better.
>
> As you may have worked out from my previous comments in these threads, I agree
> with you. I don't think this can be nailed down in a universal sense. What can be
> described is the result in the network, in that delays build up that persist, as
> opposed to coming and going, and as a result applications don't work as well as
> they might - and at that point, it is appropriate for the network to inform the
> transport.
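One reading of the per-flow bookkeeping David proposes - remembering the sources, destinations, and sizes of recently processed packets instead of holding the packets themselves - can be sketched as a small decaying accumulator. This is purely my illustration of the idea, not code from anyone in the thread:

```python
# Hypothetical sketch of "memory of recent history" at a router: a
# decaying per-flow byte count of recently forwarded/dropped traffic,
# usable to bias congestion signals toward the heaviest recent senders
# without retaining any packet payloads.

from collections import defaultdict

class RecentHistory:
    def __init__(self, decay: float = 0.5):
        self.bytes_seen: dict = defaultdict(float)  # flow-id -> decayed bytes
        self.decay = decay

    def record(self, flow_id, size: int) -> None:
        """Account a forwarded or discarded packet; the packet itself is gone."""
        self.bytes_seen[flow_id] += size

    def tick(self) -> None:
        """Age the history periodically so only recent behaviour counts."""
        for k in list(self.bytes_seen):
            self.bytes_seen[k] *= self.decay
            if self.bytes_seen[k] < 1:
                del self.bytes_seen[k]

    def heaviest(self):
        """Flow most responsible for recent load -- first candidate to signal."""
        return max(self.bytes_seen, key=self.bytes_seen.get)

h = RecentHistory()
for _ in range(10):
    h.record("bulk", 1500)   # a bulk transfer flow
h.record("voip", 200)        # a light real-time flow
print(h.heaviest())  # bulk
```

The "coding or hashing" compression David mentions would replace the dict with a fixed-size sketch (e.g. hashed counters), trading exactness for bounded memory; the principle of signalling from recent history rather than standing queues is the same.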
* Re: [Bloat] [aqm] [e2e] What is a good burst? -- AQM evaluation guidelines

From: Dave Taht @ 2014-01-30 19:27 UTC
To: David Reed; +Cc: <curtis@ipv6.occnc.com>, aqm, bloat

On Fri, Jan 3, 2014 at 10:17 AM, <dpreed@reed.com> wrote:

> End-to-end queueing delay (aggregate of delays in all queues except for the
> queues in the endpoints themselves) should typically never (never means
> <99.9% of any hour-long period) exceed 200 msec. in the worst case, and if
> at all possible never exceed 100 msec.

Latency requirements are a budget that can be spent at multiple levels in a stack. If you were to say that total latency/propagation delay should not exceed lightspeed requirements + a few ms, I'd agree with you even harder than I already do.

http://gettys.wordpress.com/2013/07/10/low-latency-requires-smart-queuing-traditional-aqm-is-not-enough/

I also generally prefer predictable latency (low jitter), not just from a human-factors perspective but as a means of doing congestion avoidance detection better.

> in networks capable of carrying
> more than 1 Mbit/sec to and from endpoints (I would call that high-bitrate
> nets, the stage up from "dialup" networks).

Here are some recent tests of fq_codel on comcast ipv6 over a coast-to-coast path (California to Maine - only *65ms* inherent delay, quite good):

http://snapon.lab.bufferbloat.net/~cero2/jimreisert/results.html

Utilization ~100%, total induced latency under 10ms (some of this coming from the rate-limited upstream and the HZ=250 cpu scheduler delay for the rate limiter, not the core algorithms). The results with pie or codel alone are considerably worse (40-60ms induced delay), but not bad compared to the alternative of no aqm/packet scheduling, as shown in the 2nd and 3rd graphs. I'm happy.
There seems to be not a lot of slop left to fix in fq_codel, and Amdahl's law takes queuing delay out of the latency equation for applications... well, aside from getting the CMTSes/dslams/modems fixed... :(

I would additionally distinguish between 100mbit networks and 1g+ networks (as well as wireless/wifi/cable), where bursty technologies are seemingly needed (so long as bursts stay below human-perceptible latency factors). I have a test of an intel nuc with linux 3.11 and pfifo_fast (the default qdisc) on a gbit lan vs fq_codel, with and without tso/gso/ufo offloads, lying around somewhere. With pfifo_fast you end up with one stream using more throughput than the others and about 8ms latency; with fq_codel, 2.2; and without tso/gso you can't saturate the medium. I have some hopes that the new fq scheduler and some tso fixes make the results better; I think 2ms of induced queue latency on a gig lan is a lot.

> There are two reasons for this:
> 1) round-trip "RPC" response time for interactive applications > 100 msec.
> become unreasonable.

Add in the whole list of human-factors issues noted in the url above.

> 2) flow control at the source that stanches the entry of data into the
> network (which can be either switching media codecs or just pushing back on
> the application rate - whether it is driven by the receiver or the sender,
> both of which are common) must respond quickly, lest more packets be dumped
> into the network that sustain congestion.

A big problem is predictability. Recently there has been something of a push to get retries and retransmits on wifi to "give up" at 50ms induced latency, instead of 250ms (which some vendors try to do). We have an inherent quantum problem of 1-4ms per txop in present-day wifi that seems to make 50ms a barely achievable outer limit in the presence of multiple stations.
> Fairness is a different axis, but I strongly suggest that there are other
> ways to achieve approximate fairness of any desired type without building up
> queues in routers. It's perfectly reasonable to remember (in all the memory
> that *would otherwise have caused trouble by holding packets rather than
> discarding them*) the source/dest information and sizes of recently
> processed (forwarded or discarded) packets. This information takes less
> space than the packets themselves, of course! It can even be further
> compressed by "coding or hashing" techniques. Such live data about *recent
> behavior* is all you need for fairness in balancing signaling back to the
> source.

I concur. Long on my todo list for *codel has been gaining the ability to toss drop/mark/current-bandwidth information on packets up to userspace, where it could be used to make more intelligent routing decisions, and/or to feed more information back to the senders. I don't think "source quench" is going to work, tho... I am encouraged by recent work in openflow in this area.

> If all of the brainpower on this list cannot take that previous paragraph
> and expand it to implement the solution I am talking about, I would be happy
> (at my consulting rates, which are quite high) to write the code for you.
> But I have a day job that involves low-level scheduling and queueing work in
> a different domain of application.
>
> Can we please get rid of the nonsense that implies that the only information
> one can have at a router/switch is the set of packets that are clogging its
> outbound queues? Study some computer algorithms that provide memory of
> recent history....

Multiple devices are still constrained by memory and cpu and the need for low latency inside those devices. At 10Gig you have ns to make decisions in. On 802.11ac, you have us. Thus we end up with dedicated hardware and software doing these jobs, which have issues doing stuff in-band, or even out of band.
That said, softer routers are doing hundreds of gigabits, 802.11ac devices contain a lot of hardware assist and a dedicated cpu (but closed firmware), and the future looks bright for smarter hardware.

btw: we have (at least temporarily) hit a performance wall on the hardware we use in cerowrt - not in the aqm algorithms, but in the software rate limiter, which currently peaks out at about 60mbits. Something faster than the current htb algo is needed... (suggestions?) (or a switch to faster hardware than the 7-year-old chipset cero uses)

> and please, please, please stop insisting that
> intra-network queues should build up for any reason whatsoever other than
> instantaneous transient burstiness of convergent traffic. They should
> persist as briefly as possible, and not be sustained for some kind of
> "optimum" throughput that can be gained by reframing the problem.

Well, I outlined above that bursts are needed in some technologies to keep them operating at good throughput - thus BQL for linux ethernet, and some similar techniques under consideration for wifi. But I largely agree. Anybody want to aim for 5ms queue delay on intercontinental links?

--
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
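The software rate limiter Dave mentions hitting a wall on is HTB, which maintains a whole class hierarchy; the core admission decision underneath any such shaper, though, is a token bucket. A toy single-class sketch of that decision (my illustration - far simpler than HTB itself, and not cerowrt code):

```python
# Toy single-class token bucket (an illustration, not HTB): admit a
# packet only if enough tokens have accumulated at the configured rate;
# otherwise the caller must queue or drop it.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8     # token refill rate, bytes per second
        self.burst = burst_bytes     # bucket depth: largest tolerated burst
        self.tokens = burst_bytes    # start full
        self.last = 0.0              # timestamp of the previous decision

    def admit(self, now: float, size: int) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False  # sending now would exceed the configured rate

tb = TokenBucket(rate_bps=60e6, burst_bytes=15000)  # ~60 Mbit/s ceiling
print(tb.admit(0.0, 1500))   # True: the burst allowance covers it
print(tb.admit(0.0, 15000))  # False: bucket nearly drained
```

The cost that bites at high packet rates is doing this timestamp-and-refill bookkeeping (plus hierarchy walks, in HTB's case) per packet in software, which is one reason the thread gestures at hardware assist instead.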
* Re: [Bloat] [aqm] What is a good burst? -- AQM evaluation guidelines

From: Fred Baker (fred) @ 2013-12-15 21:42 UTC
To: Naeem Khademi; +Cc: <end2end-interest@postel.org>, aqm, bloat

On Dec 14, 2013, at 9:35 PM, Naeem Khademi <naeem.khademi@gmail.com> wrote:

> Hi all
>
> I'm not sure if this has already been covered in any of the other threads, but looking at http://www.ietf.org/proceedings/88/slides/slides-88-aqm-5.pdf and draft-ietf-aqm-recommendation-00, the question remains: "what is a good burst (size) that AQMs should allow?" and/or "how an AQM can have a notion of the right burst size?".
>
> and how "naturally-occurring bursts" mentioned in draft-ietf-aqm-recommendation-00 can be defined?

Imagine, if you will, that you have a host and a network in front of it, including a first-hop switch or router. The host has a TCP offload engine (TSO), which is a device that accepts a large chunk of data and sends as much of it as it has permission to send, as quickly as it can. The host has, for the sake of argument, a 10 Mbps interface, and everything else in the network has interfaces whose rates are measured in gigabits. The host gives its TSO one chunk of data, so that can't be called a "burst" - it's one message. The TSO sends data as quickly as it can, but presumably does little more than keep the transmission system operating without a pause; while it might queue up 45 messages at a crack, there is no requirement that it do so, so the term "burst" doesn't have a lot of meaning there.
And as the data moves through the network, the rate of the particular session is absolutely lost in the available capacity. So a burst, in the sense of the definition, never happens. Now, repeat the experiment. However, in this case the host has a gig-E interface, and the next interface that its router uses is 10 or 100 MBPS. The host and its TSO, and for that matter the router, do exactly the same thing. As perceived by the router, data is arriving much more quickly than it is leaving, resulting in a temporarily deep queue. If the propagation delay through the remainder of the network and the destination host are appropriate, acknowledgements could arrive at the TSO, soliciting new transmissions, before that queue empties. In that case, it is very possible that the queue remains full for a period of time. This network event could last for quite some time. The second is clearly a burst, according to the definition, and I would argue that it is naturally occurring. I imagine you have heard Van and/or Kathy talk about "good queue" vs "bad queue". "Good queue" keeps enough traffic in it to fully utilize its egress. "Bad queue" also does so, but does so in a manner that also materially increases measured latency. This difference is what is behind my comment that the objective of a congestion management algorithm (such as TCP's, but not limited to it) is to keep the amount of data outstanding large enough to maximize its transmission rate through the network, but not so large as to materially increase measured latency or probability of loss. I would argue that this concept of "Good Queue" is directly relevant to the concept of an acceptable burst size. In the first transmission in a session, the sender has no information about what it will experience, so it behoves it to behave in a manner that is unlikely to create a significant amount of "bad queue" - conservatively. But it by definition has no numbers by which to quantify that. 
Hence, we make recommendations about the initial window size. After that, I would argue that it should continue to behave in a manner that doesn't lead to "bad queue", but is free to operate in any manner that seeks to keep the amount of data outstanding large enough to maximize its transmission rate through the network, but not so large as to materially increase measured latency or probability of loss. At the point that it sends data in a manner that creates a sustained queue, it has exceeded what would be considered a useful burst size. [-- Attachment #2: Message signed with OpenPGP using GPGMail --] [-- Type: application/pgp-signature, Size: 195 bytes --] ^ permalink raw reply [flat|nested] 16+ messages in thread
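To put rough numbers on Fred's thought experiment (the figures below are illustrative assumptions, not from the thread): a single 64 KB TSO chunk leaving a 1 Gbps host interface toward a 100 Mbps egress piles up almost the whole chunk in the router's queue, which then takes several milliseconds to drain:

```python
def burst_backlog(burst_bytes, in_bps, out_bps):
    """Peak queue backlog (bytes) and total drain time (s) for one burst
    arriving at in_bps into an egress running at out_bps."""
    arrive_s = burst_bytes * 8 / in_bps      # time for the burst to arrive
    drain_s = burst_bytes * 8 / out_bps      # time to serialize it back out
    # While the burst is still arriving, the egress is already draining,
    # so the peak backlog is what came in minus what already left.
    peak_bytes = burst_bytes - out_bps * arrive_s / 8
    return peak_bytes, drain_s

peak, drain = burst_backlog(64 * 1024, 1e9, 100e6)
print(round(peak))            # ~58982 bytes still queued as the burst ends
print(round(drain * 1e3, 2))  # ~5.24 ms for the egress to work it off
```

If acknowledgements solicit the next chunk before those ~5 ms elapse, the queue never empties - exactly the sustained condition Fred describes.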
* Re: [Bloat] [e2e] [aqm] What is a good burst? -- AQM evaluation guidelines 2013-12-15 21:42 ` [Bloat] [aqm] " Fred Baker (fred) @ 2013-12-15 22:57 ` Bob Briscoe 2013-12-16 7:34 ` Fred Baker (fred) 0 siblings, 1 reply; 16+ messages in thread From: Bob Briscoe @ 2013-12-15 22:57 UTC (permalink / raw) To: Fred Baker (fred), Naeem Khademi Cc: bloat, <end2end-interest@postel.org>, aqm Fred, Jonathan Morton, Michael Scharf & I took Naeem's question to mean "What should an AQM assume the size of a good burst is?" whereas I think you and David C-B took the question to mean "What should an end-system take the size of a good burst to be?". Naeem, could you clarify which you were asking? Bob At 21:42 15/12/2013, Fred Baker (fred) wrote: >On Dec 14, 2013, at 9:35 PM, Naeem Khademi <naeem.khademi@gmail.com> wrote: > > > Hi all > > > > I'm not sure if this has already been covered in any of the other > threads, but looking at > http://www.ietf.org/proceedings/88/slides/slides-88-aqm-5.pdf and > draft-ietf-aqm-recommendation-00, the question remains: "what is a > good burst (size) that AQMs should allow?" and/or "how an AQM can > have a notion of the right burst size?". > > > > and how "naturally-occuring bursts" mentioned in > draft-ietf-aqm-recommendation-00 can be defined? > > >Imagine, if you will, that you have a host and a network in front of >it including a first hop switch or router.The host gas a TCP offload >engine, which is a device that accepts a large chunk of data and >sends as much of it as it has permission to send as quickly as it >can. The host has, for sake of argument, a 10 MBPS interface, and >everything else in the network has interfaces whose rate are >measured in gigabits. The host gives its TSO one chunk of data, so >that can't be called a "burst" - it's one message. 
The TSO sends >data as quickly as it can, but presumably does little more than keep >the transmission system operating without a pause; while it might >queue up 45 messages at a crack, there is no requirement that it do >so, so the term "burst" there doesn't have a lot of meaning. And as >the data moves through the network, the rate of the particular >session is absolutely lost in the available capacity. So a burst, in >the sense of the definition, never happens. > >Now, repeat the experiment. However, in this case the host as a >gig-E interface, and the next interface that its router uses is 10 >or 100 MBPS. The host and its TSO, and for that matter the router, >do exactly the same thing. As perceived by the router, data is >arriving much more quickly than it is leaving, resulting in a >temporarily deep queue. If the propagation delay through the >remainder of the network and the destination host are appropriate, >acknowledgements could arrive at the TSO, soliciting new >transmissions, before that queue empties. In that case, it is very >possible that the queue remains full for a period of time. This >network event could last for quite some time. > >The second is clearly a burst, according to the definition, and I >would argue that it is naturally occurring. > >I imagine you have heard Van and/or Kathy talk about "good queue" vs >"bad queue". "Good queue" keeps enough traffic in it to fully >utilize its egress. "Bad queue" also does so, but does so in a >manner that also materially increases measured latency. This >difference is what is behind my comment on the objective of a >congestion management algorithm (such as TCP's but not limited to >it) that its objective is to keep the amount of data outstanding >large enough to maximize its transmission rate through the network, >but not so large as to materially increase measured latency or >probability of loss. 
> >I would argue that this concept of "Good Queue" is directly relevant >to the concept of an acceptable burst size. In the first >transmission in a session, the sender has no information about what >it will experience, so it behoves it to behave in a manner that is >unlikely to create a significant amount of "bad queue" - >conservatively. But it by definition has no numbers by which to >quantify that. Hence, we make recommendations about the initial >window size. After that, I would argue that it should continue to >behave in a manner that doesn't led to "bad queue", but is free to >operate in any manner that seeks to keep the amount of data >outstanding large enough to maximize its transmission rate through >the network, but not so large as to materially increase measured >latency or probability of loss. At the point that it sends data in a >manner that creates a sustained queue, it has exceeded what would be >considered a useful burst size. ________________________________________________________________ Bob Briscoe, BT ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Bloat] [e2e] [aqm] What is a good burst? -- AQM evaluation guidelines 2013-12-15 22:57 ` [Bloat] [e2e] " Bob Briscoe @ 2013-12-16 7:34 ` Fred Baker (fred) 2013-12-16 13:47 ` Naeem Khademi 0 siblings, 1 reply; 16+ messages in thread From: Fred Baker (fred) @ 2013-12-16 7:34 UTC (permalink / raw) To: Bob Briscoe; +Cc: bloat, aqm, <end2end-interest@postel.org> [-- Attachment #1: Type: text/plain, Size: 994 bytes --] On Dec 15, 2013, at 2:57 PM, Bob Briscoe <bob.briscoe@bt.com> wrote: > Fred, > > Jonathan Morton, Michael Scharf & I took Naeem's question to mean "What should an AQM assume the size of a good burst is?" whereas I think you and David C-B took the question to mean "What should an end-system take the size of a good burst to be?". I can't comment on what he means. I took the question as "what should a system that is in receipt of traffic take a 'burst', and more especially a 'good burst', to be?" I don't know that a sending transport (which is to be distinguished from the queueing arrangement in that same system) or a receiving system *has* a definition of a "good" or "bad" burst. The one is sending data, which in the context of my two examples might be a good or bad idea, and the other is receiving it. From the receiver's perspective, the data either arrived or it didn't; if it arrived, there is no real argument for not delivering it to its application... [-- Attachment #2: Message signed with OpenPGP using GPGMail --] [-- Type: application/pgp-signature, Size: 195 bytes --] ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Bloat] [e2e] [aqm] What is a good burst? -- AQM evaluation guidelines 2013-12-16 7:34 ` Fred Baker (fred) @ 2013-12-16 13:47 ` Naeem Khademi 2013-12-16 14:05 ` Naeem Khademi 2013-12-16 14:28 ` Jonathan Morton 0 siblings, 2 replies; 16+ messages in thread From: Naeem Khademi @ 2013-12-16 13:47 UTC (permalink / raw) To: Fred Baker (fred); +Cc: bloat, aqm, <end2end-interest@postel.org> [-- Attachment #1: Type: text/plain, Size: 2061 bytes --] Bob, Fred and all I'll copy/paste the question here again: "what is a good burst (size) that AQMs should allow?" and/or "how an AQM can have a notion of the right burst size?" So, obviously, as Bob mentioned, I'm concerned about what AQMs should or shouldn't do. The mission of dealing with packet bursts, in addition to the task of keeping the standing queue very low or minimal, is part of the "AQM evaluation criteria" I envision. While I do agree with all of Fred's remarks, I'm more concerned with having an answer to this for the settings where AQMs might get deployed. An example: when designing my AQM X, should I allow 64K TSO-generated bursts to pass safely without dropping, or not? Does the answer (whatever it is) also apply to the burst sizes typical of multimedia traffic, etc.? If the answer is "yes", should an AQM design be actively aware of what the application layer does in terms of sending bursty traffic, and to what extent? Regards, Naeem On Mon, Dec 16, 2013 at 8:34 AM, Fred Baker (fred) <fred@cisco.com> wrote: > > On Dec 15, 2013, at 2:57 PM, Bob Briscoe <bob.briscoe@bt.com> > wrote: > > > Fred, > > > > Jonathan Morton, Michael Scharf & I took Naeem's question to mean "What > should an AQM assume the size of a good burst is?" whereas I think you and > David C-B took the question to mean "What should an end-system take the > size of a good burst to be?". > > I can't comment on what he means. 
I took the question as "what should a > system that is in receipt of what it might consider a 'burst', and more > especially a 'good burst', to be?" > > I don't know that a sending transport (which is to be distinguished from > the queueing arrangement in that same system) or a receiving system *has* a > definition of a "good" or "bad" burst. The one is sending data, which in > the context of y two examples might be a good or bad idea, and the other is > receiving it. From the receiver's perspective, the data either arrived or > it didn't; if it arrived, there is no real argument for not delivering it > to its application... > [-- Attachment #2: Type: text/html, Size: 3284 bytes --] ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Bloat] [e2e] [aqm] What is a good burst? -- AQM evaluation guidelines 2013-12-16 13:47 ` Naeem Khademi @ 2013-12-16 14:05 ` Naeem Khademi 2013-12-16 17:30 ` Fred Baker (fred) 2013-12-16 14:28 ` Jonathan Morton 1 sibling, 1 reply; 16+ messages in thread From: Naeem Khademi @ 2013-12-16 14:05 UTC (permalink / raw) To: Fred Baker (fred); +Cc: bloat, aqm, <end2end-interest@postel.org> [-- Attachment #1: Type: text/plain, Size: 3060 bytes --] and to clarify more about what else was discussed, it seems to me some of us tend to relate the notion of "good queue" vs. "bad queue" used in the KN+VJ ACM Queue paper to my question on "good bursts". While they are likely to be correlated (I have no argument on this now), the notion of "good burst" goes beyond the "good queue" defined in that paper. Based on their definition, a good queue is one that minimizes the standing queue (or gets rid of it entirely) while allowing a certain amount of (sub-RTT? typically 100 ms) bursts and avoiding link under-utilization. That notion (again, I have no argument on its correctness for now) is different from my question on "good bursts", which is: once we manage to get rid of the standing queue, what types/sizes of bursts should I let AQM X protect/handle? Naeem On Mon, Dec 16, 2013 at 2:47 PM, Naeem Khademi <naeem.khademi@gmail.com> wrote: > Bob, Fred and all > > I'll copy/paste the question here again: "what is a good burst (size) > that AQMs should allow?" and/or "how an AQM can have a notion of the right > burst size?" > > So, obviously, as Bob mentioned, I'm concerned about what AQMs should or > shouldn't do. The mission of dealing with packet bursts in addition to the > task of keeping the standing queue very low or minimal is part of an "AQM > evaluation criteria" I envision. While I do agree with all Fred's remarks, > I'm more concerned to have an answer for this, for where AQMs might get > deployed. 
> > An example: when designing my AQM X should I care about 64K TSO-generated > bursts to safely pass without dropping or not? Does the answer (whatever > it is) also apply to the burst sizes typical of multimedia traffic, etc.? > if the answer is "yes", should an AQM design be actively aware of what > application layer does in terms of sending bursty traffic or not? and to > what extent if yes? > > Regards, > Naeem > > On Mon, Dec 16, 2013 at 8:34 AM, Fred Baker (fred) <fred@cisco.com> wrote: > >> >> On Dec 15, 2013, at 2:57 PM, Bob Briscoe <bob.briscoe@bt.com> >> wrote: >> >> > Fred, >> > >> > Jonathan Morton, Michael Scharf & I took Naeem's question to mean "What >> should an AQM assume the size of a good burst is?" whereas I think you and >> David C-B took the question to mean "What should an end-system take the >> size of a good burst to be?". >> >> I can't comment on what he means. I took the question as "what should a >> system that is in receipt of what it might consider a 'burst', and more >> especially a 'good burst', to be?" >> >> I don't know that a sending transport (which is to be distinguished from >> the queueing arrangement in that same system) or a receiving system *has* a >> definition of a "good" or "bad" burst. The one is sending data, which in >> the context of y two examples might be a good or bad idea, and the other is >> receiving it. From the receiver's perspective, the data either arrived or >> it didn't; if it arrived, there is no real argument for not delivering it >> to its application... >> > > [-- Attachment #2: Type: text/html, Size: 4654 bytes --] ^ permalink raw reply [flat|nested] 16+ messages in thread
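One common yardstick for the sub-RTT burst allowance Naeem refers to is the path's bandwidth-delay product - one RTT of data at the bottleneck rate. As Jonathan noted earlier in the thread, bandwidth and natural RTT tend to move in opposite directions, so the BDP is often of the same order across very different environments (the figures below are illustrative, not from the thread):

```python
def bdp_bytes(bw_bps, rtt_s):
    """Bandwidth-delay product: one RTT's worth of data at the given rate."""
    return bw_bps * rtt_s / 8

# A 10 Mbps Internet path at 100 ms RTT and a 1 Gbps LAN path at 1 ms RTT
# carry the same BDP:
print(round(bdp_bytes(10e6, 0.100)))  # 125000 bytes
print(round(bdp_bytes(1e9, 0.001)))   # 125000 bytes
```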
* Re: [Bloat] [e2e] [aqm] What is a good burst? -- AQM evaluation guidelines 2013-12-16 14:05 ` Naeem Khademi @ 2013-12-16 17:30 ` Fred Baker (fred) 0 siblings, 0 replies; 16+ messages in thread From: Fred Baker (fred) @ 2013-12-16 17:30 UTC (permalink / raw) To: Naeem Khademi; +Cc: bloat, aqm, <end2end-interest@postel.org> [-- Attachment #1.1: Type: text/plain, Size: 4384 bytes --] On Dec 16, 2013, at 6:05 AM, Naeem Khademi <naeem.khademi@gmail.com> wrote: > and to clarify more about what else was discussed, it seems to me some of us tend to correspond and relate the notion of "good queue" vs. "bad queue" used by KN+VJ ACM queue paper to my question on "good bursts". While they likely to be correlated (I have no argument on this now), the notion of "good burst" goes beyond the "good queue" defined in that paper. Based on their definition a good queue is a queue that minimizes the standing queue (or gets rid of it entirely) while allowing a certain amount of (sub-RTT? typical 100 ms) bursts while avoiding the link to get under-utilized. That notion (again, I have no argument on its correctness for now) is different from my question on "good bursts" which means that: once we manage to get rid of the standing queue, what types/sizes of bursts I should let the AQM X to protect/handle? I think your question has a problem in it. Going back to my thought experiment, suppose that we have a queuing point whose egress speed is X and a sender that is sending data in a CBR fashion at (1 + epsilon)*X. In a very formal sense, the entire transmission stream is a single burst, and one could imagine it taking hundreds or thousands of packets being sent and forwarded before the queue built up to a point that AQM would push back. In that case, I would expect an "acceptable burst" to be hundreds or thousands of packets. 
If on the other hand you have a new TCP session in slow-start that is using an intermediate link that is at the time fully utilized and on the cusp of AQM pushing back on it, the new session is very likely to tip the balance, and a burst of a few packets might well push it over the top. So to my mind, the question isn't about the size of the burst. It is about the rate of onset and the effect of that burst on the latency and probability of loss for itself and competing sessions. And it will never come down to a magic number N in that N is somehow "right", N-1 is "better", and N+1 is "over the top." There are no such magic numbers. > Naeem > > > On Mon, Dec 16, 2013 at 2:47 PM, Naeem Khademi <naeem.khademi@gmail.com> wrote: > Bob, Fred and all > > I'll copy/paste the question here again: "what is a good burst (size) that AQMs should allow?" and/or "how an AQM can have a notion of the right burst size?" > > So, obviously, as Bob mentioned, I'm concerned about what AQMs should or shouldn't do. The mission of dealing with packet bursts in addition to the task of keeping the standing queue very low or minimal is part of an "AQM evaluation criteria" I envision. While I do agree with all Fred's remarks, I'm more concerned to have an answer for this, for where AQMs might get deployed. > > An example: when designing my AQM X should I care about 64K TSO-generated bursts to safely pass without dropping or not? Does the answer (whatever it is) also apply to the burst sizes typical of multimedia traffic, etc.? if the answer is "yes", should an AQM design be actively aware of what application layer does in terms of sending bursty traffic or not? and to what extent if yes? 
> > Regards, > Naeem > > On Mon, Dec 16, 2013 at 8:34 AM, Fred Baker (fred) <fred@cisco.com> wrote: > > On Dec 15, 2013, at 2:57 PM, Bob Briscoe <bob.briscoe@bt.com> > wrote: > > > Fred, > > > > Jonathan Morton, Michael Scharf & I took Naeem's question to mean "What should an AQM assume the size of a good burst is?" whereas I think you and David C-B took the question to mean "What should an end-system take the size of a good burst to be?". > > I can't comment on what he means. I took the question as "what should a system that is in receipt of what it might consider a 'burst', and more especially a 'good burst', to be?" > > I don't know that a sending transport (which is to be distinguished from the queueing arrangement in that same system) or a receiving system *has* a definition of a "good" or "bad" burst. The one is sending data, which in the context of y two examples might be a good or bad idea, and the other is receiving it. From the receiver's perspective, the data either arrived or it didn't; if it arrived, there is no real argument for not delivering it to its application... > > Make things as simple as possible, but not simpler. Albert Einstein [-- Attachment #1.2: Type: text/html, Size: 7509 bytes --] [-- Attachment #2: Message signed with OpenPGP using GPGMail --] [-- Type: application/pgp-signature, Size: 195 bytes --] ^ permalink raw reply [flat|nested] 16+ messages in thread
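Fred's CBR example can also be put in rough numbers (illustrative assumptions, not from the thread): with a sender at (1 + eps) * X into an egress of X, the queue grows at only eps * X, so the time - and the packet count - before any AQM threshold is reached scales as 1/eps:

```python
def time_to_threshold(threshold_bytes, egress_bps, eps):
    """Seconds for a CBR overload of (1 + eps) * egress to build the
    given backlog, since the queue grows at the net rate eps * egress."""
    growth_bps = eps * egress_bps
    return threshold_bytes * 8 / growth_bps

# A 10 Mbps egress, a 1% overload, and 5 ms worth of standing queue
# (6250 bytes) as the AQM threshold:
t = time_to_threshold(6250, 10e6, 0.01)
print(round(t, 3))  # ~0.5 s before a 5 ms standing queue forms
```

At half a second of arrivals just above 10 Mbps, roughly four hundred 1500-byte packets have been forwarded before the AQM pushes back - consistent with Fred's "hundreds or thousands".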
* Re: [Bloat] [e2e] [aqm] What is a good burst? -- AQM evaluation guidelines 2013-12-16 13:47 ` Naeem Khademi 2013-12-16 14:05 ` Naeem Khademi @ 2013-12-16 14:28 ` Jonathan Morton 2013-12-16 14:50 ` Steinar H. Gunderson 1 sibling, 1 reply; 16+ messages in thread From: Jonathan Morton @ 2013-12-16 14:28 UTC (permalink / raw) To: Naeem Khademi; +Cc: bloat Mainlinglist On 16 Dec, 2013, at 3:47 pm, Naeem Khademi wrote: > An example: when designing my AQM X should I care about 64K TSO-generated bursts to safely pass without dropping or not? If there is at least 6Mbps bandwidth downstream of your queue, then codel at least will pass such a burst in isolation. If you have ECN, then you should mark instead of dropping as long as you have queue space, and use FQ semantics to minimise latency impact on other flows. > Does the answer (whatever it is) also apply to the burst sizes typical of multimedia traffic, etc.? if the answer is "yes", should an AQM design be actively aware of what application layer does in terms of sending bursty traffic or not? and to what extent if yes? That's a more interesting question. IMHO the extremely bursty behaviour of certain popular video streaming systems is broken, and should not be worked around by the multitude of receivers - it should be corrected sender-side. Relatively simple pacing algorithms which do this effectively are not difficult to design and, like much in the networking world, I am constantly surprised to find that they have not yet been deployed in earnest. I am also reminded of the livestreamed demoscene event from a couple of years ago, which was producing enormous aggregate bursts every time a frame group completed encoding (multiple times a second). This wasn't enough on a single flow to impact individual end-users much, but it overwhelmed the local and gateway buffers chronically, thereby crippling the entire livestreaming exercise until a workaround could be implemented. So in short... no. 
Bursts of that magnitude need to be clipped, to signal to senders that they are problematic. As an aside, networking infrastructure is unusual in that hardware innovation (bandwidth, buffer sizes) proceeds at a much faster pace than the associated software elements (which would normally develop in response to increased CPU power). To some extent conservatism is explainable by a desire not to break things, but when things are already broken and need to be fixed... - Jonathan Morton ^ permalink raw reply [flat|nested] 16+ messages in thread
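Jonathan's "at least 6 Mbps" figure can be sanity-checked against CoDel's default interval (100 ms in the published description; treating the interval as the burst budget is my assumption): an isolated burst escapes dropping as long as the sojourn time falls back under control within one interval, i.e. the burst drains in under roughly 100 ms:

```python
INTERVAL_S = 0.100  # CoDel's default interval; assumed here as the burst budget

def min_rate_to_pass(burst_bytes, interval_s=INTERVAL_S):
    """Smallest egress rate (bps) at which an isolated burst of this
    size drains within one CoDel interval."""
    return burst_bytes * 8 / interval_s

rate = min_rate_to_pass(64 * 1024)
print(round(rate / 1e6, 2))  # ~5.24 Mbps -- hence "6 Mbps" with some margin
```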
* Re: [Bloat] [e2e] [aqm] What is a good burst? -- AQM evaluation guidelines 2013-12-16 14:28 ` Jonathan Morton @ 2013-12-16 14:50 ` Steinar H. Gunderson 0 siblings, 0 replies; 16+ messages in thread From: Steinar H. Gunderson @ 2013-12-16 14:50 UTC (permalink / raw) To: bloat On Mon, Dec 16, 2013 at 04:28:35PM +0200, Jonathan Morton wrote: > I am also reminded of the livestreamed demoscene event from a couple of > years ago, which was producing enormous aggregate bursts every time a frame > group completed encoding (multiple times a second). This wasn't enough on > a single flow to impact individual end-users much, but it overwhelmed the > local and gateway buffers chronically, thereby crippling the entire > livestreaming exercise until a workaround could be implemented. FWIW, as the person in charge of said livestreaming: Since last year we have done pacing (with HTB hacks; for 2014, we'll use sch_fq with hand-set rate limits). It worked out much better. We're in a very simple situation, though, with only one possible quality per stream (no autotuning up/down; users will do that manually if they're not happy). I think there was a paper a while back where people reverse-engineered the Netflix algorithm for changing bitrates, and that problem isn't trivial at all. /* Steinar */ -- Homepage: http://www.sesse.net/ ^ permalink raw reply [flat|nested] 16+ messages in thread
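The sender-side pacing Jonathan calls for and Steinar implemented can be sketched as a simple scheduler (hypothetical parameters; the real deployments used HTB and sch_fq rate limits, not application code): instead of emitting a frame group as one burst, each packet is stamped with the time at which the target rate would have finished its predecessor:

```python
def pace(packet_sizes, rate_bps, start=0.0):
    """Send timestamp (s) for each packet so the flow never exceeds
    rate_bps: a burst becomes an evenly spaced sequence."""
    times, t = [], start
    for size in packet_sizes:
        times.append(t)
        t += size * 8 / rate_bps  # next packet waits out this one's slot
    return times

# A frame group of fifteen 1500-byte packets paced at 4 Mbps leaves over
# ~42 ms instead of hitting the gateway buffer all at once:
ts = pace([1500] * 15, 4e6)
print(round((ts[-1] - ts[0]) * 1e3))  # ~42 ms of spread
```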
end of thread, other threads:[~2014-01-30 19:27 UTC | newest] Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2013-12-15 5:35 [Bloat] What is a good burst? -- AQM evaluation guidelines Naeem Khademi 2013-12-15 12:26 ` Jonathan Morton 2013-12-15 15:16 ` Scharf, Michael (Michael) [not found] ` <655C07320163294895BBADA28372AF5D14C5DF@FR712WXCHMBA15.zeu. alcatel-lucent.com> 2013-12-15 20:56 ` Bob Briscoe 2013-12-15 18:56 ` [Bloat] [aqm] " Curtis Villamizar 2014-01-02 6:31 ` Fred Baker (fred) 2014-01-03 18:17 ` [Bloat] [e2e] " dpreed 2014-01-30 19:27 ` [Bloat] [aqm] [e2e] " Dave Taht 2013-12-15 21:42 ` [Bloat] [aqm] " Fred Baker (fred) 2013-12-15 22:57 ` [Bloat] [e2e] " Bob Briscoe 2013-12-16 7:34 ` Fred Baker (fred) 2013-12-16 13:47 ` Naeem Khademi 2013-12-16 14:05 ` Naeem Khademi 2013-12-16 17:30 ` Fred Baker (fred) 2013-12-16 14:28 ` Jonathan Morton 2013-12-16 14:50 ` Steinar H. Gunderson