Hi folks,

That "Michael" dude was me :)

About the stuff below, a few comments. First, an impressive effort to dig all of this up - I also thought that this was an interesting conversation to have!

However, I would like to point out that thesis defense conversations are meant to be provocative, by design - when I said that CoDel doesn't usually help and long queues would be the right thing for all applications, I certainly didn't REALLY REALLY mean that. The idea was just to be thought provoking - and indeed I found this interesting: e.g., if you think about a short HTTP/1 connection, a large buffer just gives it a greater chance to get all packets across, and the perceived latency from the reduced round trips after not dropping anything may in fact be less than with a smaller (or CoDel'ed) buffer.

But corner cases aside, in fact I very much agree with the answers Pete gives to my question below, and also with the points others have made in answering this thread. Jonathan Morton even mentioned ECN - after Dave's recent over-reaction to ECN I made a point of not bringing up ECN *yet* again, but... yes indeed, being able to use ECN to tell an application to back off instead of having to drop a packet is also one of the benefits.

(I think people easily miss the latency benefit of not dropping a packet, and thereby eliminating head-of-line blocking - packet drops require an extra RTT for retransmission, which can be quite a long time. This is about measuring latency at the right layer...)
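To put rough, purely illustrative numbers on that trade-off (the 10 Mbit/s bottleneck, 64 KB FIFO and 50 ms base RTT below are just assumptions I picked for the example; the RTO floor is the 1 second minimum from RFC 6298):

# Back-of-envelope: delay added by a big standing queue vs. the delay
# cost of recovering a dropped packet. All values are assumptions.
link_rate_bps = 10e6        # 10 Mbit/s bottleneck
queue_bytes   = 64 * 1024   # 64 KB standing queue in a plain FIFO
rtt_s         = 0.050       # 50 ms base round-trip time
min_rto_s     = 1.0         # RFC 6298 minimum retransmission timeout

queueing_delay_s = queue_bytes * 8 / link_rate_bps
print(f"standing-queue delay every flow pays:         {queueing_delay_s * 1e3:.0f} ms")
print(f"drop recovered by fast retransmit (>= 1 RTT): {rtt_s * 1e3:.0f} ms")
print(f"tail drop on a short flow recovered by RTO:   {min_rto_s * 1e3:.0f} ms")

For the one short flow that would have lost a packet, the big buffer looks great; the catch is that every other flow sharing that bottleneck pays the ~50 ms of queueing delay all the time.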
BTW, Anna Brunstrom was also very quick to give me the HTTP/2.0 example in the break after the defense. Also, TCP will generally not work very well when queues get very long... the RTT estimate gets way off.

All in all, I think this is a fun thought to consider for a bit, but not really something worth spending people's time on, IMO: big buffers are bad, period. Everything else is a corner case.

I'll use the opportunity to tell folks that I was also pretty impressed with Toke's thesis, as well as with his performance at the defense. Among the many cool things he's developed (or contributed to), my personal favorite is the airtime fairness scheduler. But there were many more. Really good stuff.

With that, I wish all the best to all you bloaters out there - thanks for reducing our queues!

Cheers,
Michael

On Nov 26, 2018, at 8:08 PM, Pete Heist <pete@heistp.net> wrote:

In Toke's thesis defense, there was an interesting exchange with examination committee member Michael (apologies for not catching the last name) regarding how the CoDel part of fq_codel helps in the real world:

https://www.youtube.com/watch?v=upvx6rpSLSw&t=2h16m20s

My attempt at a transcript is at the end of this message. (I probably won't attempt a full defense transcript, but if someone wants more of a particular section, I can try. :)

So I just thought to continue the discussion - when does the CoDel part of fq_codel actually help in the real world? I'll speculate with a few possibilities:

1) Multiplexed HTTP/2.0 requests containing both a saturating stream and interactive traffic. For example, a game that uses HTTP/2.0 to download new map data while position updates or chat happen at the same time. Standalone programs could use HTTP/2.0 this way, or for web apps, the browser may multiplex concurrent uses of XHR over a single TCP connection. I don't know of any examples.

2) SSH with port forwarding while using an interactive terminal together with a bulk transfer?

3) Does CoDel help the TCP protocol itself somehow? For example, does it speed up the round-trip time when acknowledging data segments, improving behavior on lossy links? Similarly, does it speed up the TCP close sequence for saturating flows?

Pete

---

M: In fq_codel what is really the point of CoDel?
T: Yeah, uh, a bit better intra-flow latency...
M: Right, who cares about that?
T: Apparently some people do.
M: No I mean specifically, what types of flows care about that?
T: Yeah, so, um, flows that are TCP based or have some kind of- like, elastic flows that still want low latency.
M: Elastic flows that are TCP based that want low latency...
T: Things where you want to discover the- like, you want to utilize the full link and sort of probe the bandwidth, but you still want low latency.
M: Can you be more concrete what kind of application is that?
T: I, yeah, I...
M: Give me any application example that's gonna benefit from the CoDel part- CoDel bits in fq_codel? Because I have problems with this.
T: I, I do too... So like, you can implement things this way but equivalently if you have something like fq_codel you could, like, if you have a video streaming application that interleaves control...
M: <inaudible> that runs on UDP often.
T: Yeah, but I, Netflix...
M: Ok that's a long way... <inaudible>
T: No, I tend to agree with you that, um...
M: Because the biggest issue in my opinion is, is web traffic- for web traffic, just giving it a huge queue makes the chance bigger that uh, <inaudible, ed: because of the slow start> so you may end up with a (higher) faster completion time by buffering a lot. Uh, you're not benefitting at all by keeping the queue very small, you are simply <inaudible> Right, you're benefitting altogether by just <inaudible> which is what the queue does with this nice sparse flow, uh... <inaudible>
T: You have the infinite buffers in the <inaudible> for that to work, right. One benefit you get from CoDel is that - you screw with things like - you have to drop eventually.
M: You should at some point. The chances are bigger that the small flow succeeds (if given a huge queue). And, in web surfing, why does that, uh(?)
T: Yeah, mmm...
M: Because that would be an example of something where I care about latency but I care about low completion. Other things where I care about latency they often don't send very much. <inaudible...> bursts, you have to accommodate them basically. Or you have interactive traffic which is UDP and tries to, often react from queueing delay <inaudible>. I'm beginning to suspect that fq minus CoDel is really the best <inaudible> out there.
T: But if, yeah, if you have enough buffer.
M: Well, the more the better.
T: Yeah, well.
M: Haha, I got you to say yes. [laughter :] That goes in history. I said the more the better and you said yeah.
T: No but like, it goes back to good-queue bad-queue, like, buffering in itself has value, you just need to manage it.
M: Ok.
T: Which is also the reason why just having a small queue doesn't help in itself.
M: Right yeah. Uh, I have a silly question about fq_codel, a very silly one and there may be something I missed in the papers, probably I did, but I'm I was just wondering, I mean first of all this is also a bit silly in that <inaudible> it's a security thing, and I think that's kind of a package by itself silly because fq_codel often probably <inaudible> just in principle, is that something I could easily attack by creating new flows for every packet?
T: No because, they, you will...
M: With the sparse flows, and it's gonna...
T: Yeah, but at some point you're going to go over the threshold, I, you could, there there's this thing where the flow goes in, it's sparse, it empties out and then you put it on the normal round robin implementation before you queue <inaudible> And if you don't do that then you can have, you could time packets so that they get priority just at the right time and you could have lockout.
M: Yes.
T: But now you will just fall back to fq.
M: Ok, it was just a curiosity, it's probably in the paper. <inaudible>
T: I think we added that in the RFC, um, you really need to, like, this part is important.

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
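P.S. Since the "attack it by creating a new flow for every packet" exchange at the end of the transcript is a bit hard to follow, here is roughly how I understand the sparse-flow handling Toke describes - purely my own sketch from reading RFC 8290 (names made up, CoDel itself left out), not the actual Linux code:

from collections import deque

QUANTUM = 1514  # roughly one MTU of byte credit per round (assumption)

class Flow:
    def __init__(self):
        self.pkts = deque()   # queued packets (bytes) for this flow
        self.deficit = 0      # deficit-round-robin byte credit

class FqCodelSketch:
    # Rough sketch of the two-list DRR scheduler in fq_codel (RFC 8290).
    # CoDel is omitted; a real implementation runs its drop logic where
    # dequeue() pops a packet below.
    def __init__(self):
        self.flows = {}            # flow hash -> Flow
        self.new_flows = deque()   # "sparse" flows, served first
        self.old_flows = deque()   # everything else, plain round robin

    def enqueue(self, flow_hash, pkt):
        flow = self.flows.setdefault(flow_hash, Flow())
        flow.pkts.append(pkt)
        if flow not in self.new_flows and flow not in self.old_flows:
            # A flow with no backlog (re)enters on the priority list...
            flow.deficit = QUANTUM
            self.new_flows.append(flow)

    def dequeue(self):
        while self.new_flows or self.old_flows:
            lst = self.new_flows if self.new_flows else self.old_flows
            flow = lst[0]
            if flow.deficit <= 0:
                # Credit used up: refill and rotate to the old list.
                flow.deficit += QUANTUM
                lst.popleft()
                self.old_flows.append(flow)
            elif not flow.pkts:
                lst.popleft()
                if lst is self.new_flows:
                    # ...but an emptied sparse flow is parked at the END of
                    # old_flows, not deleted, so one-packet "flows" cannot
                    # keep re-entering with priority (the lockout attack
                    # mentioned above). Empty old flows just leave the
                    # rotation.
                    self.old_flows.append(flow)
            else:
                pkt = flow.pkts.popleft()   # real fq_codel runs CoDel here
                flow.deficit -= len(pkt)
                return pkt
        return None

The detail that defeats the attack is the middle branch: a sparse flow that empties is parked at the end of the old-flows rotation instead of being forgotten, so a stream of one-packet "flows" cannot keep jumping the queue, and the scheduler just falls back to plain fq.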