* [Codel] sprout
@ 2013-07-10 19:30 Dave Taht
2013-07-10 20:19 ` Keith Winstein
0 siblings, 1 reply; 14+ messages in thread
From: Dave Taht @ 2013-07-10 19:30 UTC (permalink / raw)
To: codel, Keith Winstein
I haven't been paying a lot of attention to rmcat and webrtc until
recently, although I'd had a nice discussion with keith on it a while
back..
this particular thread sums up some interesting issues on that front.
http://www.ietf.org/mail-archive/web/rmcat/current/msg00390.html
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-10 19:30 [Codel] sprout Dave Taht
@ 2013-07-10 20:19 ` Keith Winstein
2013-07-10 20:38 ` Jim Gettys
2013-07-10 20:40 ` Dave Taht
0 siblings, 2 replies; 14+ messages in thread
From: Keith Winstein @ 2013-07-10 20:19 UTC (permalink / raw)
To: Dave Taht; +Cc: codel
[-- Attachment #1: Type: text/plain, Size: 1810 bytes --]
Thanks, Dave.
We have a Web site with more info, a talk, and source code (
http://alfalfa.mit.edu) if anybody is interested.
Something that may interest folks here is that we compared
Sprout-over-unlimited-buffer with TCP-Cubic-over-CoDel on these
cellular-type links. That is, a scenario where the network operators
implemented CoDel inside the LTE/UMTS/1xEV-DO base station (for the
downlink) and the phone manufacturers implemented CoDel inside the
"baseband" chip for the uplink.
The bottom-line result is that, for the case where a cellular user can control
all their own flows, it's roughly a wash. To a first approximation, you can
fix bufferbloat on a cellular network *either* by putting CoDel inside the
base station and baseband chip (and otherwise running the same endpoint
TCP), *or* by changing the endpoints but leaving the base station and
baseband chip unmodified.
Obviously we benefit dramatically from the per-user queues of the cellular
network. By contrast, in a typical house with a bufferbloated cable modem
where one user can cause big delays for everybody else, you can't fix
bufferbloat by fixing just one endpoint. We will have some results soon on
whether you can fix it by fixing all the endpoints (but still leaving the
"bloated" gateway intact).
Cheers,
Keith
On Wed, Jul 10, 2013 at 3:30 PM, Dave Taht <dave.taht@gmail.com> wrote:
> I haven't been paying a lot of attention to rmcat and webrtc until
> recently, although I'd had a nice discussion with keith on it a while
> back..
>
> this particular thread sums up some interesting issues on that front.
>
> http://www.ietf.org/mail-archive/web/rmcat/current/msg00390.html
>
> --
> Dave Täht
>
> Fixing bufferbloat with cerowrt:
> http://www.teklibre.com/cerowrt/subscribe.html
>
[-- Attachment #2: Type: text/html, Size: 2494 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-10 20:19 ` Keith Winstein
@ 2013-07-10 20:38 ` Jim Gettys
2013-07-10 20:41 ` Keith Winstein
2013-07-10 20:40 ` Dave Taht
1 sibling, 1 reply; 14+ messages in thread
From: Jim Gettys @ 2013-07-10 20:38 UTC (permalink / raw)
To: Keith Winstein; +Cc: codel
[-- Attachment #1: Type: text/plain, Size: 2419 bytes --]
Keith,
On Wed, Jul 10, 2013 at 4:19 PM, Keith Winstein <keithw@mit.edu> wrote:
> Thank, Dave.
>
> We have a Web site with more info, a talk, and source code (
> http://alfalfa.mit.edu) if anybody is interested.
>
> Something that may interest folks here is that we compared
> Sprout-over-unlimited-buffer with TCP-Cubic-over-CoDel on these
> cellular-type links. That is, a scenario where the network operators
> implemented CoDel inside the LTE/UMTS/1xEV-DO base station (for the
> downlink) and the phone manufacturers implemented CoDel inside the
> "baseband" chip for the uplink.
>
> Bottom line results is that for the case where a cellular user can control
> all their own flows, it's roughly a wash. To a first approximation, you can
> fix bufferbloat on a cellular network *either* by putting CoDel inside the
> base station and baseband chip (and otherwise running the same endpoint
> TCP), *or* by changing the endpoints but leaving the base station and
> baseband chip unmodified.
>
Did you compare with fq_codel? None of us (Van included) advocate CoDel by
itself.
>
> Obviously we benefit dramatically from the per-user queues of the cellular
> network. By contrast, in a typical house with a bufferbloated cable modem
> where one user can cause big delays for everybody else, you can't fix
> bufferbloat by fixing just one endpoint. We will have some results soon on
> whether you can fix it by fixing all the endpoints (but still leaving the
> "bloated" gateway intact).
>
Yup, per-user queues help. But those per-user queues can be extremely
large; you can hurt yourself as soon as you want to mix your WebRTC kind of
traffic with anything else.
Jim
>
> Cheers,
> Keith
>
> On Wed, Jul 10, 2013 at 3:30 PM, Dave Taht <dave.taht@gmail.com> wrote:
>
>> I haven't been paying a lot of attention to rmcat and webrtc until
>> recently, although I'd had a nice discussion with keith on it a while
>> back..
>>
>> this particular thread sums up some interesting issues on that front.
>>
>> http://www.ietf.org/mail-archive/web/rmcat/current/msg00390.html
>>
>> --
>> Dave Täht
>>
>> Fixing bufferbloat with cerowrt:
>> http://www.teklibre.com/cerowrt/subscribe.html
>>
>
>
> _______________________________________________
> Codel mailing list
> Codel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/codel
>
>
[-- Attachment #2: Type: text/html, Size: 4104 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-10 20:19 ` Keith Winstein
2013-07-10 20:38 ` Jim Gettys
@ 2013-07-10 20:40 ` Dave Taht
1 sibling, 0 replies; 14+ messages in thread
From: Dave Taht @ 2013-07-10 20:40 UTC (permalink / raw)
To: Keith Winstein; +Cc: codel
On Wed, Jul 10, 2013 at 1:19 PM, Keith Winstein <keithw@mit.edu> wrote:
> Thank, Dave.
>
> We have a Web site with more info, a talk, and source code
> (http://alfalfa.mit.edu) if anybody is interested.
Good Talk! Nice way to spend lunch. Boy you talk fast.
The simple expedient of inverting the delay scale against the
bandwidth scale on the graph had never occurred to me; making "up and
to the right" the good direction helps!
How did you dynamically stretch out the graphs like that in the preso?
I'd like very much to be able to zoom into those levels of detail and
out again.
Lastly, what were sprout and sprout-ewma over codel (rather than vs codel)?
> Something that may interest folks here is that we compared
> Sprout-over-unlimited-buffer with TCP-Cubic-over-CoDel on these
> cellular-type links. That is, a scenario where the network operators
> implemented CoDel inside the LTE/UMTS/1xEV-DO base station (for the
> downlink) and the phone manufacturers implemented CoDel inside the
> "baseband" chip for the uplink.
of course I'm always after people to attempt fq_codel in cases like this...
or fq + X, whatever X is....
>
> Bottom line results is that for the case where a cellular user can control
> all their own flows, it's roughly a wash. To a first approximation, you can
> fix bufferbloat on a cellular network *either* by putting CoDel inside the
> base station and baseband chip (and otherwise running the same endpoint
> TCP), *or* by changing the endpoints but leaving the base station and
> baseband chip unmodified.
>
> Obviously we benefit dramatically from the per-user queues of the cellular
> network. By contrast, in a typical house with a bufferbloated cable modem
> where one user can cause big delays for everybody else, you can't fix
> bufferbloat by fixing just one endpoint. We will have some results soon on
> whether you can fix it by fixing all the endpoints (but still leaving the
> "bloated" gateway intact).
>
> Cheers,
> Keith
>
> On Wed, Jul 10, 2013 at 3:30 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>
>> I haven't been paying a lot of attention to rmcat and webrtc until
>> recently, although I'd had a nice discussion with keith on it a while
>> back..
>>
>> this particular thread sums up some interesting issues on that front.
>>
>> http://www.ietf.org/mail-archive/web/rmcat/current/msg00390.html
>>
>> --
>> Dave Täht
>>
>> Fixing bufferbloat with cerowrt:
>> http://www.teklibre.com/cerowrt/subscribe.html
>
>
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-10 20:38 ` Jim Gettys
@ 2013-07-10 20:41 ` Keith Winstein
2013-07-10 20:46 ` Jim Gettys
0 siblings, 1 reply; 14+ messages in thread
From: Keith Winstein @ 2013-07-10 20:41 UTC (permalink / raw)
To: Jim Gettys; +Cc: codel
[-- Attachment #1: Type: text/plain, Size: 486 bytes --]
On Wed, Jul 10, 2013 at 4:38 PM, Jim Gettys <jg@freedesktop.org> wrote:
> Did you compare with fq_codel? None of us (Van included) advocate CoDel
> by itself.
>
Hi Jim,
In our evaluation there's only one flow over the queue (in each direction)
-- the videoconference. So, yes.
In the upcoming paper where we try and come up with the best end-to-end
schemes for multiuser traffic, we compare against Cubic-over-sfqCoDel (
http://www.pollere.net/Txtdocs/sfqcodel.cc).
Cheers,
Keith
[-- Attachment #2: Type: text/html, Size: 970 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-10 20:41 ` Keith Winstein
@ 2013-07-10 20:46 ` Jim Gettys
2013-07-10 21:42 ` Dave Taht
2013-07-10 22:10 ` Kathleen Nichols
0 siblings, 2 replies; 14+ messages in thread
From: Jim Gettys @ 2013-07-10 20:46 UTC (permalink / raw)
To: Keith Winstein; +Cc: codel
[-- Attachment #1: Type: text/plain, Size: 817 bytes --]
On Wed, Jul 10, 2013 at 4:41 PM, Keith Winstein <keithw@mit.edu> wrote:
> On Wed, Jul 10, 2013 at 4:38 PM, Jim Gettys <jg@freedesktop.org> wrote:
>
>> Did you compare with fq_codel? None of us (Van included) advocate CoDel
>> by itself.
>>
>
> Hi Jim,
>
> In our evaluation there's only one flow over the queue (in each direction)
> -- the videoconference. So, yes.
>
> In the upcoming paper where we try and come up with the best end-to-end
> schemes for multiuser traffic, we compare against Cubic-over-sfqCoDel (
> http://www.pollere.net/Txtdocs/sfqcodel.cc).
>
Great.
I sure wish we could get together on *fq_codel variants. Kathy thinks that
there is no difference between sfq_codel and fq_codel, and Dave thinks
there is a difference.
Sigh....
- Jim
>
> Cheers,
> Keith
>
[-- Attachment #2: Type: text/html, Size: 2245 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-10 20:46 ` Jim Gettys
@ 2013-07-10 21:42 ` Dave Taht
2013-07-10 22:10 ` Kathleen Nichols
1 sibling, 0 replies; 14+ messages in thread
From: Dave Taht @ 2013-07-10 21:42 UTC (permalink / raw)
To: Jim Gettys; +Cc: Keith Winstein, codel
On Wed, Jul 10, 2013 at 1:46 PM, Jim Gettys <jg@freedesktop.org> wrote:
>
>
>
> On Wed, Jul 10, 2013 at 4:41 PM, Keith Winstein <keithw@mit.edu> wrote:
>>
>> On Wed, Jul 10, 2013 at 4:38 PM, Jim Gettys <jg@freedesktop.org> wrote:
>>>
>>> Did you compare with fq_codel? None of us (Van included) advocate CoDel
>>> by itself.
>>
>>
>> Hi Jim,
>>
>> In our evaluation there's only one flow over the queue (in each direction)
>> -- the videoconference. So, yes.
>>
>> In the upcoming paper where we try and come up with the best end-to-end
>> schemes for multiuser traffic, we compare against Cubic-over-sfqCoDel
>> (http://www.pollere.net/Txtdocs/sfqcodel.cc).
One of my problems with keith's paper is that it is playing against
traces that assume the background load will itself not mutate based on
available bandwidth.
Love the tracing idea tho...
>
> Great.
>
> I sure wish we could get together on *fq_codel variants. Kathy thinks that
> there is no difference between sfq_codel and fq_codel, and Dave thinks there
> is a difference.
I documented the differences between codels and fq_codels at some
point on some web page somewhere.
The original ns2 codel code as published in the magazine contained an
error and should not be used at all.
linux codel uses a more drastic fall-off curve than ns2 codel.
The ns2_codel patch I have for linux uses much "tighter" constraints
on the interval, and in the range of RTTs I was testing at the time
(sub-20ms) it did a lot better than linux codel. It is based on the
"experimental" codel code on the pollere web site, but excludes the
floating-point hack at the highest rates.
The nfq_codel patch, based on that, tries to use a compromise between
SFQ and DRR, providing single-packet, SFQ-like service within a smaller
DRR-like quantum (usually 300 bytes these days) across the flows.
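To make the quantum point concrete, here is a rough standalone C
sketch of the deficit bookkeeping I mean (invented names, illustrative
only; this is not the actual nfq_codel or fq_codel code):

/* Illustrative sketch only, not the real qdisc code. Each flow carries
 * a byte deficit: it may dequeue while the deficit is positive, and it
 * gets one quantum added per round. With a quantum smaller than an MTU
 * (say 300 bytes), a flow typically gets one full-size packet of
 * service before yielding, which is the SFQ-like behavior above. */
struct flow {
	int deficit;	/* bytes this flow may still send this round */
};

static int flow_may_dequeue(struct flow *f, int quantum, int pkt_len)
{
	if (f->deficit <= 0) {
		f->deficit += quantum;	/* start a new round for this flow */
		return 0;		/* go service the next flow first */
	}
	f->deficit -= pkt_len;		/* charge the packet being dequeued */
	return 1;
}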
Results of nfq_codel vs fq_codel have been rather inconclusive, and I
decided I needed tests that covered the entire range of useful stuff,
like web, voip, and video conferencing, which is a lot of work for
which we have no funding. There are some hints that nfq_codel does
better against voip, in particular, but competes more slowly against
reverse traffic than the DRR approach in fq_codel.
the ns2 sfqcodel uses the std ns2 model of codel, and something very
close to nfq_codel. It has been unclear from reading the code if it
actually does the "new flow" idea right relative to the
smaller-than-mtu quantum.
BUT in summary:
In all cases the differences between *fq_codels are pretty subtle, but
all of them are rather dramatic improvements over std drop tail. I'm
satisfied at this point that drop tail should die, and that some form
of fq_codel should be made the default in linux, and then we can get
around to arguing other points with data at scale. I am unsure whether
enough of the core linux network devs agree - there are issues
remaining with handling multi-queue devices in particular, and I am
unfond of TSO/GSO/GRO as it stands (but eric's got some fixes planned).
It would be great to see a patch obsoleting pfifo_fast hit mainline
linux...
In particular web traffic "cuts through" other sustained flows like
butter, and is almost entirely based on the RTT (as are things like
dns).
My primary nfq_codel deployment with a couple dozen users is pretty
darn successful compared to what was running before. It is really
astonishing how well it works even against multiple DASH flows and
torrent.
and in any case, what I'm mostly concerned about these days are dozens
of flows in slow start on a given web page vs 1-3 uploads/downloads
(and lately, videoconferencing), AND the behavior on wifi access
points, which, as keith points out, really needs per-user queues in
order to even begin to think about performing as well as it did a
decade ago, prior to 802.11n, and has a dozen other "easy" fixes that
will take many man-months to develop and test even on a single
architecture.
I am reluctant to re-join the simulation campaign using ns2, as the
deployed TCPs differ rather radically from the models. I prefer
measuring reality, which unfortunately is sloppy, buggy, and hard to
make repeatable. It does appear that Eric's new netem patches are
going to make it possible to run more accurate longer RTT (100ms) live
network simulations at higher rates than I ever could before. (THX
ERIC)
The results I get in the lab with the current (prepatched) version of
netem are pretty dismal...
http://results.lab.taht.net/delay/rtt_fair_50_netem_10000_fq_codel-100mbit-2.svg
and unusable at gigabit
http://results.lab.taht.net/delay/rtt_fair_50_netem_10000_fq_codel.svg
/me goes back to building patched kernels
>
> Sigh....
> - Jim
>
>>
>>
>> Cheers,
>> Keith
>
>
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-10 20:46 ` Jim Gettys
2013-07-10 21:42 ` Dave Taht
@ 2013-07-10 22:10 ` Kathleen Nichols
2013-07-10 22:44 ` Dave Taht
1 sibling, 1 reply; 14+ messages in thread
From: Kathleen Nichols @ 2013-07-10 22:10 UTC (permalink / raw)
To: Jim Gettys; +Cc: Keith Winstein, codel
[-- Attachment #1: Type: text/plain, Size: 1120 bytes --]
Is that indeed what I think?
On Jul 10, 2013, at 1:46 PM, Jim Gettys <jg@freedesktop.org> wrote:
>
>
>
> On Wed, Jul 10, 2013 at 4:41 PM, Keith Winstein <keithw@mit.edu> wrote:
>> On Wed, Jul 10, 2013 at 4:38 PM, Jim Gettys <jg@freedesktop.org> wrote:
>>> Did you compare with fq_codel? None of us (Van included) advocate CoDel by itself.
>>
>> Hi Jim,
>>
>> In our evaluation there's only one flow over the queue (in each direction) -- the videoconference. So, yes.
>>
>> In the upcoming paper where we try and come up with the best end-to-end schemes for multiuser traffic, we compare against Cubic-over-sfqCoDel (http://www.pollere.net/Txtdocs/sfqcodel.cc).
>
> Great.
>
> I sure wish we could get together on *fq_codel variants. Kathy thinks that there is no difference between sfq_codel and fq_codel, and Dave thinks there is a difference.
>
> Sigh....
> - Jim
>
>>
>> Cheers,
>> Keith
>
> _______________________________________________
> Codel mailing list
> Codel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/codel
[-- Attachment #2: Type: text/html, Size: 3018 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-10 22:10 ` Kathleen Nichols
@ 2013-07-10 22:44 ` Dave Taht
2013-07-10 23:07 ` Eric Dumazet
2013-07-11 15:45 ` Kathleen Nichols
0 siblings, 2 replies; 14+ messages in thread
From: Dave Taht @ 2013-07-10 22:44 UTC (permalink / raw)
To: Kathleen Nichols; +Cc: Keith Winstein, codel
On Wed, Jul 10, 2013 at 3:10 PM, Kathleen Nichols <nichols@pollere.com> wrote:
> Is that indeed what I think?
Heh. On another topic, at my stanford talk you pointed at maxpacket as
a thing you were a bit dubious about. After fiddling with the concept
in the presence of offloads (which bloat up maxpacket to the size of a
TSO packet, 20k or more), I'm more than a bit dubious about it, and in
my next build of ns2_codel and nfq_codel in linux I just capped it at
an MTU in the codel_should_drop function:
	if (unlikely(qdisc_pkt_len(skb) > stats->maxpacket &&
		     qdisc_pkt_len(skb) < 1514))
		stats->maxpacket = qdisc_pkt_len(skb);
Perhaps in fq_codel the entire maxpacket idea can be junked?
The problem that I see is that codel switches out of a potential drop
state here: at almost any workload maxpacket hits a TSO-like size, and
at higher workloads it's too high. I think eric is working on something
that will let overlarge packets just work, and begin to break them down
into smaller packets at higher workloads?
Also, I'd made a suggestion elsewhere that TSQ migrate down in size
from 128k to something lower as the number of active flows increases.
Something like:

    tcp_limit_output_size = max((2 * BQL's limit) / (number of flows), mtu)

But I realize now that tcp has no idea what interface it's going out of
at any given time... still, I'm on a quest to minimize latency and let
offloads still work.
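Purely as a sketch of that formula (hypothetical names invented for
illustration; nothing like this exists in the kernel today):

/* Hypothetical sketch of the suggestion above: the names bql_limit,
 * nflows and suggested_tsq_limit are invented for illustration, not
 * real kernel symbols. Shrink the per-socket TSQ limit as the number
 * of active flows on the device grows, but never below one MTU. */
static unsigned int suggested_tsq_limit(unsigned int bql_limit,
					unsigned int nflows,
					unsigned int mtu)
{
	unsigned int limit = (2 * bql_limit) / (nflows ? nflows : 1);

	return limit > mtu ? limit : mtu;
}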
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-10 22:44 ` Dave Taht
@ 2013-07-10 23:07 ` Eric Dumazet
2013-07-11 15:45 ` Kathleen Nichols
1 sibling, 0 replies; 14+ messages in thread
From: Eric Dumazet @ 2013-07-10 23:07 UTC (permalink / raw)
To: Dave Taht; +Cc: Keith Winstein, codel
On Wed, 2013-07-10 at 15:44 -0700, Dave Taht wrote:
> I'd made a suggestion elsewhere that TSQ migrate down in size from 128k to
> lower as the number of active flows increased. Something like
> tcp_limit_output_size = max((2*BQL's limit)/(number of flows),mtu)
>
> but I realize now that tcp has no idea what interface it's going out
> at any given
> time... still I'm on a quest to minimize latency and let offloads still work..
At Google we tried to plug something in at the time TX completion
happens (tcp_wfree()). The more time skbs are waiting in the qdisc, the
lower tcp_limit_output_size should be for the TCP flow.
But tcp_limit_output_size had to be per socket tunable instead of
global. Experiments showed no real improvement over existing TCP
behavior.
The tcp_tso_should_defer() was kind of fixed lately [1] anyway, and we
plan to upstream another patch in this function to better preserve ACK
clocking.
[1] :
http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=f4541d60a449afd40448b06496dcd510f505928e
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-10 22:44 ` Dave Taht
2013-07-10 23:07 ` Eric Dumazet
@ 2013-07-11 15:45 ` Kathleen Nichols
2013-07-11 16:22 ` Eric Dumazet
1 sibling, 1 reply; 14+ messages in thread
From: Kathleen Nichols @ 2013-07-11 15:45 UTC (permalink / raw)
To: Dave Taht; +Cc: Keith Winstein, codel
Yes. In point of fact, that is not what I think. I've pointed out the
differences in a few places. I did do a simulator version of the
sfqcodel code that could be configured closer to the fq_codel code at
the request/expense of CableLabs.
Dave, I'm not completely sure which reservation about maxpacket you're
referring to.
Kathie
On 7/10/13 3:44 PM, Dave Taht wrote:
> On Wed, Jul 10, 2013 at 3:10 PM, Kathleen Nichols <nichols@pollere.com> wrote:
>> Is that indeed what I think?
>
> Heh. On another topic, at my stanford talk, you pointed at maxpacket
> being a thing
> you were a bit dubious about. After fiddling with the concept in
> presence of offloads
> (which bloat up maxpacket to the size of a tso packet (20k or more))
> I'm more than a bit dubious about it and in my next build of ns2_codel
> and nfq_codel
> in linux I just capped it at a mtu in the codel_should_drop function:
>
> if (unlikely(qdisc_pkt_len(skb) > stats->maxpacket &&
> qdisc_pkt_len(skb) < 1514 ))
> stats->maxpacket = qdisc_pkt_len(skb);
>
> Perhaps in fq_codel the entire maxpacket idea can be junked?
>
> The problem that I see is that codel switches out of a potential drop
> state here and
> at almost any workload maxpacket hits a TSO-like size, and at higher workloads
> it's too high. I think eric is working on something that will let
> overlarge packets just
> work and begin to break them down into smaller packets at higher workloads?
>
> Also
>
> I'd made a suggestion elsewhere that TSQ migrate down in size from 128k to
> lower as the number of active flows increased. Something like
> tcp_limit_output_size = max((2*BQL's limit)/(number of flows),mtu)
>
> but I realize now that tcp has no idea what interface it's going out
> at any given
> time... still I'm on a quest to minimize latency and let offloads still work..
>
>
>
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-11 15:45 ` Kathleen Nichols
@ 2013-07-11 16:22 ` Eric Dumazet
2013-07-11 16:54 ` Kathleen Nichols
0 siblings, 1 reply; 14+ messages in thread
From: Eric Dumazet @ 2013-07-11 16:22 UTC (permalink / raw)
To: Kathleen Nichols; +Cc: Keith Winstein, codel
On Thu, 2013-07-11 at 08:45 -0700, Kathleen Nichols wrote:
> Dave, not completely sure which reservation about maxpacket is in reference.
Hi Kathleen
I believe Dave is referring to the fact that we update maxpacket every
time we dequeue a packet, and with GSO packets it makes little sense
because after a while maxpacket is set to ~65535, the limit of the GSO
packet size.
We might remove this code, and make maxpacket a constant.
diff --git a/include/net/codel.h b/include/net/codel.h
index 389cf62..470e1ff 100644
--- a/include/net/codel.h
+++ b/include/net/codel.h
@@ -170,7 +170,7 @@ static void codel_vars_init(struct codel_vars *vars)
 
 static void codel_stats_init(struct codel_stats *stats)
 {
-	stats->maxpacket = 256;
+	stats->maxpacket = 1500;
 }
 
 /*
@@ -221,9 +221,6 @@ static bool codel_should_drop(const struct sk_buff *skb,
 	vars->ldelay = now - codel_get_enqueue_time(skb);
 	sch->qstats.backlog -= qdisc_pkt_len(skb);
 
-	if (unlikely(qdisc_pkt_len(skb) > stats->maxpacket))
-		stats->maxpacket = qdisc_pkt_len(skb);
-
 	if (codel_time_before(vars->ldelay, params->target) ||
 	    sch->qstats.backlog <= stats->maxpacket) {
 		/* went below - stay below for at least interval */
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-11 16:22 ` Eric Dumazet
@ 2013-07-11 16:54 ` Kathleen Nichols
2013-07-11 17:17 ` Dave Taht
0 siblings, 1 reply; 14+ messages in thread
From: Kathleen Nichols @ 2013-07-11 16:54 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Keith Winstein, codel
Yes. I think that's a sort of "application-dependent" bit of code, perhaps.
Van and I had various discussions about this and put that in with very
low bandwidth applications in mind, where some "maximums" might
be much smaller than others.
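For example (just picking numbers to illustrate): at 128 kbit/s a full
1500-byte packet is 1500 x 8 / 128000 = ~94 ms of serialization, while
a 256-byte packet is only ~16 ms, so which "maximum" the algorithm
believes in really does matter at those rates.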
On 7/11/13 9:22 AM, Eric Dumazet wrote:
> On Thu, 2013-07-11 at 08:45 -0700, Kathleen Nichols wrote:
>
>> Dave, not completely sure which reservation about maxpacket is in reference.
>
> Hi Kathleen
>
> I believe Dave is referring to the fact that we update maxpacket every
> time we dequeue a packet, and with GSO packet it makes little sense
> because after a while maxpacket is set to ~65535, the limit of the GSO
> packet size.
>
> We might remove this code, and make maxpacket a constant.
>
> diff --git a/include/net/codel.h b/include/net/codel.h
> index 389cf62..470e1ff 100644
> --- a/include/net/codel.h
> +++ b/include/net/codel.h
> @@ -170,7 +170,7 @@ static void codel_vars_init(struct codel_vars *vars)
>
> static void codel_stats_init(struct codel_stats *stats)
> {
> - stats->maxpacket = 256;
> + stats->maxpacket = 1500;
> }
>
> /*
> @@ -221,9 +221,6 @@ static bool codel_should_drop(const struct sk_buff *skb,
> vars->ldelay = now - codel_get_enqueue_time(skb);
> sch->qstats.backlog -= qdisc_pkt_len(skb);
>
> - if (unlikely(qdisc_pkt_len(skb) > stats->maxpacket))
> - stats->maxpacket = qdisc_pkt_len(skb);
> -
> if (codel_time_before(vars->ldelay, params->target) ||
> sch->qstats.backlog <= stats->maxpacket) {
> /* went below - stay below for at least interval */
>
>
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Codel] sprout
2013-07-11 16:54 ` Kathleen Nichols
@ 2013-07-11 17:17 ` Dave Taht
0 siblings, 0 replies; 14+ messages in thread
From: Dave Taht @ 2013-07-11 17:17 UTC (permalink / raw)
To: Kathleen Nichols; +Cc: codel
On Thu, Jul 11, 2013 at 9:54 AM, Kathleen Nichols <nichols@pollere.com> wrote:
>
> Yes. I think that's a sort of "application - dependent" bit of code perhaps.
> Van and I had various discussions about this and put that in with very
> low bandwidth applications in mind, where some "maximums" might
> be much smaller than others.
Well perhaps a better number to turn off the scheduler at is a backlog
of 0, as a single packet is 13ms at 1Mbit. I'll fiddle at some really
low rates over the next couple days.
Stopping at 1500 doesn't make a lot of sense, as at the level codel
sits there is always underlying buffering everywhere in the linux
stack that I see.
Even with htb there is always one packet outstanding outside of codel...
(note: in the nfq_codel case I'd also tried keeping a per-queue
maxpacket, so that an ack-only stream would retain some probability of
a drop)
>
> On 7/11/13 9:22 AM, Eric Dumazet wrote:
>> On Thu, 2013-07-11 at 08:45 -0700, Kathleen Nichols wrote:
>>
>>> Dave, not completely sure which reservation about maxpacket is in reference.
>>
>> Hi Kathleen
>>
>> I believe Dave is referring to the fact that we update maxpacket every
>> time we dequeue a packet, and with GSO packet it makes little sense
>> because after a while maxpacket is set to ~65535, the limit of the GSO
>> packet size.
>>
>> We might remove this code, and make maxpacket a constant.
>>
>> diff --git a/include/net/codel.h b/include/net/codel.h
>> index 389cf62..470e1ff 100644
>> --- a/include/net/codel.h
>> +++ b/include/net/codel.h
>> @@ -170,7 +170,7 @@ static void codel_vars_init(struct codel_vars *vars)
>>
>> static void codel_stats_init(struct codel_stats *stats)
>> {
>> - stats->maxpacket = 256;
>> + stats->maxpacket = 1500;
>> }
>>
>> /*
>> @@ -221,9 +221,6 @@ static bool codel_should_drop(const struct sk_buff *skb,
>> vars->ldelay = now - codel_get_enqueue_time(skb);
>> sch->qstats.backlog -= qdisc_pkt_len(skb);
>>
>> - if (unlikely(qdisc_pkt_len(skb) > stats->maxpacket))
>> - stats->maxpacket = qdisc_pkt_len(skb);
>> -
>> if (codel_time_before(vars->ldelay, params->target) ||
>> sch->qstats.backlog <= stats->maxpacket) {
>> /* went below - stay below for at least interval */
Well you can nuke the variable entirely and the second part of the
conditional...
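Something like this on top of Eric's patch, i.e. (untested sketch):

-	if (codel_time_before(vars->ldelay, params->target) ||
-	    sch->qstats.backlog <= stats->maxpacket) {
+	if (codel_time_before(vars->ldelay, params->target)) {
 		/* went below - stay below for at least interval */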
>>
>>
>
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2013-07-11 17:17 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-07-10 19:30 [Codel] sprout Dave Taht
2013-07-10 20:19 ` Keith Winstein
2013-07-10 20:38 ` Jim Gettys
2013-07-10 20:41 ` Keith Winstein
2013-07-10 20:46 ` Jim Gettys
2013-07-10 21:42 ` Dave Taht
2013-07-10 22:10 ` Kathleen Nichols
2013-07-10 22:44 ` Dave Taht
2013-07-10 23:07 ` Eric Dumazet
2013-07-11 15:45 ` Kathleen Nichols
2013-07-11 16:22 ` Eric Dumazet
2013-07-11 16:54 ` Kathleen Nichols
2013-07-11 17:17 ` Dave Taht
2013-07-10 20:40 ` Dave Taht