* Re: [Codel] R: Making tests on Tp-link router powered by Openwrt svn
[not found] <2lps03vacpqmtehlf4gnq634.1356112034562@email.android.com>
@ 2012-12-21 18:30 ` Jim Gettys
2012-12-21 18:57 ` Dave Taht
0 siblings, 1 reply; 12+ messages in thread
From: Jim Gettys @ 2012-12-21 18:30 UTC (permalink / raw)
To: Alessandro Bolletta, Kathleen Nichols, codel, Dave Taht
On Fri, Dec 21, 2012 at 12:51 PM, Alessandro Bolletta
<alessandro@mediaspot.net> wrote:
> Hi everybody,
> Thanks so much for your useful help! I solved my problem by reproducing
> the bottleneck with HTB queues.
> I tried several bandwidth rates and saw that target must be increased if
> the available bandwidth is below 4 Mbit/s; 13 ms is a good compromise for
> that situation.
> Also, I removed the switch from my testbed.
> Codel works amazingly well; congratulations on the work that has been done!
> I'll run more tests to make sure it suits our needs; we are building a new
> wireless mesh network in Italy based on a totally new architecture, and
> Codel could be a great improvement for queue management on the nodes.
>
> Thanks again for your courtesy!
> Alessandro Bolletta
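A minimal sketch of the kind of setup described above, with an HTB class as
the artificial bottleneck and fq_codel attached beneath it (the device name
and rate are hypothetical, not taken from Alessandro's actual script):

  tc qdisc add dev eth0 root handle 1: htb default 10
  tc class add dev eth0 parent 1: classid 1:10 htb rate 4mbit
  # below roughly 4 Mbit/s, raise target above the 5 ms default
  tc qdisc add dev eth0 parent 1:10 fq_codel target 13ms interval 100ms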
Kathy,
So in this case there is another packet of buffering *under* the codel
queue, in the HTB queuing discipline (which buffers one packet), plus
whatever additional buffering there may be in the device driver (where
mileage varies).
So codel isn't actually dropping the head of the queue but, in effect, the
second (or a later) packet back. So the control law computation won't be
quite right.
- Jim
* Re: [Codel] R: Making tests on Tp-link router powered by Openwrt svn
2012-12-21 18:30 ` [Codel] R: Making tests on Tp-link router powered by Openwrt svn Jim Gettys
@ 2012-12-21 18:57 ` Dave Taht
2012-12-21 19:29 ` Jim Gettys
0 siblings, 1 reply; 12+ messages in thread
From: Dave Taht @ 2012-12-21 18:57 UTC (permalink / raw)
To: Jim Gettys; +Cc: codel
On Fri, Dec 21, 2012 at 7:30 PM, Jim Gettys <jg@freedesktop.org> wrote:
>
>
>
> On Fri, Dec 21, 2012 at 12:51 PM, Alessandro Bolletta
> <alessandro@mediaspot.net> wrote:
>>
>> Hi everybody,
>> Thanks so much for your useful help! I solved my problem by reproducing
>> the bottleneck with HTB queues.
>> I tried several bandwidth rates and saw that target must be increased if
>> the available bandwidth is below 4 Mbit/s; 13 ms is a good compromise
>> for that situation.
>> Also, I removed the switch from my testbed.
>> Codel works amazingly well; congratulations on the work that has been
>> done!
>> I'll run more tests to make sure it suits our needs; we are building a
>> new wireless mesh network in Italy based on a totally new architecture,
>> and Codel could be a great improvement for queue management on the nodes.
>>
>> Thanks again for your courtesy!
>> Alessandro Bolletta
>
>
> Kathy,
>
> So in this case there is another packet of buffering *under* the codel
> queue, in the HTB queuing discipline (which buffers one packet), plus
> whatever additional buffering there may be in the device driver (where
> mileage varies).
Which exits at line rate, so it's not a huge issue timewise,
particularly in an age where cable modems are specced to run at gigE.
> So codel isn't actually dropping the head of the queue but, in effect, the
> second (or a later) packet back. So the control law computation won't be
> quite right.
> - Jim
It certainly is feasible to produce a version of fq_codel that is like
tbf or htb internally. Eric figured it would be a couple dozen lines
of code...
Actually, it could be simpler in terms of interacting with the Linux
scheduler than those alternatives: we're doing timestamping anyway,
so with an explicit bandwidth limit it's straightforward to predict
when the next packet can be delivered, and at what rescheduled time...
It would also save one unmanaged outstanding packet. Well, hmm, that
would have to get looked at by the estimator...
Use cases:
1) ISPs artificially rate limit lines regardless
2) So do virtual service providers
3) our current need to reduce bandwidth to below that of the crappy
device next in line...
The last problem is so pervasive that I have a whole bunch of complex
htb scripts to do it right. It would be easier to have a rate-limited
fq_codel (well, one that also does prioritization like pfifo_fast), and
less CPU-intensive to move all that logic out of the fq_codel + htb
combination and into one qdisc...
just a thought...
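For reference, a condensed sketch of the htb + fq_codel combination those
scripts assemble: one HTB class per priority band (roughly the three bands
of pfifo_fast), each with its own fq_codel instance. Rates, device name,
and band split are hypothetical; simple_qos.sh and the qos-scripts are
considerably more elaborate:

  tc qdisc add dev eth0 root handle 1: htb default 12
  tc class add dev eth0 parent 1:  classid 1:1  htb rate 4mbit ceil 4mbit
  tc class add dev eth0 parent 1:1 classid 1:11 htb rate 1mbit ceil 4mbit prio 1
  tc class add dev eth0 parent 1:1 classid 1:12 htb rate 2mbit ceil 4mbit prio 2
  tc class add dev eth0 parent 1:1 classid 1:13 htb rate 1mbit ceil 4mbit prio 3
  tc qdisc add dev eth0 parent 1:11 fq_codel
  tc qdisc add dev eth0 parent 1:12 fq_codel
  tc qdisc add dev eth0 parent 1:13 fq_codel
  # plus tc filters to classify traffic into 1:11/1:12/1:13 (omitted here)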
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
* Re: [Codel] R: Making tests on Tp-link router powered by Openwrt svn
2012-12-21 18:57 ` Dave Taht
@ 2012-12-21 19:29 ` Jim Gettys
0 siblings, 0 replies; 12+ messages in thread
From: Jim Gettys @ 2012-12-21 19:29 UTC (permalink / raw)
To: Dave Taht; +Cc: codel
On Fri, Dec 21, 2012 at 1:57 PM, Dave Taht <dave.taht@gmail.com> wrote:
> On Fri, Dec 21, 2012 at 7:30 PM, Jim Gettys <jg@freedesktop.org> wrote:
> >
> >
> >
> > On Fri, Dec 21, 2012 at 12:51 PM, Alessandro Bolletta
> > <alessandro@mediaspot.net> wrote:
> >>
> >> Hi everybody,
> >> Thanks so much for your useful help! I solved my problem by reproducing
> >> the bottleneck with HTB queues.
> >> I tried several bandwidth rates and saw that target must be increased
> >> if the available bandwidth is below 4 Mbit/s; 13 ms is a good
> >> compromise for that situation.
> >> Also, I removed the switch from my testbed.
> >> Codel works amazingly well; congratulations on the work that has been
> >> done!
> >> I'll run more tests to make sure it suits our needs; we are building a
> >> new wireless mesh network in Italy based on a totally new architecture,
> >> and Codel could be a great improvement for queue management on the
> >> nodes.
> >>
> >> Thanks again for your courtesy!
> >> Alessandro Bolletta
> >
> >
> > Kathy,
> >
> > So in this case there is another packet of buffering *under* the codel
> > queue, in the HTB queuing discipline (which buffers one packet), plus
> > whatever additional buffering there may be in the device driver (where
> > mileage varies).
>
> Which exits at line rate, so it's not a huge issue timewise,
> particularly in an age where cable modems are specced to run at gigE.
>
But the time the packet spends in HTB *is* significant, and it's not
going into the computation of the time spent in the queue.
>
> > So codel isn't actually dropping the head of the queue but, in effect,
> > the second (or a later) packet back. So the control law computation
> > won't be quite right.
> > - Jim
>
> It certainly is feasible to produce a version of fq_codel that is like
> tbf or htb internally. Eric figured it would be a couple dozen lines
> of code...
>
For htb that might be the "best" solution, since it is a case we're
unfortunately going to be living with for quite a while.
Then there is the time spent in the device driver; for example, John's
hacked Lantiq DSL driver with its (current) two packets of buffering.
Those packets trickle out at DSL line rate (slow), not at the fast
100 Mbit or GigE Ethernet rate at which packets are handed to a cable modem.
- Jim
> Actually, it could be simpler in terms of interacting with the Linux
> scheduler than those alternatives: we're doing timestamping anyway,
> so with an explicit bandwidth limit it's straightforward to predict
> when the next packet can be delivered, and at what rescheduled time...
>
> It would also save one unmanaged outstanding packet. Well, hmm, that
> would have to get looked at by the estimator...
>
> Use cases:
>
> 1) ISPs artificially rate limit lines regardless
> 2) So do virtual service providers
> 3) our current need to reduce bandwidth to below that of the crappy
> device next in line...
>
> The last problem is so pervasive that I have a whole bunch of complex
> htb scripts to do it right. It would be easier to have a rate-limited
> fq_codel (well, one that also does prioritization like pfifo_fast), and
> less CPU-intensive to move all that logic out of the fq_codel + htb
> combination and into one qdisc...
>
> just a thought...
>
> --
> Dave Täht
>
> Fixing bufferbloat with cerowrt:
> http://www.teklibre.com/cerowrt/subscribe.html
* Re: [Codel] R: Making tests on Tp-link router powered by Openwrt svn
2012-12-21 19:13 ` Kathleen Nichols
@ 2012-12-21 19:34 ` Jim Gettys
0 siblings, 0 replies; 12+ messages in thread
From: Jim Gettys @ 2012-12-21 19:34 UTC (permalink / raw)
To: Kathleen Nichols; +Cc: codel
On Fri, Dec 21, 2012 at 2:13 PM, Kathleen Nichols <nichols@pollere.com> wrote:
>
> This issue needs more study. I'm not at all convinced you want to add the
> device driver time, since CoDel is not controlling that queue. Instead,
> that queue is experienced by CoDel as additional round-trip delay. I
> believe that would be better accounted for by a longer interval; that is,
> if there is generally some known delay (in addition to the network path
> delay), that implementation may need a longer interval.
>
Could be: but since the driver and the queue disciplines above it end up
acting as a single queue (there is no loss between the driver and the OS
above), these "coupled" queues will, at a minimum, throw off the square
root computation in proportion to the underlying delay if not accounted
for. So I think the time does have to go into the computation (and is why
Dave's been having to mess with the target).
- Jim
>
> Kathie
>
> On 12/21/12 9:13 AM, Jim Gettys wrote:
> > We aren't adding the time in the device driver to the time spent in the
> > rest of the queue.
> >
> > Right now, we don't have the time that packets spend queued in the
> > device driver (which may have queuing in addition to that in the queue
> > discipline).
> >
> > In any case, that's my theory as to what is going on...
> > - Jim
> >
> >
> >
> > On Fri, Dec 21, 2012 at 12:06 PM, Kathleen Nichols
> > <nichols@pollere.com> wrote:
> >
> > On 12/21/12 2:32 AM, Dave Taht wrote:
> > > On Fri, Dec 21, 2012 at 5:19 AM, Alessandro Bolletta
> > > <alessandro@mediaspot.net> wrote:
> > ...
> > >> Also, I tried to decrease the interval and target options in order
> > >> to obtain a latency lower than 5 ms for connections established
> > >> while the upload is flowing.
> > >>
> > >> So I set target to 2ms and interval to 5ms.
> > >
> > > You are misunderstanding target and interval. These control the
> > > algorithm for determining when to drop. interval is set to 100ms by
> > > default, so as to provide a good estimate of the RTT, and target to
> > > 5ms, as the maximum queue delay to aim for. These values work well
> > > down to about 4 Mbit/s, below which we have been bumping target up in
> > > relation to how long it takes to deliver a packet. A value I've been
> > > using for target at 1 Mbit/s has been 20 ms, as it takes 13 ms to
> > > deliver a large packet.
> > >
> >
> > Dave,
> >
> > Thanks for clarifying the target and interval. The notion of using a
> > 2ms target and a 5ms interval boggles the mind, and is precisely why we
> > were looking for parameters that the user didn't have to fiddle with.
> > Of course, it has to be running in the location of the actual queue!
> >
> > I don't understand why you are lowering the target explicitly, as the
> > use of an MTU's worth of packets as the alternate target appeared to
> > work quite well at rates down to 64 kbps in simulation, as well as with
> > changing rates. I thought Van explained this nicely in his talk at IETF.
> >
> > Kathie
* Re: [Codel] R: Making tests on Tp-link router powered by Openwrt svn
2012-12-21 17:13 ` Jim Gettys
@ 2012-12-21 19:13 ` Kathleen Nichols
2012-12-21 19:34 ` Jim Gettys
0 siblings, 1 reply; 12+ messages in thread
From: Kathleen Nichols @ 2012-12-21 19:13 UTC (permalink / raw)
To: Jim Gettys; +Cc: codel
This issue needs more study. I'm not at all convinced you want to add the
device driver time, since CoDel is not controlling that queue. Instead,
that queue is experienced by CoDel as additional round-trip delay. I
believe that would be better accounted for by a longer interval; that is,
if there is generally some known delay (in addition to the network path
delay), that implementation may need a longer interval.
Kathie
On 12/21/12 9:13 AM, Jim Gettys wrote:
> We aren't adding the time in the device driver to the time spent in the
> rest of the queue.
>
> Right now, we don't have the time that packets spend queued in the
> device driver (which may have queuing in addition to that in the queue
> discipline).
>
> In any case, that's my theory as to what is going on...
> - Jim
>
>
>
> On Fri, Dec 21, 2012 at 12:06 PM, Kathleen Nichols
> <nichols@pollere.com> wrote:
>
> On 12/21/12 2:32 AM, Dave Taht wrote:
> > On Fri, Dec 21, 2012 at 5:19 AM, Alessandro Bolletta
> > <alessandro@mediaspot.net> wrote:
> ...
> >> Also, I tried to decrease the interval and target options in order to
> >> obtain a latency lower than 5 ms for connections established while the
> >> upload is flowing.
> >>
> >> So I set target to 2ms and interval to 5ms.
> >
> > You are misunderstanding target and interval. These control the
> > algorithm for determining when to drop. interval is set to 100ms by
> > default, so as to provide a good estimate of the RTT, and target to
> > 5ms, as the maximum queue delay to aim for. These values work well
> > down to about 4 Mbit/s, below which we have been bumping target up in
> > relation to how long it takes to deliver a packet. A value I've been
> > using for target at 1 Mbit/s has been 20 ms, as it takes 13 ms to
> > deliver a large packet.
> >
>
> Dave,
>
> Thanks for clarifying the target and interval. The notion of using a 2ms
> target and a 5ms interval boggles the mind, and is precisely why we were
> looking for parameters that the user didn't have to fiddle with. Of
> course, it has to be running in the location of the actual queue!
>
> I don't understand why you are lowering the target explicitly, as the
> use of an MTU's worth of packets as the alternate target appeared to
> work quite well at rates down to 64 kbps in simulation, as well as with
> changing rates. I thought Van explained this nicely in his talk at IETF.
>
> Kathie
* Re: [Codel] R: Making tests on Tp-link router powered by Openwrt svn
2012-12-21 17:43 ` Dave Taht
@ 2012-12-21 17:51 ` Kathleen Nichols
0 siblings, 0 replies; 12+ messages in thread
From: Kathleen Nichols @ 2012-12-21 17:51 UTC (permalink / raw)
To: Dave Taht; +Cc: codel
Oh, yes, my sfqcodel tests in the simulator all use per-packet
round-robin, so performance will vary.
On 12/21/12 9:43 AM, Dave Taht wrote:
> On Fri, Dec 21, 2012 at 6:06 PM, Kathleen Nichols <nichols@pollere.com> wrote:
>> On 12/21/12 2:32 AM, Dave Taht wrote:
>>> On Fri, Dec 21, 2012 at 5:19 AM, Alessandro Bolletta
>>> <alessandro@mediaspot.net> wrote:
>
>> I don't understand why you are lowering the target explicitly, as the
>> use of an MTU's worth of packets as the alternate target appeared to
>> work quite well at rates down to 64 kbps in simulation, as well as with
>> changing rates. I thought Van explained this nicely in his talk at IETF.
>
> I call this the horizontal standing queue problem, and it's specific
> to *fq*_codel (and to some extent, the derivatives so far).
>
> I figure codel, by itself, copes better.
>
> But in fq_codel, each queue (of 1024 by default) is a full-blown codel
> queue, and thus drops out of drop state when the MTU limit is hit.
>
> Worse, fq_codel always sends a full quantum's worth of packets from
> each queue, in order, not mixing them. This is perfectly fine and even
> reasonably sane at truly high bandwidths where TSO/GSO and GRO are
> being used, but a pretty terrible idea at 100Mbit and below, and
> results in moving rapidly back and forth through the timestamps in the
> backlog on that queue...
>
> The original fq_codel did no fate-sharing at all, the current one (in
> 3.6) does.
>
> I produced (several months ago) a couple of versions of fq_codel (the
> e and n variants) that did better mixing at a higher CPU cost. It was
> really hard to see differences. I just found a bug in my nfq_codel
> patch today, in fact...
>
> Also fiddled a lot with the per-queue "don't shoot me" MTU limit: in
> one version it is shared amongst all queues; in another, set to the
> size of the biggest packet to have hit that queue since the end of the
> last drop interval.
>
> So I then decided to really look at this, hard, by working on a set of
> benchmarks that made it easy to load up a link with a wide variety of
> actual traffic. I'm really happy with the rrul-related tests toke has
> put together.
>
> I tried to talk about this coherently then, and failed, so I figured
> that once I had good tests showing the various behaviors, I'd talk
> about it. I would still prefer not to talk about it until I have
> results I can trust, and finding all the sources of experimental error
> in the lab setup has so far eaten most of my time.
>
> I didn't mean to jump the gun on that today; I have a few weeks left
> to go to collate the analysis coherently with the lab results, and for
> all I know some aspect of the lab implementations, the known BQL issue
> (bytes rather than packets), or the fact that HTB buffers up a packet
> may be more key to the problem, if it's real.
>
> In the benchmarks I've been doing via toke's implementation of the
> rrul test suite, *on an asymmetric link* fq_codel behavior gets more
> stable (uses up the largest percentage of bandwidth) if you set target
> above the delivery time of a maximum-size packet at that bandwidth.
> Another benchmark that will show bad behavior regardless is to try
> pounding a line flat for a while with dozens of full-rate streams.
>
> In your sfq_codel (which I hope to look at next week), do you have a
> shared MTU limit for all streams, or do you do it on a per-stream
> basis?
>
>>
>> Kathie
* Re: [Codel] R: Making tests on Tp-link router powered by Openwrt svn
2012-12-21 17:06 ` Kathleen Nichols
2012-12-21 17:13 ` Jim Gettys
@ 2012-12-21 17:43 ` Dave Taht
2012-12-21 17:51 ` Kathleen Nichols
1 sibling, 1 reply; 12+ messages in thread
From: Dave Taht @ 2012-12-21 17:43 UTC (permalink / raw)
To: Kathleen Nichols; +Cc: codel
On Fri, Dec 21, 2012 at 6:06 PM, Kathleen Nichols <nichols@pollere.com> wrote:
> On 12/21/12 2:32 AM, Dave Taht wrote:
>> On Fri, Dec 21, 2012 at 5:19 AM, Alessandro Bolletta
>> <alessandro@mediaspot.net> wrote:
> I don't understand why you are lowering the target explicitly, as the
> use of an MTU's worth of packets as the alternate target appeared to
> work quite well at rates down to 64 kbps in simulation, as well as with
> changing rates. I thought Van explained this nicely in his talk at IETF.
I call this the horizontal standing queue problem, and it's specific
to *fq*_codel (and to some extent, the derivatives so far).
I figure codel, by itself, copes better.
But in fq_codel, each queue (of 1024 by default) is a full-blown codel
queue, and thus drops out of drop state when the MTU limit is hit.
Worse, fq_codel always sends a full quantum's worth of packets from
each queue, in order, not mixing them. This is perfectly fine and even
reasonably sane at truly high bandwidths where TSO/GSO and GRO are
being used, but a pretty terrible idea at 100Mbit and below, and
results in moving rapidly back and forth through the timestamps in the
backlog on that queue...
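One mitigation at lower rates is to shrink fq_codel's quantum so each
queue releases closer to one MTU at a time rather than a multi-packet
burst. A hypothetical invocation, with 300 bytes purely as an
illustrative value (the default quantum is one MTU plus hardware header,
about 1514 bytes on ethernet):

  tc qdisc add dev eth0 root fq_codel quantum 300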
The original fq_codel did no fate-sharing at all, the current one (in
3.6) does.
I produced (several months ago) a couple of versions of fq_codel (the
e and n variants) that did better mixing at a higher CPU cost. It was
really hard to see differences. I just found a bug in my nfq_codel
patch today, in fact...
Also fiddled a lot with the per-queue "don't shoot me" MTU limit: in
one version it is shared amongst all queues; in another, set to the
size of the biggest packet to have hit that queue since the end of the
last drop interval.
So I then decided to really look at this, hard, by working on a set of
benchmarks that made it easy to load up a link with a wide variety of
actual traffic. I'm really happy with the rrul-related tests toke has
put together.
I tried to talk about this coherently then, and failed, so I figured
that once I had good tests showing the various behaviors, I'd talk
about it. I would still prefer not to talk about it until I have
results I can trust, and finding all the sources of experimental error
in the lab setup has so far eaten most of my time.
I didn't mean to jump the gun on that today; I have a few weeks left
to go to collate the analysis coherently with the lab results, and for
all I know some aspect of the lab implementations, the known BQL issue
(bytes rather than packets), or the fact that HTB buffers up a packet
may be more key to the problem, if it's real.
In the benchmarks I've been doing via toke's implementation of the
rrul test suite, *on an asymmetric link* fq_codel behavior gets more
stable (uses up the largest percentage of bandwidth) if you set target
above the delivery time of a maximum-size packet at that bandwidth.
Another benchmark that will show bad behavior regardless is to try
pounding a line flat for a while with dozens of full-rate streams.
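The delivery time in question is just packet size over link rate; for a
1500-byte MTU:

  t = (1500 bytes * 8 bits/byte) / rate
    = 12 ms at 1 Mbit/s  (~13 ms with framing overhead)
    =  3 ms at 4 Mbit/s
    = 0.12 ms at 100 Mbit/s

which is consistent with the 13-20 ms targets quoted earlier for roughly
1 Mbit/s links, and with the 5 ms default being comfortable from about
4 Mbit/s up.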
In your sfq_codel (which I hope to look at next week), do you have a
shared MTU limit for all streams, or do you do it on a per-stream
basis?
>
> Kathie
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
* Re: [Codel] R: Making tests on Tp-link router powered by Openwrt svn
2012-12-21 17:06 ` Kathleen Nichols
@ 2012-12-21 17:13 ` Jim Gettys
2012-12-21 19:13 ` Kathleen Nichols
2012-12-21 17:43 ` Dave Taht
1 sibling, 1 reply; 12+ messages in thread
From: Jim Gettys @ 2012-12-21 17:13 UTC (permalink / raw)
To: Kathleen Nichols; +Cc: codel
We aren't adding the time in the device driver to the time spent in the
rest of the queue.
Right now, we don't have the time that packets spend queued in the
device driver (which may have queuing in addition to that in the queue
discipline).
In any case, that's my theory as to what is going on...
- Jim
On Fri, Dec 21, 2012 at 12:06 PM, Kathleen Nichols <nichols@pollere.com> wrote:
> On 12/21/12 2:32 AM, Dave Taht wrote:
> > On Fri, Dec 21, 2012 at 5:19 AM, Alessandro Bolletta
> > <alessandro@mediaspot.net> wrote:
> ...
> >> Also, I tried to decrease the interval and target options in order to
> >> obtain a latency lower than 5 ms for connections established while the
> >> upload is flowing.
> >>
> >> So I set target to 2ms and interval to 5ms.
> >
> > You are misunderstanding target and interval. These control the
> > algorithm for determining when to drop. interval is set to 100ms by
> > default, so as to provide a good estimate of the RTT, and target to
> > 5ms, as the maximum queue delay to aim for. These values work well
> > down to about 4 Mbit/s, below which we have been bumping target up in
> > relation to how long it takes to deliver a packet. A value I've been
> > using for target at 1 Mbit/s has been 20 ms, as it takes 13 ms to
> > deliver a large packet.
> >
>
> Dave,
>
> Thanks for clarifying the target and interval. The notion of using a 2ms
> target and a 5ms interval boggles the mind, and is precisely why we were
> looking for parameters that the user didn't have to fiddle with. Of
> course, it has to be running in the location of the actual queue!
>
> I don't understand why you are lowering the target explicitly, as the
> use of an MTU's worth of packets as the alternate target appeared to
> work quite well at rates down to 64 kbps in simulation, as well as with
> changing rates. I thought Van explained this nicely in his talk at IETF.
>
> Kathie
* Re: [Codel] R: Making tests on Tp-link router powered by Openwrt svn
2012-12-21 10:32 ` Dave Taht
2012-12-21 10:54 ` Dave Taht
@ 2012-12-21 17:06 ` Kathleen Nichols
2012-12-21 17:13 ` Jim Gettys
2012-12-21 17:43 ` Dave Taht
1 sibling, 2 replies; 12+ messages in thread
From: Kathleen Nichols @ 2012-12-21 17:06 UTC (permalink / raw)
To: codel
On 12/21/12 2:32 AM, Dave Taht wrote:
> On Fri, Dec 21, 2012 at 5:19 AM, Alessandro Bolletta
> <alessandro@mediaspot.net> wrote:
...
>> Also, I tried to decrease the interval and target options in order to
>> obtain a latency lower than 5 ms for connections established while the
>> upload is flowing.
>>
>> So I set target to 2ms and interval to 5ms.
>
> You are misunderstanding target and interval. These control the
> algorithm for determining when to drop. interval is set to 100ms by
> default, so as to provide a good estimate of the RTT, and target to
> 5ms, as the maximum queue delay to aim for. These values work well
> down to about 4 Mbit/s, below which we have been bumping target up in
> relation to how long it takes to deliver a packet. A value I've been
> using for target at 1 Mbit/s has been 20 ms, as it takes 13 ms to
> deliver a large packet.
>
Dave,
Thanks for clarifying the target and interval. The notion of using a 2ms
target and a 5ms interval boggles the mind, and is precisely why we were
looking for parameters that the user didn't have to fiddle with. Of
course, it has to be running in the location of the actual queue!
I don't understand why you are lowering the target explicitly, as the
use of an MTU's worth of packets as the alternate target appeared to
work quite well at rates down to 64 kbps in simulation, as well as with
changing rates. I thought Van explained this nicely in his talk at IETF.
Kathie
* Re: [Codel] R: Making tests on Tp-link router powered by Openwrt svn
2012-12-21 10:32 ` Dave Taht
@ 2012-12-21 10:54 ` Dave Taht
2012-12-21 17:06 ` Kathleen Nichols
1 sibling, 0 replies; 12+ messages in thread
From: Dave Taht @ 2012-12-21 10:54 UTC (permalink / raw)
To: Alessandro Bolletta; +Cc: Codel
To be more clear here:
You outlined your setup as:
laptop <- TPLINK <- gigE switch <- server
And the path on which you are attempting fq_codel is a samba download
from the server: the traffic hits the gigE switch, gets buffered up and
dropped there, and then has a clear path the rest of the way to your
laptop. The reverse path is clean (you are just sending 66-byte ACKs,
so there is no bottleneck in that direction).
So to get some benefit from fq_codel in this situation, you would have
to put a rate-limited *ingress* qdisc on the TPLINK (using qos-scripts
or simple_qos), and even then a lot of drops would happen in the gigE
switch on bursty traffic.
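A hedged sketch of what such an ingress setup looks like, using the
generic ifb redirection recipe (illustrative only, not the exact contents
of those scripts; the device names and the 8 Mbit figure are assumptions):

  modprobe ifb
  ip link set dev ifb0 up
  tc qdisc add dev eth0 handle ffff: ingress
  tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
      action mirred egress redirect dev ifb0
  # shape somewhat below the physical 10 Mbit rate so the queue forms
  # here, not in the switch
  tc qdisc add dev ifb0 root handle 1: htb default 10
  tc class add dev ifb0 parent 1: classid 1:10 htb rate 8mbit
  tc qdisc add dev ifb0 parent 1:10 fq_codel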
If you were instead doing an upload to the server from the laptop,
merely having fq_codel there would keep your latency under upload
sane.
Hope this helps.
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
* Re: [Codel] R: Making tests on Tp-link router powered by Openwrt svn
2012-12-21 10:19 ` [Codel] R: " Alessandro Bolletta
@ 2012-12-21 10:32 ` Dave Taht
2012-12-21 10:54 ` Dave Taht
2012-12-21 17:06 ` Kathleen Nichols
0 siblings, 2 replies; 12+ messages in thread
From: Dave Taht @ 2012-12-21 10:32 UTC (permalink / raw)
To: Alessandro Bolletta; +Cc: Codel
On Fri, Dec 21, 2012 at 5:19 AM, Alessandro Bolletta
<alessandro@mediaspot.net> wrote:
> Hi Jonathan,
>
> This is how I configured the testbed:
>
> I have a Windows 8 laptop connected directly to the tplink/openwrt router.
> The tplink is also connected to a gigabit switch.
>
> So, I thought I would do some file uploads from a linux samba file server
> connected to the switch to my Windows 8 laptop, through the SMB protocol
> (which uses TCP).
>
> In order to create the bottleneck (the tplink router has two 10/100 Mbit
> ports), I restricted the port connected to the switch to 10 Mbit half
> duplex.
Full duplex is the only thing we've ever tested.
Secondly, what you are doing here is moving the buffering into the switch.
If you want to rate limit, use HTB; either simple_qos.sh or the
qos-scripts can be used for this.
http://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel
> Also, I tried to decrease the interval and target options in order to
> obtain a latency lower than 5 ms for connections established while the
> upload is flowing.
>
> So I set target to 2ms and interval to 5ms.
You are misunderstanding target and interval. These control the
algorithm for determining when to drop. interval is set to 100ms by
default, so as to provide a good estimate of the RTT, and target to
5ms, as the maximum queue delay to aim for. These values work well
down to about 4 Mbit/s, below which we have been bumping target up in
relation to how long it takes to deliver a packet. A value I've been
using for target at 1 Mbit/s has been 20 ms, as it takes 13 ms to
deliver a large packet.
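Expressed as tc commands (an illustrative sketch; the interface name is
an assumption):

  # the defaults, fine down to roughly 4 Mbit/s:
  tc qdisc add dev eth0 root fq_codel target 5ms interval 100ms
  # at roughly 1 Mbit/s, raise target toward the packet delivery time:
  tc qdisc change dev eth0 root fq_codel target 20ms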
The way interval works is that once you've been consistently over the
target delay for a full interval, the codel drop scheduler starts and
you drop a packet. If you are still over the delay after interval/√2,
drop another packet; if you are still over after the next interval/√3,
drop another. When you hit an ideal drop rate, the interval between
drops stops shrinking.
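For reference, the published CoDel control law behind that description:
while in drop state, after the count-th drop the next drop time is set to

  t_next = t_prev + interval / sqrt(count)

so the drop rate ramps up gradually. This is the same square-root
computation that Jim notes, elsewhere in this thread, gets skewed by any
unaccounted queue sitting below CoDel.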
Thank you for giving me a new thing to add to the above URL.
>
> So this is the schema:
>
>
>
> laptop----tplink---switch---linuxserver
>
>
>
> where tplink is routing subnets.
>
>
>
> If I ping the linux server I get very high latencies, and if I ping
> other PCs connected to the switch, I get latencies of about 10 ms.
>
>
>
> I also tried lowering target to 1 ms and interval to 2 ms, but I see the
> same effects.
>
>
>
> If I disable fq_codel, I always get the same result.
You've moved the buffering into the switch. Don't do that.
>
>
>
> Can you explain where I'm going wrong?
>
>
>
> Thanks
>
>
>
> From: Jonathan Morton [mailto:chromatix99@gmail.com]
> Sent: Thursday, 20 December 2012, 19:08
> To: Alessandro Bolletta
> Cc: Codel@lists.bufferbloat.net
> Subject: Re: [Codel] Making tests on Tp-link router powered by Openwrt svn
>
>
>
> Is the bottleneck actually at your router, or (as is more usual) at the
> modem?
>
> - Jonathan Morton
>
> On Dec 20, 2012 7:57 PM, "Alessandro Bolletta" <alessandro@mediaspot.net>
> wrote:
>
> Hi everybody,
> Today I ran some tests on my tplink home router powered by the latest
> snapshot build of Openwrt.
> I configured tc to make fq_codel the default queuing algorithm for the
> two eth ports available on the router (leaving the default values
> unchanged).
> Then I started some TCP sessions from my Windows client and loaded up the
> available bandwidth... but the test didn't go as expected: I experienced
> packet loss and high delays, just as I did with the default simple FIFO
> queue.
> Is there something that I'm not understanding?
>
> Thanks,
> Alessandro Bolletta
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
* [Codel] R: Making tests on Tp-link router powered by Openwrt svn
2012-12-20 18:07 ` Jonathan Morton
@ 2012-12-21 10:19 ` Alessandro Bolletta
2012-12-21 10:32 ` Dave Taht
0 siblings, 1 reply; 12+ messages in thread
From: Alessandro Bolletta @ 2012-12-21 10:19 UTC (permalink / raw)
To: Codel
Hi Jonathan,
This is how I configured the testbed:
I have a Windows 8 laptop connected directly to the tplink/openwrt router. The tplink is also connected to a gigabit switch.
So, I thought I would do some file uploads from a linux samba file server connected to the switch to my Windows 8 laptop, through the SMB protocol (which uses TCP).
In order to create the bottleneck (the tplink router has two 10/100 Mbit ports), I restricted the port connected to the switch to 10 Mbit half duplex.
Also, I tried to decrease the interval and target options in order to obtain a latency lower than 5 ms for connections established while the upload is flowing.
So I set target to 2ms and interval to 5ms.
So this is the schema:
laptop----tplink---switch---linuxserver
where tplink is routing subnets.
If I ping the linux server I get very high latencies, and if I ping other PCs connected to the switch, I get latencies of about 10 ms.
I also tried lowering target to 1 ms and interval to 2 ms, but I see the same effects.
If I disable fq_codel, I always get the same result.
Can you explain where I'm going wrong?
Thanks
From: Jonathan Morton [mailto:chromatix99@gmail.com]
Sent: Thursday, 20 December 2012, 19:08
To: Alessandro Bolletta
Cc: Codel@lists.bufferbloat.net
Subject: Re: [Codel] Making tests on Tp-link router powered by Openwrt svn
Is the bottleneck actually at your router, or (as is more usual) at the modem?
- Jonathan Morton
On Dec 20, 2012 7:57 PM, "Alessandro Bolletta" <alessandro@mediaspot.net> wrote:
Hi everybody,
Today I ran some tests on my tplink home router powered by the latest snapshot build of Openwrt.
I configured tc to make fq_codel the default queuing algorithm for the two eth ports available on the router (leaving the default values unchanged).
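In tc terms that is a one-liner per port; a hypothetical sketch, with the
interface names assumed:

  tc qdisc replace dev eth0 root fq_codel
  tc qdisc replace dev eth1 root fq_codel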
Then I started some TCP sessions from my Windows client and loaded up the available bandwidth... but the test didn't go as expected: I experienced packet loss and high delays, just as I did with the default simple FIFO queue.
Is there something that I'm not understanding?
Thanks,
Alessandro Bolletta
Thread overview: 12+ messages
[not found] <2lps03vacpqmtehlf4gnq634.1356112034562@email.android.com>
2012-12-21 18:30 ` [Codel] R: Making tests on Tp-link router powered by Openwrt svn Jim Gettys
2012-12-21 18:57 ` Dave Taht
2012-12-21 19:29 ` Jim Gettys
2012-12-20 17:57 [Codel] " Alessandro Bolletta
2012-12-20 18:07 ` Jonathan Morton
2012-12-21 10:19 ` [Codel] R: " Alessandro Bolletta
2012-12-21 10:32 ` Dave Taht
2012-12-21 10:54 ` Dave Taht
2012-12-21 17:06 ` Kathleen Nichols
2012-12-21 17:13 ` Jim Gettys
2012-12-21 19:13 ` Kathleen Nichols
2012-12-21 19:34 ` Jim Gettys
2012-12-21 17:43 ` Dave Taht
2012-12-21 17:51 ` Kathleen Nichols