* [Cake] 5ms target hurting tcp throughput tweakable?
@ 2017-02-26 13:16 Andy Furniss
2017-02-26 13:43 ` Jonathan Morton
0 siblings, 1 reply; 9+ messages in thread
From: Andy Furniss @ 2017-02-26 13:16 UTC (permalink / raw)
To: cake
Hi, I am new to cake and just messing around with it currently.
I notice that on my setup, with a VDSL2 20mbit sync, the default 5ms target
on best effort hurts netperf throughput a bit as latency to the host rises.
My setup.
tc qdisc add dev ppp0 handle 1:0 root cake bandwidth 19690kbit raw
overhead 34 diffserv4
Where ppp0 is pppoe.
qdisc cake 1: root refcnt 2 bandwidth 19690Kbit diffserv4 triple-isolate
rtt 100.0ms noatm overhead 56 via-ethernet
Sent 3250414679 bytes 5976629 pkt (dropped 3404, overlimits 3402556
requeues 0)
backlog 0b 0p requeues 0
memory used: 221952b of 4Mb
capacity estimate: 19690Kbit
Bulk Best Effort Video Voice
thresh 1230Kbit 19690Kbit 9845Kbit 4922Kbit
target 14.8ms 5.0ms 5.0ms 5.0ms
interval 109.8ms 100.0ms 100.0ms 100.0ms
pk_delay 63us 9us 0us 146us
av_delay 7us 6us 0us 10us
sp_delay 4us 4us 0us 4us
pkts 1237775 4725460 0 16798
bytes 1781536029 1472719560 0 1224478
way_inds 11 43327 0 1
way_miss 14051 69600 0 3066
way_cols 0 0 0 0
drops 81 3323 0 0
marks 0 0 0 0
sp_flows 36 1 0 0
bk_flows 1 0 0 0
un_flows 0 0 0 0
max_len 1500 1500 0 1428
Testing to flent-eu.bufferbloat.net, which is 50ms from me, putting
a single netperf upload into bulk gets up to 0.6 - 1mbit better throughput
than if it goes through best effort.
Trying a simulation on the LAN with netem, comparing cake against hfsc with
a 100-packet fifo, also shows that latency doesn't need to rise much before
it starts hurting netperf throughput.
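Roughly this sort of rig, as a sketch only (interface names, rates and the
emulated delay are illustrative, not my exact commands):
  # middle box: emulate a ~50ms path
  tc qdisc add dev eth0 root netem delay 50ms
  # sender box, either cake ...
  tc qdisc add dev eth1 root cake bandwidth 19690kbit
  # ... or hfsc feeding a 100-packet fifo
  tc qdisc add dev eth1 root handle 1: hfsc default 1
  tc class add dev eth1 parent 1: classid 1:1 hfsc sc m2 19690kbit ul m2 19690kbit
  tc qdisc add dev eth1 parent 1:1 pfifo limit 100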
Is there any way, or are there plans, to allow users to relax the target slightly?
* Re: [Cake] 5ms target hurting tcp throughput tweakable?
2017-02-26 13:16 [Cake] 5ms target hurting tcp throughput tweakable? Andy Furniss
@ 2017-02-26 13:43 ` Jonathan Morton
2017-02-26 14:34 ` Andy Furniss
0 siblings, 1 reply; 9+ messages in thread
From: Jonathan Morton @ 2017-02-26 13:43 UTC (permalink / raw)
To: Andy Furniss; +Cc: cake
> On 26 Feb, 2017, at 15:16, Andy Furniss <adf.lists@gmail.com> wrote:
>
> Is there any way, or are there plans, to allow users to relax the target slightly?
You can do that by selecting a higher assumed RTT, for instance with the “oceanic” or “satellite” keywords. This also increases the interval, which makes the AQM less aggressive in general.
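For example, something like this (an illustrative sketch reusing your existing
parameters; IIRC oceanic corresponds to roughly 300ms and satellite to roughly
1000ms of assumed RTT):
  tc qdisc replace dev ppp0 root cake bandwidth 19690kbit raw overhead 34 diffserv4 satellite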
- Jonathan Morton
* Re: [Cake] 5ms target hurting tcp throughput tweakable?
2017-02-26 13:43 ` Jonathan Morton
@ 2017-02-26 14:34 ` Andy Furniss
2017-02-26 22:21 ` Sebastian Moeller
0 siblings, 1 reply; 9+ messages in thread
From: Andy Furniss @ 2017-02-26 14:34 UTC (permalink / raw)
To: Jonathan Morton; +Cc: cake
Jonathan Morton wrote:
>
>> On 26 Feb, 2017, at 15:16, Andy Furniss <adf.lists@gmail.com> wrote:
>>
>> Is there any way, or are there plans, to allow users to relax the target slightly?
>
> You can do that by selecting a higher assumed RTT, for instance with the “oceanic” or “satellite” keywords. This also increases the interval, which makes the AQM less aggressive in general.
Ok, thanks, I'll try.
I didn't know what the implications of "lying" about RTT were.
I do know that on a 60mbit ingress test I did to London (= 10ms away),
trying metro (= 10ms) killed single-threaded throughput, so I was
wary of changing the defaults after that.
* Re: [Cake] 5ms target hurting tcp throughput tweakable?
2017-02-26 14:34 ` Andy Furniss
@ 2017-02-26 22:21 ` Sebastian Moeller
2017-02-27 18:02 ` Andy Furniss
0 siblings, 1 reply; 9+ messages in thread
From: Sebastian Moeller @ 2017-02-26 22:21 UTC (permalink / raw)
To: Andy Furniss; +Cc: Jonathan Morton, cake
Looking at tc-adv, I would recommend using “rtt 50” (maybe it is “rtt 50ms”), which lets you explicitly request a new “interval” (which IIRC corresponds to the time you allow for the TCP control loop to react to cake’s ECN marking/dropping). “target” will be calculated as 5% of the explicit interval, in accordance with the rationale in the codel RFC.
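Concretely, something like this (an untested sketch, just reusing the
parameters from your first mail):
  tc qdisc replace dev ppp0 root cake bandwidth 19690kbit raw overhead 34 diffserv4 rtt 50ms
which, per the 5% rule above, should give interval ~50ms and target ~2.5ms
instead of the default 100ms / 5ms.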
Best Regards
> On Feb 26, 2017, at 15:34, Andy Furniss <adf.lists@gmail.com> wrote:
>
> Jonathan Morton wrote:
>>
>>> On 26 Feb, 2017, at 15:16, Andy Furniss <adf.lists@gmail.com> wrote:
>>>
>>> Is there any way, or are there plans, to allow users to relax the target slightly?
>>
>> You can do that by selecting a higher assumed RTT, for instance with the “oceanic” or “satellite” keywords. This also increases the interval, which makes the AQM less aggressive in general.
>
> Ok, thanks, I'll try.
>
> I didn't know what the implications of "lying" about RTT were.
>
> I do know that on a 60mbit ingress test I did to London (= 10ms away),
> trying metro (= 10ms) killed single-threaded throughput, so I was
> wary of changing the defaults after that.
* Re: [Cake] 5ms target hurting tcp throughput tweakable?
2017-02-26 22:21 ` Sebastian Moeller
@ 2017-02-27 18:02 ` Andy Furniss
2017-02-27 18:13 ` Sebastian Moeller
0 siblings, 1 reply; 9+ messages in thread
From: Andy Furniss @ 2017-02-27 18:02 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Jonathan Morton, cake
Sebastian Moeller wrote:
> Looking at tc-adv, I would recommend using “rtt 50” (maybe it is
> “rtt 50ms”), which lets you explicitly request a new “interval”
> (which IIRC corresponds to the time you allow for the TCP control
> loop to react to cake’s ECN marking/dropping). “target” will be
> calculated as 5% of the explicit interval, in accordance with the
> rationale in the codel RFC.
50 is certainly better than 10, but still seems to hurt single-stream
throughput a bit even on a close server.
On more distant servers maybe even more - I am getting results that are
too variable to tell properly at the current time (of day).
http://www.thinkbroadband.com/speedtest/results.html?id=1488216166262542155
Almost OK, but with 100ms this test usually shows x1 and x6 the same.
http://www.thinkbroadband.com/speedtest/results.html?id=1488218291872647755
Of course as we all know ingress shaping is a different beast anyway and
would deserve its own thread.
* Re: [Cake] 5ms target hurting tcp throughput tweakable?
2017-02-27 18:02 ` Andy Furniss
@ 2017-02-27 18:13 ` Sebastian Moeller
2017-02-27 19:26 ` Andy Furniss
0 siblings, 1 reply; 9+ messages in thread
From: Sebastian Moeller @ 2017-02-27 18:13 UTC (permalink / raw)
To: Andy Furniss; +Cc: Jonathan Morton, cake
Hi Andy,
> On Feb 27, 2017, at 19:02, Andy Furniss <adf.lists@gmail.com> wrote:
>
> Sebastian Moeller wrote:
>> Looking at tc-adv, I would recommend using “rtt 50” (maybe it is
>> “rtt 50ms”), which lets you explicitly request a new “interval”
>> (which IIRC corresponds to the time you allow for the TCP control
>> loop to react to cake’s ECN marking/dropping). “target” will be
>> calculated as 5% of the explicit interval, in accordance with the
>> rationale in the codel RFC.
>
> 50 is certainly better than 10, but still seems to hurt single-stream
> throughput a bit even on a close server.
Oops, what I meant to convey is that there is the numeric option “rtt NNN” that lets you select the exact RTT/interval you believe to be valid. I picked 50 just because I wanted to give a concrete example, not because I believe 50 to be correct for your experiments...
>
> On more distant servers maybe even more - I am getting results that are
> too variable to tell properly at the current time (of day).
>
> http://www.thinkbroadband.com/speedtest/results.html?id=1488216166262542155
>
> Almost OK, but with 100ms this test usually shows x1 and x6 the same.
>
> http://www.thinkbroadband.com/speedtest/results.html?id=1488218291872647755
Interesting. My mental image is that interval defines the time you give both involved TCPs to get their act together before escalating cake’s/codel’s signaling. The theory as far as I understand it says that the RTT is the lower bound for the time required, so I am not totally amazed that relaxing that interval a bit increases bandwidth utilisation at a very moderate latency cost (if any). The art is to figure out how to pick the interval (and I believe the codel paper showed that, at least for codel, the exact number is not so important but the ballpark / order of magnitude should match).
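As a back-of-the-envelope illustration for the 19690kbit link from the start of this thread: a 1500-byte packet takes 1500*8 / 19690kbit ~= 0.61ms to serialize, so the default 5ms target corresponds to only about 8 full-size packets' worth of standing queue at that rate, and a 2.5ms target to about 4 - which gives a feel for how little headroom a single flow gets as the target shrinks.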
>
> Of course as we all know ingress shaping is a different beast anyway and
> would deserve its own thread.
The cool thing is that ingress shaping with all its “approximateness” (is that a word) works as well as it does ;)
Best Regards
* Re: [Cake] 5ms target hurting tcp throughput tweakable?
2017-02-27 18:13 ` Sebastian Moeller
@ 2017-02-27 19:26 ` Andy Furniss
2017-03-01 2:16 ` Benjamin Cronce
0 siblings, 1 reply; 9+ messages in thread
From: Andy Furniss @ 2017-02-27 19:26 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Jonathan Morton, cake
Sebastian Moeller wrote:
> Hi Andy,
>
>
>> On Feb 27, 2017, at 19:02, Andy Furniss <adf.lists@gmail.com>
>> wrote:
>>
>> Sebastian Moeller wrote:
>>> Looking at tc-adv, I would recommend using “rtt 50” (maybe it is
>>> “rtt 50ms”), which lets you explicitly request a new “interval”
>>> (which IIRC corresponds to the time you allow for the TCP control
>>> loop to react to cake’s ECN marking/dropping). “target” will be
>>> calculated as 5% of the explicit interval, in accordance with the
>>> rationale in the codel RFC.
>>
>> 50 is certainly better than 10, but still seems to hurt single-stream
>> throughput a bit even on a close server.
>
> Oops, what I meant to convey is that there is the numeric option
> “rtt NNN” that lets you select the exact RTT/interval you believe
> to be valid. I picked 50 just because I wanted to give a concrete
> example, not because I believe 50 to be correct for your
> experiments...
Ahh, OK.
>
>>
>> On more distant servers maybe even more - I am getting results
>> that are too variable to tell properly at the current time (of
>> day).
>>
>> http://www.thinkbroadband.com/speedtest/results.html?id=1488216166262542155
>>
>> Almost OK, but with 100ms this test usually shows x1 and x6 the same.
>>
>> http://www.thinkbroadband.com/speedtest/results.html?id=1488218291872647755
>
> Interesting. My mental image is that interval defines the time you
> give both involved TCPs to get their act together before escalating
> cake’s/codel’s signaling. The theory as far as I understand it says
> that the RTT is the lower bound for the time required, so I am not
> totally amazed that relaxing that interval a bit increases bandwidth
> utilisation at a very moderate latency cost (if any). The art is to
> figure out how to pick the interval (and I believe the codel paper
> showed that, at least for codel, the exact number is not so important
> but the ballpark / order of magnitude should match).
Seems to be a repeatable result.
>
>>
>> Of course as we all know ingress shaping is a different beast
>> anyway and would deserve its own thread.
>
> The cool thing is that ingress shaping with all its
> “approximateness” (is that a word) works as well as it does ;)
Yea, though I am always interested in any ingress specific tweaks.
On my current line I have a 67 meg sync and the ISP buffer seems to
spike to max 80ms then settle to 40 = luxury compared to when I used to
have a 288/576 kbit sync adsl line with a 600ms remote buffer. Ingress
shaping on that was a bit more "interesting", especially as I targeted
latency for gaming. Back then I used to use connbytes and a short
head-dropping sfq (which needed a hack) to get tcp out of (not so) slow
start quickly.
Even on this line I can measure regular (without qos) tcp blips of 70ms
when someone is watching an HD vid on youtube. Streaming is a misleading
description for this test at least; blipping would be better :-)
Sacrificing 6 meg does make it better (bit variable 20 - 40ms), but the
blips still show. The challenge of keeping the remote buffer empty-ish
without sacrificing too much speed is an interesting one.
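For reference, the usual ifb redirect for this sort of ingress shaping - a
rough sketch with illustrative rates and names, not necessarily what I
actually run:
  modprobe ifb
  ip link set ifb0 up
  tc qdisc add dev ppp0 handle ffff: ingress
  tc filter add dev ppp0 parent ffff: protocol all prio 10 u32 \
     match u32 0 0 flowid 1:1 action mirred egress redirect dev ifb0
  tc qdisc add dev ifb0 root cake bandwidth 61000kbit besteffort
(61000kbit being roughly the 67 meg sync minus the sacrificed 6 meg.)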
Until recently I didn't need to care as my ISP did QOS with AFAIK
Ellacoyas. Unfortunately they had a big change in their network and for
reasons unknown it all went pear shaped so they have turned it off.
* Re: [Cake] 5ms target hurting tcp throughput tweakable?
2017-02-27 19:26 ` Andy Furniss
@ 2017-03-01 2:16 ` Benjamin Cronce
2017-03-01 22:48 ` Andy Furniss
0 siblings, 1 reply; 9+ messages in thread
From: Benjamin Cronce @ 2017-03-01 2:16 UTC (permalink / raw)
To: Andy Furniss; +Cc: Sebastian Moeller, cake
On Mon, Feb 27, 2017 at 1:26 PM, Andy Furniss <adf.lists@gmail.com> wrote:
> Sebastian Moeller wrote:
>
>> Hi Andy,
>>
>>
>> On Feb 27, 2017, at 19:02, Andy Furniss <adf.lists@gmail.com>
>>> wrote:
>>>
>>> Sebastian Moeller wrote:
>>>
>>>> Looking at tc-adv, I would recommend using “rtt 50” (maybe it is
>>>> “rtt 50ms”), which lets you explicitly request a new “interval”
>>>> (which IIRC corresponds to the time you allow for the TCP control
>>>> loop to react to cake’s ECN marking/dropping). “target” will be
>>>> calculated as 5% of the explicit interval, in accordance with the
>>>> rationale in the codel RFC.
>>>>
>>>
>>> 50 is certainly better than 10, but still seems to hurt single-stream
>>> throughput a bit even on a close server.
>>>
>>
>> Oops, what I meant to convey is that there is the numeric option
>> “rtt NNN” that lets you select the exact RTT/interval you believe
>> to be valid. I picked 50 just because I wanted to give a concrete
>> example, not because I believe 50 to be correct for your
>> experiments...
>>
>
> Ahh, OK.
>
>
>>
>>> On more distant servers maybe even more - I am getting results
>>> that are too variable to tell properly at the current time (of
>>> day).
>>>
>>> http://www.thinkbroadband.com/speedtest/results.html?id=1488216166262542155
>>>
>>> Almost OK, but with 100ms this test usually shows x1 and x6 the same.
>>>
>>> http://www.thinkbroadband.com/speedtest/results.html?id=1488218291872647755
>>
>> Interesting. My mental image is that interval defines the time you
>> give both involved TCPs to get their act together before escalating
>> cake’s/codel’s signaling. The theory as far as I understand it says
>> that the RTT is the lower bound for the time required, so I am not
>> totally amazed that relaxing that interval a bit increases bandwidth
>> utilisation at a very moderate latency cost (if any). The art is to
>> figure out how to pick the interval (and I believe the codel paper
>> showed that, at least for codel, the exact number is not so important
>> but the ballpark / order of magnitude should match).
>>
>
> Seems to be a repeatable result.
>
>
>>
>>> Of course as we all know ingress shaping is a different beast
>>> anyway and would deserve its own thread.
>>>
>>
>> The cool thing is that ingress shaping with all its
>> “approximateness” (is that a word) works as well as it does ;)
>>
>
> Yea, though I am always interested in any ingress specific tweaks.
>
> On my current line I have a 67 meg sync and the ISP buffer seems to
> spike to max 80ms then settle to 40 = luxury compared to when I used to
> have a 288/576 kbit sync adsl line with a 600ms remote buffer. Ingress
> shaping on that was a bit more "interesting", especially as I targeted
> latency for gaming. Back then I used to use connbytes and a short
> head-dropping sfq (which needed a hack) to get tcp out of (not so) slow
> start quickly.
>
> Even on this line I can measure regular (without qos) tcp blips of 70ms
> when someone is watching an HD vid on youtube. Streaming is a misleading
> description for this test at least; blipping would be better :-)
>
I have not sampled YouTube data in a while, but the last time I looked it
had packet-pacing issues, with TCP going from idle to full several times a
second. Not only do you get the issue that TCP will front-load the entire
TCP window all at once, but if the data being transferred fits within the
TCP window, it never learns to back down. If you have a 30ms ping to
YouTube, at 67Mb/s, your window is about 2mbits, which is about 256KiB,
which is about the same size as the request sizes to "stream", last I knew.
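(Back-of-the-envelope on those numbers: 67Mbit/s x 30ms = ~2.0Mbit of data
in flight, i.e. ~250KB / ~245KiB, so one ~256KiB chunk is indeed roughly a
single RTT's worth of window.)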
Netflix is working on packet pacing for FreeBSD. After bufferbloat, it's
the next big issue.
>
> Sacrificing 6 meg does make it better (bit variable 20 - 40ms), but the
> blips still show. The challenge of keeping the remote buffer empty-ish
> without sacrificing too much speed is an interesting one.
>
> Until recently I didn't need to care as my ISP did QOS with AFAIK
> Ellacoyas. Unfortunately they had a big change in their network and for
> reasons unknown it all went pear shaped so they have turned it off.
* Re: [Cake] 5ms target hurting tcp throughput tweakable?
2017-03-01 2:16 ` Benjamin Cronce
@ 2017-03-01 22:48 ` Andy Furniss
0 siblings, 0 replies; 9+ messages in thread
From: Andy Furniss @ 2017-03-01 22:48 UTC (permalink / raw)
To: Benjamin Cronce; +Cc: Sebastian Moeller, cake
Benjamin Cronce wrote:
> I have not sampled YouTube data in a while, but the last time I looked it
> had packet-pacing issues, with TCP going from idle to full several times a
> second. Not only do you get the issue that TCP will front-load the entire
> TCP window all at once, but if the data being transferred fits within the
> TCP window, it never learns to back down. If you have a 30ms ping to
> YouTube, at 67Mb/s, your window is about 2mbits, which is about 256KiB,
> which is about the same size as the request sizes to "stream", last I knew.
>
> Netflix is working on packet pacing for FreeBSD. After bufferbloat, it's
> the next big issue.
Interesting, I guess to some extent client behavior plays a part as well.
I did another test with the same vid, watching a live ping monitor with
"stats for nerds" on. In this case the spacing was 2 to 4 seconds, with
the ping spikes matching the buffer fills shown in the display.
I guess just changing the buffer watermarks to do more, smaller reads
would also be helpful in this case.
Pinging at 10pps shows the spikes last 0.2 to 0.3 seconds on my 67mbit
line. The CDN the data is coming from is only 9ms away from me.
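(10pps being e.g. "ping -i 0.1 <host>"; with stock iputils ping, intervals
below 0.2s need root.)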