* [Cake] cake vs fqcodel with 1 client, 4 servers
From: Georgios Amanakis @ 2017-12-04 22:30 UTC
To: Dave Taht, Cake List
I tried to simulate a situation resembling Windows
updates/Steam/Torrents on a slow 10/2mbit connection.
veth setup, 1 client, 4 servers
setup.tgz:
./vsetup.sh
./sshd.sh
./vcake.sh
./mm.sh
servers -- delay -- isp      -- mbox      -- client
  (4)      20ms     10/2mbit    9/1.8mbit     (1)
The client is creating in parallel 11 downstream and 2 upstream flows
to *each* of the 4 servers.
This was done by running 4 rrul_be_nflows tests in parallel.
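(Concretely, one run per server along these lines -- a sketch only: the
hostname, test length and title are placeholders, and the exact commands
are in the attached setup.tgz:)

flent rrul_be_nflows -H server1 -l 60 -t server1 \
    --test-parameter download_streams=11 \
    --test-parameter upload_streams=2 &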
Cake vs HTB/fqcodel at mbox.
Cake tested with ack-filter and ingress/egress.
Cake ingress, as expected, achieves better latency at the cost of
bandwidth. This does wonders on slow connections like mine.
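(For reference, the two mbox configurations compared, sketched; the
interface name is a placeholder and the real commands are in the attached
setup.tgz. Downstream side shown, shaped at 9mbit; the 1.8mbit upstream is
analogous. The cake runs used diffserv3 and ack-filter, with and without
the ingress keyword:)

# cake
tc qdisc replace dev eth0 root cake bandwidth 9mbit diffserv3 ack-filter ingress

# HTB + fq_codel
tc qdisc replace dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 9mbit
tc qdisc add dev eth0 parent 1:10 fq_codel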
I will try to increase the number of clients to 4 and run some tests.
George
[-- Attachment #2: setup.tgz --]
[-- Type: application/x-compressed-tar, Size: 4163 bytes --]
[-- Attachment #3: rrulbe_11_4_4servers_1client_10mbit_2mbit_cake_htbfqcodel.tgz --]
[-- Type: application/x-compressed-tar, Size: 2748437 bytes --]
[-- Attachment #4: totals_cake_fqcodel.png --]
[-- Type: image/png, Size: 85274 bytes --]
[-- Attachment #5: ping_cake_fqcodel.png --]
[-- Type: image/png, Size: 621345 bytes --]
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Dave Taht @ 2017-12-05 0:06 UTC
To: Georgios Amanakis; +Cc: Cake List
The puzzling thing about that graph is that you are only achieving 1.3
mbit in the ingress case.
On Mon, Dec 4, 2017 at 2:30 PM, Georgios Amanakis <gamanakis@gmail.com> wrote:
> I tried to simulate a situation resembling Windows
> updates/Steam/Torrents on a slow 10/2mbit connection.
>
> veth setup, 1 client, 4 servers
> setup.tgz:
> ./vsetup.sh
> ./sshd.sh
> ./vcake.sh
> ./mm.sh
>
> servers -- delay -- isp      -- mbox      -- client
>   (4)      20ms     10/2mbit    9/1.8mbit     (1)
>
> The client is creating in parallel 11 downstream and 2 upstream flows
> to *each* of the 4 servers.
> This was done by running 4 rrul_be_nflows tests in parallel.
>
> Cake vs HTB/fqcodel at mbox.
> Cake tested with ack-filter and ingress/egress.
>
> Cake ingress, as expected, achieves better latency at the cost of
> bandwidth. This does wonders on slow connections like mine.
>
> I will try to increase the number of clients to 4 and run some tests.
>
> George
--
Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Georgios Amanakis @ 2017-12-05 1:13 UTC
To: Dave Taht; +Cc: Cake List
Yes, I know. In cake without ingress each *flow* gets 0.19mbit/s, while
with ingress it gets 0.12mbit/s. Multiplying by 11 flows x 4 servers gives
8.36mbit/s without ingress vs 5.28mbit/s with ingress.
This was done in diffserv3 mode. I had hoped that Jonathan's latest
adjustment of the ingress failsafe would have ameliorated this.
Nevertheless, latency remains unnoticeable.
If one decreases the number of flows, the difference becomes smaller.
This is why I had proposed that the ingress failsafe adjustment should
perhaps scale with the number of flows rather than being applied blindly.
George
On Mon, 2017-12-04 at 16:06 -0800, Dave Taht wrote:
> The puzzling thing about that graph is that you are only achieving 1.3
> mbit in the ingress case.
>
> On Mon, Dec 4, 2017 at 2:30 PM, Georgios Amanakis <gamanakis@gmail.com> wrote:
> > I tried to simulate a situation resembling Windows
> > updates/Steam/Torrents on a slow 10/2mbit connection.
> >
> > veth setup, 1 client, 4 servers
> > setup.tgz:
> > ./vsetup.sh
> > ./sshd.sh
> > ./vcake.sh
> > ./mm.sh
> >
> > servers -- delay -- isp      -- mbox      -- client
> >   (4)      20ms     10/2mbit    9/1.8mbit     (1)
> >
> > The client is creating in parallel 11 downstream and 2 upstream flows
> > to *each* of the 4 servers.
> > This was done by running 4 rrul_be_nflows tests in parallel.
> >
> > Cake vs HTB/fqcodel at mbox.
> > Cake tested with ack-filter and ingress/egress.
> >
> > Cake ingress, as expected, achieves better latency at the cost of
> > bandwidth. This does wonders on slow connections like mine.
> >
> > I will try to increase the number of clients to 4 and run some
> > tests.
> >
> > George
[-- Attachment #2: download_per_flow_cake_ing.png --]
[-- Type: image/png, Size: 43009 bytes --]
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Jonathan Morton @ 2017-12-05 1:38 UTC
To: Georgios Amanakis; +Cc: Dave Taht, Cake List
Ingress mode works by counting dropped packets, not only delivered packets,
against the shaped limit. When there's a large number of non-ECN flows and
a low BDP per flow, a lot of packets are dropped to try and keep the
intra-flow latency in line. So the goodput tends to decrease when the flow
count increases, but this is necessary to control latency.
The modified failsafe ensures that at most a third of the total bandwidth
will "go missing" this way. Previously, as much as three-quarters could.
At that threshold, Cake stops counting dropped packets, trading a reduction
in latency control for maintaining reasonable goodput. There is no more
sophisticated heuristic that I can think of to achieve ingress mode's goals.
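(Illustratively, at mbox's 9mbit shaped rate those bounds work out to the
following goodput floors -- a sketch of the stated fractions only, not of
the actual accounting code:)

echo $((9000000 * 2 / 3))    # 6000000 bit/s floor with the modified failsafe
echo $((9000000 / 4))        # 2250000 bit/s floor previously (up to 3/4 missing)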
However, it might be worth revisiting an old question once raised over
fq_codel's use of a fixed set of Codel parameters regardless of active flow
count. It was then argued that the delay target wasn't dependent on the
flow count.
But when the flow count is high, a fixed delay target plus the baseline
latency might end up requiring a lower BDP than the sender is able to
select as a congestion window (typical TCPs have a hard lower limit of 4x
MSS). In that case, packets are currently being dropped with no effect on
the send rate. This wouldn't matter with ECN, of course.
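(To put rough numbers on this with George's setup -- the ~60ms effective
RTT here is an assumption:)

# per-flow fair share: 9 Mbit/s over 4 servers x 11 flows = 44 flows
echo $((9000000 / 44))    # ~204545 bit/s per flow
# at ~60ms RTT the per-flow BDP is ~204545/8 * 0.06 ~ 1534 bytes,
# i.e. about 1 MSS -- well under the 4 x MSS cwnd floor (~5.8 kB),
# so further drops cannot push the flow's rate any lower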
So a better fix might be to adjust the target latency according to the
number of active bulk flows. Fortunately for performance, this should be a
multiply, not a division.
- Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: George Amanakis @ 2017-12-05 3:50 UTC
To: Jonathan Morton; +Cc: Dave Taht, Cake List
Of course Jonathan is right. Furthermore, adjusting the ingress failsafe
to the number of flows would result in extremely jerky bandwidth
behaviour.
I redid the test with net.ipv4.tcp_ecn=1 and I am attaching the
results.
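(That is, on the client and the servers:)

sysctl -w net.ipv4.tcp_ecn=1    # enable ECN negotiation on TCP connections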
On Tue, 2017-12-05 at 03:38 +0200, Jonathan Morton wrote:
> Ingress mode works by counting dropped packets, not only delivered
> packets, against the shaped limit. When there's a large number of
> non-ECN flows and a low BDP per flow, a lot of packets are dropped to
> try and keep the intra-flow latency in line. So the goodput tends to
> decrease when the flow count increases, but this is necessary to
> control latency.
> The modified failsafe ensures that at most a third of the total
> bandwidth will "go missing" this way. Previously, as much as three-
> quarters could. At that threshold, Cake stops counting dropped
> packets, trading a reduction in latency control for maintaining
> reasonable goodput. There is no more sophisticated heuristic that I
> can think of to achieve ingress mode's goals.
> However, it might be worth revisiting an old question once raised
> over fq_codel's use of a fixed set of Codel parameters regardless of
> active flow count. It was then argued that the delay target wasn't
> dependent on the flow count.
> But when the flow count is high, a fixed delay target plus the
> baseline latency might end up requiring a lower BDP than the sender
> is able to select as a congestion window (typical TCPs have a hard
> lower limit of 4x MSS). In that case, currently packets are being
> dropped for no effect on send rate. This wouldn't matter with ECN,
> of course.
> So a better fix might be to adjust the target latency according to
> the number of active bulk flows. Fortunately for performance, this
> should be a multiply, not a division.
> - Jonathan Morton
[-- Attachment #2: rrulbe_11_4_4servers_1client_10mbit_2mbit_cake_htbfqcodel.tgz --]
[-- Type: application/x-compressed-tar, Size: 3275211 bytes --]
[-- Attachment #3: download_per_flow_cake_ing_ecn.png --]
[-- Type: image/png, Size: 60804 bytes --]
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Jonathan Morton @ 2017-12-05 4:48 UTC
To: Georgios Amanakis; +Cc: Dave Taht, Cake List
I might try to implement a dynamic target adjustment later today.
- Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Dave Taht @ 2017-12-05 21:15 UTC
To: Jonathan Morton; +Cc: Georgios Amanakis, Cake List
Jonathan Morton <chromatix99@gmail.com> writes:
> I might try to implement a dynamic target adjustment later today.
The loss of throughput here compared to non-ingress mode is
a blocker for mainlining and, for that matter, for wedging this into LEDE.
Has it always deteriorated this way?
>
> - Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Georgios Amanakis @ 2017-12-05 21:26 UTC
To: Dave Taht; +Cc: Jonathan Morton, Cake List
As a reminder, noticeable loss of throughput occurs only when there are a
lot of concurrent flows (>16, on my connection).
To my knowledge and testing, ingress mode has been behaving like this from
the beginning.
On Dec 5, 2017 4:15 PM, "Dave Taht" <dave@taht.net> wrote:
> Jonathan Morton <chromatix99@gmail.com> writes:
>
> > I might try to implement a dynamic target adjustment later today.
>
> The loss of throughput here compared to non-ingress mode is
> a blocker for mainlining and for that matter, wedging this into lede.
>
> Has it always deteriorated this way?
>
> >
> > - Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: xnor @ 2017-12-05 22:39 UTC
To: cake
>As a reminder, noticeable loss of throughput occurs only when there are
>a lot of concurrent flows (>16, on my connection).
>
>To my knowledge and testing, ingress mode has been behaving like this
>from the beginning.
I'm using the old cobalt branch with the version after ingress mode was
implemented.
tc-cake is configured with ingress mode and bandwidth 18800Kbit.
Downloading from a server with 2x16 connections using two aria2c
processes (~35ms ping latency to the server), I get about 18.7 Mbit/s of
received bytes on the interface (/proc/net/dev).
The gross download rate remaining after drops is about 17.3 Mbit/s. That
also matches the qdisc stats (drops/packets = roughly 8%).
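(Roughly this configuration -- the interface name is a placeholder:)

tc qdisc replace dev eth0 root cake bandwidth 18800kbit ingress
tc -s qdisc show dev eth0    # the drop and packet counters cited above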
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Jonathan Morton @ 2017-12-06 9:37 UTC
To: xnor; +Cc: Cake List
At those speeds, you probably still have a large enough BDP per flow that
TCP can respond appropriately. I'm testing at 512Kbps, where it's easy to
get into single-packet BDPs.
A first-pass dynamic target adjustment is now pushed, and running here. It
doesn't solve the problem completely, probably because it doesn't guarantee
4xMTU, only 1xMTU, but it looks like it might be beginning to help and I'm
pretty sure it's the correct approach. Tuning it from here should be easy.
- Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: xnor @ 2017-12-06 19:32 UTC
To: Jonathan Morton; +Cc: Cake List
>At those speeds, you probably still have a large enough BDP per flow
>that TCP can respond appropriately. I'm testing at 512Kbps, where it's
>easy to get into single-packet BDPs.
I've tried again with 96 connections to the same server, so ~196 Kbps
each, and that reduces the gross download rate from over 17 to below 10
Mbps.
The inbound interface still receives 18.7 Mbps and crude latency
measurements with a ping running at the same time show only a slight
increase on average.
The only problem I have with this (besides the amount of perfectly good
received data that has to be discarded, without ECN, to keep TCP in
check... that's terrible protocol design) is that I do get 3x to 4x
spikes over the steady-state latency when the download begins. That's
with "just" 16 additional connections starting.
>A first-pass dynamic target adjustment is now pushed, and running here.
> It doesn't solve the problem completely, probably because it doesn't
>guarantee 4xMTU, only 1xMTU, but it looks like it might be beginning to
>help and I'm pretty sure it's the correct approach. Tuning it from
>here should be easy.
>
I don't know exactly what you're adjusting and what you're adjusting to,
but I see two things that would help:
1) Reduce the amount of data that needs to be dropped.
Maybe the "pattern" of how packets are dropped can be improved such that
on *average* the sender sends at the same rate but overall less data
needs to be dropped at the receiver.
For example, instead of dropping each just-received packet that exceeds
some threshold, it could make more sense to drop multiple packets at
once, making the sender slow down significantly.
The problem with this, of course, would be a more pronounced zig-zag
pattern in the bandwidth (with the implementation limiting either the
peaks or the average rate).
(I'm not sure if this makes sense given common TCP implementations.)
2) Predict the slope of the bandwidth and slow down the sender
proactively, especially during slow-start.
If the connection is in an exponential speed-up phase, then we have to
drop packets *before* we notice that the current (or, even worse, the
average) rate is above the configured rate. The situation gets worse if
multiple connections are in that phase at the same time, and ideally
that would have to be accounted for as well.
Those are just some suggestions. I haven't studied cake's implementation
so maybe it already does something like that.
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Jonathan Morton @ 2017-12-07 1:59 UTC
To: xnor; +Cc: Cake List
These ideas have all been considered at great length in the past, and
resulted in the Codel algorithm in the first place. You might want to read
some of the original literature on it to understand my reasoning.
- Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Jonathan Morton @ 2017-12-07 2:27 UTC
To: Dave Täht; +Cc: Georgios Amanakis, Cake List
The latest push now enforces 4 x MTU x flows on the target intra-flow
latency. When lightly loaded, the normal target still applies.
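(Reading that literally -- an assumption about the exact formula -- the
implied target floor for George's 9mbit/44-flow setup is the time to
serialize 4 x MTU x flows at the shaped rate:)

# assuming MTU = 1514 bytes and 44 bulk flows at 9 Mbit/s
echo $((4 * 1514 * 44 * 8 * 1000 / 9000000))    # ~236 ms target floor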
This gives a noticeable improvement on the goodput of World of Warships'
updater, which does its level best to stuff 150 HTTP flows through my
512Kbps downlink, while retaining reasonable inter-flow latency as measured
by using other applications and/or hosts.
I'd like to see a more controlled test of this, to compare with the
previous behaviour.
- Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: George Amanakis @ 2017-12-07 2:30 UTC
To: Jonathan Morton, Dave Täht; +Cc: Cake List
I will try to get a run using the same setup.
On Thu, 2017-12-07 at 04:27 +0200, Jonathan Morton wrote:
> The latest push now enforces 4 x MTU x flows on the target intra-flow
> latency. When lightly loaded, the normal target still applies.
> This gives a noticeable improvement on the goodput of World of
> Warships' updater, which does its level best to stuff 150 HTTP flows
> through my 512Kbps downlink, while retaining reasonable inter-flow
> latency as measured by using other applications and/or hosts.
> I'd like to see a more controlled test of this, to compare with the
> previous behaviour.
> - Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: George Amanakis @ 2017-12-07 2:58 UTC
To: Jonathan Morton, Dave Täht; +Cc: Cake List
4servers  -- delay   -- isp      -- mbox      -- 1client
netserver    10/10ms    10/2mbit    9/1.8mbit    flent
net.ipv4.tcp_ecn=2
11down/2upstream flows using 4 parallel rrul_be_nflows
cobalt branch
cake params: rtt 20ms triple-isolate ack-filter
cake_4mtu --> r402 with ingress
cake_noadjust --> r400 with ingress
cake_noing --> r400 without ingress
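(So the qdisc under test looked roughly like this -- the interface name is
a placeholder and the exact scripts are in the attached tarball; the
ingress keyword was dropped for the cake_noing run:)

tc qdisc replace dev eth0 root cake bandwidth 9mbit rtt 20ms \
    triple-isolate ack-filter ingress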
On Thu, 2017-12-07 at 04:27 +0200, Jonathan Morton wrote:
> The latest push now enforces 4 x MTU x flows on the target intra-flow
> latency. When lightly loaded, the normal target still applies.
> This gives a noticeable improvement on the goodput of World of
> Warships' updater, which does its level best to stuff 150 HTTP flows
> through my 512Kbps downlink, while retaining reasonable inter-flow
> latency as measured by using other applications and/or hosts.
> I'd like to see a more controlled test of this, to compare with the
> previous behaviour.
> - Jonathan Morton
[-- Attachment #2: rrulbe_11_4_4servers_1client_10mbit_2mbit_cake_ingress.tgz --]
[-- Type: application/x-compressed-tar, Size: 1927267 bytes --]
[-- Attachment #3: boxcombine_cake_ingress.png.png --]
[-- Type: image/png, Size: 104088 bytes --]
[-- Attachment #4: boxtotals_cake_ingress.png --]
[-- Type: image/png, Size: 133037 bytes --]
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Jonathan Morton @ 2017-12-07 3:04 UTC
To: Georgios Amanakis; +Cc: Dave Täht, Cake List
Looks like a win, but not as much of one as I hoped. I'm reluctant to
increase the limit further, but it might be worth trying anyway.
- Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Georgios Amanakis @ 2017-12-07 3:08 UTC
To: Jonathan Morton; +Cc: Dave Taht, Cake List
I am giving it a try with interplanetary, but the download rate doesn't
increase further..
On Dec 6, 2017 10:04 PM, "Jonathan Morton" <chromatix99@gmail.com> wrote:
Looks like a win, but not as much of one as I hoped. I'm reluctant to
increase the limit further, but it might be worth trying anyway.
- Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Jonathan Morton @ 2017-12-07 7:08 UTC
To: Georgios Amanakis; +Cc: Dave Täht, Cake List
I should check whether the interplanetary keyword actually works as
intended.
- Jonathan Morton
On 7 Dec 2017 05:08, "Georgios Amanakis" <gamanakis@gmail.com> wrote:
> I am giving it a try with interplanetary, but the download rate doesn't
> increase further..
>
> On Dec 6, 2017 10:04 PM, "Jonathan Morton" <chromatix99@gmail.com> wrote:
>
> Looks like a win, but not as much of one as I hoped. I'm reluctant to
> increase the limit further, but it might be worth trying anyway.
>
> - Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Jonathan Morton @ 2017-12-07 8:21 UTC
To: Georgios Amanakis; +Cc: Dave Täht, Cake List
Okay, I found that the parameters used were suspicious and didn't prevent
Cake from dropping packets. I've pushed a tc-adv update to fix that, so
you should try that again.
Other RTT keywords are unaffected.
- Jonathan Morton
On 7 Dec 2017 09:08, "Jonathan Morton" <chromatix99@gmail.com> wrote:
> I should check whether the interplanetary keyword actually works as
> intended.
>
> - Jonathan Morton
>
> On 7 Dec 2017 05:08, "Georgios Amanakis" <gamanakis@gmail.com> wrote:
>
>> I am giving it a try with interplanetary, but the download rate doesn't
>> increase further..
>>
>> On Dec 6, 2017 10:04 PM, "Jonathan Morton" <chromatix99@gmail.com> wrote:
>>
>> Looks like a win, but not as much of one as I hoped. I'm reluctant to
>> increase the limit further, but it might be worth trying anyway.
>>
>> - Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Kevin Darbyshire-Bryant @ 2017-12-07 8:51 UTC
To: Jonathan Morton; +Cc: Georgios Amanakis, Cake List
> On 7 Dec 2017, at 08:21, Jonathan Morton <chromatix99@gmail.com> wrote:
>
> Okay, I found that the parameters used were suspicious and didn't prevent Cake from dropping packets. I've pushed a tc-adv update to fix that, so you should try that again.
>
> Other RTT keywords are unaffected.
Can’t find the tweak?
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Jonathan Morton @ 2017-12-07 9:03 UTC
To: Kevin Darbyshire-Bryant; +Cc: Georgios Amanakis, Cake List
Ah, the push didn't actually complete. It's there now.
- Jonathan Morton
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Georgios Amanakis @ 2017-12-07 13:17 UTC
To: Jonathan Morton, Kevin Darbyshire-Bryant, Dave Täht; +Cc: Cake List
I redid the previous test: same topology, just replacing cake's rtt
20ms with interplanetary after applying the tc-adv patch.
Now the download rate is at the levels where it should be (~2.2mbit/s)
but of course latency jumped to about 7000ms.
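(That is, roughly -- same placeholder interface as before:)

tc qdisc replace dev eth0 root cake bandwidth 9mbit interplanetary \
    triple-isolate ack-filter ingress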
On Thu, 2017-12-07 at 11:03 +0200, Jonathan Morton wrote:
> Ah, the push didn't actually complete. It's there now.
> - Jonathan Morton
[-- Attachment #2: box_combine.png --]
[-- Type: image/png, Size: 473190 bytes --]
[-- Attachment #3: rrulbe_11_4_4servers_1client_10mbit_2mbit_cake_ingress.tgz --]
[-- Type: application/x-compressed-tar, Size: 2861575 bytes --]
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: xnor @ 2017-12-07 22:12 UTC
To: Georgios Amanakis; +Cc: Cake List
>Now the download rate is at the levels where it should be (~2.2mbit/s)
>but of course latency jumped to about 7000ms.
Why should it be at 2.2 Mbps? I thought you had set isp to 2 Mbps and
mbox to 1.8 Mbps?
* Re: [Cake] cake vs fqcodel with 1 client, 4 servers
From: Georgios Amanakis @ 2017-12-07 22:34 UTC
To: xnor; +Cc: Cake List
In this setup mbox is shaping at 9/1.8mbit.
The client generates 11 flows to each of 4 servers.
9mbit / 4 servers = 2.25mbit per server, shared among 11 flows
(~0.2mbit per flow).
On Thu, 2017-12-07 at 22:12 +0000, xnor wrote:
> > Now the download rate is at the levels where it should be (~2.2mbit/s)
> > but of course latency jumped to about 7000ms.
>
> Why should it be at 2.2 Mbps? I thought you had set isp to 2 Mbps
> and mbox to 1.8 Mbps?
>