General list for discussing Bufferbloat
* [Bloat] Bloat done correctly?
@ 2015-06-12  4:45 Benjamin Cronce
  2015-06-12  9:08 ` Sebastian Moeller
  0 siblings, 1 reply; 12+ messages in thread
From: Benjamin Cronce @ 2015-06-12  4:45 UTC (permalink / raw)
  To: bloat

[-- Attachment #1: Type: text/plain, Size: 2145 bytes --]

This is my first time using a mailing list, so I apologize if I break any
etiquette.

Here is my situation.
I have 100/100 via GPON; the ISP claims "dedicated" bandwidth, defined as
the port not being oversubscribed.
I was told their core network can handle all customers at 100% of their
provisioned speeds, but their
trunk would go down in a spectacular blaze. I was also told their trunk
consists of 6 links to Level 3
and they could handle 5 of those links going down without congestion
occurring on the trunk during peak hours.
The GPON head unit aggregates directly into the core router; the router is
some flashy new Cisco that supports "a lot of 10Gb and 100Gb ports". I have
a 1ms hop to my ISP, then a 9ms hop to Level 3.

I mention this because it may be useful when interpreting these results:
the only likely point of congestion is my 100Mb connection.

DSLReports Jitter test
https://lh3.googleusercontent.com/HxTrRZob4RNU9OdmdRoxS5Ig0xf-9qwZhFwh67uyVPg=w389-h540-no


On to bufferbloat.

Bypass firewall (PFSense) - no AQM/QoS on my part

32/16
http://www.dslreports.com/speedtest/624054

24/12
http://www.dslreports.com/speedtest/624060

Single Stream restriction
http://www.dslreports.com/speedtest/624065


Through the firewall. No other traffic, so HFSC does not matter; ports
80/443/8080 go into the same queue, which uses just regular CoDel.
DSLReports uses a "web ping", so the ping goes through the same queue as
the speedtest
http://www.dslreports.com/speedtest/624075

Under load while doing P2P (about 80Mb down and 20Mb up just as I started
the test)
HFSC: P2P in a 20% queue and 80/443/8080 in a 40% queue, with ACKs going to
a 20% realtime queue
http://www.dslreports.com/speedtest/622452

Here you can see that my quality graph spiked during the tests when I was
outside the firewall. Because I bypassed the firewall and was no longer
being traffic shaped, I was able to overload the connection.

https://lh3.googleusercontent.com/V4mpd_EMNXCIpdjMGQKbgrYjT_Kts9iIuFR5PnH_5Po=w760-h420-no

To me it seems like bufferbloat is mostly handled. I did send them some
emails to see if I could get a response
on what they use, but no luck.


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [Bloat] Bloat done correctly?
  2015-06-12  4:45 [Bloat] Bloat done correctly? Benjamin Cronce
@ 2015-06-12  9:08 ` Sebastian Moeller
  2015-06-12 15:33   ` Benjamin Cronce
  2015-06-12 18:51   ` Alex Elsayed
  0 siblings, 2 replies; 12+ messages in thread
From: Sebastian Moeller @ 2015-06-12  9:08 UTC (permalink / raw)
  To: Benjamin Cronce; +Cc: bloat

Hi Benjamin,

To go off onto a tangent:

On Jun 12, 2015, at 06:45 , Benjamin Cronce <bcronce@gmail.com> wrote:

> [...]
> Under load while doing P2P(About 80Mb down and 20Mb up just as I started the test)
> HFSC: P2P in 20% queue and 80/443/8080 in 40% queue with ACKs going to a 20% realtime queue
> http://www.dslreports.com/speedtest/622452

	I know this is not really your question, but I think the ACKs should go into the same queue as the matching data packets. Think about it this way: if the data is delayed due to congestion, it does not make much sense to tell the sender to send more, faster (which is essentially what ACK prioritization does), as that will not reduce the congestion but rather increase it.
	There is one caveat though: when ECN is used, it might make sense to send out the ACK that will signal the congestion state back to the sender faster… So if you prioritize ACKs, select only those with an ECN-Echo flag ;)
	@bloat : What do you all think about this refined ACK prioritization scheme?
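For concreteness, the refined scheme could look like the following packet classifier. This is a hypothetical sketch, not an implementation from the thread; the flag bit values follow the standard TCP header layout (ACK = 0x10, ECE = 0x40):

```python
# Sketch of the "refined ACK prioritization" idea: only pure ACKs that
# carry the ECN-Echo (ECE) flag are promoted; everything else, including
# plain ACKs and data packets, stays in the default queue.
TCP_FLAG_ACK = 0x10
TCP_FLAG_ECE = 0x40

def classify(tcp_flags: int, payload_len: int) -> str:
    """Return the queue a packet should join under the refined scheme."""
    is_pure_ack = bool(tcp_flags & TCP_FLAG_ACK) and payload_len == 0
    if is_pure_ack and (tcp_flags & TCP_FLAG_ECE):
        return "priority"   # ACK carrying congestion news back to the sender
    return "default"        # data packets, and ACKs without ECN-Echo

# A plain ACK is not promoted; an ECE-marked pure ACK is.
assert classify(TCP_FLAG_ACK, 0) == "default"
assert classify(TCP_FLAG_ACK | TCP_FLAG_ECE, 0) == "priority"
```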

Best Regards
	Sebastian




* Re: [Bloat] Bloat done correctly?
  2015-06-12  9:08 ` Sebastian Moeller
@ 2015-06-12 15:33   ` Benjamin Cronce
  2015-06-12 17:51     ` Sebastian Moeller
  2015-06-12 18:51   ` Alex Elsayed
  1 sibling, 1 reply; 12+ messages in thread
From: Benjamin Cronce @ 2015-06-12 15:33 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 3917 bytes --]

> On Fri, Jun 12, 2015 at 4:08 AM, Sebastian Moeller wrote:
> Hi Benjamin,
>
> To go off onto a tangent:
>
> On Jun 12, 2015, at 06:45 , Benjamin Cronce wrote:
>
> > [...]
> > Under load while doing P2P(About 80Mb down and 20Mb up just as I
started the test)
> > HFSC: P2P in 20% queue and 80/443/8080 in 40% queue with ACKs going to
a 20% realtime queue
> > http://www.dslreports.com/speedtest/622452
>
>         I know this is not really your question, but I think the ACKs
should go into the same queue as the matching data packets. Think about it
that way, if the data is delayed due to congestion it does not make too
much sense to tell the sender to send more faster (which essentially is
what ACK prioritization does) as that will not really reduce the congestion
but rather increase it.
>         There is one caveat though: when ECN is used it might make sense
to send out the ACK that will signal the congestion state back to the
sender faster… So if you prioritize ACKs only select those with an ECN-Echo
flag ;)
>         @bloat : What do you all think about this refined ACK
prioritization scheme?
>
> Best Regards
>         Sebastian

Here's a very real problem for many users. If you have a highly
asymmetrical connection, as many DSL and cable users do, even mild
uploading can consume enough bandwidth to affect your ability to send ACKs
upstream, which can negatively affect your downloads.
A regular offender for many is P2P. If you're uploading while downloading
on a 30/3 connection, you may not be able to ACK data fast enough. Of
course, this is in and of itself mostly an issue caused by bufferbloat
and/or lack of fair queueing.

I guess the real question is: should ACKs be prioritized in a system that
does not exhibit bufferbloat, or that has fair queuing which shields the
ACKs from the effect of other bandwidth being consumed? With no data to
back this up, my first guess would be that it will not matter. If you have
no bufferbloat, or you have fair queuing, the ACKs should effectively be
sent nearly immediately.
One case where it could make a difference is where ACKs make up a sizable
portion of the upload bandwidth while the download is saturated. An example
would be a hyper-asymmetrical connection like 60/3. In this case, having
the ACKs in a separate queue would actually do the opposite: it would put
an upper bound on how much bandwidth ACKs could consume, unless you gave
your ACK queue nearly all of the bandwidth.
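That "upper bound" point can be made concrete with a rough model. The numbers here are my own assumptions, not from the thread: a 1460-byte MSS, ~64-byte ACK frames, and one ACK per two full segments:

```python
# Rough model: capping a dedicated ACK queue also caps the download rate
# that can be ack-clocked through it.
MSS = 1460          # bytes acknowledged per data segment (assumed)
ACK_BYTES = 64      # bytes per ACK frame on the wire (assumed)
SEGS_PER_ACK = 2    # delayed ACK: one ACK per two segments (assumed)

def max_acked_download_mbps(uplink_mbps: float, ack_share: float) -> float:
    """Download rate (Mb/s) sustainable by an ACK queue capped at ack_share
    of the uplink: each ACK byte 'clocks' SEGS_PER_ACK * MSS data bytes."""
    ack_budget = uplink_mbps * ack_share          # Mb/s reserved for ACKs
    return ack_budget * (SEGS_PER_ACK * MSS) / ACK_BYTES

# On a 60/3 link with a 20% ACK queue, the cap binds well below 60 Mb/s:
assert max_acked_download_mbps(3.0, 0.20) < 60
```

Under these assumptions a 20% ACK queue on a 3 Mb/s uplink can only clock roughly 27 Mb/s of download, which is exactly the "opposite effect" described above.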

In the two examples that I could quickly think of, either it made little
difference or it did the opposite of what you proposed. Of course, it
depends on the configuration of the shaping.

Minor side tangent about bloat and ACKs that doesn't really need a
discussion

A few months back I was downloading some Linux torrents when I noticed I
was getting packetloss. I pretty much never get packetloss. I also noticed
that I was receiving a fluctuating 100Mb/s-103Mb/s of ingress on my WAN but
only seeing about 80Mb/s of egress on my LAN. I did a packet capture and
saw something funny. My WAN was sending out a lot of Dup ACK packets.
I grabbed a few of the dest IPs that my firewall was sending to and did a
traceroute. Nice low pings for most of the path; then, suddenly, about 2-3
hops before the seeder, inside their ISP's network, I started to see huge
jitter and pings in the 1sec-3sec range. Cox and Comcast were the two main
offenders, but they do represent a sizable portion of the population.
My conclusion was that bufferbloat was so bad on their networks that the
seeders were not getting ACKs from me in a timely fashion, so they resent
the segments assuming they were lost. This issue was so bad that about
20Mb/s of the 100Mb/s was duplicate data segments. I was being DDoS'd by
bufferbloat.

P.S. I removed the emails because I was not absolutely sure if they would
be scrubbed.



* Re: [Bloat] Bloat done correctly?
  2015-06-12 15:33   ` Benjamin Cronce
@ 2015-06-12 17:51     ` Sebastian Moeller
  2015-06-12 18:44       ` Benjamin Cronce
  0 siblings, 1 reply; 12+ messages in thread
From: Sebastian Moeller @ 2015-06-12 17:51 UTC (permalink / raw)
  To: Benjamin Cronce; +Cc: bloat

Hi Benjamin,

On Jun 12, 2015, at 17:33 , Benjamin Cronce <bcronce@gmail.com> wrote:

> > On Fri, Jun 12, 2015 at 4:08 AM, Sebastian Moeller wrote:
> > Hi Benjamin,
> > 
> > To go off onto a tangent:
> > 
> > On Jun 12, 2015, at 06:45 , Benjamin Cronce wrote:
> > 
> > > [...]
> > > Under load while doing P2P(About 80Mb down and 20Mb up just as I started the test)
> > > HFSC: P2P in 20% queue and 80/443/8080 in 40% queue with ACKs going to a 20% realtime queue
> > > http://www.dslreports.com/speedtest/622452
> > 
> >         I know this is not really your question, but I think the ACKs should go into the same queue as the matching data packets. Think about it that way, if the data is delayed due to congestion it does not make too much sense to tell the sender to send more faster (which essentially is what ACK prioritization does) as that will not really reduce the congestion but rather increase it.
> >         There is one caveat though: when ECN is used it might make sense to send out the ACK that will signal the congestion state back to the sender faster… So if you prioritize ACKs only select those with an ECN-Echo flag ;)
> >         @bloat : What do you all think about this refined ACK prioritization scheme?
> > 
> > Best Regards
> >         Sebastian
> 
> Here's a very real problem for many users. If you have a highly asymmetrical connection like what many DSL and cable users have, doing even mild uploading can consume enough bandwidth to affect your ability to upload ACKs, which can negatively affect your downloads.
> A regular offender for many is P2P. If you're uploading while downloading on a 30/3 connection, you may not be able to ACK data fast enough. Of course this is in and of itself mostly an issue caused by bufferbloat and/or lack of fair queueing.

	I agree, but I think reducing buffer bloat and using fair queueing is superior to ACK prioritization, that’s all.

> 
> I guess the real question is: should ACKs be prioritized in a system that does not exhibit bufferbloat, or has fair queuing that allows the ACKs to not feel the effect of other bandwidth being consumed? With no data to back this up, my first guess would be it will not matter. If you have no bufferbloat or you have fair queuing, the ACKs should effectively be sent nearly immediately.

	Actually fq_codel mildly boosts sparse flows, and since ACKs typically are sparse they usually do not suffer. But all sparse or starting flows get the same treatment, so this is nothing ACK-specific, just something that typically works well.
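That sparse-flow boost can be sketched in a heavily simplified form. This is not the kernel's fq_codel code (it ignores byte quantums, hashing, and CoDel itself); it only shows the new-flows-before-old-flows scheduling that lets a trickle of ACKs jump ahead of bulk traffic:

```python
from collections import deque

# Flows with no backlog re-enter a "new flows" list that is always served
# before the "old flows" list, so a sparse ACK stream rarely waits behind
# a backlogged bulk flow.
class SparseBoostScheduler:
    def __init__(self):
        self.queues = {}            # flow -> deque of packets
        self.new_flows = deque()    # served first: the sparse-flow boost
        self.old_flows = deque()    # flows that kept a backlog

    def enqueue(self, flow, pkt):
        if flow not in self.queues or not self.queues[flow]:
            if flow not in self.new_flows and flow not in self.old_flows:
                self.new_flows.append(flow)   # empty flow gets the boost
        self.queues.setdefault(flow, deque()).append(pkt)

    def dequeue(self):
        lst = self.new_flows or self.old_flows
        if not lst:
            return None
        flow = lst.popleft()
        pkt = self.queues[flow].popleft()
        if self.queues[flow]:
            self.old_flows.append(flow)       # backlogged flow loses the boost
        return pkt

s = SparseBoostScheduler()
for i in range(3):
    s.enqueue("bulk", f"data{i}")    # backlogged bulk flow arrives first
s.enqueue("acks", "ack0")            # sparse ACK flow arrives last
assert s.dequeue() == "data0"        # bulk serves once, then goes "old"
assert s.dequeue() == "ack0"         # the ACK jumps ahead of data1/data2
```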

> One case where it could make a difference is where ACKs make up a sizable portion of the upload bandwidth if download is saturated. An example would be a hyper-asymmetrical connection like 60/3.

	Okay, I will bite: at one ACK for every 2 full MSS sent, ACKs take up roughly 2% of the reverse traffic, so at 60 Mb/s down the ACKs will cost 60*0.02 = 1.2 Mb/s, or 100*1.2/3 = 40% of the uplink, and I agree that this will most likely cause problems for people using smaller priority queues. It is also one reason why I have read recommendations to size priority classes equally (percentage-wise) for up- and download, and to make sure ACKs get the same priority as the reverse data...
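Spelled out, the arithmetic above checks out (the 2% figure is itself an approximation: roughly one ~60-byte ACK for every two ~1500-byte data packets, i.e. about 60/3000 of the forward byte count):

```python
# Back-of-the-envelope check of the 60/3 example above.
down_mbps = 60.0
up_mbps = 3.0
ack_fraction_of_forward = 0.02      # ACK bytes per forward data byte, approx.

ack_mbps = down_mbps * ack_fraction_of_forward      # 1.2 Mb/s of ACKs
ack_share_of_uplink = 100 * ack_mbps / up_mbps      # 40% of the 3 Mb/s up

assert abs(ack_mbps - 1.2) < 1e-9
assert abs(ack_share_of_uplink - 40.0) < 1e-9
```

So a 20% priority queue on the 3 Mb/s uplink cannot even hold the ACKs of a saturated 60 Mb/s download.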

> In this case, having the ACKs in a separate queue would actually do the opposite, it would put an upper bound on how much bandwidth ACKs could consume unless you gave your ACK queue nearly all of the bandwidth.

	Well, as stated above, if your incoming traffic class allows 100% bandwidth use, the corresponding outgoing class had better allow the same percentage… So I still think not putting ACKs in higher priorities gives better overall system balance. BUT I have no experience with P2P traffic, so things could be different there. I would hope to be able to tell my P2P apps to mark their packets as background priority, so the data and ACKs would move out of the way of more urgent traffic. I understand that you have actual P2P experience, so take my input with a grain of salt, please...

Best Regards
	Sebastian

> 
> In the two examples that I could quickly think of, either it made little difference or it did the opposite of what you proposed. Of course it depends on the configuration of the shaping.
> 
> Minor side tangent about bloat and ACKs that doesn't really need a discussion
> 
> A few months back I was downloading some Linux torrents when I noticed I was getting packetloss. I pretty much never get packetloss. I also noticed that I was receiving a fluctuating 100Mb/s-103Mb/s of ingress on my WAN but only seeing about 80Mb/s of egress on my LAN. I did a packet capture and saw something funny. My WAN was sending out a lot of Dup ACK packets.
> I grabbed a few of the dest IPs that my firewall was sending to and did a traceroute. Nice low pings for most of the path, then suddenly about 2-3 hops before the seeder, inside their ISP's network, I started to see huge jitter and pings, in the 1sec-3sec range. Cox and Comcast were the two main offenders but they do represent a sizable portion of the population.
> My conclusion was bufferbloat was so bad on their networks that the seeders were not getting ACKs from me in a timely fashion, so they resent the segments assuming they were lost. This issue was so bad that about 20Mb/s of the 100Mb/s was duplicate data segments. I was being DDoS'd by bufferbloat.
> 
> P.S. I removed the emails because I was not absolutely sure if they would be scrubbed.



* Re: [Bloat] Bloat done correctly?
  2015-06-12 17:51     ` Sebastian Moeller
@ 2015-06-12 18:44       ` Benjamin Cronce
  0 siblings, 0 replies; 12+ messages in thread
From: Benjamin Cronce @ 2015-06-12 18:44 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 6648 bytes --]

> Hi Benjamin,
>
> On Jun 12, 2015, at 17:33 , Benjamin Cronce <bcronce at gmail.com> wrote:
>
> > > On Fri, Jun 12, 2015 at 4:08 AM, Sebastian Moeller wrote:
> > > Hi Benjamin,
> > >
> > > To go off onto a tangent:
> > >
> > > On Jun 12, 2015, at 06:45 , Benjamin Cronce wrote:
> > >
> > > > [...]
> > > > Under load while doing P2P(About 80Mb down and 20Mb up just as I
started the test)
> > > > HFSC: P2P in 20% queue and 80/443/8080 in 40% queue with ACKs going
to a 20% realtime queue
> > > > http://www.dslreports.com/speedtest/622452
> > >
> > >         I know this is not really your question, but I think the ACKs
should go into the same queue as the matching data packets. Think about it
that way, if the data is delayed due to congestion it does not make too
much sense to tell the sender to send more faster (which essentially is
what ACK prioritization does) as that will not really reduce the congestion
but rather increase it.
> > >         There is one caveat though: when ECN is used it might make
sense to send out the ACK that will signal the congestion state back to the
sender faster… So if you prioritize ACKs only select those with an ECN-Echo
flag ;)
> > >         @bloat : What do you all think about this refined ACK
prioritization scheme?
> > >
> > > Best Regards
> > >         Sebastian
> >
> > Here's a very real problem for many users. If you have a highly
asymmetrical connection like what many DSL and cable users have, doing even
mild uploading can consume enough bandwidth to affect your ability to
upload ACKs, which can negatively affect your downloads.
> > A regular offender to many is P2P. If you're uploading while
downloading on a 30/3 connection, you may not be able to ACK data fast
enough. Of course this is in and of itself mostly an issue caused by
bufferbloat and/or lack of fair queueing.
>
> I agree, but I think reducing buffer bloat and using fair queueing is
superior to ACK prioritization, that’s all.

I wholly agree. A separate ACK queue could help in some very specific cases
where a trade-off is needed, but letting an AQM do its job is probably
better because it should "just work".
I will do away with my ACK queue once fq_codel comes to PFSense, but until
then, I want to make sure my ACKs are not getting stuck behind some burst
of traffic.

>
> >
> > I guess the real question is, should ACKs be prioritized in a system
that does not exhibit bufferbloat or has fair queuing that allows the ACKs
to not feel the effect of other bandwidth being consumed. With no data to
back this up, my first guess would be it will not matter. If you have no
bufferbloat or you have fair queuing, the ACKs should effectively be sent
nearly immediately.
>
> Actually fq_codel mildly boosts sparse flows, and since ACKs typically are
sparse they usually do not suffer. But all sparse or starting flows get the
same treatment, so nothing ACK specific just something that typically works
well.
>
> > One case where it could make a difference is where ACKs make up a
sizable portion of the upload bandwidth if download is saturated. An
example would be a hyper-asymmetrical connection like 60/3.
>
> Okay, I will bite: at one ACK for every 2 full MSS sent, ACKs take up roughly
2% of the reverse traffic, so at 60 the ACKs will cost 60*0.02 = 1.2 or
100*1.2/3 = 40%, and I agree that this most likely will cause problems for
people using smaller priority queues. It is also one reason why I have read
recommendations to size priority classes equally (percentage wise) for up-
and download, and make sure ACKs get the same priority as the reverse
data...
>
> > In this case, having the ACKs in a separate queue would actually do the
opposite, it would put an upper bound on how much bandwidth ACKs could
consume unless you gave your ACK queue nearly all of the bandwidth.
>
> Well, as stated above if your incoming traffic class allows 100%
bandwidth use the corresponding outgoing class would better allow the same
percentage… So I still think not putting ACKs in higher priorities gives
better overall system balance. BUT I have no experience with P2P traffic,
so things could be different there. I would hope to be able to tell my P2P
apps to mark their packets as background priority, so the data and ACKs
would move out of the way of more urgent traffic. I understand that you
have actual P2P experience so take my input with a grain of salt, please...
>
> Best Regards
> Sebastian

My P2P experience is limited at best. I'm on a dedicated symmetrical
connection and my ISP already has anti-bufferbloat stuff implemented. Most
of what I hear about using separate ACK queues is entirely because of
bufferbloat: you don't want your ACKs stuck behind 1sec of bloat or getting
dropped.
Until bufferbloat is mostly solved, there probably won't be enough user
stories to make a case one way or the other about how modern AQMs will
affect ACKs under a wide range of loads, technologies, topologies, and
bandwidths. Don't fix what ain't broken and KISS. If you have access to a
reliable AQM, don't use an ACK queue unless you have proof that it'll help.

>
> >
> > In the two examples that I could quickly think of, either it made
little difference or it did the opposite of what you proposed. Of course
it depends on the configuration of the shaping.
> >
> > Minor side tangent about bloat and ACKs that doesn't really need a
discussion
> >
> > A few months back I was downloading some Linux torrents when I noticed
I was getting packetloss. I pretty much never get packetloss. I also
noticed that I was receiving a fluctuating 100Mb/s-103Mb/s of ingress on my
WAN but only seeing about 80Mb/s of egress on my LAN. I did a packet
capture and saw something funny. My WAN was sending out a lot of Dup ACK
packets.
> > I grabbed a few of the dest IPs that my firewall was sending to and did
a traceroute. Nice low pings for most of the path, then suddenly about 2-3
hops before the seeder, inside their ISP's network, I started to see huge
jitter and pings, in the 1sec-3sec range. Cox and Comcast were the two main
offenders but they do represent a sizable portion of the population.
> > My conclusion was bufferbloat was so bad on their networks that the
seeders were not getting ACKs from me in a timely fashion, so they resent
the segments assuming they were lost. This issue was so bad that about
20Mb/s of the 100Mb/s was duplicate data segments. I was being DDoS'd by
bufferbloat.
> >
> > P.S. I removed the emails because I was not absolutely sure if they
would be scrubbed.



* Re: [Bloat] Bloat done correctly?
  2015-06-12  9:08 ` Sebastian Moeller
  2015-06-12 15:33   ` Benjamin Cronce
@ 2015-06-12 18:51   ` Alex Elsayed
  2015-06-12 19:14     ` Jonathan Morton
  2015-06-12 19:21     ` Sebastian Moeller
  1 sibling, 2 replies; 12+ messages in thread
From: Alex Elsayed @ 2015-06-12 18:51 UTC (permalink / raw)
  To: bloat

Sebastian Moeller wrote:

> Hi Benjamin,
> 
> To go off onto a tangent:
> 
> On Jun 12, 2015, at 06:45 , Benjamin Cronce
> <bcronce@gmail.com> wrote:
> 
>> [...]
>> Under load while doing P2P(About 80Mb down and 20Mb up just as I started
>> the test) HFSC: P2P in 20% queue and 80/443/8080 in 40% queue with ACKs
>> going to a 20% realtime queue http://www.dslreports.com/speedtest/622452
> 
> I know this is not really your question, but I think the ACKs should go
> into the same queue as the matching data packets. Think about it that way,
> if the data is delayed due to congestion it does not make too much sense
> to tell the sender to send more faster (which essentially is what ACK
> prioritization does) as that will not really reduce the congestion but
> rather increase it. There is one caveat though: when ECN is used it might
> make sense to send out the ACK that will signal the congestion state back
> to the sender faster… So if you prioritize ACKs only select those with an
> ECN-Echo flag ;) @bloat : What do you all think about this refined ACK
> prioritization scheme?

I'd say that this is wrongly attempting to bind upstream congestion to 
downstream congestion.

Let's have two endpoints, A and B. There exists a stream sent from A towards 
B.

If A does not receive an ack from B in a timely manner, it draws an
inference as to the congestion on the path _towards_ B. Prioritizing acks
from B to A thus makes this _more accurate to reality_ - a lost ack (rather
than the absence of an ack due to a lost packet) actually acts as
misinformation to the sender, causing it to

1.) back off sending when the sending channel is not congested, and
2.) resend a packet that _already arrived_.

The latter point is a big one: prioritized ACKs (may) reduce spurious
resends, especially on asymmetric connections - and spurious resends are
pure network inefficiency, especially since the data packets are likely far
larger than the ACKs. Which would _also_ get resent.



* Re: [Bloat] Bloat done correctly?
  2015-06-12 18:51   ` Alex Elsayed
@ 2015-06-12 19:14     ` Jonathan Morton
  2015-06-12 19:54       ` Sebastian Moeller
  2015-06-12 19:21     ` Sebastian Moeller
  1 sibling, 1 reply; 12+ messages in thread
From: Jonathan Morton @ 2015-06-12 19:14 UTC (permalink / raw)
  To: Alex Elsayed; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1409 bytes --]

We have a test in Flent which tries to exercise this case: 50 flows in one
direction and 1 in the other, all TCP. Where the 50 flows are on the narrow
side of an asymmetric link, it is possible to see just what happens when
there isn't enough bandwidth for the acks of the single opposing flow.

What I see is that acks behave like an unresponsive flow in themselves, but
one that is reasonably tolerant to loss (more so than to delay). On a
standard AQM, the many flows end up yielding to the acks; on a
flow-isolating AQM, the acks are restricted to a fair (1/51) share, but
enough of them are dropped to (eventually) let the opposing flow get most
of the available bandwidth on its side. But on an FQ without AQM, acks
don't get dropped, so they get delayed instead, and the opposing flow will
be ack-clocked to a limited bandwidth until the ack queue overflows.
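The delayed-versus-dropped contrast can be put in numbers with a toy model. All figures here are assumptions, not Flent results: a 1460-byte MSS, 64-byte ACK frames, one ACK generated per two segments, and an ACK budget of 0.1 Mb/s (roughly a 1/51 fair share of a ~5 Mb/s narrow side):

```python
# Toy model: an ACK queue with a fixed bandwidth budget clocks the
# opposing flow. Because TCP ACKs are cumulative, *dropping* ACKs lets
# each survivor cover more segments; *delaying* them does not.
MSS = 1460          # bytes per data segment (assumed)
ACK_WIRE = 64       # bytes per ACK frame (assumed)
SEGS_PER_ACK = 2    # segments covered by each generated ACK (assumed)

def ceiling_mbps(ack_budget_mbps: float, covered_segs_per_ack: float) -> float:
    """Download rate clocked by ACKs leaving the queue at the budget rate."""
    acks_per_s = ack_budget_mbps * 1e6 / 8 / ACK_WIRE
    return acks_per_s * covered_segs_per_ack * MSS * 8 / 1e6

# FQ without AQM: ACKs are delayed, never dropped, so each ACK still
# covers only 2 segments and the download is pinned to the fair share.
pinned = ceiling_mbps(0.1, SEGS_PER_ACK)

# FQ + AQM: half the generated ACKs are dropped; each survivor now
# cumulatively covers 4 segments, so the same budget clocks twice the data.
recovered = ceiling_mbps(0.1, SEGS_PER_ACK / (1 - 0.5))
assert recovered > pinned
```

Under these assumptions the no-AQM case pins the opposing flow near 4.6 Mb/s, while dropping ACKs lets it climb, which matches the behaviour described above.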

Cake ends up causing odd behaviour this way. I have a suspicion about why
one of the weirder effects shows up - it has to get so aggressive about
dropping acks that the count variable for that queue wraps around.
Implementing saturating arithmetic there might help.

There is a proposed TCP extension for ack congestion control, which allows
the ack ratio to be varied in response to ack losses. This would be a
cleaner way to achieve the same effect, and would allow enabling ECN on the
acks, but it's highly experimental.

- Jonathan Morton



* Re: [Bloat] Bloat done correctly?
  2015-06-12 18:51   ` Alex Elsayed
  2015-06-12 19:14     ` Jonathan Morton
@ 2015-06-12 19:21     ` Sebastian Moeller
  2015-06-12 22:56       ` Alex Elsayed
  1 sibling, 1 reply; 12+ messages in thread
From: Sebastian Moeller @ 2015-06-12 19:21 UTC (permalink / raw)
  To: Alex Elsayed; +Cc: bloat

Hi Alex,

On Jun 12, 2015, at 20:51 , Alex Elsayed <eternaleye@gmail.com> wrote:

> Sebastian Moeller wrote:
> 
>> Hi Benjamin,
>> 
>> To go off onto a tangent:
>> 
>> On Jun 12, 2015, at 06:45 , Benjamin Cronce
>> <bcronce@gmail.com> wrote:
>> 
>>> [...]
>>> Under load while doing P2P(About 80Mb down and 20Mb up just as I started
>>> the test) HFSC: P2P in 20% queue and 80/443/8080 in 40% queue with ACKs
>>> going to a 20% realtime queue http://www.dslreports.com/speedtest/622452
>> 
>> I know this is not really your question, but I think the ACKs should go
>> into the same queue as the matching data packets. Think about it that way,
>> if the data is delayed due to congestion it does not make too much sense
>> to tell the sender to send more faster (which essentially is what ACK
>> prioritization does) as that will not really reduce the congestion but
>> rather increase it. There is one caveat though: when ECN is used it might
>> make sense to send out the ACK that will signal the congestion state back
>> to the sender faster… So if you prioritize ACKs only select those with an
>> ECN-Echo flag ;) @bloat : What do you all think about this refined ACK
>> prioritization scheme?
> 
> I'd say that this is wrongly attempting to bind upstream congestion to 
> downstream congestion.
> 
> Let's have two endpoints, A and B. There exists a stream sent from A towards 
> B.
> 
> If A does not receive an ack from B in a timely manner, it draws inference 
> as to the congestion on the path _towards_ B. Prioritizing acks from B to A 
> thus makes this _more accurate to reality_ - a lost ack (rather than the 
> absence of an ack due to a lost packet) actually behaves as misinformation 
> to the sender, causing them to

	My silent assumption was that we are talking about a debloated access link; after all, this is the bloat list and we think we have solved most of that problem. So there is no major congestion on the part of the uplink where prioritization would work (the home router’s egress interface), and hence no misinformation there. As I stated in another mail to Benjamin, instead of ACK prioritization I would de-bloat the access link ;)
	I should add that the currently recommended solutions, shaper+fq_codel or cake, both give some precedence to sparse flows, which does boost small packets like ACKs (until there are too many competing sparse flows, at which point all flows are treated as non-sparse; both AQMs also, IIRC, preferentially drop/mark packets from large flows, so ACKs will still get some love on upstream congestion).

> 
> 1.) back off sending when the sending channel is not congested and
> 2.) resend a packet that _already arrived_.

	But TCP ACKs are cumulative, so the information from a lost ACK is also included in the next one; you would need to lose a whole stretch of ACKs before your scenario becomes relevant, no?
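The cumulative-ACK point can be illustrated with a toy example (illustrative only, not from the thread):

```python
# Cumulative ACKs make isolated ACK losses harmless to the sender:
# each ACK acknowledges every byte before its number, so a later ACK
# carries the information of any earlier ones that were lost.
def highest_acked(ack_numbers, lost_indices):
    """Highest byte acknowledged at the sender after some ACKs are lost."""
    delivered = [a for i, a in enumerate(ack_numbers) if i not in lost_indices]
    return max(delivered, default=0)

acks = [2920, 5840, 8760, 11680]   # one ACK per two 1460-byte segments

# Losing one ACK in the middle changes nothing, as long as a later one
# gets through; only losing a whole trailing stretch stalls progress.
assert highest_acked(acks, {1}) == 11680
assert highest_acked(acks, {1, 2, 3}) == 2920
```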

> 
> The latter point is a big one: Prioritized ACKs (may) reduce spurious 
> resends, especially on asymmetric connections - and suprious resends are 
> pure network inefficiency. Especially since the data packets are likely far 
> larger than the ACKs. Which would _also_ get resent.

	But for the spurious resends you either need to drop several ACKs in a row or delay the ACKs long enough that the RTO triggers; both are situations I would recommend avoiding anyway ;) So I am still not convinced by the ACK priority rationale, assuming a de-bloated access link. If you have enough control over the link to implement ACK games, I believe you are better off de-bloating it more thoroughly… Again, not an expert, just a layman’s opinion.

Best Regards
	Sebastian





* Re: [Bloat] Bloat done correctly?
  2015-06-12 19:14     ` Jonathan Morton
@ 2015-06-12 19:54       ` Sebastian Moeller
  2015-06-12 21:19         ` Benjamin Cronce
  0 siblings, 1 reply; 12+ messages in thread
From: Sebastian Moeller @ 2015-06-12 19:54 UTC (permalink / raw)
  To: Jonathan Morton, Alex Elsayed; +Cc: bloat

Hi Jonathan,

On June 12, 2015 9:14:02 PM GMT+02:00, Jonathan Morton <chromatix99@gmail.com> wrote:
>We have a test in Flent which tries to exercise this case: 50 flows in
>one
>direction and 1 in the other, all TCP. Where the 50 flows are on the
>narrow
>side of an asymmetric link, it is possible to see just what happens
>when
>there isn't enough bandwidth for the acks of the single opposing flow.
>
>What I see is that acks behave like an unresponsive flow in themselves,
>but
>one that is reasonably tolerant to loss (more so than to delay). On a
>standard AQM, the many flows end up yielding to the acks; on a
>flow-isolating AQM, the acks are restricted to a fair (1/51) share, but
>enough of them are dropped to (eventually) let the opposing flow get
>most
>of the available bandwidth on its side. But on an FQ without AQM, acks
>don't get dropped so they get delayed instead, and the opposing flow
>will
>be ack clocked to a limited bandwidth until the ack queue overflows.
>
>Cake ends up causing odd behaviour this way. I have a suspicion about
>why
>one of the weirder effects shows up - it has to get so aggressive about
>dropping acks that the count variable for that queue wraps around.
>Implementing saturating arithmetic there might help.
>
>There is a proposed TCP extension for ack congestion control, which
>allows
>the ack ratio to be varied in response to ack losses. This would be a
>cleaner way to achieve the same effect, and would allow enabling ECN on
>the
>acks, but it's highly experimental.

       This reduces the ACK rate to make losses less likely, but at the same time it makes a single loss more costly, so whether this is a win depends on whether the sparser ACK flow has a much higher probability of passing through the congested link. I wonder what percentage of an ACK flow can be dropped without slowing the sender?
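One rough way to frame that question, as an illustrative model with assumed numbers rather than measured data: the sender keeps its rate as long as each surviving cumulative ACK releases a send burst that still fits comfortably inside the congestion window (and arrives well before the RTO).

```python
# Model: with ACK drop fraction p, each surviving cumulative ACK covers
# SEGS_PER_ACK / (1 - p) segments and releases a burst of that size.
# Thinning is "free" until those bursts approach a chosen fraction of cwnd.
SEGS_PER_ACK = 2   # segments covered per generated ACK (assumed)

def max_ack_drop_fraction(cwnd_segments: int,
                          burst_limit_fraction: float = 0.25) -> float:
    """Largest drop fraction p such that the per-ACK burst,
    SEGS_PER_ACK / (1 - p), stays under burst_limit_fraction of cwnd."""
    max_burst = cwnd_segments * burst_limit_fraction
    return 1 - SEGS_PER_ACK / max_burst

# With cwnd = 80 segments and bursts capped at a quarter of cwnd, the
# model tolerates dropping ~90% of the ACK flow before the clock gets lumpy.
assert abs(max_ack_drop_fraction(80) - 0.9) < 1e-9
```

The burst-limit fraction is an arbitrary knob of this sketch; the qualitative answer is just that large windows tolerate very aggressive ACK thinning.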

>
>- Jonathan Morton
>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [Bloat] Bloat done correctly?
  2015-06-12 19:54       ` Sebastian Moeller
@ 2015-06-12 21:19         ` Benjamin Cronce
  0 siblings, 0 replies; 12+ messages in thread
From: Benjamin Cronce @ 2015-06-12 21:19 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: Jonathan Morton, Alex Elsayed, bloat

[-- Attachment #1: Type: text/plain, Size: 3961 bytes --]

> Hi Jonathan,
>
> On June 12, 2015 9:14:02 PM GMT+02:00, Jonathan Morton <chromatix99 at
gmail.com> wrote:
> >We have a test in Flent which tries to exercise this case: 50 flows in
one
> >direction and 1 in the other, all TCP. Where the 50 flows are on the
narrow
> >side of an asymmetric link, it is possible to see just what happens when
> >there isn't enough bandwidth for the acks of the single opposing flow.
> >
> >What I see is that acks behave like an unresponsive flow in themselves,
but
> >one that is reasonably tolerant to loss (more so than to delay). On a
> >standard AQM, the many flows end up yielding to the acks; on a
> >flow-isolating AQM, the acks are restricted to a fair (1/51) share, but
> >enough of them are dropped to (eventually) let the opposing flow get most
> >of the available bandwidth on its side. But on an FQ without AQM, acks
> >don't get dropped so they get delayed instead, and the opposing flow will
> >be ack clocked to a limited bandwidth until the ack queue overflows.
> >
> >Cake ends up causing odd behaviour this way. I have a suspicion about why
> >one of the weirder effects shows up - it has to get so aggressive about
> >dropping acks that the count variable for that queue wraps around.
> >Implementing saturating arithmetic there might help.
> >
> >There is a proposed TCP extension for ack congestion control, which
allows
> >the ack ratio to be varied in response to ack losses. This would be a
> >cleaner way to achieve the same effect, and would allow enabling ECN on
the
> >acks, but it's highly experimental.
>
>        This reduces the ACK rate to make losses less likely, but at the
same time it makes each single loss more costly, so whether this is a win
depends on whether the sparser ACK flow has a much higher probability of
passing through the congested link. I wonder what percentage of an ACK flow
can be dropped without slowing the sender?
>
> >
> >- Jonathan Morton

I also question the general usefulness of sparser ACK flows just to
accommodate hyper-asymmetrical connections. The main causes of these issues
are old DSL technologies or DOCSIS rollouts that haven't fully switched to
DOCSIS 3. Many cable companies claim to be DOCSIS 3 because their downstream
is 3.0 while their upstream is still 2.0, because 3.0 upstream requires old
line-filters to be replaced with newer ones that open up some low-frequency
channels. Once these channels get opened up, assuming fiber doesn't get
there first, there will be a lot more available upstream bandwidth, assuming
the ISPs provision it.

Modern OSs already allow Nagle to be toggled on/off for a given TCP flow.
Maybe the algorithm could be changed to detect TCP RTTs above a threshold,
or certain patterns in lost ACKs, and respond by increasing the number of
ACKs the delayed-ACK mechanism coalesces. My understanding is that it takes
two parameters, a window and a max count; I think the default window is
something like 100ms and the default max is two coalesced ACKs, but you can
modify either value. So technically ACK rates can already be modified; it's
just not done dynamically, but the feature already exists. Instead of making
further changes to TCP, educate people on how to change their TCP settings?
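
[For reference — my sketch, not from the thread: the per-socket knobs that
actually exist today. TCP_NODELAY disables Nagle on the send side;
TCP_QUICKACK (Linux-only, and reset by the stack after some events, so it
must be re-armed) suppresses delayed ACKs on the receive side. The
delayed-ACK window and coalescing count themselves are system-wide stack
tunables rather than parameters of Nagle.]

```python
import socket

# Sketch of the existing per-flow controls (assumptions noted above).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # disable Nagle
if hasattr(socket, "TCP_QUICKACK"):                       # Linux only
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```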

I could see this causing strange issues at really low RTTs, where the RTT
is lower than the delayed-ACK window, combined with a small receive window.
Because TCP implementations have a minimum of 2 outstanding segments, that
matches the default of combining two ACKs. If you suddenly decided to
combine 3 segments' worth, two segments get sent, the other side receives
them but does not ACK because it's waiting for a 3rd, and the sender does
not send any more segments because it's waiting for an ACK. You suddenly
get these strange pulses paced by the delayed-ACK window.

Because the default coalescing matches the minimum outstanding segments
perfectly, this corner case does not exist today. But I'll leave this to
people more knowledgeable than me; just thinking out loud here.
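
[A toy discrete model of that stall — my construction, one step per RTT,
with the delayed-ACK timer measured in whole RTTs: when the window matches
the coalescing count the ACK clock never pauses, but coalescing one more
ACK than the window allows cuts throughput to the timer rate, producing
exactly the pulsing described above.]

```python
# Toy model (not a real TCP stack): sender keeps at most `cwnd` unacked
# segments; receiver ACKs only once `coalesce` segments are pending, or
# when a delayed-ACK timer of `timer_rtts` round trips expires.
def throughput(cwnd=2, coalesce=2, timer_rtts=4, rtts=1000):
    """Average segments delivered per RTT."""
    inflight = pending = waited = delivered = 0
    for _ in range(rtts):
        sent = cwnd - inflight            # window-limited sends this RTT
        inflight += sent
        pending += sent
        if pending >= coalesce or (pending and waited >= timer_rtts):
            inflight -= pending           # ACK frees the whole batch
            delivered += pending
            pending = waited = 0
        elif pending:
            waited += 1                   # partial batch: timer keeps ticking
    return delivered / rtts

smooth = throughput(cwnd=2, coalesce=2)   # ACK clock never stalls
pulsed = throughput(cwnd=2, coalesce=3)   # stalls until the timer fires
```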



* Re: [Bloat] Bloat done correctly?
  2015-06-12 19:21     ` Sebastian Moeller
@ 2015-06-12 22:56       ` Alex Elsayed
  2015-06-13  7:13         ` Sebastian Moeller
  0 siblings, 1 reply; 12+ messages in thread
From: Alex Elsayed @ 2015-06-12 22:56 UTC (permalink / raw)
  To: bloat

Sebastian Moeller wrote:

> Hi Alex,
> 
> On Jun 12, 2015, at 20:51 , Alex Elsayed
> <eternaleye@gmail.com> wrote:
> 
>> Sebastian Moeller wrote:
>> 
>>> Hi Benjamin,
>>> 
>>> To go off onto a tangent:
>>> 
>>> On Jun 12, 2015, at 06:45 , Benjamin Cronce
>>> <bcronce@gmail.com> wrote:
>>> 
>>>> [...]
>>>> Under load while doing P2P(About 80Mb down and 20Mb up just as I
>>>> started the test) HFSC: P2P in 20% queue and 80/443/8080 in 40% queue
>>>> with ACKs going to a 20% realtime queue
>>>> http://www.dslreports.com/speedtest/622452
>>> 
>>> I know this is not really your question, but I think the ACKs should go
>>> into the same queue as the matching data packets. Think about it this
>>> way: if the data is delayed due to congestion, it does not make much
>>> sense to tell the sender to send more, faster (which is essentially what
>>> ACK prioritization does) as that will not really reduce the congestion
>>> but rather increase it. There is one caveat though: when ECN is used it
>>> might make sense to send out the ACK that will signal the congestion
>>> state back to the sender faster… So if you prioritize ACKs only select
>>> those with an ECN-Echo flag ;) @bloat : What do you all think about this
>>> refined ACK prioritization scheme?
>> 
>> I'd say that this is wrongly attempting to bind upstream congestion to
>> downstream congestion.
>> 
>> Let's have two endpoints, A and B. There exists a stream sent from A
>> towards B.
>> 
>> If A does not receive an ack from B in a timely manner, it draws
>> inference as to the congestion on the path _towards_ B. Prioritizing acks
>> from B to A thus makes this _more accurate to reality_ - a lost ack
>> (rather than the absence of an ack due to a lost packet) actually behaves
>> as misinformation to the sender, causing them to
> 
> So my silent assumption was that we talk about a debloated access link,
> after all this is the bloat list and we think we have solved most of that
> problem. So there is no major congestion on the part of the uplink where
> prioritization would work (the home router’s egress interface), so not
> misinformation there. As I stated in another mail to Benjamin, instead of
> ACK prioritization I would de-bloat the access link ;) I add that the
> currently recommended solutions, shaper+fq_codel or cake, both give some
> precedence to sparse flows, which does boost small packets like ACKs
> (until there are too many competing sparse flows, then all flows are
> treated as non-sparse; both AQMs also IIRC preferentially drop/mark
> packets from large flows, so ACKs will still get some love on upstream
> congestion).

Sure, the access link is debloated. But there's also the remote downlink, 
etc. Our access link may not be the bottleneck for the ACKs.

And yes, boosting sparse flows is likely a more beneficial behavior than 
prioritizing ACKs specifically (especially on deeply asymmetric links)

>> 
>> 1.) back off sending when the sending channel is not congested and
>> 2.) resend a packet that _already arrived_.
> 
> But TCP ACKs are cumulative, so the information from a lost ACK is also
> included in the next; you need to lose a stretch of ACKs before your
> scenario becomes relevant, no?

Sure, though again the local access link isn't the only possible source of 
congestion.

>> 
>> The latter point is a big one: Prioritized ACKs (may) reduce spurious
>> resends, especially on asymmetric connections - and suprious resends are
>> pure network inefficiency. Especially since the data packets are likely
>> far larger than the ACKs. Which would _also_ get resent.
> 
> But for the spurious resends you either need to drop several ACKs in a
> row or delay the ACKs long enough that the RTO triggers; both are
> situations I would recommend avoiding anyway ;) So I am still not
> convinced on the ACK priority rationale, assuming a de-bloated access
> link. If you have enough control over the link to implement ACK games, I
> believe you are better off de-bloating it more thoroughly… Again not an
> expert just a layman’s opinion.

Sure, debloating more thoroughly is the best solution. It's just a nonlocal 
solution, and until debloating conquers the world, local solutions have a 
place.

> Best Regards
> Sebastian
> 
> 
>> 
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat




* Re: [Bloat] Bloat done correctly?
  2015-06-12 22:56       ` Alex Elsayed
@ 2015-06-13  7:13         ` Sebastian Moeller
  0 siblings, 0 replies; 12+ messages in thread
From: Sebastian Moeller @ 2015-06-13  7:13 UTC (permalink / raw)
  To: Alex Elsayed; +Cc: bloat

Hi Alex,

On Jun 13, 2015, at 00:56 , Alex Elsayed <eternaleye@gmail.com> wrote:

> [...]
> Sure, the access link is debloated. But there's also the remote downlink, 
> etc. Our access link may not be the bottleneck for the ACKs.

	I am confused now; prioritizing ACKs will only work (reliably) on the access link (for normal end users, business contracts might be different), and the ISP will either ignore the markings or re-map them to zero. Often enough the access link really is the relevant bottleneck, but you are right that there are a number of situations where the congestion is upstream and neither de-bloating the home link nor prioritizing ACKs in the home network will help.


> 
> And yes, boosting sparse flows is likely a more beneficial behavior than 
> prioritizing ACKs specifically (especially on deeply asymmetric links)
> 
>>> 
>>> 1.) back off sending when the sending channel is not congested and
>>> 2.) resend a packet that _already arrived_.
>> 
>> But TCP ACKs are cumulative, so the information from a lost ACK is also
>> included in the next; you need to lose a stretch of ACKs before your
>> scenario becomes relevant, no?
> 
> Sure, though again the local access link isn't the only possible source of 
> congestion.

	But isn’t it the only link that is sufficiently under our control to allow us to implement remedies?

> [...]
> 
> Sure, debloating more thoroughly is the best solution. It's just a nonlocal 
> solution, and until debloating conquers the world, local solutions have a 
> place.

	So currently the observation is that, for most situations, even a 1-tier shaper+fq_codel setup as implemented in sqm-scripts fights access-link bufferbloat efficiently enough that ACK-prioritization no longer seems needed or recommended (well, unless your router only allows playing ACK games but does not offer flow fair queueing).

Best Regards
	Sebastian



end of thread, other threads:[~2015-06-13  7:13 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-06-12  4:45 [Bloat] Bloat done correctly? Benjamin Cronce
2015-06-12  9:08 ` Sebastian Moeller
2015-06-12 15:33   ` Benjamin Cronce
2015-06-12 17:51     ` Sebastian Moeller
2015-06-12 18:44       ` Benjamin Cronce
2015-06-12 18:51   ` Alex Elsayed
2015-06-12 19:14     ` Jonathan Morton
2015-06-12 19:54       ` Sebastian Moeller
2015-06-12 21:19         ` Benjamin Cronce
2015-06-12 19:21     ` Sebastian Moeller
2015-06-12 22:56       ` Alex Elsayed
2015-06-13  7:13         ` Sebastian Moeller
