<p dir="ltr">I do see your arguments. Let it be known that I didn't initiate the ack-filter in Cake, though it does seem to work quite well.</p>
<p dir="ltr">With respect to BBR, I don't think it depends strongly on the return rate of acks in themselves, but rather on the rate of sequence number advance that they indicate. For this purpose, having the receiver emit sparser but still regularly spaced acks would be better than having some middlebox delete some less-predictable subset of them. So I think BBR could be a good testbed for AckCC implementation, especially as it is inherently paced and thus doesn't suffer from burstiness as a conventional ack-clocked TCP might.</p>
<p dir="ltr">The real trouble with AckCC is that it requires implementation on the client as well as the server. That's most likely why Google hasn't tried it yet; there are no receivers in the wild that would give them valid data on its effectiveness. Adding support in Linux would help here, but aside from Android devices, Linux is only a relatively small proportion of Google's client traffic - and Android devices are slow to pick up new kernel features if they can't immediately turn it into a consumer-friendly bullet point.</p>
<p dir="ltr">Meanwhile we have highly asymmetric last-mile links (10:1 is typical, 50:1 is occasionally seen), where a large fraction of upload bandwidth is occupied by acks in order to fully utilise the download bandwidth in TCP. Any concurrent upload flows have to compete with that dense ack flow, which in various schemes is unfair to either the upload or the download throughput.</p>
<p dir="ltr">That is a problem as soon as you have multiple users on the same link, eg. a family household at the weekend. Thinning out those acks in response to uplink congestion is a solution. Maybe not the best possible solution, but a deployable one that works.</p>
<p dir="ltr"> - Jonathan Morton<br>
</p>