[Bloat] [iccrg] Musings on the future of Internet Congestion Control

David Lang david at lang.hm
Wed Jul 13 11:43:09 EDT 2022


On Wed, 13 Jul 2022, Sebastian Moeller wrote:

> Hi David,
>
>
>> On Jul 12, 2022, at 21:22, David Lang <david at lang.hm> wrote:
>>
>> On Tue, 12 Jul 2022, Sebastian Moeller wrote:
>>
>>> Hi David,
>>>
>>> Thanks!
>>>
>>>> On Jul 12, 2022, at 19:56, David Lang <david at lang.hm> wrote:
>>>>
>>>> On Tue, 12 Jul 2022, Sebastian Moeller via Bloat wrote:
>>>>
>>>>>>>> There are plenty of useful things that they can do and yes, I personally think they’re the way of the future - but **not** in their current form, where they must “lie” to TCP, cause ossification,
>>>>>>>
>>>>>>> 	[SM] Here I happily agree, if we can get the negative side-effects removed that would be great, however is that actually feasible or just desirable?
>>>>>>>> etc. PEPs have never been considered as part of the congestion control design - when they came on the scene, in the IETF, they were despised for breaking the architecture, and then all the trouble with how they need to play tricks was discovered (spoofing IP addresses, making assumptions about header fields, and whatnot). That doesn’t mean that a very different kind of PEP - one which is authenticated and speaks an agreed-upon protocol - couldn’t be a good solution.
>>>>>>>
>>>>>>> 	[SM] Again, I agree it could in theory especially if well-architected.
>>>>>> That’s what I’m advocating.
>>>>>
>>>>> 	[SM] Well, can you give an example of an existing well-architected PEP as proof of principle?
>>>>
>>>> the windows protocols work very poorly over high latency links (i.e. long distance links) and the PEPs that short circuit those protocols make life much nicer for users as well as reducing network traffic.
>>>
>>> 	[SM] Windows protocols, like in microsoft's server message block (smb) protocol or as in "protocols using data windows", like TCP's congestion and receive window?
>>
>> microsoft windows smb
>
> 	[SM2] Thanks!
>
>
>>
>>>> it's a nasty protocol to start with, but it's the reality on the ground and proxies do help a lot.
>>>
>>> 	[SM] Are such proxies located in third party middle boxes/proxies or are these part of microsoft's software suite for enterprises (assuming the first as answer to my question)?
>>
>> third party middle boxes that you put in each office as a proxy.
>>
>> David Lang
>
>
> [SM2] Interesting, I had actually noted that accessing files via my work VPN is a pain (in both windows and macos, as the servers use SMB). My workaround was to use Microsoft's remote desktop (which on my access link feels reasonably snappy) to do most work remotely; it also offers file transfer, so I did all the heavy processing remotely and only exchanged either initial input or final output files, essentially working around the fact that SMB is less than impressive once the RTT goes into the milliseconds range... (However I wonder, with a filesystem essentially being a general purpose database designed for arbitrarily large blobs, how much of that issue is inherent to the problem and how much avoidable pain did microsoft add when designing their protocol?)

It's very much a Microsoft protocol issue. SMB is very chatty, and so very 
sensitive to high latency. The SMB proxies implement a combination of 
caching and bulk queries to eliminate round trips for the user.
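To make the "chatty protocol" cost concrete, here is a toy model (not SMB itself; all numbers, hit ratios, and batch sizes are illustrative assumptions) comparing a protocol that pays one round trip per operation against a proxy that serves repeats from a local cache and batches the misses:

```python
# Toy model: per-operation round trips vs. a caching/batching proxy.
# ops, rtt_ms, cache_hit_ratio, and batch_size are made-up figures.

def chatty_time(ops, rtt_ms):
    """Every operation crosses the WAN: one round trip each."""
    return ops * rtt_ms

def proxied_time(ops, rtt_ms, cache_hit_ratio=0.8, batch_size=10):
    """Cache hits are answered locally (approximately free); misses are
    batched so batch_size operations share a single WAN round trip."""
    misses = ops * (1 - cache_hit_ratio)
    batches = -(-misses // batch_size)   # ceiling division
    return batches * rtt_ms

ops = 1000      # e.g. directory listings, stat calls, small reads
rtt_ms = 80     # a long-distance round-trip time

print(f"chatty : {chatty_time(ops, rtt_ms):.0f} ms")
print(f"proxied: {proxied_time(ops, rtt_ms):.0f} ms")
```

The point of the sketch is only that total time scales with the *number of round trips*, not the bytes moved, which is why a protocol-aware proxy helps so much on high-RTT links.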

Local HTTP and DNS proxy servers do the same thing for their protocols: they 
cache the data that doesn't change and only send the queries that they must 
over the network.
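The caching half of that idea can be sketched in a few lines. This is a minimal illustration, not any real resolver; `fake_upstream`, the TTL value, and the returned address are stand-ins:

```python
import time

# Minimal sketch of a caching resolver, the idea behind a local
# DNS/HTTP proxy: answer repeat queries from memory and only go to
# the network for entries that are missing or expired.

class CachingResolver:
    def __init__(self, upstream_lookup, ttl_seconds=300):
        self.upstream = upstream_lookup
        self.ttl = ttl_seconds
        self.cache = {}               # name -> (answer, expiry time)

    def resolve(self, name):
        entry = self.cache.get(name)
        now = time.monotonic()
        if entry and entry[1] > now:  # fresh cache hit: no round trip
            return entry[0]
        answer = self.upstream(name)  # cache miss: pay the WAN latency
        self.cache[name] = (answer, now + self.ttl)
        return answer

calls = []
def fake_upstream(name):
    calls.append(name)                # count how often we cross the WAN
    return "192.0.2.1"                # placeholder answer (TEST-NET-1)

r = CachingResolver(fake_upstream)
r.resolve("example.com")
r.resolve("example.com")              # second lookup served from cache
print(len(calls))                     # prints 1: only one upstream query
```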

Google is experimenting with similar things for mobile devices, where something 
at the datacenter (with high-speed links and lots of caching available) can 
assemble all the elements of a web page and send the combined result to the 
user.

With bufferbloat, we spend a lot of time focusing on eliminating unnecessary 
latency, but the speed of light imposes a minimum latency on connections, and 
there are benefits to users in further eliminating those round 
trips where possible (and where you can't eliminate them, look at shortening the 
distances).
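The speed-of-light floor is easy to put numbers on. A rough rule of thumb is that light in fiber travels at about two thirds of c, roughly 200 km per millisecond one way; the distances below are illustrative, and real RTTs are higher because of queuing, serialization, and routing detours:

```python
# Back-of-the-envelope lower bound on RTT from propagation delay alone.
# C_FIBER_KM_PER_MS is the ~2/3-of-c approximation for light in fiber.

C_FIBER_KM_PER_MS = 200.0

def min_rtt_ms(distance_km):
    """Lower bound on round-trip time for a given one-way distance."""
    return 2 * distance_km / C_FIBER_KM_PER_MS

for name, km in [("same metro", 50),
                 ("US coast to coast", 4000),
                 ("transatlantic", 6000)]:
    print(f"{name:18s} {min_rtt_ms(km):6.1f} ms minimum RTT")
```

No amount of queue management gets under these floors; the only levers left are eliminating round trips (proxies, batching) or moving the endpoint closer (CDNs, edge caches).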

Good performance proxies are not re-coding images or things like that most of 
the time; that takes significant CPU and requires that the proxy receive the 
entire image before re-processing it. Instead, they are protocol-aware systems 
that work to eliminate round trips to the server.

There are cases where the endpoint is expected to be low performance, where it 
can make sense to re-code things from what the server provides into something that 
the client can handle with hardware acceleration, or into something that requires 
less bandwidth, but those are special cases, not the common case.

David Lang
