[NNagain] On "Throttling" behaviors
Dave Taht
dave.taht at gmail.com
Sun Oct 1 15:51:48 EDT 2023
I kind of expect many, many forks of conversation here, and for those
introducing a new topic, I would like to encourage using a relevant
subject line, so I have changed this one to suit.
On Sun, Oct 1, 2023 at 11:57 AM Frantisek Borsik
<frantisek.borsik at gmail.com> wrote:
>
> OK, so I will bite the bullet! I have invited Ajit Pai and Martin Geddes to join us here and let's see if they still have some time and/or even stomach for current round of NN discussion.
Honestly, I was hoping for some time to set up, and perhaps even for
enough of us here to agree on one definition of NN, to start with!
> Anyway, here is my bullet. I will argue with Martin, that - Net Neutrality CAN'T be implemented:
You meant "argue along with" rather than "with". I know you are not a
native English speaker, but as you phrased it, it means you were
arguing against what he described.
>
>> Whilst people argue over the virtues of net neutrality as a regulatory policy, computer science tells us regulatory implementation is a fool’s errand.
>> Suppose for a moment that you are the victim of a wicked ISP that engages in disallowed “throttling” under a “neutral” regime for Internet access. You like to access streaming media from a particular “over the top” service provider. By coincidence, the performance of your favoured application drops at the same time your ISP launches a rival content service of its own.
>> You then complain to the regulator, who investigates. She finds that your ISP did indeed change their traffic management settings right at the point that the “throttling” began. A swathe of routes, including the one to your preferred “over the top” application, have been given a different packet scheduling and routing treatment.
>> It seems like an open-and-shut case of “throttling” resulting in a disallowed “neutrality violation”. Or is it?
>> Here’s why the regulator’s enforcement order will never survive the resulting court case and expert witness scrutiny:
>
>
> https://www.martingeddes.com/one-reason-net-neutrality-cant-implemented/
Throttling, using DPI or other methods, is indeed feasible. It is very
straightforward to limit flows to or from a given set of IP addresses.
However, there are also technical limitations, based for example on
the underlying capacity of a path, be it 1 gbit or 10, which make it
hard for a customer to unwind the difference between intentional
throttling and merely being out of bandwidth across that link. A lot
of the netflix controversy was generated because netflix suddenly ate
far, far more bandwidth than anyone had provisioned, and was
ultimately addressed by them developing and making easily available a
caching architecture that could be located within an ISP's borders,
saving an enormous amount on transit costs. The rest of the computing
universe followed, with enormous numbers of CDNs from the bigger
services being built out in the years following.
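To make concrete just how mechanically simple "throttling" is: here is a toy token-bucket policer, sketched in Python. The IP address, rate, and burst numbers are made up for illustration; a real ISP would do this in hardware or with tc/DPI gear, not userspace Python.

```python
class TokenBucket:
    """Token bucket: permits up to `rate` bytes/sec, with bursts up to `burst` bytes."""
    def __init__(self, rate, burst, now=0.0):
        self.rate = rate          # refill rate, bytes per second
        self.burst = burst        # maximum bucket depth, bytes
        self.tokens = burst       # start full
        self.last = now

    def allow(self, nbytes, now):
        # Refill tokens for elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True           # forward the packet
        return False              # drop (or queue) it

# Police a hypothetical "disfavored" source to 1 Mbit/s (125,000 bytes/s).
limiters = {}
def police(src_ip, nbytes, now, throttled=frozenset({"192.0.2.1"})):
    if src_ip not in throttled:
        return True
    tb = limiters.setdefault(src_ip, TokenBucket(rate=125_000, burst=15_000, now=now))
    return tb.allow(nbytes, now)
```

Ten lines of bookkeeping per flow is all it takes; the hard part, as the rest of this thread argues, is proving *intent* from the outside.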
In part I reject a few of the older arguments here in the face of
subsequent technical improvements in how the internet works.
I had many discussions with Martin back in the day. The reasoning in
this piece is pretty sound, except that "fairness" can be achieved via
various means (notably flow ("fair") queueing), and it has always been
a (imperfectly implemented) goal of our e2e congestion control
algorithms to ultimately converge on an equal share of bandwidth
across differing RTTs to different services.
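For the flow-queueing half of that claim, a deficit round robin (DRR) scheduler makes the fairness property easy to see. This is a minimal sketch, with made-up packet sizes, not any particular production implementation:

```python
from collections import deque

def drr(queues, quantum, rounds):
    """Deficit Round Robin: each backlogged flow earns `quantum` bytes of
    credit per round, so long-run throughput equalizes regardless of a
    flow's packet size or how aggressively it fills its queue."""
    deficits = [0] * len(queues)
    sent = [0] * len(queues)
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0   # idle flows don't bank credit
                continue
            deficits[i] += quantum
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt
                sent[i] += pkt
    return sent

# Flow 0 blasts 1500-byte packets; flow 1 sends small 300-byte ones.
f0 = deque([1500] * 1000)
f1 = deque([300] * 5000)
```

Run for 100 rounds with a 1500-byte quantum and both flows come out with exactly 150,000 bytes served, no cooperation from the endpoints required — which is why fq gets you a fairness property that e2e congestion control alone can only approximate.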
My principal kvetch with his work was that every blog entry during
this period making these points then ended up linking to a "Quality
Floor", called "delta-something", a mathematical method that was
ill-documented, not available as open source, and impossible, for me
at least, to understand. The link to that solution in the piece above
is broken, for example. That quasi-mystical jump to "the solution" is
one I was never able to make.
I believe something like this method lives on in Domos's work today,
and despite them filing an internet draft on the subject (
https://datatracker.ietf.org/doc/draft-olden-ippm-qoo/ ) I remain
mostly in the dark, unable to evaluate their methods against tools I
already understand.
I like the idea of what I think a "quality floor" might provide, which
is something fq-everywhere can deliver, and no known e2e congestion
control can guarantee.
I would like it if, instead of ISPs selling "up to X bandwidth", they
sold a minimum guarantee of at least Y bandwidth, which is easier to
reason about and provision for, but harder to sell.
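The provisioning arithmetic behind that is worth spelling out. All the numbers below are hypothetical, chosen only to show why a floor is easy to reason about and a ceiling is not:

```python
# Hypothetical: a 10 Gbit/s shared link serving 1000 subscribers.
link_capacity_mbps = 10_000
subscribers = 1_000

# "Up to X": sell 1 Gbit/s plans and rely on statistical multiplexing.
# The oversubscription ratio is how hard you are betting that users
# won't all be active at once -- there is no worst-case guarantee.
advertised_mbps = 1_000
oversubscription = subscribers * advertised_mbps / link_capacity_mbps

# "At least Y": a hard floor must fit even when every subscriber is
# active simultaneously, so the worst case is a one-line check:
floor_mbps = link_capacity_mbps / subscribers
assert subscribers * floor_mbps <= link_capacity_mbps
```

A 10 Mbit/s floor is trivially verifiable; "up to 1 Gbit" at 100x oversubscription is verifiable by no one, which is exactly why it sells better.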
Instead:
For the last 14 years I have focused on restoring the correct behavior
of the well-defined congestion controls of the internet, first
documented here: https://ee.lbl.gov/papers/congavoid.pdf and built
upon in the decades since, while many others made enormous advances on
what was possible. Packet pacing, for example, is a genuine
breakthrough in how multiple services from multiple providers can
leave sufficient space for other flows to co-exist and eventually use
up their fair share.
>
> I hope you will read the link ^^ before jumping to Martin's conclusion, but still, here it is:
>
>>
>> So if not “neutrality”, then what else?
This is the point where his arguments began to fall into the weeds.
>> The only option is to focus on the end-to-end service quality.
I agree that achieving good quality on various metrics, especially
under load, is needed. The popular MOS metric for voip could use some
improvement, and we lack any coherent framework for measuring
videoconferencing well.
>>The local traffic management is an irrelevance and complete distraction.
I am not sure how to tie this to the rest of the argument. The local
traffic management can be as simple as short buffers, or inordinately
complex, as you will find complex organisations internally trying to
balance the needs for throughput and latency, and for example, the
CAKE qdisc not only does active queue management, and FQ, but
optionally allows additional means of differentiation for
voice/videoconferencing, best effort, and "bulk" traffic, in an
effort, ultimately to balance goals to achieve the "service quality"
the user desires. Then there are the side-effects of various layer 2
protocols - wifi tends to be batchy, 5G tends towards being hugely
bufferbloated - PON has a 250us lower limit, cable
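For readers unfamiliar with the AQM half of what CAKE does, here is the core idea in miniature, loosely after CoDel (which CAKE builds on). The 5 ms/100 ms constants are the published CoDel defaults; the code itself is just a sketch, not the real algorithm:

```python
import math

TARGET = 0.005    # 5 ms of acceptable standing-queue delay
INTERVAL = 0.100  # react over roughly a worst-case RTT

def standing_queue(sojourn_times_over_interval):
    """Simplified CoDel test: a persistent queue exists only if the
    *minimum* per-packet sojourn time over a whole interval exceeds
    the target -- transient bursts are deliberately tolerated."""
    return min(sojourn_times_over_interval) > TARGET

def next_drop_interval(drop_count):
    """Once dropping, CoDel shrinks the gap between drops as
    INTERVAL / sqrt(count), gently ramping pressure on the sender."""
    return INTERVAL / math.sqrt(drop_count)
```

The point being that "local traffic management" here is not throttling anyone; it is signalling the endpoints' own congestion controls to keep delay low for everybody.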
> Terms like “throttling” are technically meaningless. The lawgeneers who have written articles and books saying otherwise are unconsciously incompetent at computer science.
There are use cases, both good and bad, for "throttling". It is and
has always been technically feasible to rate limit flows from anyone
to anyone. Advisable, sometimes! DDOS attacks are one case where
throttling is needed.
Helping users distinguish being intentionally throttled from the fate
of the rest of the network would be a goodness. The side effect of one
service, living on a slow network, suddenly becoming popular is known
as the "slashdot effect", and is mostly mitigated by leveraging CDN
and cloud technologies, totally out of the control of the local ISP.
>> We computer scientists call this viable alternative “end-to-end” approach a “quality floor”.
In googling I have thus far been unable to find a definition of
"Quality floor". Cite, please?
> The good news is that we now have a practical means to measure it and hard science to model it.
Weeds, here.
>> Maybe we should consciously and competently try it?
... if only we had running code and rough consensus.
>
>
>
> All the best,
>
> Frank
>
> Frantisek (Frank) Borsik
>
>
>
> https://www.linkedin.com/in/frantisekborsik
>
> Signal, Telegram, WhatsApp: +421919416714
>
> iMessage, mobile: +420775230885
>
> Skype: casioa5302ca
>
> frantisek.borsik at gmail.com
>
>
>
> On Sun, Oct 1, 2023 at 7:15 PM Dave Taht via Nnagain <nnagain at lists.bufferbloat.net> wrote:
>>
>> I am pleased to see over 100 people have signed up for this list
>> already. I am not really planning on "activating" this list until
>> tuesday or so, after a few more people I have reached out to sign up
>> (or not).
>>
>> I would like y'all to seek out people with differing opinions and
>> background, in the hope that one day, we can shed more light than heat
>> about the science and technologies that "govern" the internet, to
>> those that wish to regulate it. In the short term, I would like enough
>> of us to agree on an open letter, or NPRM filing, and to put out a
>> press release(s), in the hope that this time, the nn and title ii
>> discussion is more about real, than imagined, internet issues. [1]
>>
>> I am basically planning to move the enormous discussion from over
>> here, titled "network neutrality back in the news":
>>
>> https://lists.bufferbloat.net/pipermail/starlink/2023-September/thread.html
>>
>> to here. I expect that we are going to be doing this discussion for a
>> long time, and many more issues besides my short term ones will be
>> discussed. I hope that we can cleanly isolate technical issues from
>> political ones, in particular, and remain civil, and factual, and
>> avoid hyperbole.
>>
>> Since the FCC announcement of a proposed NPRM as of Oct 19th... my own
>> initial impetus was to establish why the NN debate first started in
>> 2005, and the conflict between the legal idea of "common carriage" vs
>> what the internet was actually capable of in mixing voip and
>> bittorrent, in
>> "The Bufferbloat vs Bittorrent vs Voip" phase. Jim Gettys, myself, and
>> Jason Livingood have weighed in with their stories on linkedin,
>> twitter, and elsewhere.
>>
>> There was a second phase, somewhat triggered by netflix, that Jonathan
>> Morton summarized in that thread, ending in the first establishment of
>> some title ii rules in 2015.
>>
>> The third phase was when title ii was rescinded... and all that has
>> happened since.
>>
>> I, for one, am fiercely proud about how our tech community rose to
>> meet the challenge of covid, and how, for example, videoconferencing
>> mostly just worked for so many, after a postage stamp sized start in
>> 2012[2]. The oh-too-faint-praise for that magnificent effort from
>> higher levels rankles me greatly, but I will try to get it under
>> control.
>>
>> And this fourth phase, opening in a few weeks, is more, I think about
>> privacy and power than all the other phases, and harmonization with EU
>> legislation, perhaps. What is on the table for the industry and
>> internet is presently unknown.
>>
>> So here we "NN-again". Lay your issues out!
>>
>>
>>
>> [1] I have only had one fight with the FCC. Won it handily:
>> https://www.computerworld.com/article/2993112/vint-cerf-and-260-experts-give-fcc-a-plan-to-secure-wi-fi-routers.html
>> In this case this is not so much a fight, I hope, but a collaborative
>> effort towards a better, faster, lower latency, and more secure,
>> internet for everyone.
>>
>> [2] https://archive.org/details/video1_20191129
>> --
>> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>> Dave Täht CSO, LibreQos
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos