[NNagain] On "Throttling" behaviors

Sebastian Moeller moeller0 at gmx.de
Mon Oct 2 02:34:40 EDT 2023


Hi Dave,


> On Oct 1, 2023, at 21:51, Dave Taht via Nnagain <nnagain at lists.bufferbloat.net> wrote:
> 
> I kind of expect many, many forks of conversations here, and for those
> introducing a new topic,
> I would like to encourage using a relevant subject line, so I have
> changed this one to suit.
> 
> On Sun, Oct 1, 2023 at 11:57 AM Frantisek Borsik
> <frantisek.borsik at gmail.com> wrote:
>> 
>> OK, so I will bite the bullet! I have invited Ajit Pai and Martin Geddes to join us here and let's see if they still have some time and/or even stomach for the current round of NN discussion.
> 
> Honestly I was hoping for some time to set up, and perhaps even to
> have enough of us here agree on one of the definitions of NN, to
> start with!
> 
>> Anyway, here is my bullet. I will argue with Martin, that - Net Neutrality CAN'T be implemented:
> 
> You meant "argue along with" rather than "with". I know you are not a
> native english speaker, but in the way you said it, it meant you were
> arguing against what he described.
> 
>> 
>>> Whilst people argue over the virtues of net neutrality as a regulatory policy, computer science tells us regulatory implementation is a fool’s errand.
>>> Suppose for a moment that you are the victim of a wicked ISP that engages in disallowed “throttling” under a “neutral” regime for Internet access. You like to access streaming media from a particular “over the top” service provider. By coincidence, the performance of your favoured application drops at the same time your ISP launches a rival content service of its own.
>>> You then complain to the regulator, who investigates. She finds that your ISP did indeed change their traffic management settings right at the point that the “throttling” began. A swathe of routes, including the one to your preferred “over the top” application, have been given a different packet scheduling and routing treatment.
>>> It seems like an open-and-shut case of “throttling” resulting in a disallowed “neutrality violation”. Or is it?
>>> Here’s why the regulator’s enforcement order will never survive the resulting court case and expert witness scrutiny:
>> 
>> 
>> https://www.martingeddes.com/one-reason-net-neutrality-cant-implemented/
> 
> Throttling, using DPI or other methods, is indeed feasible. It is very
> straightforward to limit flows to or from a given set of IP addresses.
> However, there are also technical limitations, based on, for example,
> the underlying capacity of a path, be it one Gbit or 10, which also
> leaves a customer with the problem of unwinding the difference between
> intentional throttling and merely being out of bandwidth across that
> link. A lot of the Netflix controversy was generated because Netflix
> suddenly ate far, far more bandwidth than anyone had provisioned, and
> was ultimately addressed by them developing and making easily
> available a caching architecture that could be located within an ISP's
> borders, saving an enormous amount on transit costs. The rest of the
> computing universe followed, with enormous numbers of CDNs from the
> bigger services being built out in the years following.
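
	[SM] To illustrate just how mundane the mechanics are: a minimal, purely hypothetical token-bucket sketch in Python of per-destination throttling. The prefix, rate and names are made up; a real deployment would do this with DPI boxes or tc/HTB in the forwarding path, but the principle is the same.

import ipaddress
import time

# Hypothetical "throttled" destinations and cap; 203.0.113.0/24 is a
# documentation prefix, not any real service.
THROTTLED = [ipaddress.ip_network("203.0.113.0/24")]
RATE_BPS = 5_000_000 / 8        # 5 Mbit/s expressed in bytes per second
BURST_BYTES = 64 * 1024

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False            # over the cap: drop or queue the packet

bucket = TokenBucket(RATE_BPS, BURST_BYTES)

def forward_ok(dst_ip, nbytes):
    # Only traffic towards the matched prefixes is ever limited.
    dst = ipaddress.ip_address(dst_ip)
    if any(dst in net for net in THROTTLED):
        return bucket.allow(nbytes)
    return True
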
> 
> In part I kind of reject a few older arguments here in the face of
> subsequent technical improvements in how the internet works.
> 
> I had many discussions with Martin back in the day. The reasoning in
> this piece is pretty sound,

	[SM] So to me this really looks like a Nirvana fallacy that invokes the halting problem, of all things, and deduces that all regulatory action is futile. But in the end it really proposes what NN is all about: making sure end-customers' end-to-end experience is not artificially biased. I guess I am misunderstanding his point and would appreciate being set straight here ;)


> except that "fairness" can be achieved via
> various means (notably flow (fair) queueing), and it has always been a
> (imperfectly implemented) goal of our e2e congestion control
> algorithms to ultimately converge to a equal amount of bandwidth at
> differing RTTs to different services.

	[SM] This is a good point that makes me reconsider my position on this topic a bit; because I have been living in a flow-queued environment for over a decade now, I fail to see why achieving that (at a "good enough" level) should be considered unachievable.
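
	[SM] For anyone not living in such an environment, a toy sketch of what flow queueing does: a simplified deficit-round-robin scheduler (quantum and flow keys are made up; real implementations like fq_codel or CAKE add AQM and 5-tuple hashing on top). Every flow gets its own queue and the scheduler serves them in turn, so a heavy flow cannot crowd out a light one.

from collections import defaultdict, deque

QUANTUM = 1514                  # bytes of credit a flow earns per round

queues = defaultdict(deque)     # flow id -> queued packet sizes (bytes)
deficit = defaultdict(int)

def enqueue(flow_id, nbytes):
    queues[flow_id].append(nbytes)

def dequeue_round():
    # One DRR round: each backlogged flow may send up to its accumulated deficit.
    sent = []
    for flow_id, q in list(queues.items()):
        if not q:
            continue
        deficit[flow_id] += QUANTUM
        while q and q[0] <= deficit[flow_id]:
            nbytes = q.popleft()
            deficit[flow_id] -= nbytes
            sent.append((flow_id, nbytes))
        if not q:
            deficit[flow_id] = 0    # idle flows do not bank credit
    return sent

Run long enough, every backlogged flow drains at roughly the same byte rate regardless of how aggressively its sender ramps up, which is the "good enough" fairness I have in mind.
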


> My principal kvetch with his work was that every blog entry during
> this period making these points then ended up linking to a "Quality
> Floor", called "delta-something", a mathematical method that was
> ill-documented, not available as open source, and impossible, for me
> at least, to understand. The link to that solution in that piece is
> broken, for example. That quasi-mystical jump to "the solution", I was
> never able to make.

	[SM] Looked at from high above, his argument really seems to say that NN measurement should not focus on per-network configuration but on comparing end-to-end performance/experience, something I guess nobody disagrees with all that much?


> 
> I believe something like this method lives on in Domos's work today,
> and despite them filing an internet draft on the subject (
> https://datatracker.ietf.org/doc/draft-olden-ippm-qoo/ ) I remain
> mostly in the dark, without being able to evaluate their methods
> against tools I already understand.

	[SM] Their telemetry method really is one that is only achievable for an ISP inside its own network; as clever as it appears to be, it seems equally unsuited to end-to-end measurements over the internet, as almost no path will be fully instrumented...



> I like the idea of what I think a "quality floor" might provide, which
> is something that fq-everywhere can provide, and no known e2e
> congestion control can guarantee.
> 
> I would like it if instead of ISPs selling "up to X bandwidth" they
> sold a minimum guarantee of at least Y bandwidth, which is easier to
> reason and provision for, but harder to sell.

	[SM] Independently of what an ISP promises, not all paths will be able to carry that much capacity at any one time, and often through no fault of that ISP. The way the German regulatory agency (BNetzA) tried to tackle this issue is by requiring, foremost, that the contracted rates can be measured against a set of reference speedtest servers operated for the BNetzA and located at one or more of Germany's largest IXes (though ISPs can also opt for private peering with the AS hosting the servers, which IMHO is sub-optimal). The EU regulation also makes an exemption for things out of the ISP's control: say, if Notflix [sic] decided not to serve ISP A, then ISP A cannot be faulted for having shitty connectivity to Notflix, even if Notflix offered excellent access for excellent $$$$. Not saying the EU regime is perfect, but it at least covers a lot of ground and seems all in all balanced between end-user, ISP and content provider perspectives.
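
	[SM] The measurement side of that regime is conceptually simple; a rough sketch (the URL and contracted rate are placeholders, the real programme uses its own reference servers and client): download a known file from a reference server and compare the achieved rate with the contracted rate.

import time
import urllib.request

TEST_URL = "https://reference-server.example/100MB.bin"   # placeholder
CONTRACTED_MBIT = 100.0                                    # placeholder

def measure_download_mbit(url):
    start = time.monotonic()
    total = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(1 << 16)
            if not chunk:
                break
            total += len(chunk)
    return total * 8 / (time.monotonic() - start) / 1e6

achieved = measure_download_mbit(TEST_URL)
print(f"achieved {achieved:.1f} of {CONTRACTED_MBIT:.0f} Mbit/s contracted")
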



> 
> Instead:
> 
> In the last 14 years I have focused on restoring correct behavior of
> the well-defined congestion controls of the internet, first documented
> here:  https://ee.lbl.gov/papers/congavoid.pdf and built on in the
> decades since, while many others made enormous advances on what was
> possible - packet pacing, for example, is a genuine breakthrough in
> how multiple services from multiple providers can leave sufficient
> space for other flows to co-exist and eventually use up their fair
> share.

	[SM] Yes, we as a community made great strides on the technical side, but that did not magically remove all NN issues. For example, mobile carriers in the EU opted for tilting the playing field by zero-rating selected services on their volume-limited mobile networks, something that was finally deemed in violation of the EU rules exactly because ISPs are required not to "pick winners or losers" but to offer unbiased access to the internet.



> 
>> 
>> I hope you will read the link ^^ before jumping to Martin's conclusion, but still, here it is:
>> 
>>> 
>>> So if not “neutrality”, then what else?
> 
> This is the phase where his arguments began to fall into the weeds.
> 
>>> The only option is to focus on the end-to-end service quality.
> 
> I agree that achieving quality on various good metrics, especially
> under load, is needed. The popular MOS metric for voip could use some
> improvement, and we lack any coherent framework for measuring
> videoconferencing well.
> 
>>> The local traffic management is an irrelevance and complete distraction.
> 
> I am not sure how to tie this to the rest of the argument. The local
> traffic management can be as simple as short buffers, or inordinately
> complex, as you will find complex organisations internally trying to
> balance the needs for throughput and latency; for example, the
> CAKE qdisc not only does active queue management and FQ, but
> optionally allows additional means of differentiation for
> voice/videoconferencing, best effort, and "bulk" traffic, in an
> effort, ultimately, to balance goals and achieve the "service quality"
> the user desires. Then there are the side-effects of various layer 2
> protocols - wifi tends to be batchy, 5G tends towards being hugely
> bufferbloated, PON has a 250us lower limit, cable

	[SM] I think his argument is that a sufficiently malevolent and genius ISP might be able to spread a discriminatory policy so carefully over many network nodes that the NN police would never be able to find enough evidence of unfair tampering, and hence that ISP would evade NN policing. I think this is a strawman, as surely NN policing works by measuring whether there is observable discrimination end to end, not by having tech police secure and scrutinize router/switch configurations.
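
	[SM] Something along these lines, i.e. pure end-to-end observation (the hostnames are placeholders; a real test would also compare throughput and repeat at different times of day): measure what the user actually sees towards the allegedly throttled service and towards a comparable control, and let the delta speak, without ever looking at a router config.

import socket
import statistics
import time

SUSPECT = ("video.example.com", 443)   # allegedly throttled service (placeholder)
CONTROL = ("cdn.example.org", 443)     # comparable control service (placeholder)

def connect_times_ms(host_port, samples=20):
    # TCP connect time as a cheap end-to-end latency proxy.
    times = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection(host_port, timeout=5):
            pass
        times.append((time.monotonic() - start) * 1000)
        time.sleep(0.2)
    return times

suspect = connect_times_ms(SUSPECT)
control = connect_times_ms(CONTROL)
print(f"suspect median {statistics.median(suspect):.1f} ms, "
      f"control median {statistics.median(control):.1f} ms")
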



> 
>> Terms like “throttling” are technically meaningless. The lawgeneers who have written articles and books saying otherwise are unconsciously incompetent at computer science.
> 
> There are use cases, both good and bad, for "throttling". It is and
> has always been technically feasible to rate limit flows from anyone
> to anyone. Advisable, sometimes! DDoS attacks are one case where
> throttling is needed.

	[SM] Yes, and e.g. the EU rules allow for reasonable traffic management, and during the covid lockdowns it was made explicitly clear that in some cases this can mean slowing down a whole class of traffic (like streaming video), as long as that is traffic management of the whole class and not simply of service X. So in reality many potential objections against simplistic NN rules can be avoided simply by not making the NN rules simplistic ;)
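
	[SM] The distinction is easy to express even in a toy classifier sketch (class names, suffixes and the ceiling are made up): the rule keys on the traffic class, so every video provider lands in the same bucket and no single service X is singled out.

# Placeholder suffixes standing in for "any video service"; the point is
# that the rule keys on the class, never on one provider.
VIDEO_SUFFIXES = (".video.example", ".stream.example")
CLASS_CEILING_BPS = {"video": 50_000_000 / 8, "default": None}   # None = unshaped

def traffic_class(sni_hostname):
    # Every provider matching the class gets the same treatment.
    return "video" if sni_hostname.endswith(VIDEO_SUFFIXES) else "default"
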


> 
> Breaking the user perception of being intentionally throttled vs the
> fate of the rest of the network would be a goodness. The side effect
> of one service, living on a slow network, suddenly becoming popular is
> known as the "slashdot effect", and is mostly mitigated by leveraging
> CDN and cloud technologies, totally out of the control of the local
> ISP.

	[SM] In the short term, for sure! Longer term, an ISP constantly running into such a problem might reconsider its upstream connectivity, or its cooperation with that service provider, to remedy the issue, as long as these options are financially reasonable....


> 
>>> We computer scientists call this viable alternative “end-to-end” approach a “quality floor”.
> 
> In googling I have thus far been unable to find a definition of
> "Quality floor". Cite, please?

	[SM] Ah, good old quality_floor(), which is a hand-crafted artisanal version of the mass-produced floor() function used in many programming languages ;)

> 
>> The good news is that we now have a practical means to measure it and hard science to model it.
> 
> Weeds, here.
> 
>>> Maybe we should consciously and competently try it?
> 
> ... if only we had running code and rough consensus.

	[SM] I see what you did there... Tip of the hat

Regards
	Sebastian



> 
>> 
>> 
>> 
>> All the best,
>> 
>> Frank
>> 
>> Frantisek (Frank) Borsik
>> 
>> 
>> 
>> https://www.linkedin.com/in/frantisekborsik
>> 
>> Signal, Telegram, WhatsApp: +421919416714
>> 
>> iMessage, mobile: +420775230885
>> 
>> Skype: casioa5302ca
>> 
>> frantisek.borsik at gmail.com
>> 
>> 
>> 
>> On Sun, Oct 1, 2023 at 7:15 PM Dave Taht via Nnagain <nnagain at lists.bufferbloat.net> wrote:
>>> 
>>> I am pleased to see over 100 people have signed up for this list
>>> already. I am not really planning on "activating" this list until
>>> Tuesday or so, after a few more people I have reached out to have
>>> signed up (or not).
>>> 
>>> I would like y'all to seek out people with differing opinions and
>>> background, in the hope that one day, we can shed more light than heat
>>> about the science and technologies that "govern" the internet, to
>>> those that wish to regulate it. In the short term, I would like enough
>>> of us to agree on an open letter, or NPRM filing, and to put out
>>> press release(s), in the hope that this time, the NN and Title II
>>> discussion is more about real, than imagined, internet issues. [1]
>>> 
>>> I am basically planning to move the enormous discussion from over
>>> here, titled "network neutrality back in the news":
>>> 
>>> https://lists.bufferbloat.net/pipermail/starlink/2023-September/thread.html
>>> 
>>> to here. I expect that we are going to be doing this discussion for a
>>> long time, and many more issues besides my short term ones will be
>>> discussed. I hope that we can cleanly isolate technical issues from
>>> political ones, in particular, and remain civil, and factual, and
>>> avoid hyperbole.
>>> 
>>> Since the FCC announcement of a proposed NPRM as of Oct 19th... my own
>>> initial impetus was to establish why the NN debate first started in
>>> 2005, and the conflict between the legal idea of "common carriage" vs
>>> what the internet was actually capable of in mixing voip and
>>> bittorrent, in
>>> "The Bufferbloat vs Bittorrent vs Voip" phase. Jim Gettys, myself, and
>>> Jason Livingood have weighed in on their stories on linkedin,
>>> twitter, and elsewhere.
>>> 
>>> There was a second phase, somewhat triggered by netflix, that Jonathan
>>> Morton summarized in that thread, ending in the first establishment of
>>> some title ii rules in 2015.
>>> 
>>> The third phase was when title ii was rescinded... and all that has
>>> happened since.
>>> 
>>> I, for one, am fiercely proud about how our tech community rose to
>>> meet the challenge of covid, and how, for example, videoconferencing
>>> mostly just worked for so many, after a postage stamp sized start in
>>> 2012[2]. The oh-too-faint-praise for that magnificent effort from
>>> higher levels rankles me greatly, but I will try to get it under
>>> control.
>>> 
>>> And this fourth phase, opening in a few weeks, is more, I think about
>>> privacy and power than all the other phases, and harmonization with EU
>>> legislation, perhaps. What is on the table for the industry and
>>> internet is presently unknown.
>>> 
>>> So here we "NN-again". Lay your issues out!
>>> 
>>> 
>>> 
>>> [1] I have only had one fight with the FCC. Won it handily:
>>> https://www.computerworld.com/article/2993112/vint-cerf-and-260-experts-give-fcc-a-plan-to-secure-wi-fi-routers.html
>>> In this case this is not so much a fight, I hope, but a collaborative
>>> effort towards a better, faster, lower latency, and more secure,
>>> internet for everyone.
>>> 
>>> [2] https://archive.org/details/video1_20191129
>>> --
>>> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>> Dave Täht CSO, LibreQos
>>> _______________________________________________
>>> Nnagain mailing list
>>> Nnagain at lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/nnagain
> 
> 
> 
> -- 
> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
> _______________________________________________
> Nnagain mailing list
> Nnagain at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain


