From: Spencer Sevilla
Date: Tue, 17 Oct 2023 10:26:23 -0700
To: Network Neutrality is back! Let´s make the technical aspects heard this time! <nnagain@lists.bufferbloat.net>
Subject: Re: [NNagain] NN and freedom of speech, and whether there is worthwhile good-faith discussion in that direction
I know this is a small side note, but I felt compelled to speak up in defense of online gaming. I'm not a gamer at all, and until a year or two ago I would have agreed with Dick's take about the benefit to "society as a whole." Lately, though, I've started hearing about research on the benefits of groups of friends using online games to socialize together, effectively using the game primarily as a group call.

There's also this project, where people have collected banned/censored books into a library inside Minecraft, specifically as a solution for contexts where regulators/censors ban and monitor content on other channels (websites etc.) but don't surveil Minecraft... presumably because they share Dick's opinion ;-) https://www.uncensoredlibrary.com/en
On Oct 17, 2023, at 03:26, Sebastian Moeller via Nnagain <nnagain@lists.bufferbloat.net> wrote:

Hi Richard,

On Oct 16, 2023, at 20:04, Dick Roy <dickroy@alum.mit.edu> wrote:

Good points all, Sebastien. How to "trade-off" a fixed capacity amongst many users is ultimately a game theoretic problem when users are allowed to make choices, which is certainly the case here. Secondly, any network that can and does generate "more traffic" (aka overhead such as ACKs, NACKs, and retries) reduces the capacity of the network, and ultimately can lead to the "user" capacity going to zero! Such is life in the fast lane (aka the internet).

Lastly, on the issue of low-latency real-time experience, there are many applications that need/want such capabilities and that actually have a net benefit to the individuals involved AND to society as a whole. IMO, interactive gaming is NOT one of those.

[SM] Yes, gaming is one obvious example of a class of uses that work best with low latency and low jitter, but not necessarily a use case worthy enough on its own to justify the work required to increase the responsiveness of the internet. Other examples are video conferences, VoIP and, by extension of both, musical collaboration over the internet, and, surprising to some, even plain old web browsing (a browser often needs to first fetch a page before it can follow links and load resources, and every such fetch takes at best a single RTT). None of these are inherently beneficial or detrimental to individuals or society, but most can be used to improve the status quo... I would argue that in the last 4 years the relevance of interactive use cases has been made quite clear to a lot of folks...

OK, so now you know I don't engage in these time sinks with no redeeming social value. :)

[SM] Duly noted ;)

Since it is not hard to argue that, just like power distribution, information exchange/dissemination is "in the public interest", the question becomes "Do we allow any and all forms of information exchange/dissemination over what is becoming something akin to a public utility?" FWIW, I don't know the answer to this question! :)

[SM] This is an interesting question, and one (only) tangentially related to network neutrality... it is more related to freedom of speech and the limits thereof. Maybe a question for another mailing list? Certainly one meriting a topic change...

Regards
Sebastian

Cheers,

RR

-----Original Message-----
From: Sebastian Moeller [mailto:moeller0@gmx.de]
Sent: Monday, October 16, 2023 10:36 AM
To: dickroy@alum.mit.edu; Network Neutrality is back! Let´s make the technical aspects heard this time!
Subject: Re: [NNagain] transit and peering costs projections

Hi Richard,

On Oct 16, 2023, at 19:01, Dick Roy via Nnagain <nnagain@lists.bufferbloat.net> wrote:

Just an observation: ANY type of congestion control that changes application behavior in response to congestion, or predicted congestion (ECN), begs the question "How does throttling of the application's information exchange rate (aka behavior) affect the user experience, and will the user tolerate it?"

[SM] The trade-off here is that if the application does not respond (or rather, if no application would respond) we would end up with congestion collapse, where no application gains much of anything as the network busies itself re-transmitting dropped packets without making much headway... A simplistic application of game theory might imply that individual applications could try to game this, and generally that seems to be true, but we have remedies for that available...
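To make that concrete, here is a minimal sketch (in C, purely illustrative, not any particular stack's code) of the additive-increase/multiplicative-decrease behaviour that TCP-like senders use: grow the congestion window gently while ACKs arrive, halve it on a loss or congestion signal. That back-off is what keeps the network out of congestion collapse:

/* Minimal AIMD congestion-window sketch (illustrative only).
 * cwnd is counted in packets; a real stack tracks bytes, RTT, ssthresh, etc. */
#include <stdio.h>

static double cwnd = 10.0;            /* current congestion window */

static void on_ack(void)              /* additive increase: roughly +1 packet per RTT */
{
    cwnd += 1.0 / cwnd;
}

static void on_congestion(void)       /* multiplicative decrease: halve on loss/ECN mark */
{
    cwnd /= 2.0;
    if (cwnd < 1.0)
        cwnd = 1.0;
}

int main(void)
{
    for (int i = 0; i < 100; i++)
        on_ack();
    printf("cwnd after 100 ACKs: %.1f packets\n", cwnd);
    on_congestion();
    printf("cwnd after one congestion signal: %.1f packets\n", cwnd);
    return 0;
}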



Given any (complex and packet-switched) network = topology of interconnected nodes and links, each with possible a = different capacity and characteristics, such as the internet today, IMO = the two fundamental questions are:

1) How can a given network be = operated/configured so as to maximize aggregate throughput (i.e. achieve = its theoretical capacity), and
2) What things in the network need to = change to increase the throughput (aka parameters in the network with = the largest Lagrange multipliers associated with = them)?

[SM] The thing is we generally = know how to maximize (average) throughput, just add (over-)generous = amounts of buffering, the problem is that this screws up the other = important quality axis, latency... We ideally want low latency and even = more low latency variance (aka jitter) AND high throughput... Turns out = though that above a certain throughput threshold* many users do not seem = to care all that much for more throughput as long as interactive use = cases are sufficiently responsive... but high responsiveness requires = low latency and low jitter... This is actually a good thing, as that = means we do not necessarily aim for 100% utilization (almost requiring = deep buffers and hence resulting in compromised latency) but can get = away with say 80-90% where shallow buffers will do (or rather where = buffer filling stays shallow, there is IMHO still value in having deep = buffers for rare events that need it).



*) This is not a = hard physical law so the exact threshold is not set in stone, but unless = one has many parallel users, something in the 20-50 Mbps range is plenty = and that is only needed in the "loaded" direction, that is for pure = consumers the upload can be thinner, for pure producers the download can = be thinner.
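As a back-of-the-envelope illustration of the buffering/latency trade-off above (a sketch with made-up but typical numbers): the worst-case queueing delay a buffer can add is simply its size divided by the drain rate of the link behind it, so a modest 1 MiB of buffering in front of a 20 Mbit/s bottleneck can add roughly 400 ms on its own:

/* Worst-case standing-queue delay = buffer size / link rate (illustrative numbers only). */
#include <stdio.h>

int main(void)
{
    double buffer_bytes = 1024.0 * 1024.0;   /* 1 MiB of buffering (assumed) */
    double link_bps     = 20e6;              /* 20 Mbit/s bottleneck (assumed) */

    double delay_s = (buffer_bytes * 8.0) / link_bps;
    printf("A full %.0f KiB buffer on a %.0f Mbit/s link adds %.0f ms of queueing delay\n",
           buffer_bytes / 1024.0, link_bps / 1e6, delay_s * 1000.0);
    return 0;
}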



I am not an expert = in this field,

     [SM] = Nor am I, I come from the wet-ware side of things so not even soft- or = hard-ware ;)


however it seems to me = that answers to these questions would be useful, assuming they are not = yet available!

Cheers,

RR



-----Original = Message-----
From: Nnagain = [mailto:nnagain-bounces@lists.bufferbloat.net] On Behalf Of rjmcmahon = via Nnagain
Sent: Sunday, October 15, 2023 1:39 PM
To: Network = Neutrality is back! Let=C2=B4s make the technical aspects heard this = time!
Cc: rjmcmahon
Subject: Re: [NNagain] transit and peering = costs projections

Hi Jack,

Thanks again for sharing. It's = very interesting to me.

Today, the networks are shifting from = capacity constrained to latency 
constrained, as can be = seen in the IX discussions about how the speed of 
light over fiber is too = slow even between Houston & Dallas.
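For a sense of scale (a rough sketch; the distance and the refractive index of the glass are approximate, and real fiber routes are longer than the straight line), even at the speed of light in fiber, Houston to Dallas is on the order of a couple of milliseconds each way:

/* Sketch: straight-line propagation delay over fiber between Houston and Dallas.
 * Distance and refractive index are rough, illustrative figures. */
#include <stdio.h>

int main(void)
{
    double distance_m = 400e3;         /* ~400 km as the crow flies (assumed) */
    double c          = 299792458.0;   /* speed of light in vacuum, m/s */
    double v_fiber    = c / 1.47;      /* roughly the speed of light in glass */

    double one_way_s = distance_m / v_fiber;
    printf("one way: %.2f ms, round trip: %.2f ms (real fiber paths are longer)\n",
           one_way_s * 1e3, 2.0 * one_way_s * 1e3);
    return 0;
}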

The mitigations against = standing queues (which cause bloat today) are:

o) Shrink the e2e = bottleneck queue so it will drop packets in a flow and 
TCP will respond to = that "signal"
o) Use some form of ECN marking where the network = forwarding plane 
ultimately informs the = TCP source state machine so it can slow down or 
pace effectively. This = can be an earlier feedback signal and, if done 
well, can inform the = sources to avoid bottleneck queuing. There are 
couple of approaches = with ECN. Comcast is trialing L4S now which seems 
interesting to me as a = WiFi test & measurement engineer. The jury is 
still out on this and = measurements are needed.
o) Mitigate source side bloat via = TCP_NOTSENT_LOWAT
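A minimal sketch of that last item (assuming a Linux host where TCP_NOTSENT_LOWAT is available via <netinet/tcp.h>; the 128 kB threshold is only an illustrative value): capping how much not-yet-sent data may sit in the socket keeps write-side buffering, and hence source-side latency, small:

/* Sketch: cap unsent socket-buffer data with TCP_NOTSENT_LOWAT (Linux; threshold is illustrative). */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int lowat = 128 * 1024;   /* keep at most ~128 kB of unsent data queued in the socket */
    if (setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT, &lowat, sizeof(lowat)) < 0)
        perror("setsockopt(TCP_NOTSENT_LOWAT)");

    /* ... connect(), then write only when poll()/epoll reports writability,
     * which now happens only while unsent data is below the threshold ... */

    close(fd);
    return 0;
}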

The QoS priority approach per congestion is = orthogonal by my judgment as 
it's typically not = supported e2e, many networks will bleach DSCP 
markings. And it's = really too late by my judgment.

Also, on clock sync, yes your = generation did us both a service and 
disservice by getting = rid of the PSTN TDM clock ;) So IP networking 
devices kinda ignored = clock sync, which makes e2e one way delay (OWD) 
measurements = impossible. Thankfully, the GPS atomic clock is now 
available mostly = everywhere and many devices use TCXO oscillators so 
it's possible to get = clock sync and use oscillators that can minimize 
drift. I pay $14 for a = Rpi4 GPS chip with pulse per second as an 
example.

It = seems silly to me that clocks aren't synced to the GPS atomic clock 
even if by a proxy even = if only for measurement and monitoring.

Note: As Richard Roy will = point out, there really is no such thing as 
synchronized clocks = across geographies per general relativity - so those 
syncing clocks need to = keep those effects in mind. I limited the iperf 2 
timestamps to = microsecond precision in hopes avoiding those issues.
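As a minimal sketch of what synchronized clocks buy you (assuming both ends are disciplined to GPS/PPS and the sender stamps each packet with its send time in microseconds; the packet format here is made up for illustration, it is not iperf 2's): one-way delay is then just the receive time minus the carried send time:

/* Sketch: one-way delay from a sender-supplied timestamp, assuming both clocks
 * are GPS/PPS disciplined. The timestamp format is illustrative, not iperf 2's. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static int64_t now_usec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return (int64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}

/* tx_usec would be read out of the received packet's payload. */
static void report_owd(int64_t tx_usec)
{
    int64_t owd = now_usec() - tx_usec;   /* only meaningful if both clocks are synced */
    printf("one-way delay: %lld us\n", (long long)owd);
}

int main(void)
{
    report_owd(now_usec() - 1500);        /* pretend a packet was sent 1.5 ms ago */
    return 0;
}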

Note: With = WiFi, a packet drop can occur because an intermittent RF 
channel condition. TCP = can't tell the difference between an RF drop vs a 
congested queue drop. = That's another reason ECN markings from network 
devices may be better = than dropped packets.

Note: I've added some iperf 2 test support = around pacing as that seems 
to be the direction the = industry is heading as networks are less and 
less capacity strained = and user quality of experience is being driven by 
tail latencies. One can = also test with the Prague CCA for the L4S 
scenarios. (This is a = fun project: https://www.l4sgear.com/ and fairly 
low = cost)

--fq-rate n[kmgKMG]
Set a rate to be used with = fair-queuing based socket-level pacing, in 
bytes or bits per = second. Only available on platforms supporting the 
SO_MAX_PACING_RATE = socket option. (Note: Here the suffixes indicate 
bytes/sec or bits/sec = per use of uppercase or lowercase, respectively)

--fq-rate-step = n[kmgKMG]
Set a step of rate to be used with fair-queuing based = socket-level 
pacing,= in bytes or bits per second. Step occurs every 
fq-rate-step-interval = (defaults to one second)

--fq-rate-step-interval n
Time in = seconds before stepping the fq-rate
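Per the note above, --fq-rate relies on the SO_MAX_PACING_RATE socket option; a minimal sketch of using that option directly (Linux-specific, with fq qdisc or in-stack pacing; the 100 Mbit/s figure is only an example) looks like:

/* Sketch: ask the kernel to pace this socket at ~100 Mbit/s via SO_MAX_PACING_RATE.
 * Linux-specific; the rate is expressed in bytes per second, and the value is illustrative. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    unsigned int rate = 100 * 1000 * 1000 / 8;   /* 100 Mbit/s expressed as bytes/sec */
    if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE, &rate, sizeof(rate)) < 0)
        perror("setsockopt(SO_MAX_PACING_RATE)");

    close(fd);
    return 0;
}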

Bob

PS. Iperf 2 man = page https://iperf2.sourceforge.io/iperf-manpage.html

The "VGV User" (Voice, Gaming, Videoconferencing) cares a = lot about
latency.   It's not just "rewarding" to have = lower latencies; high
latencies may make VGV unusable. =   Average (or "typical") latency as
the FCC label proposes = isn't a good metric to judge usability.  A path
which has high = variance in latency can be unusable even if the average
is quite low. =   Having your voice or video or gameplay "break up"
every = minute or so when latency spikes to 500 msec makes the = "user
experience" intolerable.
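A small illustration of why the average hides the problem (sample values are made up): the mean of a latency series can look acceptable while the occasional spike, which is what the user actually feels, is an order of magnitude worse:

/* Sketch: mean RTT vs. worst-case RTT for a made-up sample set. */
#include <stdio.h>

int main(void)
{
    /* nine "good" samples plus one 500 ms spike (milliseconds, illustrative) */
    double rtt_ms[] = { 20, 22, 19, 21, 20, 23, 18, 22, 21, 500 };
    int n = sizeof(rtt_ms) / sizeof(rtt_ms[0]);

    double sum = 0, worst = 0;
    for (int i = 0; i < n; i++) {
        sum += rtt_ms[i];
        if (rtt_ms[i] > worst)
            worst = rtt_ms[i];
    }
    printf("mean %.1f ms, worst %.0f ms\n", sum / n, worst);   /* ~68.6 ms vs. 500 ms */
    return 0;
}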

A few years ago, I ran some = simple "ping" tests to help a friend who
was trying to use a gaming = app.  My data was only for one specific
path so it's anecdotal. =  What I saw was surprising - zero data loss,
every datagram was = delivered, but occasionally a datagram would take
up to 30 seconds to = arrive.  I didn't have the ability to poke around
inside, but I = suspected it was an experience of "bufferbloat", enabled
by the = dramatic drop in price of memory over the decades.

It's been a = long time since I was involved in operating any part of
the Internet, = so I don't know much about the inner workings today.
Apologies for my = ignorance....

There was a scenario in the early days of the = Internet for which we
struggled to find a technical solution. =  Imagine some node in the
bowels of the network, with 3 = connected "circuits" to some other
nodes.  On two of those = inputs, traffic is arriving to be forwarded
out the third circuit. =  The incoming flows are significantly more than
the outgoing = path can accept.

What happens?   How is "backpressure" = generated so that the incoming
flows are reduced to the point that = the outgoing circuit can handle
the traffic?

About 45 years = ago, while we were defining TCPV4, we struggled with
this issue, but = didn't find any consensus solutions.  So = "placeholder"
mechanisms were defined in TCPV4, to be replaced as = research continued
and found a good solution.

In that = "placeholder" scheme, the "Source Quench" (SQ) IP message = was
defined; it was to be sent by a switching node back toward the = sender
of any datagram that had to be discarded because there wasn't = any
place to put it.
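For the curious, Source Quench (ICMP type 4, code 0, per RFC 792, and long since deprecated) was about as simple as a backpressure signal can get; a sketch of its layout:

/* Sketch of the (now-deprecated) ICMP Source Quench message layout, per RFC 792. */
#include <stdint.h>

struct icmp_source_quench {
    uint8_t  type;        /* 4 = source quench */
    uint8_t  code;        /* 0 */
    uint16_t checksum;    /* ICMP checksum */
    uint32_t unused;      /* must be zero */
    /* followed by the IP header + first 8 data bytes of the discarded datagram */
};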

In addition, the TOS (Type Of Service) = and TTL (Time To Live) fields
were defined in IP.

TOS would = allow the sender to distinguish datagrams based on their
needs. =  For example, we thought "Interactive" service might be = needed
for VGV traffic, where timeliness of delivery was most = important. 
"Bulk" = service might be useful for activities like file transfers,
backups, = et al.   "Normal" service might now mean activities = like
using the Web.

The TTL field was an attempt to inform = each switching node about the
"expiration date" for a datagram. =   If a node somehow knew that a
particular datagram was = unlikely to reach its destination in time to
be useful (such as a = video datagram for a frame that has already been
displayed), the node = could, and should, discard that datagram to free
up resources for = useful traffic.  Sadly we had no mechanisms for
measuring delay, = either in transit or in queuing, so TTL was defined
in terms of = "hops", which is not an accurate proxy for time.   But
it's = all we had.

Part of the complexity was that the "flow control" = mechanism of the
Internet had put much of the mechanism in the users' = computers' TCP
implementations, rather than the switches which handle = only IP.
Without mechanisms in the users' computers, all a switch = could do is
order more circuits, and add more memory to the switches = for queuing. 
Perhaps= that led to "bufferbloat".

So TOS, SQ, and TTL were all = placeholders, for some mechanism in a
future release that would = introduce a "real" form of Backpressure and
the ability to handle = different types of traffic.   Meanwhile, these
rudimentary = mechanisms would provide some flow control. Hopefully the
users' = computers sending the flows would respond to the SQ
backpressure, and = switches would prioritize traffic using the TTL and
TOS = information.

But, being way out of touch, I don't know what = actually happens
today.  Perhaps the current operators and = current government watchers
can answer?:git clone = https://rjmcmahon@git.code.sf.net/p/iperf2/code 
iperf2-code

1/ = How do current switches exert Backpressure to  reduce = competing
traffic flows?  Do they still send SQs?

2/ How = do the current and proposed government regulations treat = the
different needs of different types of traffic, e.g., "Bulk" = versus
"Interactive" versus "Normal"?  Are Internet carriers = permitted to
treat traffic types differently?  Are they = permitted to charge
different amounts for different types of = service?

Jack Haverty

On 10/15/23 09:45, Dave Taht via = Nnagain wrote:
For starters I would like to = apologize for cc-ing both nanog and my
new nn list. (I will add = sender filters)

A bit more below.

On Sun, Oct 15, 2023 at = 9:32=E2=80=AFAM Tom Beecher <beecher@beecher.cc> 
wrote:
So for now, we'll keep paying = for transit to get to the others 
(since it=E2=80=99s = about as much as transporting IXP from Dallas), and 
hoping someone at = Google finally sees Houston as more than a third 
rate city hanging off = of Dallas. Or=E2=80=A6 someone finally brings a 
worthwhile IX to = Houston that gets us more than peering to Kansas 
City. Yeah, I think the = former is more likely. =F0=9F=98=8A

There is often a = chicken/egg scenario here with the economics. As an 
eyeball network, your = costs to build out and connect to Dallas are 
greater than your = transit cost, so you do that. Totally fair.

However think about = it from the content side. Say I want to build 
into to Houston. I have = to put routers in, and a bunch of cache 
servers, so I have = capital outlay , plus opex for space, power, 
IX/backhaul/transit = costs. That's not cheap, so there's a lot of 
calculations that go = into it. Is there enough total eyeball traffic 
there to make it worth = it? Is saving 8-10ms enough of a performance 
boost to justify the = spend? What are the long term trends in that 
market? These answers = are of course different for a company running 
their own CDN vs the = commercial CDNs.

I don't work for Google and obviously don't = speak for them, but I 
would suspect that = they're happy to eat a 8-10ms performance hit to 
serve from Dallas , = versus the amount of capital outlay to build out 
there right = now.
The three forms of traffic I care most about are = voip, gaming, and
videoconferencing, which are rewarding to have at = lower latencies.
When I was a kid, we had switched phone networks, = and while the sound
quality was poorer than today, the voice latency = cross-town was just
like "being there". Nowadays we see 500+ms = latencies for this kind of
traffic.

As to how to make calls = across town work that well again, cost-wise, I
do not know, but the = volume of traffic that would be better served by
these interconnects = quite low, respective to the overall gains in
lower latency = experiences for them.



On Sat, = Oct 14, 2023 at 11:47=E2=80=AFPM Tim Burke <tim@mid.net> = wrote:
I would say that a 1Gbit IP transit = in a carrier neutral DC can be 
had for a good bit less = than $900 on the wholesale market.

Sadly, IXP=E2=80=99s are = seemingly turning into a pay to play game, with 
rates almost costing as = much as transit in many cases after you 
factor in loop = costs.

For example, in the Houston market (one of the largest and = fastest 
growing = regions in the US!), we do not have a major IX, so to get up 
to Dallas it=E2=80=99s = several thousand for a 100g wave, plus several 
thousand for a 100g = port on one of those major IXes. Or, a better 
option, we can get a = 100g flat internet transit for just a little 
bit = more.

Fortunately, for us as an eyeball network, there are a good = number 
of major = content networks that are allowing for private peering in 
markets like Houston = for just the cost of a cross connect and a QSFP 
if you=E2=80=99re in = the right DC, with Google and some others being the 
outliers.

So for = now, we'll keep paying for transit to get to the others 
(since it=E2=80=99s = about as much as transporting IXP from Dallas), and 
hoping someone at = Google finally sees Houston as more than a third 
rate city hanging off = of Dallas. Or=E2=80=A6 someone finally brings a 
worthwhile IX to = Houston that gets us more than peering to Kansas 
City. Yeah, I think the = former is more likely. =F0=9F=98=8A

See y=E2=80=99all in San = Diego this week,
Tim

On Oct 14, 2023, at 18:04, Dave Taht = <dave.taht@gmail.com> wrote:
=EF=BB=BFT= his set of trendlines was very interesting. Unfortunately the 
data
stops in 2015. = Does anyone have more recent = data?

https://drpeering.net/white-papers/Internet-Transit-Pricing-H= istorical-And-Projected.php

I believe a gbit circuit that an ISP = can resell still runs at about
$900 - $1.4k (?) in the usa? How about = elsewhere?

...

I am under the impression that many IXPs = remain very successful,
states without them suffer, and I also find = the concept of doing 
micro
IXPs at the = city level, appealing, and now achievable with cheap 
gear.
Finer grained = cross connects between telco and ISP and IXP would 
lower
latencies = across town quite hugely...

PS I hear ARIN is planning on = dropping the price for, and bundling 
3
BGP AS numbers at = a time, as of the end of this year, also.



--
Oct = 30: 
https://netdevconf.info/0= x17/news/the-maestro-and-the-music-bof.html
Dave T=C3=A4ht CSO, = LibreQos


_______________________________________________
Nnagain mailing = list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/lis= tinfo/nnagain
____________________________________________= ___
Nnagain mailing = list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/lis= tinfo/nnagain

_______________________________________________
Nn= again mailing = list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/lis= tinfo/nnagain


_______________________________________________
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain

= --Apple-Mail=_83D5DDA0-9CF8-4510-A0F0-2FB4C3609AAF--