From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Dick Roy"
To: "'Sebastian Moeller'", "'Network Neutrality is back! Let´s make the
 technical aspects heard this time!'"
Date: Mon, 16 Oct 2023 11:04:17 -0700
References: <4c44a9ef4c4b14a06403e553e633717d@rjmcmahon.com>
Message-ID: <150CFF4F99854A2F9DCC034476343836@SRA6>
Subject: Re: [NNagain] transit and peering costs projections

Good points all, Sebastian. How to "trade off" a fixed capacity amongst many
users is ultimately a game-theoretic problem when users are allowed to make
choices, which is certainly the case here. Secondly, any network that can and
does generate "more traffic" (aka overhead such as ACKs, NACKs, and retries)
reduces the capacity of the network, and ultimately can lead to the "user"
capacity going to zero! Such is life in the fast lane (aka the internet).

Lastly, on the issue of low-latency real-time experience, there are many
applications that need/want such capabilities that actually have a net
benefit to the individuals involved AND to society as a whole. IMO,
interactive gaming is NOT one of those. OK, so now you know I don't engage in
these time sinks with no redeeming social value. :) Since it is not hard to
argue that, just like power distribution, information exchange/dissemination
is "in the public interest", the question becomes "Do we allow any and all
forms of information exchange/dissemination over what is becoming something
akin to a public utility?" FWIW, I don't know the answer to this question! :)

Cheers,

RR

-----Original Message-----
From: Sebastian Moeller [mailto:moeller0@gmx.de]
Sent: Monday, October 16, 2023 10:36 AM
To: dickroy@alum.mit.edu; Network Neutrality is back! Let´s make the
technical aspects heard this time!
Subject: Re: [NNagain] transit and peering costs projections

Hi Richard,

> On Oct 16, 2023, at 19:01, Dick Roy via Nnagain wrote:
>
> Just an observation: ANY type of congestion control that changes
> application behavior in response to congestion, or predicted congestion
> (ECN), begs the question "How does throttling of the application
> information exchange rate (aka behavior) affect the user experience, and
> will the user tolerate it?"

[SM] The trade-off here is that if the application does not respond (or
rather if no application would respond), we would end up with congestion
collapse, where no application would gain much of anything as the network
busies itself trying to re-transmit dropped packets without making much
headway... A simplistic application of game theory might imply that
individual applications could try to game this, and generally that seems to
be true, but we have remedies for that available...

>
> Given any (complex and packet-switched) network topology of
> interconnected nodes and links, each with possibly different capacity and
> characteristics, such as the internet today, IMO the two fundamental
> questions are:
>
> 1) How can a given network be operated/configured so as to maximize
> aggregate throughput (i.e. achieve its theoretical capacity), and
> 2) What things in the network need to change to increase the throughput
> (aka the parameters in the network with the largest Lagrange multipliers
> associated with them)?

[SM] The thing is, we generally know how to maximize (average) throughput:
just add (over-)generous amounts of buffering. The problem is that this
screws up the other important quality axis, latency... We ideally want low
latency and, even more, low latency variance (aka jitter) AND high
throughput... It turns out though that above a certain throughput threshold*
many users do not seem to care all that much for more throughput as long as
interactive use cases are sufficiently responsive... but high responsiveness
requires low latency and low jitter...
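
[SM] To put rough numbers on the deep-buffer cost (illustrative values only,
not measurements):

    added queue delay = standing buffer occupancy / bottleneck rate
                      = (1 MB * 8 bits/byte) / 20 Mbit/s
                      = 8,000,000 bits / 20,000,000 bits/s
                      = 0.4 s

so a flow that keeps a 1 MB buffer standing full in front of a 20 Mbit/s
bottleneck adds roughly 400 ms of delay to every packet queued behind it.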

That users stop caring about extra throughput above such a threshold is
actually a good thing, as it means we do not necessarily aim for 100%
utilization (which almost requires deep buffers and hence results in
compromised latency) but can get away with, say, 80-90%, where shallow
buffers will do (or rather where buffer filling stays shallow; there is IMHO
still value in having deep buffers for the rare events that need them).

*) This is not a hard physical law, so the exact threshold is not set in
stone, but unless one has many parallel users, something in the 20-50 Mbps
range is plenty, and that is only needed in the "loaded" direction; that is,
for pure consumers the upload can be thinner, and for pure producers the
download can be thinner.

> I am not an expert in this field,

[SM] Nor am I; I come from the wet-ware side of things, so not even soft-
or hard-ware ;)

> however it seems to me that answers to these questions would be useful,
> assuming they are not yet available!
>
> Cheers,
>
> RR
>
> -----Original Message-----
> From: Nnagain [mailto:nnagain-bounces@lists.bufferbloat.net] On Behalf Of
> rjmcmahon via Nnagain
> Sent: Sunday, October 15, 2023 1:39 PM
> To: Network Neutrality is back! Let´s make the technical aspects heard
> this time!
> Cc: rjmcmahon
> Subject: Re: [NNagain] transit and peering costs projections
>
> Hi Jack,
>
> Thanks again for sharing. It's very interesting to me.
>
> Today, the networks are shifting from capacity constrained to latency
> constrained, as can be seen in the IX discussions about how the speed of
> light over fiber is too slow even between Houston & Dallas.
>
> The mitigations against standing queues (which cause bloat today) are:
>
> o) Shrink the e2e bottleneck queue so it will drop packets in a flow and
> TCP will respond to that "signal"
> o) Use some form of ECN marking where the network forwarding plane
> ultimately informs the TCP source state machine so it can slow down or
> pace effectively. This can be an earlier feedback signal and, if done
> well, can inform the sources to avoid bottleneck queuing. There are a
> couple of approaches with ECN. Comcast is trialing L4S now, which seems
> interesting to me as a WiFi test & measurement engineer. The jury is
> still out on this and measurements are needed.
> o) Mitigate source side bloat via TCP_NOTSENT_LOWAT (the socket options
> involved are sketched below)
>
> The QoS priority approach to congestion is orthogonal by my judgment, as
> it's typically not supported e2e; many networks will bleach DSCP
> markings. And it's really too late by my judgment.
>
> Also, on clock sync, yes, your generation did us both a service and a
> disservice by getting rid of the PSTN TDM clock ;) So IP networking
> devices kinda ignored clock sync, which makes e2e one-way delay (OWD)
> measurements impossible. Thankfully, the GPS atomic clock is now
> available mostly everywhere, and many devices use TCXO oscillators, so
> it's possible to get clock sync and use oscillators that minimize drift.
> I pay $14 for an RPi4 GPS chip with pulse-per-second output as an
> example.
>
> It seems silly to me that clocks aren't synced to the GPS atomic clock,
> even if by a proxy and even if only for measurement and monitoring.
>
> Note: As Richard Roy will point out, there really is no such thing as
> synchronized clocks across geographies per general relativity - so those
> syncing clocks need to keep those effects in mind. I limited the iperf 2
> timestamps to microsecond precision in hopes of avoiding those issues.
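>
> A minimal sketch of that source-side mitigation (illustrative only, not
> iperf 2's actual code; the values are arbitrary examples, and
> SO_MAX_PACING_RATE is the socket-level pacing option that the --fq-rate
> flag described below relies on):
>
>     #include <sys/socket.h>
>     #include <netinet/in.h>
>     #include <netinet/tcp.h>
>
>     /* Cap the not-yet-sent backlog held in the kernel send buffer
>      * (TCP_NOTSENT_LOWAT, Linux/macOS) so queued data stays in the
>      * application, where it can still be re-scheduled, instead of
>      * bloating the socket buffer. */
>     int limit_source_bloat(int fd)
>     {
>         int lowat = 128 * 1024; /* bytes; example value */
>         return setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
>                           &lowat, sizeof(lowat));
>     }
>
>     /* Ask the kernel's fair-queuing pacer to hold this socket at or
>      * below a rate in bytes per second (Linux SO_MAX_PACING_RATE;
>      * needs the fq qdisc or kernel-internal pacing to take effect). */
>     int set_pacing_rate(int fd, unsigned int bytes_per_sec)
>     {
>         return setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
>                           &bytes_per_sec, sizeof(bytes_per_sec));
>     }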
>
> Note: With WiFi, a packet drop can occur because of an intermittent RF
> channel condition. TCP can't tell the difference between an RF drop and
> a congested-queue drop. That's another reason ECN markings from network
> devices may be better than dropped packets.
>
> Note: I've added some iperf 2 test support around pacing, as that seems
> to be the direction the industry is heading: networks are less and less
> capacity strained, and user quality of experience is being driven by
> tail latencies. One can also test with the Prague CCA for the L4S
> scenarios. (This is a fun project: https://www.l4sgear.com/ and fairly
> low cost)
>
> --fq-rate n[kmgKMG]
> Set a rate to be used with fair-queuing based socket-level pacing, in
> bytes or bits per second. Only available on platforms supporting the
> SO_MAX_PACING_RATE socket option. (Note: Here the suffixes indicate
> bytes/sec or bits/sec per use of uppercase or lowercase, respectively)
>
> --fq-rate-step n[kmgKMG]
> Set a step of rate to be used with fair-queuing based socket-level
> pacing, in bytes or bits per second. The step occurs every
> fq-rate-step-interval (defaults to one second)
>
> --fq-rate-step-interval n
> Time in seconds before stepping the fq-rate
>
> Bob
>
> PS. Iperf 2 man page: https://iperf2.sourceforge.io/iperf-manpage.html
>
>> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
>> latency. It's not just "rewarding" to have lower latencies; high
>> latencies may make VGV unusable. Average (or "typical") latency as the
>> FCC label proposes isn't a good metric to judge usability. A path which
>> has high variance in latency can be unusable even if the average is
>> quite low. Having your voice or video or gameplay "break up" every
>> minute or so when latency spikes to 500 msec makes the "user
>> experience" intolerable.
>>
>> A few years ago, I ran some simple "ping" tests to help a friend who
>> was trying to use a gaming app. My data was only for one specific path,
>> so it's anecdotal. What I saw was surprising - zero data loss, every
>> datagram was delivered, but occasionally a datagram would take up to 30
>> seconds to arrive. I didn't have the ability to poke around inside, but
>> I suspected it was an experience of "bufferbloat", enabled by the
>> dramatic drop in the price of memory over the decades.
>>
>> It's been a long time since I was involved in operating any part of the
>> Internet, so I don't know much about the inner workings today.
>> Apologies for my ignorance....
>>
>> There was a scenario in the early days of the Internet for which we
>> struggled to find a technical solution. Imagine some node in the bowels
>> of the network, with 3 connected "circuits" to some other nodes. On two
>> of those inputs, traffic is arriving to be forwarded out the third
>> circuit. The incoming flows are significantly more than the outgoing
>> path can accept.
>>
>> What happens? How is "backpressure" generated so that the incoming
>> flows are reduced to the point that the outgoing circuit can handle the
>> traffic?
>>
>> About 45 years ago, while we were defining TCPV4, we struggled with
>> this issue, but didn't find any consensus solutions. So "placeholder"
>> mechanisms were defined in TCPV4, to be replaced as research continued
>> and found a good solution.
>>
>> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
>> defined; it was to be sent by a switching node back toward the sender
>> of any datagram that had to be discarded because there wasn't any place
>> to put it.
>>
>> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
>> were defined in IP.
>>
>> TOS would allow the sender to distinguish datagrams based on their
>> needs. For example, we thought "Interactive" service might be needed
>> for VGV traffic, where timeliness of delivery was most important.
>> "Bulk" service might be useful for activities like file transfers,
>> backups, et al. "Normal" service might now mean activities like using
>> the Web.
>>
>> The TTL field was an attempt to inform each switching node about the
>> "expiration date" for a datagram. If a node somehow knew that a
>> particular datagram was unlikely to reach its destination in time to be
>> useful (such as a video datagram for a frame that has already been
>> displayed), the node could, and should, discard that datagram to free
>> up resources for useful traffic. Sadly we had no mechanisms for
>> measuring delay, either in transit or in queuing, so TTL was defined in
>> terms of "hops", which is not an accurate proxy for time. But it's all
>> we had.
>>
>> Part of the complexity was that the "flow control" mechanism of the
>> Internet had put much of the mechanism in the users' computers' TCP
>> implementations, rather than in the switches, which handle only IP.
>> Without mechanisms in the users' computers, all a switch could do is
>> order more circuits, and add more memory to the switches for queuing.
>> Perhaps that led to "bufferbloat".
>>
>> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
>> future release that would introduce a "real" form of backpressure and
>> the ability to handle different types of traffic. Meanwhile, these
>> rudimentary mechanisms would provide some flow control. Hopefully the
>> users' computers sending the flows would respond to the SQ
>> backpressure, and switches would prioritize traffic using the TTL and
>> TOS information.
>>
>> But, being way out of touch, I don't know what actually happens today.
>> Perhaps the current operators and current government watchers can
>> answer?
>>
>> 1/ How do current switches exert backpressure to reduce competing
>> traffic flows? Do they still send SQs?
>>
>> 2/ How do the current and proposed government regulations treat the
>> different needs of different types of traffic, e.g., "Bulk" versus
>> "Interactive" versus "Normal"? Are Internet carriers permitted to treat
>> traffic types differently? Are they permitted to charge different
>> amounts for different types of service?
>>
>> Jack Haverty
>>
>> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>>> For starters I would like to apologize for cc-ing both nanog and my
>>> new nn list. (I will add sender filters)
>>>
>>> A bit more below.
>>>
>>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher wrote:
>>>>> So for now, we'll keep paying for transit to get to the others
>>>>> (since it's about as much as transporting IXP from Dallas), and
>>>>> hoping someone at Google finally sees Houston as more than a
>>>>> third-rate city hanging off of Dallas. Or… someone finally brings a
>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>> City. Yeah, I think the former is more likely. 😊
>>>>
>>>> There is often a chicken/egg scenario here with the economics. As an
>>>> eyeball network, your costs to build out and connect to Dallas are
>>>> greater than your transit cost, so you do that. Totally fair.
>>>>
>>>> However, think about it from the content side. Say I want to build
>>>> into Houston. I have to put routers in, and a bunch of cache servers,
>>>> so I have capital outlay, plus opex for space, power,
>>>> IX/backhaul/transit costs. That's not cheap, so there's a lot of
>>>> calculation that goes into it. Is there enough total eyeball traffic
>>>> there to make it worth it? Is saving 8-10ms enough of a performance
>>>> boost to justify the spend? What are the long-term trends in that
>>>> market? These answers are of course different for a company running
>>>> their own CDN vs the commercial CDNs.
>>>>
>>>> I don't work for Google and obviously don't speak for them, but I
>>>> would suspect that they're happy to eat an 8-10ms performance hit to
>>>> serve from Dallas, versus the amount of capital outlay to build out
>>>> there right now.
>>>
>>> The three forms of traffic I care most about are voip, gaming, and
>>> videoconferencing, which are rewarding to have at lower latencies.
>>> When I was a kid, we had switched phone networks, and while the sound
>>> quality was poorer than today, the voice latency cross-town was just
>>> like "being there". Nowadays we see 500+ms latencies for this kind of
>>> traffic.
>>>
>>> As to how to make calls across town work that well again, cost-wise, I
>>> do not know, but the volume of traffic that would be better served by
>>> these interconnects is quite low, relative to the overall gains in
>>> lower-latency experiences for them.
>>>
>>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke wrote:
>>>>> I would say that 1Gbit of IP transit in a carrier-neutral DC can be
>>>>> had for a good bit less than $900 on the wholesale market.
>>>>>
>>>>> Sadly, IXPs are seemingly turning into a pay-to-play game, with
>>>>> rates almost costing as much as transit in many cases after you
>>>>> factor in loop costs.
>>>>>
>>>>> For example, in the Houston market (one of the largest and fastest
>>>>> growing regions in the US!), we do not have a major IX, so to get up
>>>>> to Dallas it's several thousand for a 100g wave, plus several
>>>>> thousand for a 100g port on one of those major IXes. Or, a better
>>>>> option, we can get 100g of flat internet transit for just a little
>>>>> bit more.
>>>>>
>>>>> Fortunately, for us as an eyeball network, there are a good number
>>>>> of major content networks that are allowing for private peering in
>>>>> markets like Houston for just the cost of a cross connect and a QSFP
>>>>> if you're in the right DC, with Google and some others being the
>>>>> outliers.
>>>>>
>>>>> So for now, we'll keep paying for transit to get to the others
>>>>> (since it's about as much as transporting IXP from Dallas), and
>>>>> hoping someone at Google finally sees Houston as more than a
>>>>> third-rate city hanging off of Dallas. Or… someone finally brings a
>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>> City. 😊
>>>>>
>>>>> See y'all in San Diego this week,
>>>>> Tim
>>>>>
>>>>> On Oct 14, 2023, at 18:04, Dave Taht wrote:
>>>>>> This set of trendlines was very interesting. Unfortunately the data
>>>>>> stops in 2015. Does anyone have more recent data?
>>>>>>
>>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>>
>>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>>
>>>>>> ...
>>>>>>
>>>>>> I am under the impression that many IXPs remain very successful,
>>>>>> states without them suffer, and I also find the concept of doing
>>>>>> micro IXPs at the city level appealing, and now achievable with
>>>>>> cheap gear. Finer-grained cross connects between telco and ISP and
>>>>>> IXP would lower latencies across town quite hugely...
>>>>>>
>>>>>> PS I hear ARIN is planning on dropping the price for, and bundling,
>>>>>> 3 BGP AS numbers at a time, as of the end of this year, also.
>>>>>>
>>>>>> --
>>>>>> Oct 30:
>>>>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>> Dave Täht CSO, LibreQos

_______________________________________________
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain