From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sebastian Moeller
Subject: Re: [NNagain] transit and peering costs projections
Date: Sun, 15 Oct 2023 22:45:52 +0200
To: Network Neutrality is back! Let´s make the technical aspects heard this time!

Hi Jack,

> On Oct 15, 2023, at 21:59, Jack Haverty via Nnagain wrote:
>
> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about latency. It's not just "rewarding" to have lower latencies; high latencies may make VGV unusable. Average (or "typical") latency as the FCC label proposes isn't a good metric to judge usability. A path which has high variance in latency can be unusable even if the average is quite low. Having your voice or video or gameplay "break up" every minute or so when latency spikes to 500 msec makes the "user experience" intolerable.
>
> A few years ago, I ran some simple "ping" tests to help a friend who was trying to use a gaming app. My data was only for one specific path, so it's anecdotal. What I saw was surprising - zero data loss, every datagram was delivered, but occasionally a datagram would take up to 30 seconds to arrive. I didn't have the ability to poke around inside, but I suspected it was an experience of "bufferbloat", enabled by the dramatic drop in the price of memory over the decades.
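A quick way to see why a single "average latency" figure hides exactly these stalls: a minimal sketch with made-up numbers (not the measurements from the test above), comparing the mean of a set of RTT samples against its tail when 2% of the probes hit a bloated queue.

    # 980 "normal" probes around 20 ms, plus 20 probes that sat in a
    # bloated queue for 0.5-2 seconds (made-up illustration data).
    import random
    import statistics

    random.seed(1)
    rtts_ms = [random.uniform(15.0, 25.0) for _ in range(980)]
    rtts_ms += [random.uniform(500.0, 2000.0) for _ in range(20)]

    rtts_ms.sort()
    mean = statistics.mean(rtts_ms)
    p99 = rtts_ms[int(len(rtts_ms) * 0.99) - 1]   # 99th-percentile sample

    print(f"mean {mean:7.1f} ms")                  # looks fine on a label
    print(f"p99  {p99:7.1f} ms")                   # what the VGV user feels
    print(f"max  {max(rtts_ms):7.1f} ms")

With these numbers the mean comes out around 45 ms, which would look perfectly acceptable on a label, while the 99th percentile and maximum show the once-in-a-while multi-second stalls that actually break a call or a game.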
Average (or "typical") latency as the = FCC label proposes isn't a good metric to judge usability. A path which = has high variance in latency can be unusable even if the average is = quite low. Having your voice or video or gameplay "break up" every = minute or so when latency spikes to 500 msec makes the "user experience" = intolerable. >=20 > A few years ago, I ran some simple "ping" tests to help a friend who = was trying to use a gaming app. My data was only for one specific path = so it's anecdotal. What I saw was surprising - zero data loss, every = datagram was delivered, but occasionally a datagram would take up to 30 = seconds to arrive. I didn't have the ability to poke around inside, but = I suspected it was an experience of "bufferbloat", enabled by the = dramatic drop in price of memory over the decades. >=20 > It's been a long time since I was involved in operating any part of = the Internet, so I don't know much about the inner workings today. = Apologies for my ignorance.... >=20 > There was a scenario in the early days of the Internet for which we = struggled to find a technical solution. Imagine some node in the bowels = of the network, with 3 connected "circuits" to some other nodes. On two = of those inputs, traffic is arriving to be forwarded out the third = circuit. The incoming flows are significantly more than the outgoing = path can accept. >=20 > What happens? How is "backpressure" generated so that the incoming = flows are reduced to the point that the outgoing circuit can handle the = traffic? >=20 > About 45 years ago, while we were defining TCPV4, we struggled with = this issue, but didn't find any consensus solutions. So "placeholder" = mechanisms were defined in TCPV4, to be replaced as research continued = and found a good solution. >=20 > In that "placeholder" scheme, the "Source Quench" (SQ) IP message was = defined; it was to be sent by a switching node back toward the sender of = any datagram that had to be discarded because there wasn't any place to = put it. >=20 > In addition, the TOS (Type Of Service) and TTL (Time To Live) fields = were defined in IP. >=20 > TOS would allow the sender to distinguish datagrams based on their = needs. For example, we thought "Interactive" service might be needed = for VGV traffic, where timeliness of delivery was most important. = "Bulk" service might be useful for activities like file transfers, = backups, et al. "Normal" service might now mean activities like using = the Web. >=20 > The TTL field was an attempt to inform each switching node about the = "expiration date" for a datagram. If a node somehow knew that a = particular datagram was unlikely to reach its destination in time to be = useful (such as a video datagram for a frame that has already been = displayed), the node could, and should, discard that datagram to free up = resources for useful traffic. Sadly we had no mechanisms for measuring = delay, either in transit or in queuing, so TTL was defined in terms of = "hops", which is not an accurate proxy for time. But it's all we had. >=20 > Part of the complexity was that the "flow control" mechanism of the = Internet had put much of the mechanism in the users' computers' TCP = implementations, rather than the switches which handle only IP. Without = mechanisms in the users' computers, all a switch could do is order more = circuits, and add more memory to the switches for queuing. Perhaps that = led to "bufferbloat". 
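That TOS idea survives today as the DSCP field (the same IPv4 header octet, redefined), and an application can still ask for a per-socket class. A minimal sketch, assuming Linux and a hypothetical destination; whether any network along the path honours the marking is a separate question, since many carriers ignore or re-mark DSCP.

    # Mark a UDP socket as "Expedited Forwarding" (DSCP 46), the rough modern
    # equivalent of the "Interactive" class above. DSCP occupies the upper six
    # bits of the old TOS byte, hence the shift by two.
    import socket

    EF_DSCP = 46                      # standard code point for voice-like traffic
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
    s.sendto(b"voice frame", ("192.0.2.1", 5004))   # RFC 5737 documentation address, hypothetical peer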
>
> The TTL field was an attempt to inform each switching node about the "expiration date" for a datagram. If a node somehow knew that a particular datagram was unlikely to reach its destination in time to be useful (such as a video datagram for a frame that has already been displayed), the node could, and should, discard that datagram to free up resources for useful traffic. Sadly we had no mechanisms for measuring delay, either in transit or in queuing, so TTL was defined in terms of "hops", which is not an accurate proxy for time. But it's all we had.
>
> Part of the complexity was that the "flow control" mechanism of the Internet had put much of the mechanism in the users' computers' TCP implementations, rather than the switches which handle only IP. Without mechanisms in the users' computers, all a switch could do is order more circuits, and add more memory to the switches for queuing. Perhaps that led to "bufferbloat".
>
> So TOS, SQ, and TTL were all placeholders, for some mechanism in a future release that would introduce a "real" form of Backpressure and the ability to handle different types of traffic. Meanwhile, these rudimentary mechanisms would provide some flow control. Hopefully the users' computers sending the flows would respond to the SQ backpressure, and switches would prioritize traffic using the TTL and TOS information.
>
> But, being way out of touch, I don't know what actually happens today. Perhaps the current operators and current government watchers can answer?:
>
> 1/ How do current switches exert Backpressure to reduce competing traffic flows? Do they still send SQs?

	[SM] As far as I can tell, SQ is considered a "failed" experiment, at least over the open internet, because anybody can manufacture such quench messages, so they make an excellent DoS vector. In controlled environments, however, the idea keeps coming back, since it has the potential for faster signaling than piggy-backing a signal onto the forward packets and expecting the receiver to reflect it back to the sender. Over the internet, instead, receivers detect either packet drops or explicit congestion signals (ECN or alternatives) and reflect these back to the senders, which are then expected to respond appropriately*. The congested nodes themselves can really only drop and/or use some sort of clever scheduling to avoid spreading the overload across all connections; if push comes to shove, dropping is the only option. In your example, with two ingress interfaces converging on a single egress interface of half the capacity, as long as the aggregate ingress rate exceeds the egress rate the queues will grow, and once they are full all the node can do is drop ingressing packets...

*) With lots of effort put into responding as gently as possible. I am not sure that, from the perspective of internet stability, we would not fare better with a strict "on congestion detection, at least halve the sending rate" mandate and a way to enforce it... but then I am not a CS or network expert, so what do I know.

> 2/ How do the current and proposed government regulations treat the different needs of different types of traffic, e.g., "Bulk" versus "Interactive" versus "Normal"? Are Internet carriers permitted to treat traffic types differently? Are they permitted to charge different amounts for different types of service?

	[SM] I can only talk about the little I know about EU regulations: conceptually, an ISP is to treat all traffic to/from its end-customers equally, but the ISP is free to use any kind of service level agreement (SLA) with its upstreams*. ISPs can also offer special services with other properties, but not as a premium internet access service. However, if an ISP offers some QoS treatment configurable by its end-users, that would IMHO be fair game... The main goal here is to keep ISPs from picking winners and losers among content providers; end-users are free to do so if they wish. IMHO an ISP offering QoS services (opt-in and controlled by the end-user) might as well charge extra for that service. ISPs are also permitted to charge differently based on access capacity (vulgo "speed") and extras (like fixed-line and/or mobile telephony volumes or flat rates).

*) As long as that does not blatantly affect the unbiased internet access of the ISP's end-users; this is a bit of a gray zone that current EU regulations carefully step around. I think ISPs do this e.g. for their own VoIP traffic, and regulators and end-users generally seem to agree that working telephony is somewhat important. Net neutrality regulations really only demand that such special treatment be available to all VoIP traffic and not just the ISP's own, but at least over here nobody seems to be fighting for this right now. Then again, people generally also seem to be happy with 3rd-party VoIP, whatever that means.
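To put the "queues grow until dropping is all that's left" point in concrete terms, here is a minimal sketch of the two-ingress/one-egress node from Jack's scenario; every rate, buffer size, and threshold is a made-up illustration, not any real router's behaviour.

    # Two ingress circuits converge on one egress circuit of lower capacity.
    # The node absorbs the excess in a finite queue for a while; past a
    # threshold it ECN-marks packets so senders can slow down, and once the
    # queue is full the only remaining "backpressure" is to drop.
    from collections import deque

    EGRESS_PER_TICK = 10        # packets the outgoing circuit drains per tick
    INGRESS_PER_TICK = (8, 8)   # two incoming circuits: 16 packets offered per tick
    QUEUE_LIMIT = 100           # a bigger buffer only delays the loss (bufferbloat)
    ECN_THRESHOLD = 50          # start marking well before the buffer is exhausted

    queue, dropped, marked = deque(), 0, 0
    for tick in range(100):
        for rate in INGRESS_PER_TICK:           # arrivals on both ingress circuits
            for _ in range(rate):
                if len(queue) >= QUEUE_LIMIT:
                    dropped += 1                # no place to put it: discard
                else:
                    if len(queue) >= ECN_THRESHOLD:
                        marked += 1             # CE-mark instead of dropping
                    queue.append("pkt")
        for _ in range(min(EGRESS_PER_TICK, len(queue))):
            queue.popleft()                     # departures on the egress circuit

    print(f"standing queue {len(queue)}, dropped {dropped}, ECN-marked {marked}")
    # Aggregate ingress (16/tick) exceeds egress (10/tick), so the queue grows by
    # about 6 packets per tick until it hits the limit; after that the node can
    # only drop what keeps arriving.

Real AQMs (CoDel, PIE, fq_codel and friends) decide when to mark or drop far more carefully, but the underlying constraint is the same: unless the senders slow down, a congested node can only queue, mark, or discard.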
Regards
	Sebastian

>
> Jack Haverty
>
> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>> For starters I would like to apologize for cc-ing both nanog and my
>> new nn list. (I will add sender filters)
>>
>> A bit more below.
>>
>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher wrote:
>>>> So for now, we'll keep paying for transit to get to the others (since it's about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>>>
>>> There is often a chicken/egg scenario here with the economics. As an eyeball network, your costs to build out and connect to Dallas are greater than your transit cost, so you do that. Totally fair.
>>>
>>> However, think about it from the content side. Say I want to build into Houston. I have to put routers in, and a bunch of cache servers, so I have capital outlay, plus opex for space, power, IX/backhaul/transit costs. That's not cheap, so there are a lot of calculations that go into it. Is there enough total eyeball traffic there to make it worth it? Is saving 8-10 ms enough of a performance boost to justify the spend? What are the long term trends in that market? These answers are of course different for a company running their own CDN vs the commercial CDNs.
>>>
>>> I don't work for Google and obviously don't speak for them, but I would suspect that they're happy to eat an 8-10 ms performance hit to serve from Dallas, versus the amount of capital outlay to build out there right now.
>> The three forms of traffic I care most about are voip, gaming, and
>> videoconferencing, which are rewarding to have at lower latencies.
>> When I was a kid, we had switched phone networks, and while the sound
>> quality was poorer than today, the voice latency cross-town was just
>> like "being there". Nowadays we see 500+ ms latencies for this kind of
>> traffic.
>>
>> As to how to make calls across town work that well again, cost-wise, I
>> do not know, but the volume of traffic that would be better served by
>> these interconnects is quite low, relative to the overall gains in
>> lower latency experiences for them.
>>
>>
>>
>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke wrote:
>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
>>>>
>>>> Sadly, IXPs are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
>>>>
>>>> For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it's several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
>>>>
>>>> Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you're in the right DC, with Google and some others being the outliers.
>>>>
>>>> So for now, we'll keep paying for transit to get to the others (since it's about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>>>>
>>>> See y'all in San Diego this week,
>>>> Tim
>>>>
>>>> On Oct 14, 2023, at 18:04, Dave Taht wrote:
>>>>> This set of trendlines was very interesting. Unfortunately the data
>>>>> stops in 2015. Does anyone have more recent data?
>>>>>
>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>
>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>
>>>>> ...
>>>>>
>>>>> I am under the impression that many IXPs remain very successful,
>>>>> states without them suffer, and I also find the concept of doing micro
>>>>> IXPs at the city level appealing, and now achievable with cheap gear.
>>>>> Finer grained cross connects between telco and ISP and IXP would lower
>>>>> latencies across town quite hugely...
>>>>>
>>>>> PS I hear ARIN is planning on dropping the price for, and bundling 3
>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>> Dave Täht CSO, LibreQos
>>
>>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain