Date: Thu, 22 Feb 2024 10:58:59 -0800
From: rjmcmahon
To: Network Neutrality is back! Let´s make the technical aspects heard this time!
Cc: Dave Taht, Brent Legg
Subject: Re: [NNagain] The Whys of the Wichita IXP Project

Boston University spent $305M on this and it doesn't have an IXP.
https://www.bu.edu/articles/2022/center-for-computing-and-data-sciences-photo-essay/

It's like building a magnificent train station w/o any tracks to/from the station.

Bob

> On Wed, Feb 21, 2024 at 8:02 PM Brent Legg via Nnagain wrote:
>>
>> First, let me offer a public THANK YOU to Dave Taht for reaching out to us about the specifics of our Wichita IXP project, and for inviting me to join this group. It’s been disheartening to see folks talk about us & the project on public forums like LinkedIn without first engaging us in conversation to learn the specifics of what we’re actually doing. I’d like to think that those who have been disparaging have only done so because they don’t understand what we’re trying to achieve.
>
> I was wildly enthusiastic to see what you were proposing appear in the press. It was a breath of potentially fresh air in an otherwise depressing post-RDOF, post-BEAD environment where it seemed like the only metrics were speedtests and passings.
>
> I try very hard to get people of wildly disparate backgrounds to converse, and to escape the bubbles they are in. I have tried to gather together on this old-fashioned email *discussion* list both technologists and policy-makers to clear the air in ways that cannot be encapsulated in 240 characters. These two groups (a lot of old internet experts here) have not been communicating very well of late, ironically, over the best communication medium ever invented.
>
> It is sad that email lists have been in such decline over the past 20+ years, overwhelmed by marketing and spam, even as an email address remains the only universal identifier we have for so many other transactions.
> The advantages of a discussion list, over all the faddy technologies, are: you retain a copy of what you said, everyone else does also, and the internet at least used to make it searchable into the far future. Remembering that I had a dispute or discussion with @randomperson and finding them again via the technology-of-the-day (G+ anyone? Slack? Disqus? Hacker News?) is really hard otherwise, and I do hope that email makes a comeback.
>
> But someone needs to start maintaining them better.
>
>> To begin, I think there is confusion in the terminology being used. When we say “IXP,” we mean the facility (building, venue) where interconnection & peering occurs. The “IX” is the Ethernet switch in the building. When someone says an IXP can be built for $8k, that’s apples-to-oranges with what we’re doing. Yes, a switch can be procured for $8k. But where does it go? What if there is no safe, secure, neutral place for it to go? Then such a place must be built. That’s what we’re building in Wichita.
>
> To not annoy us old farts, clarifying that you mean a carrier-neutral facility or datacenter with an IXP would go a long way. :)
>
> Too many in the past built gold-plated IXPs, ending up with an appalling cost model that attracts nobody. This total plan, at this cost, is a *very good* one, and my hope would be to see it commoditized and widely replicated to even more than the 120 locations you project - and that the IXP component will mirror the successful IXP models already existing in the USA.
>
> The costs of interconnecting networks have fallen dramatically, and can fall further.
>
>> Saying an IXP can be built for $8k is enormously confusing to many policymakers who do not understand the issue or how interconnection & peering actually work, yet have enormous power to set policy and spend money that will affect the future of the Internet for generations.
>
> Operational expense needs to be discussed. The underlying technologies used to "make it happen" need to be selected. It is amazing what a modern cheap 100GbE 32-port switch can do. IPv6 is mandatory nowadays, while a way to carry what little remains of the IPv4 space efficiently is still needed. It would help if there were a local mirror of one or more of the root DNS servers. Some really tough design choices regarding active Ethernet fiber vs. GPON need to be made. And so on. Who makes those decisions?
>
>> We began this whole initiative by asking a series of questions to help us arrive at our model for IXP (building) proliferation. I’ll use Wichita as the context for these questions, but these could just as easily apply to any other similar city that is home to a large public research university:
>
> Thank you for sharing this last criterion. I had done a similar (much briefer) study targeting latency and resilience primarily, and what it would cost to do more "rural IXPs" - call them RXPs - every 50 miles or so - on the cheap as an outgrowth of BEAD. But that would be a subject for another thread.
>
> But I did not limit it to "research" universities; I included any area that had a university. Certainly there is high demand for sexy AI-related things, but the nuts and bolts of how to design and build networks are lacking.
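To put a rough number on the root-DNS point above: one quick way to see how far away the nearest root-server instances are from a site (and hence what a local mirror would buy) is to time a few UDP queries. The sketch below is illustrative only - standard library only, a hand-rolled query, two arbitrarily chosen root-server letters, and no error handling:

import socket
import struct
import time

# Two of the thirteen root-server letters; any would do for this rough check.
ROOTS = ["a.root-servers.net", "k.root-servers.net"]

def root_ns_query() -> bytes:
    # Minimal DNS query for ". NS IN": 12-byte header, then the question.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    question = b"\x00" + struct.pack(">HH", 2, 1)  # root name, QTYPE=NS, QCLASS=IN
    return header + question

for host in ROOTS:
    addr = socket.gethostbyname(host)  # anycast address; the nearest instance answers
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    start = time.monotonic()
    sock.sendto(root_ns_query(), (addr, 53))
    sock.recvfrom(4096)  # discard the answer; only the round-trip time matters here
    rtt_ms = (time.monotonic() - start) * 1000.0
    print(f"{host} ({addr}): {rtt_ms:.1f} ms")
    sock.close()

Roughly speaking, single-digit milliseconds means an anycast instance is already nearby; tens of milliseconds is the sort of gap a local mirror at a facility like this could close.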
> I regard network design and operations as a branch of civil engineering nowadays, and most operations people are quite leery of letting grad students loose on operational networks. I would love to see more universities actually teaching the skills to be a decent sysadmin (or SRE), because basic knowledge of packets, routing, TCP, BGP, resiliency, and so on is in decline. Being a BOFH requires far more skill than being an electrician and is actually comparable in skill and stress to being a doctor. (SREs get paid pretty well, but most fall into the profession rather than being directly trained for it.)
>
> Instead, I have been coping (as part of BEAD) with 6-week educational programs intended to train people how to splice fiber.
>
> So I would broaden your targets to places that also intend to teach people how to design and maintain civil infrastructure, and plan ahead for disaster recovery. This includes connecting up governments and emergency services. Reusing old postal buildings is an option, as are other, lower grades of schools.
>
> I would love to see curricula for the next generation of BOFHs that included formerly basic things like how to decode a packet capture and teachings from TCP/IP Illustrated, Volume 3, and everything in between.
>
> Obligatory xkcd: https://xkcd.com/705/
>
>> Should Wichita, with a regional metro population of 600k+, be literally dependent, from an interconnection standpoint, on Kansas City and Denver forever? No.
>> Okay, then what type of facility does Wichita need? Ideally, something that can meet current needs and scale to meet future needs.
>> What are the attributes of such a facility?
>>
>> Does it need to be carrier-neutral? Yes.
>> Does it need to be secure? Yes.
>> Does it need to provide a level playing field for networks of all types? Yes.
>> Does it need to be able to convey rights to, and protect the rights of, its tenants? Yes.
>> Does it need to be a facility that networks can rely on to remain “up” in the wake of adverse events? Yes.
>>
>> Resilient to power outages? Yes.
>> Resilient to cooling equipment failures? Yes.
>> Resistant to wind damage? Yes.
>> Resistant to vandalism or ballistics damage? Yes.
>>
>> Does it need to be financially sustainable? Yes.
>
> So that is the good question: how do you do opex?
>
>> Is “best effort” good enough? No.
>
> Redundancy helps.
>
>> Then does it need to be professionally managed? Yes.
>
> Where will they come from? What software do they have to manage the facility? Who writes the software?
>
>> Is there an existing facility in Wichita that can meet those needs? No.
>
> In general I use latency as a proxy for where interconnects should go. Historically this has been about 500 miles. I thought it was interesting to explore (as part of Biden's EV charger program) what it would take to have an old-fashioned IXP every 50 miles. It turns out that is pretty close to $8k in gear + a lot of fiber.
>
>> So one must be built? Yes.
>> Where should it be built? Where a concentration of eyeball traffic already exists that can grow a peering ecosystem faster than it might otherwise, and that is also proximate to existing fiber plant, and where diverse manholes can be placed on the edge of public right-of-way.
>>
>> In the case of Wichita, that’s at Wichita State University.
>
> Do they teach how to run a network?
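To put rough numbers on the latency-as-a-proxy point above (the historical ~500-mile interconnect radius versus an IXP every 50 miles), here is a back-of-the-envelope propagation-delay sketch. The route detour factor is an assumption, and serialization, queuing, and routing overhead are ignored, so treat the results as order-of-magnitude only:

# Idealized fiber round-trip time for the two distances discussed above.
C_KM_PER_MS = 299.792          # speed of light in vacuum, km per millisecond
FIBER_VELOCITY_FACTOR = 0.67   # light in fiber travels at roughly 2/3 of c
ROUTE_FACTOR = 1.3             # assumed detour of real fiber paths vs. straight-line distance
KM_PER_MILE = 1.609

def round_trip_ms(miles: float) -> float:
    """Propagation-only RTT over fiber for a given straight-line distance in miles."""
    path_km = miles * KM_PER_MILE * ROUTE_FACTOR
    one_way_ms = path_km / (C_KM_PER_MS * FIBER_VELOCITY_FACTOR)
    return 2 * one_way_ms

for miles in (50, 500):
    print(f"{miles:>4} miles: ~{round_trip_ms(miles):.1f} ms RTT (propagation only)")

With those assumptions, hauling traffic to an interconnect ~500 miles away costs on the order of 10 ms of round-trip propagation delay before any queuing, while one ~50 miles away costs about 1 ms - which is the gap the RXP idea is trying to close.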
>> Creating a secure, neutral, resilient interconnection facility with proper cooling, power systems, lockable cabinet space, diverse manholes and POE isn’t cheap. The whole project is actually more than the $5M grant we received. We’re putting in over $800k in cash, plus additional in-kind match.
>>
>> We’ve done the data analyses necessary to determine which communities need such facilities, and that’s how we came up with our list of 125 target communities. Most of them are home to public research universities, but have no IXP or IX. Not all of those communities are equal in terms of priority, but all of them have a need, and we’re actively seeking pathways to scale that preserve our core principles and avoid the need for grants. But that’s a big challenge.
>>
>> I really appreciate the opportunity to provide clarity on the project and I’m happy to answer your questions. Surely we agree on much more than we disagree.
>>
>> --Brent Legg, Connected Nation
>>
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain