* [NNagain] transit and peering costs projections
@ 2023-10-14 23:01 Dave Taht
2023-10-15 0:25 ` Dave Cohen
` (6 more replies)
0 siblings, 7 replies; 38+ messages in thread
From: Dave Taht @ 2023-10-14 23:01 UTC (permalink / raw)
To: Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos, NANOG
This set of trendlines was very interesting. Unfortunately the data
stops in 2015. Does anyone have more recent data?
https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
I believe a gbit circuit that an ISP can resell still runs at about
$900 - $1.4k (?) in the usa? How about elsewhere?
...
I am under the impression that many IXPs remain very successful,
states without them suffer, and I also find the concept of doing micro
IXPs at the city level, appealing, and now achievable with cheap gear.
Finer grained cross connects between telco and ISP and IXP would lower
latencies across town quite hugely...
PS I hear ARIN is planning on dropping the price for, and bundling 3
BGP AS numbers at a time, as of the end of this year, also.
--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-14 23:01 [NNagain] transit and peering costs projections Dave Taht
@ 2023-10-15 0:25 ` Dave Cohen
2023-10-15 3:41 ` le berger des photons
2023-10-15 3:45 ` Tim Burke
` (5 subsequent siblings)
6 siblings, 1 reply; 38+ messages in thread
From: Dave Cohen @ 2023-10-15 0:25 UTC (permalink / raw)
To: Network Neutrality is back! Let´s make the technical
aspects heard this time!
Cc: libreqos, NANOG, Dave Taht
I’m a couple years removed from dealing with this on the provider side but the focus has shifted rapidly to adding core capacity and large capacity ports to the extent that smaller capacity ports like 1 Gbps aren’t going to see much more price compression. Cost per bit will come down at higher tiers but there simply isn’t enough focus at lower levels at the hardware providers to afford carriers more price compression at 1 Gbps, even 10 Gbps. I would expect further price compression in access costs but not really in transit costs below 10 Gbps.
In general I agree that IXs continue to proliferate relative to quantity, throughput and geographic reach, almost to the degree that mainland Europe has been covered for years. In my home market of Atlanta, I’m aware of at least four IXs that have been established here or entered the market in the last three years - there were only two major ones prior to that. This is a net positive for a wide variety of reasons but I don’t think it’s created much of an impact in terms of pulling down transit prices. There are a few reasons for this, but primarily because that growth hasn’t really displaced transit demand (at least in my view) and has really been more about a relatively stable set of IX participants creating more resiliency and driving other performance improvements in that leg of the peering ecosystem.
Dave Cohen
craetdave@gmail.com
> On Oct 14, 2023, at 7:02 PM, Dave Taht via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>
> This set of trendlines was very interesting. Unfortunately the data
> stops in 2015. Does anyone have more recent data?
>
> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>
> I believe a gbit circuit that an ISP can resell still runs at about
> $900 - $1.4k (?) in the usa? How about elsewhere?
>
> ...
>
> I am under the impression that many IXPs remain very successful,
> states without them suffer, and I also find the concept of doing micro
> IXPs at the city level, appealing, and now achievable with cheap gear.
> Finer grained cross connects between telco and ISP and IXP would lower
> latencies across town quite hugely...
>
> PS I hear ARIN is planning on dropping the price for, and bundling 3
> BGP AS numbers at a time, as of the end of this year, also.
>
>
>
> --
> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 0:25 ` Dave Cohen
@ 2023-10-15 3:41 ` le berger des photons
0 siblings, 0 replies; 38+ messages in thread
From: le berger des photons @ 2023-10-15 3:41 UTC (permalink / raw)
To: Network Neutrality is back! Let´s make the technical
aspects heard this time!
[-- Attachment #1: Type: text/plain, Size: 3377 bytes --]
as interesting as this all is, this wasn't the discussion I'm looking
for. Perhaps you know of somewhere I can go to find what I'm looking for.
I'm looking to figure out how to share two different accesses among the
same group of clients depending on varying conditions of the main wifi
links which serve them all. Thanks for any direction.
On Sun, Oct 15, 2023 at 2:25 AM Dave Cohen via Nnagain <
nnagain@lists.bufferbloat.net> wrote:
> I’m a couple years removed from dealing with this on the provider side but
> the focus has shifted rapidly to adding core capacity and large capacity
> ports to the extent that smaller capacity ports like 1 Gbps aren’t going to
> see much more price compression. Cost per bit will come down at higher
> tiers but there simply isn’t enough focus at lower levels at the hardware
> providers to afford carriers more price compression at 1 Gbps, even 10
> Gbps. I would expect further price compression in access costs but not
> really in transit costs below 10 Gbps.
>
> In general I agree that IXs continue to proliferate relative to quantity,
> throughput and geographic reach, almost to the degree that mainland Europe
> has been covered for years. In my home market of Atlanta, I’m aware of at
> least four IXs that have been established here or entered the market in the
> last three years - there were only two major ones prior to that. This is a
> net positive for a wide variety of reasons but I don’t think it’s created
> much of an impact in terms of pulling down transit prices. There are a few
> reasons for this, but primarily because that growth hasn’t really displaced
> transit demand (at least in my view) and has really been more about a
> relatively stable set of IX participants creating more resiliency and
> driving other performance improvements in that leg of the peering
> ecosystem.
>
> Dave Cohen
> craetdave@gmail.com
>
> > On Oct 14, 2023, at 7:02 PM, Dave Taht via Nnagain <
> nnagain@lists.bufferbloat.net> wrote:
> >
> > This set of trendlines was very interesting. Unfortunately the data
> > stops in 2015. Does anyone have more recent data?
> >
> >
> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
> >
> > I believe a gbit circuit that an ISP can resell still runs at about
> > $900 - $1.4k (?) in the usa? How about elsewhere?
> >
> > ...
> >
> > I am under the impression that many IXPs remain very successful,
> > states without them suffer, and I also find the concept of doing micro
> > IXPs at the city level, appealing, and now achievable with cheap gear.
> > Finer grained cross connects between telco and ISP and IXP would lower
> > latencies across town quite hugely...
> >
> > PS I hear ARIN is planning on dropping the price for, and bundling 3
> > BGP AS numbers at a time, as of the end of this year, also.
> >
> >
> >
> > --
> > Oct 30:
> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> > Dave Täht CSO, LibreQos
> > _______________________________________________
> > Nnagain mailing list
> > Nnagain@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/nnagain
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
>
[-- Attachment #2: Type: text/html, Size: 4505 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-14 23:01 [NNagain] transit and peering costs projections Dave Taht
2023-10-15 0:25 ` Dave Cohen
@ 2023-10-15 3:45 ` Tim Burke
2023-10-15 4:03 ` Ryan Hamel
` (2 more replies)
2023-10-15 7:40 ` Bill Woodcock
` (4 subsequent siblings)
6 siblings, 3 replies; 38+ messages in thread
From: Tim Burke @ 2023-10-15 3:45 UTC (permalink / raw)
To: Dave Taht
Cc: Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos, NANOG
I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
See y’all in San Diego this week,
Tim
On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>
> This set of trendlines was very interesting. Unfortunately the data
> stops in 2015. Does anyone have more recent data?
>
> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>
> I believe a gbit circuit that an ISP can resell still runs at about
> $900 - $1.4k (?) in the usa? How about elsewhere?
>
> ...
>
> I am under the impression that many IXPs remain very successful,
> states without them suffer, and I also find the concept of doing micro
> IXPs at the city level, appealing, and now achievable with cheap gear.
> Finer grained cross connects between telco and ISP and IXP would lower
> latencies across town quite hugely...
>
> PS I hear ARIN is planning on dropping the price for, and bundling 3
> BGP AS numbers at a time, as of the end of this year, also.
>
>
>
> --
> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 3:45 ` Tim Burke
@ 2023-10-15 4:03 ` Ryan Hamel
2023-10-15 4:12 ` Tim Burke
2023-10-15 13:41 ` Mike Hammett
2023-10-15 16:32 ` [NNagain] " Tom Beecher
2 siblings, 1 reply; 38+ messages in thread
From: Ryan Hamel @ 2023-10-15 4:03 UTC (permalink / raw)
To: Tim Burke, Dave Taht
Cc: Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos, NANOG
[-- Attachment #1: Type: text/plain, Size: 3998 bytes --]
Why not place the routers in Dallas, aggregate the transit, IXP, and PNI's there, and backhaul it over redundant dark fiber with DWDM waves or 400G OpenZR?
Ryan
________________________________
From: NANOG <nanog-bounces+ryan=rkhtech.org@nanog.org> on behalf of Tim Burke <tim@mid.net>
Sent: Saturday, October 14, 2023 8:45 PM
To: Dave Taht <dave.taht@gmail.com>
Cc: Network Neutrality is back! Let´s make the technical aspects heard this time! <nnagain@lists.bufferbloat.net>; libreqos <libreqos@lists.bufferbloat.net>; NANOG <nanog@nanog.org>
Subject: Re: transit and peering costs projections
Caution: This is an external email and may be malicious. Please take care when clicking links or opening attachments.
I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
See y’all in San Diego this week,
Tim
On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>
> This set of trendlines was very interesting. Unfortunately the data
> stops in 2015. Does anyone have more recent data?
>
> https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdrpeering.net%2Fwhite-papers%2FInternet-Transit-Pricing-Historical-And-Projected.php&data=05%7C01%7Cryan%40rkhtech.org%7Cc8ebae9f0ecd4b368dcb08dbcd319880%7C81c24bb4f9ec4739ba4d25c42594d996%7C0%7C0%7C638329385118876648%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=nQeWrGi%2BblMmtiG9u7SdF3JOi1h9Fni7xXo%2FusZRopA%3D&reserved=0<https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php>
>
> I believe a gbit circuit that an ISP can resell still runs at about
> $900 - $1.4k (?) in the usa? How about elsewhere?
>
> ...
>
> I am under the impression that many IXPs remain very successful,
> states without them suffer, and I also find the concept of doing micro
> IXPs at the city level, appealing, and now achievable with cheap gear.
> Finer grained cross connects between telco and ISP and IXP would lower
> latencies across town quite hugely...
>
> PS I hear ARIN is planning on dropping the price for, and bundling 3
> BGP AS numbers at a time, as of the end of this year, also.
>
>
>
> --
> Oct 30: https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fnetdevconf.info%2F0x17%2Fnews%2Fthe-maestro-and-the-music-bof.html&data=05%7C01%7Cryan%40rkhtech.org%7Cc8ebae9f0ecd4b368dcb08dbcd319880%7C81c24bb4f9ec4739ba4d25c42594d996%7C0%7C0%7C638329385118876648%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=ROLgtoeiBgfAG40UZqS8Zd8vMK%2B0HQB7RV%2FhQRvIcFM%3D&reserved=0<https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html>
> Dave Täht CSO, LibreQos
[-- Attachment #2: Type: text/html, Size: 5578 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 4:03 ` Ryan Hamel
@ 2023-10-15 4:12 ` Tim Burke
2023-10-15 4:19 ` Dave Taht
2023-10-15 7:54 ` [NNagain] " Bill Woodcock
0 siblings, 2 replies; 38+ messages in thread
From: Tim Burke @ 2023-10-15 4:12 UTC (permalink / raw)
To: Ryan Hamel
Cc: Dave Taht,
Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos, NANOG
[-- Attachment #1: Type: text/plain, Size: 4522 bytes --]
It’s better for customer experience to keep it local instead of adding 200 miles to the route. All of the competition hauls all of their traffic up to Dallas, so we easily have a nice 8-10ms latency advantage by keeping transit and peering as close to the customer as possible.
Plus, you can’t forget to mention another ~$10k MRC per pair in DF costs to get up to Dallas, not including colo, that we can spend on more transit or better gear!
On Oct 14, 2023, at 23:03, Ryan Hamel <ryan@rkhtech.org> wrote:
Why not place the routers in Dallas, aggregate the transit, IXP, and PNI's there, and backhaul it over redundant dark fiber with DWDM waves or 400G OpenZR?
Ryan
________________________________
From: NANOG <nanog-bounces+ryan=rkhtech.org@nanog.org> on behalf of Tim Burke <tim@mid.net>
Sent: Saturday, October 14, 2023 8:45 PM
To: Dave Taht <dave.taht@gmail.com>
Cc: Network Neutrality is back! Let´s make the technical aspects heard this time! <nnagain@lists.bufferbloat.net>; libreqos <libreqos@lists.bufferbloat.net>; NANOG <nanog@nanog.org>
Subject: Re: transit and peering costs projections
Caution: This is an external email and may be malicious. Please take care when clicking links or opening attachments.
I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
See y’all in San Diego this week,
Tim
On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>
> This set of trendlines was very interesting. Unfortunately the data
> stops in 2015. Does anyone have more recent data?
>
> https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdrpeering.net%2Fwhite-papers%2FInternet-Transit-Pricing-Historical-And-Projected.php&data=05%7C01%7Cryan%40rkhtech.org%7Cc8ebae9f0ecd4b368dcb08dbcd319880%7C81c24bb4f9ec4739ba4d25c42594d996%7C0%7C0%7C638329385118876648%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=nQeWrGi%2BblMmtiG9u7SdF3JOi1h9Fni7xXo%2FusZRopA%3D&reserved=0<https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php>
>
> I believe a gbit circuit that an ISP can resell still runs at about
> $900 - $1.4k (?) in the usa? How about elsewhere?
>
> ...
>
> I am under the impression that many IXPs remain very successful,
> states without them suffer, and I also find the concept of doing micro
> IXPs at the city level, appealing, and now achievable with cheap gear.
> Finer grained cross connects between telco and ISP and IXP would lower
> latencies across town quite hugely...
>
> PS I hear ARIN is planning on dropping the price for, and bundling 3
> BGP AS numbers at a time, as of the end of this year, also.
>
>
>
> --
> Oct 30: https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fnetdevconf.info%2F0x17%2Fnews%2Fthe-maestro-and-the-music-bof.html&data=05%7C01%7Cryan%40rkhtech.org%7Cc8ebae9f0ecd4b368dcb08dbcd319880%7C81c24bb4f9ec4739ba4d25c42594d996%7C0%7C0%7C638329385118876648%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=ROLgtoeiBgfAG40UZqS8Zd8vMK%2B0HQB7RV%2FhQRvIcFM%3D&reserved=0<https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html>
> Dave Täht CSO, LibreQos
[-- Attachment #2: Type: text/html, Size: 6292 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 4:12 ` Tim Burke
@ 2023-10-15 4:19 ` Dave Taht
2023-10-15 4:26 ` [NNagain] [LibreQoS] " dan
2023-10-15 7:54 ` [NNagain] " Bill Woodcock
1 sibling, 1 reply; 38+ messages in thread
From: Dave Taht @ 2023-10-15 4:19 UTC (permalink / raw)
To: Tim Burke
Cc: Ryan Hamel,
Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos, NANOG
On Sat, Oct 14, 2023 at 9:12 PM Tim Burke <tim@mid.net> wrote:
>
> It’s better for customer experience to keep it local instead of adding 200 miles to the route. All of the competition hauls all of their traffic up to Dallas, so we easily have a nice 8-10ms latency advantage by keeping transit and peering as close to the customer as possible.
>
> Plus, you can’t forget to mention another ~$10k MRC per pair in DF costs to get up to Dallas, not including colo, that we can spend on more transit or better gear!
Texas's BEAD funding and broadband offices are looking for proposals
and seem to have dollars to spend. I have spent much of the past few
years attempting to convince these entities that what was often more
needed was better, more local IXPs. Have you reached out to them?
> On Oct 14, 2023, at 23:03, Ryan Hamel <ryan@rkhtech.org> wrote:
>
>
> Why not place the routers in Dallas, aggregate the transit, IXP, and PNI's there, and backhaul it over redundant dark fiber with DWDM waves or 400G OpenZR?
>
> Ryan
>
> ________________________________
> From: NANOG <nanog-bounces+ryan=rkhtech.org@nanog.org> on behalf of Tim Burke <tim@mid.net>
> Sent: Saturday, October 14, 2023 8:45 PM
> To: Dave Taht <dave.taht@gmail.com>
> Cc: Network Neutrality is back! Let´s make the technical aspects heard this time! <nnagain@lists.bufferbloat.net>; libreqos <libreqos@lists.bufferbloat.net>; NANOG <nanog@nanog.org>
> Subject: Re: transit and peering costs projections
>
> Caution: This is an external email and may be malicious. Please take care when clicking links or opening attachments.
>
>
> I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
>
> Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
>
> For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
>
> Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
>
> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>
> See y’all in San Diego this week,
> Tim
>
> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
> >
> > This set of trendlines was very interesting. Unfortunately the data
> > stops in 2015. Does anyone have more recent data?
> >
> > https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdrpeering.net%2Fwhite-papers%2FInternet-Transit-Pricing-Historical-And-Projected.php&data=05%7C01%7Cryan%40rkhtech.org%7Cc8ebae9f0ecd4b368dcb08dbcd319880%7C81c24bb4f9ec4739ba4d25c42594d996%7C0%7C0%7C638329385118876648%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=nQeWrGi%2BblMmtiG9u7SdF3JOi1h9Fni7xXo%2FusZRopA%3D&reserved=0
> >
> > I believe a gbit circuit that an ISP can resell still runs at about
> > $900 - $1.4k (?) in the usa? How about elsewhere?
> >
> > ...
> >
> > I am under the impression that many IXPs remain very successful,
> > states without them suffer, and I also find the concept of doing micro
> > IXPs at the city level, appealing, and now achievable with cheap gear.
> > Finer grained cross connects between telco and ISP and IXP would lower
> > latencies across town quite hugely...
> >
> > PS I hear ARIN is planning on dropping the price for, and bundling 3
> > BGP AS numbers at a time, as of the end of this year, also.
> >
> >
> >
> > --
> > Oct 30: https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fnetdevconf.info%2F0x17%2Fnews%2Fthe-maestro-and-the-music-bof.html&data=05%7C01%7Cryan%40rkhtech.org%7Cc8ebae9f0ecd4b368dcb08dbcd319880%7C81c24bb4f9ec4739ba4d25c42594d996%7C0%7C0%7C638329385118876648%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=ROLgtoeiBgfAG40UZqS8Zd8vMK%2B0HQB7RV%2FhQRvIcFM%3D&reserved=0
> > Dave Täht CSO, LibreQos
--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] [LibreQoS] transit and peering costs projections
2023-10-15 4:19 ` Dave Taht
@ 2023-10-15 4:26 ` dan
0 siblings, 0 replies; 38+ messages in thread
From: dan @ 2023-10-15 4:26 UTC (permalink / raw)
To: Dave Taht
Cc: Tim Burke, Ryan Hamel, NANOG,
Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos
[-- Attachment #1: Type: text/plain, Size: 5900 bytes --]
the 900-1400 for 1G is right at what I'm seeing in the rockies region.
price scales decently at 10G. transit costs just as much as DIA or more
because of port costs on each side.
Also, with zayo and lumen, traversing their MPLS networks a few hundred
miles probably costs you 10-15ms latency. That's what I'm seeing. So
unless you're doing a wave or something that isn't getting battered in
their neglected networks centralizing could be worse overall.
On Sat, Oct 14, 2023 at 10:19 PM Dave Taht via LibreQoS <
libreqos@lists.bufferbloat.net> wrote:
> On Sat, Oct 14, 2023 at 9:12 PM Tim Burke <tim@mid.net> wrote:
> >
> > It’s better for customer experience to keep it local instead of adding
> 200 miles to the route. All of the competition hauls all of their traffic
> up to Dallas, so we easily have a nice 8-10ms latency advantage by keeping
> transit and peering as close to the customer as possible.
> >
> > Plus, you can’t forget to mention another ~$10k MRC per pair in DF costs
> to get up to Dallas, not including colo, that we can spend on more transit
> or better gear!
>
> Texas's BEAD funding and broadband offices are looking for proposals
> and seem to have dollars to spend. I have spent much of the past few
> years attempting to convince these entities that what was often more
> needed was better, more local IXPs. Have you reached out to them?
>
>
> > On Oct 14, 2023, at 23:03, Ryan Hamel <ryan@rkhtech.org> wrote:
> >
> >
> > Why not place the routers in Dallas, aggregate the transit, IXP, and
> PNI's there, and backhaul it over redundant dark fiber with DWDM waves or
> 400G OpenZR?
> >
> > Ryan
> >
> > ________________________________
> > From: NANOG <nanog-bounces+ryan=rkhtech.org@nanog.org> on behalf of Tim
> Burke <tim@mid.net>
> > Sent: Saturday, October 14, 2023 8:45 PM
> > To: Dave Taht <dave.taht@gmail.com>
> > Cc: Network Neutrality is back! Let´s make the technical aspects heard
> this time! <nnagain@lists.bufferbloat.net>; libreqos <
> libreqos@lists.bufferbloat.net>; NANOG <nanog@nanog.org>
> > Subject: Re: transit and peering costs projections
> >
> > Caution: This is an external email and may be malicious. Please take
> care when clicking links or opening attachments.
> >
> >
> > I would say that a 1Gbit IP transit in a carrier neutral DC can be had
> for a good bit less than $900 on the wholesale market.
> >
> > Sadly, IXP’s are seemingly turning into a pay to play game, with rates
> almost costing as much as transit in many cases after you factor in loop
> costs.
> >
> > For example, in the Houston market (one of the largest and fastest
> growing regions in the US!), we do not have a major IX, so to get up to
> Dallas it’s several thousand for a 100g wave, plus several thousand for a
> 100g port on one of those major IXes. Or, a better option, we can get a
> 100g flat internet transit for just a little bit more.
> >
> > Fortunately, for us as an eyeball network, there are a good number of
> major content networks that are allowing for private peering in markets
> like Houston for just the cost of a cross connect and a QSFP if you’re in
> the right DC, with Google and some others being the outliers.
> >
> > So for now, we'll keep paying for transit to get to the others (since
> it’s about as much as transporting IXP from Dallas), and hoping someone at
> Google finally sees Houston as more than a third rate city hanging off of
> Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us
> more than peering to Kansas City. Yeah, I think the former is more likely.
> 😊
> >
> > See y’all in San Diego this week,
> > Tim
> >
> > On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
> > >
> > > This set of trendlines was very interesting. Unfortunately the data
> > > stops in 2015. Does anyone have more recent data?
> > >
> > >
> https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdrpeering.net%2Fwhite-papers%2FInternet-Transit-Pricing-Historical-And-Projected.php&data=05%7C01%7Cryan%40rkhtech.org%7Cc8ebae9f0ecd4b368dcb08dbcd319880%7C81c24bb4f9ec4739ba4d25c42594d996%7C0%7C0%7C638329385118876648%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=nQeWrGi%2BblMmtiG9u7SdF3JOi1h9Fni7xXo%2FusZRopA%3D&reserved=0
> > >
> > > I believe a gbit circuit that an ISP can resell still runs at about
> > > $900 - $1.4k (?) in the usa? How about elsewhere?
> > >
> > > ...
> > >
> > > I am under the impression that many IXPs remain very successful,
> > > states without them suffer, and I also find the concept of doing micro
> > > IXPs at the city level, appealing, and now achievable with cheap gear.
> > > Finer grained cross connects between telco and ISP and IXP would lower
> > > latencies across town quite hugely...
> > >
> > > PS I hear ARIN is planning on dropping the price for, and bundling 3
> > > BGP AS numbers at a time, as of the end of this year, also.
> > >
> > >
> > >
> > > --
> > > Oct 30:
> https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fnetdevconf.info%2F0x17%2Fnews%2Fthe-maestro-and-the-music-bof.html&data=05%7C01%7Cryan%40rkhtech.org%7Cc8ebae9f0ecd4b368dcb08dbcd319880%7C81c24bb4f9ec4739ba4d25c42594d996%7C0%7C0%7C638329385118876648%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=ROLgtoeiBgfAG40UZqS8Zd8vMK%2B0HQB7RV%2FhQRvIcFM%3D&reserved=0
> > > Dave Täht CSO, LibreQos
>
>
>
> --
> Oct 30:
> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
> _______________________________________________
> LibreQoS mailing list
> LibreQoS@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/libreqos
>
[-- Attachment #2: Type: text/html, Size: 8444 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-14 23:01 [NNagain] transit and peering costs projections Dave Taht
2023-10-15 0:25 ` Dave Cohen
2023-10-15 3:45 ` Tim Burke
@ 2023-10-15 7:40 ` Bill Woodcock
2023-10-15 12:40 ` [NNagain] [LibreQoS] " Jim Troutman
` (3 subsequent siblings)
6 siblings, 0 replies; 38+ messages in thread
From: Bill Woodcock @ 2023-10-15 7:40 UTC (permalink / raw)
To: Dave Täht
Cc: Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos, NANOG
[-- Attachment #1: Type: text/plain, Size: 2375 bytes --]
> On Oct 15, 2023, at 01:01, Dave Taht <dave.taht@gmail.com> wrote:
> I am under the impression that many IXPs remain very successful,
I know of 760 active IXPs, out of 1,148 total, so, over 31 years, two-thirds are still successful now. Obviously they didn’t all start 31 years ago, they started on a gradually-accelerating curve. I guess we could do the visualization to plot range of lifespans versus start dates, but we haven’t done that as yet.
> states without them suffer
Any populated area without one or more of them suffers by comparison with areas that do have them. States, countries, cities, etc. There are still a surprising number of whole countries that don’t yet have one. We try to prioritize those in our work:
https://www.pch.net/ixp/summary
> I also find the concept of doing micro IXPs at the city level, appealing, and now achievable with cheap gear.
This has always, by definition, been achievable, since it’s the only way any IXP has ever succeeded, really. I mean, big sample set, bell curve, you can always find a few things out at the fringes to argue about, but the thing that allows an IXP to succeed is good APBDC, and the thing that most frequently kills IXPs is over-investment. An expensive switch at the outset is a huge liability, and one of the things most likely to tank a startup IXP. Notably, that doesn’t mean a switch that costs the IXP a lot of money: you can tank an IXP by donating an expensive switch for free. Expensive switches have expensive maintenance, whether you’re paying for it or not. Maintenance means down-time, and down-time raises APBDC, regardless of whether you’ve laid out cash in parallel with it.
> Finer grained cross connects between telco and ISP and IXP would lower latencies across town quite hugely...
Of course, and that requires that they show up in the same building, ideally with an MMR. The same places that work well for IXPs. Interconnection basically just requires a lot of networks be present close to a population center. Which always presents a little tension vis-a-vis datacenters, which profit immensely if there’s a successful IXP in them, but can never afford to locate themselves where IXPs would be most valuable, and don’t like to have to provide free backhaul to better IXP locations.
-Bill
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 4:12 ` Tim Burke
2023-10-15 4:19 ` Dave Taht
@ 2023-10-15 7:54 ` Bill Woodcock
1 sibling, 0 replies; 38+ messages in thread
From: Bill Woodcock @ 2023-10-15 7:54 UTC (permalink / raw)
To: Tim Burke
Cc: Ryan Hamel, NANOG,
Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos
[-- Attachment #1: Type: text/plain, Size: 4839 bytes --]
Exactly. Speed x distance = cost. This is _exactly_ why IXPs get set up. To avoid backhauling bandwidth from Dallas, or wherever. Loss, latency, out-of-order delivery, and jitter. All lower when you source your bandwidth closer.
-Bill
> On Oct 15, 2023, at 06:12, Tim Burke <tim@mid.net> wrote:
>
> It’s better for customer experience to keep it local instead of adding 200 miles to the route. All of the competition hauls all of their traffic up to Dallas, so we easily have a nice 8-10ms latency advantage by keeping transit and peering as close to the customer as possible.
>
> Plus, you can’t forget to mention another ~$10k MRC per pair in DF costs to get up to Dallas, not including colo, that we can spend on more transit or better gear!
>
>> On Oct 14, 2023, at 23:03, Ryan Hamel <ryan@rkhtech.org> wrote:
>>
>> Why not place the routers in Dallas, aggregate the transit, IXP, and PNI's there, and backhaul it over redundant dark fiber with DWDM waves or 400G OpenZR?
>>
>> Ryan
>>
>> From: NANOG <nanog-bounces+ryan=rkhtech.org@nanog.org> on behalf of Tim Burke <tim@mid.net>
>> Sent: Saturday, October 14, 2023 8:45 PM
>> To: Dave Taht <dave.taht@gmail.com>
>> Cc: Network Neutrality is back! Let´s make the technical aspects heard this time! <nnagain@lists.bufferbloat.net>; libreqos <libreqos@lists.bufferbloat.net>; NANOG <nanog@nanog.org>
>> Subject: Re: transit and peering costs projections Caution: This is an external email and may be malicious. Please take care when clicking links or opening attachments.
>>
>>
>> I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
>>
>> Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
>>
>> For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
>>
>> Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
>>
>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>>
>> See y’all in San Diego this week,
>> Tim
>>
>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>> >
>> > This set of trendlines was very interesting. Unfortunately the data
>> > stops in 2015. Does anyone have more recent data?
>> >
>> > https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdrpeering.net%2Fwhite-papers%2FInternet-Transit-Pricing-Historical-And-Projected.php&data=05%7C01%7Cryan%40rkhtech.org%7Cc8ebae9f0ecd4b368dcb08dbcd319880%7C81c24bb4f9ec4739ba4d25c42594d996%7C0%7C0%7C638329385118876648%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=nQeWrGi%2BblMmtiG9u7SdF3JOi1h9Fni7xXo%2FusZRopA%3D&reserved=0
>> >
>> > I believe a gbit circuit that an ISP can resell still runs at about
>> > $900 - $1.4k (?) in the usa? How about elsewhere?
>> >
>> > ...
>> >
>> > I am under the impression that many IXPs remain very successful,
>> > states without them suffer, and I also find the concept of doing micro
>> > IXPs at the city level, appealing, and now achievable with cheap gear.
>> > Finer grained cross connects between telco and ISP and IXP would lower
>> > latencies across town quite hugely...
>> >
>> > PS I hear ARIN is planning on dropping the price for, and bundling 3
>> > BGP AS numbers at a time, as of the end of this year, also.
>> >
>> >
>> >
>> > --
>> > Oct 30: https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fnetdevconf.info%2F0x17%2Fnews%2Fthe-maestro-and-the-music-bof.html&data=05%7C01%7Cryan%40rkhtech.org%7Cc8ebae9f0ecd4b368dcb08dbcd319880%7C81c24bb4f9ec4739ba4d25c42594d996%7C0%7C0%7C638329385118876648%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=ROLgtoeiBgfAG40UZqS8Zd8vMK%2B0HQB7RV%2FhQRvIcFM%3D&reserved=0
>> > Dave Täht CSO, LibreQos
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] [LibreQoS] transit and peering costs projections
2023-10-14 23:01 [NNagain] transit and peering costs projections Dave Taht
` (2 preceding siblings ...)
2023-10-15 7:40 ` Bill Woodcock
@ 2023-10-15 12:40 ` Jim Troutman
2023-10-15 14:12 ` Tim Burke
2023-10-15 13:38 ` [NNagain] " Mike Hammett
` (2 subsequent siblings)
6 siblings, 1 reply; 38+ messages in thread
From: Jim Troutman @ 2023-10-15 12:40 UTC (permalink / raw)
To: Dave Taht
Cc: NANOG,
Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos
[-- Attachment #1: Type: text/plain, Size: 2053 bytes --]
Transit 1G wholesale in the right DCs is below $500 per port. 10gigE full
port can be had around $1k-1.5k month on long term deals from multiple
sources. 100g IP transit ports start around $4k.
The cost of transport (dark or wavelength) is generally at least as much as
the IP transit cost, and usually more in underserved markets. In the
northeast it is very hard to get 10GigE wavelengths below $2k/month to any
location, and is generally closer to $3k. 100g waves are starting around
$4k and go up a lot.
Pricing has come down somewhat over time, but not as fast as transit
prices. 6 years ago a 10Gig wave to Boston from Maine would be about
$5k/month. Today about $2800.
With the cost of XCs in data centers and transport costs, you generally
don’t want to go beyond 2x10gigE before jumping to 100.
On Sat, Oct 14, 2023 at 19:02 Dave Taht via LibreQoS <
libreqos@lists.bufferbloat.net> wrote:
> This set of trendlines was very interesting. Unfortunately the data
> stops in 2015. Does anyone have more recent data?
>
>
> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>
> I believe a gbit circuit that an ISP can resell still runs at about
> $900 - $1.4k (?) in the usa? How about elsewhere?
>
> ...
>
> I am under the impression that many IXPs remain very successful,
> states without them suffer, and I also find the concept of doing micro
> IXPs at the city level, appealing, and now achievable with cheap gear.
> Finer grained cross connects between telco and ISP and IXP would lower
> latencies across town quite hugely...
>
> PS I hear ARIN is planning on dropping the price for, and bundling 3
> BGP AS numbers at a time, as of the end of this year, also.
>
>
>
> --
> Oct 30:
> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
> _______________________________________________
> LibreQoS mailing list
> LibreQoS@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/libreqos
>
[-- Attachment #2: Type: text/html, Size: 3039 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-14 23:01 [NNagain] transit and peering costs projections Dave Taht
` (3 preceding siblings ...)
2023-10-15 12:40 ` [NNagain] [LibreQoS] " Jim Troutman
@ 2023-10-15 13:38 ` Mike Hammett
2023-10-15 13:44 ` Mike Hammett
[not found] ` <20231015092253.67e4546e@dataplane.org>
6 siblings, 0 replies; 38+ messages in thread
From: Mike Hammett @ 2023-10-15 13:38 UTC (permalink / raw)
To: Dave Taht
Cc: Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos, NANOG
[-- Attachment #1: Type: text/plain, Size: 1801 bytes --]
I've seen some attempts to put an IX at every corner, but I don't think those efforts will be overly successful.
It's still difficult to gain sufficient scale in NFL-sized cities. Big content won't join without big eyeballs (well, not the national-level guys because they almost never will). Big eyeballs just can't be bothered. Small guys don't move the needle enough.
-----
Mike Hammett
Intelligent Computing Solutions
http://www.ics-il.com
Midwest-IX
http://www.midwest-ix.com
----- Original Message -----
From: "Dave Taht" <dave.taht@gmail.com>
To: "Network Neutrality is back! Let´s make the technical aspects heard this time!" <nnagain@lists.bufferbloat.net>, "libreqos" <libreqos@lists.bufferbloat.net>, "NANOG" <nanog@nanog.org>
Sent: Saturday, October 14, 2023 6:01:54 PM
Subject: transit and peering costs projections
This set of trendlines was very interesting. Unfortunately the data
stops in 2015. Does anyone have more recent data?
https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
I believe a gbit circuit that an ISP can resell still runs at about
$900 - $1.4k (?) in the usa? How about elsewhere?
...
I am under the impression that many IXPs remain very successful,
states without them suffer, and I also find the concept of doing micro
IXPs at the city level, appealing, and now achievable with cheap gear.
Finer grained cross connects between telco and ISP and IXP would lower
latencies across town quite hugely...
PS I hear ARIN is planning on dropping the price for, and bundling 3
BGP AS numbers at a time, as of the end of this year, also.
--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos
[-- Attachment #2: Type: text/html, Size: 2271 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 3:45 ` Tim Burke
2023-10-15 4:03 ` Ryan Hamel
@ 2023-10-15 13:41 ` Mike Hammett
2023-10-15 14:19 ` Tim Burke
2023-10-15 16:32 ` [NNagain] " Tom Beecher
2 siblings, 1 reply; 38+ messages in thread
From: Mike Hammett @ 2023-10-15 13:41 UTC (permalink / raw)
To: Tim Burke
Cc: Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos, NANOG, Dave Taht
[-- Attachment #1: Type: text/plain, Size: 3287 bytes --]
Houston is tricky as due to it's geographic scope, it's quite expensive to build an IX that goes into enough facilities to achieve meaningful scale. CDN 1 is in facility A. CDN 2 in facility B. CDN 3 is in facility C. When I last looked, it was about 80 driving miles to have a dark fiber ring that encompassed all of the facilities one would need to be in.
-----
Mike Hammett
Intelligent Computing Solutions
http://www.ics-il.com
Midwest-IX
http://www.midwest-ix.com
----- Original Message -----
From: "Tim Burke" <tim@mid.net>
To: "Dave Taht" <dave.taht@gmail.com>
Cc: "Network Neutrality is back! Let´s make the technical aspects heard this time!" <nnagain@lists.bufferbloat.net>, "libreqos" <libreqos@lists.bufferbloat.net>, "NANOG" <nanog@nanog.org>
Sent: Saturday, October 14, 2023 10:45:47 PM
Subject: Re: transit and peering costs projections
I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
See y’all in San Diego this week,
Tim
On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>
> This set of trendlines was very interesting. Unfortunately the data
> stops in 2015. Does anyone have more recent data?
>
> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>
> I believe a gbit circuit that an ISP can resell still runs at about
> $900 - $1.4k (?) in the usa? How about elsewhere?
>
> ...
>
> I am under the impression that many IXPs remain very successful,
> states without them suffer, and I also find the concept of doing micro
> IXPs at the city level, appealing, and now achievable with cheap gear.
> Finer grained cross connects between telco and ISP and IXP would lower
> latencies across town quite hugely...
>
> PS I hear ARIN is planning on dropping the price for, and bundling 3
> BGP AS numbers at a time, as of the end of this year, also.
>
>
>
> --
> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
[-- Attachment #2: Type: text/html, Size: 3875 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-14 23:01 [NNagain] transit and peering costs projections Dave Taht
` (4 preceding siblings ...)
2023-10-15 13:38 ` [NNagain] " Mike Hammett
@ 2023-10-15 13:44 ` Mike Hammett
[not found] ` <20231015092253.67e4546e@dataplane.org>
6 siblings, 0 replies; 38+ messages in thread
From: Mike Hammett @ 2023-10-15 13:44 UTC (permalink / raw)
To: Network Neutrality is back! Let´s make the technical
aspects heard this time!
[-- Attachment #1: Type: text/plain, Size: 1905 bytes --]
I've seen some attempts to put an IX at every corner, but I don't think those efforts will be overly successful.
It's still difficult to gain sufficient scale in NFL-sized cities. Big content won't join without big eyeballs (well, not the national-level guys because they almost never will). Big eyeballs just can't be bothered. Small guys don't move the needle enough.
----- Original Message -----
From: "Dave Taht via Nnagain" <nnagain@lists.bufferbloat.net>
To: "Network Neutrality is back! Let´s make the technical aspects heard this time!" <nnagain@lists.bufferbloat.net>, "libreqos" <libreqos@lists.bufferbloat.net>, "NANOG" <nanog@nanog.org>
Cc: "Dave Taht" <dave.taht@gmail.com>
Sent: Saturday, October 14, 2023 6:01:54 PM
Subject: [NNagain] transit and peering costs projections
This set of trendlines was very interesting. Unfortunately the data
stops in 2015. Does anyone have more recent data?
https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
I believe a gbit circuit that an ISP can resell still runs at about
$900 - $1.4k (?) in the usa? How about elsewhere?
...
I am under the impression that many IXPs remain very successful,
states without them suffer, and I also find the concept of doing micro
IXPs at the city level, appealing, and now achievable with cheap gear.
Finer grained cross connects between telco and ISP and IXP would lower
latencies across town quite hugely...
PS I hear ARIN is planning on dropping the price for, and bundling 3
BGP AS numbers at a time, as of the end of this year, also.
--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos
_______________________________________________
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain
[-- Attachment #2: Type: text/html, Size: 2830 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] [LibreQoS] transit and peering costs projections
2023-10-15 12:40 ` [NNagain] [LibreQoS] " Jim Troutman
@ 2023-10-15 14:12 ` Tim Burke
0 siblings, 0 replies; 38+ messages in thread
From: Tim Burke @ 2023-10-15 14:12 UTC (permalink / raw)
To: Jim Troutman
Cc: Dave Taht, NANOG,
Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos
[-- Attachment #1: Type: text/plain, Size: 2299 bytes --]
Man, I wanna know where you’re getting 100g transit for $4500 a month! Even someone as fly by night as Cogent wants almost double that, unfortunately.
On Oct 15, 2023, at 07:43, Jim Troutman <jamesltroutman@gmail.com> wrote:
Transit 1G wholesale in the right DCs is below $500 per port. 10gigE full port can be had around $1k-1.5k month on long term deals from multiple sources. 100g IP transit ports start around $4k.
The cost of transport (dark or wavelength) is generally at least as much as the IP transit cost, and usually more in underserved markets. In the northeast it is very hard to get 10GigE wavelengths below $2k/month to any location, and is generally closer to $3k. 100g waves are starting around $4k and go up a lot.
Pricing has come down somewhat over time, but not as fast as transit prices. 6 years ago a 10Gig wave to Boston from Maine would be about $5k/month. Today about $2800.
With the cost of XCs in data centers and transport costs, you generally don’t want to go beyond 2x10gigE before jumping to 100.
On Sat, Oct 14, 2023 at 19:02 Dave Taht via LibreQoS <libreqos@lists.bufferbloat.net<mailto:libreqos@lists.bufferbloat.net>> wrote:
This set of trendlines was very interesting. Unfortunately the data
stops in 2015. Does anyone have more recent data?
https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
I believe a gbit circuit that an ISP can resell still runs at about
$900 - $1.4k (?) in the usa? How about elsewhere?
...
I am under the impression that many IXPs remain very successful,
states without them suffer, and I also find the concept of doing micro
IXPs at the city level, appealing, and now achievable with cheap gear.
Finer grained cross connects between telco and ISP and IXP would lower
latencies across town quite hugely...
PS I hear ARIN is planning on dropping the price for, and bundling 3
BGP AS numbers at a time, as of the end of this year, also.
--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos
_______________________________________________
LibreQoS mailing list
LibreQoS@lists.bufferbloat.net<mailto:LibreQoS@lists.bufferbloat.net>
https://lists.bufferbloat.net/listinfo/libreqos
[-- Attachment #2: Type: text/html, Size: 3684 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 13:41 ` Mike Hammett
@ 2023-10-15 14:19 ` Tim Burke
2023-10-15 16:44 ` [NNagain] [LibreQoS] " dan
0 siblings, 1 reply; 38+ messages in thread
From: Tim Burke @ 2023-10-15 14:19 UTC (permalink / raw)
To: Mike Hammett
Cc: Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos, NANOG, Dave Taht
[-- Attachment #1: Type: text/plain, Size: 3710 bytes --]
I’ve found that most of the CDNs that matter are in one facility in Houston, the Databank West (formerly Cyrus One) campus. We are about to light up a POP there so we’ll at least be able to get PNIs to them. There is even an IX in the facility, but it’s relatively small (likely because the operator wants near-transit pricing to get on it) so we’ll just PNI what we can for now.
On Oct 15, 2023, at 08:50, Mike Hammett <nanog@ics-il.net> wrote:
Houston is tricky as due to it's geographic scope, it's quite expensive to build an IX that goes into enough facilities to achieve meaningful scale. CDN 1 is in facility A. CDN 2 in facility B. CDN 3 is in facility C. When I last looked, it was about 80 driving miles to have a dark fiber ring that encompassed all of the facilities one would need to be in.
-----
Mike Hammett
Intelligent Computing Solutions
http://www.ics-il.com
Midwest-IX
http://www.midwest-ix.com
________________________________
From: "Tim Burke" <tim@mid.net>
To: "Dave Taht" <dave.taht@gmail.com>
Cc: "Network Neutrality is back! Let´s make the technical aspects heard this time!" <nnagain@lists.bufferbloat.net>, "libreqos" <libreqos@lists.bufferbloat.net>, "NANOG" <nanog@nanog.org>
Sent: Saturday, October 14, 2023 10:45:47 PM
Subject: Re: transit and peering costs projections
I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
See y’all in San Diego this week,
Tim
On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>
> This set of trendlines was very interesting. Unfortunately the data
> stops in 2015. Does anyone have more recent data?
>
> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>
> I believe a gbit circuit that an ISP can resell still runs at about
> $900 - $1.4k (?) in the usa? How about elsewhere?
>
> ...
>
> I am under the impression that many IXPs remain very successful,
> states without them suffer, and I also find the concept of doing micro
> IXPs at the city level, appealing, and now achievable with cheap gear.
> Finer grained cross connects between telco and ISP and IXP would lower
> latencies across town quite hugely...
>
> PS I hear ARIN is planning on dropping the price for, and bundling 3
> BGP AS numbers at a time, as of the end of this year, also.
>
>
>
> --
> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
[-- Attachment #2: Type: text/html, Size: 4725 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* [NNagain] Fwd: transit and peering costs projections
[not found] ` <20231015092253.67e4546e@dataplane.org>
@ 2023-10-15 14:48 ` Dave Taht
0 siblings, 0 replies; 38+ messages in thread
From: Dave Taht @ 2023-10-15 14:48 UTC (permalink / raw)
To: Network Neutrality is back! Let´s make the technical
aspects heard this time!
This may be of interest:
Peering Costs and Fees
<https://arxiv.org/abs/2310.04651>
--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 3:45 ` Tim Burke
2023-10-15 4:03 ` Ryan Hamel
2023-10-15 13:41 ` Mike Hammett
@ 2023-10-15 16:32 ` Tom Beecher
2023-10-15 16:45 ` Dave Taht
2023-10-15 19:19 ` Tim Burke
2 siblings, 2 replies; 38+ messages in thread
From: Tom Beecher @ 2023-10-15 16:32 UTC (permalink / raw)
To: Tim Burke
Cc: Dave Taht,
Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos, NANOG
[-- Attachment #1: Type: text/plain, Size: 3966 bytes --]
>
> So for now, we'll keep paying for transit to get to the others (since it’s
> about as much as transporting IXP from Dallas), and hoping someone at
> Google finally sees Houston as more than a third rate city hanging off of
> Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us
> more than peering to Kansas City. Yeah, I think the former is more
> likely. 😊
>
There is often a chicken/egg scenario here with the economics. As an
eyeball network, your costs to build out and connect to Dallas are greater
than your transit cost, so you do that. Totally fair.
However think about it from the content side. Say I want to build into to
Houston. I have to put routers in, and a bunch of cache servers, so I have
capital outlay , plus opex for space, power, IX/backhaul/transit costs.
That's not cheap, so there's a lot of calculations that go into it. Is
there enough total eyeball traffic there to make it worth it? Is saving
8-10ms enough of a performance boost to justify the spend? What are the
long term trends in that market? These answers are of course different for
a company running their own CDN vs the commercial CDNs.
I don't work for Google and obviously don't speak for them, but I would
suspect that they're happy to eat a 8-10ms performance hit to serve from
Dallas , versus the amount of capital outlay to build out there right now.
On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
> I would say that a 1Gbit IP transit in a carrier neutral DC can be had for
> a good bit less than $900 on the wholesale market.
>
> Sadly, IXP’s are seemingly turning into a pay to play game, with rates
> almost costing as much as transit in many cases after you factor in loop
> costs.
>
> For example, in the Houston market (one of the largest and fastest growing
> regions in the US!), we do not have a major IX, so to get up to Dallas it’s
> several thousand for a 100g wave, plus several thousand for a 100g port on
> one of those major IXes. Or, a better option, we can get a 100g flat
> internet transit for just a little bit more.
>
> Fortunately, for us as an eyeball network, there are a good number of
> major content networks that are allowing for private peering in markets
> like Houston for just the cost of a cross connect and a QSFP if you’re in
> the right DC, with Google and some others being the outliers.
>
> So for now, we'll keep paying for transit to get to the others (since it’s
> about as much as transporting IXP from Dallas), and hoping someone at
> Google finally sees Houston as more than a third rate city hanging off of
> Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us
> more than peering to Kansas City. Yeah, I think the former is more likely.
> 😊
>
> See y’all in San Diego this week,
> Tim
>
> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
> >
> > This set of trendlines was very interesting. Unfortunately the data
> > stops in 2015. Does anyone have more recent data?
> >
> >
> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
> >
> > I believe a gbit circuit that an ISP can resell still runs at about
> > $900 - $1.4k (?) in the usa? How about elsewhere?
> >
> > ...
> >
> > I am under the impression that many IXPs remain very successful,
> > states without them suffer, and I also find the concept of doing micro
> > IXPs at the city level, appealing, and now achievable with cheap gear.
> > Finer grained cross connects between telco and ISP and IXP would lower
> > latencies across town quite hugely...
> >
> > PS I hear ARIN is planning on dropping the price for, and bundling 3
> > BGP AS numbers at a time, as of the end of this year, also.
> >
> >
> >
> > --
> > Oct 30:
> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> > Dave Täht CSO, LibreQos
>
[-- Attachment #2: Type: text/html, Size: 4870 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] [LibreQoS] transit and peering costs projections
2023-10-15 14:19 ` Tim Burke
@ 2023-10-15 16:44 ` dan
0 siblings, 0 replies; 38+ messages in thread
From: dan @ 2023-10-15 16:44 UTC (permalink / raw)
To: Tim Burke
Cc: Mike Hammett, NANOG,
Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos
[-- Attachment #1: Type: text/plain, Size: 6311 bytes --]
Just want to rewind back to the IX map above. The problem is that it's
really misleading. Dive down on a number of those (a big number) and they
are registered as an IX but they have few tier1 providers in them. The
closest one to me is essentially just fed by Zayo. Not much of an IX when
there's only one path out on one provider...
If big providers, or at least multiple providers linking to other IX'
aren't participating, then the purpose isn't met. Zayo isn't offering IX
rates in these for lack of competition so the incentive to build out from
there is very low, ie I can get Zayo in a road-side hut for basically the
same price and not have to share access. I also realize that getting 2-4
providers into a shack in the middle of no-where doesn't make sense either,
but population dictates a lot of this.
I know it's a big ask, getting full size IX access in a microIX, but that's
what big government projects are for. Get these carriers that are crossing
various jurisdictions to drop transport services, waves or dark viber etc,
into something useful like a school, courthouse, town hall, whatever, and
in that build out link to the two IX's that fiber crossing was going
between. Just put in the deal that they put in optics aligned with the
population. Frankly, 40G to most of these areas would be plenty for a
decade or more and 40G optics long distance modules are only a few grand
each. Maybe $10-15,000 for redundant 40G and they've already run the fiber
as part of that delivery to the facility (double that for really long
runs...). Schools would be my #1 pick here because it solves a lot of
issues. Gov pulls in at least 1x 40G to every single incorporated school
and builds access facilities for that (conduits to edges of property etc)
and at some threshold that's 1x40G with 2 providers then 3 providers for
bigger populations and as populations grow. Standard prices on port and
they are all just a vlan or equiv on the pipe back to the IX. Basically
these would be like IX extension sites with layer2 ports between provided
by long-haul providers.
On Sun, Oct 15, 2023 at 8:19 AM Tim Burke via LibreQoS <
libreqos@lists.bufferbloat.net> wrote:
> I’ve found that most of the CDNs that matter are in one facility in
> Houston, the Databank West (formerly Cyrus One) campus. We are about to
> light up a POP there so we’ll at least be able to get PNIs to them. There
> is even an IX in the facility, but it’s relatively small (likely because
> the operator wants near-transit pricing to get on it) so we’ll just PNI
> what we can for now.
>
> On Oct 15, 2023, at 08:50, Mike Hammett <nanog@ics-il.net> wrote:
>
>
> Houston is tricky as due to it's geographic scope, it's quite expensive to
> build an IX that goes into enough facilities to achieve meaningful scale.
> CDN 1 is in facility A. CDN 2 in facility B. CDN 3 is in facility C. When I
> last looked, it was about 80 driving miles to have a dark fiber ring that
> encompassed all of the facilities one would need to be in.
>
>
>
> -----
> Mike Hammett
> Intelligent Computing Solutions
> http://www.ics-il.com
>
> Midwest-IX
> http://www.midwest-ix.com
>
> ------------------------------
> *From: *"Tim Burke" <tim@mid.net>
> *To: *"Dave Taht" <dave.taht@gmail.com>
> *Cc: *"Network Neutrality is back! Let´s make the technical aspects heard
> this time!" <nnagain@lists.bufferbloat.net>, "libreqos" <
> libreqos@lists.bufferbloat.net>, "NANOG" <nanog@nanog.org>
> *Sent: *Saturday, October 14, 2023 10:45:47 PM
> *Subject: *Re: transit and peering costs projections
>
> I would say that a 1Gbit IP transit in a carrier neutral DC can be had for
> a good bit less than $900 on the wholesale market.
>
> Sadly, IXP’s are seemingly turning into a pay to play game, with rates
> almost costing as much as transit in many cases after you factor in loop
> costs.
>
> For example, in the Houston market (one of the largest and fastest growing
> regions in the US!), we do not have a major IX, so to get up to Dallas it’s
> several thousand for a 100g wave, plus several thousand for a 100g port on
> one of those major IXes. Or, a better option, we can get a 100g flat
> internet transit for just a little bit more.
>
> Fortunately, for us as an eyeball network, there are a good number of
> major content networks that are allowing for private peering in markets
> like Houston for just the cost of a cross connect and a QSFP if you’re in
> the right DC, with Google and some others being the outliers.
>
> So for now, we'll keep paying for transit to get to the others (since it’s
> about as much as transporting IXP from Dallas), and hoping someone at
> Google finally sees Houston as more than a third rate city hanging off of
> Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us
> more than peering to Kansas City. Yeah, I think the former is more likely.
> 😊
>
> See y’all in San Diego this week,
> Tim
>
> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
> >
> > This set of trendlines was very interesting. Unfortunately the data
> > stops in 2015. Does anyone have more recent data?
> >
> >
> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
> >
> > I believe a gbit circuit that an ISP can resell still runs at about
> > $900 - $1.4k (?) in the usa? How about elsewhere?
> >
> > ...
> >
> > I am under the impression that many IXPs remain very successful,
> > states without them suffer, and I also find the concept of doing micro
> > IXPs at the city level, appealing, and now achievable with cheap gear.
> > Finer grained cross connects between telco and ISP and IXP would lower
> > latencies across town quite hugely...
> >
> > PS I hear ARIN is planning on dropping the price for, and bundling 3
> > BGP AS numbers at a time, as of the end of this year, also.
> >
> >
> >
> > --
> > Oct 30:
> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> > Dave Täht CSO, LibreQos
>
> _______________________________________________
> LibreQoS mailing list
> LibreQoS@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/libreqos
>
[-- Attachment #2: Type: text/html, Size: 8304 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 16:32 ` [NNagain] " Tom Beecher
@ 2023-10-15 16:45 ` Dave Taht
2023-10-15 19:59 ` Jack Haverty
2023-10-16 3:33 ` [NNagain] transit and peering costs projections Matthew Petach
2023-10-15 19:19 ` Tim Burke
1 sibling, 2 replies; 38+ messages in thread
From: Dave Taht @ 2023-10-15 16:45 UTC (permalink / raw)
To: Tom Beecher
Cc: Tim Burke,
Network Neutrality is back! Let´s make the technical
aspects heard this time!,
NANOG
For starters I would like to apologize for cc-ing both nanog and my
new nn list. (I will add sender filters)
A bit more below.
On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc> wrote:
>>
>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>
>
> There is often a chicken/egg scenario here with the economics. As an eyeball network, your costs to build out and connect to Dallas are greater than your transit cost, so you do that. Totally fair.
>
> However think about it from the content side. Say I want to build into to Houston. I have to put routers in, and a bunch of cache servers, so I have capital outlay , plus opex for space, power, IX/backhaul/transit costs. That's not cheap, so there's a lot of calculations that go into it. Is there enough total eyeball traffic there to make it worth it? Is saving 8-10ms enough of a performance boost to justify the spend? What are the long term trends in that market? These answers are of course different for a company running their own CDN vs the commercial CDNs.
>
> I don't work for Google and obviously don't speak for them, but I would suspect that they're happy to eat a 8-10ms performance hit to serve from Dallas , versus the amount of capital outlay to build out there right now.
The three forms of traffic I care most about are voip, gaming, and
videoconferencing, which are rewarding to have at lower latencies.
When I was a kid, we had switched phone networks, and while the sound
quality was poorer than today, the voice latency cross-town was just
like "being there". Nowadays we see 500+ms latencies for this kind of
traffic.
As to how to make calls across town work that well again, cost-wise, I
do not know, but the volume of traffic that would be better served by
these interconnects quite low, respective to the overall gains in
lower latency experiences for them.
>
> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
>>
>> I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
>>
>> Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
>>
>> For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
>>
>> Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
>>
>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>>
>> See y’all in San Diego this week,
>> Tim
>>
>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>> >
>> > This set of trendlines was very interesting. Unfortunately the data
>> > stops in 2015. Does anyone have more recent data?
>> >
>> > https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>> >
>> > I believe a gbit circuit that an ISP can resell still runs at about
>> > $900 - $1.4k (?) in the usa? How about elsewhere?
>> >
>> > ...
>> >
>> > I am under the impression that many IXPs remain very successful,
>> > states without them suffer, and I also find the concept of doing micro
>> > IXPs at the city level, appealing, and now achievable with cheap gear.
>> > Finer grained cross connects between telco and ISP and IXP would lower
>> > latencies across town quite hugely...
>> >
>> > PS I hear ARIN is planning on dropping the price for, and bundling 3
>> > BGP AS numbers at a time, as of the end of this year, also.
>> >
>> >
>> >
>> > --
>> > Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>> > Dave Täht CSO, LibreQos
--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 16:32 ` [NNagain] " Tom Beecher
2023-10-15 16:45 ` Dave Taht
@ 2023-10-15 19:19 ` Tim Burke
1 sibling, 0 replies; 38+ messages in thread
From: Tim Burke @ 2023-10-15 19:19 UTC (permalink / raw)
To: Tom Beecher
Cc: Dave Taht,
Network Neutrality is back! Let´s make the technical
aspects heard this time!,
libreqos, NANOG
[-- Attachment #1: Type: text/plain, Size: 4523 bytes --]
I agree, but there are fortunately several large content networks that have had the forethought to put their stuff in Houston - Meta, Fastly, Akamai, AWS just to name a few… There is enough of a need to warrant those other networks having a presence, so hopefully it’s just a matter of time before other content networks jump in too.
Those 4 (plus Google cache fills) make up a huge majority of our transit usage, so at least we’ll get a majority of it peered off after we get these PNI’s stood up. And yes, I will continue to push for Google to light something up in Houston. 🤣
On Oct 15, 2023, at 11:33, Tom Beecher <beecher@beecher.cc> wrote:
So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
There is often a chicken/egg scenario here with the economics. As an eyeball network, your costs to build out and connect to Dallas are greater than your transit cost, so you do that. Totally fair.
However think about it from the content side. Say I want to build into to Houston. I have to put routers in, and a bunch of cache servers, so I have capital outlay , plus opex for space, power, IX/backhaul/transit costs. That's not cheap, so there's a lot of calculations that go into it. Is there enough total eyeball traffic there to make it worth it? Is saving 8-10ms enough of a performance boost to justify the spend? What are the long term trends in that market? These answers are of course different for a company running their own CDN vs the commercial CDNs.
I don't work for Google and obviously don't speak for them, but I would suspect that they're happy to eat a 8-10ms performance hit to serve from Dallas , versus the amount of capital outlay to build out there right now.
On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net<mailto:tim@mid.net>> wrote:
I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
See y’all in San Diego this week,
Tim
On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com<mailto:dave.taht@gmail.com>> wrote:
>
> This set of trendlines was very interesting. Unfortunately the data
> stops in 2015. Does anyone have more recent data?
>
> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>
> I believe a gbit circuit that an ISP can resell still runs at about
> $900 - $1.4k (?) in the usa? How about elsewhere?
>
> ...
>
> I am under the impression that many IXPs remain very successful,
> states without them suffer, and I also find the concept of doing micro
> IXPs at the city level, appealing, and now achievable with cheap gear.
> Finer grained cross connects between telco and ISP and IXP would lower
> latencies across town quite hugely...
>
> PS I hear ARIN is planning on dropping the price for, and bundling 3
> BGP AS numbers at a time, as of the end of this year, also.
>
>
>
> --
> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
[-- Attachment #2: Type: text/html, Size: 5947 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 16:45 ` Dave Taht
@ 2023-10-15 19:59 ` Jack Haverty
2023-10-15 20:39 ` rjmcmahon
` (2 more replies)
2023-10-16 3:33 ` [NNagain] transit and peering costs projections Matthew Petach
1 sibling, 3 replies; 38+ messages in thread
From: Jack Haverty @ 2023-10-15 19:59 UTC (permalink / raw)
To: nnagain
The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
latency. It's not just "rewarding" to have lower latencies; high
latencies may make VGV unusable. Average (or "typical") latency as the
FCC label proposes isn't a good metric to judge usability. A path which
has high variance in latency can be unusable even if the average is
quite low. Having your voice or video or gameplay "break up" every
minute or so when latency spikes to 500 msec makes the "user experience"
intolerable.
A few years ago, I ran some simple "ping" tests to help a friend who was
trying to use a gaming app. My data was only for one specific path so
it's anecdotal. What I saw was surprising - zero data loss, every
datagram was delivered, but occasionally a datagram would take up to 30
seconds to arrive. I didn't have the ability to poke around inside, but
I suspected it was an experience of "bufferbloat", enabled by the
dramatic drop in price of memory over the decades.
It's been a long time since I was involved in operating any part of the
Internet, so I don't know much about the inner workings today. Apologies
for my ignorance....
There was a scenario in the early days of the Internet for which we
struggled to find a technical solution. Imagine some node in the bowels
of the network, with 3 connected "circuits" to some other nodes. On two
of those inputs, traffic is arriving to be forwarded out the third
circuit. The incoming flows are significantly more than the outgoing
path can accept.
What happens? How is "backpressure" generated so that the incoming
flows are reduced to the point that the outgoing circuit can handle the
traffic?
About 45 years ago, while we were defining TCPV4, we struggled with this
issue, but didn't find any consensus solutions. So "placeholder"
mechanisms were defined in TCPV4, to be replaced as research continued
and found a good solution.
In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
defined; it was to be sent by a switching node back toward the sender of
any datagram that had to be discarded because there wasn't any place to
put it.
In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
were defined in IP.
TOS would allow the sender to distinguish datagrams based on their
needs. For example, we thought "Interactive" service might be needed
for VGV traffic, where timeliness of delivery was most important.
"Bulk" service might be useful for activities like file transfers,
backups, et al. "Normal" service might now mean activities like using
the Web.
The TTL field was an attempt to inform each switching node about the
"expiration date" for a datagram. If a node somehow knew that a
particular datagram was unlikely to reach its destination in time to be
useful (such as a video datagram for a frame that has already been
displayed), the node could, and should, discard that datagram to free up
resources for useful traffic. Sadly we had no mechanisms for measuring
delay, either in transit or in queuing, so TTL was defined in terms of
"hops", which is not an accurate proxy for time. But it's all we had.
Part of the complexity was that the "flow control" mechanism of the
Internet had put much of the mechanism in the users' computers' TCP
implementations, rather than the switches which handle only IP. Without
mechanisms in the users' computers, all a switch could do is order more
circuits, and add more memory to the switches for queuing. Perhaps that
led to "bufferbloat".
So TOS, SQ, and TTL were all placeholders, for some mechanism in a
future release that would introduce a "real" form of Backpressure and
the ability to handle different types of traffic. Meanwhile, these
rudimentary mechanisms would provide some flow control. Hopefully the
users' computers sending the flows would respond to the SQ backpressure,
and switches would prioritize traffic using the TTL and TOS information.
But, being way out of touch, I don't know what actually happens today.
Perhaps the current operators and current government watchers can answer?:
1/ How do current switches exert Backpressure to reduce competing
traffic flows? Do they still send SQs?
2/ How do the current and proposed government regulations treat the
different needs of different types of traffic, e.g., "Bulk" versus
"Interactive" versus "Normal"? Are Internet carriers permitted to treat
traffic types differently? Are they permitted to charge different
amounts for different types of service?
Jack Haverty
On 10/15/23 09:45, Dave Taht via Nnagain wrote:
> For starters I would like to apologize for cc-ing both nanog and my
> new nn list. (I will add sender filters)
>
> A bit more below.
>
> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc> wrote:
>>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>>
>> There is often a chicken/egg scenario here with the economics. As an eyeball network, your costs to build out and connect to Dallas are greater than your transit cost, so you do that. Totally fair.
>>
>> However think about it from the content side. Say I want to build into to Houston. I have to put routers in, and a bunch of cache servers, so I have capital outlay , plus opex for space, power, IX/backhaul/transit costs. That's not cheap, so there's a lot of calculations that go into it. Is there enough total eyeball traffic there to make it worth it? Is saving 8-10ms enough of a performance boost to justify the spend? What are the long term trends in that market? These answers are of course different for a company running their own CDN vs the commercial CDNs.
>>
>> I don't work for Google and obviously don't speak for them, but I would suspect that they're happy to eat a 8-10ms performance hit to serve from Dallas , versus the amount of capital outlay to build out there right now.
> The three forms of traffic I care most about are voip, gaming, and
> videoconferencing, which are rewarding to have at lower latencies.
> When I was a kid, we had switched phone networks, and while the sound
> quality was poorer than today, the voice latency cross-town was just
> like "being there". Nowadays we see 500+ms latencies for this kind of
> traffic.
>
> As to how to make calls across town work that well again, cost-wise, I
> do not know, but the volume of traffic that would be better served by
> these interconnects quite low, respective to the overall gains in
> lower latency experiences for them.
>
>
>
>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
>>>
>>> Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
>>>
>>> For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
>>>
>>> Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
>>>
>>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>>>
>>> See y’all in San Diego this week,
>>> Tim
>>>
>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>>>> This set of trendlines was very interesting. Unfortunately the data
>>>> stops in 2015. Does anyone have more recent data?
>>>>
>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>
>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>
>>>> ...
>>>>
>>>> I am under the impression that many IXPs remain very successful,
>>>> states without them suffer, and I also find the concept of doing micro
>>>> IXPs at the city level, appealing, and now achievable with cheap gear.
>>>> Finer grained cross connects between telco and ISP and IXP would lower
>>>> latencies across town quite hugely...
>>>>
>>>> PS I hear ARIN is planning on dropping the price for, and bundling 3
>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>
>>>>
>>>>
>>>> --
>>>> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>> Dave Täht CSO, LibreQos
>
>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 19:59 ` Jack Haverty
@ 2023-10-15 20:39 ` rjmcmahon
2023-10-15 23:44 ` Karl Auerbach
2023-10-16 17:01 ` Dick Roy
2023-10-15 20:45 ` [NNagain] transit and peering costs projections Sebastian Moeller
2023-10-16 1:39 ` [NNagain] The history of congestion control on the internet Dave Taht
2 siblings, 2 replies; 38+ messages in thread
From: rjmcmahon @ 2023-10-15 20:39 UTC (permalink / raw)
To: Network Neutrality is back! Let´s make the technical
aspects heard this time!
Hi Jack,
Thanks again for sharing. It's very interesting to me.
Today, the networks are shifting from capacity constrained to latency
constrained, as can be seen in the IX discussions about how the speed of
light over fiber is too slow even between Houston & Dallas.
The mitigations against standing queues (which cause bloat today) are:
o) Shrink the e2e bottleneck queue so it will drop packets in a flow and
TCP will respond to that "signal"
o) Use some form of ECN marking where the network forwarding plane
ultimately informs the TCP source state machine so it can slow down or
pace effectively. This can be an earlier feedback signal and, if done
well, can inform the sources to avoid bottleneck queuing. There are
couple of approaches with ECN. Comcast is trialing L4S now which seems
interesting to me as a WiFi test & measurement engineer. The jury is
still out on this and measurements are needed.
o) Mitigate source side bloat via TCP_NOTSENT_LOWAT
The QoS priority approach per congestion is orthogonal by my judgment as
it's typically not supported e2e, many networks will bleach DSCP
markings. And it's really too late by my judgment.
Also, on clock sync, yes your generation did us both a service and
disservice by getting rid of the PSTN TDM clock ;) So IP networking
devices kinda ignored clock sync, which makes e2e one way delay (OWD)
measurements impossible. Thankfully, the GPS atomic clock is now
available mostly everywhere and many devices use TCXO oscillators so
it's possible to get clock sync and use oscillators that can minimize
drift. I pay $14 for a Rpi4 GPS chip with pulse per second as an
example.
It seems silly to me that clocks aren't synced to the GPS atomic clock
even if by a proxy even if only for measurement and monitoring.
Note: As Richard Roy will point out, there really is no such thing as
synchronized clocks across geographies per general relativity - so those
syncing clocks need to keep those effects in mind. I limited the iperf 2
timestamps to microsecond precision in hopes avoiding those issues.
Note: With WiFi, a packet drop can occur because an intermittent RF
channel condition. TCP can't tell the difference between an RF drop vs a
congested queue drop. That's another reason ECN markings from network
devices may be better than dropped packets.
Note: I've added some iperf 2 test support around pacing as that seems
to be the direction the industry is heading as networks are less and
less capacity strained and user quality of experience is being driven by
tail latencies. One can also test with the Prague CCA for the L4S
scenarios. (This is a fun project: https://www.l4sgear.com/ and fairly
low cost)
--fq-rate n[kmgKMG]
Set a rate to be used with fair-queuing based socket-level pacing, in
bytes or bits per second. Only available on platforms supporting the
SO_MAX_PACING_RATE socket option. (Note: Here the suffixes indicate
bytes/sec or bits/sec per use of uppercase or lowercase, respectively)
--fq-rate-step n[kmgKMG]
Set a step of rate to be used with fair-queuing based socket-level
pacing, in bytes or bits per second. Step occurs every
fq-rate-step-interval (defaults to one second)
--fq-rate-step-interval n
Time in seconds before stepping the fq-rate
Bob
PS. Iperf 2 man page https://iperf2.sourceforge.io/iperf-manpage.html
> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
> latency. It's not just "rewarding" to have lower latencies; high
> latencies may make VGV unusable. Average (or "typical") latency as
> the FCC label proposes isn't a good metric to judge usability. A path
> which has high variance in latency can be unusable even if the average
> is quite low. Having your voice or video or gameplay "break up"
> every minute or so when latency spikes to 500 msec makes the "user
> experience" intolerable.
>
> A few years ago, I ran some simple "ping" tests to help a friend who
> was trying to use a gaming app. My data was only for one specific
> path so it's anecdotal. What I saw was surprising - zero data loss,
> every datagram was delivered, but occasionally a datagram would take
> up to 30 seconds to arrive. I didn't have the ability to poke around
> inside, but I suspected it was an experience of "bufferbloat", enabled
> by the dramatic drop in price of memory over the decades.
>
> It's been a long time since I was involved in operating any part of
> the Internet, so I don't know much about the inner workings today.
> Apologies for my ignorance....
>
> There was a scenario in the early days of the Internet for which we
> struggled to find a technical solution. Imagine some node in the
> bowels of the network, with 3 connected "circuits" to some other
> nodes. On two of those inputs, traffic is arriving to be forwarded
> out the third circuit. The incoming flows are significantly more than
> the outgoing path can accept.
>
> What happens? How is "backpressure" generated so that the incoming
> flows are reduced to the point that the outgoing circuit can handle
> the traffic?
>
> About 45 years ago, while we were defining TCPV4, we struggled with
> this issue, but didn't find any consensus solutions. So "placeholder"
> mechanisms were defined in TCPV4, to be replaced as research continued
> and found a good solution.
>
> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
> defined; it was to be sent by a switching node back toward the sender
> of any datagram that had to be discarded because there wasn't any
> place to put it.
>
> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
> were defined in IP.
>
> TOS would allow the sender to distinguish datagrams based on their
> needs. For example, we thought "Interactive" service might be needed
> for VGV traffic, where timeliness of delivery was most important.
> "Bulk" service might be useful for activities like file transfers,
> backups, et al. "Normal" service might now mean activities like
> using the Web.
>
> The TTL field was an attempt to inform each switching node about the
> "expiration date" for a datagram. If a node somehow knew that a
> particular datagram was unlikely to reach its destination in time to
> be useful (such as a video datagram for a frame that has already been
> displayed), the node could, and should, discard that datagram to free
> up resources for useful traffic. Sadly we had no mechanisms for
> measuring delay, either in transit or in queuing, so TTL was defined
> in terms of "hops", which is not an accurate proxy for time. But
> it's all we had.
>
> Part of the complexity was that the "flow control" mechanism of the
> Internet had put much of the mechanism in the users' computers' TCP
> implementations, rather than the switches which handle only IP.
> Without mechanisms in the users' computers, all a switch could do is
> order more circuits, and add more memory to the switches for queuing.
> Perhaps that led to "bufferbloat".
>
> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
> future release that would introduce a "real" form of Backpressure and
> the ability to handle different types of traffic. Meanwhile, these
> rudimentary mechanisms would provide some flow control. Hopefully the
> users' computers sending the flows would respond to the SQ
> backpressure, and switches would prioritize traffic using the TTL and
> TOS information.
>
> But, being way out of touch, I don't know what actually happens
> today. Perhaps the current operators and current government watchers
> can answer?:git clone https://rjmcmahon@git.code.sf.net/p/iperf2/code
> iperf2-code
>
> 1/ How do current switches exert Backpressure to reduce competing
> traffic flows? Do they still send SQs?
>
> 2/ How do the current and proposed government regulations treat the
> different needs of different types of traffic, e.g., "Bulk" versus
> "Interactive" versus "Normal"? Are Internet carriers permitted to
> treat traffic types differently? Are they permitted to charge
> different amounts for different types of service?
>
> Jack Haverty
>
> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>> For starters I would like to apologize for cc-ing both nanog and my
>> new nn list. (I will add sender filters)
>>
>> A bit more below.
>>
>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc>
>> wrote:
>>>> So for now, we'll keep paying for transit to get to the others
>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>> hoping someone at Google finally sees Houston as more than a third
>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>> City. Yeah, I think the former is more likely. 😊
>>>
>>> There is often a chicken/egg scenario here with the economics. As an
>>> eyeball network, your costs to build out and connect to Dallas are
>>> greater than your transit cost, so you do that. Totally fair.
>>>
>>> However think about it from the content side. Say I want to build
>>> into to Houston. I have to put routers in, and a bunch of cache
>>> servers, so I have capital outlay , plus opex for space, power,
>>> IX/backhaul/transit costs. That's not cheap, so there's a lot of
>>> calculations that go into it. Is there enough total eyeball traffic
>>> there to make it worth it? Is saving 8-10ms enough of a performance
>>> boost to justify the spend? What are the long term trends in that
>>> market? These answers are of course different for a company running
>>> their own CDN vs the commercial CDNs.
>>>
>>> I don't work for Google and obviously don't speak for them, but I
>>> would suspect that they're happy to eat a 8-10ms performance hit to
>>> serve from Dallas , versus the amount of capital outlay to build out
>>> there right now.
>> The three forms of traffic I care most about are voip, gaming, and
>> videoconferencing, which are rewarding to have at lower latencies.
>> When I was a kid, we had switched phone networks, and while the sound
>> quality was poorer than today, the voice latency cross-town was just
>> like "being there". Nowadays we see 500+ms latencies for this kind of
>> traffic.
>>
>> As to how to make calls across town work that well again, cost-wise, I
>> do not know, but the volume of traffic that would be better served by
>> these interconnects quite low, respective to the overall gains in
>> lower latency experiences for them.
>>
>>
>>
>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be
>>>> had for a good bit less than $900 on the wholesale market.
>>>>
>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with
>>>> rates almost costing as much as transit in many cases after you
>>>> factor in loop costs.
>>>>
>>>> For example, in the Houston market (one of the largest and fastest
>>>> growing regions in the US!), we do not have a major IX, so to get up
>>>> to Dallas it’s several thousand for a 100g wave, plus several
>>>> thousand for a 100g port on one of those major IXes. Or, a better
>>>> option, we can get a 100g flat internet transit for just a little
>>>> bit more.
>>>>
>>>> Fortunately, for us as an eyeball network, there are a good number
>>>> of major content networks that are allowing for private peering in
>>>> markets like Houston for just the cost of a cross connect and a QSFP
>>>> if you’re in the right DC, with Google and some others being the
>>>> outliers.
>>>>
>>>> So for now, we'll keep paying for transit to get to the others
>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>> hoping someone at Google finally sees Houston as more than a third
>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>> City. Yeah, I think the former is more likely. 😊
>>>>
>>>> See y’all in San Diego this week,
>>>> Tim
>>>>
>>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>>>>> This set of trendlines was very interesting. Unfortunately the
>>>>> data
>>>>> stops in 2015. Does anyone have more recent data?
>>>>>
>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>
>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>
>>>>> ...
>>>>>
>>>>> I am under the impression that many IXPs remain very successful,
>>>>> states without them suffer, and I also find the concept of doing
>>>>> micro
>>>>> IXPs at the city level, appealing, and now achievable with cheap
>>>>> gear.
>>>>> Finer grained cross connects between telco and ISP and IXP would
>>>>> lower
>>>>> latencies across town quite hugely...
>>>>>
>>>>> PS I hear ARIN is planning on dropping the price for, and bundling
>>>>> 3
>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Oct 30:
>>>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>> Dave Täht CSO, LibreQos
>>
>>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 19:59 ` Jack Haverty
2023-10-15 20:39 ` rjmcmahon
@ 2023-10-15 20:45 ` Sebastian Moeller
2023-10-16 1:39 ` [NNagain] The history of congestion control on the internet Dave Taht
2 siblings, 0 replies; 38+ messages in thread
From: Sebastian Moeller @ 2023-10-15 20:45 UTC (permalink / raw)
To: Network Neutrality is back! Let´s make the technical
aspects heard this time!
Hi Jack,
> On Oct 15, 2023, at 21:59, Jack Haverty via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>
> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about latency. It's not just "rewarding" to have lower latencies; high latencies may make VGV unusable. Average (or "typical") latency as the FCC label proposes isn't a good metric to judge usability. A path which has high variance in latency can be unusable even if the average is quite low. Having your voice or video or gameplay "break up" every minute or so when latency spikes to 500 msec makes the "user experience" intolerable.
>
> A few years ago, I ran some simple "ping" tests to help a friend who was trying to use a gaming app. My data was only for one specific path so it's anecdotal. What I saw was surprising - zero data loss, every datagram was delivered, but occasionally a datagram would take up to 30 seconds to arrive. I didn't have the ability to poke around inside, but I suspected it was an experience of "bufferbloat", enabled by the dramatic drop in price of memory over the decades.
>
> It's been a long time since I was involved in operating any part of the Internet, so I don't know much about the inner workings today. Apologies for my ignorance....
>
> There was a scenario in the early days of the Internet for which we struggled to find a technical solution. Imagine some node in the bowels of the network, with 3 connected "circuits" to some other nodes. On two of those inputs, traffic is arriving to be forwarded out the third circuit. The incoming flows are significantly more than the outgoing path can accept.
>
> What happens? How is "backpressure" generated so that the incoming flows are reduced to the point that the outgoing circuit can handle the traffic?
>
> About 45 years ago, while we were defining TCPV4, we struggled with this issue, but didn't find any consensus solutions. So "placeholder" mechanisms were defined in TCPV4, to be replaced as research continued and found a good solution.
>
> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was defined; it was to be sent by a switching node back toward the sender of any datagram that had to be discarded because there wasn't any place to put it.
>
> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields were defined in IP.
>
> TOS would allow the sender to distinguish datagrams based on their needs. For example, we thought "Interactive" service might be needed for VGV traffic, where timeliness of delivery was most important. "Bulk" service might be useful for activities like file transfers, backups, et al. "Normal" service might now mean activities like using the Web.
>
> The TTL field was an attempt to inform each switching node about the "expiration date" for a datagram. If a node somehow knew that a particular datagram was unlikely to reach its destination in time to be useful (such as a video datagram for a frame that has already been displayed), the node could, and should, discard that datagram to free up resources for useful traffic. Sadly we had no mechanisms for measuring delay, either in transit or in queuing, so TTL was defined in terms of "hops", which is not an accurate proxy for time. But it's all we had.
>
> Part of the complexity was that the "flow control" mechanism of the Internet had put much of the mechanism in the users' computers' TCP implementations, rather than the switches which handle only IP. Without mechanisms in the users' computers, all a switch could do is order more circuits, and add more memory to the switches for queuing. Perhaps that led to "bufferbloat".
>
> So TOS, SQ, and TTL were all placeholders, for some mechanism in a future release that would introduce a "real" form of Backpressure and the ability to handle different types of traffic. Meanwhile, these rudimentary mechanisms would provide some flow control. Hopefully the users' computers sending the flows would respond to the SQ backpressure, and switches would prioritize traffic using the TTL and TOS information.
>
> But, being way out of touch, I don't know what actually happens today. Perhaps the current operators and current government watchers can answer?:
>
> 1/ How do current switches exert Backpressure to reduce competing traffic flows? Do they still send SQs?
[SM] As far as i can tell SQ is considered a "failed" experiment at least over the open internet, as anybody can manufacture such quench messages and hence these pose an excellent DOS vector. In controlled environments however this idea keeps coming back (as it has the potential for faster signaling than piggy-backing a signal onto the forward packets and expect the receiver to reflect the signals back to the sender). But instead over the internet we have the receivers detect either packet drops or explicit signals of congestion (ECN or alternatives) and reflect these back to the senders that are then expected to respond appropriately*. The congested nodes really only can drop and/or use some sort of clever scheduling to not spread the overload on all connections but if push comes to shove dropping is really the only option, in your example if two ingress interfaces converge on a single egress interface of half the capacity, as long as the aggregate ingress rate exceeds the egress rate queues will grow and once these reached an end all the node can do is drop ingressing packets...
*) With lots of effort put into responding as gently as possible, not sure that from a perspective of internet stability we would not fare better with a strict "on congestion detection at least half the sending rate" mandate and a way to enforce that... but then I am not a CS or network expert, so what do I know.
> 2/ How do the current and proposed government regulations treat the different needs of different types of traffic, e.g., "Bulk" versus "Interactive" versus "Normal"? Are Internet carriers permitted to treat traffic types differently? Are they permitted to charge different amounts for different types of service?
[SM] I can only talk about the little I know about EU regulations; conceptually an ISP is to treat all traffic to/from its end-customers equally. But the ISP is free to use any king of service level agreement (SLA) with its upstreams*. They also can offer special services with other properties but not as premium internet access service. However if an ISP offers some QoS treatment configurable by its end-users that would IMHO by fair game... the main goal here is to avoid having the ISPs picking winners and losers in regards to content providers, end-users are free to do so if they wish. IMHO an ISP offering QoS services (opt-in and controlled by the end-user) might as well charge extra for that service. They are also permitted to charge differently based on access capacity (vulgo "speed") and extras (like fixed-line and/or mobile telephony volumes or flat rates).
*) As long as that does not blatantly affect the unbiased internet access by the ISPs end-users, that is a bit of a gray zone that current EU regulations carefully step around. I Think ISPs do this e.g. for their own VoIP traffic, and regulators and end-users generally seem to agree that working telephony is somewhat important. Net neutrality regulations really only demand that such a special treatment would be available to all VoIP traffic and not just the ISPs, but at least over here nobody seems to be fighting for this right now. Then again people generally also seem to be happy with 3rd party VoIP, what ever this means.
Regards
Sebastian
>
> Jack Haverty
>
> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>> For starters I would like to apologize for cc-ing both nanog and my
>> new nn list. (I will add sender filters)
>>
>> A bit more below.
>>
>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc> wrote:
>>>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>>>
>>> There is often a chicken/egg scenario here with the economics. As an eyeball network, your costs to build out and connect to Dallas are greater than your transit cost, so you do that. Totally fair.
>>>
>>> However think about it from the content side. Say I want to build into to Houston. I have to put routers in, and a bunch of cache servers, so I have capital outlay , plus opex for space, power, IX/backhaul/transit costs. That's not cheap, so there's a lot of calculations that go into it. Is there enough total eyeball traffic there to make it worth it? Is saving 8-10ms enough of a performance boost to justify the spend? What are the long term trends in that market? These answers are of course different for a company running their own CDN vs the commercial CDNs.
>>>
>>> I don't work for Google and obviously don't speak for them, but I would suspect that they're happy to eat a 8-10ms performance hit to serve from Dallas , versus the amount of capital outlay to build out there right now.
>> The three forms of traffic I care most about are voip, gaming, and
>> videoconferencing, which are rewarding to have at lower latencies.
>> When I was a kid, we had switched phone networks, and while the sound
>> quality was poorer than today, the voice latency cross-town was just
>> like "being there". Nowadays we see 500+ms latencies for this kind of
>> traffic.
>>
>> As to how to make calls across town work that well again, cost-wise, I
>> do not know, but the volume of traffic that would be better served by
>> these interconnects quite low, respective to the overall gains in
>> lower latency experiences for them.
>>
>>
>>
>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
>>>>
>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
>>>>
>>>> For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
>>>>
>>>> Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
>>>>
>>>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>>>>
>>>> See y’all in San Diego this week,
>>>> Tim
>>>>
>>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>>>>> This set of trendlines was very interesting. Unfortunately the data
>>>>> stops in 2015. Does anyone have more recent data?
>>>>>
>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>
>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>
>>>>> ...
>>>>>
>>>>> I am under the impression that many IXPs remain very successful,
>>>>> states without them suffer, and I also find the concept of doing micro
>>>>> IXPs at the city level, appealing, and now achievable with cheap gear.
>>>>> Finer grained cross connects between telco and ISP and IXP would lower
>>>>> latencies across town quite hugely...
>>>>>
>>>>> PS I hear ARIN is planning on dropping the price for, and bundling 3
>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>> Dave Täht CSO, LibreQos
>>
>>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 20:39 ` rjmcmahon
@ 2023-10-15 23:44 ` Karl Auerbach
2023-10-16 17:01 ` Dick Roy
1 sibling, 0 replies; 38+ messages in thread
From: Karl Auerbach @ 2023-10-15 23:44 UTC (permalink / raw)
To: rjmcmahon via Nnagain
[-- Attachment #1: Type: text/plain, Size: 14525 bytes --]
Thinking of networks not being fast enough .. we wrote this years upon
years ago at the Interop show. We shouldn't have been surprised, but we
were - a lot of "the press" believed this:
https://www.cavebear.com/cb_catalog/techno/gaganet/
Here's the introduction snippet, the rest is via the link above:
May 5, 1998:
Las Vegas, Networld+Interop
Today, the worlds greatest collection of networking professionals
gathered and constructed the first trans-relativistic network.
The NOC Team used hyper-fiber to create the first network not limited by
the speed of light. ...
etc etc
--karl--
On 10/15/23 1:39 PM, rjmcmahon via Nnagain wrote:
> Hi Jack,
>
> Thanks again for sharing. It's very interesting to me.
>
> Today, the networks are shifting from capacity constrained to latency
> constrained, as can be seen in the IX discussions about how the speed
> of light over fiber is too slow even between Houston & Dallas.
>
> The mitigations against standing queues (which cause bloat today) are:
>
> o) Shrink the e2e bottleneck queue so it will drop packets in a flow
> and TCP will respond to that "signal"
> o) Use some form of ECN marking where the network forwarding plane
> ultimately informs the TCP source state machine so it can slow down or
> pace effectively. This can be an earlier feedback signal and, if done
> well, can inform the sources to avoid bottleneck queuing. There are
> couple of approaches with ECN. Comcast is trialing L4S now which seems
> interesting to me as a WiFi test & measurement engineer. The jury is
> still out on this and measurements are needed.
> o) Mitigate source side bloat via TCP_NOTSENT_LOWAT
>
> The QoS priority approach per congestion is orthogonal by my judgment
> as it's typically not supported e2e, many networks will bleach DSCP
> markings. And it's really too late by my judgment.
>
> Also, on clock sync, yes your generation did us both a service and
> disservice by getting rid of the PSTN TDM clock ;) So IP networking
> devices kinda ignored clock sync, which makes e2e one way delay (OWD)
> measurements impossible. Thankfully, the GPS atomic clock is now
> available mostly everywhere and many devices use TCXO oscillators so
> it's possible to get clock sync and use oscillators that can minimize
> drift. I pay $14 for a Rpi4 GPS chip with pulse per second as an example.
>
> It seems silly to me that clocks aren't synced to the GPS atomic clock
> even if by a proxy even if only for measurement and monitoring.
>
> Note: As Richard Roy will point out, there really is no such thing as
> synchronized clocks across geographies per general relativity - so
> those syncing clocks need to keep those effects in mind. I limited the
> iperf 2 timestamps to microsecond precision in hopes avoiding those
> issues.
>
> Note: With WiFi, a packet drop can occur because an intermittent RF
> channel condition. TCP can't tell the difference between an RF drop vs
> a congested queue drop. That's another reason ECN markings from
> network devices may be better than dropped packets.
>
> Note: I've added some iperf 2 test support around pacing as that seems
> to be the direction the industry is heading as networks are less and
> less capacity strained and user quality of experience is being driven
> by tail latencies. One can also test with the Prague CCA for the L4S
> scenarios. (This is a fun project: https://www.l4sgear.com/ and fairly
> low cost)
>
> --fq-rate n[kmgKMG]
> Set a rate to be used with fair-queuing based socket-level pacing, in
> bytes or bits per second. Only available on platforms supporting the
> SO_MAX_PACING_RATE socket option. (Note: Here the suffixes indicate
> bytes/sec or bits/sec per use of uppercase or lowercase, respectively)
>
> --fq-rate-step n[kmgKMG]
> Set a step of rate to be used with fair-queuing based socket-level
> pacing, in bytes or bits per second. Step occurs every
> fq-rate-step-interval (defaults to one second)
>
> --fq-rate-step-interval n
> Time in seconds before stepping the fq-rate
>
> Bob
>
> PS. Iperf 2 man page https://iperf2.sourceforge.io/iperf-manpage.html
>
>> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
>> latency. It's not just "rewarding" to have lower latencies; high
>> latencies may make VGV unusable. Average (or "typical") latency as
>> the FCC label proposes isn't a good metric to judge usability. A path
>> which has high variance in latency can be unusable even if the average
>> is quite low. Having your voice or video or gameplay "break up"
>> every minute or so when latency spikes to 500 msec makes the "user
>> experience" intolerable.
>>
>> A few years ago, I ran some simple "ping" tests to help a friend who
>> was trying to use a gaming app. My data was only for one specific
>> path so it's anecdotal. What I saw was surprising - zero data loss,
>> every datagram was delivered, but occasionally a datagram would take
>> up to 30 seconds to arrive. I didn't have the ability to poke around
>> inside, but I suspected it was an experience of "bufferbloat", enabled
>> by the dramatic drop in price of memory over the decades.
>>
>> It's been a long time since I was involved in operating any part of
>> the Internet, so I don't know much about the inner workings today.
>> Apologies for my ignorance....
>>
>> There was a scenario in the early days of the Internet for which we
>> struggled to find a technical solution. Imagine some node in the
>> bowels of the network, with 3 connected "circuits" to some other
>> nodes. On two of those inputs, traffic is arriving to be forwarded
>> out the third circuit. The incoming flows are significantly more than
>> the outgoing path can accept.
>>
>> What happens? How is "backpressure" generated so that the incoming
>> flows are reduced to the point that the outgoing circuit can handle
>> the traffic?
>>
>> About 45 years ago, while we were defining TCPV4, we struggled with
>> this issue, but didn't find any consensus solutions. So "placeholder"
>> mechanisms were defined in TCPV4, to be replaced as research continued
>> and found a good solution.
>>
>> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
>> defined; it was to be sent by a switching node back toward the sender
>> of any datagram that had to be discarded because there wasn't any
>> place to put it.
>>
>> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
>> were defined in IP.
>>
>> TOS would allow the sender to distinguish datagrams based on their
>> needs. For example, we thought "Interactive" service might be needed
>> for VGV traffic, where timeliness of delivery was most important.
>> "Bulk" service might be useful for activities like file transfers,
>> backups, et al. "Normal" service might now mean activities like
>> using the Web.
>>
>> The TTL field was an attempt to inform each switching node about the
>> "expiration date" for a datagram. If a node somehow knew that a
>> particular datagram was unlikely to reach its destination in time to
>> be useful (such as a video datagram for a frame that has already been
>> displayed), the node could, and should, discard that datagram to free
>> up resources for useful traffic. Sadly we had no mechanisms for
>> measuring delay, either in transit or in queuing, so TTL was defined
>> in terms of "hops", which is not an accurate proxy for time. But
>> it's all we had.
>>
>> Part of the complexity was that the "flow control" mechanism of the
>> Internet had put much of the mechanism in the users' computers' TCP
>> implementations, rather than the switches which handle only IP.
>> Without mechanisms in the users' computers, all a switch could do is
>> order more circuits, and add more memory to the switches for queuing.
>> Perhaps that led to "bufferbloat".
>>
>> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
>> future release that would introduce a "real" form of Backpressure and
>> the ability to handle different types of traffic. Meanwhile, these
>> rudimentary mechanisms would provide some flow control. Hopefully the
>> users' computers sending the flows would respond to the SQ
>> backpressure, and switches would prioritize traffic using the TTL and
>> TOS information.
>>
>> But, being way out of touch, I don't know what actually happens
>> today. Perhaps the current operators and current government watchers
>> can answer?:git clone https://rjmcmahon@git.code.sf.net/p/iperf2/code
>> iperf2-code
>>
>> 1/ How do current switches exert Backpressure to reduce competing
>> traffic flows? Do they still send SQs?
>>
>> 2/ How do the current and proposed government regulations treat the
>> different needs of different types of traffic, e.g., "Bulk" versus
>> "Interactive" versus "Normal"? Are Internet carriers permitted to
>> treat traffic types differently? Are they permitted to charge
>> different amounts for different types of service?
>>
>> Jack Haverty
>>
>> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>>> For starters I would like to apologize for cc-ing both nanog and my
>>> new nn list. (I will add sender filters)
>>>
>>> A bit more below.
>>>
>>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc> wrote:
>>>>> So for now, we'll keep paying for transit to get to the others
>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>> City. Yeah, I think the former is more likely. 😊
>>>>
>>>> There is often a chicken/egg scenario here with the economics. As
>>>> an eyeball network, your costs to build out and connect to Dallas
>>>> are greater than your transit cost, so you do that. Totally fair.
>>>>
>>>> However think about it from the content side. Say I want to build
>>>> into to Houston. I have to put routers in, and a bunch of cache
>>>> servers, so I have capital outlay , plus opex for space, power,
>>>> IX/backhaul/transit costs. That's not cheap, so there's a lot of
>>>> calculations that go into it. Is there enough total eyeball traffic
>>>> there to make it worth it? Is saving 8-10ms enough of a performance
>>>> boost to justify the spend? What are the long term trends in that
>>>> market? These answers are of course different for a company running
>>>> their own CDN vs the commercial CDNs.
>>>>
>>>> I don't work for Google and obviously don't speak for them, but I
>>>> would suspect that they're happy to eat a 8-10ms performance hit to
>>>> serve from Dallas , versus the amount of capital outlay to build
>>>> out there right now.
>>> The three forms of traffic I care most about are voip, gaming, and
>>> videoconferencing, which are rewarding to have at lower latencies.
>>> When I was a kid, we had switched phone networks, and while the sound
>>> quality was poorer than today, the voice latency cross-town was just
>>> like "being there". Nowadays we see 500+ms latencies for this kind of
>>> traffic.
>>>
>>> As to how to make calls across town work that well again, cost-wise, I
>>> do not know, but the volume of traffic that would be better served by
>>> these interconnects quite low, respective to the overall gains in
>>> lower latency experiences for them.
>>>
>>>
>>>
>>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
>>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be
>>>>> had for a good bit less than $900 on the wholesale market.
>>>>>
>>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with
>>>>> rates almost costing as much as transit in many cases after you
>>>>> factor in loop costs.
>>>>>
>>>>> For example, in the Houston market (one of the largest and fastest
>>>>> growing regions in the US!), we do not have a major IX, so to get
>>>>> up to Dallas it’s several thousand for a 100g wave, plus several
>>>>> thousand for a 100g port on one of those major IXes. Or, a better
>>>>> option, we can get a 100g flat internet transit for just a little
>>>>> bit more.
>>>>>
>>>>> Fortunately, for us as an eyeball network, there are a good number
>>>>> of major content networks that are allowing for private peering in
>>>>> markets like Houston for just the cost of a cross connect and a
>>>>> QSFP if you’re in the right DC, with Google and some others being
>>>>> the outliers.
>>>>>
>>>>> So for now, we'll keep paying for transit to get to the others
>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>> City. Yeah, I think the former is more likely. 😊
>>>>>
>>>>> See y’all in San Diego this week,
>>>>> Tim
>>>>>
>>>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>>>>>> This set of trendlines was very interesting. Unfortunately the data
>>>>>> stops in 2015. Does anyone have more recent data?
>>>>>>
>>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>>
>>>>>>
>>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>>
>>>>>> ...
>>>>>>
>>>>>> I am under the impression that many IXPs remain very successful,
>>>>>> states without them suffer, and I also find the concept of doing
>>>>>> micro
>>>>>> IXPs at the city level, appealing, and now achievable with cheap
>>>>>> gear.
>>>>>> Finer grained cross connects between telco and ISP and IXP would
>>>>>> lower
>>>>>> latencies across town quite hugely...
>>>>>>
>>>>>> PS I hear ARIN is planning on dropping the price for, and bundling 3
>>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Oct 30:
>>>>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>> Dave Täht CSO, LibreQos
>>>
>>>
>>
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
[-- Attachment #2: Type: text/html, Size: 21300 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* [NNagain] The history of congestion control on the internet
2023-10-15 19:59 ` Jack Haverty
2023-10-15 20:39 ` rjmcmahon
2023-10-15 20:45 ` [NNagain] transit and peering costs projections Sebastian Moeller
@ 2023-10-16 1:39 ` Dave Taht
2023-10-16 6:30 ` Jack Haverty
2023-10-17 15:34 ` Dick Roy
2 siblings, 2 replies; 38+ messages in thread
From: Dave Taht @ 2023-10-16 1:39 UTC (permalink / raw)
To: Network Neutrality is back! Let´s make the technical
aspects heard this time!
It is wonderful to have your original perspectives here, Jack.
But please, everyone, before a major subject change, change the subject?
Jack's email conflates a few things that probably deserve other
threads for them. One is VGV - great acronym! Another is about the
"Placeholders" of TTL, and TOS. The last is the history of congestion
control - and it's future! As being a part of the most recent episodes
here I have written extensively on the subject, but what I most like
to point people to is my fun talks trying to make it more accessible
like this one at apnic
https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-but-its-not-over-yet/
or my more recent one at tti/vanguard.
Most recently one of our LibreQos clients has been collecting 10ms
samples and movies of what real-world residential traffic actually
looks like:
https://www.youtube.com/@trendaltoews7143
And it is my hope that that conveys intuition to others... as compared
to speedtest traffic, which prove nothing about the actual behaviors
of VGV traffic, which I ranted about here:
https://blog.cerowrt.org/post/speedtests/ - I am glad that these
speedtests now have latency under load reports almost universally, but
see the rant for more detail.
Most people only have a picture of traffic in the large, over 5 minute
intervals, which behaves quite differently, or a pre-conception that
backpressure actually exists across the internet. It doesn't. An
explicit ack for every packet was ripped out of the arpanet as costing
too much time. Wifi, to some extent, recreates the arpanet problem by
having explicit acks on the local loop that are repeated until by god
the packet comes through, usually without exponential backoff.
We have some really amazing encoding schemes now - I do not understand
how starlink works without retries for example, an my grip on 5G's
encodings is non-existent, except knowing it is the most bufferbloated
of all our technologies.
...
Anyway, my hope for this list is that we come up with useful technical
feedback to the powers-that-be that want to regulate the internet
under some title ii provisions, and I certainly hope we can make
strides towards fixing bufferbloat along the way! There are many other
issues. Let's talk about those instead!
But...
...
In "brief" response to the notes below - source quench died due to
easy ddos, AQMs from RED (1992) until codel (2012) struggled with
measuring the wrong things ( Kathie's updated paper on red in a
different light: https://pollere.net/Codel.html ), SFQ was adopted by
many devices, WRR used in others, ARED I think is common in juniper
boxes, fq_codel is pretty much the default now for most of linux, and
I helped write CAKE.
TCPs evolved from reno to vegas to cubic to bbr and the paper on BBR
is excellent: https://research.google/pubs/pub45646/ as is len
kleinrock's monograph on it. However problems with self congestion and
excessive packet loss were observed, and after entering the ietf
process, is now in it's 3rd revision, which looks pretty good.
Hardware pause frames in ethernet are often available, there are all
kinds of specialized new hardware flow control standards in 802.1, a
new more centralized controller in wifi7
To this day I have no idea how infiniband works. Or how ATM was
supposed to work. I have a good grip on wifi up to version 6, and the
work we did on wifi is in use now on a lot of wifi gear like openwrt,
eero and evenroute, and I am proudest of all my teams' work on
achieving airtime fairness, and better scheduling described in this
paper here: https://www.cs.kau.se/tohojo/airtime-fairness/ for wifi
and MOS to die for.
There is new work on this thing called L4S, which has a bunch of RFCs
for it, leverages multi-bit DCTCP style ECN and is under test by apple
and comcast, it is discussed on tsvwg list a lot. I encourage users to
jump in on the comcast/apple beta, and operators to at least read
this: https://datatracker.ietf.org/doc/draft-ietf-tsvwg-l4sops/
Knowing that there is a book or three left to write on this subject
that nobody will read is an issue, as is coming up with an
architecture to take packet handling as we know it, to the moon and
the rest of the solar system, seems kind of difficult.
Ideally I would love to be working on that earth-moon architecture
rather than trying to finish getting stuff we designed in 2012-2016
deployed.
I am going to pull out a few specific questions from the below and
answer separately.
On Sun, Oct 15, 2023 at 1:00 PM Jack Haverty via Nnagain
<nnagain@lists.bufferbloat.net> wrote:
>
> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
> latency. It's not just "rewarding" to have lower latencies; high
> latencies may make VGV unusable. Average (or "typical") latency as the
> FCC label proposes isn't a good metric to judge usability. A path which
> has high variance in latency can be unusable even if the average is
> quite low. Having your voice or video or gameplay "break up" every
> minute or so when latency spikes to 500 msec makes the "user experience"
> intolerable.
>
> A few years ago, I ran some simple "ping" tests to help a friend who was
> trying to use a gaming app. My data was only for one specific path so
> it's anecdotal. What I saw was surprising - zero data loss, every
> datagram was delivered, but occasionally a datagram would take up to 30
> seconds to arrive. I didn't have the ability to poke around inside, but
> I suspected it was an experience of "bufferbloat", enabled by the
> dramatic drop in price of memory over the decades.
>
> It's been a long time since I was involved in operating any part of the
> Internet, so I don't know much about the inner workings today. Apologies
> for my ignorance....
>
> There was a scenario in the early days of the Internet for which we
> struggled to find a technical solution. Imagine some node in the bowels
> of the network, with 3 connected "circuits" to some other nodes. On two
> of those inputs, traffic is arriving to be forwarded out the third
> circuit. The incoming flows are significantly more than the outgoing
> path can accept.
>
> What happens? How is "backpressure" generated so that the incoming
> flows are reduced to the point that the outgoing circuit can handle the
> traffic?
>
> About 45 years ago, while we were defining TCPV4, we struggled with this
> issue, but didn't find any consensus solutions. So "placeholder"
> mechanisms were defined in TCPV4, to be replaced as research continued
> and found a good solution.
>
> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
> defined; it was to be sent by a switching node back toward the sender of
> any datagram that had to be discarded because there wasn't any place to
> put it.
>
> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
> were defined in IP.
>
> TOS would allow the sender to distinguish datagrams based on their
> needs. For example, we thought "Interactive" service might be needed
> for VGV traffic, where timeliness of delivery was most important.
> "Bulk" service might be useful for activities like file transfers,
> backups, et al. "Normal" service might now mean activities like using
> the Web.
>
> The TTL field was an attempt to inform each switching node about the
> "expiration date" for a datagram. If a node somehow knew that a
> particular datagram was unlikely to reach its destination in time to be
> useful (such as a video datagram for a frame that has already been
> displayed), the node could, and should, discard that datagram to free up
> resources for useful traffic. Sadly we had no mechanisms for measuring
> delay, either in transit or in queuing, so TTL was defined in terms of
> "hops", which is not an accurate proxy for time. But it's all we had.
>
> Part of the complexity was that the "flow control" mechanism of the
> Internet had put much of the mechanism in the users' computers' TCP
> implementations, rather than the switches which handle only IP. Without
> mechanisms in the users' computers, all a switch could do is order more
> circuits, and add more memory to the switches for queuing. Perhaps that
> led to "bufferbloat".
>
> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
> future release that would introduce a "real" form of Backpressure and
> the ability to handle different types of traffic. Meanwhile, these
> rudimentary mechanisms would provide some flow control. Hopefully the
> users' computers sending the flows would respond to the SQ backpressure,
> and switches would prioritize traffic using the TTL and TOS information.
>
> But, being way out of touch, I don't know what actually happens today.
> Perhaps the current operators and current government watchers can answer?:
I would love moe feedback about RED''s deployment at scale in particular.
>
> 1/ How do current switches exert Backpressure to reduce competing
> traffic flows? Do they still send SQs?
Some send various forms of hardware flow control, an ethernet pause
frame derivative
> 2/ How do the current and proposed government regulations treat the
> different needs of different types of traffic, e.g., "Bulk" versus
> "Interactive" versus "Normal"? Are Internet carriers permitted to treat
> traffic types differently? Are they permitted to charge different
> amounts for different types of service?
> Jack Haverty
>
> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
> > For starters I would like to apologize for cc-ing both nanog and my
> > new nn list. (I will add sender filters)
> >
> > A bit more below.
> >
> > On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc> wrote:
> >>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
> >>
> >> There is often a chicken/egg scenario here with the economics. As an eyeball network, your costs to build out and connect to Dallas are greater than your transit cost, so you do that. Totally fair.
> >>
> >> However think about it from the content side. Say I want to build into to Houston. I have to put routers in, and a bunch of cache servers, so I have capital outlay , plus opex for space, power, IX/backhaul/transit costs. That's not cheap, so there's a lot of calculations that go into it. Is there enough total eyeball traffic there to make it worth it? Is saving 8-10ms enough of a performance boost to justify the spend? What are the long term trends in that market? These answers are of course different for a company running their own CDN vs the commercial CDNs.
> >>
> >> I don't work for Google and obviously don't speak for them, but I would suspect that they're happy to eat a 8-10ms performance hit to serve from Dallas , versus the amount of capital outlay to build out there right now.
> > The three forms of traffic I care most about are voip, gaming, and
> > videoconferencing, which are rewarding to have at lower latencies.
> > When I was a kid, we had switched phone networks, and while the sound
> > quality was poorer than today, the voice latency cross-town was just
> > like "being there". Nowadays we see 500+ms latencies for this kind of
> > traffic.
> >
> > As to how to make calls across town work that well again, cost-wise, I
> > do not know, but the volume of traffic that would be better served by
> > these interconnects quite low, respective to the overall gains in
> > lower latency experiences for them.
> >
> >
> >
> >> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
> >>> I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
> >>>
> >>> Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
> >>>
> >>> For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
> >>>
> >>> Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
> >>>
> >>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
> >>>
> >>> See y’all in San Diego this week,
> >>> Tim
> >>>
> >>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
> >>>> This set of trendlines was very interesting. Unfortunately the data
> >>>> stops in 2015. Does anyone have more recent data?
> >>>>
> >>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
> >>>>
> >>>> I believe a gbit circuit that an ISP can resell still runs at about
> >>>> $900 - $1.4k (?) in the usa? How about elsewhere?
> >>>>
> >>>> ...
> >>>>
> >>>> I am under the impression that many IXPs remain very successful,
> >>>> states without them suffer, and I also find the concept of doing micro
> >>>> IXPs at the city level, appealing, and now achievable with cheap gear.
> >>>> Finer grained cross connects between telco and ISP and IXP would lower
> >>>> latencies across town quite hugely...
> >>>>
> >>>> PS I hear ARIN is planning on dropping the price for, and bundling 3
> >>>> BGP AS numbers at a time, as of the end of this year, also.
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> >>>> Dave Täht CSO, LibreQos
> >
> >
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 16:45 ` Dave Taht
2023-10-15 19:59 ` Jack Haverty
@ 2023-10-16 3:33 ` Matthew Petach
1 sibling, 0 replies; 38+ messages in thread
From: Matthew Petach @ 2023-10-16 3:33 UTC (permalink / raw)
To: Dave Taht
Cc: Tom Beecher,
Network Neutrality is back! Let´s make the technical
aspects heard this time!,
NANOG
[-- Attachment #1: Type: text/plain, Size: 5188 bytes --]
On Sun, Oct 15, 2023 at 9:47 AM Dave Taht <dave.taht@gmail.com> wrote:
> [...]
> The three forms of traffic I care most about are voip, gaming, and
> videoconferencing, which are rewarding to have at lower latencies.
> When I was a kid, we had switched phone networks, and while the sound
> quality was poorer than today, the voice latency cross-town was just
> like "being there". Nowadays we see 500+ms latencies for this kind of
> traffic.
>
When you were a kid, the cost of voice calls across town were completely
dwarfed by the cost of long distance calls, which were insane by today's
standards. But let's take the $10/month local-only dialtone fee from 1980;
a typical household would spend less than 600 minutes a month on local
calls,
for a per-minute cost for local calls of about 1.6 cents/minute.
(data from https://babel.hathitrust.org/cgi/pt?id=umn.319510029171372&seq=75
)
Each call would use up a single trunk line--today, we would think of that
as an
ISDN BRI at 64Kbits. Doing the math, that meant on average you were using
64Kbit/sec*600minutes*60sec/min or 2304000Kbit per month (2.3 Gbit/month).
A 1Mbit/sec circuit, running constantly, has a capacity to transfer
2592Gbit/month.
So, a typical household used about 1/1000th of a 1Mbit/sec circuit, on
average,
but paid about $10/month for that. That works out to a comparative cost of
$10,000/Mbit/month in revenue from those local voice calls.
You can afford to put in a *LOT* of "just like "being there""
infrastructure when
you're charging your customers the equivalent of $10,000/month per Mbit to
talk across town. Remember, this isn't adding in any long-distance charges,
this is *just* for you to ring up Aunt Maude on the other side of town to
ask when
the bake sale starts on Saturday. So, that revenue is going into covering
the costs of backhaul to the local IXP, and to your ports on the local IXP,
to put it into modern terms.
> As to how to make calls across town work that well again, cost-wise, I
> do not know, but the volume of traffic that would be better served by
> these interconnects quite low, respective to the overall gains in
> lower latency experiences for them.
>
If you can figure out how to charge your customers equivalent pricing
again today, you'll have no trouble getting those calls across town to
work that well again.
Unfortunately, the consumers have gotten used to much lower
prices, and it's really, really hard to stuff the cat back into the
genie bottle again, to bludgeon a dead metaphor.
Not to mention customers have gotten much more used to the
smaller world we live in today, where everything IP is considered "local",
and you won't find many willing customers to pay a higher price for
communicating with far-away websites. Good luck getting customers
to sign up for split contracts, with one price for talking to the local IXP
in town, and a different, more expensive price to send traffic outside
the city to some far-away place like Prineville, OR! ;)
I think we often forget just how much of a massive inversion the
communications industry has undergone; back in the 80s, when
I started working in networking, everything was DS0 voice channels,
and data was just a strange side business that nobody in the telcos
really understood or wanted to sell to. At the time, the volume of money
being raked in from those DS0/VGE channels was mammoth compared
to the data networking side; we weren't even a rounding error. But as the
roles reversed and the pyramid inverted, the data networking costs didn't
rise to meet the voice costs (no matter how hard the telcos tried to push
VGE-mileage-based pricing models!
-- see https://transition.fcc.gov/form477/FVS/definitions_fvs.pdf)
Instead, once VoIP became possible, the high-revenue voice circuits
got pillaged, with more and more of the traffic being pulled off over to
the cheaper data side, until even internally the telcos saw the writing
on the wall, and started to move their trunked voice traffic over to IP
as well.
But as we moved away from the SS7-based signalling, with explicit
information about the locality of the destination exchange giving way
to more generic IP datagrams, the distinction of "local" versus
"long-distance"
became less meaningful, outside the regulatory tariff domain.
When everything is IP datagrams, making a call from you to a person on
the other side of town may just as easily be exchanged at an exchange point
1,000 miles away as it would be locally in town, depending upon where your
carrier and your friend's carriers happen to be network co-incident. So,
for
the consumer, the prices go drastically down, but in return, we accept
potentially higher latencies to exchange traffic that in earlier days would
have been kept strictly local.
Long-winded way of saying "yes, you can go back to how it was when
you were a kid--but can you get all your customers to agree to go back
to those pricing models as well?" ^_^;
Thanks!
Matt
> --
> Oct 30:
> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
>
[-- Attachment #2: Type: text/html, Size: 6982 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] The history of congestion control on the internet
2023-10-16 1:39 ` [NNagain] The history of congestion control on the internet Dave Taht
@ 2023-10-16 6:30 ` Jack Haverty
2023-10-16 17:21 ` Spencer Sevilla
2023-10-17 15:34 ` Dick Roy
1 sibling, 1 reply; 38+ messages in thread
From: Jack Haverty @ 2023-10-16 6:30 UTC (permalink / raw)
To: Dave Taht,
Network Neutrality is back! Let´s make the technical
aspects heard this time!
[-- Attachment #1: Type: text/plain, Size: 18983 bytes --]
Even back in 1978, I didn't think Source Quench would work. I recall
that I was trying to adapt my TCP2.5 Unix implementation to become TCP4,
and I asked what my TCP should do if it sent the first IP datagram to
open a TCP connection and received a Source Quench. It wasn't clear at
all how I should "slow down". Other TCP implementors took the receipt
of an SQ as an indication that a datagram they had sent had been
discarded, so the obvious reaction for user satisfaction was to
retransmit immediately. Slowing down would simply degrade their user's
experience.
Glad to hear SQ is gone. I hope whatever replaced it works.
There's some confusion about the Arpanet. The Arpanet was known as a
"packet switching network", but it had lots of internal mechanisms that
essentially created virtual circuits between attached computers. Every
packet sent in to the network by a user computer came out at the
destination intact, in order, and not duplicated or lost. The Arpanet
switches even had a hardware mechanism for flow control; a switch could
halt data transfer from a user computer when necessary. During the
80s, the Arpanet evolved to have an X.25 interface, and operated as a
true "virtual circuit" provider. Even in the Defense Data Network
(DDN), the network delivered a virtual circuit service. The attached
users' computers had TCP, but the TCP didn't need to deal with most of
the network behavior that TCP was designed to handle. Congestion was
similarly handled by internal Arpanet mechanisms (there were several
technical reports from BBN to ARPA with details). I don't remember
any time that "an explicit ack for every packet was ripped out of the
arpanet" None of those events happened when two TCP computers were
connected to the Arpanet.
The Internet grew up around the Arpanet, which provided most of the
wide-area connectivity through the mid-80s. Since the Arpanet provided
the same "reliable byte stream" behavior as TCP provided, and most user
computers were physically attached to an Arpanet switch, it wasn't
obvious how to test a TCP implementation, to see how well it dealt with
reordering, duplication, dropping, or corruption of IP datagrams.
We (at BBN) actually had to implement a software package called a
"Flakeway", which ran on a SparcStation. Using a "feature" of
Ethernets and ARP (some would call it a vulnerability), the Flakeway
could insert itself invisibly in the stream of datagrams between any two
computers on that LAN (e.g., between a user computer and the
gateway/router providing a path to other sites). The Flakeway could
then simulate "real" Internet behavior by dropping, duplicating,
reordering, mangling, delaying, or otherwise interfering with the
flow. That was extremely useful in testing and diagnosing TCP
implementations.
I understand that there has been a lot of technical work over the years,
and lots of new mechanisms defined for use in the Internet to solve
various problems. But one issue that has not been addressed -- how do
you know whether or not some such mechanism has actually been
implemented, and configured correctly, in the millions of devices that
are now using TCP (and UDP, IP, etc.)? AFAIK, there's no way to tell
unless you can examine the actual code.
The Internet, and TCP, was an experiment. One aspect of that experiment
involved changing the traditional role of a network "switch", and moving
mechanisms for flow control, error control, and other mechanisms used to
create a "virtual circuit" behavior. Instead of being implemented inside
some switching equipment, TCP's mechanisms are implemented inside users'
computers. That was a significant break from traditional network
architecture.
I didn't realize it at the time, but now, with users' devices being
uncountable handheld or desktop computers rather than huge racks in
relatively few data centers, moving all those mechanisms from switches
to users' computers significantly complicates the system design and
especially operation.
That may be one of the more important results of the long-running
experiment.
Jack Haverty
On 10/15/23 18:39, Dave Taht wrote:
> It is wonderful to have your original perspectives here, Jack.
>
> But please, everyone, before a major subject change, change the subject?
>
> Jack's email conflates a few things that probably deserve other
> threads for them. One is VGV - great acronym! Another is about the
> "Placeholders" of TTL, and TOS. The last is the history of congestion
> control - and it's future! As being a part of the most recent episodes
> here I have written extensively on the subject, but what I most like
> to point people to is my fun talks trying to make it more accessible
> like this one at apnic
> https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-but-its-not-over-yet/
> or my more recent one at tti/vanguard.
>
> Most recently one of our LibreQos clients has been collecting 10ms
> samples and movies of what real-world residential traffic actually
> looks like:
>
> https://www.youtube.com/@trendaltoews7143
>
> And it is my hope that that conveys intuition to others... as compared
> to speedtest traffic, which prove nothing about the actual behaviors
> of VGV traffic, which I ranted about here:
> https://blog.cerowrt.org/post/speedtests/ - I am glad that these
> speedtests now have latency under load reports almost universally, but
> see the rant for more detail.
>
> Most people only have a picture of traffic in the large, over 5 minute
> intervals, which behaves quite differently, or a pre-conception that
> backpressure actually exists across the internet. It doesn't. An
> explicit ack for every packet was ripped out of the arpanet as costing
> too much time. Wifi, to some extent, recreates the arpanet problem by
> having explicit acks on the local loop that are repeated until by god
> the packet comes through, usually without exponential backoff.
>
> We have some really amazing encoding schemes now - I do not understand
> how starlink works without retries for example, an my grip on 5G's
> encodings is non-existent, except knowing it is the most bufferbloated
> of all our technologies.
>
> ...
>
> Anyway, my hope for this list is that we come up with useful technical
> feedback to the powers-that-be that want to regulate the internet
> under some title ii provisions, and I certainly hope we can make
> strides towards fixing bufferbloat along the way! There are many other
> issues. Let's talk about those instead!
>
> But...
> ...
>
> In "brief" response to the notes below - source quench died due to
> easy ddos, AQMs from RED (1992) until codel (2012) struggled with
> measuring the wrong things ( Kathie's updated paper on red in a
> different light:https://pollere.net/Codel.html ), SFQ was adopted by
> many devices, WRR used in others, ARED I think is common in juniper
> boxes, fq_codel is pretty much the default now for most of linux, and
> I helped write CAKE.
>
> TCPs evolved from reno to vegas to cubic to bbr and the paper on BBR
> is excellent:https://research.google/pubs/pub45646/ as is len
> kleinrock's monograph on it. However problems with self congestion and
> excessive packet loss were observed, and after entering the ietf
> process, is now in it's 3rd revision, which looks pretty good.
>
> Hardware pause frames in ethernet are often available, there are all
> kinds of specialized new hardware flow control standards in 802.1, a
> new more centralized controller in wifi7
>
> To this day I have no idea how infiniband works. Or how ATM was
> supposed to work. I have a good grip on wifi up to version 6, and the
> work we did on wifi is in use now on a lot of wifi gear like openwrt,
> eero and evenroute, and I am proudest of all my teams' work on
> achieving airtime fairness, and better scheduling described in this
> paper here:https://www.cs.kau.se/tohojo/airtime-fairness/ for wifi
> and MOS to die for.
>
> There is new work on this thing called L4S, which has a bunch of RFCs
> for it, leverages multi-bit DCTCP style ECN and is under test by apple
> and comcast, it is discussed on tsvwg list a lot. I encourage users to
> jump in on the comcast/apple beta, and operators to at least read
> this:https://datatracker.ietf.org/doc/draft-ietf-tsvwg-l4sops/
>
> Knowing that there is a book or three left to write on this subject
> that nobody will read is an issue, as is coming up with an
> architecture to take packet handling as we know it, to the moon and
> the rest of the solar system, seems kind of difficult.
>
> Ideally I would love to be working on that earth-moon architecture
> rather than trying to finish getting stuff we designed in 2012-2016
> deployed.
>
> I am going to pull out a few specific questions from the below and
> answer separately.
>
> On Sun, Oct 15, 2023 at 1:00 PM Jack Haverty via Nnagain
> <nnagain@lists.bufferbloat.net> wrote:
>> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
>> latency. It's not just "rewarding" to have lower latencies; high
>> latencies may make VGV unusable. Average (or "typical") latency as the
>> FCC label proposes isn't a good metric to judge usability. A path which
>> has high variance in latency can be unusable even if the average is
>> quite low. Having your voice or video or gameplay "break up" every
>> minute or so when latency spikes to 500 msec makes the "user experience"
>> intolerable.
>>
>> A few years ago, I ran some simple "ping" tests to help a friend who was
>> trying to use a gaming app. My data was only for one specific path so
>> it's anecdotal. What I saw was surprising - zero data loss, every
>> datagram was delivered, but occasionally a datagram would take up to 30
>> seconds to arrive. I didn't have the ability to poke around inside, but
>> I suspected it was an experience of "bufferbloat", enabled by the
>> dramatic drop in price of memory over the decades.
>>
>> It's been a long time since I was involved in operating any part of the
>> Internet, so I don't know much about the inner workings today. Apologies
>> for my ignorance....
>>
>> There was a scenario in the early days of the Internet for which we
>> struggled to find a technical solution. Imagine some node in the bowels
>> of the network, with 3 connected "circuits" to some other nodes. On two
>> of those inputs, traffic is arriving to be forwarded out the third
>> circuit. The incoming flows are significantly more than the outgoing
>> path can accept.
>>
>> What happens? How is "backpressure" generated so that the incoming
>> flows are reduced to the point that the outgoing circuit can handle the
>> traffic?
>>
>> About 45 years ago, while we were defining TCPV4, we struggled with this
>> issue, but didn't find any consensus solutions. So "placeholder"
>> mechanisms were defined in TCPV4, to be replaced as research continued
>> and found a good solution.
>>
>> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
>> defined; it was to be sent by a switching node back toward the sender of
>> any datagram that had to be discarded because there wasn't any place to
>> put it.
>>
>> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
>> were defined in IP.
>>
>> TOS would allow the sender to distinguish datagrams based on their
>> needs. For example, we thought "Interactive" service might be needed
>> for VGV traffic, where timeliness of delivery was most important.
>> "Bulk" service might be useful for activities like file transfers,
>> backups, et al. "Normal" service might now mean activities like using
>> the Web.
>>
>> The TTL field was an attempt to inform each switching node about the
>> "expiration date" for a datagram. If a node somehow knew that a
>> particular datagram was unlikely to reach its destination in time to be
>> useful (such as a video datagram for a frame that has already been
>> displayed), the node could, and should, discard that datagram to free up
>> resources for useful traffic. Sadly we had no mechanisms for measuring
>> delay, either in transit or in queuing, so TTL was defined in terms of
>> "hops", which is not an accurate proxy for time. But it's all we had.
>>
>> Part of the complexity was that the "flow control" mechanism of the
>> Internet had put much of the mechanism in the users' computers' TCP
>> implementations, rather than the switches which handle only IP. Without
>> mechanisms in the users' computers, all a switch could do is order more
>> circuits, and add more memory to the switches for queuing. Perhaps that
>> led to "bufferbloat".
>>
>> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
>> future release that would introduce a "real" form of Backpressure and
>> the ability to handle different types of traffic. Meanwhile, these
>> rudimentary mechanisms would provide some flow control. Hopefully the
>> users' computers sending the flows would respond to the SQ backpressure,
>> and switches would prioritize traffic using the TTL and TOS information.
>>
>> But, being way out of touch, I don't know what actually happens today.
>> Perhaps the current operators and current government watchers can answer?:
> I would love moe feedback about RED''s deployment at scale in particular.
>
>> 1/ How do current switches exert Backpressure to reduce competing
>> traffic flows? Do they still send SQs?
> Some send various forms of hardware flow control, an ethernet pause
> frame derivative
>
>> 2/ How do the current and proposed government regulations treat the
>> different needs of different types of traffic, e.g., "Bulk" versus
>> "Interactive" versus "Normal"? Are Internet carriers permitted to treat
>> traffic types differently? Are they permitted to charge different
>> amounts for different types of service?
>
>> Jack Haverty
>>
>> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>>> For starters I would like to apologize for cc-ing both nanog and my
>>> new nn list. (I will add sender filters)
>>>
>>> A bit more below.
>>>
>>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher<beecher@beecher.cc> wrote:
>>>>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>>>> There is often a chicken/egg scenario here with the economics. As an eyeball network, your costs to build out and connect to Dallas are greater than your transit cost, so you do that. Totally fair.
>>>>
>>>> However think about it from the content side. Say I want to build into to Houston. I have to put routers in, and a bunch of cache servers, so I have capital outlay , plus opex for space, power, IX/backhaul/transit costs. That's not cheap, so there's a lot of calculations that go into it. Is there enough total eyeball traffic there to make it worth it? Is saving 8-10ms enough of a performance boost to justify the spend? What are the long term trends in that market? These answers are of course different for a company running their own CDN vs the commercial CDNs.
>>>>
>>>> I don't work for Google and obviously don't speak for them, but I would suspect that they're happy to eat a 8-10ms performance hit to serve from Dallas , versus the amount of capital outlay to build out there right now.
>>> The three forms of traffic I care most about are voip, gaming, and
>>> videoconferencing, which are rewarding to have at lower latencies.
>>> When I was a kid, we had switched phone networks, and while the sound
>>> quality was poorer than today, the voice latency cross-town was just
>>> like "being there". Nowadays we see 500+ms latencies for this kind of
>>> traffic.
>>>
>>> As to how to make calls across town work that well again, cost-wise, I
>>> do not know, but the volume of traffic that would be better served by
>>> these interconnects quite low, respective to the overall gains in
>>> lower latency experiences for them.
>>>
>>>
>>>
>>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke<tim@mid.net> wrote:
>>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
>>>>>
>>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
>>>>>
>>>>> For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
>>>>>
>>>>> Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
>>>>>
>>>>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>>>>>
>>>>> See y’all in San Diego this week,
>>>>> Tim
>>>>>
>>>>> On Oct 14, 2023, at 18:04, Dave Taht<dave.taht@gmail.com> wrote:
>>>>>> This set of trendlines was very interesting. Unfortunately the data
>>>>>> stops in 2015. Does anyone have more recent data?
>>>>>>
>>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>>
>>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>>
>>>>>> ...
>>>>>>
>>>>>> I am under the impression that many IXPs remain very successful,
>>>>>> states without them suffer, and I also find the concept of doing micro
>>>>>> IXPs at the city level, appealing, and now achievable with cheap gear.
>>>>>> Finer grained cross connects between telco and ISP and IXP would lower
>>>>>> latencies across town quite hugely...
>>>>>>
>>>>>> PS I hear ARIN is planning on dropping the price for, and bundling 3
>>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Oct 30:https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>> Dave Täht CSO, LibreQos
>>>
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>
>
[-- Attachment #2: Type: text/html, Size: 21780 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-15 20:39 ` rjmcmahon
2023-10-15 23:44 ` Karl Auerbach
@ 2023-10-16 17:01 ` Dick Roy
2023-10-16 17:35 ` Jack Haverty
2023-10-16 17:36 ` Sebastian Moeller
1 sibling, 2 replies; 38+ messages in thread
From: Dick Roy @ 2023-10-16 17:01 UTC (permalink / raw)
To: 'Network Neutrality is back! Let´s make the technical
aspects heard this time!'
Just an observation: ANY type of congestion control that changes application behavior in response to congestion, or predicted congestion (ENC), begs the question "How does throttling of application information exchange rate (aka behavior) affect the user experience and will the user tolerate it?"
Given any (complex and packet-switched) network topology of interconnected nodes and links, each with possible a different capacity and characteristics, such as the internet today, IMO the two fundamental questions are:
1) How can a given network be operated/configured so as to maximize aggregate throughput (i.e. achieve its theoretical capacity), and
2) What things in the network need to change to increase the throughput (aka parameters in the network with the largest Lagrange multipliers associated with them)?
I am not an expert in this field, however it seems to me that answers to these questions would be useful, assuming they are not yet available!
Cheers,
RR
-----Original Message-----
From: Nnagain [mailto:nnagain-bounces@lists.bufferbloat.net] On Behalf Of rjmcmahon via Nnagain
Sent: Sunday, October 15, 2023 1:39 PM
To: Network Neutrality is back! Let´s make the technical aspects heard this time!
Cc: rjmcmahon
Subject: Re: [NNagain] transit and peering costs projections
Hi Jack,
Thanks again for sharing. It's very interesting to me.
Today, the networks are shifting from capacity constrained to latency
constrained, as can be seen in the IX discussions about how the speed of
light over fiber is too slow even between Houston & Dallas.
The mitigations against standing queues (which cause bloat today) are:
o) Shrink the e2e bottleneck queue so it will drop packets in a flow and
TCP will respond to that "signal"
o) Use some form of ECN marking where the network forwarding plane
ultimately informs the TCP source state machine so it can slow down or
pace effectively. This can be an earlier feedback signal and, if done
well, can inform the sources to avoid bottleneck queuing. There are
couple of approaches with ECN. Comcast is trialing L4S now which seems
interesting to me as a WiFi test & measurement engineer. The jury is
still out on this and measurements are needed.
o) Mitigate source side bloat via TCP_NOTSENT_LOWAT
The QoS priority approach per congestion is orthogonal by my judgment as
it's typically not supported e2e, many networks will bleach DSCP
markings. And it's really too late by my judgment.
Also, on clock sync, yes your generation did us both a service and
disservice by getting rid of the PSTN TDM clock ;) So IP networking
devices kinda ignored clock sync, which makes e2e one way delay (OWD)
measurements impossible. Thankfully, the GPS atomic clock is now
available mostly everywhere and many devices use TCXO oscillators so
it's possible to get clock sync and use oscillators that can minimize
drift. I pay $14 for a Rpi4 GPS chip with pulse per second as an
example.
It seems silly to me that clocks aren't synced to the GPS atomic clock
even if by a proxy even if only for measurement and monitoring.
Note: As Richard Roy will point out, there really is no such thing as
synchronized clocks across geographies per general relativity - so those
syncing clocks need to keep those effects in mind. I limited the iperf 2
timestamps to microsecond precision in hopes avoiding those issues.
Note: With WiFi, a packet drop can occur because an intermittent RF
channel condition. TCP can't tell the difference between an RF drop vs a
congested queue drop. That's another reason ECN markings from network
devices may be better than dropped packets.
Note: I've added some iperf 2 test support around pacing as that seems
to be the direction the industry is heading as networks are less and
less capacity strained and user quality of experience is being driven by
tail latencies. One can also test with the Prague CCA for the L4S
scenarios. (This is a fun project: https://www.l4sgear.com/ and fairly
low cost)
--fq-rate n[kmgKMG]
Set a rate to be used with fair-queuing based socket-level pacing, in
bytes or bits per second. Only available on platforms supporting the
SO_MAX_PACING_RATE socket option. (Note: Here the suffixes indicate
bytes/sec or bits/sec per use of uppercase or lowercase, respectively)
--fq-rate-step n[kmgKMG]
Set a step of rate to be used with fair-queuing based socket-level
pacing, in bytes or bits per second. Step occurs every
fq-rate-step-interval (defaults to one second)
--fq-rate-step-interval n
Time in seconds before stepping the fq-rate
Bob
PS. Iperf 2 man page https://iperf2.sourceforge.io/iperf-manpage.html
> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
> latency. It's not just "rewarding" to have lower latencies; high
> latencies may make VGV unusable. Average (or "typical") latency as
> the FCC label proposes isn't a good metric to judge usability. A path
> which has high variance in latency can be unusable even if the average
> is quite low. Having your voice or video or gameplay "break up"
> every minute or so when latency spikes to 500 msec makes the "user
> experience" intolerable.
>
> A few years ago, I ran some simple "ping" tests to help a friend who
> was trying to use a gaming app. My data was only for one specific
> path so it's anecdotal. What I saw was surprising - zero data loss,
> every datagram was delivered, but occasionally a datagram would take
> up to 30 seconds to arrive. I didn't have the ability to poke around
> inside, but I suspected it was an experience of "bufferbloat", enabled
> by the dramatic drop in price of memory over the decades.
>
> It's been a long time since I was involved in operating any part of
> the Internet, so I don't know much about the inner workings today.
> Apologies for my ignorance....
>
> There was a scenario in the early days of the Internet for which we
> struggled to find a technical solution. Imagine some node in the
> bowels of the network, with 3 connected "circuits" to some other
> nodes. On two of those inputs, traffic is arriving to be forwarded
> out the third circuit. The incoming flows are significantly more than
> the outgoing path can accept.
>
> What happens? How is "backpressure" generated so that the incoming
> flows are reduced to the point that the outgoing circuit can handle
> the traffic?
>
> About 45 years ago, while we were defining TCPV4, we struggled with
> this issue, but didn't find any consensus solutions. So "placeholder"
> mechanisms were defined in TCPV4, to be replaced as research continued
> and found a good solution.
>
> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
> defined; it was to be sent by a switching node back toward the sender
> of any datagram that had to be discarded because there wasn't any
> place to put it.
>
> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
> were defined in IP.
>
> TOS would allow the sender to distinguish datagrams based on their
> needs. For example, we thought "Interactive" service might be needed
> for VGV traffic, where timeliness of delivery was most important.
> "Bulk" service might be useful for activities like file transfers,
> backups, et al. "Normal" service might now mean activities like
> using the Web.
>
> The TTL field was an attempt to inform each switching node about the
> "expiration date" for a datagram. If a node somehow knew that a
> particular datagram was unlikely to reach its destination in time to
> be useful (such as a video datagram for a frame that has already been
> displayed), the node could, and should, discard that datagram to free
> up resources for useful traffic. Sadly we had no mechanisms for
> measuring delay, either in transit or in queuing, so TTL was defined
> in terms of "hops", which is not an accurate proxy for time. But
> it's all we had.
>
> Part of the complexity was that the "flow control" mechanism of the
> Internet had put much of the mechanism in the users' computers' TCP
> implementations, rather than the switches which handle only IP.
> Without mechanisms in the users' computers, all a switch could do is
> order more circuits, and add more memory to the switches for queuing.
> Perhaps that led to "bufferbloat".
>
> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
> future release that would introduce a "real" form of Backpressure and
> the ability to handle different types of traffic. Meanwhile, these
> rudimentary mechanisms would provide some flow control. Hopefully the
> users' computers sending the flows would respond to the SQ
> backpressure, and switches would prioritize traffic using the TTL and
> TOS information.
>
> But, being way out of touch, I don't know what actually happens
> today. Perhaps the current operators and current government watchers
> can answer?:git clone https://rjmcmahon@git.code.sf.net/p/iperf2/code
> iperf2-code
>
> 1/ How do current switches exert Backpressure to reduce competing
> traffic flows? Do they still send SQs?
>
> 2/ How do the current and proposed government regulations treat the
> different needs of different types of traffic, e.g., "Bulk" versus
> "Interactive" versus "Normal"? Are Internet carriers permitted to
> treat traffic types differently? Are they permitted to charge
> different amounts for different types of service?
>
> Jack Haverty
>
> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>> For starters I would like to apologize for cc-ing both nanog and my
>> new nn list. (I will add sender filters)
>>
>> A bit more below.
>>
>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc>
>> wrote:
>>>> So for now, we'll keep paying for transit to get to the others
>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>> hoping someone at Google finally sees Houston as more than a third
>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>> City. Yeah, I think the former is more likely. 😊
>>>
>>> There is often a chicken/egg scenario here with the economics. As an
>>> eyeball network, your costs to build out and connect to Dallas are
>>> greater than your transit cost, so you do that. Totally fair.
>>>
>>> However think about it from the content side. Say I want to build
>>> into to Houston. I have to put routers in, and a bunch of cache
>>> servers, so I have capital outlay , plus opex for space, power,
>>> IX/backhaul/transit costs. That's not cheap, so there's a lot of
>>> calculations that go into it. Is there enough total eyeball traffic
>>> there to make it worth it? Is saving 8-10ms enough of a performance
>>> boost to justify the spend? What are the long term trends in that
>>> market? These answers are of course different for a company running
>>> their own CDN vs the commercial CDNs.
>>>
>>> I don't work for Google and obviously don't speak for them, but I
>>> would suspect that they're happy to eat a 8-10ms performance hit to
>>> serve from Dallas , versus the amount of capital outlay to build out
>>> there right now.
>> The three forms of traffic I care most about are voip, gaming, and
>> videoconferencing, which are rewarding to have at lower latencies.
>> When I was a kid, we had switched phone networks, and while the sound
>> quality was poorer than today, the voice latency cross-town was just
>> like "being there". Nowadays we see 500+ms latencies for this kind of
>> traffic.
>>
>> As to how to make calls across town work that well again, cost-wise, I
>> do not know, but the volume of traffic that would be better served by
>> these interconnects quite low, respective to the overall gains in
>> lower latency experiences for them.
>>
>>
>>
>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be
>>>> had for a good bit less than $900 on the wholesale market.
>>>>
>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with
>>>> rates almost costing as much as transit in many cases after you
>>>> factor in loop costs.
>>>>
>>>> For example, in the Houston market (one of the largest and fastest
>>>> growing regions in the US!), we do not have a major IX, so to get up
>>>> to Dallas it’s several thousand for a 100g wave, plus several
>>>> thousand for a 100g port on one of those major IXes. Or, a better
>>>> option, we can get a 100g flat internet transit for just a little
>>>> bit more.
>>>>
>>>> Fortunately, for us as an eyeball network, there are a good number
>>>> of major content networks that are allowing for private peering in
>>>> markets like Houston for just the cost of a cross connect and a QSFP
>>>> if you’re in the right DC, with Google and some others being the
>>>> outliers.
>>>>
>>>> So for now, we'll keep paying for transit to get to the others
>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>> hoping someone at Google finally sees Houston as more than a third
>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>> City. Yeah, I think the former is more likely. 😊
>>>>
>>>> See y’all in San Diego this week,
>>>> Tim
>>>>
>>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>>>>> This set of trendlines was very interesting. Unfortunately the
>>>>> data
>>>>> stops in 2015. Does anyone have more recent data?
>>>>>
>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>
>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>
>>>>> ...
>>>>>
>>>>> I am under the impression that many IXPs remain very successful,
>>>>> states without them suffer, and I also find the concept of doing
>>>>> micro
>>>>> IXPs at the city level, appealing, and now achievable with cheap
>>>>> gear.
>>>>> Finer grained cross connects between telco and ISP and IXP would
>>>>> lower
>>>>> latencies across town quite hugely...
>>>>>
>>>>> PS I hear ARIN is planning on dropping the price for, and bundling
>>>>> 3
>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Oct 30:
>>>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>> Dave Täht CSO, LibreQos
>>
>>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
_______________________________________________
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] The history of congestion control on the internet
2023-10-16 6:30 ` Jack Haverty
@ 2023-10-16 17:21 ` Spencer Sevilla
2023-10-16 17:37 ` Robert McMahon
0 siblings, 1 reply; 38+ messages in thread
From: Spencer Sevilla @ 2023-10-16 17:21 UTC (permalink / raw)
To: Network Neutrality is back! Let´s make the technical
aspects heard this time!
[-- Attachment #1: Type: text/plain, Size: 20184 bytes --]
That Flakeway tool makes me think of an early version of the Chaos Monkey. To that note, Apple maintains a developer tool called Network Link Conditioner that does a good job simulating reduced network performance.
> On Oct 15, 2023, at 23:30, Jack Haverty via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>
> Even back in 1978, I didn't think Source Quench would work. I recall that I was trying to adapt my TCP2.5 Unix implementation to become TCP4, and I asked what my TCP should do if it sent the first IP datagram to open a TCP connection and received a Source Quench. It wasn't clear at all how I should "slow down". Other TCP implementors took the receipt of an SQ as an indication that a datagram they had sent had been discarded, so the obvious reaction for user satisfaction was to retransmit immediately. Slowing down would simply degrade their user's experience.
>
> Glad to hear SQ is gone. I hope whatever replaced it works.
>
> There's some confusion about the Arpanet. The Arpanet was known as a "packet switching network", but it had lots of internal mechanisms that essentially created virtual circuits between attached computers. Every packet sent in to the network by a user computer came out at the destination intact, in order, and not duplicated or lost. The Arpanet switches even had a hardware mechanism for flow control; a switch could halt data transfer from a user computer when necessary. During the 80s, the Arpanet evolved to have an X.25 interface, and operated as a true "virtual circuit" provider. Even in the Defense Data Network (DDN), the network delivered a virtual circuit service. The attached users' computers had TCP, but the TCP didn't need to deal with most of the network behavior that TCP was designed to handle. Congestion was similarly handled by internal Arpanet mechanisms (there were several technical reports from BBN to ARPA with details). I don't remember any time that "an explicit ack for every packet was ripped out of the arpanet" None of those events happened when two TCP computers were connected to the Arpanet.
>
> The Internet grew up around the Arpanet, which provided most of the wide-area connectivity through the mid-80s. Since the Arpanet provided the same "reliable byte stream" behavior as TCP provided, and most user computers were physically attached to an Arpanet switch, it wasn't obvious how to test a TCP implementation, to see how well it dealt with reordering, duplication, dropping, or corruption of IP datagrams.
>
> We (at BBN) actually had to implement a software package called a "Flakeway", which ran on a SparcStation. Using a "feature" of Ethernets and ARP (some would call it a vulnerability), the Flakeway could insert itself invisibly in the stream of datagrams between any two computers on that LAN (e.g., between a user computer and the gateway/router providing a path to other sites). The Flakeway could then simulate "real" Internet behavior by dropping, duplicating, reordering, mangling, delaying, or otherwise interfering with the flow. That was extremely useful in testing and diagnosing TCP implementations.
>
> I understand that there has been a lot of technical work over the years, and lots of new mechanisms defined for use in the Internet to solve various problems. But one issue that has not been addressed -- how do you know whether or not some such mechanism has actually been implemented, and configured correctly, in the millions of devices that are now using TCP (and UDP, IP, etc.)? AFAIK, there's no way to tell unless you can examine the actual code.
>
> The Internet, and TCP, was an experiment. One aspect of that experiment involved changing the traditional role of a network "switch", and moving mechanisms for flow control, error control, and other mechanisms used to create a "virtual circuit" behavior. Instead of being implemented inside some switching equipment, TCP's mechanisms are implemented inside users' computers. That was a significant break from traditional network architecture.
>
> I didn't realize it at the time, but now, with users' devices being uncountable handheld or desktop computers rather than huge racks in relatively few data centers, moving all those mechanisms from switches to users' computers significantly complicates the system design and especially operation.
>
> That may be one of the more important results of the long-running experiment.
>
> Jack Haverty
>
> On 10/15/23 18:39, Dave Taht wrote:
>> It is wonderful to have your original perspectives here, Jack.
>>
>> But please, everyone, before a major subject change, change the subject?
>>
>> Jack's email conflates a few things that probably deserve other
>> threads for them. One is VGV - great acronym! Another is about the
>> "Placeholders" of TTL, and TOS. The last is the history of congestion
>> control - and it's future! As being a part of the most recent episodes
>> here I have written extensively on the subject, but what I most like
>> to point people to is my fun talks trying to make it more accessible
>> like this one at apnic
>> https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-but-its-not-over-yet/
>> or my more recent one at tti/vanguard.
>>
>> Most recently one of our LibreQos clients has been collecting 10ms
>> samples and movies of what real-world residential traffic actually
>> looks like:
>>
>> https://www.youtube.com/@trendaltoews7143
>>
>> And it is my hope that that conveys intuition to others... as compared
>> to speedtest traffic, which prove nothing about the actual behaviors
>> of VGV traffic, which I ranted about here:
>> https://blog.cerowrt.org/post/speedtests/ - I am glad that these
>> speedtests now have latency under load reports almost universally, but
>> see the rant for more detail.
>>
>> Most people only have a picture of traffic in the large, over 5 minute
>> intervals, which behaves quite differently, or a pre-conception that
>> backpressure actually exists across the internet. It doesn't. An
>> explicit ack for every packet was ripped out of the arpanet as costing
>> too much time. Wifi, to some extent, recreates the arpanet problem by
>> having explicit acks on the local loop that are repeated until by god
>> the packet comes through, usually without exponential backoff.
>>
>> We have some really amazing encoding schemes now - I do not understand
>> how starlink works without retries for example, an my grip on 5G's
>> encodings is non-existent, except knowing it is the most bufferbloated
>> of all our technologies.
>>
>> ...
>>
>> Anyway, my hope for this list is that we come up with useful technical
>> feedback to the powers-that-be that want to regulate the internet
>> under some title ii provisions, and I certainly hope we can make
>> strides towards fixing bufferbloat along the way! There are many other
>> issues. Let's talk about those instead!
>>
>> But...
>> ...
>>
>> In "brief" response to the notes below - source quench died due to
>> easy ddos, AQMs from RED (1992) until codel (2012) struggled with
>> measuring the wrong things ( Kathie's updated paper on red in a
>> different light: https://pollere.net/Codel.html ), SFQ was adopted by
>> many devices, WRR used in others, ARED I think is common in juniper
>> boxes, fq_codel is pretty much the default now for most of linux, and
>> I helped write CAKE.
>>
>> TCPs evolved from reno to vegas to cubic to bbr and the paper on BBR
>> is excellent: https://research.google/pubs/pub45646/ as is len
>> kleinrock's monograph on it. However problems with self congestion and
>> excessive packet loss were observed, and after entering the ietf
>> process, is now in it's 3rd revision, which looks pretty good.
>>
>> Hardware pause frames in ethernet are often available, there are all
>> kinds of specialized new hardware flow control standards in 802.1, a
>> new more centralized controller in wifi7
>>
>> To this day I have no idea how infiniband works. Or how ATM was
>> supposed to work. I have a good grip on wifi up to version 6, and the
>> work we did on wifi is in use now on a lot of wifi gear like openwrt,
>> eero and evenroute, and I am proudest of all my teams' work on
>> achieving airtime fairness, and better scheduling described in this
>> paper here: https://www.cs.kau.se/tohojo/airtime-fairness/ for wifi
>> and MOS to die for.
>>
>> There is new work on this thing called L4S, which has a bunch of RFCs
>> for it, leverages multi-bit DCTCP style ECN and is under test by apple
>> and comcast, it is discussed on tsvwg list a lot. I encourage users to
>> jump in on the comcast/apple beta, and operators to at least read
>> this: https://datatracker.ietf.org/doc/draft-ietf-tsvwg-l4sops/
>>
>> Knowing that there is a book or three left to write on this subject
>> that nobody will read is an issue, as is coming up with an
>> architecture to take packet handling as we know it, to the moon and
>> the rest of the solar system, seems kind of difficult.
>>
>> Ideally I would love to be working on that earth-moon architecture
>> rather than trying to finish getting stuff we designed in 2012-2016
>> deployed.
>>
>> I am going to pull out a few specific questions from the below and
>> answer separately.
>>
>> On Sun, Oct 15, 2023 at 1:00 PM Jack Haverty via Nnagain
>> <nnagain@lists.bufferbloat.net> <mailto:nnagain@lists.bufferbloat.net> wrote:
>>> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
>>> latency. It's not just "rewarding" to have lower latencies; high
>>> latencies may make VGV unusable. Average (or "typical") latency as the
>>> FCC label proposes isn't a good metric to judge usability. A path which
>>> has high variance in latency can be unusable even if the average is
>>> quite low. Having your voice or video or gameplay "break up" every
>>> minute or so when latency spikes to 500 msec makes the "user experience"
>>> intolerable.
>>>
>>> A few years ago, I ran some simple "ping" tests to help a friend who was
>>> trying to use a gaming app. My data was only for one specific path so
>>> it's anecdotal. What I saw was surprising - zero data loss, every
>>> datagram was delivered, but occasionally a datagram would take up to 30
>>> seconds to arrive. I didn't have the ability to poke around inside, but
>>> I suspected it was an experience of "bufferbloat", enabled by the
>>> dramatic drop in price of memory over the decades.
>>>
>>> It's been a long time since I was involved in operating any part of the
>>> Internet, so I don't know much about the inner workings today. Apologies
>>> for my ignorance....
>>>
>>> There was a scenario in the early days of the Internet for which we
>>> struggled to find a technical solution. Imagine some node in the bowels
>>> of the network, with 3 connected "circuits" to some other nodes. On two
>>> of those inputs, traffic is arriving to be forwarded out the third
>>> circuit. The incoming flows are significantly more than the outgoing
>>> path can accept.
>>>
>>> What happens? How is "backpressure" generated so that the incoming
>>> flows are reduced to the point that the outgoing circuit can handle the
>>> traffic?
>>>
>>> About 45 years ago, while we were defining TCPV4, we struggled with this
>>> issue, but didn't find any consensus solutions. So "placeholder"
>>> mechanisms were defined in TCPV4, to be replaced as research continued
>>> and found a good solution.
>>>
>>> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
>>> defined; it was to be sent by a switching node back toward the sender of
>>> any datagram that had to be discarded because there wasn't any place to
>>> put it.
>>>
>>> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
>>> were defined in IP.
>>>
>>> TOS would allow the sender to distinguish datagrams based on their
>>> needs. For example, we thought "Interactive" service might be needed
>>> for VGV traffic, where timeliness of delivery was most important.
>>> "Bulk" service might be useful for activities like file transfers,
>>> backups, et al. "Normal" service might now mean activities like using
>>> the Web.
>>>
>>> The TTL field was an attempt to inform each switching node about the
>>> "expiration date" for a datagram. If a node somehow knew that a
>>> particular datagram was unlikely to reach its destination in time to be
>>> useful (such as a video datagram for a frame that has already been
>>> displayed), the node could, and should, discard that datagram to free up
>>> resources for useful traffic. Sadly we had no mechanisms for measuring
>>> delay, either in transit or in queuing, so TTL was defined in terms of
>>> "hops", which is not an accurate proxy for time. But it's all we had.
>>>
>>> Part of the complexity was that the "flow control" mechanism of the
>>> Internet had put much of the mechanism in the users' computers' TCP
>>> implementations, rather than the switches which handle only IP. Without
>>> mechanisms in the users' computers, all a switch could do is order more
>>> circuits, and add more memory to the switches for queuing. Perhaps that
>>> led to "bufferbloat".
>>>
>>> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
>>> future release that would introduce a "real" form of Backpressure and
>>> the ability to handle different types of traffic. Meanwhile, these
>>> rudimentary mechanisms would provide some flow control. Hopefully the
>>> users' computers sending the flows would respond to the SQ backpressure,
>>> and switches would prioritize traffic using the TTL and TOS information.
>>>
>>> But, being way out of touch, I don't know what actually happens today.
>>> Perhaps the current operators and current government watchers can answer?:
>> I would love moe feedback about RED''s deployment at scale in particular.
>>
>>> 1/ How do current switches exert Backpressure to reduce competing
>>> traffic flows? Do they still send SQs?
>> Some send various forms of hardware flow control, an ethernet pause
>> frame derivative
>>
>>> 2/ How do the current and proposed government regulations treat the
>>> different needs of different types of traffic, e.g., "Bulk" versus
>>> "Interactive" versus "Normal"? Are Internet carriers permitted to treat
>>> traffic types differently? Are they permitted to charge different
>>> amounts for different types of service?
>>
>>> Jack Haverty
>>>
>>> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>>>> For starters I would like to apologize for cc-ing both nanog and my
>>>> new nn list. (I will add sender filters)
>>>>
>>>> A bit more below.
>>>>
>>>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc> <mailto:beecher@beecher.cc> wrote:
>>>>>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>>>>> There is often a chicken/egg scenario here with the economics. As an eyeball network, your costs to build out and connect to Dallas are greater than your transit cost, so you do that. Totally fair.
>>>>>
>>>>> However think about it from the content side. Say I want to build into to Houston. I have to put routers in, and a bunch of cache servers, so I have capital outlay , plus opex for space, power, IX/backhaul/transit costs. That's not cheap, so there's a lot of calculations that go into it. Is there enough total eyeball traffic there to make it worth it? Is saving 8-10ms enough of a performance boost to justify the spend? What are the long term trends in that market? These answers are of course different for a company running their own CDN vs the commercial CDNs.
>>>>>
>>>>> I don't work for Google and obviously don't speak for them, but I would suspect that they're happy to eat a 8-10ms performance hit to serve from Dallas , versus the amount of capital outlay to build out there right now.
>>>> The three forms of traffic I care most about are voip, gaming, and
>>>> videoconferencing, which are rewarding to have at lower latencies.
>>>> When I was a kid, we had switched phone networks, and while the sound
>>>> quality was poorer than today, the voice latency cross-town was just
>>>> like "being there". Nowadays we see 500+ms latencies for this kind of
>>>> traffic.
>>>>
>>>> As to how to make calls across town work that well again, cost-wise, I
>>>> do not know, but the volume of traffic that would be better served by
>>>> these interconnects quite low, respective to the overall gains in
>>>> lower latency experiences for them.
>>>>
>>>>
>>>>
>>>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> <mailto:tim@mid.net> wrote:
>>>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
>>>>>>
>>>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
>>>>>>
>>>>>> For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
>>>>>>
>>>>>> Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
>>>>>>
>>>>>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
>>>>>>
>>>>>> See y’all in San Diego this week,
>>>>>> Tim
>>>>>>
>>>>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> <mailto:dave.taht@gmail.com> wrote:
>>>>>>> This set of trendlines was very interesting. Unfortunately the data
>>>>>>> stops in 2015. Does anyone have more recent data?
>>>>>>>
>>>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>>>
>>>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>>>
>>>>>>> ...
>>>>>>>
>>>>>>> I am under the impression that many IXPs remain very successful,
>>>>>>> states without them suffer, and I also find the concept of doing micro
>>>>>>> IXPs at the city level, appealing, and now achievable with cheap gear.
>>>>>>> Finer grained cross connects between telco and ISP and IXP would lower
>>>>>>> latencies across town quite hugely...
>>>>>>>
>>>>>>> PS I hear ARIN is planning on dropping the price for, and bundling 3
>>>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>>> Dave Täht CSO, LibreQos
>>>>
>>> _______________________________________________
>>> Nnagain mailing list
>>> Nnagain@lists.bufferbloat.net <mailto:Nnagain@lists.bufferbloat.net>
>>> https://lists.bufferbloat.net/listinfo/nnagain
>>
>>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
[-- Attachment #2: Type: text/html, Size: 23051 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-16 17:01 ` Dick Roy
@ 2023-10-16 17:35 ` Jack Haverty
2023-10-16 17:36 ` Sebastian Moeller
1 sibling, 0 replies; 38+ messages in thread
From: Jack Haverty @ 2023-10-16 17:35 UTC (permalink / raw)
To: nnagain
Starting with the users' view is good, but I think throughput is not the
only appropriate metric. If it was, we should possibly be converting
to an avian-based Internet:
https://spectrum.ieee.org/pigeonbased-feathernet-still-wingsdown-fastest-way-of-transferring-lots-of-data
Or perhaps tractor-trailers loaded with memory chips -- as Amazon is doing.
IMO, the most fundamental question is "What range of services should the
network provide?" That question can be answered by starting with the
question "What kinds of user activity should the network enable?"
Personally, as an end-user, I'd like to see my Internet service support
interactive activities such as video conferencing and gaming (with few
visual or audible "artifacts"), and reliable electronic mail (no message
gets lost or rejected), as well as traditional activities such as Web
usage.
Those user activities probably translate into some kind of specs for
throughput, latency, and associated variance of each - at all levels
from IP datagrams to email messages. To include "backpressure" (ECN et
al), specs should include some form of "cost", not necessarily limited
to monetary. Costs should somehow be visible to the users, so it is
effective in preventing network overloads.
Jack Haverty
On 10/16/23 10:01, Dick Roy via Nnagain wrote:
> Just an observation: ANY type of congestion control that changes application behavior in response to congestion, or predicted congestion (ENC), begs the question "How does throttling of application information exchange rate (aka behavior) affect the user experience and will the user tolerate it?"
>
> Given any (complex and packet-switched) network topology of interconnected nodes and links, each with possible a different capacity and characteristics, such as the internet today, IMO the two fundamental questions are:
>
> 1) How can a given network be operated/configured so as to maximize aggregate throughput (i.e. achieve its theoretical capacity), and
> 2) What things in the network need to change to increase the throughput (aka parameters in the network with the largest Lagrange multipliers associated with them)?
>
> I am not an expert in this field, however it seems to me that answers to these questions would be useful, assuming they are not yet available!
>
> Cheers,
>
> RR
>
>
>
> -----Original Message-----
> From: Nnagain [mailto:nnagain-bounces@lists.bufferbloat.net] On Behalf Of rjmcmahon via Nnagain
> Sent: Sunday, October 15, 2023 1:39 PM
> To: Network Neutrality is back! Let´s make the technical aspects heard this time!
> Cc: rjmcmahon
> Subject: Re: [NNagain] transit and peering costs projections
>
> Hi Jack,
>
> Thanks again for sharing. It's very interesting to me.
>
> Today, the networks are shifting from capacity constrained to latency
> constrained, as can be seen in the IX discussions about how the speed of
> light over fiber is too slow even between Houston & Dallas.
>
> The mitigations against standing queues (which cause bloat today) are:
>
> o) Shrink the e2e bottleneck queue so it will drop packets in a flow and
> TCP will respond to that "signal"
> o) Use some form of ECN marking where the network forwarding plane
> ultimately informs the TCP source state machine so it can slow down or
> pace effectively. This can be an earlier feedback signal and, if done
> well, can inform the sources to avoid bottleneck queuing. There are
> couple of approaches with ECN. Comcast is trialing L4S now which seems
> interesting to me as a WiFi test & measurement engineer. The jury is
> still out on this and measurements are needed.
> o) Mitigate source side bloat via TCP_NOTSENT_LOWAT
>
> The QoS priority approach per congestion is orthogonal by my judgment as
> it's typically not supported e2e, many networks will bleach DSCP
> markings. And it's really too late by my judgment.
>
> Also, on clock sync, yes your generation did us both a service and
> disservice by getting rid of the PSTN TDM clock ;) So IP networking
> devices kinda ignored clock sync, which makes e2e one way delay (OWD)
> measurements impossible. Thankfully, the GPS atomic clock is now
> available mostly everywhere and many devices use TCXO oscillators so
> it's possible to get clock sync and use oscillators that can minimize
> drift. I pay $14 for a Rpi4 GPS chip with pulse per second as an
> example.
>
> It seems silly to me that clocks aren't synced to the GPS atomic clock
> even if by a proxy even if only for measurement and monitoring.
>
> Note: As Richard Roy will point out, there really is no such thing as
> synchronized clocks across geographies per general relativity - so those
> syncing clocks need to keep those effects in mind. I limited the iperf 2
> timestamps to microsecond precision in hopes avoiding those issues.
>
> Note: With WiFi, a packet drop can occur because an intermittent RF
> channel condition. TCP can't tell the difference between an RF drop vs a
> congested queue drop. That's another reason ECN markings from network
> devices may be better than dropped packets.
>
> Note: I've added some iperf 2 test support around pacing as that seems
> to be the direction the industry is heading as networks are less and
> less capacity strained and user quality of experience is being driven by
> tail latencies. One can also test with the Prague CCA for the L4S
> scenarios. (This is a fun project: https://www.l4sgear.com/ and fairly
> low cost)
>
> --fq-rate n[kmgKMG]
> Set a rate to be used with fair-queuing based socket-level pacing, in
> bytes or bits per second. Only available on platforms supporting the
> SO_MAX_PACING_RATE socket option. (Note: Here the suffixes indicate
> bytes/sec or bits/sec per use of uppercase or lowercase, respectively)
>
> --fq-rate-step n[kmgKMG]
> Set a step of rate to be used with fair-queuing based socket-level
> pacing, in bytes or bits per second. Step occurs every
> fq-rate-step-interval (defaults to one second)
>
> --fq-rate-step-interval n
> Time in seconds before stepping the fq-rate
>
> Bob
>
> PS. Iperf 2 man page https://iperf2.sourceforge.io/iperf-manpage.html
>
>> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
>> latency. It's not just "rewarding" to have lower latencies; high
>> latencies may make VGV unusable. Average (or "typical") latency as
>> the FCC label proposes isn't a good metric to judge usability. A path
>> which has high variance in latency can be unusable even if the average
>> is quite low. Having your voice or video or gameplay "break up"
>> every minute or so when latency spikes to 500 msec makes the "user
>> experience" intolerable.
>>
>> A few years ago, I ran some simple "ping" tests to help a friend who
>> was trying to use a gaming app. My data was only for one specific
>> path so it's anecdotal. What I saw was surprising - zero data loss,
>> every datagram was delivered, but occasionally a datagram would take
>> up to 30 seconds to arrive. I didn't have the ability to poke around
>> inside, but I suspected it was an experience of "bufferbloat", enabled
>> by the dramatic drop in price of memory over the decades.
>>
>> It's been a long time since I was involved in operating any part of
>> the Internet, so I don't know much about the inner workings today.
>> Apologies for my ignorance....
>>
>> There was a scenario in the early days of the Internet for which we
>> struggled to find a technical solution. Imagine some node in the
>> bowels of the network, with 3 connected "circuits" to some other
>> nodes. On two of those inputs, traffic is arriving to be forwarded
>> out the third circuit. The incoming flows are significantly more than
>> the outgoing path can accept.
>>
>> What happens? How is "backpressure" generated so that the incoming
>> flows are reduced to the point that the outgoing circuit can handle
>> the traffic?
>>
>> About 45 years ago, while we were defining TCPV4, we struggled with
>> this issue, but didn't find any consensus solutions. So "placeholder"
>> mechanisms were defined in TCPV4, to be replaced as research continued
>> and found a good solution.
>>
>> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
>> defined; it was to be sent by a switching node back toward the sender
>> of any datagram that had to be discarded because there wasn't any
>> place to put it.
>>
>> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
>> were defined in IP.
>>
>> TOS would allow the sender to distinguish datagrams based on their
>> needs. For example, we thought "Interactive" service might be needed
>> for VGV traffic, where timeliness of delivery was most important.
>> "Bulk" service might be useful for activities like file transfers,
>> backups, et al. "Normal" service might now mean activities like
>> using the Web.
>>
>> The TTL field was an attempt to inform each switching node about the
>> "expiration date" for a datagram. If a node somehow knew that a
>> particular datagram was unlikely to reach its destination in time to
>> be useful (such as a video datagram for a frame that has already been
>> displayed), the node could, and should, discard that datagram to free
>> up resources for useful traffic. Sadly we had no mechanisms for
>> measuring delay, either in transit or in queuing, so TTL was defined
>> in terms of "hops", which is not an accurate proxy for time. But
>> it's all we had.
>>
>> Part of the complexity was that the "flow control" mechanism of the
>> Internet had put much of the mechanism in the users' computers' TCP
>> implementations, rather than the switches which handle only IP.
>> Without mechanisms in the users' computers, all a switch could do is
>> order more circuits, and add more memory to the switches for queuing.
>> Perhaps that led to "bufferbloat".
>>
>> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
>> future release that would introduce a "real" form of Backpressure and
>> the ability to handle different types of traffic. Meanwhile, these
>> rudimentary mechanisms would provide some flow control. Hopefully the
>> users' computers sending the flows would respond to the SQ
>> backpressure, and switches would prioritize traffic using the TTL and
>> TOS information.
>>
>> But, being way out of touch, I don't know what actually happens
>> today. Perhaps the current operators and current government watchers
>> can answer?:git clone https://rjmcmahon@git.code.sf.net/p/iperf2/code
>> iperf2-code
>>
>> 1/ How do current switches exert Backpressure to reduce competing
>> traffic flows? Do they still send SQs?
>>
>> 2/ How do the current and proposed government regulations treat the
>> different needs of different types of traffic, e.g., "Bulk" versus
>> "Interactive" versus "Normal"? Are Internet carriers permitted to
>> treat traffic types differently? Are they permitted to charge
>> different amounts for different types of service?
>>
>> Jack Haverty
>>
>> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>>> For starters I would like to apologize for cc-ing both nanog and my
>>> new nn list. (I will add sender filters)
>>>
>>> A bit more below.
>>>
>>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc>
>>> wrote:
>>>>> So for now, we'll keep paying for transit to get to the others
>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>> City. Yeah, I think the former is more likely. 😊
>>>> There is often a chicken/egg scenario here with the economics. As an
>>>> eyeball network, your costs to build out and connect to Dallas are
>>>> greater than your transit cost, so you do that. Totally fair.
>>>>
>>>> However think about it from the content side. Say I want to build
>>>> into to Houston. I have to put routers in, and a bunch of cache
>>>> servers, so I have capital outlay , plus opex for space, power,
>>>> IX/backhaul/transit costs. That's not cheap, so there's a lot of
>>>> calculations that go into it. Is there enough total eyeball traffic
>>>> there to make it worth it? Is saving 8-10ms enough of a performance
>>>> boost to justify the spend? What are the long term trends in that
>>>> market? These answers are of course different for a company running
>>>> their own CDN vs the commercial CDNs.
>>>>
>>>> I don't work for Google and obviously don't speak for them, but I
>>>> would suspect that they're happy to eat a 8-10ms performance hit to
>>>> serve from Dallas , versus the amount of capital outlay to build out
>>>> there right now.
>>> The three forms of traffic I care most about are voip, gaming, and
>>> videoconferencing, which are rewarding to have at lower latencies.
>>> When I was a kid, we had switched phone networks, and while the sound
>>> quality was poorer than today, the voice latency cross-town was just
>>> like "being there". Nowadays we see 500+ms latencies for this kind of
>>> traffic.
>>>
>>> As to how to make calls across town work that well again, cost-wise, I
>>> do not know, but the volume of traffic that would be better served by
>>> these interconnects quite low, respective to the overall gains in
>>> lower latency experiences for them.
>>>
>>>
>>>
>>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
>>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be
>>>>> had for a good bit less than $900 on the wholesale market.
>>>>>
>>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with
>>>>> rates almost costing as much as transit in many cases after you
>>>>> factor in loop costs.
>>>>>
>>>>> For example, in the Houston market (one of the largest and fastest
>>>>> growing regions in the US!), we do not have a major IX, so to get up
>>>>> to Dallas it’s several thousand for a 100g wave, plus several
>>>>> thousand for a 100g port on one of those major IXes. Or, a better
>>>>> option, we can get a 100g flat internet transit for just a little
>>>>> bit more.
>>>>>
>>>>> Fortunately, for us as an eyeball network, there are a good number
>>>>> of major content networks that are allowing for private peering in
>>>>> markets like Houston for just the cost of a cross connect and a QSFP
>>>>> if you’re in the right DC, with Google and some others being the
>>>>> outliers.
>>>>>
>>>>> So for now, we'll keep paying for transit to get to the others
>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>> City. Yeah, I think the former is more likely. 😊
>>>>>
>>>>> See y’all in San Diego this week,
>>>>> Tim
>>>>>
>>>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>>>>>> This set of trendlines was very interesting. Unfortunately the
>>>>>> data
>>>>>> stops in 2015. Does anyone have more recent data?
>>>>>>
>>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>>
>>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>>
>>>>>> ...
>>>>>>
>>>>>> I am under the impression that many IXPs remain very successful,
>>>>>> states without them suffer, and I also find the concept of doing
>>>>>> micro
>>>>>> IXPs at the city level, appealing, and now achievable with cheap
>>>>>> gear.
>>>>>> Finer grained cross connects between telco and ISP and IXP would
>>>>>> lower
>>>>>> latencies across town quite hugely...
>>>>>>
>>>>>> PS I hear ARIN is planning on dropping the price for, and bundling
>>>>>> 3
>>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Oct 30:
>>>>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>> Dave Täht CSO, LibreQos
>>>
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-16 17:01 ` Dick Roy
2023-10-16 17:35 ` Jack Haverty
@ 2023-10-16 17:36 ` Sebastian Moeller
2023-10-16 18:04 ` Dick Roy
1 sibling, 1 reply; 38+ messages in thread
From: Sebastian Moeller @ 2023-10-16 17:36 UTC (permalink / raw)
To: dickroy,
Network Neutrality is back! Let´s make the technical
aspects heard this time!
Hi Richard,
> On Oct 16, 2023, at 19:01, Dick Roy via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>
> Just an observation: ANY type of congestion control that changes application behavior in response to congestion, or predicted congestion (ENC), begs the question "How does throttling of application information exchange rate (aka behavior) affect the user experience and will the user tolerate it?"
[SM] The trade-off here is, if the application does not respond (or rather if no application would respond) we would end up with congestion collapse where no application would gain much of anything as the network busies itself trying to re-transmit dropped packets without making much head way... Simplistic game theory application might imply that individual applications could try to game this, and generally that seems to be true, but we have remedies for that available..
>
> Given any (complex and packet-switched) network topology of interconnected nodes and links, each with possible a different capacity and characteristics, such as the internet today, IMO the two fundamental questions are:
>
> 1) How can a given network be operated/configured so as to maximize aggregate throughput (i.e. achieve its theoretical capacity), and
> 2) What things in the network need to change to increase the throughput (aka parameters in the network with the largest Lagrange multipliers associated with them)?
[SM] The thing is we generally know how to maximize (average) throughput, just add (over-)generous amounts of buffering, the problem is that this screws up the other important quality axis, latency... We ideally want low latency and even more low latency variance (aka jitter) AND high throughput... Turns out though that above a certain throughput threshold* many users do not seem to care all that much for more throughput as long as interactive use cases are sufficiently responsive... but high responsiveness requires low latency and low jitter... This is actually a good thing, as that means we do not necessarily aim for 100% utilization (almost requiring deep buffers and hence resulting in compromised latency) but can get away with say 80-90% where shallow buffers will do (or rather where buffer filling stays shallow, there is IMHO still value in having deep buffers for rare events that need it).
*) This is not a hard physical law so the exact threshold is not set in stone, but unless one has many parallel users, something in the 20-50 Mbps range is plenty and that is only needed in the "loaded" direction, that is for pure consumers the upload can be thinner, for pure producers the download can be thinner.
>
> I am not an expert in this field,
[SM] Nor am I, I come from the wet-ware side of things so not even soft- or hard-ware ;)
> however it seems to me that answers to these questions would be useful, assuming they are not yet available!
>
> Cheers,
>
> RR
>
>
>
> -----Original Message-----
> From: Nnagain [mailto:nnagain-bounces@lists.bufferbloat.net] On Behalf Of rjmcmahon via Nnagain
> Sent: Sunday, October 15, 2023 1:39 PM
> To: Network Neutrality is back! Let´s make the technical aspects heard this time!
> Cc: rjmcmahon
> Subject: Re: [NNagain] transit and peering costs projections
>
> Hi Jack,
>
> Thanks again for sharing. It's very interesting to me.
>
> Today, the networks are shifting from capacity constrained to latency
> constrained, as can be seen in the IX discussions about how the speed of
> light over fiber is too slow even between Houston & Dallas.
>
> The mitigations against standing queues (which cause bloat today) are:
>
> o) Shrink the e2e bottleneck queue so it will drop packets in a flow and
> TCP will respond to that "signal"
> o) Use some form of ECN marking where the network forwarding plane
> ultimately informs the TCP source state machine so it can slow down or
> pace effectively. This can be an earlier feedback signal and, if done
> well, can inform the sources to avoid bottleneck queuing. There are
> couple of approaches with ECN. Comcast is trialing L4S now which seems
> interesting to me as a WiFi test & measurement engineer. The jury is
> still out on this and measurements are needed.
> o) Mitigate source side bloat via TCP_NOTSENT_LOWAT
>
> The QoS priority approach per congestion is orthogonal by my judgment as
> it's typically not supported e2e, many networks will bleach DSCP
> markings. And it's really too late by my judgment.
>
> Also, on clock sync, yes your generation did us both a service and
> disservice by getting rid of the PSTN TDM clock ;) So IP networking
> devices kinda ignored clock sync, which makes e2e one way delay (OWD)
> measurements impossible. Thankfully, the GPS atomic clock is now
> available mostly everywhere and many devices use TCXO oscillators so
> it's possible to get clock sync and use oscillators that can minimize
> drift. I pay $14 for a Rpi4 GPS chip with pulse per second as an
> example.
>
> It seems silly to me that clocks aren't synced to the GPS atomic clock
> even if by a proxy even if only for measurement and monitoring.
>
> Note: As Richard Roy will point out, there really is no such thing as
> synchronized clocks across geographies per general relativity - so those
> syncing clocks need to keep those effects in mind. I limited the iperf 2
> timestamps to microsecond precision in hopes avoiding those issues.
>
> Note: With WiFi, a packet drop can occur because an intermittent RF
> channel condition. TCP can't tell the difference between an RF drop vs a
> congested queue drop. That's another reason ECN markings from network
> devices may be better than dropped packets.
>
> Note: I've added some iperf 2 test support around pacing as that seems
> to be the direction the industry is heading as networks are less and
> less capacity strained and user quality of experience is being driven by
> tail latencies. One can also test with the Prague CCA for the L4S
> scenarios. (This is a fun project: https://www.l4sgear.com/ and fairly
> low cost)
>
> --fq-rate n[kmgKMG]
> Set a rate to be used with fair-queuing based socket-level pacing, in
> bytes or bits per second. Only available on platforms supporting the
> SO_MAX_PACING_RATE socket option. (Note: Here the suffixes indicate
> bytes/sec or bits/sec per use of uppercase or lowercase, respectively)
>
> --fq-rate-step n[kmgKMG]
> Set a step of rate to be used with fair-queuing based socket-level
> pacing, in bytes or bits per second. Step occurs every
> fq-rate-step-interval (defaults to one second)
>
> --fq-rate-step-interval n
> Time in seconds before stepping the fq-rate
>
> Bob
>
> PS. Iperf 2 man page https://iperf2.sourceforge.io/iperf-manpage.html
>
>> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
>> latency. It's not just "rewarding" to have lower latencies; high
>> latencies may make VGV unusable. Average (or "typical") latency as
>> the FCC label proposes isn't a good metric to judge usability. A path
>> which has high variance in latency can be unusable even if the average
>> is quite low. Having your voice or video or gameplay "break up"
>> every minute or so when latency spikes to 500 msec makes the "user
>> experience" intolerable.
>>
>> A few years ago, I ran some simple "ping" tests to help a friend who
>> was trying to use a gaming app. My data was only for one specific
>> path so it's anecdotal. What I saw was surprising - zero data loss,
>> every datagram was delivered, but occasionally a datagram would take
>> up to 30 seconds to arrive. I didn't have the ability to poke around
>> inside, but I suspected it was an experience of "bufferbloat", enabled
>> by the dramatic drop in price of memory over the decades.
>>
>> It's been a long time since I was involved in operating any part of
>> the Internet, so I don't know much about the inner workings today.
>> Apologies for my ignorance....
>>
>> There was a scenario in the early days of the Internet for which we
>> struggled to find a technical solution. Imagine some node in the
>> bowels of the network, with 3 connected "circuits" to some other
>> nodes. On two of those inputs, traffic is arriving to be forwarded
>> out the third circuit. The incoming flows are significantly more than
>> the outgoing path can accept.
>>
>> What happens? How is "backpressure" generated so that the incoming
>> flows are reduced to the point that the outgoing circuit can handle
>> the traffic?
>>
>> About 45 years ago, while we were defining TCPV4, we struggled with
>> this issue, but didn't find any consensus solutions. So "placeholder"
>> mechanisms were defined in TCPV4, to be replaced as research continued
>> and found a good solution.
>>
>> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
>> defined; it was to be sent by a switching node back toward the sender
>> of any datagram that had to be discarded because there wasn't any
>> place to put it.
>>
>> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
>> were defined in IP.
>>
>> TOS would allow the sender to distinguish datagrams based on their
>> needs. For example, we thought "Interactive" service might be needed
>> for VGV traffic, where timeliness of delivery was most important.
>> "Bulk" service might be useful for activities like file transfers,
>> backups, et al. "Normal" service might now mean activities like
>> using the Web.
>>
>> The TTL field was an attempt to inform each switching node about the
>> "expiration date" for a datagram. If a node somehow knew that a
>> particular datagram was unlikely to reach its destination in time to
>> be useful (such as a video datagram for a frame that has already been
>> displayed), the node could, and should, discard that datagram to free
>> up resources for useful traffic. Sadly we had no mechanisms for
>> measuring delay, either in transit or in queuing, so TTL was defined
>> in terms of "hops", which is not an accurate proxy for time. But
>> it's all we had.
>>
>> Part of the complexity was that the "flow control" mechanism of the
>> Internet had put much of the mechanism in the users' computers' TCP
>> implementations, rather than the switches which handle only IP.
>> Without mechanisms in the users' computers, all a switch could do is
>> order more circuits, and add more memory to the switches for queuing.
>> Perhaps that led to "bufferbloat".
>>
>> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
>> future release that would introduce a "real" form of Backpressure and
>> the ability to handle different types of traffic. Meanwhile, these
>> rudimentary mechanisms would provide some flow control. Hopefully the
>> users' computers sending the flows would respond to the SQ
>> backpressure, and switches would prioritize traffic using the TTL and
>> TOS information.
>>
>> But, being way out of touch, I don't know what actually happens
>> today. Perhaps the current operators and current government watchers
>> can answer?:git clone https://rjmcmahon@git.code.sf.net/p/iperf2/code
>> iperf2-code
>>
>> 1/ How do current switches exert Backpressure to reduce competing
>> traffic flows? Do they still send SQs?
>>
>> 2/ How do the current and proposed government regulations treat the
>> different needs of different types of traffic, e.g., "Bulk" versus
>> "Interactive" versus "Normal"? Are Internet carriers permitted to
>> treat traffic types differently? Are they permitted to charge
>> different amounts for different types of service?
>>
>> Jack Haverty
>>
>> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>>> For starters I would like to apologize for cc-ing both nanog and my
>>> new nn list. (I will add sender filters)
>>>
>>> A bit more below.
>>>
>>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc>
>>> wrote:
>>>>> So for now, we'll keep paying for transit to get to the others
>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>> City. Yeah, I think the former is more likely. 😊
>>>>
>>>> There is often a chicken/egg scenario here with the economics. As an
>>>> eyeball network, your costs to build out and connect to Dallas are
>>>> greater than your transit cost, so you do that. Totally fair.
>>>>
>>>> However think about it from the content side. Say I want to build
>>>> into to Houston. I have to put routers in, and a bunch of cache
>>>> servers, so I have capital outlay , plus opex for space, power,
>>>> IX/backhaul/transit costs. That's not cheap, so there's a lot of
>>>> calculations that go into it. Is there enough total eyeball traffic
>>>> there to make it worth it? Is saving 8-10ms enough of a performance
>>>> boost to justify the spend? What are the long term trends in that
>>>> market? These answers are of course different for a company running
>>>> their own CDN vs the commercial CDNs.
>>>>
>>>> I don't work for Google and obviously don't speak for them, but I
>>>> would suspect that they're happy to eat a 8-10ms performance hit to
>>>> serve from Dallas , versus the amount of capital outlay to build out
>>>> there right now.
>>> The three forms of traffic I care most about are voip, gaming, and
>>> videoconferencing, which are rewarding to have at lower latencies.
>>> When I was a kid, we had switched phone networks, and while the sound
>>> quality was poorer than today, the voice latency cross-town was just
>>> like "being there". Nowadays we see 500+ms latencies for this kind of
>>> traffic.
>>>
>>> As to how to make calls across town work that well again, cost-wise, I
>>> do not know, but the volume of traffic that would be better served by
>>> these interconnects quite low, respective to the overall gains in
>>> lower latency experiences for them.
>>>
>>>
>>>
>>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
>>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be
>>>>> had for a good bit less than $900 on the wholesale market.
>>>>>
>>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with
>>>>> rates almost costing as much as transit in many cases after you
>>>>> factor in loop costs.
>>>>>
>>>>> For example, in the Houston market (one of the largest and fastest
>>>>> growing regions in the US!), we do not have a major IX, so to get up
>>>>> to Dallas it’s several thousand for a 100g wave, plus several
>>>>> thousand for a 100g port on one of those major IXes. Or, a better
>>>>> option, we can get a 100g flat internet transit for just a little
>>>>> bit more.
>>>>>
>>>>> Fortunately, for us as an eyeball network, there are a good number
>>>>> of major content networks that are allowing for private peering in
>>>>> markets like Houston for just the cost of a cross connect and a QSFP
>>>>> if you’re in the right DC, with Google and some others being the
>>>>> outliers.
>>>>>
>>>>> So for now, we'll keep paying for transit to get to the others
>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>> City. Yeah, I think the former is more likely. 😊
>>>>>
>>>>> See y’all in San Diego this week,
>>>>> Tim
>>>>>
>>>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>>>>>> This set of trendlines was very interesting. Unfortunately the
>>>>>> data
>>>>>> stops in 2015. Does anyone have more recent data?
>>>>>>
>>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>>
>>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>>
>>>>>> ...
>>>>>>
>>>>>> I am under the impression that many IXPs remain very successful,
>>>>>> states without them suffer, and I also find the concept of doing
>>>>>> micro
>>>>>> IXPs at the city level, appealing, and now achievable with cheap
>>>>>> gear.
>>>>>> Finer grained cross connects between telco and ISP and IXP would
>>>>>> lower
>>>>>> latencies across town quite hugely...
>>>>>>
>>>>>> PS I hear ARIN is planning on dropping the price for, and bundling
>>>>>> 3
>>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Oct 30:
>>>>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>> Dave Täht CSO, LibreQos
>>>
>>>
>>
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] The history of congestion control on the internet
2023-10-16 17:21 ` Spencer Sevilla
@ 2023-10-16 17:37 ` Robert McMahon
0 siblings, 0 replies; 38+ messages in thread
From: Robert McMahon @ 2023-10-16 17:37 UTC (permalink / raw)
To: Spencer Sevilla via Nnagain
[-- Attachment #1: Type: text/plain, Size: 21565 bytes --]
We in semiconductors test TCP on hundreds of test rigs and multiple operating systems, use statistical process controls before sending our chips, and support sw to system integrators or device manufacturers. Then, those companies do their work and test more before shipping to their customers. There is a lot of testing baked in now. If not, billions of TCP state machines wouldn't function nor interoperate. And people then wouldn't buy these products as networks are essential infrastructure.
Reading the code doesn't really work for this class of problem. Code reviews are good in human processes, but escapes are quite high. Computers have to engage too and are now doing the heavy lifting
Bob
On Oct 16, 2023, 10:21 AM, at 10:21 AM, Spencer Sevilla via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>That Flakeway tool makes me think of an early version of the Chaos
>Monkey. To that note, Apple maintains a developer tool called Network
>Link Conditioner that does a good job simulating reduced network
>performance.
>
>> On Oct 15, 2023, at 23:30, Jack Haverty via Nnagain
><nnagain@lists.bufferbloat.net> wrote:
>>
>> Even back in 1978, I didn't think Source Quench would work. I
>recall that I was trying to adapt my TCP2.5 Unix implementation to
>become TCP4, and I asked what my TCP should do if it sent the first IP
>datagram to open a TCP connection and received a Source Quench. It
>wasn't clear at all how I should "slow down". Other TCP implementors
>took the receipt of an SQ as an indication that a datagram they had
>sent had been discarded, so the obvious reaction for user satisfaction
>was to retransmit immediately. Slowing down would simply degrade
>their user's experience.
>>
>> Glad to hear SQ is gone. I hope whatever replaced it works.
>>
>> There's some confusion about the Arpanet. The Arpanet was known as a
>"packet switching network", but it had lots of internal mechanisms that
>essentially created virtual circuits between attached computers.
>Every packet sent in to the network by a user computer came out at the
>destination intact, in order, and not duplicated or lost. The Arpanet
>switches even had a hardware mechanism for flow control; a switch could
>halt data transfer from a user computer when necessary. During the
>80s, the Arpanet evolved to have an X.25 interface, and operated as a
>true "virtual circuit" provider. Even in the Defense Data Network
>(DDN), the network delivered a virtual circuit service. The attached
>users' computers had TCP, but the TCP didn't need to deal with most of
>the network behavior that TCP was designed to handle. Congestion was
>similarly handled by internal Arpanet mechanisms (there were several
>technical reports from BBN to ARPA with details). I don't remember
>any time that "an explicit ack for every packet was ripped out of the
>arpanet" None of those events happened when two TCP computers were
>connected to the Arpanet.
>>
>> The Internet grew up around the Arpanet, which provided most of the
>wide-area connectivity through the mid-80s. Since the Arpanet
>provided the same "reliable byte stream" behavior as TCP provided, and
>most user computers were physically attached to an Arpanet switch, it
>wasn't obvious how to test a TCP implementation, to see how well it
>dealt with reordering, duplication, dropping, or corruption of IP
>datagrams.
>>
>> We (at BBN) actually had to implement a software package called a
>"Flakeway", which ran on a SparcStation. Using a "feature" of
>Ethernets and ARP (some would call it a vulnerability), the Flakeway
>could insert itself invisibly in the stream of datagrams between any
>two computers on that LAN (e.g., between a user computer and the
>gateway/router providing a path to other sites). The Flakeway could
>then simulate "real" Internet behavior by dropping, duplicating,
>reordering, mangling, delaying, or otherwise interfering with the flow.
>That was extremely useful in testing and diagnosing TCP
>implementations.
>>
>> I understand that there has been a lot of technical work over the
>years, and lots of new mechanisms defined for use in the Internet to
>solve various problems. But one issue that has not been addressed --
>how do you know whether or not some such mechanism has actually been
>implemented, and configured correctly, in the millions of devices that
>are now using TCP (and UDP, IP, etc.)? AFAIK, there's no way to tell
>unless you can examine the actual code.
>>
>> The Internet, and TCP, was an experiment. One aspect of that
>experiment involved changing the traditional role of a network
>"switch", and moving mechanisms for flow control, error control, and
>other mechanisms used to create a "virtual circuit" behavior. Instead
>of being implemented inside some switching equipment, TCP's mechanisms
>are implemented inside users' computers. That was a significant
>break from traditional network architecture.
>>
>> I didn't realize it at the time, but now, with users' devices being
>uncountable handheld or desktop computers rather than huge racks in
>relatively few data centers, moving all those mechanisms from switches
>to users' computers significantly complicates the system design and
>especially operation.
>>
>> That may be one of the more important results of the long-running
>experiment.
>>
>> Jack Haverty
>>
>> On 10/15/23 18:39, Dave Taht wrote:
>>> It is wonderful to have your original perspectives here, Jack.
>>>
>>> But please, everyone, before a major subject change, change the
>subject?
>>>
>>> Jack's email conflates a few things that probably deserve other
>>> threads for them. One is VGV - great acronym! Another is about the
>>> "Placeholders" of TTL, and TOS. The last is the history of
>congestion
>>> control - and it's future! As being a part of the most recent
>episodes
>>> here I have written extensively on the subject, but what I most like
>>> to point people to is my fun talks trying to make it more accessible
>>> like this one at apnic
>>>
>https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-but-its-not-over-yet/
>>> or my more recent one at tti/vanguard.
>>>
>>> Most recently one of our LibreQos clients has been collecting 10ms
>>> samples and movies of what real-world residential traffic actually
>>> looks like:
>>>
>>> https://www.youtube.com/@trendaltoews7143
>>>
>>> And it is my hope that that conveys intuition to others... as
>compared
>>> to speedtest traffic, which prove nothing about the actual behaviors
>>> of VGV traffic, which I ranted about here:
>>> https://blog.cerowrt.org/post/speedtests/ - I am glad that these
>>> speedtests now have latency under load reports almost universally,
>but
>>> see the rant for more detail.
>>>
>>> Most people only have a picture of traffic in the large, over 5
>minute
>>> intervals, which behaves quite differently, or a pre-conception that
>>> backpressure actually exists across the internet. It doesn't. An
>>> explicit ack for every packet was ripped out of the arpanet as
>costing
>>> too much time. Wifi, to some extent, recreates the arpanet problem
>by
>>> having explicit acks on the local loop that are repeated until by
>god
>>> the packet comes through, usually without exponential backoff.
>>>
>>> We have some really amazing encoding schemes now - I do not
>understand
>>> how starlink works without retries for example, an my grip on 5G's
>>> encodings is non-existent, except knowing it is the most
>bufferbloated
>>> of all our technologies.
>>>
>>> ...
>>>
>>> Anyway, my hope for this list is that we come up with useful
>technical
>>> feedback to the powers-that-be that want to regulate the internet
>>> under some title ii provisions, and I certainly hope we can make
>>> strides towards fixing bufferbloat along the way! There are many
>other
>>> issues. Let's talk about those instead!
>>>
>>> But...
>>> ...
>>>
>>> In "brief" response to the notes below - source quench died due to
>>> easy ddos, AQMs from RED (1992) until codel (2012) struggled with
>>> measuring the wrong things ( Kathie's updated paper on red in a
>>> different light: https://pollere.net/Codel.html ), SFQ was adopted
>by
>>> many devices, WRR used in others, ARED I think is common in juniper
>>> boxes, fq_codel is pretty much the default now for most of linux,
>and
>>> I helped write CAKE.
>>>
>>> TCPs evolved from reno to vegas to cubic to bbr and the paper on BBR
>>> is excellent: https://research.google/pubs/pub45646/ as is len
>>> kleinrock's monograph on it. However problems with self congestion
>and
>>> excessive packet loss were observed, and after entering the ietf
>>> process, is now in it's 3rd revision, which looks pretty good.
>>>
>>> Hardware pause frames in ethernet are often available, there are all
>>> kinds of specialized new hardware flow control standards in 802.1, a
>>> new more centralized controller in wifi7
>>>
>>> To this day I have no idea how infiniband works. Or how ATM was
>>> supposed to work. I have a good grip on wifi up to version 6, and
>the
>>> work we did on wifi is in use now on a lot of wifi gear like
>openwrt,
>>> eero and evenroute, and I am proudest of all my teams' work on
>>> achieving airtime fairness, and better scheduling described in this
>>> paper here: https://www.cs.kau.se/tohojo/airtime-fairness/ for wifi
>>> and MOS to die for.
>>>
>>> There is new work on this thing called L4S, which has a bunch of
>RFCs
>>> for it, leverages multi-bit DCTCP style ECN and is under test by
>apple
>>> and comcast, it is discussed on tsvwg list a lot. I encourage users
>to
>>> jump in on the comcast/apple beta, and operators to at least read
>>> this: https://datatracker.ietf.org/doc/draft-ietf-tsvwg-l4sops/
>>>
>>> Knowing that there is a book or three left to write on this subject
>>> that nobody will read is an issue, as is coming up with an
>>> architecture to take packet handling as we know it, to the moon and
>>> the rest of the solar system, seems kind of difficult.
>>>
>>> Ideally I would love to be working on that earth-moon architecture
>>> rather than trying to finish getting stuff we designed in 2012-2016
>>> deployed.
>>>
>>> I am going to pull out a few specific questions from the below and
>>> answer separately.
>>>
>>> On Sun, Oct 15, 2023 at 1:00 PM Jack Haverty via Nnagain
>>> <nnagain@lists.bufferbloat.net>
><mailto:nnagain@lists.bufferbloat.net> wrote:
>>>> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
>>>> latency. It's not just "rewarding" to have lower latencies; high
>>>> latencies may make VGV unusable. Average (or "typical") latency
>as the
>>>> FCC label proposes isn't a good metric to judge usability. A path
>which
>>>> has high variance in latency can be unusable even if the average is
>>>> quite low. Having your voice or video or gameplay "break up"
>every
>>>> minute or so when latency spikes to 500 msec makes the "user
>experience"
>>>> intolerable.
>>>>
>>>> A few years ago, I ran some simple "ping" tests to help a friend
>who was
>>>> trying to use a gaming app. My data was only for one specific path
>so
>>>> it's anecdotal. What I saw was surprising - zero data loss, every
>>>> datagram was delivered, but occasionally a datagram would take up
>to 30
>>>> seconds to arrive. I didn't have the ability to poke around
>inside, but
>>>> I suspected it was an experience of "bufferbloat", enabled by the
>>>> dramatic drop in price of memory over the decades.
>>>>
>>>> It's been a long time since I was involved in operating any part of
>the
>>>> Internet, so I don't know much about the inner workings today.
>Apologies
>>>> for my ignorance....
>>>>
>>>> There was a scenario in the early days of the Internet for which we
>>>> struggled to find a technical solution. Imagine some node in the
>bowels
>>>> of the network, with 3 connected "circuits" to some other nodes.
>On two
>>>> of those inputs, traffic is arriving to be forwarded out the third
>>>> circuit. The incoming flows are significantly more than the
>outgoing
>>>> path can accept.
>>>>
>>>> What happens? How is "backpressure" generated so that the
>incoming
>>>> flows are reduced to the point that the outgoing circuit can handle
>the
>>>> traffic?
>>>>
>>>> About 45 years ago, while we were defining TCPV4, we struggled with
>this
>>>> issue, but didn't find any consensus solutions. So "placeholder"
>>>> mechanisms were defined in TCPV4, to be replaced as research
>continued
>>>> and found a good solution.
>>>>
>>>> In that "placeholder" scheme, the "Source Quench" (SQ) IP message
>was
>>>> defined; it was to be sent by a switching node back toward the
>sender of
>>>> any datagram that had to be discarded because there wasn't any
>place to
>>>> put it.
>>>>
>>>> In addition, the TOS (Type Of Service) and TTL (Time To Live)
>fields
>>>> were defined in IP.
>>>>
>>>> TOS would allow the sender to distinguish datagrams based on their
>>>> needs. For example, we thought "Interactive" service might be
>needed
>>>> for VGV traffic, where timeliness of delivery was most important.
>>>> "Bulk" service might be useful for activities like file transfers,
>>>> backups, et al. "Normal" service might now mean activities like
>using
>>>> the Web.
>>>>
>>>> The TTL field was an attempt to inform each switching node about
>the
>>>> "expiration date" for a datagram. If a node somehow knew that a
>>>> particular datagram was unlikely to reach its destination in time
>to be
>>>> useful (such as a video datagram for a frame that has already been
>>>> displayed), the node could, and should, discard that datagram to
>free up
>>>> resources for useful traffic. Sadly we had no mechanisms for
>measuring
>>>> delay, either in transit or in queuing, so TTL was defined in terms
>of
>>>> "hops", which is not an accurate proxy for time. But it's all we
>had.
>>>>
>>>> Part of the complexity was that the "flow control" mechanism of the
>>>> Internet had put much of the mechanism in the users' computers' TCP
>>>> implementations, rather than the switches which handle only IP.
>Without
>>>> mechanisms in the users' computers, all a switch could do is order
>more
>>>> circuits, and add more memory to the switches for queuing. Perhaps
>that
>>>> led to "bufferbloat".
>>>>
>>>> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
>>>> future release that would introduce a "real" form of Backpressure
>and
>>>> the ability to handle different types of traffic. Meanwhile,
>these
>>>> rudimentary mechanisms would provide some flow control. Hopefully
>the
>>>> users' computers sending the flows would respond to the SQ
>backpressure,
>>>> and switches would prioritize traffic using the TTL and TOS
>information.
>>>>
>>>> But, being way out of touch, I don't know what actually happens
>today.
>>>> Perhaps the current operators and current government watchers can
>answer?:
>>> I would love moe feedback about RED''s deployment at scale in
>particular.
>>>
>>>> 1/ How do current switches exert Backpressure to reduce competing
>>>> traffic flows? Do they still send SQs?
>>> Some send various forms of hardware flow control, an ethernet pause
>>> frame derivative
>>>
>>>> 2/ How do the current and proposed government regulations treat the
>>>> different needs of different types of traffic, e.g., "Bulk" versus
>>>> "Interactive" versus "Normal"? Are Internet carriers permitted to
>treat
>>>> traffic types differently? Are they permitted to charge different
>>>> amounts for different types of service?
>>>
>>>> Jack Haverty
>>>>
>>>> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>>>>> For starters I would like to apologize for cc-ing both nanog and
>my
>>>>> new nn list. (I will add sender filters)
>>>>>
>>>>> A bit more below.
>>>>>
>>>>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc>
><mailto:beecher@beecher.cc> wrote:
>>>>>>> So for now, we'll keep paying for transit to get to the others
>(since it’s about as much as transporting IXP from Dallas), and hoping
>someone at Google finally sees Houston as more than a third rate city
>hanging off of Dallas. Or… someone finally brings a worthwhile IX to
>Houston that gets us more than peering to Kansas City. Yeah, I think
>the former is more likely. 😊
>>>>>> There is often a chicken/egg scenario here with the economics. As
>an eyeball network, your costs to build out and connect to Dallas are
>greater than your transit cost, so you do that. Totally fair.
>>>>>>
>>>>>> However think about it from the content side. Say I want to build
>into to Houston. I have to put routers in, and a bunch of cache
>servers, so I have capital outlay , plus opex for space, power,
>IX/backhaul/transit costs. That's not cheap, so there's a lot of
>calculations that go into it. Is there enough total eyeball traffic
>there to make it worth it? Is saving 8-10ms enough of a performance
>boost to justify the spend? What are the long term trends in that
>market? These answers are of course different for a company running
>their own CDN vs the commercial CDNs.
>>>>>>
>>>>>> I don't work for Google and obviously don't speak for them, but I
>would suspect that they're happy to eat a 8-10ms performance hit to
>serve from Dallas , versus the amount of capital outlay to build out
>there right now.
>>>>> The three forms of traffic I care most about are voip, gaming, and
>>>>> videoconferencing, which are rewarding to have at lower latencies.
>>>>> When I was a kid, we had switched phone networks, and while the
>sound
>>>>> quality was poorer than today, the voice latency cross-town was
>just
>>>>> like "being there". Nowadays we see 500+ms latencies for this kind
>of
>>>>> traffic.
>>>>>
>>>>> As to how to make calls across town work that well again,
>cost-wise, I
>>>>> do not know, but the volume of traffic that would be better served
>by
>>>>> these interconnects quite low, respective to the overall gains in
>>>>> lower latency experiences for them.
>>>>>
>>>>>
>>>>>
>>>>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net>
><mailto:tim@mid.net> wrote:
>>>>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can
>be had for a good bit less than $900 on the wholesale market.
>>>>>>>
>>>>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with
>rates almost costing as much as transit in many cases after you factor
>in loop costs.
>>>>>>>
>>>>>>> For example, in the Houston market (one of the largest and
>fastest growing regions in the US!), we do not have a major IX, so to
>get up to Dallas it’s several thousand for a 100g wave, plus several
>thousand for a 100g port on one of those major IXes. Or, a better
>option, we can get a 100g flat internet transit for just a little bit
>more.
>>>>>>>
>>>>>>> Fortunately, for us as an eyeball network, there are a good
>number of major content networks that are allowing for private peering
>in markets like Houston for just the cost of a cross connect and a QSFP
>if you’re in the right DC, with Google and some others being the
>outliers.
>>>>>>>
>>>>>>> So for now, we'll keep paying for transit to get to the others
>(since it’s about as much as transporting IXP from Dallas), and hoping
>someone at Google finally sees Houston as more than a third rate city
>hanging off of Dallas. Or… someone finally brings a worthwhile IX to
>Houston that gets us more than peering to Kansas City. Yeah, I think
>the former is more likely. 😊
>>>>>>>
>>>>>>> See y’all in San Diego this week,
>>>>>>> Tim
>>>>>>>
>>>>>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com>
><mailto:dave.taht@gmail.com> wrote:
>>>>>>>> This set of trendlines was very interesting. Unfortunately the
>data
>>>>>>>> stops in 2015. Does anyone have more recent data?
>>>>>>>>
>>>>>>>>
>https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>>>>
>>>>>>>> I believe a gbit circuit that an ISP can resell still runs at
>about
>>>>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>>>>
>>>>>>>> ...
>>>>>>>>
>>>>>>>> I am under the impression that many IXPs remain very
>successful,
>>>>>>>> states without them suffer, and I also find the concept of
>doing micro
>>>>>>>> IXPs at the city level, appealing, and now achievable with
>cheap gear.
>>>>>>>> Finer grained cross connects between telco and ISP and IXP
>would lower
>>>>>>>> latencies across town quite hugely...
>>>>>>>>
>>>>>>>> PS I hear ARIN is planning on dropping the price for, and
>bundling 3
>>>>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Oct 30:
>https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>>>> Dave Täht CSO, LibreQos
>>>>>
>>>> _______________________________________________
>>>> Nnagain mailing list
>>>> Nnagain@lists.bufferbloat.net
><mailto:Nnagain@lists.bufferbloat.net>
>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>
>>>
>>
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
[-- Attachment #2: Type: text/html, Size: 9587 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] transit and peering costs projections
2023-10-16 17:36 ` Sebastian Moeller
@ 2023-10-16 18:04 ` Dick Roy
2023-10-17 10:26 ` [NNagain] NN and freedom of speech, and whether there is worthwhile good-faith discussion in that direction Sebastian Moeller
0 siblings, 1 reply; 38+ messages in thread
From: Dick Roy @ 2023-10-16 18:04 UTC (permalink / raw)
To: 'Sebastian Moeller',
'Network Neutrality is back! Let´s make the technical
aspects heard this time!'
Good points all, Sebastien. How to "trade-off" a fixed capacity amongst many users is ultimately a game theoretic problem when users are allowed to make choices, which is certainly the case here. Secondly, any network that can and does generate "more traffic" (aka overhead such as ACKs NACKs and retries) reduces the capacity of the network, and ultimately can lead to the "user" capacity going to zero! Such is life in the fast lane (aka the internet).
Lastly, on the issue of low-latency real-time experience, there are many applications that need/want such capabilities that actually have a net benefit to the individuals involved AND to society as a whole. IMO, interactive gaming is NOT one of those. OK, so now you know I don't engage in these time sinks with no redeeming social value.:) Since it is not hard to argue that just like power distribution, information exchange/dissemination is "in the public interest", the question becomes "Do we allow any and all forms of information exchange/dissemination over what is becoming something akin to a public utility?" FWIW, I don't know the answer to this question! :)
Cheers,
RR
-----Original Message-----
From: Sebastian Moeller [mailto:moeller0@gmx.de]
Sent: Monday, October 16, 2023 10:36 AM
To: dickroy@alum.mit.edu; Network Neutrality is back! Let´s make the technical aspects heard this time!
Subject: Re: [NNagain] transit and peering costs projections
Hi Richard,
> On Oct 16, 2023, at 19:01, Dick Roy via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>
> Just an observation: ANY type of congestion control that changes application behavior in response to congestion, or predicted congestion (ENC), begs the question "How does throttling of application information exchange rate (aka behavior) affect the user experience and will the user tolerate it?"
[SM] The trade-off here is, if the application does not respond (or rather if no application would respond) we would end up with congestion collapse where no application would gain much of anything as the network busies itself trying to re-transmit dropped packets without making much head way... Simplistic game theory application might imply that individual applications could try to game this, and generally that seems to be true, but we have remedies for that available..
>
> Given any (complex and packet-switched) network topology of interconnected nodes and links, each with possible a different capacity and characteristics, such as the internet today, IMO the two fundamental questions are:
>
> 1) How can a given network be operated/configured so as to maximize aggregate throughput (i.e. achieve its theoretical capacity), and
> 2) What things in the network need to change to increase the throughput (aka parameters in the network with the largest Lagrange multipliers associated with them)?
[SM] The thing is we generally know how to maximize (average) throughput, just add (over-)generous amounts of buffering, the problem is that this screws up the other important quality axis, latency... We ideally want low latency and even more low latency variance (aka jitter) AND high throughput... Turns out though that above a certain throughput threshold* many users do not seem to care all that much for more throughput as long as interactive use cases are sufficiently responsive... but high responsiveness requires low latency and low jitter... This is actually a good thing, as that means we do not necessarily aim for 100% utilization (almost requiring deep buffers and hence resulting in compromised latency) but can get away with say 80-90% where shallow buffers will do (or rather where buffer filling stays shallow, there is IMHO still value in having deep buffers for rare events that need it).
*) This is not a hard physical law so the exact threshold is not set in stone, but unless one has many parallel users, something in the 20-50 Mbps range is plenty and that is only needed in the "loaded" direction, that is for pure consumers the upload can be thinner, for pure producers the download can be thinner.
>
> I am not an expert in this field,
[SM] Nor am I, I come from the wet-ware side of things so not even soft- or hard-ware ;)
> however it seems to me that answers to these questions would be useful, assuming they are not yet available!
>
> Cheers,
>
> RR
>
>
>
> -----Original Message-----
> From: Nnagain [mailto:nnagain-bounces@lists.bufferbloat.net] On Behalf Of rjmcmahon via Nnagain
> Sent: Sunday, October 15, 2023 1:39 PM
> To: Network Neutrality is back! Let´s make the technical aspects heard this time!
> Cc: rjmcmahon
> Subject: Re: [NNagain] transit and peering costs projections
>
> Hi Jack,
>
> Thanks again for sharing. It's very interesting to me.
>
> Today, the networks are shifting from capacity constrained to latency
> constrained, as can be seen in the IX discussions about how the speed of
> light over fiber is too slow even between Houston & Dallas.
>
> The mitigations against standing queues (which cause bloat today) are:
>
> o) Shrink the e2e bottleneck queue so it will drop packets in a flow and
> TCP will respond to that "signal"
> o) Use some form of ECN marking where the network forwarding plane
> ultimately informs the TCP source state machine so it can slow down or
> pace effectively. This can be an earlier feedback signal and, if done
> well, can inform the sources to avoid bottleneck queuing. There are
> couple of approaches with ECN. Comcast is trialing L4S now which seems
> interesting to me as a WiFi test & measurement engineer. The jury is
> still out on this and measurements are needed.
> o) Mitigate source side bloat via TCP_NOTSENT_LOWAT
>
> The QoS priority approach per congestion is orthogonal by my judgment as
> it's typically not supported e2e, many networks will bleach DSCP
> markings. And it's really too late by my judgment.
>
> Also, on clock sync, yes your generation did us both a service and
> disservice by getting rid of the PSTN TDM clock ;) So IP networking
> devices kinda ignored clock sync, which makes e2e one way delay (OWD)
> measurements impossible. Thankfully, the GPS atomic clock is now
> available mostly everywhere and many devices use TCXO oscillators so
> it's possible to get clock sync and use oscillators that can minimize
> drift. I pay $14 for a Rpi4 GPS chip with pulse per second as an
> example.
>
> It seems silly to me that clocks aren't synced to the GPS atomic clock
> even if by a proxy even if only for measurement and monitoring.
>
> Note: As Richard Roy will point out, there really is no such thing as
> synchronized clocks across geographies per general relativity - so those
> syncing clocks need to keep those effects in mind. I limited the iperf 2
> timestamps to microsecond precision in hopes avoiding those issues.
>
> Note: With WiFi, a packet drop can occur because an intermittent RF
> channel condition. TCP can't tell the difference between an RF drop vs a
> congested queue drop. That's another reason ECN markings from network
> devices may be better than dropped packets.
>
> Note: I've added some iperf 2 test support around pacing as that seems
> to be the direction the industry is heading as networks are less and
> less capacity strained and user quality of experience is being driven by
> tail latencies. One can also test with the Prague CCA for the L4S
> scenarios. (This is a fun project: https://www.l4sgear.com/ and fairly
> low cost)
>
> --fq-rate n[kmgKMG]
> Set a rate to be used with fair-queuing based socket-level pacing, in
> bytes or bits per second. Only available on platforms supporting the
> SO_MAX_PACING_RATE socket option. (Note: Here the suffixes indicate
> bytes/sec or bits/sec per use of uppercase or lowercase, respectively)
>
> --fq-rate-step n[kmgKMG]
> Set a step of rate to be used with fair-queuing based socket-level
> pacing, in bytes or bits per second. Step occurs every
> fq-rate-step-interval (defaults to one second)
>
> --fq-rate-step-interval n
> Time in seconds before stepping the fq-rate
>
> Bob
>
> PS. Iperf 2 man page https://iperf2.sourceforge.io/iperf-manpage.html
>
>> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
>> latency. It's not just "rewarding" to have lower latencies; high
>> latencies may make VGV unusable. Average (or "typical") latency as
>> the FCC label proposes isn't a good metric to judge usability. A path
>> which has high variance in latency can be unusable even if the average
>> is quite low. Having your voice or video or gameplay "break up"
>> every minute or so when latency spikes to 500 msec makes the "user
>> experience" intolerable.
>>
>> A few years ago, I ran some simple "ping" tests to help a friend who
>> was trying to use a gaming app. My data was only for one specific
>> path so it's anecdotal. What I saw was surprising - zero data loss,
>> every datagram was delivered, but occasionally a datagram would take
>> up to 30 seconds to arrive. I didn't have the ability to poke around
>> inside, but I suspected it was an experience of "bufferbloat", enabled
>> by the dramatic drop in price of memory over the decades.
>>
>> It's been a long time since I was involved in operating any part of
>> the Internet, so I don't know much about the inner workings today.
>> Apologies for my ignorance....
>>
>> There was a scenario in the early days of the Internet for which we
>> struggled to find a technical solution. Imagine some node in the
>> bowels of the network, with 3 connected "circuits" to some other
>> nodes. On two of those inputs, traffic is arriving to be forwarded
>> out the third circuit. The incoming flows are significantly more than
>> the outgoing path can accept.
>>
>> What happens? How is "backpressure" generated so that the incoming
>> flows are reduced to the point that the outgoing circuit can handle
>> the traffic?
>>
>> About 45 years ago, while we were defining TCPV4, we struggled with
>> this issue, but didn't find any consensus solutions. So "placeholder"
>> mechanisms were defined in TCPV4, to be replaced as research continued
>> and found a good solution.
>>
>> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
>> defined; it was to be sent by a switching node back toward the sender
>> of any datagram that had to be discarded because there wasn't any
>> place to put it.
>>
>> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
>> were defined in IP.
>>
>> TOS would allow the sender to distinguish datagrams based on their
>> needs. For example, we thought "Interactive" service might be needed
>> for VGV traffic, where timeliness of delivery was most important.
>> "Bulk" service might be useful for activities like file transfers,
>> backups, et al. "Normal" service might now mean activities like
>> using the Web.
>>
>> The TTL field was an attempt to inform each switching node about the
>> "expiration date" for a datagram. If a node somehow knew that a
>> particular datagram was unlikely to reach its destination in time to
>> be useful (such as a video datagram for a frame that has already been
>> displayed), the node could, and should, discard that datagram to free
>> up resources for useful traffic. Sadly we had no mechanisms for
>> measuring delay, either in transit or in queuing, so TTL was defined
>> in terms of "hops", which is not an accurate proxy for time. But
>> it's all we had.
>>
>> Part of the complexity was that the "flow control" mechanism of the
>> Internet had put much of the mechanism in the users' computers' TCP
>> implementations, rather than the switches which handle only IP.
>> Without mechanisms in the users' computers, all a switch could do is
>> order more circuits, and add more memory to the switches for queuing.
>> Perhaps that led to "bufferbloat".
>>
>> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
>> future release that would introduce a "real" form of Backpressure and
>> the ability to handle different types of traffic. Meanwhile, these
>> rudimentary mechanisms would provide some flow control. Hopefully the
>> users' computers sending the flows would respond to the SQ
>> backpressure, and switches would prioritize traffic using the TTL and
>> TOS information.
>>
>> But, being way out of touch, I don't know what actually happens
>> today. Perhaps the current operators and current government watchers
>> can answer?:git clone https://rjmcmahon@git.code.sf.net/p/iperf2/code
>> iperf2-code
>>
>> 1/ How do current switches exert Backpressure to reduce competing
>> traffic flows? Do they still send SQs?
>>
>> 2/ How do the current and proposed government regulations treat the
>> different needs of different types of traffic, e.g., "Bulk" versus
>> "Interactive" versus "Normal"? Are Internet carriers permitted to
>> treat traffic types differently? Are they permitted to charge
>> different amounts for different types of service?
>>
>> Jack Haverty
>>
>> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>>> For starters I would like to apologize for cc-ing both nanog and my
>>> new nn list. (I will add sender filters)
>>>
>>> A bit more below.
>>>
>>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc>
>>> wrote:
>>>>> So for now, we'll keep paying for transit to get to the others
>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>> City. Yeah, I think the former is more likely. 😊
>>>>
>>>> There is often a chicken/egg scenario here with the economics. As an
>>>> eyeball network, your costs to build out and connect to Dallas are
>>>> greater than your transit cost, so you do that. Totally fair.
>>>>
>>>> However think about it from the content side. Say I want to build
>>>> into to Houston. I have to put routers in, and a bunch of cache
>>>> servers, so I have capital outlay , plus opex for space, power,
>>>> IX/backhaul/transit costs. That's not cheap, so there's a lot of
>>>> calculations that go into it. Is there enough total eyeball traffic
>>>> there to make it worth it? Is saving 8-10ms enough of a performance
>>>> boost to justify the spend? What are the long term trends in that
>>>> market? These answers are of course different for a company running
>>>> their own CDN vs the commercial CDNs.
>>>>
>>>> I don't work for Google and obviously don't speak for them, but I
>>>> would suspect that they're happy to eat a 8-10ms performance hit to
>>>> serve from Dallas , versus the amount of capital outlay to build out
>>>> there right now.
>>> The three forms of traffic I care most about are voip, gaming, and
>>> videoconferencing, which are rewarding to have at lower latencies.
>>> When I was a kid, we had switched phone networks, and while the sound
>>> quality was poorer than today, the voice latency cross-town was just
>>> like "being there". Nowadays we see 500+ms latencies for this kind of
>>> traffic.
>>>
>>> As to how to make calls across town work that well again, cost-wise, I
>>> do not know, but the volume of traffic that would be better served by
>>> these interconnects quite low, respective to the overall gains in
>>> lower latency experiences for them.
>>>
>>>
>>>
>>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
>>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be
>>>>> had for a good bit less than $900 on the wholesale market.
>>>>>
>>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with
>>>>> rates almost costing as much as transit in many cases after you
>>>>> factor in loop costs.
>>>>>
>>>>> For example, in the Houston market (one of the largest and fastest
>>>>> growing regions in the US!), we do not have a major IX, so to get up
>>>>> to Dallas it’s several thousand for a 100g wave, plus several
>>>>> thousand for a 100g port on one of those major IXes. Or, a better
>>>>> option, we can get a 100g flat internet transit for just a little
>>>>> bit more.
>>>>>
>>>>> Fortunately, for us as an eyeball network, there are a good number
>>>>> of major content networks that are allowing for private peering in
>>>>> markets like Houston for just the cost of a cross connect and a QSFP
>>>>> if you’re in the right DC, with Google and some others being the
>>>>> outliers.
>>>>>
>>>>> So for now, we'll keep paying for transit to get to the others
>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>> City. Yeah, I think the former is more likely. 😊
>>>>>
>>>>> See y’all in San Diego this week,
>>>>> Tim
>>>>>
>>>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>>>>>> This set of trendlines was very interesting. Unfortunately the
>>>>>> data
>>>>>> stops in 2015. Does anyone have more recent data?
>>>>>>
>>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>>
>>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>>
>>>>>> ...
>>>>>>
>>>>>> I am under the impression that many IXPs remain very successful,
>>>>>> states without them suffer, and I also find the concept of doing
>>>>>> micro
>>>>>> IXPs at the city level, appealing, and now achievable with cheap
>>>>>> gear.
>>>>>> Finer grained cross connects between telco and ISP and IXP would
>>>>>> lower
>>>>>> latencies across town quite hugely...
>>>>>>
>>>>>> PS I hear ARIN is planning on dropping the price for, and bundling
>>>>>> 3
>>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Oct 30:
>>>>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>> Dave Täht CSO, LibreQos
>>>
>>>
>>
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] NN and freedom of speech, and whether there is worthwhile good-faith discussion in that direction
2023-10-16 18:04 ` Dick Roy
@ 2023-10-17 10:26 ` Sebastian Moeller
2023-10-17 17:26 ` Spencer Sevilla
0 siblings, 1 reply; 38+ messages in thread
From: Sebastian Moeller @ 2023-10-17 10:26 UTC (permalink / raw)
To: Dick Roy
Cc: Network Neutrality is back! Let´s make the technical
aspects heard this time!
Hi Richard,
> On Oct 16, 2023, at 20:04, Dick Roy <dickroy@alum.mit.edu> wrote:
>
> Good points all, Sebastien. How to "trade-off" a fixed capacity amongst many users is ultimately a game theoretic problem when users are allowed to make choices, which is certainly the case here. Secondly, any network that can and does generate "more traffic" (aka overhead such as ACKs NACKs and retries) reduces the capacity of the network, and ultimately can lead to the "user" capacity going to zero! Such is life in the fast lane (aka the internet).
>
> Lastly, on the issue of low-latency real-time experience, there are many applications that need/want such capabilities that actually have a net benefit to the individuals involved AND to society as a whole. IMO, interactive gaming is NOT one of those.
[SM] Yes, gaming is one obvious example of a class of uses that work best with low latency and low jitter, not necessarily an example for a use-case worthy enough to justify the work required to increase the responsiveness of the internet. Other examples are video conferences, VoIP, in extension of both musical collaboration over the internet, and surprising to some even plain old web browsing (it often needs to first read a page before being able to follow links and load resources, and every read takes at best a single RTT). None of these are inherently beneficial or detrimental to individuals or society, but most can be used to improve the status quo... I would argue that in the last 4 years the relevance of interactive use-cases has been made quite clear to a lot of folks...
> OK, so now you know I don't engage in these time sinks with no redeeming social value.:)
[SM] Duly noted ;)
> Since it is not hard to argue that just like power distribution, information exchange/dissemination is "in the public interest", the question becomes "Do we allow any and all forms of information exchange/dissemination over what is becoming something akin to a public utility?" FWIW, I don't know the answer to this question! :)
[SM] This is an interesting question and one (only) tangentially related to network neutrality... it is more related to freedom of speech and limits thereof. Maybe a question for another mailing list? Certainly one meriting a topic change...
Regards
Sebastian
>
> Cheers,
>
> RR
>
> -----Original Message-----
> From: Sebastian Moeller [mailto:moeller0@gmx.de]
> Sent: Monday, October 16, 2023 10:36 AM
> To: dickroy@alum.mit.edu; Network Neutrality is back! Let´s make the technical aspects heard this time!
> Subject: Re: [NNagain] transit and peering costs projections
>
> Hi Richard,
>
>
>> On Oct 16, 2023, at 19:01, Dick Roy via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>>
>> Just an observation: ANY type of congestion control that changes application behavior in response to congestion, or predicted congestion (ENC), begs the question "How does throttling of application information exchange rate (aka behavior) affect the user experience and will the user tolerate it?"
>
> [SM] The trade-off here is, if the application does not respond (or rather if no application would respond) we would end up with congestion collapse where no application would gain much of anything as the network busies itself trying to re-transmit dropped packets without making much head way... Simplistic game theory application might imply that individual applications could try to game this, and generally that seems to be true, but we have remedies for that available..
>
>
>>
>> Given any (complex and packet-switched) network topology of interconnected nodes and links, each with possible a different capacity and characteristics, such as the internet today, IMO the two fundamental questions are:
>>
>> 1) How can a given network be operated/configured so as to maximize aggregate throughput (i.e. achieve its theoretical capacity), and
>> 2) What things in the network need to change to increase the throughput (aka parameters in the network with the largest Lagrange multipliers associated with them)?
>
> [SM] The thing is we generally know how to maximize (average) throughput, just add (over-)generous amounts of buffering, the problem is that this screws up the other important quality axis, latency... We ideally want low latency and even more low latency variance (aka jitter) AND high throughput... Turns out though that above a certain throughput threshold* many users do not seem to care all that much for more throughput as long as interactive use cases are sufficiently responsive... but high responsiveness requires low latency and low jitter... This is actually a good thing, as that means we do not necessarily aim for 100% utilization (almost requiring deep buffers and hence resulting in compromised latency) but can get away with say 80-90% where shallow buffers will do (or rather where buffer filling stays shallow, there is IMHO still value in having deep buffers for rare events that need it).
>
>
>
> *) This is not a hard physical law so the exact threshold is not set in stone, but unless one has many parallel users, something in the 20-50 Mbps range is plenty and that is only needed in the "loaded" direction, that is for pure consumers the upload can be thinner, for pure producers the download can be thinner.
>
>
>>
>> I am not an expert in this field,
>
> [SM] Nor am I, I come from the wet-ware side of things so not even soft- or hard-ware ;)
>
>
>> however it seems to me that answers to these questions would be useful, assuming they are not yet available!
>>
>> Cheers,
>>
>> RR
>>
>>
>>
>> -----Original Message-----
>> From: Nnagain [mailto:nnagain-bounces@lists.bufferbloat.net] On Behalf Of rjmcmahon via Nnagain
>> Sent: Sunday, October 15, 2023 1:39 PM
>> To: Network Neutrality is back! Let´s make the technical aspects heard this time!
>> Cc: rjmcmahon
>> Subject: Re: [NNagain] transit and peering costs projections
>>
>> Hi Jack,
>>
>> Thanks again for sharing. It's very interesting to me.
>>
>> Today, the networks are shifting from capacity constrained to latency
>> constrained, as can be seen in the IX discussions about how the speed of
>> light over fiber is too slow even between Houston & Dallas.
>>
>> The mitigations against standing queues (which cause bloat today) are:
>>
>> o) Shrink the e2e bottleneck queue so it will drop packets in a flow and
>> TCP will respond to that "signal"
>> o) Use some form of ECN marking where the network forwarding plane
>> ultimately informs the TCP source state machine so it can slow down or
>> pace effectively. This can be an earlier feedback signal and, if done
>> well, can inform the sources to avoid bottleneck queuing. There are
>> couple of approaches with ECN. Comcast is trialing L4S now which seems
>> interesting to me as a WiFi test & measurement engineer. The jury is
>> still out on this and measurements are needed.
>> o) Mitigate source side bloat via TCP_NOTSENT_LOWAT
>>
>> The QoS priority approach per congestion is orthogonal by my judgment as
>> it's typically not supported e2e, many networks will bleach DSCP
>> markings. And it's really too late by my judgment.
>>
>> Also, on clock sync, yes your generation did us both a service and
>> disservice by getting rid of the PSTN TDM clock ;) So IP networking
>> devices kinda ignored clock sync, which makes e2e one way delay (OWD)
>> measurements impossible. Thankfully, the GPS atomic clock is now
>> available mostly everywhere and many devices use TCXO oscillators so
>> it's possible to get clock sync and use oscillators that can minimize
>> drift. I pay $14 for a Rpi4 GPS chip with pulse per second as an
>> example.
>>
>> It seems silly to me that clocks aren't synced to the GPS atomic clock
>> even if by a proxy even if only for measurement and monitoring.
>>
>> Note: As Richard Roy will point out, there really is no such thing as
>> synchronized clocks across geographies per general relativity - so those
>> syncing clocks need to keep those effects in mind. I limited the iperf 2
>> timestamps to microsecond precision in hopes avoiding those issues.
>>
>> Note: With WiFi, a packet drop can occur because an intermittent RF
>> channel condition. TCP can't tell the difference between an RF drop vs a
>> congested queue drop. That's another reason ECN markings from network
>> devices may be better than dropped packets.
>>
>> Note: I've added some iperf 2 test support around pacing as that seems
>> to be the direction the industry is heading as networks are less and
>> less capacity strained and user quality of experience is being driven by
>> tail latencies. One can also test with the Prague CCA for the L4S
>> scenarios. (This is a fun project: https://www.l4sgear.com/ and fairly
>> low cost)
>>
>> --fq-rate n[kmgKMG]
>> Set a rate to be used with fair-queuing based socket-level pacing, in
>> bytes or bits per second. Only available on platforms supporting the
>> SO_MAX_PACING_RATE socket option. (Note: Here the suffixes indicate
>> bytes/sec or bits/sec per use of uppercase or lowercase, respectively)
>>
>> --fq-rate-step n[kmgKMG]
>> Set a step of rate to be used with fair-queuing based socket-level
>> pacing, in bytes or bits per second. Step occurs every
>> fq-rate-step-interval (defaults to one second)
>>
>> --fq-rate-step-interval n
>> Time in seconds before stepping the fq-rate
>>
>> Bob
>>
>> PS. Iperf 2 man page https://iperf2.sourceforge.io/iperf-manpage.html
>>
>>> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
>>> latency. It's not just "rewarding" to have lower latencies; high
>>> latencies may make VGV unusable. Average (or "typical") latency as
>>> the FCC label proposes isn't a good metric to judge usability. A path
>>> which has high variance in latency can be unusable even if the average
>>> is quite low. Having your voice or video or gameplay "break up"
>>> every minute or so when latency spikes to 500 msec makes the "user
>>> experience" intolerable.
>>>
>>> A few years ago, I ran some simple "ping" tests to help a friend who
>>> was trying to use a gaming app. My data was only for one specific
>>> path so it's anecdotal. What I saw was surprising - zero data loss,
>>> every datagram was delivered, but occasionally a datagram would take
>>> up to 30 seconds to arrive. I didn't have the ability to poke around
>>> inside, but I suspected it was an experience of "bufferbloat", enabled
>>> by the dramatic drop in price of memory over the decades.
>>>
>>> It's been a long time since I was involved in operating any part of
>>> the Internet, so I don't know much about the inner workings today.
>>> Apologies for my ignorance....
>>>
>>> There was a scenario in the early days of the Internet for which we
>>> struggled to find a technical solution. Imagine some node in the
>>> bowels of the network, with 3 connected "circuits" to some other
>>> nodes. On two of those inputs, traffic is arriving to be forwarded
>>> out the third circuit. The incoming flows are significantly more than
>>> the outgoing path can accept.
>>>
>>> What happens? How is "backpressure" generated so that the incoming
>>> flows are reduced to the point that the outgoing circuit can handle
>>> the traffic?
>>>
>>> About 45 years ago, while we were defining TCPV4, we struggled with
>>> this issue, but didn't find any consensus solutions. So "placeholder"
>>> mechanisms were defined in TCPV4, to be replaced as research continued
>>> and found a good solution.
>>>
>>> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
>>> defined; it was to be sent by a switching node back toward the sender
>>> of any datagram that had to be discarded because there wasn't any
>>> place to put it.
>>>
>>> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
>>> were defined in IP.
>>>
>>> TOS would allow the sender to distinguish datagrams based on their
>>> needs. For example, we thought "Interactive" service might be needed
>>> for VGV traffic, where timeliness of delivery was most important.
>>> "Bulk" service might be useful for activities like file transfers,
>>> backups, et al. "Normal" service might now mean activities like
>>> using the Web.
>>>
>>> The TTL field was an attempt to inform each switching node about the
>>> "expiration date" for a datagram. If a node somehow knew that a
>>> particular datagram was unlikely to reach its destination in time to
>>> be useful (such as a video datagram for a frame that has already been
>>> displayed), the node could, and should, discard that datagram to free
>>> up resources for useful traffic. Sadly we had no mechanisms for
>>> measuring delay, either in transit or in queuing, so TTL was defined
>>> in terms of "hops", which is not an accurate proxy for time. But
>>> it's all we had.
>>>
>>> Part of the complexity was that the "flow control" mechanism of the
>>> Internet had put much of the mechanism in the users' computers' TCP
>>> implementations, rather than the switches which handle only IP.
>>> Without mechanisms in the users' computers, all a switch could do is
>>> order more circuits, and add more memory to the switches for queuing.
>>> Perhaps that led to "bufferbloat".
>>>
>>> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
>>> future release that would introduce a "real" form of Backpressure and
>>> the ability to handle different types of traffic. Meanwhile, these
>>> rudimentary mechanisms would provide some flow control. Hopefully the
>>> users' computers sending the flows would respond to the SQ
>>> backpressure, and switches would prioritize traffic using the TTL and
>>> TOS information.
>>>
>>> But, being way out of touch, I don't know what actually happens
>>> today. Perhaps the current operators and current government watchers
>>> can answer?:git clone https://rjmcmahon@git.code.sf.net/p/iperf2/code
>>> iperf2-code
>>>
>>> 1/ How do current switches exert Backpressure to reduce competing
>>> traffic flows? Do they still send SQs?
>>>
>>> 2/ How do the current and proposed government regulations treat the
>>> different needs of different types of traffic, e.g., "Bulk" versus
>>> "Interactive" versus "Normal"? Are Internet carriers permitted to
>>> treat traffic types differently? Are they permitted to charge
>>> different amounts for different types of service?
>>>
>>> Jack Haverty
>>>
>>> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>>>> For starters I would like to apologize for cc-ing both nanog and my
>>>> new nn list. (I will add sender filters)
>>>>
>>>> A bit more below.
>>>>
>>>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc>
>>>> wrote:
>>>>>> So for now, we'll keep paying for transit to get to the others
>>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>>> City. Yeah, I think the former is more likely. 😊
>>>>>
>>>>> There is often a chicken/egg scenario here with the economics. As an
>>>>> eyeball network, your costs to build out and connect to Dallas are
>>>>> greater than your transit cost, so you do that. Totally fair.
>>>>>
>>>>> However think about it from the content side. Say I want to build
>>>>> into to Houston. I have to put routers in, and a bunch of cache
>>>>> servers, so I have capital outlay , plus opex for space, power,
>>>>> IX/backhaul/transit costs. That's not cheap, so there's a lot of
>>>>> calculations that go into it. Is there enough total eyeball traffic
>>>>> there to make it worth it? Is saving 8-10ms enough of a performance
>>>>> boost to justify the spend? What are the long term trends in that
>>>>> market? These answers are of course different for a company running
>>>>> their own CDN vs the commercial CDNs.
>>>>>
>>>>> I don't work for Google and obviously don't speak for them, but I
>>>>> would suspect that they're happy to eat a 8-10ms performance hit to
>>>>> serve from Dallas , versus the amount of capital outlay to build out
>>>>> there right now.
>>>> The three forms of traffic I care most about are voip, gaming, and
>>>> videoconferencing, which are rewarding to have at lower latencies.
>>>> When I was a kid, we had switched phone networks, and while the sound
>>>> quality was poorer than today, the voice latency cross-town was just
>>>> like "being there". Nowadays we see 500+ms latencies for this kind of
>>>> traffic.
>>>>
>>>> As to how to make calls across town work that well again, cost-wise, I
>>>> do not know, but the volume of traffic that would be better served by
>>>> these interconnects quite low, respective to the overall gains in
>>>> lower latency experiences for them.
>>>>
>>>>
>>>>
>>>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
>>>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be
>>>>>> had for a good bit less than $900 on the wholesale market.
>>>>>>
>>>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with
>>>>>> rates almost costing as much as transit in many cases after you
>>>>>> factor in loop costs.
>>>>>>
>>>>>> For example, in the Houston market (one of the largest and fastest
>>>>>> growing regions in the US!), we do not have a major IX, so to get up
>>>>>> to Dallas it’s several thousand for a 100g wave, plus several
>>>>>> thousand for a 100g port on one of those major IXes. Or, a better
>>>>>> option, we can get a 100g flat internet transit for just a little
>>>>>> bit more.
>>>>>>
>>>>>> Fortunately, for us as an eyeball network, there are a good number
>>>>>> of major content networks that are allowing for private peering in
>>>>>> markets like Houston for just the cost of a cross connect and a QSFP
>>>>>> if you’re in the right DC, with Google and some others being the
>>>>>> outliers.
>>>>>>
>>>>>> So for now, we'll keep paying for transit to get to the others
>>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>>> City. Yeah, I think the former is more likely. 😊
>>>>>>
>>>>>> See y’all in San Diego this week,
>>>>>> Tim
>>>>>>
>>>>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>>>>>>> This set of trendlines was very interesting. Unfortunately the
>>>>>>> data
>>>>>>> stops in 2015. Does anyone have more recent data?
>>>>>>>
>>>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>>>
>>>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>>>
>>>>>>> ...
>>>>>>>
>>>>>>> I am under the impression that many IXPs remain very successful,
>>>>>>> states without them suffer, and I also find the concept of doing
>>>>>>> micro
>>>>>>> IXPs at the city level, appealing, and now achievable with cheap
>>>>>>> gear.
>>>>>>> Finer grained cross connects between telco and ISP and IXP would
>>>>>>> lower
>>>>>>> latencies across town quite hugely...
>>>>>>>
>>>>>>> PS I hear ARIN is planning on dropping the price for, and bundling
>>>>>>> 3
>>>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Oct 30:
>>>>>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>>> Dave Täht CSO, LibreQos
>>>>
>>>>
>>>
>>> _______________________________________________
>>> Nnagain mailing list
>>> Nnagain@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/nnagain
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>>
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] The history of congestion control on the internet
2023-10-16 1:39 ` [NNagain] The history of congestion control on the internet Dave Taht
2023-10-16 6:30 ` Jack Haverty
@ 2023-10-17 15:34 ` Dick Roy
1 sibling, 0 replies; 38+ messages in thread
From: Dick Roy @ 2023-10-17 15:34 UTC (permalink / raw)
To: 'Network Neutrality is back! Let´s make the technical
aspects heard this time!'
While this article (How to Fix the Internet) is not explicitly about congestion control, it is nonetheless about "congestion of a sort on the internet", and it is definitely worth a read!
https://www.technologyreview.com/2023/10/17/1081194/how-to-fix-the-internet-online-discourse/?truid=232f5693d66885a14db16ab409cea188&utm_source=the_download&utm_medium=email&utm_campaign=the_download.unpaid.engagement&utm_term=&utm_content=10-17-2023&mc_cid=6ba1a5c7e6&mc_eid=6d74dc7f9a
-----Original Message-----
From: Nnagain [mailto:nnagain-bounces@lists.bufferbloat.net] On Behalf Of Dave Taht via Nnagain
Sent: Sunday, October 15, 2023 6:39 PM
To: Network Neutrality is back! Let´s make the technical aspects heard this time!
Cc: Dave Taht
Subject: [NNagain] The history of congestion control on the internet
It is wonderful to have your original perspectives here, Jack.
But please, everyone, before a major subject change, change the subject?
Jack's email conflates a few things that probably deserve other
threads for them. One is VGV - great acronym! Another is about the
"Placeholders" of TTL, and TOS. The last is the history of congestion
control - and it's future! As being a part of the most recent episodes
here I have written extensively on the subject, but what I most like
to point people to is my fun talks trying to make it more accessible
like this one at apnic
https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-but-its-not-over-yet/
or my more recent one at tti/vanguard.
Most recently one of our LibreQos clients has been collecting 10ms
samples and movies of what real-world residential traffic actually
looks like:
https://www.youtube.com/@trendaltoews7143
And it is my hope that that conveys intuition to others... as compared
to speedtest traffic, which prove nothing about the actual behaviors
of VGV traffic, which I ranted about here:
https://blog.cerowrt.org/post/speedtests/ - I am glad that these
speedtests now have latency under load reports almost universally, but
see the rant for more detail.
Most people only have a picture of traffic in the large, over 5 minute
intervals, which behaves quite differently, or a pre-conception that
backpressure actually exists across the internet. It doesn't. An
explicit ack for every packet was ripped out of the arpanet as costing
too much time. Wifi, to some extent, recreates the arpanet problem by
having explicit acks on the local loop that are repeated until by god
the packet comes through, usually without exponential backoff.
We have some really amazing encoding schemes now - I do not understand
how starlink works without retries for example, an my grip on 5G's
encodings is non-existent, except knowing it is the most bufferbloated
of all our technologies.
...
Anyway, my hope for this list is that we come up with useful technical
feedback to the powers-that-be that want to regulate the internet
under some title ii provisions, and I certainly hope we can make
strides towards fixing bufferbloat along the way! There are many other
issues. Let's talk about those instead!
But...
...
In "brief" response to the notes below - source quench died due to
easy ddos, AQMs from RED (1992) until codel (2012) struggled with
measuring the wrong things ( Kathie's updated paper on red in a
different light: https://pollere.net/Codel.html ), SFQ was adopted by
many devices, WRR used in others, ARED I think is common in juniper
boxes, fq_codel is pretty much the default now for most of linux, and
I helped write CAKE.
TCPs evolved from reno to vegas to cubic to bbr and the paper on BBR
is excellent: https://research.google/pubs/pub45646/ as is len
kleinrock's monograph on it. However problems with self congestion and
excessive packet loss were observed, and after entering the ietf
process, is now in it's 3rd revision, which looks pretty good.
Hardware pause frames in ethernet are often available, there are all
kinds of specialized new hardware flow control standards in 802.1, a
new more centralized controller in wifi7
To this day I have no idea how infiniband works. Or how ATM was
supposed to work. I have a good grip on wifi up to version 6, and the
work we did on wifi is in use now on a lot of wifi gear like openwrt,
eero and evenroute, and I am proudest of all my teams' work on
achieving airtime fairness, and better scheduling described in this
paper here: https://www.cs.kau.se/tohojo/airtime-fairness/ for wifi
and MOS to die for.
There is new work on this thing called L4S, which has a bunch of RFCs
for it, leverages multi-bit DCTCP style ECN and is under test by apple
and comcast, it is discussed on tsvwg list a lot. I encourage users to
jump in on the comcast/apple beta, and operators to at least read
this: https://datatracker.ietf.org/doc/draft-ietf-tsvwg-l4sops/
Knowing that there is a book or three left to write on this subject
that nobody will read is an issue, as is coming up with an
architecture to take packet handling as we know it, to the moon and
the rest of the solar system, seems kind of difficult.
Ideally I would love to be working on that earth-moon architecture
rather than trying to finish getting stuff we designed in 2012-2016
deployed.
I am going to pull out a few specific questions from the below and
answer separately.
On Sun, Oct 15, 2023 at 1:00 PM Jack Haverty via Nnagain
<nnagain@lists.bufferbloat.net> wrote:
>
> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
> latency. It's not just "rewarding" to have lower latencies; high
> latencies may make VGV unusable. Average (or "typical") latency as the
> FCC label proposes isn't a good metric to judge usability. A path which
> has high variance in latency can be unusable even if the average is
> quite low. Having your voice or video or gameplay "break up" every
> minute or so when latency spikes to 500 msec makes the "user experience"
> intolerable.
>
> A few years ago, I ran some simple "ping" tests to help a friend who was
> trying to use a gaming app. My data was only for one specific path so
> it's anecdotal. What I saw was surprising - zero data loss, every
> datagram was delivered, but occasionally a datagram would take up to 30
> seconds to arrive. I didn't have the ability to poke around inside, but
> I suspected it was an experience of "bufferbloat", enabled by the
> dramatic drop in price of memory over the decades.
>
> It's been a long time since I was involved in operating any part of the
> Internet, so I don't know much about the inner workings today. Apologies
> for my ignorance....
>
> There was a scenario in the early days of the Internet for which we
> struggled to find a technical solution. Imagine some node in the bowels
> of the network, with 3 connected "circuits" to some other nodes. On two
> of those inputs, traffic is arriving to be forwarded out the third
> circuit. The incoming flows are significantly more than the outgoing
> path can accept.
>
> What happens? How is "backpressure" generated so that the incoming
> flows are reduced to the point that the outgoing circuit can handle the
> traffic?
>
> About 45 years ago, while we were defining TCPV4, we struggled with this
> issue, but didn't find any consensus solutions. So "placeholder"
> mechanisms were defined in TCPV4, to be replaced as research continued
> and found a good solution.
>
> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
> defined; it was to be sent by a switching node back toward the sender of
> any datagram that had to be discarded because there wasn't any place to
> put it.
>
> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
> were defined in IP.
>
> TOS would allow the sender to distinguish datagrams based on their
> needs. For example, we thought "Interactive" service might be needed
> for VGV traffic, where timeliness of delivery was most important.
> "Bulk" service might be useful for activities like file transfers,
> backups, et al. "Normal" service might now mean activities like using
> the Web.
>
> The TTL field was an attempt to inform each switching node about the
> "expiration date" for a datagram. If a node somehow knew that a
> particular datagram was unlikely to reach its destination in time to be
> useful (such as a video datagram for a frame that has already been
> displayed), the node could, and should, discard that datagram to free up
> resources for useful traffic. Sadly we had no mechanisms for measuring
> delay, either in transit or in queuing, so TTL was defined in terms of
> "hops", which is not an accurate proxy for time. But it's all we had.
>
> Part of the complexity was that the "flow control" mechanism of the
> Internet had put much of the mechanism in the users' computers' TCP
> implementations, rather than the switches which handle only IP. Without
> mechanisms in the users' computers, all a switch could do is order more
> circuits, and add more memory to the switches for queuing. Perhaps that
> led to "bufferbloat".
>
> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
> future release that would introduce a "real" form of Backpressure and
> the ability to handle different types of traffic. Meanwhile, these
> rudimentary mechanisms would provide some flow control. Hopefully the
> users' computers sending the flows would respond to the SQ backpressure,
> and switches would prioritize traffic using the TTL and TOS information.
>
> But, being way out of touch, I don't know what actually happens today.
> Perhaps the current operators and current government watchers can answer?:
I would love moe feedback about RED''s deployment at scale in particular.
>
> 1/ How do current switches exert Backpressure to reduce competing
> traffic flows? Do they still send SQs?
Some send various forms of hardware flow control, an ethernet pause
frame derivative
> 2/ How do the current and proposed government regulations treat the
> different needs of different types of traffic, e.g., "Bulk" versus
> "Interactive" versus "Normal"? Are Internet carriers permitted to treat
> traffic types differently? Are they permitted to charge different
> amounts for different types of service?
> Jack Haverty
>
> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
> > For starters I would like to apologize for cc-ing both nanog and my
> > new nn list. (I will add sender filters)
> >
> > A bit more below.
> >
> > On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc> wrote:
> >>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
> >>
> >> There is often a chicken/egg scenario here with the economics. As an eyeball network, your costs to build out and connect to Dallas are greater than your transit cost, so you do that. Totally fair.
> >>
> >> However think about it from the content side. Say I want to build into to Houston. I have to put routers in, and a bunch of cache servers, so I have capital outlay , plus opex for space, power, IX/backhaul/transit costs. That's not cheap, so there's a lot of calculations that go into it. Is there enough total eyeball traffic there to make it worth it? Is saving 8-10ms enough of a performance boost to justify the spend? What are the long term trends in that market? These answers are of course different for a company running their own CDN vs the commercial CDNs.
> >>
> >> I don't work for Google and obviously don't speak for them, but I would suspect that they're happy to eat a 8-10ms performance hit to serve from Dallas , versus the amount of capital outlay to build out there right now.
> > The three forms of traffic I care most about are voip, gaming, and
> > videoconferencing, which are rewarding to have at lower latencies.
> > When I was a kid, we had switched phone networks, and while the sound
> > quality was poorer than today, the voice latency cross-town was just
> > like "being there". Nowadays we see 500+ms latencies for this kind of
> > traffic.
> >
> > As to how to make calls across town work that well again, cost-wise, I
> > do not know, but the volume of traffic that would be better served by
> > these interconnects quite low, respective to the overall gains in
> > lower latency experiences for them.
> >
> >
> >
> >> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
> >>> I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.
> >>>
> >>> Sadly, IXP’s are seemingly turning into a pay to play game, with rates almost costing as much as transit in many cases after you factor in loop costs.
> >>>
> >>> For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.
> >>>
> >>> Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.
> >>>
> >>> So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. 😊
> >>>
> >>> See y’all in San Diego this week,
> >>> Tim
> >>>
> >>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
> >>>> This set of trendlines was very interesting. Unfortunately the data
> >>>> stops in 2015. Does anyone have more recent data?
> >>>>
> >>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
> >>>>
> >>>> I believe a gbit circuit that an ISP can resell still runs at about
> >>>> $900 - $1.4k (?) in the usa? How about elsewhere?
> >>>>
> >>>> ...
> >>>>
> >>>> I am under the impression that many IXPs remain very successful,
> >>>> states without them suffer, and I also find the concept of doing micro
> >>>> IXPs at the city level, appealing, and now achievable with cheap gear.
> >>>> Finer grained cross connects between telco and ISP and IXP would lower
> >>>> latencies across town quite hugely...
> >>>>
> >>>> PS I hear ARIN is planning on dropping the price for, and bundling 3
> >>>> BGP AS numbers at a time, as of the end of this year, also.
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> >>>> Dave Täht CSO, LibreQos
> >
> >
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos
_______________________________________________
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] NN and freedom of speech, and whether there is worthwhile good-faith discussion in that direction
2023-10-17 10:26 ` [NNagain] NN and freedom of speech, and whether there is worthwhile good-faith discussion in that direction Sebastian Moeller
@ 2023-10-17 17:26 ` Spencer Sevilla
2023-10-17 20:06 ` Jack Haverty
0 siblings, 1 reply; 38+ messages in thread
From: Spencer Sevilla @ 2023-10-17 17:26 UTC (permalink / raw)
To: Network Neutrality is back! Let´s make the technical
aspects heard this time!
[-- Attachment #1: Type: text/plain, Size: 22119 bytes --]
I know this is a small side note but I felt compelled to speak up in defense of online gaming. I’m not a gamer at all and up till a year or two ago, would have agreed with Dick’s take about benefit to “society as a whole.” However, lately I’ve started hearing some research on the benefits of groups of friends using online games to socialize together, effectively using the game primarily as a group call.
There’s also this project, where people have collected banned/censored books into a library in Minecraft. Specifically as a solution to contexts where regulators/censors ban and monitor content through other channels (websites etc) but don’t surveil Minecraft... Presumably because they share Dick’s opinion ;-) https://www.uncensoredlibrary.com/en
> On Oct 17, 2023, at 03:26, Sebastian Moeller via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>
> Hi Richard,
>
>
>> On Oct 16, 2023, at 20:04, Dick Roy <dickroy@alum.mit.edu> wrote:
>>
>> Good points all, Sebastien. How to "trade-off" a fixed capacity amongst many users is ultimately a game theoretic problem when users are allowed to make choices, which is certainly the case here. Secondly, any network that can and does generate "more traffic" (aka overhead such as ACKs NACKs and retries) reduces the capacity of the network, and ultimately can lead to the "user" capacity going to zero! Such is life in the fast lane (aka the internet).
>>
>> Lastly, on the issue of low-latency real-time experience, there are many applications that need/want such capabilities that actually have a net benefit to the individuals involved AND to society as a whole. IMO, interactive gaming is NOT one of those.
>
> [SM] Yes, gaming is one obvious example of a class of uses that work best with low latency and low jitter, not necessarily an example for a use-case worthy enough to justify the work required to increase the responsiveness of the internet. Other examples are video conferences, VoIP, in extension of both musical collaboration over the internet, and surprising to some even plain old web browsing (it often needs to first read a page before being able to follow links and load resources, and every read takes at best a single RTT). None of these are inherently beneficial or detrimental to individuals or society, but most can be used to improve the status quo... I would argue that in the last 4 years the relevance of interactive use-cases has been made quite clear to a lot of folks...
>
>
>> OK, so now you know I don't engage in these time sinks with no redeeming social value.:)
>
> [SM] Duly noted ;)
>
>> Since it is not hard to argue that just like power distribution, information exchange/dissemination is "in the public interest", the question becomes "Do we allow any and all forms of information exchange/dissemination over what is becoming something akin to a public utility?" FWIW, I don't know the answer to this question! :)
>
> [SM] This is an interesting question and one (only) tangentially related to network neutrality... it is more related to freedom of speech and limits thereof. Maybe a question for another mailing list? Certainly one meriting a topic change...
>
>
> Regards
> Sebastian
>
>>
>> Cheers,
>>
>> RR
>>
>> -----Original Message-----
>> From: Sebastian Moeller [mailto:moeller0@gmx.de]
>> Sent: Monday, October 16, 2023 10:36 AM
>> To: dickroy@alum.mit.edu; Network Neutrality is back! Let´s make the technical aspects heard this time!
>> Subject: Re: [NNagain] transit and peering costs projections
>>
>> Hi Richard,
>>
>>
>>> On Oct 16, 2023, at 19:01, Dick Roy via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>>>
>>> Just an observation: ANY type of congestion control that changes application behavior in response to congestion, or predicted congestion (ENC), begs the question "How does throttling of application information exchange rate (aka behavior) affect the user experience and will the user tolerate it?"
>>
>> [SM] The trade-off here is, if the application does not respond (or rather if no application would respond) we would end up with congestion collapse where no application would gain much of anything as the network busies itself trying to re-transmit dropped packets without making much head way... Simplistic game theory application might imply that individual applications could try to game this, and generally that seems to be true, but we have remedies for that available..
>>
>>
>>>
>>> Given any (complex and packet-switched) network topology of interconnected nodes and links, each with possible a different capacity and characteristics, such as the internet today, IMO the two fundamental questions are:
>>>
>>> 1) How can a given network be operated/configured so as to maximize aggregate throughput (i.e. achieve its theoretical capacity), and
>>> 2) What things in the network need to change to increase the throughput (aka parameters in the network with the largest Lagrange multipliers associated with them)?
>>
>> [SM] The thing is we generally know how to maximize (average) throughput, just add (over-)generous amounts of buffering, the problem is that this screws up the other important quality axis, latency... We ideally want low latency and even more low latency variance (aka jitter) AND high throughput... Turns out though that above a certain throughput threshold* many users do not seem to care all that much for more throughput as long as interactive use cases are sufficiently responsive... but high responsiveness requires low latency and low jitter... This is actually a good thing, as that means we do not necessarily aim for 100% utilization (almost requiring deep buffers and hence resulting in compromised latency) but can get away with say 80-90% where shallow buffers will do (or rather where buffer filling stays shallow, there is IMHO still value in having deep buffers for rare events that need it).
>>
>>
>>
>> *) This is not a hard physical law so the exact threshold is not set in stone, but unless one has many parallel users, something in the 20-50 Mbps range is plenty and that is only needed in the "loaded" direction, that is for pure consumers the upload can be thinner, for pure producers the download can be thinner.
>>
>>
>>>
>>> I am not an expert in this field,
>>
>> [SM] Nor am I, I come from the wet-ware side of things so not even soft- or hard-ware ;)
>>
>>
>>> however it seems to me that answers to these questions would be useful, assuming they are not yet available!
>>>
>>> Cheers,
>>>
>>> RR
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: Nnagain [mailto:nnagain-bounces@lists.bufferbloat.net] On Behalf Of rjmcmahon via Nnagain
>>> Sent: Sunday, October 15, 2023 1:39 PM
>>> To: Network Neutrality is back! Let´s make the technical aspects heard this time!
>>> Cc: rjmcmahon
>>> Subject: Re: [NNagain] transit and peering costs projections
>>>
>>> Hi Jack,
>>>
>>> Thanks again for sharing. It's very interesting to me.
>>>
>>> Today, the networks are shifting from capacity constrained to latency
>>> constrained, as can be seen in the IX discussions about how the speed of
>>> light over fiber is too slow even between Houston & Dallas.
>>>
>>> The mitigations against standing queues (which cause bloat today) are:
>>>
>>> o) Shrink the e2e bottleneck queue so it will drop packets in a flow and
>>> TCP will respond to that "signal"
>>> o) Use some form of ECN marking where the network forwarding plane
>>> ultimately informs the TCP source state machine so it can slow down or
>>> pace effectively. This can be an earlier feedback signal and, if done
>>> well, can inform the sources to avoid bottleneck queuing. There are
>>> couple of approaches with ECN. Comcast is trialing L4S now which seems
>>> interesting to me as a WiFi test & measurement engineer. The jury is
>>> still out on this and measurements are needed.
>>> o) Mitigate source side bloat via TCP_NOTSENT_LOWAT
>>>
>>> The QoS priority approach per congestion is orthogonal by my judgment as
>>> it's typically not supported e2e, many networks will bleach DSCP
>>> markings. And it's really too late by my judgment.
>>>
>>> Also, on clock sync, yes your generation did us both a service and
>>> disservice by getting rid of the PSTN TDM clock ;) So IP networking
>>> devices kinda ignored clock sync, which makes e2e one way delay (OWD)
>>> measurements impossible. Thankfully, the GPS atomic clock is now
>>> available mostly everywhere and many devices use TCXO oscillators so
>>> it's possible to get clock sync and use oscillators that can minimize
>>> drift. I pay $14 for a Rpi4 GPS chip with pulse per second as an
>>> example.
>>>
>>> It seems silly to me that clocks aren't synced to the GPS atomic clock
>>> even if by a proxy even if only for measurement and monitoring.
>>>
>>> Note: As Richard Roy will point out, there really is no such thing as
>>> synchronized clocks across geographies per general relativity - so those
>>> syncing clocks need to keep those effects in mind. I limited the iperf 2
>>> timestamps to microsecond precision in hopes avoiding those issues.
>>>
>>> Note: With WiFi, a packet drop can occur because an intermittent RF
>>> channel condition. TCP can't tell the difference between an RF drop vs a
>>> congested queue drop. That's another reason ECN markings from network
>>> devices may be better than dropped packets.
>>>
>>> Note: I've added some iperf 2 test support around pacing as that seems
>>> to be the direction the industry is heading as networks are less and
>>> less capacity strained and user quality of experience is being driven by
>>> tail latencies. One can also test with the Prague CCA for the L4S
>>> scenarios. (This is a fun project: https://www.l4sgear.com/ and fairly
>>> low cost)
>>>
>>> --fq-rate n[kmgKMG]
>>> Set a rate to be used with fair-queuing based socket-level pacing, in
>>> bytes or bits per second. Only available on platforms supporting the
>>> SO_MAX_PACING_RATE socket option. (Note: Here the suffixes indicate
>>> bytes/sec or bits/sec per use of uppercase or lowercase, respectively)
>>>
>>> --fq-rate-step n[kmgKMG]
>>> Set a step of rate to be used with fair-queuing based socket-level
>>> pacing, in bytes or bits per second. Step occurs every
>>> fq-rate-step-interval (defaults to one second)
>>>
>>> --fq-rate-step-interval n
>>> Time in seconds before stepping the fq-rate
>>>
>>> Bob
>>>
>>> PS. Iperf 2 man page https://iperf2.sourceforge.io/iperf-manpage.html
>>>
>>>> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
>>>> latency. It's not just "rewarding" to have lower latencies; high
>>>> latencies may make VGV unusable. Average (or "typical") latency as
>>>> the FCC label proposes isn't a good metric to judge usability. A path
>>>> which has high variance in latency can be unusable even if the average
>>>> is quite low. Having your voice or video or gameplay "break up"
>>>> every minute or so when latency spikes to 500 msec makes the "user
>>>> experience" intolerable.
>>>>
>>>> A few years ago, I ran some simple "ping" tests to help a friend who
>>>> was trying to use a gaming app. My data was only for one specific
>>>> path so it's anecdotal. What I saw was surprising - zero data loss,
>>>> every datagram was delivered, but occasionally a datagram would take
>>>> up to 30 seconds to arrive. I didn't have the ability to poke around
>>>> inside, but I suspected it was an experience of "bufferbloat", enabled
>>>> by the dramatic drop in price of memory over the decades.
>>>>
>>>> It's been a long time since I was involved in operating any part of
>>>> the Internet, so I don't know much about the inner workings today.
>>>> Apologies for my ignorance....
>>>>
>>>> There was a scenario in the early days of the Internet for which we
>>>> struggled to find a technical solution. Imagine some node in the
>>>> bowels of the network, with 3 connected "circuits" to some other
>>>> nodes. On two of those inputs, traffic is arriving to be forwarded
>>>> out the third circuit. The incoming flows are significantly more than
>>>> the outgoing path can accept.
>>>>
>>>> What happens? How is "backpressure" generated so that the incoming
>>>> flows are reduced to the point that the outgoing circuit can handle
>>>> the traffic?
>>>>
>>>> About 45 years ago, while we were defining TCPV4, we struggled with
>>>> this issue, but didn't find any consensus solutions. So "placeholder"
>>>> mechanisms were defined in TCPV4, to be replaced as research continued
>>>> and found a good solution.
>>>>
>>>> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
>>>> defined; it was to be sent by a switching node back toward the sender
>>>> of any datagram that had to be discarded because there wasn't any
>>>> place to put it.
>>>>
>>>> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
>>>> were defined in IP.
>>>>
>>>> TOS would allow the sender to distinguish datagrams based on their
>>>> needs. For example, we thought "Interactive" service might be needed
>>>> for VGV traffic, where timeliness of delivery was most important.
>>>> "Bulk" service might be useful for activities like file transfers,
>>>> backups, et al. "Normal" service might now mean activities like
>>>> using the Web.
>>>>
>>>> The TTL field was an attempt to inform each switching node about the
>>>> "expiration date" for a datagram. If a node somehow knew that a
>>>> particular datagram was unlikely to reach its destination in time to
>>>> be useful (such as a video datagram for a frame that has already been
>>>> displayed), the node could, and should, discard that datagram to free
>>>> up resources for useful traffic. Sadly we had no mechanisms for
>>>> measuring delay, either in transit or in queuing, so TTL was defined
>>>> in terms of "hops", which is not an accurate proxy for time. But
>>>> it's all we had.
>>>>
>>>> Part of the complexity was that the "flow control" mechanism of the
>>>> Internet had put much of the mechanism in the users' computers' TCP
>>>> implementations, rather than the switches which handle only IP.
>>>> Without mechanisms in the users' computers, all a switch could do is
>>>> order more circuits, and add more memory to the switches for queuing.
>>>> Perhaps that led to "bufferbloat".
>>>>
>>>> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
>>>> future release that would introduce a "real" form of Backpressure and
>>>> the ability to handle different types of traffic. Meanwhile, these
>>>> rudimentary mechanisms would provide some flow control. Hopefully the
>>>> users' computers sending the flows would respond to the SQ
>>>> backpressure, and switches would prioritize traffic using the TTL and
>>>> TOS information.
>>>>
>>>> But, being way out of touch, I don't know what actually happens
>>>> today. Perhaps the current operators and current government watchers
>>>> can answer?:git clone https://rjmcmahon@git.code.sf.net/p/iperf2/code
>>>> iperf2-code
>>>>
>>>> 1/ How do current switches exert Backpressure to reduce competing
>>>> traffic flows? Do they still send SQs?
>>>>
>>>> 2/ How do the current and proposed government regulations treat the
>>>> different needs of different types of traffic, e.g., "Bulk" versus
>>>> "Interactive" versus "Normal"? Are Internet carriers permitted to
>>>> treat traffic types differently? Are they permitted to charge
>>>> different amounts for different types of service?
>>>>
>>>> Jack Haverty
>>>>
>>>> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>>>>> For starters I would like to apologize for cc-ing both nanog and my
>>>>> new nn list. (I will add sender filters)
>>>>>
>>>>> A bit more below.
>>>>>
>>>>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc>
>>>>> wrote:
>>>>>>> So for now, we'll keep paying for transit to get to the others
>>>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>>>> City. Yeah, I think the former is more likely. 😊
>>>>>>
>>>>>> There is often a chicken/egg scenario here with the economics. As an
>>>>>> eyeball network, your costs to build out and connect to Dallas are
>>>>>> greater than your transit cost, so you do that. Totally fair.
>>>>>>
>>>>>> However think about it from the content side. Say I want to build
>>>>>> into to Houston. I have to put routers in, and a bunch of cache
>>>>>> servers, so I have capital outlay , plus opex for space, power,
>>>>>> IX/backhaul/transit costs. That's not cheap, so there's a lot of
>>>>>> calculations that go into it. Is there enough total eyeball traffic
>>>>>> there to make it worth it? Is saving 8-10ms enough of a performance
>>>>>> boost to justify the spend? What are the long term trends in that
>>>>>> market? These answers are of course different for a company running
>>>>>> their own CDN vs the commercial CDNs.
>>>>>>
>>>>>> I don't work for Google and obviously don't speak for them, but I
>>>>>> would suspect that they're happy to eat a 8-10ms performance hit to
>>>>>> serve from Dallas , versus the amount of capital outlay to build out
>>>>>> there right now.
>>>>> The three forms of traffic I care most about are voip, gaming, and
>>>>> videoconferencing, which are rewarding to have at lower latencies.
>>>>> When I was a kid, we had switched phone networks, and while the sound
>>>>> quality was poorer than today, the voice latency cross-town was just
>>>>> like "being there". Nowadays we see 500+ms latencies for this kind of
>>>>> traffic.
>>>>>
>>>>> As to how to make calls across town work that well again, cost-wise, I
>>>>> do not know, but the volume of traffic that would be better served by
>>>>> these interconnects quite low, respective to the overall gains in
>>>>> lower latency experiences for them.
>>>>>
>>>>>
>>>>>
>>>>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
>>>>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be
>>>>>>> had for a good bit less than $900 on the wholesale market.
>>>>>>>
>>>>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with
>>>>>>> rates almost costing as much as transit in many cases after you
>>>>>>> factor in loop costs.
>>>>>>>
>>>>>>> For example, in the Houston market (one of the largest and fastest
>>>>>>> growing regions in the US!), we do not have a major IX, so to get up
>>>>>>> to Dallas it’s several thousand for a 100g wave, plus several
>>>>>>> thousand for a 100g port on one of those major IXes. Or, a better
>>>>>>> option, we can get a 100g flat internet transit for just a little
>>>>>>> bit more.
>>>>>>>
>>>>>>> Fortunately, for us as an eyeball network, there are a good number
>>>>>>> of major content networks that are allowing for private peering in
>>>>>>> markets like Houston for just the cost of a cross connect and a QSFP
>>>>>>> if you’re in the right DC, with Google and some others being the
>>>>>>> outliers.
>>>>>>>
>>>>>>> So for now, we'll keep paying for transit to get to the others
>>>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>>>> City. Yeah, I think the former is more likely. 😊
>>>>>>>
>>>>>>> See y’all in San Diego this week,
>>>>>>> Tim
>>>>>>>
>>>>>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>>>>>>>> This set of trendlines was very interesting. Unfortunately the
>>>>>>>> data
>>>>>>>> stops in 2015. Does anyone have more recent data?
>>>>>>>>
>>>>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>>>>
>>>>>>>> I believe a gbit circuit that an ISP can resell still runs at about
>>>>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>>>>
>>>>>>>> ...
>>>>>>>>
>>>>>>>> I am under the impression that many IXPs remain very successful,
>>>>>>>> states without them suffer, and I also find the concept of doing
>>>>>>>> micro
>>>>>>>> IXPs at the city level, appealing, and now achievable with cheap
>>>>>>>> gear.
>>>>>>>> Finer grained cross connects between telco and ISP and IXP would
>>>>>>>> lower
>>>>>>>> latencies across town quite hugely...
>>>>>>>>
>>>>>>>> PS I hear ARIN is planning on dropping the price for, and bundling
>>>>>>>> 3
>>>>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Oct 30:
>>>>>>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>>>> Dave Täht CSO, LibreQos
>>>>>
>>>>>
>>>>
>>>> _______________________________________________
>>>> Nnagain mailing list
>>>> Nnagain@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>> _______________________________________________
>>> Nnagain mailing list
>>> Nnagain@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>
>>> _______________________________________________
>>> Nnagain mailing list
>>> Nnagain@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/nnagain
>>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
[-- Attachment #2: Type: text/html, Size: 39345 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [NNagain] NN and freedom of speech, and whether there is worthwhile good-faith discussion in that direction
2023-10-17 17:26 ` Spencer Sevilla
@ 2023-10-17 20:06 ` Jack Haverty
0 siblings, 0 replies; 38+ messages in thread
From: Jack Haverty @ 2023-10-17 20:06 UTC (permalink / raw)
To: nnagain
[-- Attachment #1.1.1: Type: text/plain, Size: 26937 bytes --]
When I introduced the VGV acronym, I used Gaming as the G, but that was
probably misleading. There are many "use cases" other than gaming. A
better term might be Interactive, and a VIV acronym.
Personally, I've been experimenting with "home automation", and have
found the cloud-based schemes to be intolerable. When you flip a light
switch, you expect the light to come on almost instantly, and
cloud-based mechanisms are disturbingly inconsistent, possibly due to
variability in latency. Lights turning on 20 seconds after you flip
the switch is unacceptable.
There are other such "serious" Interactive use cases not related to
Gaming. Consider, for example, a "driverless" vehicle which relies on
the ability for a human at some remote control center to "take over"
control of a vehicle when necessary. Or a power plant, or medical
device, or ...
IMHO, "Use Cases" are the foundation to setting good policy and
regulation. What should Users expect to be able to do when using the
Internet?
Back in the early days, the design of Internet mechanisms was driven by
several "use cases". Since the project was funded by the military, the
use cases were also military. But humans need similar kinds of
communications, whether they are exchanging commands to the troops or
tweets to the forums.
One such "use case" motivated the concern for latency. The scenario was
simple. An army is engaged in some action, and an online conference is
being held amongst all the players, from the generals in HQs to the
field commanders in tents or even jeeps or airplanes. The conference
supports multimedia interactions, using something like a "shared
whiteboard" that everyone can see (video was just a dream in 1980) and
change using a mouse or some kind of pointer device.
In such a scenario, the participants might be viewing a map, and
exchanging information while pointing on the map. "The enemy HQ is
here." "Our patrol is here." "Send the battalion here." "We'll order
artillery to strike here." It's very important that the pointing is
synchronized with the speech, and that the graphics and speech are
delivered intact.
Those scenarios motivated the inclusion of Internet mechanisms such as
SQ, TTL, TOS et al.
Such scenarios are used today, quite frequently, in normal everyday life
by humans in all sorts of activities. They're no longer just military
situations.
Considering "freedom of speech", what are the Use Cases that the
Internet must support? Setting aside the question of what can be said,
are there other aspects of freedom of speech?
One example - assuming people should have the right to speak
anonymously, should they also have the right to speak non-anonymously?
Should someone have the right to know that speech attributed to some
person actually was spoken by that person?
Today, regulators are dealing with this question in other venues, e.g.,
by adding, and enforcing, the notion of "verified" telephone numbers, in
response to the tsunami of robocalls, phishing, and such activities.
Should the Internet also provide a way to verify the sources of emails,
blog posts, etc.?
I've "digitally signed" this message, using the technology which I found
in my email program. Maybe that will enable you to believe that I
actually wrote this message. But I rarely receive any "signed" email,
except occasionally from members of the Technorati. Perhaps the
requirement to be anonymous is just part of the Internet now? Or is
that a Use Case that simply hasn't been developed, propagated, and
enforced yet?
What are the "Use Cases" that the Internet must support, and that
regulators should enforce?
Jack Haverty
On 10/17/23 10:26, Spencer Sevilla via Nnagain wrote:
> I know this is a small side note but I felt compelled to speak up in
> defense of online gaming. I’m not a gamer at all and up till a year or
> two ago, would have agreed with Dick’s take about benefit to “society
> as a whole.” However, lately I’ve started hearing some research on the
> benefits of groups of friends using online games to socialize
> together, effectively using the game primarily as a group call.
>
> There’s also this project, where people have collected banned/censored
> books into a library in Minecraft. Specifically as a solution to
> contexts where regulators/censors ban and monitor content through
> other channels (websites etc) but don’t surveil Minecraft...
> Presumably because they share Dick’s opinion ;-)
> https://www.uncensoredlibrary.com/en
>
>> On Oct 17, 2023, at 03:26, Sebastian Moeller via Nnagain
>> <nnagain@lists.bufferbloat.net> wrote:
>>
>> Hi Richard,
>>
>>
>>> On Oct 16, 2023, at 20:04, Dick Roy <dickroy@alum.mit.edu> wrote:
>>>
>>> Good points all, Sebastien. How to "trade-off" a fixed capacity
>>> amongst many users is ultimately a game theoretic problem when users
>>> are allowed to make choices, which is certainly the case here.
>>> Secondly, any network that can and does generate "more traffic"
>>> (aka overhead such as ACKs NACKs and retries) reduces the capacity
>>> of the network, and ultimately can lead to the "user" capacity going
>>> to zero! Such is life in the fast lane (aka the internet).
>>>
>>> Lastly, on the issue of low-latency real-time experience, there are
>>> many applications that need/want such capabilities that actually
>>> have a net benefit to the individuals involved AND to society as a
>>> whole. IMO, interactive gaming is NOT one of those.
>>
>> [SM] Yes, gaming is one obvious example of a class of uses that work
>> best with low latency and low jitter, not necessarily an example for
>> a use-case worthy enough to justify the work required to increase the
>> responsiveness of the internet. Other examples are video conferences,
>> VoIP, in extension of both musical collaboration over the internet,
>> and surprising to some even plain old web browsing (it often needs to
>> first read a page before being able to follow links and load
>> resources, and every read takes at best a single RTT). None of these
>> are inherently beneficial or detrimental to individuals or society,
>> but most can be used to improve the status quo... I would argue that
>> in the last 4 years the relevance of interactive use-cases has been
>> made quite clear to a lot of folks...
>>
>>
>>> OK, so now you know I don't engage in these time sinks with no
>>> redeeming social value.:)
>>
>> [SM] Duly noted ;)
>>
>>> Since it is not hard to argue that just like power distribution,
>>> information exchange/dissemination is "in the public interest", the
>>> question becomes "Do we allow any and all forms of information
>>> exchange/dissemination over what is becoming something akin to a
>>> public utility?" FWIW, I don't know the answer to this question! :)
>>
>> [SM] This is an interesting question and one (only) tangentially
>> related to network neutrality... it is more related to freedom of
>> speech and limits thereof. Maybe a question for another mailing list?
>> Certainly one meriting a topic change...
>>
>>
>> Regards
>> Sebastian
>>
>>>
>>> Cheers,
>>>
>>> RR
>>>
>>> -----Original Message-----
>>> From: Sebastian Moeller [mailto:moeller0@gmx.de]
>>> Sent: Monday, October 16, 2023 10:36 AM
>>> To: dickroy@alum.mit.edu; Network Neutrality is back! Let´s make the
>>> technical aspects heard this time!
>>> Subject: Re: [NNagain] transit and peering costs projections
>>>
>>> Hi Richard,
>>>
>>>
>>>> On Oct 16, 2023, at 19:01, Dick Roy via Nnagain
>>>> <nnagain@lists.bufferbloat.net> wrote:
>>>>
>>>> Just an observation: ANY type of congestion control that changes
>>>> application behavior in response to congestion, or predicted
>>>> congestion (ENC), begs the question "How does throttling of
>>>> application information exchange rate (aka behavior) affect the
>>>> user experience and will the user tolerate it?"
>>>
>>> [SM] The trade-off here is, if the application does not respond (or
>>> rather if no application would respond) we would end up with
>>> congestion collapse where no application would gain much of anything
>>> as the network busies itself trying to re-transmit dropped packets
>>> without making much head way... Simplistic game theory application
>>> might imply that individual applications could try to game this, and
>>> generally that seems to be true, but we have remedies for that
>>> available..
>>>
>>>
>>>>
>>>> Given any (complex and packet-switched) network topology of
>>>> interconnected nodes and links, each with possible a different
>>>> capacity and characteristics, such as the internet today, IMO the
>>>> two fundamental questions are:
>>>>
>>>> 1) How can a given network be operated/configured so as to maximize
>>>> aggregate throughput (i.e. achieve its theoretical capacity), and
>>>> 2) What things in the network need to change to increase the
>>>> throughput (aka parameters in the network with the largest Lagrange
>>>> multipliers associated with them)?
>>>
>>> [SM] The thing is we generally know how to maximize (average)
>>> throughput, just add (over-)generous amounts of buffering, the
>>> problem is that this screws up the other important quality axis,
>>> latency... We ideally want low latency and even more low latency
>>> variance (aka jitter) AND high throughput... Turns out though that
>>> above a certain throughput threshold* many users do not seem to care
>>> all that much for more throughput as long as interactive use cases
>>> are sufficiently responsive... but high responsiveness requires low
>>> latency and low jitter... This is actually a good thing, as that
>>> means we do not necessarily aim for 100% utilization (almost
>>> requiring deep buffers and hence resulting in compromised latency)
>>> but can get away with say 80-90% where shallow buffers will do (or
>>> rather where buffer filling stays shallow, there is IMHO still value
>>> in having deep buffers for rare events that need it).
>>>
>>>
>>>
>>> *) This is not a hard physical law so the exact threshold is not set
>>> in stone, but unless one has many parallel users, something in the
>>> 20-50 Mbps range is plenty and that is only needed in the "loaded"
>>> direction, that is for pure consumers the upload can be thinner, for
>>> pure producers the download can be thinner.
>>>
>>>
>>>>
>>>> I am not an expert in this field,
>>>
>>> [SM] Nor am I, I come from the wet-ware side of things so not
>>> even soft- or hard-ware ;)
>>>
>>>
>>>> however it seems to me that answers to these questions would be
>>>> useful, assuming they are not yet available!
>>>>
>>>> Cheers,
>>>>
>>>> RR
>>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Nnagain [mailto:nnagain-bounces@lists.bufferbloat.net] On
>>>> Behalf Of rjmcmahon via Nnagain
>>>> Sent: Sunday, October 15, 2023 1:39 PM
>>>> To: Network Neutrality is back! Let´s make the technical aspects
>>>> heard this time!
>>>> Cc: rjmcmahon
>>>> Subject: Re: [NNagain] transit and peering costs projections
>>>>
>>>> Hi Jack,
>>>>
>>>> Thanks again for sharing. It's very interesting to me.
>>>>
>>>> Today, the networks are shifting from capacity constrained to latency
>>>> constrained, as can be seen in the IX discussions about how the
>>>> speed of
>>>> light over fiber is too slow even between Houston & Dallas.
>>>>
>>>> The mitigations against standing queues (which cause bloat today) are:
>>>>
>>>> o) Shrink the e2e bottleneck queue so it will drop packets in a
>>>> flow and
>>>> TCP will respond to that "signal"
>>>> o) Use some form of ECN marking where the network forwarding plane
>>>> ultimately informs the TCP source state machine so it can slow down or
>>>> pace effectively. This can be an earlier feedback signal and, if done
>>>> well, can inform the sources to avoid bottleneck queuing. There are
>>>> couple of approaches with ECN. Comcast is trialing L4S now which seems
>>>> interesting to me as a WiFi test & measurement engineer. The jury is
>>>> still out on this and measurements are needed.
>>>> o) Mitigate source side bloat via TCP_NOTSENT_LOWAT
>>>>
>>>> The QoS priority approach per congestion is orthogonal by my
>>>> judgment as
>>>> it's typically not supported e2e, many networks will bleach DSCP
>>>> markings. And it's really too late by my judgment.
>>>>
>>>> Also, on clock sync, yes your generation did us both a service and
>>>> disservice by getting rid of the PSTN TDM clock ;) So IP networking
>>>> devices kinda ignored clock sync, which makes e2e one way delay (OWD)
>>>> measurements impossible. Thankfully, the GPS atomic clock is now
>>>> available mostly everywhere and many devices use TCXO oscillators so
>>>> it's possible to get clock sync and use oscillators that can minimize
>>>> drift. I pay $14 for a Rpi4 GPS chip with pulse per second as an
>>>> example.
>>>>
>>>> It seems silly to me that clocks aren't synced to the GPS atomic clock
>>>> even if by a proxy even if only for measurement and monitoring.
>>>>
>>>> Note: As Richard Roy will point out, there really is no such thing as
>>>> synchronized clocks across geographies per general relativity - so
>>>> those
>>>> syncing clocks need to keep those effects in mind. I limited the
>>>> iperf 2
>>>> timestamps to microsecond precision in hopes avoiding those issues.
>>>>
>>>> Note: With WiFi, a packet drop can occur because an intermittent RF
>>>> channel condition. TCP can't tell the difference between an RF drop
>>>> vs a
>>>> congested queue drop. That's another reason ECN markings from network
>>>> devices may be better than dropped packets.
>>>>
>>>> Note: I've added some iperf 2 test support around pacing as that seems
>>>> to be the direction the industry is heading as networks are less and
>>>> less capacity strained and user quality of experience is being
>>>> driven by
>>>> tail latencies. One can also test with the Prague CCA for the L4S
>>>> scenarios. (This is a fun project: https://www.l4sgear.com/ and fairly
>>>> low cost)
>>>>
>>>> --fq-rate n[kmgKMG]
>>>> Set a rate to be used with fair-queuing based socket-level pacing, in
>>>> bytes or bits per second. Only available on platforms supporting the
>>>> SO_MAX_PACING_RATE socket option. (Note: Here the suffixes indicate
>>>> bytes/sec or bits/sec per use of uppercase or lowercase, respectively)
>>>>
>>>> --fq-rate-step n[kmgKMG]
>>>> Set a step of rate to be used with fair-queuing based socket-level
>>>> pacing, in bytes or bits per second. Step occurs every
>>>> fq-rate-step-interval (defaults to one second)
>>>>
>>>> --fq-rate-step-interval n
>>>> Time in seconds before stepping the fq-rate
>>>>
>>>> Bob
>>>>
>>>> PS. Iperf 2 man page https://iperf2.sourceforge.io/iperf-manpage.html
>>>>
>>>>> The "VGV User" (Voice, Gaming, Videoconferencing) cares a lot about
>>>>> latency. It's not just "rewarding" to have lower latencies; high
>>>>> latencies may make VGV unusable. Average (or "typical") latency as
>>>>> the FCC label proposes isn't a good metric to judge usability. A path
>>>>> which has high variance in latency can be unusable even if the average
>>>>> is quite low. Having your voice or video or gameplay "break up"
>>>>> every minute or so when latency spikes to 500 msec makes the "user
>>>>> experience" intolerable.
>>>>>
>>>>> A few years ago, I ran some simple "ping" tests to help a friend who
>>>>> was trying to use a gaming app. My data was only for one specific
>>>>> path so it's anecdotal. What I saw was surprising - zero data loss,
>>>>> every datagram was delivered, but occasionally a datagram would take
>>>>> up to 30 seconds to arrive. I didn't have the ability to poke around
>>>>> inside, but I suspected it was an experience of "bufferbloat", enabled
>>>>> by the dramatic drop in price of memory over the decades.
>>>>>
>>>>> It's been a long time since I was involved in operating any part of
>>>>> the Internet, so I don't know much about the inner workings today.
>>>>> Apologies for my ignorance....
>>>>>
>>>>> There was a scenario in the early days of the Internet for which we
>>>>> struggled to find a technical solution. Imagine some node in the
>>>>> bowels of the network, with 3 connected "circuits" to some other
>>>>> nodes. On two of those inputs, traffic is arriving to be forwarded
>>>>> out the third circuit. The incoming flows are significantly more than
>>>>> the outgoing path can accept.
>>>>>
>>>>> What happens? How is "backpressure" generated so that the incoming
>>>>> flows are reduced to the point that the outgoing circuit can handle
>>>>> the traffic?
>>>>>
>>>>> About 45 years ago, while we were defining TCPV4, we struggled with
>>>>> this issue, but didn't find any consensus solutions. So "placeholder"
>>>>> mechanisms were defined in TCPV4, to be replaced as research continued
>>>>> and found a good solution.
>>>>>
>>>>> In that "placeholder" scheme, the "Source Quench" (SQ) IP message was
>>>>> defined; it was to be sent by a switching node back toward the sender
>>>>> of any datagram that had to be discarded because there wasn't any
>>>>> place to put it.
>>>>>
>>>>> In addition, the TOS (Type Of Service) and TTL (Time To Live) fields
>>>>> were defined in IP.
>>>>>
>>>>> TOS would allow the sender to distinguish datagrams based on their
>>>>> needs. For example, we thought "Interactive" service might be needed
>>>>> for VGV traffic, where timeliness of delivery was most important.
>>>>> "Bulk" service might be useful for activities like file transfers,
>>>>> backups, et al. "Normal" service might now mean activities like
>>>>> using the Web.
>>>>>
>>>>> The TTL field was an attempt to inform each switching node about the
>>>>> "expiration date" for a datagram. If a node somehow knew that a
>>>>> particular datagram was unlikely to reach its destination in time to
>>>>> be useful (such as a video datagram for a frame that has already been
>>>>> displayed), the node could, and should, discard that datagram to free
>>>>> up resources for useful traffic. Sadly we had no mechanisms for
>>>>> measuring delay, either in transit or in queuing, so TTL was defined
>>>>> in terms of "hops", which is not an accurate proxy for time. But
>>>>> it's all we had.
>>>>>
>>>>> Part of the complexity was that the "flow control" mechanism of the
>>>>> Internet had put much of the mechanism in the users' computers' TCP
>>>>> implementations, rather than the switches which handle only IP.
>>>>> Without mechanisms in the users' computers, all a switch could do is
>>>>> order more circuits, and add more memory to the switches for queuing.
>>>>> Perhaps that led to "bufferbloat".
>>>>>
>>>>> So TOS, SQ, and TTL were all placeholders, for some mechanism in a
>>>>> future release that would introduce a "real" form of Backpressure and
>>>>> the ability to handle different types of traffic. Meanwhile, these
>>>>> rudimentary mechanisms would provide some flow control. Hopefully the
>>>>> users' computers sending the flows would respond to the SQ
>>>>> backpressure, and switches would prioritize traffic using the TTL and
>>>>> TOS information.
>>>>>
>>>>> But, being way out of touch, I don't know what actually happens
>>>>> today. Perhaps the current operators and current government watchers
>>>>> can answer?:git clone https://rjmcmahon@git.code.sf.net/p/iperf2/code
>>>>> iperf2-code
>>>>>
>>>>> 1/ How do current switches exert Backpressure to reduce competing
>>>>> traffic flows? Do they still send SQs?
>>>>>
>>>>> 2/ How do the current and proposed government regulations treat the
>>>>> different needs of different types of traffic, e.g., "Bulk" versus
>>>>> "Interactive" versus "Normal"? Are Internet carriers permitted to
>>>>> treat traffic types differently? Are they permitted to charge
>>>>> different amounts for different types of service?
>>>>>
>>>>> Jack Haverty
>>>>>
>>>>> On 10/15/23 09:45, Dave Taht via Nnagain wrote:
>>>>>> For starters I would like to apologize for cc-ing both nanog and my
>>>>>> new nn list. (I will add sender filters)
>>>>>>
>>>>>> A bit more below.
>>>>>>
>>>>>> On Sun, Oct 15, 2023 at 9:32 AM Tom Beecher <beecher@beecher.cc>
>>>>>> wrote:
>>>>>>>> So for now, we'll keep paying for transit to get to the others
>>>>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>>>>> City. Yeah, I think the former is more likely. 😊
>>>>>>>
>>>>>>> There is often a chicken/egg scenario here with the economics. As an
>>>>>>> eyeball network, your costs to build out and connect to Dallas are
>>>>>>> greater than your transit cost, so you do that. Totally fair.
>>>>>>>
>>>>>>> However think about it from the content side. Say I want to build
>>>>>>> into to Houston. I have to put routers in, and a bunch of cache
>>>>>>> servers, so I have capital outlay , plus opex for space, power,
>>>>>>> IX/backhaul/transit costs. That's not cheap, so there's a lot of
>>>>>>> calculations that go into it. Is there enough total eyeball traffic
>>>>>>> there to make it worth it? Is saving 8-10ms enough of a performance
>>>>>>> boost to justify the spend? What are the long term trends in that
>>>>>>> market? These answers are of course different for a company running
>>>>>>> their own CDN vs the commercial CDNs.
>>>>>>>
>>>>>>> I don't work for Google and obviously don't speak for them, but I
>>>>>>> would suspect that they're happy to eat a 8-10ms performance hit to
>>>>>>> serve from Dallas , versus the amount of capital outlay to build out
>>>>>>> there right now.
>>>>>> The three forms of traffic I care most about are voip, gaming, and
>>>>>> videoconferencing, which are rewarding to have at lower latencies.
>>>>>> When I was a kid, we had switched phone networks, and while the sound
>>>>>> quality was poorer than today, the voice latency cross-town was just
>>>>>> like "being there". Nowadays we see 500+ms latencies for this kind of
>>>>>> traffic.
>>>>>>
>>>>>> As to how to make calls across town work that well again,
>>>>>> cost-wise, I
>>>>>> do not know, but the volume of traffic that would be better served by
>>>>>> these interconnects quite low, respective to the overall gains in
>>>>>> lower latency experiences for them.
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On Sat, Oct 14, 2023 at 11:47 PM Tim Burke <tim@mid.net> wrote:
>>>>>>>> I would say that a 1Gbit IP transit in a carrier neutral DC can be
>>>>>>>> had for a good bit less than $900 on the wholesale market.
>>>>>>>>
>>>>>>>> Sadly, IXP’s are seemingly turning into a pay to play game, with
>>>>>>>> rates almost costing as much as transit in many cases after you
>>>>>>>> factor in loop costs.
>>>>>>>>
>>>>>>>> For example, in the Houston market (one of the largest and fastest
>>>>>>>> growing regions in the US!), we do not have a major IX, so to
>>>>>>>> get up
>>>>>>>> to Dallas it’s several thousand for a 100g wave, plus several
>>>>>>>> thousand for a 100g port on one of those major IXes. Or, a better
>>>>>>>> option, we can get a 100g flat internet transit for just a little
>>>>>>>> bit more.
>>>>>>>>
>>>>>>>> Fortunately, for us as an eyeball network, there are a good number
>>>>>>>> of major content networks that are allowing for private peering in
>>>>>>>> markets like Houston for just the cost of a cross connect and a
>>>>>>>> QSFP
>>>>>>>> if you’re in the right DC, with Google and some others being the
>>>>>>>> outliers.
>>>>>>>>
>>>>>>>> So for now, we'll keep paying for transit to get to the others
>>>>>>>> (since it’s about as much as transporting IXP from Dallas), and
>>>>>>>> hoping someone at Google finally sees Houston as more than a third
>>>>>>>> rate city hanging off of Dallas. Or… someone finally brings a
>>>>>>>> worthwhile IX to Houston that gets us more than peering to Kansas
>>>>>>>> City. Yeah, I think the former is more likely. 😊
>>>>>>>>
>>>>>>>> See y’all in San Diego this week,
>>>>>>>> Tim
>>>>>>>>
>>>>>>>> On Oct 14, 2023, at 18:04, Dave Taht <dave.taht@gmail.com> wrote:
>>>>>>>>> This set of trendlines was very interesting. Unfortunately the
>>>>>>>>> data
>>>>>>>>> stops in 2015. Does anyone have more recent data?
>>>>>>>>>
>>>>>>>>> https://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
>>>>>>>>>
>>>>>>>>> I believe a gbit circuit that an ISP can resell still runs at
>>>>>>>>> about
>>>>>>>>> $900 - $1.4k (?) in the usa? How about elsewhere?
>>>>>>>>>
>>>>>>>>> ...
>>>>>>>>>
>>>>>>>>> I am under the impression that many IXPs remain very successful,
>>>>>>>>> states without them suffer, and I also find the concept of doing
>>>>>>>>> micro
>>>>>>>>> IXPs at the city level, appealing, and now achievable with cheap
>>>>>>>>> gear.
>>>>>>>>> Finer grained cross connects between telco and ISP and IXP would
>>>>>>>>> lower
>>>>>>>>> latencies across town quite hugely...
>>>>>>>>>
>>>>>>>>> PS I hear ARIN is planning on dropping the price for, and bundling
>>>>>>>>> 3
>>>>>>>>> BGP AS numbers at a time, as of the end of this year, also.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Oct 30:
>>>>>>>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>>>>> Dave Täht CSO, LibreQos
>>>>>>
>>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Nnagain mailing list
>>>>> Nnagain@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>> _______________________________________________
>>>> Nnagain mailing list
>>>> Nnagain@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>>
>>>> _______________________________________________
>>>> Nnagain mailing list
>>>> Nnagain@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>
>>
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
[-- Attachment #1.1.2: Type: text/html, Size: 63408 bytes --]
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 665 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
end of thread, other threads:[~2023-10-17 20:06 UTC | newest]
Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-14 23:01 [NNagain] transit and peering costs projections Dave Taht
2023-10-15 0:25 ` Dave Cohen
2023-10-15 3:41 ` le berger des photons
2023-10-15 3:45 ` Tim Burke
2023-10-15 4:03 ` Ryan Hamel
2023-10-15 4:12 ` Tim Burke
2023-10-15 4:19 ` Dave Taht
2023-10-15 4:26 ` [NNagain] [LibreQoS] " dan
2023-10-15 7:54 ` [NNagain] " Bill Woodcock
2023-10-15 13:41 ` Mike Hammett
2023-10-15 14:19 ` Tim Burke
2023-10-15 16:44 ` [NNagain] [LibreQoS] " dan
2023-10-15 16:32 ` [NNagain] " Tom Beecher
2023-10-15 16:45 ` Dave Taht
2023-10-15 19:59 ` Jack Haverty
2023-10-15 20:39 ` rjmcmahon
2023-10-15 23:44 ` Karl Auerbach
2023-10-16 17:01 ` Dick Roy
2023-10-16 17:35 ` Jack Haverty
2023-10-16 17:36 ` Sebastian Moeller
2023-10-16 18:04 ` Dick Roy
2023-10-17 10:26 ` [NNagain] NN and freedom of speech, and whether there is worthwhile good-faith discussion in that direction Sebastian Moeller
2023-10-17 17:26 ` Spencer Sevilla
2023-10-17 20:06 ` Jack Haverty
2023-10-15 20:45 ` [NNagain] transit and peering costs projections Sebastian Moeller
2023-10-16 1:39 ` [NNagain] The history of congestion control on the internet Dave Taht
2023-10-16 6:30 ` Jack Haverty
2023-10-16 17:21 ` Spencer Sevilla
2023-10-16 17:37 ` Robert McMahon
2023-10-17 15:34 ` Dick Roy
2023-10-16 3:33 ` [NNagain] transit and peering costs projections Matthew Petach
2023-10-15 19:19 ` Tim Burke
2023-10-15 7:40 ` Bill Woodcock
2023-10-15 12:40 ` [NNagain] [LibreQoS] " Jim Troutman
2023-10-15 14:12 ` Tim Burke
2023-10-15 13:38 ` [NNagain] " Mike Hammett
2023-10-15 13:44 ` Mike Hammett
[not found] ` <20231015092253.67e4546e@dataplane.org>
2023-10-15 14:48 ` [NNagain] Fwd: " Dave Taht
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox