* [Cerowrt-devel] viability of the data center in the internet of the future
@ 2014-06-28 4:58 Dave Taht
2014-06-28 12:31 ` David P. Reed
2014-06-29 0:50 ` [Cerowrt-devel] [Bloat] " Fred Baker (fred)
0 siblings, 2 replies; 6+ messages in thread
From: Dave Taht @ 2014-06-28 4:58 UTC (permalink / raw)
To: David P. Reed; +Cc: cerowrt-devel, bloat
I didn't care for my name in the subject line in the first place,
although it did inspire me to do some creative venting elsewhere, and
now here. And this is still way off topic for the bloat list...
One of the points in the wired article that kicked this thread off was
this picture of what the internet is starting to look like:
http://www.wired.com/wp-content/uploads/2014/06/net_neutral.jpg.jpeg
I don't want it to look like that. I worked pretty hard to defuse the
"fast vs slow" lane debate re peering because it was so inaccurate,
and it does look like it has died down somewhat, but
that doesn't mean I like the concentration of services that is going on.
I want the "backbone" to extend all the way to the edge.
I want the edge to be all connected together, so in the unlikely event
comcast goes out of business tomorrow, I can get re-routed 1 hop out
from my house through verizon, or joe's mom and pop fiber shop, or
wherever. I want a network that can survive multiple backhoe events,
katrinas, and nuclear wars, all at the same time. I'd like to be able
to get my own email,
and do my own phone and videoconferencing calls with nobody in the
middle, not even for call setup, and be able to host my own
services on my own hardware, with some level of hope that anything
secret or proprietary stays within my premises. I want a static ip
address range, and
control over my own dns.
I don't mind at all sharing some storage for the inevitable
advertising if the cdn's co-located inside my business are also
caching useful bits of javascript, etc, just so I can save on latency
on wiping the resulting cruft from my eyeballs. I want useful
applications, running, directly, on my own devices, with a minimum
amount of connectivity to the outside world required to run them. I
want the 83 items in my netflix queue already downloaded, overnight,
so I can pick and choose what to see without ever having a "Buffering"
event. I want my own copy of wikipedia, and a search engine that
doesn't share everything you are looking for with the universe.
I want the legal protections, well established for things inside your
home, that are clearly not established in data centers.
I'd like it if the software we had was robust, reliable, and secure
enough to do that. I'd like it if it were easy to make offsite
backups, as well as mirror services with friends and co-authors.
And I'd like my servers to run on a couple watts, at most, and not
require special heating, or cooling.
And I'd like (another) beer and some popcorn. Tonight's movie:
https://plus.google.com/u/0/107942175615993706558/posts/VJKvfvKU9pi
On Fri, Jun 27, 2014 at 9:28 PM, Dave Taht <dave.taht@gmail.com> wrote:
> On Fri, Jun 27, 2014 at 9:06 PM, David P. Reed <dpreed@reed.com> wrote:
>> Maybe I am misunderstanding something... it just took my MacBook Pro doing
>> an rsync to copy a TB of data from a small NAS at work yesterday to get
>> about 700 Mb/sec on a GigE office network for hours.
>>
>> I had to do that in our Santa Clara office rather than from home outside
>> Boston, which is where I work 90% of the time.
>>
>> That's one little computer and one user...
>
> On a daily basis, the bufferbloat websites transfer far, far less than gigE
>
> IF the redmine portion of the site wasn't so cpu expensive, I could
> use something
> other than the hefty boxes they are on. Similarly snapon's cpu is mostly
> used for builds, the file transfer role could be done by something else
> easily. I'd like to switch it over to do that one day.
>
>> What does my MacBook Pro draw doing that? 80 Watts?
>
> I love the "kill-a-watt" products. I use them everywhere. (while I'm
> pimping stuff I like, digilogger's power switches are a lifesaver also -
> staging boots for devices that draw a lot of power in a tiny lab that
> can only draw 350 watts before becoming a fire hazard)
>
> Your NAS probably ate less than 16 watts, more if you have more than one drive.
>
> My nucs draw 18 watts and can transfer at GigE off a flash disk
> without raising a sweat.
> (at least some of your overhead is in the rsync protocol, which is
> overly chatty)
>
> Several tiny arm boards can all do gigE at line rate, notably stuff built around
> marvell and cavium's chipset(s), and they do it at under 2 watts. Most support
> 64GB mini-sd cards (with pretty lousy transfer rates).
>
> Pretty sure (haven't booted it yet) the parallella (which is smaller
> than a drive),
> can do it in under 2 watts, and if it doesn't do gigE now, it'll do
> it after I get through
> with it - but it lacks a sata port, and usb is only 2.0, so it might
> not drive gigE
> from a nas perspective. (It kind of bugs me that most of the tiny boards are in
> the altoids form factor, rather than the 2.5 inch drive form factor)
>
> So I go back to my original point: once you have fiber to the business,
> for most purposes in a small business or startup or home - who needs
> to co-lo in a data center?
> You can have a tiny wart on the wall do most of the job. And that's
> today. In another
> year or so we'll be over some more tipping points.
>
> One thing that does bug me is most UPSes are optimized to deliver a large
> load over a short time; a UPS capable of driving 5 watts for, say, 3 days is
> kind of rare.
>
>> On Jun 27, 2014, David Lang wrote:
>>>
>>> On Tue, 24 Jun 2014, Michael Richardson wrote:
>>>
>>>> Rick Jones wrote:
>>>>>
>>>>> Perhaps, but where does having gigabit fibre to a business imply the
>>>>> business
>>>>> has the space, power, and cooling to host all the servers it might
>>>>> need/wish
>>>>> to have?
>>>>
>>>>
>>>> That's a secondary decision.
>>>> Given roof space, solar panels and/or snow-outside, maybe the answer is
>>>> that
>>>> I regularly have 2 out of 3 of those available in a decentralized way.
>>>
>>>
>>> given the amount of processing capacity that you can get today in a
>>> passively
>>> cooled system, you can do quite a bit of serving from a small amount of
>>> space
>>> and power.
>>>
>>> The days when it took rooms of Sun boxes to saturate a Gb line are long
>>> gone,
>>> you can do that with just a handful of machines.
>>>
>>> David Lang
>>> ________________________________
>>>
>>> Cerowrt-devel mailing list
>>> Cerowrt-devel@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>
>>
>> -- Sent from my Android device with K-@ Mail. Please excuse my brevity.
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>
>
> --
> Dave Täht
>
> NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article
--
Dave Täht
NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [Cerowrt-devel] viability of the data center in the internet of the future
2014-06-28 4:58 [Cerowrt-devel] viability of the data center in the internet of the future Dave Taht
@ 2014-06-28 12:31 ` David P. Reed
2014-06-29 0:50 ` [Cerowrt-devel] [Bloat] " Fred Baker (fred)
1 sibling, 0 replies; 6+ messages in thread
From: David P. Reed @ 2014-06-28 12:31 UTC (permalink / raw)
To: Dave Taht; +Cc: cerowrt-devel, bloat
[-- Attachment #1: Type: text/plain, Size: 9064 bytes --]
I hope it is obvious I am in violent agreement.
The folks who think a centralized structure is more efficient or more practical just have not thought it through.
The opposite is true. Sadly people's intuitions are trained to ignore evidence and sound argument....
So we have a huge population of engineers who go along without thinking because they honestly think centralized systems are better for some important reason they never question.
This means that the non engineering public has no chance at understanding.
Whenever I have looked at why centralized designs are 'needed' it has turned out to be the felt need for 'control' of something by one small group or individual.
Sometimes it is the designer. Shame on him/her.
Sometimes it is the builder. Ditto.
Sometimes it is the operator. Do we need one operator?
Sometimes it is the owner. Don't the users own their uses and purposes?
Sometimes it is the fearful. I sympathize. But they don't really want to cede collective control to a small group they can't trust or even understand. Or maybe they do...
Sometimes it is the wannabe sovereign.
The weakness of the argument is that control need not be centralized. In fact centralized control is inefficient and unnecessary.
I've devoted much of my work to that last sentence.
For example... In Croquet we (me and 3 others) demonstrated that it's pretty easy to build a real-time shared multimedia virtual world that works without a single central server. It really worked and scaled linearly, with users adding their own computer when they entered the world and removing it when they got disconnected. (Just pulling the plug was fine.)
Same with decentralized wireless ... no need for centralized spectrum allocation... linear growth of capacity with transceiver participation coming from the actual physics of the real propagation environment.
Equating centralized control with efficiency or necessary management is a false intuition.
Always be skeptical of the claim that centralized control is good. Cui bono?
-- Sent from my Android device with K-@ Mail. Please excuse my brevity.
[-- Attachment #2: Type: text/html, Size: 13456 bytes --]
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [Cerowrt-devel] [Bloat] viability of the data center in the internet of the future
2014-06-28 4:58 [Cerowrt-devel] viability of the data center in the internet of the future Dave Taht
2014-06-28 12:31 ` David P. Reed
@ 2014-06-29 0:50 ` Fred Baker (fred)
2014-07-01 8:37 ` Dave Taht
1 sibling, 1 reply; 6+ messages in thread
From: Fred Baker (fred) @ 2014-06-29 0:50 UTC (permalink / raw)
To: Dave Taht; +Cc: cerowrt-devel, bloat
[-- Attachment #1: Type: text/plain, Size: 5576 bytes --]
On Jun 27, 2014, at 9:58 PM, Dave Taht <dave.taht@gmail.com> wrote:
> One of the points in the wired article that kicked this thread off was
> this picture of what the internet is starting to look like:
>
> http://www.wired.com/wp-content/uploads/2014/06/net_neutral.jpg.jpeg
>
> I don't want it to look like that.
Well, I think trying to describe the Internet in those terms is a lot like half a dozen blind men describing an elephant. The picture makes a point, and a good one. But it’s also wildly inaccurate. It depends on which blind man you ask. And they’ll all be right, from their perspective.
There is in fact a backbone. Once upon a time, it was run by a single company, BBN. Then it was more like five, and then ... and now it’s 169. There are, if the BGP report (http://seclists.org/nanog/2014/Jun/495) is to be believed, 47136 ASNs in the system, of which 35929 don’t show up as transit for anyone and are therefore presumably edge networks and potentially multihomed, and of those 16325 only announce a single prefix. Of the 6101 ASNs that show up as transit, 169 ONLY show up as transit. Yes, the core is 169 ASNs, and it’s not a little dot off to the side. If you want to know where it is, do a traceroute (tracert on windows).
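For the curious, the classification behind those numbers is mechanical: on each AS path the last AS is the origin of the prefix, and every AS in front of it is acting as transit. A minimal sketch in Python of that bookkeeping, assuming the AS paths have already been extracted from a table dump (this is only an illustration of the idea, not the exact pipeline the BGP report uses, and it ignores AS-path prepending):

from collections import defaultdict

def classify_asns(as_paths):
    """as_paths: iterable of AS paths, each a list of ASNs, origin AS last."""
    roles = defaultdict(set)
    for path in as_paths:
        if not path:
            continue
        roles[path[-1]].add("origin")      # the last AS originates the prefix
        for asn in path[:-1]:
            roles[asn].add("transit")      # everything in front of it is transit
    origin_only = {a for a, r in roles.items() if r == {"origin"}}
    transit_only = {a for a, r in roles.items() if r == {"transit"}}
    both = {a for a, r in roles.items() if r == {"origin", "transit"}}
    return origin_only, transit_only, both

# toy data: 65001 is transit-only, 65002 and 65003 are edge/origin-only
print(classify_asns([[65001, 65002], [65001, 65003]]))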
I’ll give you two, one through Cisco and one through my residential provider.
traceroute to reed.com (67.223.249.82), 64 hops max, 52 byte packets
1 sjc-fred-881.cisco.com (10.19.64.113) 1.289 ms 12.000 ms 1.130 ms
2 sjce-access-hub1-tun10.cisco.com (10.27.128.1) 47.661 ms 45.281 ms 42.995 ms
3 ...
11 sjck-isp-gw1-ten1-1-0.cisco.com (128.107.239.217) 44.972 ms 45.094 ms 43.670 ms
12 tengige0-2-0-0.gw5.scl2.alter.net (152.179.99.153) 48.806 ms 49.338 ms 47.975 ms
13 0.xe-9-1-0.br1.sjc7.alter.net (152.63.51.101) 43.998 ms 45.595 ms 49.838 ms
14 206.111.6.121.ptr.us.xo.net (206.111.6.121) 52.110 ms 45.492 ms 47.373 ms
15 207.88.14.225.ptr.us.xo.net (207.88.14.225) 126.696 ms 124.374 ms 127.983 ms
16 te-2-0-0.rar3.washington-dc.us.xo.net (207.88.12.70) 127.639 ms 132.965 ms 131.415 ms
17 te-3-0-0.rar3.nyc-ny.us.xo.net (207.88.12.73) 129.747 ms 125.680 ms 123.907 ms
18 ae0d0.mcr1.cambridge-ma.us.xo.net (216.156.0.26) 125.009 ms 123.152 ms 126.992 ms
19 ip65-47-145-6.z145-47-65.customer.algx.net (65.47.145.6) 118.244 ms 118.024 ms 117.983 ms
20 * * *
21 209.59.211.175 (209.59.211.175) 119.378 ms * 122.057 ms
22 reed.com (67.223.249.82) 120.051 ms 120.146 ms 118.672 ms
traceroute to reed.com (67.223.249.82), 64 hops max, 52 byte packets
1 10.0.2.1 (10.0.2.1) 1.728 ms 1.140 ms 1.289 ms
2 10.6.44.1 (10.6.44.1) 122.289 ms 126.330 ms 14.782 ms
3 ip68-4-12-20.oc.oc.cox.net (68.4.12.20) 13.208 ms 12.667 ms 8.941 ms
4 ip68-4-11-96.oc.oc.cox.net (68.4.11.96) 17.025 ms 13.911 ms 13.835 ms
5 langbprj01-ae1.rd.la.cox.net (68.1.1.13) 131.855 ms 14.677 ms 129.860 ms
6 68.105.30.150 (68.105.30.150) 16.750 ms 31.627 ms 130.134 ms
7 ae11.cr2.lax112.us.above.net (64.125.21.173) 40.754 ms 31.873 ms 130.246 ms
8 ae3.cr2.iah1.us.above.net (64.125.21.85) 162.884 ms 77.157 ms 69.431 ms
9 ae14.cr2.dca2.us.above.net (64.125.21.53) 97.115 ms 113.428 ms 80.068 ms
10 ae8.mpr4.bos2.us.above.net.29.125.64.in-addr.arpa (64.125.29.33) 109.957 ms 124.964 ms 122.447 ms
11 * 64.125.69.90.t01470-01.above.net (64.125.69.90) 86.163 ms 103.232 ms
12 250.252.148.207.static.yourhostingaccount.com (207.148.252.250) 111.068 ms 119.984 ms 114.022 ms
13 209.59.211.175 (209.59.211.175) 103.358 ms 87.412 ms 86.345 ms
14 reed.com (67.223.249.82) 87.276 ms 102.752 ms 86.800 ms
Cisco->AlterNet->XO->ALGX is one path, and Cox->AboveNet->Presumably ALGX is another. They both traverse the core.
Going to bufferbloat.net, I actually do skip the core in one path. Through Cisco, I go through CoreSite and Hurricane Electric and finally into ISC. ISC, it turns out, is a Cox customer; taking my residential path, since Cox serves us both, the traffic never goes upstream from Cox.
Yes, there are CDNs. I don’t think you’d like the way Video/IP and especially adaptive bitrate video - Netflix, Youtube, etc - worked if they didn’t exist. Akamai is probably the prototypical one, and when they deployed theirs it made the Internet quite a bit snappier - and that helped the economics of Internet sales. Google and Facebook actually do operate large data centers, but a lot of their common content (or at least Google’s) is in CDNlets. NetFlix uses several CDNs, or so I’m told; the best explanation I have found of their issues with Comcast and Level 3 is at http://www.youtube.com/watch?v=tR1sLLOYxnY (and it has imperfections). And yes, part of the story is business issues over CDNs. Netflix’s data traverses the core once to each CDN download server, and from the server to its customers.
The IETF uses a CDN, as of recently. It’s called Cloudflare.
One of the places I worry is Chrome and Silk’s SPDY Proxies, which are somewhere in Google and Amazon respectively. Chrome and Silk send https and SPDY traffic directly to the targeted service, but http traffic to their proxies, which do their magic and send the result back. One of the potential implications is that instead of going to the CDN nearest me, it then goes to the CDN nearest the proxy. That’s not good for me. I just hope that the CDNs I use accept https from me, because that will give me the best service (and btw encrypts my data).
Blind men and elephants, and they’re all right.
[-- Attachment #2: Message signed with OpenPGP using GPGMail --]
[-- Type: application/pgp-signature, Size: 195 bytes --]
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [Cerowrt-devel] [Bloat] viability of the data center in the internet of the future
2014-06-29 0:50 ` [Cerowrt-devel] [Bloat] " Fred Baker (fred)
@ 2014-07-01 8:37 ` Dave Taht
2014-07-01 16:38 ` Fred Baker (fred)
0 siblings, 1 reply; 6+ messages in thread
From: Dave Taht @ 2014-07-01 8:37 UTC (permalink / raw)
To: Fred Baker (fred); +Cc: cerowrt-devel, bloat
On Sat, Jun 28, 2014 at 5:50 PM, Fred Baker (fred) <fred@cisco.com> wrote:
>
> On Jun 27, 2014, at 9:58 PM, Dave Taht <dave.taht@gmail.com> wrote:
>
>> One of the points in the wired article that kicked this thread off was
>> this picture of what the internet is starting to look like:
>>
>> http://www.wired.com/wp-content/uploads/2014/06/net_neutral.jpg.jpeg
>>
>> I don't want it to look like that.
>
> Well, I think trying to describe the Internet in those terms is a lot like half a dozen blind men describing an elephant. The picture makes a point, and a good one. But it’s also wildly inaccurate. It depends on which blind man you ask. And they’ll all be right, from their perspective.
In the case of the 'fast lane/slow lane' debate it seemed the blind
men were fiercely arguing over the elephant's preference for chicken
or beef, without engaging the (vegetarian) elephant in the discussion.
The picture was about content and not connectivity. The debate seemed
to be about who was going to put speed bumps on the highway when the
real problem was over who was going to plug what port into which
switch.
I grew quite annoyed after a while.
> There is in fact a backbone. Once upon a time, it was run by a single company, BBN. Then it was more like five, and then ... and now it’s 169. There are, if the BGP report (http://seclists.org/nanog/2014/Jun/495) is to be believed, 47136 ASNs in the system, of which 35929 don’t show up as transit for anyone and are therefore presumably edge networks and potentially multihomed, and of those 16325 only announce a single prefix. Of the 6101 ASNs that show up as transit, 169 ONLY show up as transit. Yes, the core is 169 ASNs, and it’s not a little dot off to the side. If you want to know where it is, do a traceroute (tracery on windows).
The fact that the internet has grown to 10+ billion devices (by some
estimates), and from 1 transit provider to only 169 doesn't impress
me. There are 206 countries in the world...
It is a shame that multi-homing has never been easily obtainable nor
widely available; it would be nice to be able to have multiple links
for any business critically dependent on the continuous operation of
the internet and cloud.
> I’ll give you two, one through Cisco and one through my residential provider.
>
> traceroute to reed.com (67.223.249.82), 64 hops max, 52 byte packets
> 1 sjc-fred-881.cisco.com (10.19.64.113) 1.289 ms 12.000 ms 1.130 ms
This is through your vpn?
> 2 sjce-access-hub1-tun10.cisco.com (10.27.128.1) 47.661 ms 45.281 ms 42.995 ms
> 3 ...
> 11 sjck-isp-gw1-ten1-1-0.cisco.com (128.107.239.217) 44.972 ms 45.094 ms 43.670 ms
> 12 tengige0-2-0-0.gw5.scl2.alter.net (152.179.99.153) 48.806 ms 49.338 ms 47.975 ms
> 13 0.xe-9-1-0.br1.sjc7.alter.net (152.63.51.101) 43.998 ms 45.595 ms 49.838 ms
> 14 206.111.6.121.ptr.us.xo.net (206.111.6.121) 52.110 ms 45.492 ms 47.373 ms
> 15 207.88.14.225.ptr.us.xo.net (207.88.14.225) 126.696 ms 124.374 ms 127.983 ms
> 16 te-2-0-0.rar3.washington-dc.us.xo.net (207.88.12.70) 127.639 ms 132.965 ms 131.415 ms
> 17 te-3-0-0.rar3.nyc-ny.us.xo.net (207.88.12.73) 129.747 ms 125.680 ms 123.907 ms
> 18 ae0d0.mcr1.cambridge-ma.us.xo.net (216.156.0.26) 125.009 ms 123.152 ms 126.992 ms
> 19 ip65-47-145-6.z145-47-65.customer.algx.net (65.47.145.6) 118.244 ms 118.024 ms 117.983 ms
> 20 * * *
> 21 209.59.211.175 (209.59.211.175) 119.378 ms * 122.057 ms
> 22 reed.com (67.223.249.82) 120.051 ms 120.146 ms 118.672 ms
> traceroute to reed.com (67.223.249.82), 64 hops max, 52 byte packets
> 1 10.0.2.1 (10.0.2.1) 1.728 ms 1.140 ms 1.289 ms
> 2 10.6.44.1 (10.6.44.1) 122.289 ms 126.330 ms 14.782 ms
^^^^^ is this a wireless hop or something? Seeing your traceroute jump
all the way to 122+ms strongly suggests you are either wireless or
non-pied/fq_codeled.
> 3 ip68-4-12-20.oc.oc.cox.net (68.4.12.20) 13.208 ms 12.667 ms 8.941 ms
> 4 ip68-4-11-96.oc.oc.cox.net (68.4.11.96) 17.025 ms 13.911 ms 13.835 ms
> 5 langbprj01-ae1.rd.la.cox.net (68.1.1.13) 131.855 ms 14.677 ms 129.860 ms
> 6 68.105.30.150 (68.105.30.150) 16.750 ms 31.627 ms 130.134 ms
> 7 ae11.cr2.lax112.us.above.net (64.125.21.173) 40.754 ms 31.873 ms 130.246 ms
> 8 ae3.cr2.iah1.us.above.net (64.125.21.85) 162.884 ms 77.157 ms 69.431 ms
> 9 ae14.cr2.dca2.us.above.net (64.125.21.53) 97.115 ms 113.428 ms 80.068 ms
> 10 ae8.mpr4.bos2.us.above.net.29.125.64.in-addr.arpa (64.125.29.33) 109.957 ms 124.964 ms 122.447 ms
> 11 * 64.125.69.90.t01470-01.above.net (64.125.69.90) 86.163 ms 103.232 ms
> 12 250.252.148.207.static.yourhostingaccount.com (207.148.252.250) 111.068 ms 119.984 ms 114.022 ms
> 13 209.59.211.175 (209.59.211.175) 103.358 ms 87.412 ms 86.345 ms
> 14 reed.com (67.223.249.82) 87.276 ms 102.752 ms 86.800 ms
Doing me to you:
d@ida:$ traceroute -n 68.4.12.20
traceroute to 68.4.12.20 (68.4.12.20), 30 hops max, 60 byte packets
1 172.21.2.1 0.288 ms 0.495 ms 0.469 ms
2 172.21.0.1 0.758 ms 0.744 ms 0.725 ms
3 172.29.142.6 1.121 ms 1.105 ms 1.085 ms
(wireless mesh hop 1)
4 172.20.142.9 2.932 ms 2.923 ms 6.429 ms
5 172.20.142.2 6.417 ms 6.398 ms 6.378 ms
|
(wireless mesh hop 2)
|
6 172.20.142.10 10.217 ms 12.162 ms 16.041 ms
7 192.168.100.1 16.042 ms 16.751 ms 19.185 ms
8 50.197.142.150 19.181 ms 19.547 ms 19.529 ms
9 67.180.184.1 24.600 ms 23.674 ms 23.659 ms
10 68.85.102.173 30.633 ms 30.639 ms 32.414 ms
11 69.139.198.142 32.404 ms 69.139.198.234 29.263 ms 68.87.193.146 32.465 ms
12 68.86.91.45 30.067 ms 32.566 ms 32.074 ms
13 68.86.85.242 30.238 ms 32.691 ms 32.031 ms
14 68.105.31.38 29.484 ms 28.925 ms 28.086 ms
15 68.1.0.185 44.320 ms 42.021 ms 68.1.0.181 41.999 ms
....
Using ping rather than traceroute I get a typical min RTT to you
of 32ms.
As the crow drives between santa barbara and los gatos (280 miles), at
the speed of light in cable, we have roughly 4ms of RTT between us, or
28ms of induced latency due to the characteristics of the underlying
media technologies, and the quality and limited quantity of the
interconnects.
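Back of the envelope (a sketch, assuming a propagation speed of ~2/3 c in
fiber/coax, i.e. roughly 200 km per millisecond; the real velocity factor
varies with the plant):

road_km = 280 * 1.609            # ~450 km by road
one_way_ms = road_km / 200.0     # ~200 km per ms at ~2/3 c
prop_rtt_ms = 2 * one_way_ms     # ~4.5 ms propagation-only RTT
measured_min_rtt_ms = 32         # min ping RTT quoted above
print(round(prop_rtt_ms, 1), round(measured_min_rtt_ms - prop_rtt_ms, 1))
# -> 4.5 27.5, call it ~4ms of path and ~28ms of induced latency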
A number I've long longed to have from fios, dsl, and cable is a
measurement of "cross-town" latency - in the prior age of
circuit-switched networks, I can't imagine it being much higher than
4ms, and local telephony used to account for a lot of calls.
Going cable to cable, between two comcast cablemodems on (so far as I
know) different CMTSes, the 20 miles between los gatos and scotts
valley:
1 50.197.142.150 0.794 ms 0.692 ms 0.517 ms
2 67.180.184.1 19.266 ms 18.397 ms 8.726 ms
3 68.85.102.173 14.953 ms 9.347 ms 10.213 ms
4 69.139.198.146 20.477 ms 69.139.198.142 12.434 ms
69.139.198.138 16.116 ms
5 68.87.226.205 17.850 ms 15.375 ms 13.954 ms
6 68.86.142.250 28.254 ms 33.133 ms 28.546 ms
7 67.180.229.17 21.987 ms 23.831 ms 27.354 ms
gfiber testers are reporting 3-5ms RTT to speedtest (co-lo'd in their
data center), which is a very encouraging statistic, but I don't have
subscriber-2-subscriber numbers there. Yet.
>
> Cisco->AlterNet->XO->ALGX is one path, and Cox->AboveNet->Presumably ALGX is another. They both traverse the core.
>
> Going to bufferbloat.net, I actually do skip the core in one path. Through Cisco, I go through core site and hurricane electric and finally into ISC. ISC, it turns out, is a Cox customer; taking my residential path, since Cox serves us both, the traffic never goes upstream from Cox.
>
> Yes, there are CDNs. I don’t think you’d like the way Video/IP and especially adaptive bitrate video - Netflix, Youtube, etc - worked if they didn’t exist.
I totally favor CDNs of all sorts. My worry - not successfully
mirrored in the fast/slow lane debate - was over the vertical
integration of certain providers preventing future CDN deployments of
certain kinds of content.
>Akamai is probably the prototypical one, and when they deployed theirs it made the Internet quite a bit snappier - and that helped the economics of Internet sales. Google and Facebook actually do operate large data centers, but a lot of their common content (or at least Google’s) is in CDNlets. NetFlix uses several CDNs, or so I’m told; the best explanation I have found of their issues with Comcast and Level 3 is at http://www.youtube.com/watch?v=tR1sLLOYxnY (and it has imperfections). And yes, part of the story is business issues over CDNs. Netflix’s data traverses the core once to each CDN download server, and from the server to its customers.
Yes, that description mostly mirrors my understanding, and the viewpoint we
put forth in the wired article, which I hoped would help defuse the hysteria.
Then what gfiber published shortly afterwards on their co-lo policy
scored some points, I thought.
http://googlefiberblog.blogspot.com/2014/05/minimizing-buffering.html
In addition to the wayward political arguments, what bothered me
about level3's argument is that they made unsubstantiated claims about
packet loss and latency that I'd have loved to hear more about,
notably whether or not they had any AQM in place.
> The IETF uses a CDN, as of recently. It’s called Cloudflare.
>
> One of the places I worry is Chrome and Silk’s SPDY Proxies, which are somewhere in Google and Amazon respectively.
Well, the current focus on e2e encryption everywhere is breaking good
old fashioned methods of minimizing dns and web traffic inside an
organization and coping with odd circumstances like satellite links. I
liked web proxies; they were often capable of reducing traffic by 10s
of percentage points, reducing latency enormously for lossy or satellite
links, and were frequently used by large organizations (like schools)
to manage content.
>Chrome and Silk send https and SPDY traffic directly to the targeted service, but http traffic to their proxies, which do their magic and send the result back. One of the potential implications is that instead of going to the CDN nearest me, it then goes to the CDN nearest the proxy. That’s not good for me. I just hope that the CDNs I use accept https from me, because that will give me the best service (and btw encrypts my data).
>
> Blind men and elephants, and they’re all right.
>
>
>
--
Dave Täht
NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [Cerowrt-devel] [Bloat] viability of the data center in the internet of the future
2014-07-01 8:37 ` Dave Taht
@ 2014-07-01 16:38 ` Fred Baker (fred)
2014-07-02 8:21 ` Baptiste Jonglez
0 siblings, 1 reply; 6+ messages in thread
From: Fred Baker (fred) @ 2014-07-01 16:38 UTC (permalink / raw)
To: Dave Taht; +Cc: cerowrt-devel, bloat
[-- Attachment #1: Type: text/plain, Size: 14404 bytes --]
On Jul 1, 2014, at 1:37 AM, Dave Taht <dave.taht@gmail.com> wrote:
> On Sat, Jun 28, 2014 at 5:50 PM, Fred Baker (fred) <fred@cisco.com> wrote:
>> There is in fact a backbone. Once upon a time, it was run by a single company, BBN. Then it was more like five, and then ... and now it’s 169. There are, if the BGP report (http://seclists.org/nanog/2014/Jun/495) is to be believed, 47136 ASNs in the system, of which 35929 don’t show up as transit for anyone and are therefore presumably edge networks and potentially multihomed, and of those 16325 only announce a single prefix. Of the 6101 ASNs that show up as transit, 169 ONLY show up as transit. Yes, the core is 169 ASNs, and it’s not a little dot off to the side. If you want to know where it is, do a traceroute (tracery on windows).
>
> The fact that the internet has grown to 10+ billion devices (by some
> estimates), and from 1 transit provider to only 169 doesn't impress
> me. There are 206 countries in the world...
Did I say that there was only one transit provider? I said there were 169 AS’s that, in potaroo’s equivalent of route views, *only* show up as transit. There are, this morning, 195 transit-only AS’s, 40724 origin-only AS’s (AS’s that are only found at the edge), and 6573 AS’s that show up as both origin AS’s and transit AS’s.
http://bgp.potaroo.net/as2.0/bgp-active.html
> It is a shame that multi-homing has never been easily obtainable nor
> widely available, it would be nice to be able to have multiple links
> for any business critically dependent on the continuous operation of
> the internet and cloud.
Actually, it is pretty common. Again, from potaroo.net, there are 30620 origin AS’s announced via a single AS path. The implication is that there are 40724-30620=10104 origin AS’s being announced to AS65000 via multiple AS paths. I don’t know whether they or their upstreams are multihomed, but I’ll bet a significant subset of them are multihomed.
>> I’ll give you two, one through Cisco and one through my residential provider.
>>
>> traceroute to reed.com (67.223.249.82), 64 hops max, 52 byte packets
>> 1 sjc-fred-881.cisco.com (10.19.64.113) 1.289 ms 12.000 ms 1.130 ms
>
> This is through your vpn?
Yes
>> 2 sjce-access-hub1-tun10.cisco.com (10.27.128.1) 47.661 ms 45.281 ms 42.995 ms
>
>> 3 ...
>> 11 sjck-isp-gw1-ten1-1-0.cisco.com (128.107.239.217) 44.972 ms 45.094 ms 43.670 ms
>> 12 tengige0-2-0-0.gw5.scl2.alter.net (152.179.99.153) 48.806 ms 49.338 ms 47.975 ms
>> 13 0.xe-9-1-0.br1.sjc7.alter.net (152.63.51.101) 43.998 ms 45.595 ms 49.838 ms
>> 14 206.111.6.121.ptr.us.xo.net (206.111.6.121) 52.110 ms 45.492 ms 47.373 ms
>> 15 207.88.14.225.ptr.us.xo.net (207.88.14.225) 126.696 ms 124.374 ms 127.983 ms
>> 16 te-2-0-0.rar3.washington-dc.us.xo.net (207.88.12.70) 127.639 ms 132.965 ms 131.415 ms
>> 17 te-3-0-0.rar3.nyc-ny.us.xo.net (207.88.12.73) 129.747 ms 125.680 ms 123.907 ms
>> 18 ae0d0.mcr1.cambridge-ma.us.xo.net (216.156.0.26) 125.009 ms 123.152 ms 126.992 ms
>> 19 ip65-47-145-6.z145-47-65.customer.algx.net (65.47.145.6) 118.244 ms 118.024 ms 117.983 ms
>> 20 * * *
>> 21 209.59.211.175 (209.59.211.175) 119.378 ms * 122.057 ms
>> 22 reed.com (67.223.249.82) 120.051 ms 120.146 ms 118.672 ms
>
>
>> traceroute to reed.com (67.223.249.82), 64 hops max, 52 byte packets
>> 1 10.0.2.1 (10.0.2.1) 1.728 ms 1.140 ms 1.289 ms
>> 2 10.6.44.1 (10.6.44.1) 122.289 ms 126.330 ms 14.782 ms
>
> ^^^^^ is this a wireless hop or something? Seeing your traceroute jump
> all the way to 122+ms strongly suggests you are either wireless or
> non-pied/fq_codeled.
The zeroth hop is wireless - I pull my Ethernet plug and turn on the wifi interface, which is instantiated by two Apple Airport APs in the home. 10.0.2.1 is the residential slice of my router. To be honest, I’m hard-pressed to say what 10.6.44.1 is; I suspect it’s an address of my CMTS. The address *I* have for my CMTS is 98.173.193.1, and my address in that subnet is 98.173.193.12. If you want my guess, Cox is returning an RFC 1918 address to prevent non-customers from pinging it.
--- 10.6.44.1 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 7.668/10.102/12.012/1.520 ms
--- 98.173.193.1 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 8.414/30.501/120.407/41.031 ms
and 98.173.193.1 doesn’t show up in my traceroute.
Absent per-hop timestamps, I’m not in a position to say where the delay came from. For all I know, it has something to do with the Wifi in the house. Wifi can have really strange delays.
Whatever.
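(If you want to check the RFC 1918 guess mechanically, Python’s stdlib
ipaddress module will do it; the two addresses are the ones from the
traceroute and ping output above:)

import ipaddress

RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr):
    return any(ipaddress.ip_address(addr) in net for net in RFC1918)

print(is_rfc1918("10.6.44.1"), is_rfc1918("98.173.193.1"))   # True False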
>> 3 ip68-4-12-20.oc.oc.cox.net (68.4.12.20) 13.208 ms 12.667 ms 8.941 ms
>> 4 ip68-4-11-96.oc.oc.cox.net (68.4.11.96) 17.025 ms 13.911 ms 13.835 ms
>> 5 langbprj01-ae1.rd.la.cox.net (68.1.1.13) 131.855 ms 14.677 ms 129.860 ms
>> 6 68.105.30.150 (68.105.30.150) 16.750 ms 31.627 ms 130.134 ms
>> 7 ae11.cr2.lax112.us.above.net (64.125.21.173) 40.754 ms 31.873 ms 130.246 ms
>> 8 ae3.cr2.iah1.us.above.net (64.125.21.85) 162.884 ms 77.157 ms 69.431 ms
>> 9 ae14.cr2.dca2.us.above.net (64.125.21.53) 97.115 ms 113.428 ms 80.068 ms
>> 10 ae8.mpr4.bos2.us.above.net.29.125.64.in-addr.arpa (64.125.29.33) 109.957 ms 124.964 ms 122.447 ms
>> 11 * 64.125.69.90.t01470-01.above.net (64.125.69.90) 86.163 ms 103.232 ms
>> 12 250.252.148.207.static.yourhostingaccount.com (207.148.252.250) 111.068 ms 119.984 ms 114.022 ms
>> 13 209.59.211.175 (209.59.211.175) 103.358 ms 87.412 ms 86.345 ms
>> 14 reed.com (67.223.249.82) 87.276 ms 102.752 ms 86.800 ms
>
> Doing me to you:
>
> d@ida:$ traceroute -n 68.4.12.20
Through Cox:
--- 68.4.12.20 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 12.954/16.348/28.209/4.777 ms
traceroute to 68.4.12.20 (68.4.12.20), 64 hops max, 52 byte packets
1 10.0.2.1 1.975 ms 9.026 ms 1.397 ms
2 * * *
3 * * *
Traceroute to Facebook works, though:
traceroute www.facebook.com
traceroute to star.c10r.facebook.com (31.13.77.65), 64 hops max, 52 byte packets
1 10.0.2.1 (10.0.2.1) 1.490 ms 1.347 ms 0.934 ms
2 10.6.44.1 (10.6.44.1) 9.253 ms 11.308 ms 10.974 ms
3 ip68-4-12-20.oc.oc.cox.net (68.4.12.20) 11.275 ms 13.531 ms 20.180 ms
4 ip68-4-11-96.oc.oc.cox.net (68.4.11.96) 18.901 ms 13.013 ms 18.723 ms
5 sanjbprj01-ae0.0.rd.sj.cox.net (68.1.5.184) 29.397 ms 28.944 ms 30.062 ms
6 sv1.br01.sjc1.tfbnw.net (206.223.116.166) 31.011 ms 31.082 ms
sv1.pr02.tfbnw.net (206.223.116.153) 32.035 ms
7 ae1.bb01.sjc1.tfbnw.net (74.119.76.23) 32.932 ms 33.251 ms
po126.msw01.05.sjc1.tfbnw.net (31.13.31.131) 31.822 ms
8 edge-star-shv-05-sjc1.facebook.com (31.13.77.65) 38.234 ms 44.150 ms 31.165 ms
So it’s not that the router is dropping incoming ICMP.
Through Cisco:
--- 68.4.12.20 ping statistics ---
10 packets transmitted, 0 packets received, 100.0% packet loss
traceroute to 68.4.12.20 (68.4.12.20), 64 hops max, 52 byte packets
1 10.19.64.113 1.173 ms 0.932 ms 0.932 ms
2 10.27.128.1 36.256 ms 36.478 ms 37.376 ms
3 10.20.1.205 35.831 ms 36.211 ms 36.090 ms
4 171.69.14.249 36.084 ms 36.345 ms 37.889 ms
5 171.69.14.206 38.342 ms 37.791 ms 39.771 ms
6 171.69.7.178 37.699 ms 36.662 ms 41.758 ms
7 128.107.236.39 43.112 ms 36.401 ms 39.407 ms
8 128.107.239.6 35.576 ms 35.092 ms 37.770 ms
9 128.107.239.218 35.846 ms 35.337 ms 36.488 ms
10 128.107.239.250 35.504 ms 36.924 ms 39.353 ms
11 128.107.239.217 36.881 ms 38.063 ms 37.892 ms
12 152.179.99.153 38.745 ms 39.754 ms 39.665 ms
13 152.63.51.97 38.322 ms 37.466 ms 41.380 ms
14 129.250.9.249 39.924 ms 40.913 ms 39.690 ms
15 129.250.5.52 46.302 ms 43.463 ms 39.334 ms
16 129.250.6.10 49.332 ms 45.380 ms 47.309 ms
17 129.250.5.86 46.556 ms 48.806 ms
129.250.5.70 48.635 ms
18 129.250.6.181 48.020 ms
129.250.6.203 47.502 ms 47.111 ms
19 129.250.194.166 47.373 ms 48.532 ms 48.723 ms
20 68.1.0.179 66.514 ms
68.1.0.185 63.758 ms
68.1.0.189 61.326 ms
21 * * *
> Using ping rather than traceroute I get a typical min RTT to you
> of 32ms.
>
> As the crow drives between santa barbara and los gatos, (280 miles) at
> the speed of light in cable, we have roughly 4ms of RTT between us, or
> 28ms of induced latency due to the characteristics of the underlying
> media technologies, and the quality and limited quantity of the
> interconnects.
>
> A number I've long longed to have from fios, dsl, and cable are
> measurements of "cross-town" latency - in the prior age of
> circuit-switched networks, I can't imagine it being much higher than
> 4ms, and local telephony used to account for a lot of calls.
Well, if it’s of any interest, I once upon a time had a fractional T-1 to the home (a different one, but here in Santa Barbara), and ping RTT to Cisco was routinely 30ish ms much as it is now through Cox. I did have it jump once to about 600 ms, and I called to complain.
> Going cable to cable, between two comcast cablemodems on (so far as I
> know) different CMTSes, the 20 miles between los gatos and scotts
> valley:
>
> 1 50.197.142.150 0.794 ms 0.692 ms 0.517 ms
> 2 67.180.184.1 19.266 ms 18.397 ms 8.726 ms
> 3 68.85.102.173 14.953 ms 9.347 ms 10.213 ms
> 4 69.139.198.146 20.477 ms 69.139.198.142 12.434 ms
> 69.139.198.138 16.116 ms
> 5 68.87.226.205 17.850 ms 15.375 ms 13.954 ms
> 6 68.86.142.250 28.254 ms 33.133 ms 28.546 ms
> 7 67.180.229.17 21.987 ms 23.831 ms 27.354 ms
>
> gfiber testers are reporting 3-5ms RTT to speedtest (co-lo'd in their
> data center), which is a very encouraging statistic, but I don't have
> subscriber-2-subscriber numbers there. Yet.
>
>>
>> Cisco->AlterNet->XO->ALGX is one path, and Cox->AboveNet->Presumably ALGX is another. They both traverse the core.
>>
>> Going to bufferbloat.net, I actually do skip the core in one path. Through Cisco, I go through core site and hurricane electric and finally into ISC. ISC, it turns out, is a Cox customer; taking my residential path, since Cox serves us both, the traffic never goes upstream from Cox.
>>
>> Yes, there are CDNs. I don’t think you’d like the way Video/IP and especially adaptive bitrate video - Netflix, Youtube, etc - worked if they didn’t exist.
>
> I totally favor CDNs of all sorts. My worry - not successfully
> mirrored in the fast/slow lane debate - was over the vertical
> integration of certain providers preventing future CDN deployments of
> certain kinds of content.
Personally, I think most of that is blarney. A contract to colo a CDN provider is money for the service provider. I haven’t noticed any service providers turning down money.
>> Akamai is probably the prototypical one, and when they deployed theirs it made the Internet quite a bit snappier - and that helped the economics of Internet sales. Google and Facebook actually do operate large data centers, but a lot of their common content (or at least Google’s) is in CDNlets. NetFlix uses several CDNs, or so I’m told; the best explanation I have found of their issues with Comcast and Level 3 is at http://www.youtube.com/watch?v=tR1sLLOYxnY (and it has imperfections). And yes, part of the story is business issues over CDNs. Netflix’s data traverses the core once to each CDN download server, and from the server to its customers.
>
> Yes, that description mostly mirrors my understanding, and the viewpoint we
> point forth in the wired article which I hoped help to defuse the hysteria.
>
> Then what gfiber published shortly afterwards on their co-lo policy
> scored some points, I thought.
>
> http://googlefiberblog.blogspot.com/2014/05/minimizing-buffering.html
>
> In addition the wayward political arguments, the what bothered me
> about level3's argument is that the made unsubstantiated claims about
> packet loss and latency that I'd have loved to hear more about,
> notably whether or not they had any AQM in place.
Were I Netflix and company, and for that matter Youtube, I would handle delay at the TCP sender by using a delay-based TCP congestion control algorithm. There is at least one common data center provider that I think does that; they told me that they had purchased a congestion control algorithm (although the guy I was speaking with didn’t know what they bought or from whom), and the only one I know of that is for sale in that sense is a pretty effective delay-based algorithm. The point of TCP congestion control is to maximize throughput while protecting the Internet. I would argue that it SHOULD be to maximize throughput while minimizing latency. Rant available on request.
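(For what it’s worth, on Linux the congestion control algorithm is selectable
per socket, so a sender doesn’t need a custom kernel to do this. A minimal
sketch, using Vegas purely as a well-known delay-based example - not the
algorithm that provider actually bought, which I don’t know - and assuming a
Linux host, Python 3.6+, and a loaded tcp_vegas module:)

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# setsockopt raises OSError if the named algorithm isn't available on this host
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"vegas")
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))  # b'vegas\x00...'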
>> The IETF uses a CDN, as of recently. It’s called Cloudflare.
>>
>> One of the places I worry is Chrome and Silk’s SPDY Proxies, which are somewhere in Google and Amazon respectively.
>
> Well, the current focus on e2e encryption everywhere is breaking good
> old fashioned methods of minimizing dns and web traffic inside an
> organization and coping with odd circumstances like satellite links. I
> liked web proxies, they were often capable of reducing traffic by 10s
> of percentage points, reduce latency enormously for lossy or satellite
> links, and were frequently used by large organizations (like schools)
> to manage content.
Well, yes. They also have the effect of gerrymandering routing. All traffic through a proxy could go directly to the destination but goes first to the proxy. If the proxy is on the path, well and good. If it’s off-path, it adds RTT.
>> Chrome and Silk send https and SPDY traffic directly to the targeted service, but http traffic to their proxies, which do their magic and send the result back. One of the potential implications is that instead of going to the CDN nearest me, it then goes to the CDN nearest the proxy. That’s not good for me. I just hope that the CDNs I use accept https from me, because that will give me the best service (and btw encrypts my data).
>>
>> Blind men and elephants, and they’re all right.
[-- Attachment #2: Message signed with OpenPGP using GPGMail --]
[-- Type: application/pgp-signature, Size: 195 bytes --]
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [Cerowrt-devel] [Bloat] viability of the data center in the internet of the future
2014-07-01 16:38 ` Fred Baker (fred)
@ 2014-07-02 8:21 ` Baptiste Jonglez
0 siblings, 0 replies; 6+ messages in thread
From: Baptiste Jonglez @ 2014-07-02 8:21 UTC (permalink / raw)
To: cerowrt-devel
[-- Attachment #1: Type: text/plain, Size: 777 bytes --]
On Tue, Jul 01, 2014 at 04:38:47PM +0000, Fred Baker (fred) wrote:
> Were I Netflix and company, and for that matter Youtube, I would handle
> delay at the TCP sender by using a delay-based TCP congestion control
> algorithm. There is at least one common data center provider that I
> think does that; they told me that they had purchased a congestion
> control algorithm (although the guy I was speaking with didn’t know what
> they bought or from whom), and the only one I know of that is for sale
> in that sense is a pretty effective delay-based algorithm.
Oh, people actually using FASP [1]? Or is it something else?
(don't look too hard at the description if you are allergic to marketing bullshit)
[1] http://asperasoft.com/technology/transport/fasp
[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]
^ permalink raw reply [flat|nested] 6+ messages in thread
end of thread, other threads:[~2014-07-02 8:21 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-28 4:58 [Cerowrt-devel] viability of the data center in the internet of the future Dave Taht
2014-06-28 12:31 ` David P. Reed
2014-06-29 0:50 ` [Cerowrt-devel] [Bloat] " Fred Baker (fred)
2014-07-01 8:37 ` Dave Taht
2014-07-01 16:38 ` Fred Baker (fred)
2014-07-02 8:21 ` Baptiste Jonglez