[Cerowrt-devel] [Bloat] viability of the data center in the internet of the future

Dave Taht dave.taht at gmail.com
Tue Jul 1 01:37:23 PDT 2014


On Sat, Jun 28, 2014 at 5:50 PM, Fred Baker (fred) <fred at cisco.com> wrote:
>
> On Jun 27, 2014, at 9:58 PM, Dave Taht <dave.taht at gmail.com> wrote:
>
>> One of the points in the wired article that kicked this thread off was
>> this picture of what the internet is starting to look like:
>>
>> http://www.wired.com/wp-content/uploads/2014/06/net_neutral.jpg.jpeg
>>
>> I don't want it to look like that.
>
> Well, I think trying to describe the Internet in those terms is a lot like half a dozen blind men describing an elephant. The picture makes a point, and a good one. But it’s also wildly inaccurate. It depends on which blind man you ask. And they’ll all be right, from their perspective.

In the case of the 'fast lane/slow lane' debate it seemed the blind
men were fiercely arguing over the elephant's preference for chicken
or beef, without engaging the (vegetarian) elephant in the discussion.

The picture was about content, not connectivity. The debate seemed
to be about who was going to put speed bumps on the highway, when the
real problem was who was going to plug what port into which
switch.

I grew quite annoyed after a while.

> There is in fact a backbone. Once upon a time, it was run by a single company, BBN. Then it was more like five, and then ... and now it’s 169. There are, if the BGP report (http://seclists.org/nanog/2014/Jun/495) is to be believed, 47136 ASNs in the system, of which 35929 don’t show up as transit for anyone and are therefore presumably edge networks and potentially multihomed, and of those 16325 only announce a single prefix. Of the 6101 ASNs that show up as transit, 169 ONLY show up as transit. Yes, the core is 169 ASNs, and it’s not a little dot off to the side. If you want to know where it is, do a traceroute (tracert on Windows).

The fact that the internet has grown to 10+ billion devices (by some
estimates), while going from 1 transit provider to only 169 core
ASNs, doesn't impress me. There are 206 countries in the world...
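(For the curious: given a dump of BGP AS paths, one per line,
counting the transit-only ASNs is a one-liner. "aspaths.txt" here is
a stand-in for whatever your route collector produces, and AS-set
stanzas would need more care:)

awk '{ for (i = 1; i < NF; i++) transit[$i] = 1; origin[$NF] = 1 }
     END { for (a in transit) if (!(a in origin)) n++; print n }' aspaths.txt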

It is a shame that multi-homing has never been easy to obtain or
widely available; it would be nice to be able to have multiple links
for any business critically dependent on the continuous operation of
the internet and the cloud.

> I’ll give you two, one through Cisco and one through my residential provider.
>
> traceroute to reed.com (67.223.249.82), 64 hops max, 52 byte packets
>  1  sjc-fred-881.cisco.com (10.19.64.113)  1.289 ms  12.000 ms  1.130 ms

Is this through your VPN?

>  2  sjce-access-hub1-tun10.cisco.com (10.27.128.1)  47.661 ms  45.281 ms  42.995 ms

>  3  ...
> 11  sjck-isp-gw1-ten1-1-0.cisco.com (128.107.239.217)  44.972 ms  45.094 ms  43.670 ms
> 12  tengige0-2-0-0.gw5.scl2.alter.net (152.179.99.153)  48.806 ms  49.338 ms  47.975 ms
> 13  0.xe-9-1-0.br1.sjc7.alter.net (152.63.51.101)  43.998 ms  45.595 ms  49.838 ms
> 14  206.111.6.121.ptr.us.xo.net (206.111.6.121)  52.110 ms  45.492 ms  47.373 ms
> 15  207.88.14.225.ptr.us.xo.net (207.88.14.225)  126.696 ms  124.374 ms  127.983 ms
> 16  te-2-0-0.rar3.washington-dc.us.xo.net (207.88.12.70)  127.639 ms  132.965 ms  131.415 ms
> 17  te-3-0-0.rar3.nyc-ny.us.xo.net (207.88.12.73)  129.747 ms  125.680 ms  123.907 ms
> 18  ae0d0.mcr1.cambridge-ma.us.xo.net (216.156.0.26)  125.009 ms  123.152 ms  126.992 ms
> 19  ip65-47-145-6.z145-47-65.customer.algx.net (65.47.145.6)  118.244 ms  118.024 ms  117.983 ms
> 20  * * *
> 21  209.59.211.175 (209.59.211.175)  119.378 ms *  122.057 ms
> 22  reed.com (67.223.249.82)  120.051 ms  120.146 ms  118.672 ms


> traceroute to reed.com (67.223.249.82), 64 hops max, 52 byte packets
>  1  10.0.2.1 (10.0.2.1)  1.728 ms  1.140 ms  1.289 ms
>  2  10.6.44.1 (10.6.44.1)  122.289 ms  126.330 ms  14.782 ms

^^^^^ Is this a wireless hop or something? Seeing your traceroute jump
all the way to 122+ ms strongly suggests that hop is either wireless
or sitting behind a queue with no PIE or fq_codel.
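(On any modern Linux the fix is usually one command; eth0 is a
placeholder for the bottleneck interface, and on a rate-limited
uplink you'd want htb + fq_codel or the sqm scripts instead:)

# replace the default pfifo_fast qdisc with fq_codel
tc qdisc replace dev eth0 root fq_codel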

>  3  ip68-4-12-20.oc.oc.cox.net (68.4.12.20)  13.208 ms  12.667 ms  8.941 ms
>  4  ip68-4-11-96.oc.oc.cox.net (68.4.11.96)  17.025 ms  13.911 ms  13.835 ms
>  5  langbprj01-ae1.rd.la.cox.net (68.1.1.13)  131.855 ms  14.677 ms  129.860 ms
>  6  68.105.30.150 (68.105.30.150)  16.750 ms  31.627 ms  130.134 ms
>  7  ae11.cr2.lax112.us.above.net (64.125.21.173)  40.754 ms  31.873 ms  130.246 ms
>  8  ae3.cr2.iah1.us.above.net (64.125.21.85)  162.884 ms  77.157 ms  69.431 ms
>  9  ae14.cr2.dca2.us.above.net (64.125.21.53)  97.115 ms  113.428 ms  80.068 ms
> 10  ae8.mpr4.bos2.us.above.net.29.125.64.in-addr.arpa (64.125.29.33)  109.957 ms  124.964 ms  122.447 ms
> 11  * 64.125.69.90.t01470-01.above.net (64.125.69.90)  86.163 ms  103.232 ms
> 12  250.252.148.207.static.yourhostingaccount.com (207.148.252.250)  111.068 ms  119.984 ms  114.022 ms
> 13  209.59.211.175 (209.59.211.175)  103.358 ms  87.412 ms  86.345 ms
> 14  reed.com (67.223.249.82)  87.276 ms  102.752 ms  86.800 ms

Doing me to you:

d@ida:$ traceroute -n 68.4.12.20
traceroute to 68.4.12.20 (68.4.12.20), 30 hops max, 60 byte packets
 1  172.21.2.1  0.288 ms  0.495 ms  0.469 ms
 2  172.21.0.1  0.758 ms  0.744 ms  0.725 ms
 3  172.29.142.6  1.121 ms  1.105 ms  1.085 ms

(wireless mesh hop 1)

 4  172.20.142.9  2.932 ms  2.923 ms  6.429 ms
 5  172.20.142.2  6.417 ms  6.398 ms  6.378 ms

(wireless mesh hop 2)

 6  172.20.142.10  10.217 ms  12.162 ms  16.041 ms
 7  192.168.100.1  16.042 ms  16.751 ms  19.185 ms
 8  50.197.142.150  19.181 ms  19.547 ms  19.529 ms
 9  67.180.184.1  24.600 ms  23.674 ms  23.659 ms
10  68.85.102.173  30.633 ms  30.639 ms  32.414 ms
11  69.139.198.142  32.404 ms 69.139.198.234  29.263 ms 68.87.193.146  32.465 ms
12  68.86.91.45  30.067 ms  32.566 ms  32.074 ms
13  68.86.85.242  30.238 ms  32.691 ms  32.031 ms
14  68.105.31.38  29.484 ms  28.925 ms  28.086 ms
15  68.1.0.185  44.320 ms  42.021 ms 68.1.0.181  41.999 ms

....

Using ping rather than traceroute I get a typical min RTT to you
of 32ms.
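(The "min" figure in ping's summary line is the one to trust. For
example:

ping -c 100 -i 0.2 68.4.12.20 | tail -1

prints the rtt min/avg/max/mdev line after 100 probes, 5 per second.)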

As the crow drives between Santa Barbara and Los Gatos (280 miles),
at the speed of light in cable we have roughly 4 ms of RTT between
us. The other ~28 ms is induced latency due to the characteristics of
the underlying media technologies and the quality and limited
quantity of the interconnects.
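(Back of the envelope: signals in fiber or coax propagate at roughly
2/3 of c, so:

  280 miles ≈ 450 km
  0.66 * 300,000 km/s ≈ 200,000 km/s
  one-way: 450 km / 200,000 km/s ≈ 2.25 ms, so RTT ≈ 4.5 ms

and 32 ms measured, minus ~4 ms of physics, leaves ~28 ms induced.)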

A number I've long wanted from FiOS, DSL, and cable is a measurement
of "cross-town" latency. In the prior age of circuit-switched
networks I can't imagine it having been much higher than 4 ms, and
local telephony used to account for a lot of calls.

Going cable to cable, between two Comcast cable modems on (so far as
I know) different CMTSes, across the 20 miles between Los Gatos and
Scotts Valley:

 1  50.197.142.150  0.794 ms  0.692 ms  0.517 ms
 2  67.180.184.1  19.266 ms  18.397 ms  8.726 ms
 3  68.85.102.173  14.953 ms  9.347 ms  10.213 ms
 4  69.139.198.146  20.477 ms 69.139.198.142  12.434 ms 69.139.198.138  16.116 ms
 5  68.87.226.205  17.850 ms  15.375 ms  13.954 ms
 6  68.86.142.250  28.254 ms  33.133 ms  28.546 ms
 7  67.180.229.17  21.987 ms  23.831 ms  27.354 ms

gfiber testers are reporting 3-5 ms RTT to speedtest (co-lo'd in
their data center), which is a very encouraging statistic, but I
don't have subscriber-to-subscriber numbers there. Yet.

>
> Cisco->AlterNet->XO->ALGX is one path, and Cox->AboveNet->Presumably ALGX is another. They both traverse the core.
>
> Going to bufferbloat.net, I actually do skip the core in one path. Through Cisco, I go through CoreSite and Hurricane Electric and finally into ISC. ISC, it turns out, is a Cox customer; taking my residential path, since Cox serves us both, the traffic never goes upstream from Cox.
>
> Yes, there are CDNs. I don’t think you’d like the way Video/IP and especially adaptive bitrate video - Netflix, Youtube, etc - worked if they didn’t exist.

I totally favor CDNs of all sorts. My worry - not successfully
mirrored in the fast/slow lane debate - was over the vertical
integration of certain providers preventing future CDN deployments of
certain kinds of content.

>Akamai is probably the prototypical one, and when they deployed theirs it made the Internet quite a bit snappier - and that helped the economics of Internet sales. Google and Facebook actually do operate large data centers, but a lot of their common content (or at least Google’s) is in CDNlets. NetFlix uses several CDNs, or so I’m told; the best explanation I have found of their issues with Comcast and Level 3 is at http://www.youtube.com/watch?v=tR1sLLOYxnY (and it has imperfections). And yes, part of the story is business issues over CDNs. Netflix’s data traverses the core once to each CDN download server, and from the server to its customers.

Yes, that description mostly mirrors my understanding, and the
viewpoint we put forth in the wired article, which I hoped would help
defuse the hysteria.

What gfiber published shortly afterwards about their co-lo policy
also scored some points, I thought.

http://googlefiberblog.blogspot.com/2014/05/minimizing-buffering.html

In addition to the wayward political arguments, what bothered me
about Level3's argument is that they made unsubstantiated claims
about packet loss and latency that I'd have loved to hear more about,
notably whether or not they had any AQM in place.
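(Claims like that are checkable from the edge: mtr reports per-hop
loss and latency in one shot. 4.2.2.2, a well-known Level3 resolver,
is just an example target:

mtr -n -r -c 100 4.2.2.2

Run it from both sides of a congested peering point at the busy hour
and the bloated hop tends to stand out.)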

> The IETF uses a CDN, as of recently. It’s called Cloudflare.
>
> One of the places I worry is Chrome and Silk’s SPDY Proxies, which are somewhere in Google and Amazon respectively.

Well, the current focus on e2e encryption everywhere is breaking good
old-fashioned methods of minimizing DNS and web traffic inside an
organization and of coping with odd circumstances like satellite
links. I liked web proxies: they were often capable of reducing
traffic by tens of percentage points, reduced latency enormously over
lossy or satellite links, and were frequently used by large
organizations (like schools) to manage content.
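(A caching proxy never took much to set up, either. A minimal
squid.conf sketch, with illustrative cache sizes and path:

http_port 3128
cache_mem 256 MB
cache_dir ufs /var/spool/squid 10000 16 256
refresh_pattern . 0 20% 4320

That was enough to cut repeated fetches inside a school or office,
and it's exactly what https everywhere makes impossible without
MITM'ing your own users.)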

>Chrome and Silk send https and SPDY traffic directly to the targeted service, but http traffic to their proxies, which do their magic and send the result back. One of the potential implications is that instead of going to the CDN nearest me, it then goes to the CDN nearest the proxy. That’s not good for me. I just hope that the CDNs I use accept https from me, because that will give me the best service (and btw encrypts my data).
>
> Blind men and elephants, and they’re all right.



-- 
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article

