dave.taht at gmail.com
Wed Apr 15 17:35:10 EDT 2020
I've always kind of wanted a guesstimate and cost breakdown
(politics/fiber cost/trenching) as to how much it costs to run, oh,
16 km of quality 100Gbit fiber from Los Gatos to me. I know, month to
month that would kind of cost a lot to fill....
I costed out what it would take to trench the whole community once
upon a time, and instead of that I've been patiently awaiting my first
On Wed, Apr 15, 2020 at 2:09 PM Joel Wirāmu Pauling <joel at aenertia.net> wrote:
> Another neat thing about 400 and 800GE is that you can get MPO optics that allow splitting a single 4x100 or 8x100 into individual 100G feeds. Good for port density and/or adding capacity to processing/edge/appliances.
> Now that there are decent ER optics for 100G, you can do 40-70 km runs of each 100G link without additional active electronics on the path or going to an optical transport route.
> On Thu, 16 Apr 2020 at 08:57, Michael Richardson <mcr at sandelman.ca> wrote:
>> Mikael Abrahamsson via Cerowrt-devel wrote:
>> > Backbone ISPs today are built with lots of parallel links (20x100GE for
>> > instance) and then we do L4 hashing for flows across these. This means
>> got it. inverse multiplexing of flows across *links*
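[The per-flow L4 hashing Mikael describes can be sketched roughly like this -- a minimal illustration only; the function name and the use of CRC32 are stand-ins for whatever hash a real forwarding ASIC uses:]

```python
import zlib

def pick_link(src_ip, dst_ip, proto, sport, dport, n_links):
    """Hash the L4 5-tuple so every packet of a given flow always
    lands on the same member link, preserving in-flow ordering."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return zlib.crc32(key) % n_links

# One TCP flow maps to exactly one of the 20 parallel 100GE links,
# which is why a single flow can never exceed one link's capacity.
link = pick_link("192.0.2.1", "198.51.100.7", 6, 40000, 443, 20)
```

[The key consequence, relevant to the discussion below: per-flow hashing spreads many flows well, but caps any single flow at one member link's speed.]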
>> > We're now going for 100 gigabit/s per lane (lane speeds have gone from
>> > 4x2.5G for 10GE, to 1x10G, then 25G, 50G, and now 100G per lane), and
>> > it seems the 800GE in your link has 8 lanes of that. This means a
>> > single L4 flow can be 800GE even though it's in reality 8x100G lanes,
>> > as a single packet's bits are being sprayed across all the lanes.
>> Here you talk about *lanes*, and inverse multiplexing of a single frame across *lanes*.
>> Your allusion to PCI-E is well taken, but if I complete the analogy,
>> along with the reference to DWDM, I'm thinking that you are talking
>> about 100 gigabit/s per lambda, with a single frame being inverse
>> multiplexed across lambdas (as lanes).
>> Did I understand this correctly?
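[A toy sketch of the bit-spraying idea being asked about here -- grossly simplified, since a real 802.3 PCS stripes 66-bit blocks with alignment markers rather than raw bytes, but it shows how one frame occupies all lanes at once:]

```python
def spray_across_lanes(frame, n_lanes=8):
    """Transmitter: distribute a single frame's bytes round-robin
    across the lanes, so the frame uses all lanes simultaneously."""
    lanes = [bytearray() for _ in range(n_lanes)]
    for i, byte in enumerate(frame):
        lanes[i % n_lanes].append(byte)
    return lanes

def reassemble(lanes):
    """Receiver: interleave the lanes back into the original frame.
    Byte j of lane k was originally byte (k + j * n_lanes)."""
    n_lanes = len(lanes)
    out = bytearray(sum(len(lane) for lane in lanes))
    for k, lane in enumerate(lanes):
        for j, byte in enumerate(lane):
            out[k + j * n_lanes] = byte
    return bytes(out)
```

[Because striping happens below the MAC, the flow is whole again at the far end -- which is why a single L4 flow can fill 800GE even though it physically rides 8x100G lanes.]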
>> I understand a bit of "because we can".
>> I also understand that 20 x 800GE parallel links is better than 20 x 100GE
>> parallel links across the same long-haul (dark) fiber.
>> But what is the reason for ISPs to want a single L4 flow to use more
>> than 100GE? Given that L3-switching 800GE seems harder than switching
>> 8x flows of already L4-ordered 100GE (flow label!), why pay the extra
>> price here?
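[The flow-label aside can be illustrated: an L3 switch can hash IPv6 (src, dst, flow label) without parsing past the fixed header, keeping each flow ordered on one 100GE member. A hypothetical sketch -- CRC32 again standing in for a real hardware hash:]

```python
import zlib
import ipaddress

def pick_member(src, dst, flow_label, n_members=8):
    """Choose one of n_members parallel 100GE links by hashing only
    IPv6 fixed-header fields: src/dst address plus the 20-bit flow
    label. Every packet of the flow stays on one member, so L4
    ordering is preserved without reading any L4 headers."""
    key = (ipaddress.ip_address(src).packed
           + ipaddress.ip_address(dst).packed
           + flow_label.to_bytes(3, "big"))
    return zlib.crc32(key) % n_members
```

[This is the trade-off Michael is pointing at: per-flow hashing like this keeps ordering cheaply across 8x100G, whereas true 800GE switching must handle the full rate in one pipeline.]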
>> While I can see L2VPN use cases, I can also see that L2VPNs could generate
>> multiple flows themselves if they wanted.
>> ] Never tell me the odds! | ipv6 mesh networks [
>> ] Michael Richardson, Sandelman Software Works | IoT architect [
>> ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [
>> Cerowrt-devel mailing list
>> Cerowrt-devel at lists.bufferbloat.net
Make Music, Not War
CTO, TekLibre, LLC