[Cerowrt-devel] 800gige

Dave Taht dave.taht at gmail.com
Wed Apr 15 17:35:10 EDT 2020


I've always kind of wanted a guesstimate and cost breakdown
(politics/fiber cost/trenching) of how much it would cost to run, oh,
16km of quality 100Gbit fiber from Los Gatos to me. I know, month to
month that would kind of cost a lot to fill....
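
(A rough back-of-envelope, hedged heavily: assume, hypothetically,
$50-$150 per meter for rural underground construction including
fiber and splicing; real quotes vary wildly with terrain, permits,
and make-ready, and transceivers and monthly transit are extra:)

    # toy estimate; the $/meter range is an assumption, not a quote
    distance_m = 16_000
    low, high = 50, 150  # assumed $/m for trench + fiber + splicing
    print(f"${distance_m * low:,} - ${distance_m * high:,}")
    # -> $800,000 - $2,400,000 before politics and recurring costs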

I once costed out what it would take to trench the whole community,
and instead of that I've been patiently awaiting my first Starlink
terminals....

https://www.google.com/maps/place/20600+Aldercroft+Heights+Rd,+Los+Gatos,+CA+95033/@37.1701322,-121.9806674,17z/data=!3m1!4b1!4m5!3m4!1s0x808e37ced60da4fd:0x189086a00c73ad37!8m2!3d37.1701322!4d-121.9784787

On Wed, Apr 15, 2020 at 2:09 PM Joel Wirāmu Pauling <joel at aenertia.net> wrote:
>
> Another neat thing about 400 and 800GE is that you can get MPO optics that allow splitting a single 4x100 or 8x100 into individual 100G feeds. Good for port density and/or for adding capacity to processing/edge appliances.
>
> Now that there are decent ER optics for 100G, you can do 40-70km runs of each 100G link without additional active electronics on the path or going to an optical transport route.
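>
> (A toy sketch of the breakout arithmetic, purely illustrative; the
> port naming is made up, not any vendor's CLI:)
>
>     # one 800G MPO port broken out into eight independent 100G feeds
>     parent, lanes = "port1", 8  # hypothetical port, 8x100G lanes
>     feeds = [f"{parent}.{i}" for i in range(1, lanes + 1)]
>     # -> 8 logical 100G interfaces riding one MPO cable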
>
> On Thu, 16 Apr 2020 at 08:57, Michael Richardson <mcr at sandelman.ca> wrote:
>>
>>
>> Mikael Abrahamsson via Cerowrt-devel wrote:
>>     > Backbone ISPs today are built with lots of parallel links (20x100GE for
>>     > instance) and then we do L4 hashing for flows across these. This means
>>
>> got it. inverse multiplexing of flows across *links*
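>>
>> (For the archives, a minimal sketch of that per-flow hashing, assuming
>> a simple CRC over the 5-tuple; real ASICs use their own hash functions
>> and seeds to avoid polarization:)
>>
>>     import zlib
>>
>>     def pick_link(src, dst, proto, sport, dport, n_links=20):
>>         # hash the 5-tuple so every packet of a flow rides the same
>>         # member link, preserving per-flow packet ordering
>>         key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
>>         return zlib.crc32(key) % n_links
>>
>>     pick_link("192.0.2.1", "198.51.100.7", 6, 49152, 443)
>>     # -> a stable link index in [0, 20) for this flow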
>>
>>     > We're now going for 100 gigabit/s per lane (lane counts and speeds went
>>     > from 4x2.5G for 10GE to 1x10G, then lane speeds of 10G, 25G, 50G, and now
>>     > we're at 100G per lane), and it seems the 800GE in your link has 8 lanes of
>>     > that. This means a single L4 flow can be 800GE even though in reality it's
>>     > 8x100G lanes, as a single packet's bits are sprayed across all the lanes.
>>
>> Here you talk about *lanes*, and inverse multiplexing of a single frame across *lanes*.
>> Your allusion to PCI-E is well taken, but if I complete the analogy, along with
>> the reference to DWDM, I think you are talking about 100 gigabit/s per lambda,
>> with a single frame being inverse multiplexed across lambdas (as lanes).
>>
>> Did I understand this correctly?
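>>
>> (Either way, the first-order model is round-robin distribution of
>> encoded blocks across the lanes; a toy sketch ignoring FEC
>> interleaving and alignment markers, just to show why one packet's
>> bits land on every lane:)
>>
>>     def stripe(blocks, n_lanes=8):
>>         # deal encoded blocks round-robin across the lanes; a packet
>>         # spanning many blocks is smeared over all lanes at once
>>         lanes = [[] for _ in range(n_lanes)]
>>         for i, block in enumerate(blocks):
>>             lanes[i % n_lanes].append(block)
>>         return lanes
>>
>>     stripe(list(range(32)))  # blocks 0, 8, 16, 24 land on lane 0, etc.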
>>
>> I understand a bit of "because we can".
>> I also understand that 20 x 800GE parallel links are better than 20 x 100GE
>> parallel links across the same long-haul (dark) fiber.
>>
>> But what is the reason for ISPs to want a single L4 flow to be able to
>> use more than 100GE? Given that L3 switching 800GE seems harder than
>> switching 8 flows of already L4-ordered 100GE (flow label!), why pay
>> the extra price here?
>>
>> While I can see L2VPN use cases, I can also see that L2VPNs could generate
>> multiple flows themselves if they wanted.
>>
>> --
>> ]               Never tell me the odds!                 | ipv6 mesh networks [
>> ]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
>> ]     mcr at sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [
>>



-- 
Make Music, Not War

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-435-0729

