Development issues regarding the cerowrt test router project
From: "Joel Wirāmu Pauling" <joel@aenertia.net>
To: Michael Richardson <mcr@sandelman.ca>
Cc: Mikael Abrahamsson <swmike@swm.pp.se>,
	cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Cerowrt-devel] 800gige
Date: Thu, 16 Apr 2020 09:08:45 +1200	[thread overview]
Message-ID: <CAKiAkGSGp6EkTrDEvUGwF1RXTm09-NZUxBs5cNJ025jh33o5ug@mail.gmail.com> (raw)
In-Reply-To: <1177.1586984260@localhost>


Another neat thing about 400GE and 800GE is that you can get MPO optics that
allow splitting a single 4x100 or 8x100 into individual 100G feeds. That's
good for port density and/or for adding capacity to processing/edge
appliances.

And now that there are decent ER optics for 100G, you can do 40-70 km runs
of each 100G link without additional active electronics on the path, and
without resorting to an optical transport route.

On Thu, 16 Apr 2020 at 08:57, Michael Richardson <mcr@sandelman.ca> wrote:

>
> Mikael Abrahamsson via Cerowrt-devel wrote:
>     > Backbone ISPs today are built with lots of parallel links (20x100GE
>     > for instance) and then we do L4 hashing for flows across these. This
>     > means
>
> got it. inverse multiplexing of flows across *links*
>
>     > We're now going for 100 gigabit/s per lane (it's been going up from
>     > 4x2.5G for 10GE to 1x10G, then we went for lane speeds of 10G, 25G,
>     > 50G, and now we're at 100G per lane), and it seems the 800GE in your
>     > link has 8 lanes of that. This means a single L4 flow can be 800GE
>     > even though it's in reality 8x100G lanes, as a single packet's bits
>     > are being sprayed across all the lanes.
>
> Here you talk about *lanes*, and inverse multiplexing of a single frame
> across *lanes*.
> Your allusion to PCI-E is well taken, but if I am completing the analogy,
> and the reference to DWDM, I'm thinking that you are talking about
> 100 gigabit/s per lambda, with a single frame being inverse multiplexed
> across lambdas (as lanes).
>
> Did I understand this correctly?
>
> I understand a bit of "because we can".
> I also understand that 20 x 800GE parallel links is better than 20 x 100GE
> parallel links across the same long-haul (dark) fiber.
>
> But, what is the reason among ISPs to desire enabling a single L4 flow to
> use more than 100GE? Given that it seems that being able to L3 switch
> 800GE is harder than switching 8x flows of already L4-ordered 100GE
> (Flowlabel!), why pay the extra price here?
>
> While I can see L2VPN use cases, I can also see that L2VPNs could generate
> multiple flows themselves if they wanted.
>
> --
> ]               Never tell me the odds!                 | ipv6 mesh networks [
> ]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
> ]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [
>
>
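The trade-off Michael asks about comes down to how per-flow L4 hashing works: every packet of a flow carries the same 5-tuple, so it always hashes to the same member link and stays in order, but a single flow can never exceed one member's capacity. A minimal sketch of that selection logic, purely illustrative (real routers do this in hardware, typically with CRC or XOR folding rather than SHA-256, and the field choice here is an assumption):

```python
import hashlib

def select_link(src_ip: str, dst_ip: str, proto: int,
                src_port: int, dst_port: int, n_links: int) -> int:
    """Hash the L4 5-tuple to pick one member of a parallel link bundle.

    All packets of a flow share a 5-tuple, so they always land on the
    same member link (preserving ordering) -- but one flow can never use
    more than a single member's capacity, e.g. 100G of a 20x100GE bundle.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % n_links

# The same flow always maps to the same member link:
a = select_link("192.0.2.1", "198.51.100.7", 6, 49152, 443, 20)
b = select_link("192.0.2.1", "198.51.100.7", 6, 49152, 443, 20)
assert a == b

# By contrast, a multi-lane 800GE PHY distributes each frame's bits
# across all 8 lanes below the packet level, so no hashing decision is
# made and a single flow can fill the full 800G.
```

This is why lane-level spraying and link-level hashing behave so differently for a single elephant flow: the former is invisible to L3/L4, the latter pins the flow to one member.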


Thread overview: 6+ messages
2020-04-11 23:08 Dave Taht
2020-04-12 16:15 ` David P. Reed
2020-04-15 17:39 ` Mikael Abrahamsson
     [not found] ` <mailman.1077.1586972355.1241.cerowrt-devel@lists.bufferbloat.net>
2020-04-15 20:57   ` Michael Richardson
2020-04-15 21:08     ` Joel Wirāmu Pauling [this message]
2020-04-15 21:35       ` Dave Taht
