Development issues regarding the cerowrt test router project
From: Michael Richardson <mcr@sandelman.ca>
To: Mikael Abrahamsson <swmike@swm.pp.se>,
	cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Cerowrt-devel] 800gige
Date: Wed, 15 Apr 2020 16:57:40 -0400	[thread overview]
Message-ID: <1177.1586984260@localhost> (raw)
In-Reply-To: <mailman.1077.1586972355.1241.cerowrt-devel@lists.bufferbloat.net>



Mikael Abrahamsson via Cerowrt-devel wrote:
    > Backbone ISPs today are built with lots of parallel links (20x100GE for
    > instance) and then we do L4 hashing for flows across these. This means

got it. inverse multiplexing of flows across *links*
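
That per-flow hashing can be sketched roughly as follows: the L4 5-tuple is
hashed to pick one of the N parallel member links, so all packets of one flow
stay in order on one link (the field names and the CRC32 hash here are
illustrative, not any vendor's actual implementation):

```python
# ECMP-style L4 hashing sketch: one flow -> one member link.
import zlib

def pick_link(src_ip, dst_ip, proto, src_port, dst_port, n_links=20):
    # Hash the 5-tuple; the same flow always maps to the same link,
    # so per-flow packet ordering is preserved.
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_links

# All packets of this flow land on the same 100GE member link:
link = pick_link("2001:db8::1", "2001:db8::2", 6, 443, 51515)
```

The consequence is the one being discussed: no single flow can exceed the
speed of one member link.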

    > We're now going for 100 gigabit/s per lane (it's been going up from 4x2.5G
    > for 10GE to 1x10G, then we went for lane speeds of 10G, 25G, 50G and now
    > we're at 100G per lane), and it seems the 800GE in your link has 8 lanes of
    > that. This means a single L4 flow can be 800GE even though it's in reality
    > 8x100G lanes, as a single packet bits are being sprayed across all the
    > lanes.

Here you talk about *lanes*, and inverse multiplexing of a single frame across *lanes*.
Your allusion to PCIe is well taken; completing the analogy, together with the
reference to DWDM, I take it you mean 100 gigabit/s per lambda, with a single
frame being inverse multiplexed across lambdas (as lanes).

Did I understand this correctly?
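
The per-lane striping described in the quoted paragraph can be sketched very
loosely like this (the real 800GE PCS distributes 66-bit encoded blocks with
alignment markers; this byte-level round-robin is only a toy illustration of
why one flow can fill the full aggregate rate):

```python
# Toy sketch of lane striping: one frame's bytes sprayed across all lanes.

def stripe(frame: bytes, n_lanes: int = 8):
    # Lane i receives every n_lanes-th byte of the frame.
    lanes = [bytearray() for _ in range(n_lanes)]
    for i, b in enumerate(frame):
        lanes[i % n_lanes].append(b)
    return [bytes(lane) for lane in lanes]

def destripe(lanes):
    # Reassemble the frame by reading the lanes in round-robin order.
    n = len(lanes)
    total = sum(len(lane) for lane in lanes)
    return bytes(lanes[i % n][i // n] for i in range(total))
```

Because the distribution happens below the MAC, ordering is restored at the
far end and the flow never sees the individual lanes.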

I understand a bit of "because we can".
I also understand that 20 x 800GE parallel links is better than 20 x 100GE
parallel links across the same long-haul (dark) fiber.

But what is the reason for ISPs to want a single L4 flow to use more than
100GE?  Given that L3 switching 800GE seems harder than switching 8 flows of
already-L4-ordered 100GE (flow label!), why pay the extra price here?

While I can see L2VPN use cases, I can also see that L2VPNs could generate
multiple flows themselves if they wanted.
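
One way an L2VPN endpoint could generate that flow diversity itself is via the
IPv6 flow label, which gives ECMP hash entropy even when the L4 ports are
hidden inside the encapsulation (a hypothetical sketch; the function and hash
are illustrative, but flow-label-based ECMP hashing is standard practice):

```python
# Sketch: a tunnel endpoint assigns several flow labels to split one
# encapsulated logical flow into many independently hashable sub-flows.
import zlib

def ecmp_index(src_ip, dst_ip, flow_label, n_links=8):
    # Hash only fields visible on the outer IPv6 header.
    key = f"{src_ip}|{dst_ip}|{flow_label:05x}".encode()
    return zlib.crc32(key) % n_links

# Same endpoints, different flow labels -> sub-flows spread across links.
indices = [ecmp_index("2001:db8::a", "2001:db8::b", fl) for fl in range(16)]
```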

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [




Thread overview: 6+ messages
2020-04-11 23:08 Dave Taht
2020-04-12 16:15 ` David P. Reed
2020-04-15 17:39 ` Mikael Abrahamsson
     [not found] ` <mailman.1077.1586972355.1241.cerowrt-devel@lists.bufferbloat.net>
2020-04-15 20:57   ` Michael Richardson [this message]
2020-04-15 21:08     ` Joel Wirāmu Pauling
2020-04-15 21:35       ` Dave Taht
