From: Dave Taht <dave.taht@gmail.com>
To: rjmcmahon <rjmcmahon@rjmcmahon.com>
Cc: Rpm <rpm@lists.bufferbloat.net>
Subject: Re: [Rpm] Almost had a dialog going with juniper...
Date: Sun, 19 Feb 2023 15:45:08 -0800
Message-ID: <CAA93jw4mUwvqu4daxnGEm9pEDw_HV4hfcsNZzM=ks-Fb93gYsg@mail.gmail.com>
In-Reply-To: <4012c0f33597bb20b5034957b5c8e1a2@rjmcmahon.com>
On Sun, Feb 19, 2023 at 3:44 PM rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>
> It's a conflict in design goals. They're trying to sell to customers
> who don't want data loss in the data centers, for things like RDMA
> messaging, while claiming their equipment is universal to all use
> cases. It's really not.
>
> We're TCP end-to-end guys, and packet loss can be a very good signal.
The test I did in that blog series was with RFC3168 ECN. No loss on the link.
> Bob
> > On Sun, Feb 19, 2023 at 3:34 PM rjmcmahon <rjmcmahon@rjmcmahon.com>
> > wrote:
> >>
> >> Their post isn't really about bloat. It's about the discrepancy in
> >> the I/O bandwidth of off-chip versus on-chip memory.
> >
> > The original comment thread was about how the flow-queuing aspect of
> > fq_codel-derived algorithms would leverage on-chip resources better,
> > and about how VOQs misbehave today.
> >
> >
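(A toy sketch of the flow-queuing idea, for anyone following along at
home: hash each flow onto its own small queue so a thin flow never
waits behind a fat flow's backlog. Names and sizes are illustrative
only -- this is not the actual fq_codel code, and the real scheduler
also round-robins across the busy queues, which this sketch skips.)

  /* Toy flow-queuing sketch: hash a flow id onto one of NQUEUES
   * per-flow queues. Illustrative only -- not the real fq_codel
   * implementation. */
  #include <stdint.h>
  #include <stdio.h>

  #define NQUEUES 1024          /* fq_codel's default flow count */

  struct flow_queue {
      uint32_t backlog;         /* bytes queued for this flow */
  };

  static struct flow_queue queues[NQUEUES];

  /* Map a flow hash to a queue index (the real code uses a salted
   * hash; a plain modulo is enough for the sketch). */
  static unsigned int classify(uint32_t flow_hash)
  {
      return flow_hash % NQUEUES;
  }

  int main(void)
  {
      /* Two packets from a fat flow, one from a thin flow. */
      queues[classify(0xdeadbeef)].backlog += 1500;
      queues[classify(0xdeadbeef)].backlog += 1500;
      queues[classify(0x12345678)].backlog += 64;

      /* The thin flow sits in its own queue, so it never waits
       * behind the fat flow's backlog -- the isolation property
       * that lets small on-chip queues go a long way. */
      printf("fat flow queue:  %u bytes\n",
             queues[classify(0xdeadbeef)].backlog);
      printf("thin flow queue: %u bytes\n",
             queues[classify(0x12345678)].backlog);
      return 0;
  }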
> >>
> >> My opinion is that the off-chip memory or hybrid approach is a design
> >> flaw for a serious router mfg.
> >
> > I concur. I think we need smarter buffering, not more buffering.
> > Admittedly, the overhead for 10,000 fq_codel'd virtual queues (e.g.
> > 1 million total queue states) without tuning is presently 64MB that
> > needs to live in high-speed memory for that next indirect lookup,
> > but it is possible to trim that down quite a lot.
> >
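(Back-of-envelope on that 64MB figure, using only the numbers above:
10,000 queues x ~100 flow states each is ~1 million states, at
roughly 64 bytes apiece. The per-queue flow count and per-state size
are assumptions backed out of the quoted totals, not measured kernel
struct sizes.)

  /* Back-of-envelope for the ~64MB figure. The 100 flows per queue
   * and 64 bytes per flow state are assumptions derived from the
   * quoted totals, not measured kernel numbers. */
  #include <stdio.h>

  int main(void)
  {
      const unsigned long nqueues        = 10000; /* virtual queues       */
      const unsigned long flows_per_q    = 100;   /* assumed states/queue */
      const unsigned long bytes_per_flow = 64;    /* assumed state size   */

      unsigned long states = nqueues * flows_per_q;   /* ~1 million  */
      unsigned long bytes  = states * bytes_per_flow; /* ~64,000,000 */

      printf("%lu queue states -> ~%lu MB of fast memory\n",
             states, bytes / 1000000);
      return 0;
  }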
> >> The flaw is thinking the links' rates and the chip memory i/o rates
> >> aren't connected when obviously they are. Just go as fast as
> >> possible and let some other device buffer, e.g. the end host or the
> >> server in the cloud.
> >
> > Juniper will hold onto their "big buffers are
> > profitable^H^H^H^H^H^H^H^Hneeded" strategy until the bitter end.
> >
> >>
> >> Bob
> >> > https://blog.cerowrt.org/post/juniper/
> >> >
> >> > But they deleted the comment thread. It is interesting, I suppose, to
> >> > see how they frame the buffering problems to themselves in their post:
> >> > https://www.linkedin.com/pulse/sizing-router-buffers-small-new-big-sharada-yeluri/
--
Surveillance Capitalism? Or DIY? Choose:
https://blog.cerowrt.org/post/an_upgrade_in_place/
Dave Täht CEO, TekLibre, LLC