From: Dave Taht <dave.taht@gmail.com>
To: Frantisek Borsik <frantisek.borsik@gmail.com>
Cc: rjmcmahon <rjmcmahon@rjmcmahon.com>, Rpm <rpm@lists.bufferbloat.net>
Subject: Re: [Rpm] Almost had a dialog going with juniper...
Date: Mon, 20 Feb 2023 10:27:41 -0800 [thread overview]
Message-ID: <CAA93jw6e6MwHCnoMvkYev36JrefU_Y7RBzA-95w7Zsas_etz-w@mail.gmail.com> (raw)
In-Reply-To: <CAJUtOOjOuYw0odSAeBJDqHbL+=8y+uzAPNQ+E8wddFB7e--yVA@mail.gmail.com>
On Mon, Feb 20, 2023 at 9:57 AM Frantisek Borsik via Rpm
<rpm@lists.bufferbloat.net> wrote:
>
>Besides the actual evaporating of those comments that's the saddest thing for me...
The article has improved in multiple respects: it now mentions better
packet management earlier than it did, and the overall tone has
shifted nicely.
Frank... in the future, please criticise the ideas and the framing,
not the person. I was, I'll admit, incensed at seeing the comment
thread disappear, but I then took a day to write a much better article
about the VOQ problem I have observed many times on purchased
circuits, and posted it widely. The comment on that piece - still
preserved, with the link to that blog entry - I now have a nice
screenshot of. :)
Stirring up a little controversy along the way towards the truth is
fine! We are all in this bloat together, and need to engineer our way
out.
I really do hope that what I see in so many VOQ -> XGbit SLA
configurations (where the delay is additive per VOQ) is not as common
(or as under-observed) as I think it is. Perhaps the scripts and blog
I posted will encourage more folk to look at this problem more deeply,
as it certainly seems to exist at many ISP->internet interconnects.
Maybe good solutions, achievable on more of the hardware available
today, will be posted on some support site.
https://blog.cerowrt.org/post/juniper/
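A quick back-of-the-envelope sketch of why deep per-VOQ buffering on a
rate-shaped circuit hurts (the queue depth, link rate, and stage count
below are illustrative assumptions, not measurements from the blog post):

```python
def queue_delay_ms(queue_bytes: int, link_rate_bps: float) -> float:
    """Drain time of a standing queue at a given link rate, in ms."""
    return queue_bytes * 8 / link_rate_bps * 1000

# Assumed example: a 12.5 MB VOQ draining into a 100 Mbit/s SLA-shaped
# circuit takes a full second to empty. Because each VOQ stage buffers
# independently, two such stages in series roughly double the delay.
per_stage = queue_delay_ms(12_500_000, 100e6)
print(f"{per_stage:.0f} ms per stage, {2 * per_stage:.0f} ms across two stages")
```

The point is only that the delays add per stage; the actual buffer depths
on real gear vary widely.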
> All the best,
>
> Frank
>
> Frantisek (Frank) Borsik
>
>
> Signal, Telegram, WhatsApp: +421919416714
>
> iMessage, mobile: +420775230885
>
> Skype: casioa5302ca
>
> frantisek.borsik@gmail.com
>
>
>
> On Mon, Feb 20, 2023 at 1:02 AM rjmcmahon via Rpm <rpm@lists.bufferbloat.net> wrote:
>>
>> Here, look at this. Designed as a WiFi aggregation device.
>>
>> https://www.arista.com/en/products/750-series
>>
>> It supplies 60W PoE and claims support for 384 ports. Oh, the max
>> distance per PoE AP is 100 meters.
>>
>> That's insane as a power source and the 100M distance limit is not
>> viable.
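To put the power claim in rough numbers (my arithmetic, assuming the
worst case of full 60W draw on every claimed port; not figures from the
Arista datasheet):

```python
# Hypothetical worst-case PoE budget for a 384-port, 60W-per-port chassis.
ports = 384
watts_per_port = 60  # claimed per-port PoE budget
total_kw = ports * watts_per_port / 1000
print(f"worst-case PoE draw: {total_kw:.1f} kW")
```

Tens of kilowatts of PoE out of one chassis is what motivates the
"insane as a power source" remark above.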
>>
>> Our engineering needs to improve a lot.
>>
>> Bob
>> > Cisco's first acquisition was Crescendo. They started with twisted
>> > pair and moved to Cat5. At the time, the claim was nobody would rewire
>> > corporate offices. But they did and those engineers always had an AC
>> > power plug nearby so they never really designed for power/bit over
>> > distance.
>> >
>> > Broadcom purchased Epigram. They started with twisted pair and moved
>> > to wireless (CMOS radios.) The engineers found that people really
>> > don't want to be tethered to wall jacks. So they had to consider power
>> > at all aspects of design.
>> >
>> > AP engineers have been a bit of a Frankenstein. They have power from AC
>> > wall jacks, so they blast energy everywhere to sell sq ft. The
>> > enterprise AP guys do silly things like PoE.
>> >
>> > Better is to add CMOS radios everywhere and decrease power,
>> > inter-connected by fiber which is the end game in waveguides. Even the
>> > data centers are now limited to 4-meter cables when using copper and
>> > the energy consumption is through the roof.
>> >
>> > Bob
>> >> On Sun, Feb 19, 2023 at 3:37 PM rjmcmahon <rjmcmahon@rjmcmahon.com>
>> >> wrote:
>> >>>
>> >>> A bit off topic, but the AP/client power asymmetry is another design
>> >>> flaw similar to bloat.
>> >>
>> >> It makes no sense to broadcast at a watt when the device is nearby. I
>> >> think this is a huge, and largely unexplored problem. We tried to
>> >> tackle it in the minstrel-blues project but didn't get far enough, and
>> >> the rate controllers became too proprietary to continue. Some details
>> >> here:
>> >>
>> >> https://github.com/thuehn/Minstrel-Blues
>> >>
>> >>>
>> >>> Not sure why nobody is talking about that.
>> >>
>> >> Understanding of the inverse square law is rare. The work we did at
>> >> Google Fiber clearly showed the Chromecast stick overdriving nearby
>> >> APs.
>> >>
>> >> https://apenwarr.ca/diary/wifi-data-apenwarr-201602.pdf
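The inverse-square point can be sketched with the standard free-space
path loss formula (the frequency and distances here are illustrative
assumptions, not numbers from the apenwarr data):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Every doubling of distance costs ~6 dB, so a device 1 m from the AP
# needs roughly 24 dB less transmit power than the same link at 16 m.
extra_loss = fspl_db(16, 2.4e9) - fspl_db(1, 2.4e9)
print(f"extra path loss at 16 m vs 1 m: {extra_loss:.1f} dB")
```

Which is why broadcasting at a full watt to a device on the same desk
is mostly wasted (and interfering) energy.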
>> >>
>> >>
>> >>> https://www.youtube.com/watch?v=Ey5jVUXSJn4
>> >>
>> >> Haha.
>> >>
>> >>>
>> >>> Bob
>> >>> > Their post isn't really about bloat. It's about the discrepancy in i/o
>> >>> > bw of memory off-chip and on-chip.
>> >>> >
>> >>> > My opinion is that the off-chip memory or hybrid approach is a design
>> >>> > flaw for a serious router mfg. The flaw is thinking the links' rates
>> >>> > and the chip memory i/o rates aren't connected, when obviously they
>> >>> > are. Just go as fast as possible and let some other device buffer, e.g.
>> >>> > the end host or the server in the cloud.
>> >>> >
>> >>> > Bob
>> >>> >> https://blog.cerowrt.org/post/juniper/
>> >>> >>
>> >>> >> But they deleted the comment thread. It is interesting, I suppose, to
>> >>> >> see how they frame the buffering problems to themselves in their post:
>> >>> >> https://www.linkedin.com/pulse/sizing-router-buffers-small-new-big-sharada-yeluri/
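The buffer-sizing debate that LinkedIn piece engages with builds on the
well-known "small buffer" rule of thumb (B = C * RTT / sqrt(N), from the
Stanford buffer-sizing work); a sketch with illustrative numbers:

```python
import math

def buffer_bytes(link_rate_bps: float, rtt_s: float, num_flows: int) -> float:
    """Small-buffer rule of thumb: B = C * RTT / sqrt(N), in bytes."""
    return link_rate_bps * rtt_s / 8 / math.sqrt(num_flows)

# Illustrative example: a 100 Gbit/s link with a 100 ms RTT and 10,000
# concurrent flows needs ~12.5 MB of buffer, rather than the full
# 1.25 GB bandwidth-delay product a one-flow worst case would imply.
b = buffer_bytes(100e9, 0.1, 10_000)
print(f"suggested buffer: {b / 1e6:.1f} MB")
```

The numbers are only a sketch; the on-chip vs off-chip memory argument
above is precisely about whether routers can afford even this much.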
>> >>> > _______________________________________________
>> >>> > Rpm mailing list
>> >>> > Rpm@lists.bufferbloat.net
>> >>> > https://lists.bufferbloat.net/listinfo/rpm
>
--
A pithy note on VOQs vs SQM: https://blog.cerowrt.org/post/juniper/
Dave Täht CEO, TekLibre, LLC