* [Bloat] recommended PC config for network testing using Ubuntu
@ 2015-02-23 15:14 Bill Ver Steeg (versteb)
2015-02-23 16:51 ` Jonathan Morton
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Bill Ver Steeg (versteb) @ 2015-02-23 15:14 UTC (permalink / raw)
To: Bloat
I have been running network tests for several years on a mix of older Cisco UCS servers and old HP desktops.
Kernel releases of the 3.2.0.18 vintage run fine on these systems, but when I try to move forward to 3.19.0 I bump into compatibility problems. Rather than spend a bunch of time resolving my configuration issues, I thought I would ask this illustrious group what PC-based platforms they are using to run the latest AQM code. Hopefully I can just pick up a new server or two and be back in business.
Thanks in advance.
Bvs
Bill Ver Steeg
DISTINGUISHED ENGINEER
versteb@cisco.com
* Re: [Bloat] recommended PC config for network testing using Ubuntu
2015-02-23 15:14 [Bloat] recommended PC config for network testing using Ubuntu Bill Ver Steeg (versteb)
@ 2015-02-23 16:51 ` Jonathan Morton
2015-02-23 22:18 ` Bill Ver Steeg (versteb)
2015-02-23 18:31 ` Dave Taht
2015-02-23 22:03 ` Michael Richardson
2 siblings, 1 reply; 6+ messages in thread
From: Jonathan Morton @ 2015-02-23 16:51 UTC (permalink / raw)
To: Bill Ver Steeg (versteb); +Cc: Bloat
I use quite old hardware myself - mainly a Pentium MMX and a PowerBook G4.
They don't run Ubuntu, but Gentoo. Generally older hardware gets better
supported over time than brand new, and I like to keep a weather eye out
for performance problems this way.
Compatibility problems may result from the way you've configured the new
kernel, rather than the difference in versions in itself. The big,
organised distros may be pickier about that than smaller ones, but that
isn't hardware specific.
- Jonathan Morton
* Re: [Bloat] recommended PC config for network testing using Ubuntu
2015-02-23 15:14 [Bloat] recommended PC config for network testing using Ubuntu Bill Ver Steeg (versteb)
2015-02-23 16:51 ` Jonathan Morton
@ 2015-02-23 18:31 ` Dave Taht
2015-02-23 19:03 ` Dave Taht
2015-02-23 22:03 ` Michael Richardson
2 siblings, 1 reply; 6+ messages in thread
From: Dave Taht @ 2015-02-23 18:31 UTC (permalink / raw)
To: Bill Ver Steeg (versteb); +Cc: Bloat
I do confess to being disappointed that the Cisco ethernet drivers in the
Linux kernel have not been updated to use BQL. It takes a couple of hours to
code up the 2-6 lines of additional code needed, if you have the hardware.
The core thing to look for is any hardware whose ethernet driver has BQL
support.
http://www.bufferbloat.net/projects/bloat/wiki/BQL_enabled_drivers
(this list is a bit behind, I think 3 or more new BQL drivers got added in
the last two kernel revs, including the TI 10gig chip and I forget the
other two)
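For anyone wondering what those "2-6 lines" actually look like, here is a
minimal sketch of the BQL call sites from <linux/netdevice.h>. The my_*()
names and the pkts/bytes bookkeeping are hypothetical placeholders for
illustration, not code from any real Cisco (or other) driver:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* 1. In ndo_start_xmit(), once the skb has been posted to the TX ring,
 *    tell BQL how many bytes were handed to the hardware. */
static void my_account_xmit(struct netdev_queue *txq, struct sk_buff *skb)
{
        netdev_tx_sent_queue(txq, skb->len);
}

/* 2. In the TX completion path, after reclaiming descriptors, report how
 *    many packets/bytes finished so BQL can grow or shrink its byte limit. */
static void my_account_completions(struct netdev_queue *txq,
                                   unsigned int pkts, unsigned int bytes)
{
        netdev_tx_completed_queue(txq, pkts, bytes);
}

/* 3. Whenever the TX ring is reset (ifdown/ifup, queue reinit), zero the
 *    accounting so BQL starts fresh. */
static void my_reset_accounting(struct netdev_queue *txq)
{
        netdev_tx_reset_queue(txq);
}

Single-queue drivers can use the netdev_sent_queue() /
netdev_completed_queue() / netdev_reset_queue() wrappers, which take the
net_device directly and operate on queue 0.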
For rack mounts, bufferbloat.net presently mostly uses a bunch of donated
cast-off hardware, held together with chewing gum and baling wire. More
donations are gladly accepted.
Back in the day I used to buy a lot of gear from:
http://www.penguincomputing.com/products/rackmount-servers/relion-servers/
And then there are HP, Dell, etc., which do Linux boxes fairly well in the
data-center space and ship with modern kernels.
But these days I mostly build my own, usually around an Asus motherboard.
There is a ton of work around the Open Compute stuff:
http://www.opencompute.org/projects/server/ but most of it is so much
higher-end than is needed to forward packets at the rates I care about - IF
I wanted to do 160Gbps or more I would be fiddling with these 12- and
16-core Xeon boxes, but I am not....
In trying to stay low-cost and fan-free, I went through hell trying out the
Supermicro Rangeley platform. The manufacturer I tried was under the
delusion that an 8-core Rangeley did not need a fan, nobody at the time was
making a rackmount case with the right sort of power supply or port
breakouts for it, and I never found a way to reliably boot from and use a
USB stick for more than a few days, so I ended up going back to a normal
SATA drive and going through a half dozen power supplies and cases along the
way. I do hope someone is now packaging that sort of box up properly, in a
form that can be used. If anyone wants 4 useless rackmount cases, I never
got around to shipping them back.
Once I got a Rangeley running, it turned out to be *excellent* at forwarding
packets through aqm/fq algorithms at GigE line rate (due to the Ivy Bridge
DMA-to-cache architecture) but not very good at driving tests directly (due
to the still quite weak CPU cores). It does, however, do effective software
rate control plus an aqm/fq algorithm at GigE speeds. The ethernet chips in
it are also well supported by BQL and the new xmit_more stuff. A MAJOR
testing issue is that the ethernet driver has 8 hardware queues, and unless
you are careful/aware of that, or disable them, you run into birthday
problems everywhere and results that don't make sense. (In the general case,
running fq_codel without the 8 hardware queues proves faster than with
them.)
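As an aside, the xmit_more hint mentioned above is just a per-skb flag
(added around kernel 3.18, later replaced by netdev_xmit_more()) telling the
driver that more packets are queued immediately behind this one, so it can
batch the expensive MMIO doorbell write. A sketch of the usual pattern, with
my_tx_ring and my_ring_for() as invented placeholders rather than any real
driver's code:

#include <linux/io.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct my_tx_ring {                     /* hypothetical ring state */
        unsigned int queue_index;
        u32 next_to_use;
        void __iomem *tail;             /* MMIO doorbell register */
};

/* Placeholder: however the driver maps an skb to its TX ring. */
static struct my_tx_ring *my_ring_for(struct net_device *dev,
                                      struct sk_buff *skb);

static netdev_tx_t my_ndo_start_xmit(struct sk_buff *skb,
                                     struct net_device *dev)
{
        struct my_tx_ring *ring = my_ring_for(dev, skb);
        struct netdev_queue *txq = netdev_get_tx_queue(dev, ring->queue_index);

        /* ... map the skb and fill in TX descriptors here ... */

        netdev_tx_sent_queue(txq, skb->len);    /* BQL accounting, as above */

        /* Skip the doorbell when more packets are right behind this one,
         * unless the stack has stopped the queue anyway. */
        if (!skb->xmit_more || netif_xmit_stopped(txq))
                writel(ring->next_to_use, ring->tail);

        return NETDEV_TX_OK;
}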
It is very nice to have 4 onboard ethernet ports also, which makes testing
a variety of scenarios (like 2 ports into one) a snap. I never got around
to trying to make it do 10GigE although the cards for that are lying around
here somewhere.
I have used Intel i3-based NUCs as my primary test drivers for the past
year. Small, silent, fast. Hardware offloads are needed to drive them to
GigE, so they are a little wrong for directly evaluating AQM/fq - but good
if you want to make sure those algorithms work with TSO/GSO/GRO correctly.
I218-V ethernet. They can't route unless you want to observe the horrors in
present-day USB-to-ethernet adaptors.
For laptops, I am generally sticking with the older Lenovo ones, off-lease
and off Craigslist, that had a decent keyboard, unlike the chiclet stuff
they are now shipping. I swore by the old T60s and T61s and have T400 and
T440 ones now. Lenovo gear, even the chiclet models, generally still has
very good Linux support (and is probably the most common laptop brand you
will see in the Linux community). e1000e ethernet, with good support for BQL
and xmit_more.
The Chromebooks from HP, with the ath9k in them, can actually be turned into
a decent test platform once you replace the OS on them. I am going to go get
another one today (my last two were stolen), as we are making really good
progress on improving wifi behavior of late and I need more client gear to
test with. They don't have ethernet.
As for a desktop, I am terribly pleased with snapon - no crashes *ever* in
3+ years of running, and it cost about $2.2k to build. If I could remember
the parts in it, I would build another one just like it - 64GB of RAM, a
6-core i7, flash disk, liquid cooling and all.
As for a home router, on OpenWrt Chaos Calmer everything that is ath9k- and
ar71xx-based is reasonable at rates below 60Mbit for soft rate shaping, and
I have had good results of late without it at 500+Mbit hard (I don't see AQM
engaging, but I do see FQ working). Tons of other chipsets; YMMV.
My topmost candidate for a cheap home router to work with going forward is
the TP-Link Archer C7 v2, although that might change if I can find the
DIR-650L model B somewhere. The Archer has a pathetic CPU but both ath9k and
ath10k chips, and my focus is more on fixing wifi than fixing ethernet these
days. The Archer's ath10k was AWFUL when I last tried it, but I think that
can now be improved:
http://snapon.lab.bufferbloat.net/~d/archer/overnight/normality2.png
Still not a lot of hope for the topmost Netgear (X4), Asus, and D-Link
models as yet. IMHO QCA is doing the best job of getting their latest CPU
stuff into the Linux mainline, but all these architectures use proprietary
hardware offloads that are impossible to get fingers into and are not
mainlineable. The best we can hope for is that their chipset vendors are
taking the openly available algorithms and burning them into their
proprietary firmware. I have certainly talked to them enough about it (and
am not in a position to say who is doing what, sorry - but although I am
encouraged by the progress behind the scenes, it is taking way too long for
these guys to implement an algorithm Eric Dumazet wrote and mainlined in a
single Saturday afternoon). Proprietary firmware is not particularly helpful
for researchers, I know...
I do wish very much I could find a low-cost platform that could forward
without offloads at line rate with aqm/fq AND do software rate limiting plus
aqm/fq at nearly GigE on both inbound and outbound, but so far I have not
found one. The AMD cougar products can't crack 600Mbits, you can almost but
not quite get there on an Intel i3, and I have not surveyed the Rangeley
market in the 6 months since I got the last one to work.
The older Atoms (commonly available with two ethernet ports) *sucked*:
http://snapon.lab.bufferbloat.net/~d/Native_GigE_Atoms_NoOffloads-5873/
I do have several 64-bit ARMv8 products under evaluation; they barely boot.
The X-Gene I can probably talk about now, but I haven't tried to boot it in
a while. The new TI stuff was looking good; I haven't got around to it. The
current crop of cheap boards for ARM A8, A9, etc., equipped with ethernet
have really lousy drivers in them. But ooh! 4 cores! Shiny!
If you would like a recommendation for a specific rackmount in particular: I
need to go get a couple for an upcoming round of testing, and as you can
tell, I am just as confused about what to buy as anyone else. I am *very*
sure about which ethernet chips are worth buying, pretty sure that I can get
away with Rangeley for GigE routing, dubious about ARM, angry at offloads -
and that is about it.
On Mon, Feb 23, 2015 at 7:14 AM, Bill Ver Steeg (versteb) <versteb@cisco.com> wrote:
> I have been running network tests for several years on a mix of older
> Cisco UCS servers and old HP desktops.
>
> It seems that releases of the vintage of 3.2.0.18 run fine on these
> systems, but when I try to move forward to 3.19.0 I seem to bump into
> compatibility problems. Rather than spend a bunch of time resolving my
> configuration issues, I thought I would ask this illustrious group what
> PC-based platforms they are using to run the latest AQM code. Hopefully I
> can just pick up a new server or two and be back in business.
>
> Thanks in advance.
>
> Bvs
--
Dave Täht
http://www.bufferbloat.net/projects/bloat/wiki/Upcoming_Talks
* Re: [Bloat] recommended PC config for network testing using Ubuntu
2015-02-23 18:31 ` Dave Taht
@ 2015-02-23 19:03 ` Dave Taht
0 siblings, 0 replies; 6+ messages in thread
From: Dave Taht @ 2015-02-23 19:03 UTC (permalink / raw)
To: Bill Ver Steeg (versteb); +Cc: Bloat
I was a little unclear - I use the NUCs as my primary Linux desktop in 3
different locations, and they are also test clients on the networks I am
testing. They *just work* with all the kernels I build, and the video driver
in particular is excellent for day-to-day use, even with multiple hi-res
displays. You can bolt them onto the back of a monitor that supports that,
getting rid of unsightly cables. I am always finding a need for a 5th USB
port, but that is me....
The shorter ones are good if all you need is a half-height wifi card and an
mSATA disk; the taller ones, with a SATA SSD, are way faster and let you use
a full-length wifi card.
I do sometimes regret not having gotten the i5 or better ones, but I offload
major compilations into Google's cloud and snapon - or just toss them into
the background - so the speed rarely bothers me on my day-to-day workloads.
If I were a gamer I would consider something else.
I run Ubuntu GNOME (various versions) rather than normal Ubuntu, as I hate
their present GUI direction.
While I am dumping my stack about hardware, I am a huge fan of
buckling-spring keyboards:
http://www.pckeyboard.com/page/FeaturedProducts/UB40PGA
They have the same little nub that the Lenovo laptops have for a mouse
pointer, they are loud enough to annoy co-workers across the room, and they
let me type accurately at insane speeds and flood people's mailboxes with
email. :)
The vast majority of the other working parts of my lab are WNDR3800s,
Ubiquiti PicoStations, and Ubiquiti NanoStations. The BeagleBones mostly did
not work out, nor the Raspberry Pi - too easy to roach the filesystems. I
have a ton of other embedded hardware that worked out even worse, but to
talk about it requires editing out a lot of epithets. Avoid the Globalscale
products like the plague they are, in particular - they have ancient kernels
and run way too hot. As already noted, most of the 32-bit ARM chips have
lousy ethernet - and mostly lousy video drivers. The arm64 stuff is starting
to work, but I haven't touched it in a while.
People keep asking me to try out the Wandboard; I haven't.
I have a ZedBoard. No BQL, but easy to add. There is really promising work
going on around the Zynq FPGA in particular. These guys might be onto
something, but they haven't returned my mail with questions about their "35"
product:
https://www.kickstarter.com/projects/onetswitch/onetswitch-open-source-hardware-for-networking
Lastly, I meant to include a plot of Rangeley's behavior:
http://snapon.lab.bufferbloat.net/~d/rangeley/fq2fq_vs_pfifo_fast_rangeley.png
It seems impossible to get a modern Linux architecture under load down much
below 2ms at GigE, at least in part due to context-switch overhead. I have a
bit of hope for the DPDK work after the recent presentation by Stephen at
netconf01.org.
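(Rough arithmetic for scale: a 1500-byte packet takes about
1500 * 8 / 10^9 s, roughly 12 us, to serialize at GigE, so a 2 ms floor is
on the order of 2000 / 12, or about 165, full-size packets' worth of wire
time.)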
/me stops ranting, goes back to work
* Re: [Bloat] recommended PC config for network testing using Ubuntu
2015-02-23 15:14 [Bloat] recommended PC config for network testing using Ubuntu Bill Ver Steeg (versteb)
2015-02-23 16:51 ` Jonathan Morton
2015-02-23 18:31 ` Dave Taht
@ 2015-02-23 22:03 ` Michael Richardson
2 siblings, 0 replies; 6+ messages in thread
From: Michael Richardson @ 2015-02-23 22:03 UTC (permalink / raw)
To: Bill Ver Steeg (versteb); +Cc: Bloat
What Jonathan said:
- it's not your hardware, it's your kernel configuration.
You could install a distro with a newer kernel, but that won't actually solve
the problem since nobody is shipping 3.19 yet, and I think you want the
latest code so that you can have the latest AQM.
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | network architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
* Re: [Bloat] recommended PC config for network testing using Ubuntu
2015-02-23 16:51 ` Jonathan Morton
@ 2015-02-23 22:18 ` Bill Ver Steeg (versteb)
0 siblings, 0 replies; 6+ messages in thread
From: Bill Ver Steeg (versteb) @ 2015-02-23 22:18 UTC (permalink / raw)
To: Jonathan Morton; +Cc: Bloat
Yup - as I look into it a bit more, it looks like GRUB is no longer being seen by the UCS's stage-1 boot loader. I should be able to fix this, but I will probably pick up a simpler low-end system to get some diversity in my testing. The UCS is a great box for compiling/testing, as it is a pretty powerful system. It is designed to be administered as part of a data-center deployment, so the software upgrade process is a bit different from what I am used to.
The old HP box is one I had sitting on the shelf for a while, and I am beginning to think I put it on the shelf for a reason (8-)).
Thanks for the reply.
Bvs
Bill Ver Steeg
DISTINGUISHED ENGINEER
versteb@cisco.com
From: Jonathan Morton [mailto:chromatix99@gmail.com]
Sent: Monday, February 23, 2015 11:51 AM
To: Bill Ver Steeg (versteb)
Cc: Bloat@lists.bufferbloat.net
Subject: Re: [Bloat] recommended PC config for network testing using Ubuntu
I use quite old hardware myself - mainly a Pentium MMX and a PowerBook G4. They don't run Ubuntu, but Gentoo. Generally older hardware gets better supported over time than brand new, and I like to keep a weather eye out for performance problems this way.
Compatibility problems may result from the way you've configured the new kernel, rather than the difference in versions in itself. The big, organised distros may be pickier about that than smaller ones, but that isn't hardware specific.
- Jonathan Morton
end of thread, other threads:[~2015-02-23 22:18 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-02-23 15:14 [Bloat] recommended PC config for network testing using Ubuntu Bill Ver Steeg (versteb)
2015-02-23 16:51 ` Jonathan Morton
2015-02-23 22:18 ` Bill Ver Steeg (versteb)
2015-02-23 18:31 ` Dave Taht
2015-02-23 19:03 ` Dave Taht
2015-02-23 22:03 ` Michael Richardson