[Bloat] Can't Run Tests Against netperf.bufferbloat.net

Dave Taht dave at taht.net
Sun Feb 9 11:31:40 EST 2020


I note that I totally missed this thread somehow.

I have about 12 (rather undermaintained) servers in Linode's cloud these
days that have netperf and irtt on them, all over the world: Singapore,
Mumbai, etc, etc....

The monthly transfer allotment is ~28TB across them all. So far this
month we've used up ~1.2TB, principally from flent-fremont.bufferbloat.net.

The current cost, which includes bufferbloat.net and taht.net, is
$180/month. I just vector a portion of the Patreon donations over there
and don't think about it much. All of these boxes could use love and a
software upgrade, especially bufferbloat.net, which could also get a CPU
downgrade. The investment of time (days) to do that, versus the ongoing
extra cost of a 2-CPU machine ($15/month), has not seemed worth it.
 
I would like it very much if we were more serious about a published,
reliable test infrastructure for flent. However, it was also my hope
that flent would mostly get used inside people's firewalls, on their own
testbeds, while developing new gear, drivers, and network designs, since
it's so easy to set up nowadays (apt-get install irtt netperf flent,
etc.; see the sketch below).
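
A minimal local-testbed sketch of that, assuming a Debian/Ubuntu pair of
machines and a placeholder server hostname of "testhost" (names and the
60-second test length are just illustrative):

  # on the server side (the device or link under test)
  apt-get install netperf irtt
  netserver            # netperf's control/data daemon (TCP port 12865)
  irtt server &        # irtt's UDP latency endpoint

  # on the client side
  apt-get install flent netperf irtt
  flent rrul -H testhost -l 60 -p all_scaled -o baseline.png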

One of my concerns has long been that I have no idea what lies
underneath these cloudy VMs (anywhere) that is actually doing the rate
limiting (anyone? AWS? Linode?), and I have kind of wished we could go
back to at least one piece of bare metal on the internet, as we had when
isc.org was still donating services to us. Back then VM jitter was
measured in the tens of ms using Xen... and we have no means to compare
VM performance vs bare metal performance with all this finicky stuff,
such as pacing, SCE/L4S, etc. The Google folk are perpetually publishing
ce_threshold set at 260us, which is just impossible (IMHO) in a VM....
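
(For reference, a knob along those lines is fq_codel's ce_threshold
parameter; a sketch of setting it on a test box, with the interface name
assumed, would be something like:

  tc qdisc replace dev eth0 root fq_codel ce_threshold 260us

Whether a guest behind a hypervisor can actually hold a marking budget
that tight is exactly the question above.)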

The "nanodes" in this cloud are limited to 100Mbit, somehow.

I've occasionally priced out what it would take to get a 10Gbit port in
Hurricane Electric's cages in California - $900/month just for transit
when on special, $2k/month otherwise.... Half a cage was another $400 or
so. I already have a spare /24, a BGP AS, and hardware I could put in it,
but...

Rich Brown <richb.hanover at gmail.com> writes:

> Update: (I thought I had sent the previous message yesterday. My mistake.)
>
> I now have atl3.richb-hanover.com running a netperf server. It's a
> stock Ubuntu 18.04.4 LTS - uname -a shows: Linux atl3
> 4.15.0-76-generic #86-Ubuntu SMP Fri Jan 17 17:24:28 UTC 2020 x86_64
> x86_64 x86_64 GNU/Linux. I have installed netperf 2.6.0, and little
> else.

I imagine fq_codel is the underlying qdisc?
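
A quick way to check, with the interface name assumed, is something
like:

  tc qdisc show dev eth0
  sysctl net.core.default_qdisc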

irtt would be nice.
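
As a sketch, getting irtt going and poking it from a client is roughly
(the interval and duration here are arbitrary):

  # on the server
  apt-get install irtt
  irtt server &

  # from a client
  irtt client -i 10ms -d 30s atl3.richb-hanover.com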

At some point, somehow, using some tool, we need to start supporting QUIC.

>
> Next steps:
>
> 1) Please hammer on the server to see if it's a suitable replacement
> for the canonical "netperf.bufferbloat.net". Please feel free to check
> both its ability to handle traffic as well as any security surprises
> you discover...

Thank you so much for maintaining this server for so long; I had no
idea before this thread that it was being hit so hard.


> 2) I welcome suggestions for configuring the server's TCP stack to be
> most useful for researchers. fq_codel, bbr, - I'm open to your
> thoughts.

I have generally made available cubic, reno, bbr, and dctcp. I was
always rather reluctant to publish where I'd turned on dctcp, given that
it only recently gained a response to packet loss. In fact, I've been
reluctant to publish anything in the cloud.
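
For reference, a sketch of what exposing those takes on the server side
(reno and cubic are built in; clients then select an algorithm via the
TCP_CONGESTION socket option):

  modprobe tcp_bbr
  modprobe tcp_dctcp
  sysctl -w net.ipv4.tcp_allowed_congestion_control="cubic reno bbr dctcp"
  sysctl net.ipv4.tcp_available_congestion_control   # sanity check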

I had (briefly) an SCE-capable machine up in Singapore. It would be
good if more folk could try cubic-sce, reno-sce, dctcp-sce, and bbr-sce,
and, for that matter, play with L4S in the same datacenter on an
equivalent machine, but that involves building custom kernels for each
at present. I'm also dying to try out the eBPF + etx stuff Google is
presenting at netdevconf....

>
> 3) It's not too soon for advice on an iptables strategy for limiting
> the access/bandwidth/traffic to people who're abusing the service...

I'd love to have that too!
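
A hedged, untested sketch in the direction Jesper suggests further down
(match-based limits rather than a per-IP rule list), throttling new
connections to netperf's control port per source address; the rates and
limits here are arbitrary:

  iptables -A INPUT -p tcp --dport 12865 --syn \
    -m hashlimit --hashlimit-name netperf --hashlimit-mode srcip \
    --hashlimit-above 30/hour --hashlimit-burst 10 -j DROP
  iptables -A INPUT -p tcp --dport 12865 --syn \
    -m connlimit --connlimit-above 8 --connlimit-mask 32 -j REJECT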

> Once we have all this in place, we can change the netperf.bufferbloat.net name to point to this server. Thanks.
>
> Rich
>
>> On Feb 8, 2020, at 5:35 PM, Rich Brown <richb.hanover at gmail.com> wrote:
>> 
>> Toke and Jesper,
>> 
>> Thanks both for these responses. 
>> 
>> netperf.bufferbloat.net is running an OpenVZ VPS with a 3.10
>> kernel. Tech support at Ramnode tells me that I need to get to a KVM
>> instance in order to use ipset and other fancy kernel stuff.
>> 
>> Here's my plan:
>> 
>> 1) Unless anyone can recommend a better hosting service ...
>> 
>> 2) Over the weekend, I'll stand up a new KVM server at Ramnode. They
>> offer a 2GB RAM, 2 core, 65 GB SSD, with 3TB per month of
>> data. It'll cost $10/month: adding 2x1TB at $4/month brings it to a
>> total of $18/month, about what the current server costs. I can get
>> Ubuntu 18.04 LTS as a standard install.
>> 
>> 3) While that's in-flight I would request that an iptables expert on
>> the list recommend a better strategy. (I was just makin' stuff up in
>> the current setup - as you could tell :-)
>> 
>> 4) I'd also accept any thoughts about tc commands for setting up the
>> networking on the host to work best as a netperf server. (Maybe
>> enable fq_codel or better...)
>> 
>> Thanks
>> 
>> Rich
>> 
>>> On Feb 7, 2020, at 7:02 AM, Jesper Dangaard Brouer <brouer at redhat.com> wrote:
>>> 
>>> On Thu, 6 Feb 2020 18:47:06 -0500
>>> Rich Brown <richb.hanover at gmail.com> wrote:
>>> 
>>>>> On Feb 6, 2020, at 12:00 PM, Matt Taggart wrote:
>>>>> 
>>>>> This smells like a munin or smokeping plugin (or some other sort of 
>>>>> monitoring) gathering data for graphing.  
>>>> 
>>>> Yup. That is a real possibility. The question is what we do about it.
>>>> 
>>>> If I understood, we left it at:
>>>> 
>>>> 1) Toke was going to look into some way to spread the
>>>> 'netperf.bufferbloat.net' load across several of our netperf servers.
>>>> 
>>>> 2) Can someone give me advice about iptables/tc/? to identify IP
>>>> addresses that make "too many" connections and either shut them off
>>>> or dial their bandwidth back to a 3 or 5 kbps? 
>>> 
>>> Look at man iptables-extensions and find "connlimit" and "recent".
>>> 
>>> 
>>>> (If you're terminally curious, Line 5 of
>>>> https://github.com/richb-hanover/netperfclean/blob/master/addtoblacklist.sh
>>>> shows the current iptables command to drop connections from "heavy
>>>> users" identified in the findunfilteredips.sh script. You can read
>>>> the current iptables rules at:
>>>> https://github.com/richb-hanover/netperfclean/blob/master/iptables.txt)
>>> 
>>> Sorry, but this is the wrong approach.  Creating an iptables rule per
>>> source IP address will (as you also demonstrate) give you a VERY long
>>> list of rules (which is evaluated sequentially by the kernel).
>>> 
>>> This should instead be solved by using an ipset (for how to match one
>>> from iptables, see man iptables-extensions(8) and "set"), and using
>>> the command-line tool ipset to add and remove entries.
>>> 
>>> -- 
>>> Best regards,
>>> Jesper Dangaard Brouer
>>> MSc.CS, Principal Kernel Engineer at Red Hat
>>> LinkedIn: http://www.linkedin.com/in/brouer
>>> 
>> 
>
> _______________________________________________
> Bloat mailing list
> Bloat at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


