From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail.taht.net (mail.taht.net [176.58.107.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by lists.bufferbloat.net (Postfix) with ESMTPS id 7E2033B2A4 for ; Sun, 9 Feb 2020 11:31:43 -0500 (EST)
Received: from dancer.taht.net (unknown [IPv6:2601:646:8301:676f:eea8:6bff:fefe:9a2]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.taht.net (Postfix) with ESMTPSA id E033D2296F; Sun, 9 Feb 2020 16:31:41 +0000 (UTC)
From: Dave Taht
To: Rich Brown
Cc: Jesper Dangaard Brouer, Toke Høiland-Jørgensen, bloat@lists.bufferbloat.net
References: <073CE9AB-FE12-402E-BFE3-179DF7BF2093@gmail.com> <20200207130202.5fb87763@carbon> <5B290AD9-1398-4897-97F0-1CA0AA48B522@gmail.com> <8C2A342C-C2E7-4824-8689-60AA7E4AC30A@gmail.com>
Date: Sun, 09 Feb 2020 08:31:40 -0800
In-Reply-To: <8C2A342C-C2E7-4824-8689-60AA7E4AC30A@gmail.com> (Rich Brown's message of "Sat, 8 Feb 2020 18:17:51 -0500")
Message-ID: <87pnenu3gz.fsf@taht.net>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain
Subject: Re: [Bloat] Can't Run Tests Against netperf.bufferbloat.net
X-BeenThere: bloat@lists.bufferbloat.net
X-Mailman-Version: 2.1.20
Precedence: list
List-Id: General list for discussing Bufferbloat
X-List-Received-Date: Sun, 09 Feb 2020 16:31:43 -0000

I note that I totally missed this thread somehow.

I have about 12 (rather undermaintained) servers in Linode's cloud these
days that have netperf and irtt on them, all over the world: Singapore,
Mumbai, and so on. The monthly transfer allotment is ~28TB across them
all. So far this month we've used up ~1.2TB, principally from
flent-fremont.bufferbloat.net. The current cost, which includes
bufferbloat.net and taht.net, is $180/month.
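For anyone wanting to exercise one of these boxes, a quick smoke test might look like the following sketch. flent-fremont.bufferbloat.net is the host named above; the intervals, durations, and test choices are just examples, not a recommended methodology.

```shell
# Latency/jitter probe with irtt: 10 ms send interval for 30 seconds.
irtt client -i 10ms -d 30s flent-fremont.bufferbloat.net

# Bulk TCP throughput with netperf: a 10-second TCP_STREAM run against
# the control port on the remote server.
netperf -H flent-fremont.bufferbloat.net -l 10 -t TCP_STREAM

# Or drive both at once with flent's rrul test (needs netperf and irtt
# installed locally); this writes a .flent.gz data file for later plotting.
flent rrul -l 60 -H flent-fremont.bufferbloat.net
```

These commands depend on a reachable server and installed clients, so treat the output (not shown) as environment-specific.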
I just vector a portion of the Patreon donations over there and don't
think about it much.

All of these boxes could use love and an upgrade (software-wise),
especially bufferbloat.net, which could also get a downgrade CPU-wise.
The investment of time (days) to do that, vs. the ongoing extra cost of
a 2-CPU machine ($15/month), has not seemed worth it.

I would like it very much if we were more serious about a published,
reliable test infrastructure for flent. However, it was also my hope
that flent would get mostly used inside people's firewalls, on their own
testbeds, while developing new gear and drivers and network designs, as
it's so easy to do so (apt-get install irtt netperf flent, etc.)
nowadays.

One of my concerns has long been that I have no idea what lies
underneath these cloudy VMs (anywhere) that is actually doing the rate
limiting (anyone? AWS? Linode?), and I have kind of wished we could go
back to at least one piece of bare metal on the internet, as we had when
isc.org was still donating services to us. Back then, VM jitter was
measured in the tens of ms under Xen... and we have no means to compare
VM performance vs. bare-metal performance with all this finicky stuff,
such as pacing, SCE/L4S, etc. The Google folk are perpetually publishing
ce_threshold set at 260us, which is just impossible (IMHO) in a VM....
The "nanodes" in this cloud are limited to 100Mbit, somehow.

I've occasionally priced out what it would take to get 10Gbit in
Hurricane Electric's cages in California: $900/month just for transit
when on special, $2k/month otherwise.... Half a cage was another $400 or
so. I already have a spare /24, a BGP AS, and hardware I could put in
it, but...

Rich Brown writes:

> Update: (I thought I had sent the previous message yesterday. My mistake.)
>
> I now have atl3.richb-hanover.com running a netperf server.
> it's a stock Ubuntu 18.04.4 LTS - uname -a shows: Linux atl3
> 4.15.0-76-generic #86-Ubuntu SMP Fri Jan 17 17:24:28 UTC 2020 x86_64
> x86_64 x86_64 GNU/Linux. I have installed netperf 2.6.0, and little
> else.

I imagine fq_codel is the underlying qdisc? irtt would be nice. At some
point, somehow, using some tool, we need to start supporting QUIC.

> Next steps:
>
> 1) Please hammer on the server to see if it's a suitable replacement
> for the canonical "netperf.bufferbloat.net". Please feel free to check
> both its ability to handle traffic as well as any security surprises
> you discover...

Thank you so much for maintaining this server for so long; I had no idea
before this thread that it was being hit so hard.

> 2) I welcome suggestions for configuring the server's TCP stack to be
> most useful for researchers. fq_codel, bbr - I'm open to your
> thoughts.

I have generally made available cubic, reno, bbr, and dctcp. I was
always rather reluctant to publish where I'd turned on dctcp, given that
it only recently gained a response to packet loss. In fact, I've been
reluctant to publish anything in the cloud.

I had (briefly) an SCE-capable machine up in Singapore. It would be good
if more folk could try cubic-sce, reno-sce, dctcp-sce, and bbr-sce, and,
for that matter, play with L4S in the same DC on an equivalent machine,
but that involves making custom kernels for each at present. I'm also
dying to try out the eBPF + EDT stuff Google is presenting at
netdevconf....

> 3) It's not too soon for advice on an iptables strategy for limiting
> the access/bandwidth/traffic to people who're abusing the service...

I'd love to have that too!

> Once we have all this in place, we can change the
> netperf.bufferbloat.net name to point to this server.

Thanks.

> Rich
>
>> On Feb 8, 2020, at 5:35 PM, Rich Brown wrote:
>>
>> Toke and Jesper,
>>
>> Thanks both for these responses.
>>
>> netperf.bufferbloat.net is running an OpenVZ VPS with a 3.10
>> kernel.
>> Tech support at Ramnode tells me that I need to get to a KVM
>> instance in order to use ipset and other fancy kernel stuff.
>>
>> Here's my plan:
>>
>> 1) Unless anyone can recommend a better hosting service ...
>>
>> 2) Over the weekend, I'll stand up a new KVM server at Ramnode. They
>> offer a 2GB RAM, 2-core, 65 GB SSD instance with 3TB per month of
>> data. It'll cost $10/month; adding 2x1TB at $4/month brings it to a
>> total of $18/month, about what the current server costs. I can get
>> Ubuntu 18.04 LTS as a standard install.
>>
>> 3) While that's in flight, I would request that an iptables expert on
>> the list recommend a better strategy. (I was just makin' stuff up in
>> the current setup - as you could tell :-)
>>
>> 4) I'd also accept any thoughts about tc commands for setting up the
>> networking on the host to work best as a netperf server. (Maybe
>> enable fq_codel or better...)
>>
>> Thanks
>>
>> Rich
>>
>>> On Feb 7, 2020, at 7:02 AM, Jesper Dangaard Brouer wrote:
>>>
>>> On Thu, 6 Feb 2020 18:47:06 -0500
>>> Rich Brown wrote:
>>>
>>>>> On Feb 6, 2020, at 12:00 PM, Matt Taggart wrote:
>>>>>
>>>>> This smells like a munin or smokeping plugin (or some other sort of
>>>>> monitoring) gathering data for graphing.
>>>>
>>>> Yup. That is a real possibility. The question is what we do about it.
>>>>
>>>> If I understood, we left it at:
>>>>
>>>> 1) Toke was going to look into some way to spread the
>>>> 'netperf.bufferbloat.net' load across several of our netperf servers.
>>>>
>>>> 2) Can someone give me advice about iptables/tc/? to identify IP
>>>> addresses that make "too many" connections and either shut them off
>>>> or dial their bandwidth back to 3 or 5 kbps?
>>>
>>> Look at man iptables-extensions and find "connlimit" and "recent".
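A hedged sketch of what those two matches could look like for a netperf box. This assumes netperf's default control port (12865); the list name and thresholds are illustrative, not from the thread.

```shell
# Sketch only: throttle abusers of netperf's control port using the
# connlimit and recent matches from iptables-extensions(8).

# Refuse a source IP that holds more than 4 concurrent control
# connections (per single host, mask /32):
iptables -A INPUT -p tcp --dport 12865 --syn \
  -m connlimit --connlimit-above 4 --connlimit-mask 32 -j REJECT

# Remember every new connection attempt in a "netperf" list, then drop
# sources that made more than 20 attempts in the last 60 seconds:
iptables -A INPUT -p tcp --dport 12865 --syn \
  -m recent --name netperf --set
iptables -A INPUT -p tcp --dport 12865 --syn \
  -m recent --name netperf --update --seconds 60 --hitcount 20 -j DROP
```

Note that netperf negotiates its data connections separately, so rules on the control port only curb the setup of new tests, not transfers already in flight.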
>>>
>>>> (If you're terminally curious, Line 5 of
>>>> https://github.com/richb-hanover/netperfclean/blob/master/addtoblacklist.sh
>>>> shows the current iptables command to drop connections from "heavy
>>>> users" identified in the findunfilteredips.sh script. You can read
>>>> the current iptables rules at:
>>>> https://github.com/richb-hanover/netperfclean/blob/master/iptables.txt)
>>>
>>> Sorry, but this is the wrong approach. Creating an iptables rule per
>>> source IP address will (as you also demonstrate) give you a VERY long
>>> list of rules (which is evaluated sequentially by the kernel).
>>>
>>> This should instead be solved by using an ipset (for how to match one
>>> from iptables, see iptables-extensions(8) and "set"), and using the
>>> cmdline tool ipset to add and remove entries.
>>>
>>> --
>>> Best regards,
>>> Jesper Dangaard Brouer
>>> MSc.CS, Principal Kernel Engineer at Red Hat
>>> LinkedIn: http://www.linkedin.com/in/brouer

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
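To make Jesper's ipset suggestion above concrete, a minimal version of the blacklist might look like this. This is a sketch: the set name, timeout, and port (12865, netperf's default control port) are illustrative.

```shell
# One hash:ip set plus a single iptables rule replaces the per-IP rule
# list; set lookups are constant-time instead of a sequential rule walk.

# Create the set (-exist makes this idempotent); a per-entry timeout
# lets abusers age out automatically after an hour:
ipset create netperf-abusers hash:ip timeout 3600 -exist

# A single rule consults the whole set, however many entries it holds:
iptables -I INPUT -p tcp --dport 12865 \
  -m set --match-set netperf-abusers src -j DROP

# Ban and unban individual sources without touching iptables again
# (192.0.2.10 is a documentation-range placeholder address):
ipset add netperf-abusers 192.0.2.10
ipset del netperf-abusers 192.0.2.10
```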