From: Dave Taht
To: tinc-devel@tinc-vpn.org, Guus Sliepen, cerowrt-devel@lists.bufferbloat.net
Date: Wed, 3 Dec 2014 12:32:47 -0800
In-Reply-To: <20141203120246.GO10533@sliepen.org>
Subject: Re: [Cerowrt-devel] tinc vpn: adding dscp passthrough (priority inherit), ecn, and fq_codel support

On Wed, Dec 3, 2014 at 4:02 AM, Guus Sliepen wrote:
> On Wed, Dec 03, 2014 at 12:07:59AM -0800, Dave Taht wrote:
>
> [...]
>> https://github.com/dtaht/tinc
>>
>> I successfully converted tinc to use sendmsg and recvmsg, acquire (at
>> least on linux) the TTL/Hoplimit and IP_TOS/IPV6_TCLASS packet fields,
>
> Windows does not have sendmsg()/recvmsg(), but the BSDs support it.
>
>> as well as SO_TIMESTAMPNS, and use a higher resolution internal clock.
>> Got passing through the dscp values to work also, but:
>>
>> A) encapsulation of ecn capable marked packets, and availability in
>> the outer header, without correct decapsulation, doesn't work well.
>>
>> The outer packet gets marked, but by default the marking doesn't make
>> it back into the inner packet when decoded.
>
> Is the kernel stripping the ECN bits provided by userspace? In the code
> in your git branch you strip the ECN bits out yourself.

Linux, at least, gives access to all 8 bits of the tos field on udp.
Windows does not, unless you have admin privs. Don't know about other
OSes.

The comment there:

    tos = origpkt->tos & ~0x3; // chicken out on passing ecn for now

was due to seeing this happen otherwise (talking to a tinc not yet
modified to decapsulate ecn markings correctly):

http://snapon.lab.bufferbloat.net/~d/tinc/ecn.png

and was awaiting some thought on a truth table derived from the
relevant rfc (which I think is slightly wrong, btw), and further
thought on determining if ecn could be used on that path.
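For reference, the decapsulation rules boil down to roughly the
following, assuming rfc 6040 is the relevant rfc here. This is a minimal
sketch, not code from the branch above; the arguments are the two ECN
bits of the inner and outer tos/tclass fields.

/* Rough sketch of the RFC 6040 decapsulation table (assumption: that is
 * the "relevant rfc" meant above); not code from the tinc branch.
 * Codepoints: 0 = Not-ECT, 1 = ECT(1), 2 = ECT(0), 3 = CE.
 * Returns the ECN codepoint for the decapsulated packet, or -1 = drop. */
static int ecn_decapsulate(int inner, int outer)
{
    if (inner == 3)                 /* inner already CE: keep it */
        return 3;
    if (outer == 3)                 /* congestion experienced on the tunnel path */
        return inner == 0 ? -1 : 3; /* drop if the inner packet was not ECN-capable */
    if (outer == 1 && inner == 2)   /* outer ECT(1) overrides inner ECT(0) */
        return 1;
    return inner;                   /* otherwise the inner marking stands */
}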
Certainly I could deploy a tinc modified to assume ecn was in use
(and may, shortly!) with the right truth table.

There was a comment higher up in the file also - I would like to
decrement hopcount/ttl on the encapsulated packet by the actual number
of hops in the overlay path, not by one, as is the default here, and
in many other vpns. This would decrease the damage caused by routing
loops.

>> So communicating somehow that a path can take ecn (and/or diffserv
>> markings) is needed between tinc daemons. I thought of perhaps
>> crafting a special icmp message marked with CE but am open to ideas
>> that would be backward compatible.
>
> PMTU probes are used to discover whether UDP works and how big the path
> MTU is, maybe it could be used to discover whether ECN works as well?

Yes.

> Set one of the ECN bits on some of the PMTU probes, and if you receive a
> probe with that ECN bit set, also set it on the probe reply.

Is this an encapsulated packet vs an overt ping? Seems saner to test
over the encapsulation in this case.

> If you successfully receive a reply with ECN bits set, then you know
> ECN works.

Well, it should test for both CE and ECT(0) being set, on separate packets.

> Since the remote side just echoes the contents of the probe, you could
> also put a copy of the ECN bits in the probe payload, and then you can
> detect if the ECN bits got zeroed. You can also define an OPTION_ECN in
> src/connection.h, so nodes can announce their support for ECN, but that
> should not be necessary I think.

Not sure.

>> B) I have long theorized that a lot of userspace vpns bottleneck on
>> the read and encapsulate step, and being strict FIFOs,
>> gradually accumulate delay until finally they run out of read socket
>> buffer space and start dropping packets.
>
> Well, encryption and decryption takes a lot of CPU time, but context
> switches are also bad.
>
> Tinc is treating UDP in a strictly FIFO way, but actually it does use a
> RED algorithm when tunneling over TCP. That said, it only looks at its

One of these days I'll get around to writing a userspace codel lib in
pure C. Or someone else will. The C++ versions in ns2, ns3, and
mahimahi are hard to read. My currently pretty elegant codel2.h might
be a starting point, if only I could solve count increasing without
bound sanely.

> own buffers to determine when to drop packets, and those only come into
> play once the kernel's TCP buffers are filled.

TCP small queues (TSQ) and BQL should be a big boon to vpn and tor users.

>> so I had a couple thoughts towards using multiple rx queues in the
>> vtun interface, and/or trying to read more than one packet at a time
>> (via recvmmsg) and do some level of fair queueing and queue management
>> (codel) inside tinc itself. I think that's
>> pretty doable without modifying the protocol any, but I'm not sure of
>> its value until I saturate some cpu more.
>
> I'd welcome any work in this area :)

Well, I have to get packet timestamping to give sane results, and then
come up with saturating workloads for my hardware. This is easy for
cerowrt - I doubt the mips 640mhz processor can encrypt and push even
as much as 2mbit/sec.... but my "vision", such as it was, was to toss a
beaglebone box in as a vpn gateway instead (on comcast's dynamically
assigned ipv6 networks), and maybe fiddle with the
http://cryptotronix.com/products/cryptocape/ which has a new kernel
driver....
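To make the probe-marking idea above concrete: on linux, setting ECT(0)
on an outgoing UDP probe and carrying a copy of the sent ECN bits in the
payload (so the far end can detect bleaching) could look roughly like
this. Illustrative sketch only - the function name and the two-byte
payload layout are made up, and this is not tinc's actual PMTU probe
code; it assumes a kernel new enough (>= 3.13) to accept IP_TOS as
sendmsg ancillary data. IPv6 would use IPV6_TCLASS instead.

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define ECN_ECT0 0x02

static ssize_t send_ecn_probe(int fd, const struct sockaddr *dst,
                              socklen_t dstlen, uint8_t dscp)
{
    int tos = (dscp << 2) | ECN_ECT0;          /* DSCP plus ECT(0) */
    uint8_t payload[2] = { 'E', tos & 0x03 };  /* echo the ECN bits we sent */
    struct iovec iov = { .iov_base = payload, .iov_len = sizeof(payload) };
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {
        .msg_name = (void *)dst, .msg_namelen = dstlen,
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
    };
    struct cmsghdr *cm;

    memset(cbuf, 0, sizeof(cbuf));
    cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = IPPROTO_IP;
    cm->cmsg_type  = IP_TOS;                   /* set the full tos byte for this packet */
    cm->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &tos, sizeof(tos));

    return sendmsg(fd, &msg, 0);
}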
(it was a weekend, it was raining, I needed to get to my lab in los
gatos from gf's in SF and ssh tunneling and portforwarding was getting
bothersome... so I hacked on tinc. :) )

>> (and if you thought recvmsg was complex, look at recvmmsg)
>
> It seems someone is already working on that, see
> https://github.com/jasdeep-hundal/tinc.

Seemed to be mostly windows related hacking.

I am not ready to consider all the infrastructure required to
accumulate and manage packets inside of tinc, nor (after fighting with
recvmsg/sendmsg for 2 days) ready to tackle recvmmsg... or threads and
ringbuffers and all the headache that entails.

>> D)
>>
>> the bottleneck link above is actually not tinc but the gateway, and as
>> the gateway reverts to codel behavior on a single encapsulated flow
>> encapsulating all the other flows, we end up with about 40ms of
>> induced delay on this test. While I have a better codel (gets below
>> 20ms latency, not deployed), *fq*_codel by identifying individual
>> flows gets the induced delay on those flows down below 5ms.
>
> But that should improve with ECN if fq_codel is configured to use that,
> right?

Meh. ECN is very useful on very short or very long paths where packet
loss as an indicator of congestion is hurtful. In the general case it
adds a tiny bit to overall latency for other flows, as congestion is
not cleared for an RTT, instead of at the bottleneck, with a loss.

This is still overly optimistic, IMHO:
https://tools.ietf.org/html/draft-ietf-aqm-ecn-benefits-00

Current linux pie, red and codel do not enable ecn by default.
Arguably pie could (because it has overload protection), but codel,
no. I have a version of codel and fq_codel (and cake) that does ecn
overload protection, and enables ecn by default, am testing...

fq_codel enables ECN by default (overload does very little harm), but
openwrt (not cerowrt) turns it off in their qos-scripts. It's half on
by default in sqm-scripts, and works pretty well if you have enough
bandwidth - I routinely run a few low latency networks with near zero
packet loss, and near-perfect utilization... which impresses me, at
least...

ECN makes me nervous in general when enabled outside the datacenter,
but as something like 60% of the alexa top 1 million will enable ecn
if asked nowadays, I hope that that worry extends to enough more
people for me to worry less.

http://ecn.ethz.ch/

I am concerned that enabling ECN generally breaks Tor over tcp even
worse at the moment.... (I hadn't thought about it til my last
message)

Certainly I think ECN is a great idea for vpns so long as it is
implemented correctly, although my understanding of CTR mode over udp
is that loss hurts not, and neither does reordering?

In tinc: what if I get a packet with seqno 5 after receiving packets
with seqnos 1-4 and 6-255? Does that get dropped due to the replay
protection, or (if it passes muster) get decrypted and forwarded even
after that much reordering?

(I am all in favor of not worrying about reordering much. wifi aps
tend to do it a lot, so do route flaps, and linux tcp, at least, is
now VERY resistant to reordering problems, handling megabytes of out
of order delivery problems with aplomb. windows, on the other hand,
sucks in this department, still)

>> At one level, tinc being so nicely meshy means that the "fq" part of
>> fq_codel on the gateway will have more chance to work against the
>> multiple vpn flows it generates for all the potential vpn endpoints...
>>
>> but at another... lookie here! ipv6! 2^64 addresses or more to use!
>> and port space to burn! What if I could make tinc open up 1024 ports
>> per connection, and have it fq all its flows over those? What could
>> go wrong?
>
> Right, hash the header of the original packets, and then select a port
> or address based on the hash?

Yes. I am leaning towards ipv6 address rather than port; you rapidly
run out of ports in ipv4, and making this an ipv6 specific feature
seems safer to test. I look forward to messing up the expectations of
many a stateful ipv6 firewall....

> What about putting that hash in the flow
> label of outer packets? Any routers that would actually treat those as
> separate flows?

The flow label was a pretty good idea, shot down by too many people
arguing over the bits. I don't think there is a lot of useful
information stored there in any coherent way (it's too bad that the
vxlan stuff added a prepended header, instead of just using the
flowlabel), so it is best to just hash the main headers and whatever
inner headers you can obtain, as per

http://lxr.free-electrons.com/source/net/core/flow_dissector.c#L54

and

https://github.com/torvalds/linux/blob/master/net/sched/sch_fq_codel.c#L70

I have a quibble with the jhash3 here, as the present treatment of
ipv6 is the very efficient but not very hashy

addr[0] ^ addr[2] ^ addr[3] ^ addr[4] (somewhere in the code),

instead of feeding all the bits to the hash function(s).

> --
> Met vriendelijke groet / with kind regards,
>      Guus Sliepen

--
Dave Täht

http://www.bufferbloat.net/projects/bloat/wiki/Upcoming_Talks
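PS: the "hash the inner headers, pick one of N outer sockets" idea
discussed above could be sketched roughly as follows. Toy illustration
only - the struct, the FNV-1a-style mix, and the helper names are made
up, and a real version would want something closer to the kernel's flow
dissector and jhash:

#include <stdint.h>

struct inner_flow {
    uint32_t src, dst;      /* inner addresses (IPv6 ones folded down first) */
    uint16_t sport, dport;  /* inner ports, 0 if not TCP/UDP */
    uint8_t  proto;
};

static uint32_t mix(uint32_t h, uint32_t v)
{
    return (h ^ v) * 16777619u;           /* FNV-1a style step */
}

static uint32_t inner_flow_hash(const struct inner_flow *f)
{
    uint32_t h = 2166136261u;             /* FNV offset basis */
    h = mix(h, f->src);
    h = mix(h, f->dst);
    h = mix(h, ((uint32_t)f->sport << 16) | f->dport);
    h = mix(h, f->proto);
    return h;
}

/* Pick one of nsocks pre-opened outer sockets (one per source port, or per
 * ipv6 source address) so the gateway's fq_codel sees the inner flows as
 * distinct outer flows. */
static int pick_outer_socket(const struct inner_flow *f, const int *socks,
                             unsigned nsocks)
{
    return socks[inner_flow_hash(f) % nsocks];
}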