From: Dave Taht
Date: Sat, 1 Oct 2016 10:19:00 -0700
To: Toke Høiland-Jørgensen
Cc: "Jason A. Donenfeld", cake@lists.bufferbloat.net,
 make-wifi-fast@lists.bufferbloat.net, WireGuard mailing list
Subject: Re: [Make-wifi-fast] [Cake] WireGuard Queuing, Bufferbloat,
 Performance, Latency, and related issues

On Sat, Oct 1, 2016 at 8:51 AM, Toke Høiland-Jørgensen wrote:
> Dave Taht writes:
>
>> My thought - given that at least on some platforms - encrypting 1000
>> packets at a time is a bad idea - would be something regulating the
>> amount of data being crypted at a time, an equivalent to byte queue
>> limits - BQL - BCL? byte crypto limits - to keep no more than, say,
>> 1ms of data in that part of the subsystem.
>
> Well, the dynamic queue limit stuff is reusable (in
> include/linux/dynamic_queue_limits.h). The netdev BQL stuff just uses
> these functions with the packet byte sizes; so adapting it to use in
> wireguard should be fairly straightforward :)

Having one global queue for all of wireguard makes a lot of sense - one
that gets divvied up as per the amount of traffic for each destination,
and regulated "fairly". The present model - one fixed-size queue per
endpoint - can run you out of memory right quick.
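Roughly the shape of the BCL idea, sketched against the dql_*() API
from include/linux/dynamic_queue_limits.h. Utterly untested; the
wg_crypt_* names and hooks below are invented for illustration (nothing
like them exists in wireguard today), only the dql calls are the real
kernel API:

/*
 * Hypothetical "byte crypto limits": reuse struct dql to bound how
 * many bytes are outstanding in the encryption stage, the same way
 * netdev BQL bounds bytes posted to a NIC ring.
 */
#include <linux/dynamic_queue_limits.h>
#include <linux/jiffies.h>
#include <linux/skbuff.h>

struct wg_crypt_queue {
	struct dql dql;         /* bytes in flight through crypto */
	/* pending-skb list, locks, etc. elided */
};

static void wg_crypt_queue_init(struct wg_crypt_queue *q)
{
	dql_init(&q->dql, HZ);  /* same slack hold time BQL uses */
}

/* Call wherever packets are handed to the (async) crypto machinery. */
static bool wg_crypt_enqueue(struct wg_crypt_queue *q, struct sk_buff *skb)
{
	if (dql_avail(&q->dql) < 0)
		return false;   /* over limit: leave it queued upstream, or drop */

	dql_queued(&q->dql, skb->len);
	/* ... hand skb to the encryption workers here ... */
	return true;
}

/* Call from the "encryption of these bytes finished" completion path. */
static void wg_crypt_complete(struct wg_crypt_queue *q, unsigned int bytes)
{
	dql_completed(&q->dql, bytes);
}

The nice property is the same one BQL has: the limit adapts from
completions, so a slow cpu converges on a small number of bytes sitting
in the crypto stage instead of a fixed 1000-packet pile, while a fast
one is never starved.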
>> ... also pulling stuff out of order from an already encrypted thing
>> leads to the same IV problems we had in mac80211.
>
> Yeah, but who needs IVs, really? ;)

Well, in wireguard's case, it does not (yay!) have a
must-deliver-packets-in-IV-order mechanism like 802.11 requires. It
will merely throw away things that are outside the replay window (2k
packets). So you could *encrypt* first, deliver later, if you wanted.
(There's a toy sketch of that sort of window check at the end of this
mail.)

Still, what I'd wanted to do was push back (or pace?) from the
interface itself, as well as from the crypto subsystem, and throw away
packets before they are crypted when things start to get out of hand -
as well as do better fq-ish mixing of what's going out. Consider a path
like this:

testbox, flooding -> 1Gbit in -> router-encrypting -> 20Mbit out to the internet

If you flood the thing, you get a big queue in two places: one at the
interface qdisc, where we struggle to throw already-encrypted things
away (eventually), and another inside the vpn code, where we are
encrypting as fast as we can get stuff in.

...

As a side note, another crazy daydream of mine is to use up an entire
/64 for the vpn endpoint identifier, and pre-negotiate a "spread
spectrum" style of sending packets there. This would eliminate the need
to bundle the IV in the packet - it would be obvious (to each side)
which IV matched which IP address - and we'd save 8 bytes on every
packet. It would also completely break every fq+congestion control
qdisc we know of, as well as most stateful firewalls, and so on. :)

> -Toke

-- 
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org
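PS: since the replay window came up, here's a toy, userspace-only
sketch of that kind of counter-bitmap sliding window (in the spirit of
the usual IPsec anti-replay schemes, sized at the ~2k packets mentioned
above). Emphatically not wireguard's actual code; it's only here to
make the out-of-order tolerance concrete:

/*
 * Toy anti-replay window: accept a packet counter if it is no more
 * than REPLAY_WINDOW behind the highest counter seen so far, and has
 * not been seen before. Not wireguard's implementation.
 */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define REPLAY_WINDOW 2048	/* packets of reordering tolerated */

struct replay_state {
	uint64_t max_seen;                    /* highest counter accepted */
	uint64_t bitmap[REPLAY_WINDOW / 64];  /* circular seen/not-seen bits */
};

static bool replay_check_and_update(struct replay_state *st, uint64_t ctr)
{
	uint64_t idx, bit;

	if (ctr > st->max_seen) {
		/* Window slides forward: clear the slots we advanced over. */
		uint64_t advance = ctr - st->max_seen;

		if (advance >= REPLAY_WINDOW) {
			memset(st->bitmap, 0, sizeof(st->bitmap));
		} else {
			for (uint64_t c = st->max_seen + 1; c <= ctr; c++)
				st->bitmap[(c / 64) % (REPLAY_WINDOW / 64)] &=
					~(1ULL << (c % 64));
		}
		st->max_seen = ctr;
	} else if (st->max_seen - ctr >= REPLAY_WINDOW) {
		return false;   /* too old: fell out of the window, drop */
	}

	idx = (ctr / 64) % (REPLAY_WINDOW / 64);
	bit = 1ULL << (ctr % 64);
	if (st->bitmap[idx] & bit)
		return false;   /* already seen: replay, drop */
	st->bitmap[idx] |= bit;
	return true;            /* accept */
}

A check like this tolerates arbitrary reordering within the window,
which is why encrypt-first, deliver-later is survivable here in a way
it wasn't under 802.11's strict IV ordering.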