From: Dave Taht
To: David Reed
Cc: "cerowrt-devel@lists.bufferbloat.net", bloat
Date: Sun, 13 Apr 2014 17:57:34 -0700
Subject: Re: [Cerowrt-devel] wired article about bleed and bloat and underfunded critical infrastructure

On Fri, Apr 11, 2014 at 12:43 PM, David Reed wrote:

> I'm afraid it's not *just* underfunded. I reviewed the details of the
> code involved and the fixes, and my conclusion is that even programmers
> of security software have not learned how to think about design,
> testing, etc. Especially the continuing use of C in a large shared
> process address space for writing protocols that are by definition in
> the "security kernel" (according to the original definition) of
> applications on which the public depends.
>
> Ever since I was part of the Multics Security Project (which was part
> of the effort that produced the Orange Book,
> http://csrc.nist.gov/publications/history/dod85.pdf)

Which, incidentally, I have read... and fail to see how it applies well
to networked systems.

> in the 80's, we've known that security-based code should not be exposed
> to user code and vice versa. Yet the SSL libraries are linked in, in
> userspace, with the application code.

I note that I am glad that they are mostly dynamically linked in -
something that wasn't the case for some other crypto libs - because
finding and fixing applications that had linked statically would be
even more difficult.

And I have seen some reports of people using a heavily patched openssl
that did smarter things with memory allocation - why weren't those
patches pushed back into openssl? Well, because they were held private
and not publicly reviewed... and they don't appear to actually work,
according to this: http://lekkertech.net/akamai.txt
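For anyone who hasn't read the fix itself: the bug was a classic
missing bounds check on an attacker-supplied length field in the TLS
heartbeat handler. A minimal C sketch of the vulnerable pattern and
the repair - names and types here are illustrative, not the actual
openssl source:

    /* Heartbleed-class bug: trusting an attacker-declared length
     * when echoing a heartbeat payload. Illustrative sketch only. */
    #include <stdlib.h>
    #include <string.h>

    struct heartbeat {
        unsigned char *payload;  /* bytes from the wire            */
        size_t claimed_len;      /* length field inside the packet */
        size_t actual_len;       /* bytes actually received        */
    };

    /* Vulnerable: copies claimed_len bytes even though only actual_len
     * arrived, so the reply leaks up to 64KB of adjacent heap memory. */
    unsigned char *echo_vulnerable(const struct heartbeat *hb)
    {
        unsigned char *reply = malloc(hb->claimed_len);
        if (!reply)
            return NULL;
        memcpy(reply, hb->payload, hb->claimed_len);  /* heap over-read */
        return reply;
    }

    /* Fixed: refuse any record whose claimed length doesn't fit in
     * what was actually received, as the real fix does. */
    unsigned char *echo_fixed(const struct heartbeat *hb)
    {
        if (hb->claimed_len > hb->actual_len)
            return NULL;  /* silently drop the malformed record */
        unsigned char *reply = malloc(hb->claimed_len);
        if (!reply)
            return NULL;
        memcpy(reply, hb->payload, hb->claimed_len);
        return reply;
    }

And part of why those memory allocation patches mattered: openssl's
internal freelist reportedly kept recently freed buffers warm and
adjacent, so the over-read tended to land on other connections'
secrets rather than on an unmapped guard page.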
> Also, upgrades/changes to protocols related to security (which always
> should have been in place on every end-to-end connection) should be
> reviewed *both at the protocol design level* and also at the
> *implementation level* because change creates risk. They should not be
> adopted blindly without serious examination and pen-testing, yet this
> change was just casually thrown in in a patch release.

Yes, change creates risk. Change also breeds change. Without change
there would be no progress.

Should there be an "office of critical infrastructure" or an
underwriters laboratory examining and blessing each piece of software
that runs as root or handles money? Should some governmental or
intergovernmental group be putting a floor under (or a roof over) the
people working on code deemed critical infrastructure?

Heartbleed was not detected by a coverity scan, either.

> I suspect that even if it were well funded, the folks who deploy the
> technology would be slapdash at best.

I agree. Recently I was asked to come up with a "phone-home inside
your business embedded device architecture" that would scale to
millions of users. I don't want the responsibility, nor do I think
anything short of hundreds of people working together could come up
with something that would let me sleep well at night - yet the market
demand is there for something, anything, that even barely works. If I
don't do the work, someone less qualified will.

> Remember the Y2K issue

I do. I also remember the response to it.

http://www.taht.net/~mtaht/uncle_bills_helicopter.html

The response to heartbleed has been incredibly heartening as to the
swiftness of repair - something that could not have happened in
anything other than the open source world. I have friends, however,
who just went days without sleep fixing it.

I've outlined my major concerns with TLS across our critical
infrastructure going forward on my g+.

> and the cost of lazy thinking about dates. (I feel a little superior
> because in 1968 Multics standardized on a 72-bit microsecond-resolution
> hardware clock because the designers actually thought about long-lived
> systems (actually

I agree that was far-thinking. I too worry about Y2036 (the NTP era
rollover) and Y2038 (the signed 32-bit time_t overflow), and do my
best to make sure those aren't problems. It seems likely some software
will last even longer than that.
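A quick sketch of the Y2038 half of that, in C - illustrative only,
and assuming a two's-complement machine:

    /* A signed 32-bit time_t runs out at 2038-01-19 03:14:07 UTC
     * and wraps back to December 1901. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t last = (time_t)INT32_MAX;  /* last second a 32-bit time_t holds */
        time_t wrap = (time_t)INT32_MIN;  /* where it lands after the wrap */

        printf("sizeof(time_t) on this box: %zu bytes\n", sizeof(time_t));
        printf("32-bit time_t ends at: %s", asctime(gmtime(&last)));
        printf("and wraps around to:   %s", asctime(gmtime(&wrap)));
        return 0;
    }

On a 64-bit time_t this just prints dates in 2038 and 1901; on a
32-bit time_t the kernel and every on-disk timestamp format share the
same problem. Y2036 is the analogous rollover for NTP's 32-bit seconds
field, which counts from 1900 rather than 1970.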
> only 56 bits of the original clock worked, but the hardware was not
> expected to last until the remaining bits could be added)).

Multics died. It would not have scaled to the internet.

And crypto development and public deployment COULD have gone more hand
in hand if it weren't basically illegal until 1994; had it been legal
before then, some reasonable security could have been embedded deep
into more protocols. It would have been nice to have had a secured X11
protocol, or kerberos made globally deployable, or things like mosh,
in the 80s. In terms of more recent events, I happen to have liked HIP
(the Host Identity Protocol).

We don't know, to this day, how to build secured network systems that
can survive exposure to hundreds of millions of potential attackers.

> The open source movement, unfortunately, made a monoculture of the SSL
> source code, so it's much more dangerous and the vulnerable attack
> surface of deployments is enormous.

No it didn't. Alternatives to openssl exist - gnutls, cyassl, and
polarssl are also open source. Libraries that merely implement the
primitives well, like nettle, gmp, and libsodium - all developed later
- also exist. I am GLAD we don't have a monoculture in crypto. What
happened was mostly inertia from openssl being the first even
semi-legal library for crypto operations, and huge demand for the
functionality, backed by too little understanding of the risks.

> Rant off. The summary is that good engineering is not applied where it
> must be for the public interest. That remains true even if the NSA
> actually snuck this code into the SSL implementation.
>
> On Friday, April 11, 2014 2:22pm, "Dave Taht" said:
>
>> http://www.wired.com/2014/04/heartbleedslesson/
>>
>> And Dan Kaminsky writes about "Code in the Age of Cholera"
>>
>> http://dankaminsky.com/2014/04/10/heartbleed/
>>
>> --
>> Dave Täht
>> _______________________________________________
>> Cerowrt-devel mailing list
>> Cerowrt-devel@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cerowrt-devel

--
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article