Development issues regarding the cerowrt test router project
* [Cerowrt-devel] wired article about bleed and bloat and underfunded critical infrastructure
@ 2014-04-11 18:22 Dave Taht
  2014-04-11 19:43 ` dpreed
  0 siblings, 1 reply; 5+ messages in thread
From: Dave Taht @ 2014-04-11 18:22 UTC (permalink / raw)
  To: cerowrt-devel, bloat

http://www.wired.com/2014/04/heartbleedslesson/

And Dan Kaminsky writes about "Code in the Age of Cholera"

http://dankaminsky.com/2014/04/10/heartbleed/



-- 
Dave Täht

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [Cerowrt-devel] wired article about bleed and bloat and underfunded critical infrastructure
  2014-04-11 18:22 [Cerowrt-devel] wired article about bleed and bloat and underfunded critical infrastructure Dave Taht
@ 2014-04-11 19:43 ` dpreed
  2014-04-14  0:57   ` Dave Taht
  0 siblings, 1 reply; 5+ messages in thread
From: dpreed @ 2014-04-11 19:43 UTC (permalink / raw)
  To: Dave Taht; +Cc: cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 2549 bytes --]


I'm afraid it's not *just* underfunded.   I reviewed the details of the code involved and the fixes, and my conclusion is that even programmers of security software have not learned how to think about design, testing, etc.  Especially worrying is the continuing use of C, in a large shared process address space, for writing protocols that are by definition in the "security kernel" (according to the original definition) of applications on which the public depends.
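 
To make the class of failure at issue concrete - a minimal sketch, not the actual OpenSSL code; the record layout and the handle_heartbeat name are illustrative only - consider a handler that trusts a length field supplied by the peer:

    /* Sketch of a Heartbleed-class bug. The handler echoes back "claimed"
     * bytes, where "claimed" comes from the request itself. */
    #include <stdint.h>
    #include <string.h>

    /* record: [1-byte type][2-byte claimed payload length][payload bytes...] */
    size_t handle_heartbeat(const uint8_t *rec, size_t rec_len, uint8_t *out)
    {
        size_t claimed = ((size_t)rec[1] << 8) | rec[2];  /* peer-controlled */

        /* BUG: nothing checks claimed against rec_len, so the copy can run
         * past the request and leak whatever happens to sit nearby in the
         * shared heap (private keys, session cookies, passwords...). */
        memcpy(out, rec + 3, claimed);

        /* FIX: validate first, e.g.
         *   if (rec_len < 3 || claimed > rec_len - 3) return 0;            */
        return claimed;
    }

A memory-safe language or a process-isolated design would have contained the damage of exactly this kind of mistake.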
 
Ever since I was part of the Multics Security Project (which was part of the effort that produced the Orange Book http://csrc.nist.gov/publications/history/dod85.pdf) in the 80's, we've known that security-based code should not be exposed to user code and vice versa.  Yet the SSL libraries are linked in, in userspace, with the application code.
 
Also, upgrades/changes to protocols related to security (which always should have been in place on every end-to-end connection) should be reviewed *both at the protocol design level* and *at the implementation level*, because change creates risk.  They should not be adopted blindly without serious examination and pen-testing, yet this change was just casually thrown into a patch release.
 
I suspect that even if it were well funded, the folks who deploy the technology would be slapdash at best. Remember the Y2K issue and the cost of lazy thinking about dates. (I feel a little superior because in 1968 Multics standardized on a 72-bit, microsecond-resolution hardware clock, because the designers actually thought about long-lived systems. Actually, only 56 bits of the original clock worked, but the hardware was not expected to last until the remaining bits could be added.)
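 
For a sense of scale - a back-of-the-envelope sketch, not from the original design documents - here is roughly how long a microsecond counter of those widths lasts:

    /* Rough lifetime of a microsecond-resolution counter of a given width. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double us_per_year = 365.25 * 24.0 * 3600.0 * 1e6;
        printf("56-bit clock: ~%.0f years\n", pow(2, 56) / us_per_year);  /* ~2,280 years      */
        printf("72-bit clock: ~%.0f years\n", pow(2, 72) / us_per_year);  /* ~150 million years */
        return 0;
    }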
 
The open source movement, unfortunately, made a monoculture of the SSL source code, so it's much more dangerous and the vulnerable attack surface of deployments is enormous.
 
Rant off.  The summary is that good engineering is not applied where it must be for the public interest.  That remains true even if the NSA actually snuck this code into the SSL implementation.


On Friday, April 11, 2014 2:22pm, "Dave Taht" <dave.taht@gmail.com> said:



> http://www.wired.com/2014/04/heartbleedslesson/
> 
> And Dan Kaminsky writes about "Code in the Age of Cholera"
> 
> http://dankaminsky.com/2014/04/10/heartbleed/
> 
> 
> 
> --
> Dave Täht
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>

[-- Attachment #2: Type: text/html, Size: 3325 bytes --]

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [Cerowrt-devel] wired article about bleed and bloat and underfunded critical infrastructure
  2014-04-11 19:43 ` dpreed
@ 2014-04-14  0:57   ` Dave Taht
  2014-04-14 23:22     ` dpreed
  0 siblings, 1 reply; 5+ messages in thread
From: Dave Taht @ 2014-04-14  0:57 UTC (permalink / raw)
  To: David Reed; +Cc: cerowrt-devel, bloat

On Fri, Apr 11, 2014 at 12:43 PM,  <dpreed@reed.com> wrote:
> I'm afraid it's not *just* underfunded.   I reviewed the details of the code
> involved and the fixes, and my conclusion is that even programmers of
> security software have not learned how to think about design, testing, etc.
> Especially the continuing use of C in a large shared process address space
> for writing protocols that are by definition in the "security kernel"
> (according to the original definition) of applications on which the public
> depends.
>
>
>
> Ever since I was part of the Multics Security Project (which was part of the
> effort that produced the Orange Book
> http://csrc.nist.gov/publications/history/dod85.pdf)

Which I incidentally have read... and fail to see how it applies well
to networked systems.

> in the 80's, we've
> known that security-based code should not be exposed to user code and vice
> versa.  Yet the SSL libraries are linked in, in userspace, with the
> application code.

I note that I am glad that they are mostly dynamically linked in -
something that wasn't the case for some other crypto libs - because
finding applications that linked statically would be even more difficult.

And I have seen some reports of people using heavily patched openssl
doing smarter things with memory allocation - why weren't those patches
pushed back into openssl?

Well, because they were held private and not publicly reviewed... and
don't appear to actually work, according to this:

http://lekkertech.net/akamai.txt

> Also, upgrades/changes to protocols related to security (which always should
> have been in place on every end-to-end connection) should be reviewed *both
> at the protocol design level* and also at the *implementation level* because
> change creates risk.  They should not be adopted blindly without serious
> examination and pen-testing, yet this change was just casually thrown into
> a patch release.

Yes, change creates risk. Change also breeds change. Without change
there would be no progress.

Should there be an "office of critical infrastructure" or an
underwriters laboratory examining and blessing each piece of software
that runs as root or handles money? Should some governmental or
intergovernmental group be putting a floor under (or a roof over) the
people working on code deemed critical infrastructure?

heartbleed was not detected by a coverity scan either.

> I suspect that even if it were well funded, the folks who deploy the
> technology would be slapdash at best.

I agree. Recently I was asked to come up with a "phone-home inside
your business embedded device architecture" that would scale to
millions of users.

I don't want the responsibility, nor do I think anything short of
hundreds of people working together could come up with something that
would let me sleep well at night - yet the market demand is there for
something, anything, that even barely works.

If I don't do the work, someone less qualified will.


> Remember the Y2K issue

I do. I also remember the response to it.

http://www.taht.net/~mtaht/uncle_bills_helicopter.html

The swiftness of the repair in response to heartbleed has been
incredibly heartening - something that could not have happened in
anything other than the open source world. I have friends, however,
who just went days without sleep, fixing it.

I've outlined my major concerns with TLS across our critical
infrastructure going forward on my g+.

> and the cost of
> lazy thinking about dates. (I feel a little superior because in 1968 Multics
> standardized on a 72-bit, microsecond-resolution hardware clock
> because the designers actually thought about long-lived systems (actually

I agree that was far-thinking. I too worry about Y2036 and Y2038, and
do my best to make sure those aren't problems.

it seems likely some software will last even longer than that.
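
For anyone unfamiliar with why 2038 in particular - a minimal sketch,
assuming the classic signed 32-bit time_t, which is where the problem
comes from:

    /* Y2038 in one picture: a signed 32-bit seconds-since-1970 counter tops
     * out at 2147483647, i.e. 2038-01-19 03:14:07 UTC, then goes negative.
     * (Sketch only; the add is done unsigned to keep the wrap well-defined.) */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t raw = 0x7FFFFFFFu;      /* 2038-01-19 03:14:07 UTC            */
        raw = raw + 1u;                  /* one more second...                 */
        printf("%d\n", (int32_t)raw);    /* -2147483648, i.e. back before 1970 */
        return 0;
    }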

> only 56 bits of the original clock worked, but the hardware was not expected
> to last until the remaining bits could be added)).

Multics died. It would not have scaled to the internet. And crypto
development and public deployment COULD have gone more hand in hand if
crypto hadn't been basically illegal until 1994; maybe some reasonable
security could have been embedded deep into more protocols before then.

It would have been nice to have had a secured X11 protocol, or
kerberos made globally deployable, or things like mosh, in the 80s. In
terms of more recent events, I happen to have liked HIP.

To this day, we don't know how to build secured network systems that
can survive exposure to hundreds of millions of potential attackers.
>
>
> The open source movement, unfortunately, made a monoculture of the SSL
> source code, so it's much more dangerous and the vulnerable attack surface
> of deployments is enormous.

No it didn't. Alternatives to openssl exist - gnutls, cyassl, and
polarssl are also open source. Libraries that merely implement the
primitives well - nettle, gmp, and libsodium, all developed later -
also exist.

I am GLAD we don't have a monoculture in crypto.

What happened was mostly inertia from openssl being the first even
semi-legal library for crypto operations, plus huge demand for the
functionality backed by too little understanding of the risks.


>
>
>
> Rant off.  The summary is that good engineering is not applied where it must
> be for the public interest.  That remains true even if the NSA actually
> snuck this code into the SSL implementation.
>
>
>
> On Friday, April 11, 2014 2:22pm, "Dave Taht" <dave.taht@gmail.com> said:
>
>> http://www.wired.com/2014/04/heartbleedslesson/
>>
>> And Dan Kaminsky writes about "Code in the Age of Cholera"
>>
>> http://dankaminsky.com/2014/04/10/heartbleed/
>>
>>
>>
>> --
>> Dave Täht
>> _______________________________________________
>> Cerowrt-devel mailing list
>> Cerowrt-devel@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>



-- 
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [Cerowrt-devel] wired article about bleed and bloat and underfunded critical infrastructure
  2014-04-14  0:57   ` Dave Taht
@ 2014-04-14 23:22     ` dpreed
  2014-04-14 23:41       ` Dave Taht
  0 siblings, 1 reply; 5+ messages in thread
From: dpreed @ 2014-04-14 23:22 UTC (permalink / raw)
  To: Dave Taht; +Cc: cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 7463 bytes --]


All great points.
 
Regarding the Orange Book for distributed/network systems - the saddest part of that effort was that it was declared "done" when the standards were published, even though the challenges of decentralized networks of autonomously managed computers were already upon us.  The Orange Book was for individual computer systems that talked directly to end users and sat in physically secured locations, and did not apply to larger scale compositions of same.  It did not apply to PCs in users' hands, either (even if not connected to a network).  It did lay out its assumptions, but the temptation to believe its specifics applied when those assumptions weren't met clearly overrode engineering and managerial sense.  For example, it was used to argue that Windows NT was "certified secure" according to the Orange Book. :-)
 
 


On Sunday, April 13, 2014 8:57pm, "Dave Taht" <dave.taht@gmail.com> said:



> On Fri, Apr 11, 2014 at 12:43 PM,  <dpreed@reed.com> wrote:
> > I'm afraid it's not *just* underfunded.   I reviewed the details of the code
> > involved and the fixes, and my conclusion is that even programmers of
> > security software have not learned how to think about design, testing, etc.
> > Especially the continuing use of C in a large shared process address space
> > for writing protocols that are by definition in the "security kernel"
> > (according to the original definition) of applications on which the public
> > depends.
> >
> >
> >
> > Ever since I was part of the Multics Security Project (which was part of the
> > effort that produced the Orange Book
> > http://csrc.nist.gov/publications/history/dod85.pdf)
> 
> Which I incidentally have read... and fail to see how it applies well
> to networked systems.
> 
> > in the 80's, we've
> > known that security-based code should not be exposed to user code and vice
> > versa.  Yet the SSL libraries are linked in, in userspace, with the
> > application code.
> 
> I note that I am glad that they are mostly dynamically linked in -
> something that wasn't the case for some other crypto libs - because
> finding applications
> that linked statically would be even more difficult.
> 
> And I have seen some reports of people using heavily patched openssl
> doing smarter things with memory allocation - why weren't those patches
> pushed back into openssl?
> 
> Well, because they were held private and not publicly reviewed... and
> don't appear to actually work, according to this:
> 
> http://lekkertech.net/akamai.txt
> 
> > Also, upgrades/changes to protocols related to security (which always should
> > have been in place on every end-to-end connection) should be reviewed *both
> > at the protocol design level* and also at the *implementation level* because
> > change creates risk.  They should not be adopted blindly without serious
> > examination and pen-testing, yet this change was just casually thrown into
> > a patch release.
> 
> Yes, change creates risk. Change also breeds change. Without change
> there would be no progress.
> 
> Should there be an "office of critical infrastructure" or an
> underwriters laboratory examining and blessing each piece of software
> that runs as root or handles money? Should some governmental or
> intergovernmental group be putting a floor under (or a roof over) the
> people working on code deemed as critical infrastructure?
> 
> heartbleed was not detected by a coverity scan either.
> 
> > I suspect that even if it were well funded, the folks who deploy the
> > technology would be slapdash at best.
> 
> I agree. Recently I was asked to come up with a "phone-home inside
> your business embedded device architecture" that would scale to
> millions of users.
> 
> I don't want the responsibility, nor do I think any but hundreds of
> people working together could come up with something that would let me
> sleep well at night - yet the market demand is there for something,
> anything, that even barely works.
> 
> If I don't do the work, someone less qualified will.
> 
> 
> > Remember the Y2K issue
> 
> I do. I also remember the response to it.
> 
> http://www.taht.net/~mtaht/uncle_bills_helicopter.html
> 
> The response to heartbleed has been incredibly heartening as to the
> swiftness of repair - something that could not have happened in
> anything other than the open source world. I have friends, however,
> that just went days without sleep, fixing it.
> 
> I've outlined my major concerns with TLS across our critical
> infrastructure going forward on my g+.
> 
> > and the cost of
> > lazy thinking about dates. (I feel a little superior because in 1968 Multics
> > standardized on a 72-bit, microsecond-resolution hardware clock
> > because the designers actually thought about long-lived systems (actually
> 
> I agree that was far-thinking. I too worry about Y2036 and Y2038, and
> do my best to make sure those aren't problems.
> 
> it seems likely some software will last even longer than that.
> 
> > only 56 bits of the original clock worked, but the hardware was not expected
> > to last until the remaining bits could be added)).
> 
> Multics died. It would not have scaled to the internet. And crypto
> development and public deployment COULD have gone more hand in hand if
> it weren't basically illegal until 1994, and maybe before then, some
> reasonable security could have been embedded deep into more protocols.
> 
> It would have been nice to have had a secured X11 protocol, or
> kerberos made globally deployable, or things like mosh, in the 80s. In
> terms of more recent events, I happen to have liked HIP.
> 
> We don't know how to build secured network systems to this day, that
> can survive an exposure to hundreds of millions of potential
> attackers.
> >
> >
> > The open source movement, unfortunately, made a monoculture of the SSL
> > source code, so it's much more dangerous and the vulnerable attack surface
> > of deployments is enormous.
> 
> No it didn't. Alternatives to openssl exist - gnutls, cyassl, and
> polarssl are also open source. Libraries that merely implement
> primitives well like nettle, gmp, and libsodium - all developed later
> - also exist.
> 
> I am GLAD we don't have a monoculture in crypto.
> 
> What happened was mostly inertia from openssl being the first even
> semi-legal library for crypto operations and huge demand for the
> functionality backed up with too little understanding of the risks.
> 
> 
> >
> >
> >
> > Rant off.  The summary is that good engineering is not applied where it must
> > be for the public interest.  That remains true even if the NSA actually
> > snuck this code into the SSL implementation.
> >
> >
> >
> > On Friday, April 11, 2014 2:22pm, "Dave Taht" <dave.taht@gmail.com>
> said:
> >
> >> http://www.wired.com/2014/04/heartbleedslesson/
> >>
> >> And Dan Kaminsky writes about "Code in the Age of Cholera"
> >>
> >> http://dankaminsky.com/2014/04/10/heartbleed/
> >>
> >>
> >>
> >> --
> >> Dave Täht
> >> _______________________________________________
> >> Cerowrt-devel mailing list
> >> Cerowrt-devel@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> >>
> 
> 
> 
> --
> Dave Täht
> 
> NSFW:
> https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article
>

[-- Attachment #2: Type: text/html, Size: 9286 bytes --]

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [Cerowrt-devel] wired article about bleed and bloat and underfunded critical infrastructure
  2014-04-14 23:22     ` dpreed
@ 2014-04-14 23:41       ` Dave Taht
  0 siblings, 0 replies; 5+ messages in thread
From: Dave Taht @ 2014-04-14 23:41 UTC (permalink / raw)
  To: David Reed; +Cc: cerowrt-devel, bloat

On Mon, Apr 14, 2014 at 4:22 PM,  <dpreed@reed.com> wrote:
> All great points.
>
>
>
> Regarding the Orange Book for distributed/network systems - the saddest part
> of that effort was that it was declared "done" when the standards were
> published, even though the challenges of decentralized networks of
> autonomously managed computers was already upon us.  The Orange Book was for
> individual computer systems that talked directly to end users and sat in
> physically secured locations, and did not apply to larger scale compositions
> of same.  It did not apply to PCs in users' hands, either (even if not
> connected to a network).  It did lay out its assumptions; but the temptation
> to believe its specifics applied when those assumptions weren't met clearly
> overrode engineering and managerial sense.

I worked on C2-level stuff in the early 90s, and on a db that tried to
get B2 certification - it was difficult, slow, and painful, and
ultimately just a checkbox that people ticked to get past the bar of an
RFP, then turned off almost universally in practice.

> For example, it was used to
> argue that Windows NT was "certified secure" according to the Orange Book.
> :-)

I would like to see a resumption of an orange-book-scale effort for
ipv6 in particular - actually, for all network infrastructure deemed
critical.

Note that akamai recognised their newly published patch for openssl had
a flaw, and reacted openly, honestly, and rapidly, upgrading their
infrastructure and starting to rotate their certs.

https://blogs.akamai.com/2014/04/heartbleed-update-v3.html

I think they wrote the guy a check, too. Cheap at the price...

Openssl, on the other hand, has had a flood of tiny donations:

http://veridicalsystems.com/blog/of-money-responsibility-and-pride/


>
>
>
>
>
>
> On Sunday, April 13, 2014 8:57pm, "Dave Taht" <dave.taht@gmail.com> said:
>
>> On Fri, Apr 11, 2014 at 12:43 PM, <dpreed@reed.com> wrote:
>> > I'm afraid it's not *just* underfunded. I reviewed the details of the
>> > code
>> > involved and the fixes, and my conclusion is that even programmers of
>> > security software have not learned how to think about design, testing,
>> > etc.
>> > Especially the continuing use of C in a large shared process address
>> > space
>> > for writing protocols that are by definition in the "security kernel"
>> > (according to the original definition) of applications on which the
>> > public
>> > depends.
>> >
>> >
>> >
>> > Ever since I was part of the Multics Security Project (which was part of
>> > the
>> > effort that produced the Orange Book
>> > http://csrc.nist.gov/publications/history/dod85.pdf)
>>
>> Which I incidentally have read... and fail to see how it applies well
>> to networked systems.
>>
>> > in the 80's, we've
>> > known that security-based code should not be exposed to user code and
>> > vice
>> > versa. Yet the SSL libraries are linked in, in userspace, with the
>> > application code.
>>
>> I note that I am glad that they are mostly dynamically linked in -
>> something that wasn't the case for some other crypto libs - because
>> finding applications
>> that linked statically would be even more difficult.
>>
>> And I have seen some reports of people using heavily patched openssl
>> doing smarter things with memory allocation - why weren't those patches
>> pushed back into openssl?
>>
>> Well, because they were held private and not publicly reviewed... and
>> don't appear to actually work, according to this:
>>
>> http://lekkertech.net/akamai.txt
>>
>> > Also, upgrades/changes to protocols related to security (which always
>> > should
>> > have been in place on every end-to-end connection) should be reviewed
>> > *both
>> > at the protocol design level* and also at the *implementation level*
>> > because
>> > change creates risk. They should not be adopted blindly without serious
>> > examination and pen-testing, yet this change was just casually thrown into
>> > a patch release.
>>
>> Yes, change creates risk. Change also breeds change. Without change
>> there would be no progress.
>>
>> Should there be an "office of critical infrastructure" or an
>> underwriters laboratory examining and blessing each piece of software
>> that runs as root or handles money? Should some governmental or
>> intergovernmental group be putting a floor under (or a roof over) the
>> people working on code deemed as critical infrastructure?
>>
>> heartbleed was not detected by a coverity scan either.
>>
>> > I suspect that even if it were well funded, the folks who deploy the
>> > technology would be slapdash at best.
>>
>> I agree. Recently I was asked to come up with a "phone-home inside
>> your business embedded device architecture" that would scale to
>> millions of users.
>>
>> I don't want the responsibility, nor do I think any but hundreds of
>> people working together could come up with something that would let me
>> sleep well at night - yet the market demand is there for something,
>> anything, that even barely works.
>>
>> If I don't do the work, someone less qualified will.
>>
>>
>> > Remember the Y2K issue
>>
>> I do. I also remember the response to it.
>>
>> http://www.taht.net/~mtaht/uncle_bills_helicopter.html
>>
>> The response to heartbleed has been incredibly heartening as to the
>> swiftness of repair - something that could not have happened in
>> anything other than the open source world. I have friends, however,
>> that just went days without sleep, fixing it.
>>
>> I've outlined my major concerns with TLS across our critical
>> infrastructure going forward on my g+.
>>
>> > and the cost of
>> > lazy thinking about dates. (I feel a little superior because in 1968
>> > Multics
>> > standardized on a 72-bit, microsecond-resolution hardware clock
>> > because the designers actually thought about long-lived systems
>> > (actually
>>
>> I agree that was far-thinking. I too worry about Y2036 and Y2038, and
>> do my best to make sure those aren't problems.
>>
>> it seems likely some software will last even longer than that.
>>
>> > only 56 bits of the original clock worked, but the hardware was not
>> > expected
>> > to last until the remaining bits could be added)).
>>
>> Multics died. It would not have scaled to the internet. And crypto
>> development and public deployment COULD have gone more hand in hand if
>> it weren't basically illegal until 1994, and maybe before then, some
>> reasonable security could have been embedded deep into more protocols.
>>
>> It would have been nice to have had a secured X11 protocol, or
>> kerberos made globally deployable, or things like mosh, in the 80s. In
>> terms of more recent events, I happen to have liked HIP.
>>
>> We don't know how to build secured network systems to this day, that
>> can survive an exposure to hundreds of millions of potential
>> attackers.
>> >
>> >
>> > The open source movement, unfortunately, made a monoculture of the SSL
>> > source code, so it's much more dangerous and the vulnerable attack
>> > surface
>> > of deployments is enormous.
>>
>> No it didn't. Alternatives to openssl exist - gnutls, cyassl, and
>> polarssl are also open source. Libraries that merely implement
>> primitives well like nettle, gmp, and libsodium - all developed later
>> - also exist.
>>
>> I am GLAD we don't have a monoculture in crypto.
>>
>> What happened was mostly inertia from openssl being the first even
>> semi-legal library for crypto operations and huge demand for the
>> functionality backed up with too little understanding of the risks.
>>
>>
>> >
>> >
>> >
>> > Rant off. The summary is that good engineering is not applied where it
>> > must
>> > be for the public interest. That remains true even if the NSA actually
>> > snuck this code into the SSL implementation.
>> >
>> >
>> >
>> > On Friday, April 11, 2014 2:22pm, "Dave Taht" <dave.taht@gmail.com>
>> said:
>> >
>> >> http://www.wired.com/2014/04/heartbleedslesson/
>> >>
>> >> And Dan Kaminsky writes about "Code in the Age of Cholera"
>> >>
>> >> http://dankaminsky.com/2014/04/10/heartbleed/
>> >>
>> >>
>> >>
>> >> --
>> >> Dave Täht
>> >> _______________________________________________
>> >> Cerowrt-devel mailing list
>> >> Cerowrt-devel@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>> >>
>>
>>
>>
>> --
>> Dave Täht
>>
>> NSFW:
>>
>> https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article
>>



-- 
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2014-04-14 23:41 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-04-11 18:22 [Cerowrt-devel] wired article about bleed and bloat and underfunded critical infrastructure Dave Taht
2014-04-11 19:43 ` dpreed
2014-04-14  0:57   ` Dave Taht
2014-04-14 23:22     ` dpreed
2014-04-14 23:41       ` Dave Taht
