Development issues regarding the cerowrt test router project
From: dpreed <dpreed@reed.com>
To: Dave Taht <dave.taht@gmail.com>
Cc: "Rich Brown" <richb.hanover@gmail.com>,
	"cerowrt-devel@lists.bufferbloat.net"
	<cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Cerowrt-devel] dnsmasq CVEs
Date: Sat, 7 Oct 2017 16:54:37 -0400	[thread overview]
Message-ID: <@localhost> (raw)
Message-ID: <20171007205437.MDMzlG82CS493p2jM_6KtYlhE7etAtUkHIlOPKaY-V0@z> (raw)
In-Reply-To: <CAA93jw4gvei441UgyCTE5qn8XouZFDt_t0C88qG9RgnDyS83hA@mail.gmail.com> <CAA93jw4xK6RnWPpB-7UD2mTKq79vGQizwenquhPzck=TBk=8WQ@mail.gmail.com>

Interesting. If popping the stack zeroed the memory, a stack machine would fix the privilege-drop issue for subroutine calls. Likewise, zeroing registers on return from a syscall would avoid leaking privileged state. Linux on Intel 64 doesn't do this :-(
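
A minimal userspace sketch of that scrubbing idea, in C. This is an
illustration, not anything from the thread: handle_secret() is an
invented name, and it assumes glibc 2.25+ (or libbsd) for
explicit_bzero(). It can scrub its stack copy of a secret before
returning; the registers the data passed through are exactly what it
cannot reach, which is the gap hardware (or the kernel, on syscall
return) would have to close.

  #define _DEFAULT_SOURCE    /* for explicit_bzero() on glibc >= 2.25 */
  #include <stdio.h>
  #include <string.h>

  /* Invented example: scrub sensitive stack data before returning,
   * the software analogue of a stack machine zeroing memory on pop. */
  static void handle_secret(void)
  {
      char key[32] = "hypothetical secret material";
      printf("used %zu bytes of secret state\n", sizeof key);
      explicit_bzero(key, sizeof key);  /* unlike memset, not optimized away */
  }

  int main(void)
  {
      handle_secret();
      /* The registers that held bytes of key are untouched here;
       * zeroing them on a privilege transition is the part only the
       * hardware or kernel can do. */
      return 0;
  }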
  

  
The Mill is very interesting.

One concern: I have recently realized that it is not fully open in the way RISC-V is. I don't blame its developers for wanting an ROI, but adoption may require rethinking that choice. These days, shared standardized infrastructure tends to require open adoptability.
> On Oct 7, 2017, at 4:28 PM, Dave Taht wrote:
>
> I misstated something, fix below.
>
> On Sat, Oct 7, 2017 at 11:32 AM, Dave Taht wrote:
> > On Sat, Oct 7, 2017 at 6:33 AM, dpreed wrote:
> >> No disagreement here. I saw a wonderful discussion recently by a
> >> researcher at Mentor Graphics about 2 things: VLSI design hacking and
> >> low level interconnect hacking. Things we call "hardware" and just
> >> assume are designed securely.
> >
> > Was this filmed, btw?
> >
> >> They are not. The hardware designers at the chip and board level know
> >> little or nothing about security techniques. They don't work with
> >> systems people who build with their hardware to limit undefined or
> >> covert behaviors.
> >>
> >> Systems people in turn make unreasonable and often wrong assumptions
> >> about what is hard about hardware. Assumptions about what it won't
> >> do, in particular.
> >>
> >> We need to treat hardware like we treat software. Full of bugs, easily
> >> compromised. There are approaches to reliability and security that we
> >> know, that are tractable. But to apply them we need to drop the
> >> fictional idea that hardware is hard... It's soft.
> >
> > hardware design tools and software seem stuck in the 80s.
> >
> >> The principle of least privilege is one of those.
> >
> > Everybody here probably knows by now how much I am a mill cpu fan.
> >
> > The principle of least privs, on a mill, can apply to individual
> > subroutines.
> >
> > The talk (it's up at [0], but because it has to cover so much prior
> > material doesn't really get rolling till slide 30) highlighted how
> > they do secure IPC, and transfer memory access privs around, cheaply.
> >
> > One thing I hadn't realized was that the belt concept [1] resulted in
> > having no register "rubble" left over from making a normal... or! IPC
> > call that changed privs. Say you have a belt with values like:
> >
> > 3|4|2|1|5|6|7|8
> >
> > a subroutine call, with arguments
> >
> > jsr somewhere,b1,b4,b3
> >
> > creates a new belt (so the called routine sees no other registers from
> > the caller):
> >
> > 4,5,1,X,X,X,X,X # (the mill has a concept of "not a value", or NAR)
> >
> > On a return, the same idea applies, where the return values are
> > dropped at the head of the callee's belt.
>
> head of the caller's belt, I meant.
>
> > callee does some work:
> >
> > 8|1|2|3|6|2|7|1
> > ...
> > retn b1,b5
> >
> > Which drops those two values only on the caller's belt, and discards
> > everything else. SSA, everywhere.
> >
> > callee belt becomes:
>
> caller belt becomes:
>
> > 1|2|3|4|2|1|5|6
> >
> > This makes peer-to-peer based secure IPC (where normally you'd have a
> > priv escalation call like syscall, or attempt sandboxing) a snap:
> > instead of making a jsr, you make a "portal" call, which also sets up
> > memory perms, etc.
> >
> > Me trying to explain here how they handle priv (de)escalation
> > (switching between "turfs" and so on) is way beyond the scope of what
> > I could write here; let me just say their work is computer
> > architecture Pr0n of the highest order, and I've lost many, many
> > weekends to grokking it all [2].
> >
> >> The end to end argument should be applied to bus protocols like CAN,
> >> for the same reason.
> >
> > Too late!
> >
> > [0] https://millcomputing.com/docs/inter-process-communication/
> > [1] https://en.wikipedia.org/wiki/Belt_machine
> > [2] https://millcomputing.com/docs/
> >
> >> On Oct 4, 2017 at 12:38 PM, wrote:
> >>
> >> well, I still think the system is rotten to its (cpu) cores and much
> >> better hardware support for security is needed to start from in order
> >> to have better software. Multics pioneered a few things in that
> >> department as I recall, but research mostly died in the 90s...
> >>
> >> Blatant Plug: The mill cpu folk are giving a talk about how they do
> >> secure interprocess communication tonight in san jose, ca. I'm going.
> >> While I expect to be cheered up by the design (the underlying
> >> architecture supports memory protections down to the byte, not page,
> >> level, and may be largely immune to ROP) - I expect to be depressed
> >> by how far away they still remain from building the darn thing.
> >>
> >> https://millcomputing.com/event/inter-process-communication-talk-on-october-4-2017/
> >
> > --
> > Dave Täht
> > CEO, TekLibre, LLC
> > http://www.teklibre.com
> > Tel: 1-669-226-2619
>
> --
> Dave Täht
> CEO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-669-226-2619
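
A toy C model of the belt call/return semantics described above; it is
a sketch, not Mill code, and Belt, belt_call(), and belt_return() are
invented names, with NAR as a plain sentinel standing in for the Mill's
"not a value" metadata. The call seeds a fresh belt from the caller's
b1,b4,b3 and the return drops the callee's b1,b5 at the head of the
caller's belt, reproducing the numbers in the example.

  #include <stdio.h>

  #define BELT_LEN 8
  #define NAR     -1   /* sentinel for the Mill's "not a value" / NaR */

  typedef struct { int slot[BELT_LEN]; } Belt;

  /* A call seeds a brand-new belt with the named caller positions;
   * the callee sees nothing else of the caller: no register rubble. */
  static Belt belt_call(const Belt *caller, const int *args, int nargs)
  {
      Belt callee;
      for (int i = 0; i < BELT_LEN; i++)
          callee.slot[i] = (i < nargs) ? caller->slot[args[i]] : NAR;
      return callee;
  }

  /* A return drops the named callee positions at the head of the
   * caller's belt; everything else the callee computed is discarded. */
  static void belt_return(Belt *caller, const Belt *callee,
                          const int *rets, int nrets)
  {
      for (int i = BELT_LEN - 1; i >= nrets; i--)  /* older values slide */
          caller->slot[i] = caller->slot[i - nrets];
      for (int i = 0; i < nrets; i++)              /* results at the head */
          caller->slot[i] = callee->slot[rets[i]];
  }

  static void show(const char *tag, const Belt *b)
  {
      printf("%s", tag);
      for (int i = 0; i < BELT_LEN; i++)
          printf("%s%d", i ? "|" : " ", b->slot[i]);
      printf("\n");
  }

  int main(void)
  {
      Belt caller = {{3, 4, 2, 1, 5, 6, 7, 8}};  /* the belt in the mail */
      int args[] = {1, 4, 3};                    /* jsr somewhere,b1,b4,b3 */
      Belt callee = belt_call(&caller, args, 3);
      show("callee:", &callee);                  /* 4|5|1|-1|... (-1 = NAR) */

      Belt worked = {{8, 1, 2, 3, 6, 2, 7, 1}};  /* callee after some work */
      int rets[] = {1, 5};                       /* retn b1,b5 */
      belt_return(&caller, &worked, rets, 2);
      show("caller:", &caller);                  /* 1|2|3|4|2|1|5|6 */
      return 0;
  }

Running it prints "callee: 4|5|1|-1|-1|-1|-1|-1" and then "caller:
1|2|3|4|2|1|5|6", matching the example; the point is that nothing the
callee computed survives the return except what retn names.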


Thread overview: 13+ messages
2017-10-04  0:43 Rich Brown
2017-10-04  3:49 ` Dave Taht
2017-10-04 13:12   ` David P Reed
2017-10-04 16:38     ` Dave Taht
2017-10-07 13:33       ` dpreed [this message]
2017-10-07 20:54         ` dpreed
     [not found]     ` <59d8d7ae.5b37c80a.9c70e.c057SMTPIN_ADDED_BROKEN@mx.google.com>
2017-10-07 18:32       ` Dave Taht
2017-10-07 20:28         ` Dave Taht
     [not found]     ` <59d8d7b6.06c3370a.2a6e1.858eSMTPIN_ADDED_BROKEN@mx.google.com>
2017-10-07 20:42       ` valdis.kletnieks
2017-10-09  8:32         ` Mikael Abrahamsson
2017-10-09 17:33           ` Dave Taht
2017-10-09 18:37           ` dpreed
  -- strict thread matches above, loose matches on Subject: below --
2017-10-02 18:18 [Cerowrt-devel] dnsmasq cves Dave Taht

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

  List information: https://lists.bufferbloat.net/postorius/lists/cerowrt-devel.lists.bufferbloat.net/

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=@localhost \
    --to=dpreed@reed.com \
    --cc=cerowrt-devel@lists.bufferbloat.net \
    --cc=dave.taht@gmail.com \
    --cc=richb.hanover@gmail.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

  Be sure your reply has a Subject: header at the top and a blank
  line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox