Development issues regarding the cerowrt test router project
diff for duplicates of <@localhost>

diff --git a/a/1.txt b/N1/1.txt
index dfa1d59..2a31883 100644
--- a/a/1.txt
+++ b/N1/1.txt
@@ -1,34 +1,25 @@
- 
- 
-No disagreement here. I saw a wonderful discussion recently by a researcher at Mentor Graphics about 2 things: VLSI design hacking and low level interconnect hacking. Things we call "hardware" and just assume are designed securely.
- 
+  
+  
+Interesting. If stack pops zeroed memory, a stack machine would fix the subroutine call privilege drop issue. Also register zeroing on syscall return avoids privilege leaks. Linux on Intel 64 doesn't do this :-(
+  
 
- 
-They are not. The hardware designers at the chip and board level know little or nothing about security techniques. They don't work with systems people who build with their hardware to limit undefined or covert behaviors.
- 
+  
+The mill is very interesting.   
+  
 
- 
-Systems people in turn make unreasonable and often wrong assumptions about what is hard about hardware. Assumptions about what it won't do, in particular.
- 
+  
+One concern, I have recently realized that it is not fully open like RISC-V. I don't blame its    developers for wanting a ROI. But adoption may require rethinking that choice. These days, shared standardized infrastructure tends to require open adoptability.
+  
+  
 
- 
-We need to treat hardware like we treat software. Full of bugs, easily compromised. There are approaches to reliability and security that we know, that are tractable. But to apply them we need to drop the fictional idea that hardware is hard... It's soft.
- 
-
- 
-The principle of least privilege is one of those. The end to end argument should be applied to bus protocols like CAN, for the same reason.
- 
- 
- 
-
- 
- 
- 
- 
- 
->  
-> On Oct 4, 2017 at 12:38 PM,  <Dave Taht>  wrote:
->  
->  
->  well, I still think the system is rotten to its (cpu) cores and much better hardware support for security is needed to start from in order to have better software. Multics pioneered a few things in that department as I recall, but research mostly died in the 90s... Blatant Plug: The mill cpu folk are giving a talk about how they do secure interprocess communication tonight in san jose, ca. I'm going. While I expect to be cheered up by the design (the underlying architecture supports memory protections down to the byte, not page, level, and may be largely immune to ROP) - I expect to be depressed by how far away they still remain from building the darn thing. https://millcomputing.com/event/inter-process-communication-talk-on-october-4-2017/ 
+  
+  
+  
+  
+  
+>   
+> On Oct 7, 2017 at 4:28 PM,  <Dave Taht>  wrote:
+>   
+>   
+>  I misstated something, fix below. On Sat, Oct 7, 2017 at 11:32 AM, Dave Taht wrote:  >  On Sat, Oct 7, 2017 at 6:33 AM, dpreed wrote:  >>  No disagreement here. I saw a wonderful discussion recently by a researcher  >>  at Mentor Graphics about 2 things: VLSI design hacking and low level  >>  interconnect hacking. Things we call "hardware" and just assume are designed  >>  securely. Was this filmed, btw?  >>  They are not. The hardware designers at the chip and board level know little  >>  or nothing about security techniques. They don't work with systems people  >>  who build with their hardware to limit undefined or covert behaviors.  >>   >>  Systems people in turn make unreasonable and often wrong assumptions about  >>  what is hard about hardware. Assumptions about what it won't do, in  >>  particular.  >>   >>  We need to treat hardware like we treat software. Full of bugs, easily  >>  compromised. There are approaches to reliability and security that we know,  >>  that are tractable. But to apply them we need to drop the fictional idea  >>  that hardware is hard... It's soft.  >   >  hardware design tools and software seem stuck in the 80s.  >   >>  The principle of least privilege is one of those.  >   >  Everybody here probably knows by now how much I am a mill cpu fan.  >   >  The principle of least privs, on a mill, can apply to individual subroutines.  >   >  The talk (it's up at [0], but because it has to cover so much prior  >  material doesn't really get rolling till slide 30) highlighted how  >  they do secure IPC, and transfer memory access privs around, cheaply.  >   >  One thing I hadn't realized was that the belt concept[1] resulted in  >  having no register "rubble" left over from making a normal... or! IPC  >  call that changed privs. 
Say you have a belt with values like:  >   >  3|4|2|1|5|6|7|8  >   >  a subroutine call, with arguments  >   >  jsr somewhere,b1,b4,b3  >   >  creates a new belt (so the called routine sees no other registers from  >  the caller)  >   >  4,5,1,X,X,X,X,X # (the mill has a concept of "not a value, or NAR")  >   >  On a return, the same idea applies, where the return values are dropped  >  at the head of the callee's belt. head of the callers belt, I meant.  >  callee does some work:  >   >  8|1|2|3|6|2|7|1  >  ...  >  retn b1,b5  >   >  Which drops those two values only on the callers belt, and discards  >  everything else. SSA, everywhere.  >   >  callee belt becomes: caller belt becomes  >   >  1|2|3|4|2|1|5|6  >   >  This makes peer to peer based secure IPC (Where normally you'd have a  >  priv escalation call like syscall, or attempt sandboxing) a snap,  >  instead of making a jsr, you make a "portal" call, which also ets up  >  memory perms, etc.  >   >  Me trying to explain here how they handle priv (de)escalation  >  (switching between "turfs" and so on) is way beyond the scope of what  >  I could write here, let me just say their work is computer  >  architecture Pr0n of the highest order, and I've lost many, many  >  weekends to grokking it all. [2].  >   >>  The end to end argument  >>  should be applied to bus protocols like CAN, for the same reason.  >   >  Too late!  >   >  [0] https://millcomputing.com/docs/inter-process-communication/  >  [1] https://en.wikipedia.org/wiki/Belt_machine  >  [2] https://millcomputing.com/docs/  >   >>   >>  On Oct 4, 2017 at 12:38 PM, wrote:  >>   >>  well, I still think the system is rotten to its (cpu) cores and much  >>  better hardware support for security is needed to start from in order  >>  to have better software. Multics pioneered a few things in that  >>  department as I recall, but research mostly died in the 90s...  
>>   >>  Blatant Plug: The mill cpu folk are giving a talk about how they do  >>  secure interprocess communication tonight in san jose, ca. I'm going.  >>  While I expect to be cheered up by the design (the underlying  >>  architecture supports memory protections down to the byte, not page,  >>  level, and may be largely immune to ROP) - I expect to be depressed by  >>  how far away they still remain from building the darn thing.  >>   >>  https://millcomputing.com/event/inter-process-communication-talk-on-october-4-2017/  >>   >>   >   >   >   >  --  >   >  Dave Täht  >  CEO, TekLibre, LLC  >  http://www.teklibre.com  >  Tel: 1-669-226-2619 -- Dave Täht CEO, TekLibre, LLC http://www.teklibre.com Tel: 1-669-226-2619  
 >
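The message above argues that zeroing registers on syscall return would avoid privilege leaks. A hypothetical Python sketch (plain dicts standing in for a register file; no real kernel interface, and the handler/value names are invented for illustration) of why leftover registers leak and what scrubbing fixes:

```python
# Registers modeled as a dict; kernel_handler is a stand-in for
# privileged code that uses r2 as scratch space.
def kernel_handler(regs):
    regs["r2"] = 0xDEADBEEF   # privileged intermediate value
    return 42                  # syscall result

def syscall_leaky(regs):
    ret = kernel_handler(regs)
    regs["r0"] = ret
    return regs                # r2 still holds kernel scratch: a leak

def syscall_scrubbed(regs):
    ret = kernel_handler(regs)
    scrubbed = {r: 0 for r in regs}  # zero every register on return
    scrubbed["r0"] = ret             # expose only the result
    return scrubbed

regs = {"r0": 0, "r1": 7, "r2": 0}
assert syscall_leaky(dict(regs))["r2"] == 0xDEADBEEF  # caller sees scratch
assert syscall_scrubbed(dict(regs))["r2"] == 0        # leak closed
```

The scrubbed variant is what the poster notes Linux on Intel 64 does not do in full.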
diff --git a/a/2.bin b/N1/2.bin
index f2dbcb3..c15d786 100644
--- a/a/2.bin
+++ b/N1/2.bin
@@ -1,15 +1,129 @@
-<div id="edo-message"><div>No disagreement here. I saw a wonderful discussion recently by a researcher at Mentor Graphics about 2 things: VLSI design hacking and low level interconnect hacking. Things we call "hardware" and just assume are designed securely.</div><div><br></div><div>They are not. The hardware designers at the chip and board level know little or nothing about security techniques. They don't work with systems people who build with their hardware to limit undefined or covert behaviors.</div><div><br></div><div>Systems people in turn make unreasonable and often wrong assumptions about what is hard about hardware. Assumptions about what it won't do, in particular.</div><div><br></div><div>We need to treat hardware like we treat software. Full of bugs, easily compromised. There are approaches to reliability and security that we know, that are tractable. But to apply them we need to drop the fictional idea that hardware is hard... It's soft.</div><div><br></div><div>The principle of least privilege is one of those. The end to end argument should be applied to bus protocols like CAN, for the same reason.<br><br><br><div id="edo-signature"></div></div></div><div id="edo-original"><div><blockquote type="cite"><div id="edo-meta">On Oct 4, 2017 at 12:38 PM, &lt;<a href="mailto:dave.taht@gmail.com">Dave Taht</a>&gt; wrote: <br><br></div><pre>well, I still think the system is rotten to its (cpu) cores and much
-better hardware support for security is needed to start from in order
-to have better software. Multics pioneered a few things in that
-department as I recall, but research mostly died in the 90s...
-
-Blatant Plug: The mill cpu folk are giving a talk about how they do
-secure interprocess communication tonight in san jose, ca. I'm going.
-While I expect to be cheered up by the design (the underlying
-architecture supports memory protections down to the byte, not page,
-level, and may be largely immune to ROP) - I expect to be depressed by
-how far away they still remain from building the darn thing.
-
-https://millcomputing.com/event/inter-process-communication-talk-on-october-4-2017/
-</pre>  
+<div id="edo-message"><div>Interesting. If stack pops zeroed memory, a stack machine would fix the subroutine call privilege drop issue. Also register zeroing on syscall return avoids privilege leaks. Linux on Intel 64 doesn't do this :-(</div><div><br></div><div>The mill is very interesting.&nbsp;</div><div><br></div><div>One concern, I have recently realized that it is not fully open like RISC-V. I don't blame its&nbsp; developers for wanting a ROI. But adoption may require rethinking that choice. These days, shared standardized infrastructure tends to require open adoptability.<br><br><div id="edo-signature"></div></div></div><div id="edo-original"><div><blockquote type="cite"><div id="edo-meta">On Oct 7, 2017 at 4:28 PM, &lt;<a href="mailto:dave.taht@gmail.com">Dave Taht</a>&gt; wrote: <br><br></div><pre>I misstated something, fix below.
+
+On Sat, Oct 7, 2017 at 11:32 AM, Dave Taht <dave.taht@gmail.com> wrote:
+&gt; On Sat, Oct 7, 2017 at 6:33 AM, dpreed <dpreed@reed.com> wrote:
+&gt;&gt; No disagreement here. I saw a wonderful discussion recently by a researcher
+&gt;&gt; at Mentor Graphics about 2 things: VLSI design hacking and low level
+&gt;&gt; interconnect hacking. Things we call "hardware" and just assume are designed
+&gt;&gt; securely.
+
+Was this filmed, btw?
+
+&gt;&gt; They are not. The hardware designers at the chip and board level know little
+&gt;&gt; or nothing about security techniques. They don't work with systems people
+&gt;&gt; who build with their hardware to limit undefined or covert behaviors.
+&gt;&gt;
+&gt;&gt; Systems people in turn make unreasonable and often wrong assumptions about
+&gt;&gt; what is hard about hardware. Assumptions about what it won't do, in
+&gt;&gt; particular.
+&gt;&gt;
+&gt;&gt; We need to treat hardware like we treat software. Full of bugs, easily
+&gt;&gt; compromised. There are approaches to reliability and security that we know,
+&gt;&gt; that are tractable. But to apply them we need to drop the fictional idea
+&gt;&gt; that hardware is hard... It's soft.
+&gt;
+&gt; hardware design tools and software seem stuck in the 80s.
+&gt;
+&gt;&gt; The principle of least privilege is one of those.
+&gt;
+&gt; Everybody here probably knows by now how much I am a mill cpu fan.
+&gt;
+&gt; The principle of least privs, on a mill, can apply to individual subroutines.
+&gt;
+&gt; The talk (it's up at [0], but because it has to cover so much prior
+&gt; material doesn't really get rolling till slide 30) highlighted how
+&gt; they do secure IPC, and transfer memory access privs around, cheaply.
+&gt;
+&gt; One thing I hadn't realized was that the belt concept[1] resulted in
+&gt; having no register "rubble" left over from making a normal... or! IPC
+&gt; call that changed privs. Say you have a belt with values like:
+&gt;
+&gt; 3|4|2|1|5|6|7|8
+&gt;
+&gt; a subroutine call, with arguments
+&gt;
+&gt; jsr somewhere,b1,b4,b3
+&gt;
+&gt; creates a new belt (so the called routine sees no other registers from
+&gt; the caller)
+&gt;
+&gt; 4,5,1,X,X,X,X,X # (the mill has a concept of "not a value, or NAR")
+&gt;
+&gt; On a return, the same idea applies, where the return values are dropped
+&gt; at the head of the callee's belt.
+
+head of the callers belt, I meant.
+
+&gt; callee does some work:
+&gt;
+&gt; 8|1|2|3|6|2|7|1
+&gt; ...
+&gt; retn b1,b5
+&gt;
+&gt; Which drops those two values only on the callers belt, and discards
+&gt; everything else. SSA, everywhere.
+&gt;
+&gt; callee belt becomes:
+
+caller belt becomes
+&gt;
+&gt; 1|2|3|4|2|1|5|6
+&gt;
+&gt; This makes peer to peer based secure IPC (Where normally you'd have a
+&gt; priv escalation call like syscall, or attempt sandboxing) a snap,
+&gt; instead of making a jsr, you make a "portal" call, which also sets up
+&gt; memory perms, etc.
+&gt;
+&gt; Me trying to explain here how they handle priv (de)escalation
+&gt; (switching between "turfs" and so on) is way beyond the scope of what
+&gt; I could write here, let me just say their work is computer
+&gt; architecture Pr0n of the highest order, and I've lost many, many
+&gt; weekends to grokking it all. [2].
+&gt;
+&gt;&gt; The end to end argument
+&gt;&gt; should be applied to bus protocols like CAN, for the same reason.
+&gt;
+&gt; Too late!
+&gt;
+&gt; [0] https://millcomputing.com/docs/inter-process-communication/
+&gt; [1] https://en.wikipedia.org/wiki/Belt_machine
+&gt; [2] https://millcomputing.com/docs/
+&gt;
+&gt;&gt;
+&gt;&gt; On Oct 4, 2017 at 12:38 PM, <dave taht=""> wrote:
+&gt;&gt;
+&gt;&gt; well, I still think the system is rotten to its (cpu) cores and much
+&gt;&gt; better hardware support for security is needed to start from in order
+&gt;&gt; to have better software. Multics pioneered a few things in that
+&gt;&gt; department as I recall, but research mostly died in the 90s...
+&gt;&gt;
+&gt;&gt; Blatant Plug: The mill cpu folk are giving a talk about how they do
+&gt;&gt; secure interprocess communication tonight in san jose, ca. I'm going.
+&gt;&gt; While I expect to be cheered up by the design (the underlying
+&gt;&gt; architecture supports memory protections down to the byte, not page,
+&gt;&gt; level, and may be largely immune to ROP) - I expect to be depressed by
+&gt;&gt; how far away they still remain from building the darn thing.
+&gt;&gt;
+&gt;&gt; https://millcomputing.com/event/inter-process-communication-talk-on-october-4-2017/
+&gt;&gt;
+&gt;&gt;
+&gt;
+&gt;
+&gt;
+&gt; --
+&gt;
+&gt; Dave Täht
+&gt; CEO, TekLibre, LLC
+&gt; http://www.teklibre.com
+&gt; Tel: 1-669-226-2619
+
+
+
+--  
+
+Dave Täht
+CEO, TekLibre, LLC
+http://www.teklibre.com
+Tel: 1-669-226-2619
+</dave></dpreed@reed.com></dave.taht@gmail.com></pre>  
  <br> </blockquote></div></div>
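The belt walk-through quoted above (jsr creating a fresh belt, retn dropping only the named results at the head of the caller's belt) can be checked with a small simulation. This is a hypothetical Python sketch of the semantics as described in the email, not any real Mill toolchain or ISA model; `call` and `ret` are invented helper names.

```python
NAR = "X"  # the mill's "not a result" placeholder, written X in the email

def call(belt, *args):
    # jsr creates a new belt: the selected positions drop at the front,
    # everything else is NAR, so the callee sees none of the caller's state.
    new = [belt[i] for i in args] + [NAR] * (len(belt) - len(args))
    return new[:len(belt)]

def ret(caller_belt, callee_belt, *results):
    # retn drops only the selected results at the head of the caller's
    # belt; the rest of the callee's belt is discarded entirely.
    dropped = [callee_belt[i] for i in results]
    return (dropped + caller_belt)[:len(caller_belt)]

caller = [3, 4, 2, 1, 5, 6, 7, 8]
callee = call(caller, 1, 4, 3)            # jsr somewhere,b1,b4,b3
assert callee == [4, 5, 1, NAR, NAR, NAR, NAR, NAR]

callee = [8, 1, 2, 3, 6, 2, 7, 1]         # callee does some work
caller = ret(caller, callee, 1, 5)        # retn b1,b5
assert caller == [1, 2, 3, 4, 2, 1, 5, 6]  # matches the email's example
```

Both assertions reproduce the belt states given in the quoted message, which is the point of the "no register rubble" observation: nothing of the callee survives except the explicitly returned values.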
diff --git a/a/content_digest b/N1/content_digest
index 9059700..f65d987 100644
--- a/a/content_digest
+++ b/N1/content_digest
@@ -2,64 +2,172 @@
  "ref\0CAA93jw59DVzLVQv3mkdYNx2YduDTn73PJx6Zn7kX8FymLB_hBQ@mail.gmail.com\0"
  "ref\0CAA93jw7ZDwv8yFFOBuxUP8bp-K+3tQ9Xj9inHYHt3thMkKZw2A@mail.gmail.com\0"
  "ref\082be7dac-c30b-449d-a392-305c31b83519@reed.com\0"
+ "ref\059d8d7ae.5b37c80a.9c70e.c057SMTPIN_ADDED_BROKEN@mx.google.com\0"
+ "ref\0CAA93jw4gvei441UgyCTE5qn8XouZFDt_t0C88qG9RgnDyS83hA@mail.gmail.com\0"
+ "ref\0CAA93jw4xK6RnWPpB-7UD2mTKq79vGQizwenquhPzck=TBk=8WQ@mail.gmail.com\0"
  "From\0dpreed <dpreed@reed.com>\0"
  "Subject\0Re: [Cerowrt-devel] dnsmasq CVEs\0"
- "Date\0Sat, 7 Oct 2017 09:33:34 -0400\0"
+ "Date\0Sat, 7 Oct 2017 16:54:37 -0400\0"
  "To\0Dave Taht <dave.taht@gmail.com>\0"
  "Cc\0Rich Brown <richb.hanover@gmail.com>"
  " cerowrt-devel@lists.bufferbloat.net <cerowrt-devel@lists.bufferbloat.net>\0"
  "\01:1\0"
  "b\0"
- " \n"
- " \n"
- "No disagreement here. I saw a wonderful discussion recently by a researcher at Mentor Graphics about 2 things: VLSI design hacking and low level interconnect hacking. Things we call \"hardware\" and just assume are designed securely.\n"
- " \n"
+ "  \n"
+ "  \n"
+ "Interesting. If stack pops zeroed memory, a stack machine would fix the subroutine call privilege drop issue. Also register zeroing on syscall return avoids privilege leaks. Linux on Intel 64 doesn't do this :-(\n"
+ "  \n"
  "\n"
- " \n"
- "They are not. The hardware designers at the chip and board level know little or nothing about security techniques. They don't work with systems people who build with their hardware to limit undefined or covert behaviors.\n"
- " \n"
+ "  \n"
+ "The mill is very interesting.   \n"
+ "  \n"
  "\n"
- " \n"
- "Systems people in turn make unreasonable and often wrong assumptions about what is hard about hardware. Assumptions about what it won't do, in particular.\n"
- " \n"
+ "  \n"
+ "One concern, I have recently realized that it is not fully open like RISC-V. I don't blame its    developers for wanting a ROI. But adoption may require rethinking that choice. These days, shared standardized infrastructure tends to require open adoptability.\n"
+ "  \n"
+ "  \n"
  "\n"
- " \n"
- "We need to treat hardware like we treat software. Full of bugs, easily compromised. There are approaches to reliability and security that we know, that are tractable. But to apply them we need to drop the fictional idea that hardware is hard... It's soft.\n"
- " \n"
- "\n"
- " \n"
- "The principle of least privilege is one of those. The end to end argument should be applied to bus protocols like CAN, for the same reason.\n"
- " \n"
- " \n"
- " \n"
- "\n"
- " \n"
- " \n"
- " \n"
- " \n"
- " \n"
- ">  \n"
- "> On Oct 4, 2017 at 12:38 PM,  <Dave Taht>  wrote:\n"
- ">  \n"
- ">  \n"
- ">  well, I still think the system is rotten to its (cpu) cores and much better hardware support for security is needed to start from in order to have better software. Multics pioneered a few things in that department as I recall, but research mostly died in the 90s... Blatant Plug: The mill cpu folk are giving a talk about how they do secure interprocess communication tonight in san jose, ca. I'm going. While I expect to be cheered up by the design (the underlying architecture supports memory protections down to the byte, not page, level, and may be largely immune to ROP) - I expect to be depressed by how far away they still remain from building the darn thing. https://millcomputing.com/event/inter-process-communication-talk-on-october-4-2017/ \n"
+ "  \n"
+ "  \n"
+ "  \n"
+ "  \n"
+ "  \n"
+ ">   \n"
+ "> On Oct 7, 2017 at 4:28 PM,  <Dave Taht>  wrote:\n"
+ ">   \n"
+ ">   \n"
+ ">  I misstated something, fix below. On Sat, Oct 7, 2017 at 11:32 AM, Dave Taht wrote:  >  On Sat, Oct 7, 2017 at 6:33 AM, dpreed wrote:  >>  No disagreement here. I saw a wonderful discussion recently by a researcher  >>  at Mentor Graphics about 2 things: VLSI design hacking and low level  >>  interconnect hacking. Things we call \"hardware\" and just assume are designed  >>  securely. Was this filmed, btw?  >>  They are not. The hardware designers at the chip and board level know little  >>  or nothing about security techniques. They don't work with systems people  >>  who build with their hardware to limit undefined or covert behaviors.  >>   >>  Systems people in turn make unreasonable and often wrong assumptions about  >>  what is hard about hardware. Assumptions about what it won't do, in  >>  particular.  >>   >>  We need to treat hardware like we treat software. Full of bugs, easily  >>  compromised. There are approaches to reliability and security that we know,  >>  that are tractable. But to apply them we need to drop the fictional idea  >>  that hardware is hard... It's soft.  >   >  hardware design tools and software seem stuck in the 80s.  >   >>  The principle of least privilege is one of those.  >   >  Everybody here probably knows by now how much I am a mill cpu fan.  >   >  The principle of least privs, on a mill, can apply to individual subroutines.  >   >  The talk (it's up at [0], but because it has to cover so much prior  >  material doesn't really get rolling till slide 30) highlighted how  >  they do secure IPC, and transfer memory access privs around, cheaply.  >   >  One thing I hadn't realized was that the belt concept[1] resulted in  >  having no register \"rubble\" left over from making a normal... or! IPC  >  call that changed privs. 
Say you have a belt with values like:  >   >  3|4|2|1|5|6|7|8  >   >  a subroutine call, with arguments  >   >  jsr somewhere,b1,b4,b3  >   >  creates a new belt (so the called routine sees no other registers from  >  the caller)  >   >  4,5,1,X,X,X,X,X # (the mill has a concept of \"not a value, or NAR\")  >   >  On a return, the same idea applies, where the return values are dropped  >  at the head of the callee's belt. head of the callers belt, I meant.  >  callee does some work:  >   >  8|1|2|3|6|2|7|1  >  ...  >  retn b1,b5  >   >  Which drops those two values only on the callers belt, and discards  >  everything else. SSA, everywhere.  >   >  callee belt becomes: caller belt becomes  >   >  1|2|3|4|2|1|5|6  >   >  This makes peer to peer based secure IPC (Where normally you'd have a  >  priv escalation call like syscall, or attempt sandboxing) a snap,  >  instead of making a jsr, you make a \"portal\" call, which also ets up  >  memory perms, etc.  >   >  Me trying to explain here how they handle priv (de)escalation  >  (switching between \"turfs\" and so on) is way beyond the scope of what  >  I could write here, let me just say their work is computer  >  architecture Pr0n of the highest order, and I've lost many, many  >  weekends to grokking it all. [2].  >   >>  The end to end argument  >>  should be applied to bus protocols like CAN, for the same reason.  >   >  Too late!  >   >  [0] https://millcomputing.com/docs/inter-process-communication/  >  [1] https://en.wikipedia.org/wiki/Belt_machine  >  [2] https://millcomputing.com/docs/  >   >>   >>  On Oct 4, 2017 at 12:38 PM, wrote:  >>   >>  well, I still think the system is rotten to its (cpu) cores and much  >>  better hardware support for security is needed to start from in order  >>  to have better software. Multics pioneered a few things in that  >>  department as I recall, but research mostly died in the 90s...  
>>   >>  Blatant Plug: The mill cpu folk are giving a talk about how they do  >>  secure interprocess communication tonight in san jose, ca. I'm going.  >>  While I expect to be cheered up by the design (the underlying  >>  architecture supports memory protections down to the byte, not page,  >>  level, and may be largely immune to ROP) - I expect to be depressed by  >>  how far away they still remain from building the darn thing.  >>   >>  https://millcomputing.com/event/inter-process-communication-talk-on-october-4-2017/  >>   >>   >   >   >   >  --  >   >  Dave T\303\244ht  >  CEO, TekLibre, LLC  >  http://www.teklibre.com  >  Tel: 1-669-226-2619 -- Dave T\303\244ht CEO, TekLibre, LLC http://www.teklibre.com Tel: 1-669-226-2619  \n"
  >
  "\01:2\0"
  "b\0"
- "<div id=\"edo-message\"><div>No disagreement here. I saw a wonderful discussion recently by a researcher at Mentor Graphics about 2 things: VLSI design hacking and low level interconnect hacking. Things we call \"hardware\" and just assume are designed securely.</div><div><br></div><div>They are not. The hardware designers at the chip and board level know little or nothing about security techniques. They don't work with systems people who build with their hardware to limit undefined or covert behaviors.</div><div><br></div><div>Systems people in turn make unreasonable and often wrong assumptions about what is hard about hardware. Assumptions about what it won't do, in particular.</div><div><br></div><div>We need to treat hardware like we treat software. Full of bugs, easily compromised. There are approaches to reliability and security that we know, that are tractable. But to apply them we need to drop the fictional idea that hardware is hard... It's soft.</div><div><br></div><div>The principle of least privilege is one of those. The end to end argument should be applied to bus protocols like CAN, for the same reason.<br><br><br><div id=\"edo-signature\"></div></div></div><div id=\"edo-original\"><div><blockquote type=\"cite\"><div id=\"edo-meta\">On Oct 4, 2017 at 12:38 PM, &lt;<a href=\"mailto:dave.taht@gmail.com\">Dave Taht</a>&gt; wrote: <br><br></div><pre>well, I still think the system is rotten to its (cpu) cores and much\r\n"
- "better hardware support for security is needed to start from in order\r\n"
- "to have better software. Multics pioneered a few things in that\r\n"
- "department as I recall, but research mostly died in the 90s...\r\n"
+ "<div id=\"edo-message\"><div>Interesting. If stack pops zeroed memory, a stack machine would fix the subroutine call privilege drop issue. Also register zeroing on syscall return avoids privilege leaks. Linux on Intel 64 doesn't do this :-(</div><div><br></div><div>The mill is very interesting.&nbsp;</div><div><br></div><div>One concern, I have recently realized that it is not fully open like RISC-V. I don't blame its&nbsp; developers for wanting a ROI. But adoption may require rethinking that choice. These days, shared standardized infrastructure tends to require open adoptability.<br><br><div id=\"edo-signature\"></div></div></div><div id=\"edo-original\"><div><blockquote type=\"cite\"><div id=\"edo-meta\">On Oct 7, 2017 at 4:28 PM, &lt;<a href=\"mailto:dave.taht@gmail.com\">Dave Taht</a>&gt; wrote: <br><br></div><pre>I misstated something, fix below.\r\n"
+ "\r\n"
+ "On Sat, Oct 7, 2017 at 11:32 AM, Dave Taht <dave.taht@gmail.com> wrote:\r\n"
+ "&gt; On Sat, Oct 7, 2017 at 6:33 AM, dpreed <dpreed@reed.com> wrote:\r\n"
+ "&gt;&gt; No disagreement here. I saw a wonderful discussion recently by a researcher\r\n"
+ "&gt;&gt; at Mentor Graphics about 2 things: VLSI design hacking and low level\r\n"
+ "&gt;&gt; interconnect hacking. Things we call \"hardware\" and just assume are designed\r\n"
+ "&gt;&gt; securely.\r\n"
+ "\r\n"
+ "Was this filmed, btw?\r\n"
+ "\r\n"
+ "&gt;&gt; They are not. The hardware designers at the chip and board level know little\r\n"
+ "&gt;&gt; or nothing about security techniques. They don't work with systems people\r\n"
+ "&gt;&gt; who build with their hardware to limit undefined or covert behaviors.\r\n"
+ "&gt;&gt;\r\n"
+ "&gt;&gt; Systems people in turn make unreasonable and often wrong assumptions about\r\n"
+ "&gt;&gt; what is hard about hardware. Assumptions about what it won't do, in\r\n"
+ "&gt;&gt; particular.\r\n"
+ "&gt;&gt;\r\n"
+ "&gt;&gt; We need to treat hardware like we treat software. Full of bugs, easily\r\n"
+ "&gt;&gt; compromised. There are approaches to reliability and security that we know,\r\n"
+ "&gt;&gt; that are tractable. But to apply them we need to drop the fictional idea\r\n"
+ "&gt;&gt; that hardware is hard... It's soft.\r\n"
+ "&gt;\r\n"
+ "&gt; hardware design tools and software seem stuck in the 80s.\r\n"
+ "&gt;\r\n"
+ "&gt;&gt; The principle of least privilege is one of those.\r\n"
+ "&gt;\r\n"
+ "&gt; Everybody here probably knows by now how much I am a mill cpu fan.\r\n"
+ "&gt;\r\n"
+ "&gt; The principle of least privs, on a mill, can apply to individual subroutines.\r\n"
+ "&gt;\r\n"
+ "&gt; The talk (it's up at [0], but because it has to cover so much prior\r\n"
+ "&gt; material doesn't really get rolling till slide 30) highlighted how\r\n"
+ "&gt; they do secure IPC, and transfer memory access privs around, cheaply.\r\n"
+ "&gt;\r\n"
+ "&gt; One thing I hadn't realized was that the belt concept[1] resulted in\r\n"
+ "&gt; having no register \"rubble\" left over from making a normal... or! IPC\r\n"
+ "&gt; call that changed privs. Say you have a belt with values like:\r\n"
+ "&gt;\r\n"
+ "&gt; 3|4|2|1|5|6|7|8\r\n"
+ "&gt;\r\n"
+ "&gt; a subroutine call, with arguments\r\n"
+ "&gt;\r\n"
+ "&gt; jsr somewhere,b1,b4,b3\r\n"
+ "&gt;\r\n"
+ "&gt; creates a new belt (so the called routine sees no other registers from\r\n"
+ "&gt; the caller)\r\n"
+ "&gt;\r\n"
+ "&gt; 4,5,1,X,X,X,X,X # (the mill has a concept of \"not a value, or NAR\")\r\n"
+ "&gt;\r\n"
+ "&gt; On a return, the same idea applies, where the return values are dropped\r\n"
+ "&gt; at the head of the callee's belt.\r\n"
+ "\r\n"
+ "head of the callers belt, I meant.\r\n"
+ "\r\n"
+ "&gt; callee does some work:\r\n"
+ "&gt;\r\n"
+ "&gt; 8|1|2|3|6|2|7|1\r\n"
+ "&gt; ...\r\n"
+ "&gt; retn b1,b5\r\n"
+ "&gt;\r\n"
+ "&gt; Which drops those two values only on the callers belt, and discards\r\n"
+ "&gt; everything else. SSA, everywhere.\r\n"
+ "&gt;\r\n"
+ "&gt; callee belt becomes:\r\n"
+ "\r\n"
+ "caller belt becomes\r\n"
+ "&gt;\r\n"
+ "&gt; 1|2|3|4|2|1|5|6\r\n"
+ "&gt;\r\n"
+ "&gt; This makes peer to peer based secure IPC (Where normally you'd have a\r\n"
+ "&gt; priv escalation call like syscall, or attempt sandboxing) a snap,\r\n"
+ "&gt; instead of making a jsr, you make a \"portal\" call, which also ets up\r\n"
+ "&gt; memory perms, etc.\r\n"
+ "&gt;\r\n"
+ "&gt; Me trying to explain here how they handle priv (de)escalation\r\n"
+ "&gt; (switching between \"turfs\" and so on) is way beyond the scope of what\r\n"
+ "&gt; I could write here, let me just say their work is computer\r\n"
+ "&gt; architecture Pr0n of the highest order, and I've lost many, many\r\n"
+ "&gt; weekends to grokking it all. [2].\r\n"
+ "&gt;\r\n"
+ "&gt;&gt; The end to end argument\r\n"
+ "&gt;&gt; should be applied to bus protocols like CAN, for the same reason.\r\n"
+ "&gt;\r\n"
+ "&gt; Too late!\r\n"
+ "&gt;\r\n"
+ "&gt; [0] https://millcomputing.com/docs/inter-process-communication/\r\n"
+ "&gt; [1] https://en.wikipedia.org/wiki/Belt_machine\r\n"
+ "&gt; [2] https://millcomputing.com/docs/\r\n"
+ "&gt;\r\n"
+ "&gt;&gt;\r\n"
+ "&gt;&gt; On Oct 4, 2017 at 12:38 PM, <dave taht=\"\"> wrote:\r\n"
+ "&gt;&gt;\r\n"
+ "&gt;&gt; well, I still think the system is rotten to its (cpu) cores and much\r\n"
+ "&gt;&gt; better hardware support for security is needed to start from in order\r\n"
+ "&gt;&gt; to have better software. Multics pioneered a few things in that\r\n"
+ "&gt;&gt; department as I recall, but research mostly died in the 90s...\r\n"
+ "&gt;&gt;\r\n"
+ "&gt;&gt; Blatant Plug: The mill cpu folk are giving a talk about how they do\r\n"
+ "&gt;&gt; secure interprocess communication tonight in san jose, ca. I'm going.\r\n"
+ "&gt;&gt; While I expect to be cheered up by the design (the underlying\r\n"
+ "&gt;&gt; architecture supports memory protections down to the byte, not page,\r\n"
+ "&gt;&gt; level, and may be largely immune to ROP) - I expect to be depressed by\r\n"
+ "&gt;&gt; how far away they still remain from building the darn thing.\r\n"
+ "&gt;&gt;\r\n"
+ "&gt;&gt; https://millcomputing.com/event/inter-process-communication-talk-on-october-4-2017/\r\n"
+ "&gt;&gt;\r\n"
+ "&gt;&gt;\r\n"
+ "&gt;\r\n"
+ "&gt;\r\n"
+ "&gt;\r\n"
+ "&gt; --\r\n"
+ "&gt;\r\n"
+ "&gt; Dave T\303\244ht\r\n"
+ "&gt; CEO, TekLibre, LLC\r\n"
+ "&gt; http://www.teklibre.com\r\n"
+ "&gt; Tel: 1-669-226-2619\r\n"
+ "\r\n"
+ "\r\n"
  "\r\n"
- "Blatant Plug: The mill cpu folk are giving a talk about how they do\r\n"
- "secure interprocess communication tonight in san jose, ca. I'm going.\r\n"
- "While I expect to be cheered up by the design (the underlying\r\n"
- "architecture supports memory protections down to the byte, not page,\r\n"
- "level, and may be largely immune to ROP) - I expect to be depressed by\r\n"
- "how far away they still remain from building the darn thing.\r\n"
+ "--  \r\n"
  "\r\n"
- "https://millcomputing.com/event/inter-process-communication-talk-on-october-4-2017/\r\n"
- "</pre>  \r\n"
+ "Dave T\303\244ht\r\n"
+ "CEO, TekLibre, LLC\r\n"
+ "http://www.teklibre.com\r\n"
+ "Tel: 1-669-226-2619\r\n"
+ "</dave></dpreed@reed.com></dave.taht@gmail.com></pre>  \r\n"
   <br> </blockquote></div></div>
 
-dff70c6cd8af265f0316cd2393b40400f288518047c0d757a368a5d0cd590907
+ca651e05ad43b3dcd423f1f2fdbae51b3b34841b1c755d76814fc984f38f4541
