[Cerowrt-devel] dnsmasq CVEs
Dave Taht
dave at taht.net
Mon Oct 9 13:33:05 EDT 2017
Mikael Abrahamsson <swmike at swm.pp.se> writes:
> On Sat, 7 Oct 2017, valdis.kletnieks at vt.edu wrote:
>
>> Know how x86 people complain that SMM mode introduces jitter?
I'm a really jitter-sensitive person. My life is warped by trying to
achieve hard realtime (meaning *0* missed deadlines) in a variety of
OSes and products - everything from rockets to brakes, to medical and
safety systems, to DAWs like "ardour.org". Achieving hard deadlines
below 5ms in modern processors and OSes is incredibly difficult.
Everyone (else) seems willing to sacrifice a little latency here or
there for their special feature - power management, SMM, sidechannel
uploads to various government agencies, what have you.
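If you want numbers for your own box, here is a minimal sketch of the
kind of measurement I mean, in the spirit of cyclictest (the 5ms period
and 100us tolerance are just illustrative choices):

#define _POSIX_C_SOURCE 200112L
/* Wake up every 5 ms on an absolute timer and count how often the OS
 * hands control back late by more than a chosen tolerance. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define PERIOD_NS    5000000LL   /* 5 ms deadline */
#define TOLERANCE_NS  100000LL   /* 100 us of allowed lateness */
#define ITERATIONS   2000

static int64_t ns(struct timespec t)
{
        return (int64_t)t.tv_sec * 1000000000LL + t.tv_nsec;
}

int main(void)
{
        struct timespec next, now;
        int64_t worst = 0;
        int missed = 0;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < ITERATIONS; i++) {
                /* advance the absolute deadline by one period */
                next.tv_nsec += PERIOD_NS;
                while (next.tv_nsec >= 1000000000L) {
                        next.tv_nsec -= 1000000000L;
                        next.tv_sec++;
                }
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
                clock_gettime(CLOCK_MONOTONIC, &now);

                int64_t late = ns(now) - ns(next);  /* lateness past the deadline */
                if (late > worst)
                        worst = late;
                if (late > TOLERANCE_NS)
                        missed++;
        }
        printf("missed %d of %d deadlines, worst lateness %lld us\n",
               missed, ITERATIONS, (long long)(worst / 1000));
        return 0;
}

Run it on a loaded desktop and the worst case usually blows right past
the tolerance, which is rather my point.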
While walking down the street, dodging the smart cars... I try really
hard not to think about all the stuff out there that works correctly
only 99.9999% of the time. But every time I learn of some new "feature"
in some chip that costs latency and/or jitter you cannot control, I
still flash on Monty Python's Mr Creosote, who explodes after adding
one additional wafer-thin mint.
Anyway, my larger point is that one reason we are seeing this explosion
of dedicated processors for various things is that the main CPUs - even
with a ton of cores on die - can no longer reliably meet hard RT
deadlines. Either you try to make a really complicated OS (Linux) do RT
within that flakiness, or you graft the needed features - like, ugh, a
web server - onto a harder-RT OS running on a dedicated chip.
I'd like good harder-RT performance to become a priority again on
mainstream CPUs.
For example, I'd have a dedicated I-cache and D-cache for interrupt
handling. (I mentioned the Mill on this thread. Its interrupt context
switch is 3-5 clocks, vs thousands for Intel. I also get a kick out of
the Propeller architecture - one CPU per I/O.)
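(To put rough numbers on that: at a 3 GHz clock, a few thousand cycles
is on the order of a microsecond of pure entry/exit overhead on every
interrupt, before the handler does any useful work; 3-5 clocks is a
nanosecond or two.)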
>> That's just the
>> tip of the iceberg. Believe it or not, there's an entire IPv4/IPv6 stack *and
>> a webserver* hiding in there...
>>
>> https://schd.ws/hosted_files/ossna2017/91/Linuxcon%202017%20NERF.pdf
>>
>> Gaak. Have some strong adult beverage handy, you'll be needing it....
>
> Also see the wifi processor remote exploit that Apple devices (and others I
> presume) had problems with.
>
> Mobile baseband processors behave in the same way, and also have their own
> stack. I have talked to mobile developers who discovered, all of a sudden, that
> the baseband would just silently "steal" a bunch of UDP ports from the host OS
> and grab those packets. At least with IPv6, the baseband can have its own IPv6
> address, separated from the host stack's IPv6 addresses.
>
> Just to illustrate (incompletely) what might be going on when you're tethering
> through a mobile phone.
>
> A packet comes in on the 4G interface. It hits the baseband processor (which
> runs code), which might send the packet either to the host OS or via a packet
> accelerator path (which the host OS may or may not have a control plane into),
> and this accelerator runs code. Then it hits the wifi chip, which also runs
> code.
>
> So while the host OS programmer might see their host OS doing something, in real
> life the packet potentially hits at least three other things that run code using
> their own firmware. Also, these components are manufactured in factories; how do
> we verify that they actually do what they were intended to do, and weren't
> modified between design and production? How do we know the firmware we load is
> actually loaded, and isn't intercepted and patched in real time before execution?
> Oh, also, the OS is loaded from permanent storage, which is also running code.
> There are several talks about people modifying the storage controller (which
> also runs code, of course) to return different things depending on usage
> pattern. So it's not safe for the OS to read data, check that it passes
> integrity checks, and then read it again and execute it. The storage might
> return different things the second time.
>
> I don't know that we as humanity know how to do this securely. I've discussed
> this with vendors in different sectors, and there are a lot of people who aren't
> even aware of the problem.
>
> I'd say the typical smartphone today probably has 10 or more things running
> code/firmware, all susceptible to bugs, and all at risk of never being patched
> even when security problems are exposed.
>
> So the IoT problem isn't only for "smart meters" etc.; it's for everything. We've
> created devices that are impossible to verify without destroying them (sanding
> down ICs and looking at billions of gates), and in order to verify them, you
> need equipment and skills that are not available to most people.
This was a really good summary, thank you. A couple comments:
Bugs tend to accumulate at the edges between interfaces, also.
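Mikael's baseband example above is a nice illustration: bind() happily
succeeds on the host, and the only way to notice that a port has been
claimed below you is that traffic never shows up. A rough sketch of the
sort of probe those developers end up writing - it binds a list of
suspect UDP ports and reports which ones never see a datagram; you still
need something on the far side of the cellular link sending a test
packet to each port, and the port list and timeout here are made-up
values:

/* Bind suspect UDP ports and report which ones never receive anything
 * within ~30 seconds.  Assumes an external host sends a test datagram
 * to every listed port over the interface under test. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <poll.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
        const uint16_t ports[] = { 500, 4500, 5060, 5061 }; /* hypothetical suspects */
        const int nports = sizeof(ports) / sizeof(ports[0]);
        struct pollfd pfd[nports];
        int seen[nports];

        for (int i = 0; i < nports; i++) {
                int fd = socket(AF_INET, SOCK_DGRAM, 0);
                struct sockaddr_in sa;
                memset(&sa, 0, sizeof(sa));
                sa.sin_family = AF_INET;
                sa.sin_addr.s_addr = htonl(INADDR_ANY);
                sa.sin_port = htons(ports[i]);
                if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
                        perror("bind");         /* note: bind usually *succeeds* */
                pfd[i].fd = fd;
                pfd[i].events = POLLIN;
                seen[i] = 0;
        }

        for (int round = 0; round < 30; round++) {  /* ~30 one-second polls */
                if (poll(pfd, nports, 1000) <= 0)
                        continue;
                for (int i = 0; i < nports; i++) {
                        if (pfd[i].revents & POLLIN) {
                                char buf[1500];
                                recv(pfd[i].fd, buf, sizeof(buf), 0);
                                seen[i] = 1;
                        }
                }
        }

        for (int i = 0; i < nports; i++)
                printf("port %u: %s\n", (unsigned)ports[i],
                       seen[i] ? "reached the host stack"
                               : "never arrived (grabbed below us?)");
        return 0;
}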
I don't actually draw much of a distinction between bugs and security
bugs. At the moment, I'm catching up on Brin's "The Transparent Society".
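Mikael's storage-controller example is exactly why: the read, verify,
re-read, execute pattern is a classic time-of-check-to-time-of-use hole
whether or not anyone is actively attacking you. A minimal sketch of the
broken pattern versus the obvious fix - the helper functions here are
made up purely for illustration:

#include <stddef.h>

/* Hypothetical helpers, declared only to make the pattern concrete. */
int  read_image(void *buf, size_t len);       /* pull the image off storage */
int  verify_signature(const void *buf, size_t len);
void execute_image(const void *buf, size_t len);

/* BROKEN: verify one copy, then trust the storage to hand back the same
 * bytes on a second read.  A malicious (or merely buggy) controller can
 * return a clean image for the check and something else for execution. */
void boot_broken(void *buf, size_t len)
{
        if (read_image(buf, len) < 0)
                return;
        if (!verify_signature(buf, len))
                return;                       /* the copy we checked looked fine... */
        if (read_image(buf, len) < 0)         /* ...but this read may differ */
                return;
        execute_image(buf, len);
}

/* BETTER: read once into RAM the storage controller can no longer touch,
 * verify that copy, and execute exactly the bytes that were verified. */
void boot_better(void *buf, size_t len)
{
        if (read_image(buf, len) < 0)
                return;
        if (!verify_signature(buf, len))
                return;
        execute_image(buf, len);              /* same buffer, no second read */
}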
And for an example of one way forward, read Larry Niven's "Safe at Any Speed".