[NNagain] upgrading old routers to modern, secure FOSS

Robert McMahon rjmcmahon at rjmcmahon.com
Tue Oct 24 01:16:35 EDT 2023


Thanks, this is very interesting. I wrote code to DMA packets in an early Cisco switch, and the hardware ASIC that moved them across the fabric would report a simple status of success or failure. Unfortunately, the ASIC would at times report success without ever moving the packet across the fabric, and would then fall into a state where it used the wrong egress for all subsequent packets. It wasn't possible to change the ASIC, as that had been locked down years earlier. Luckily, we could query the ASIC for more detail on what it had actually done, so the software could tell when a fix was needed. We did the fault detection, isolation, and recovery (FDIR), lost a bunch of packets, and assumed TCP would handle it. Of course, TCP's designers assumed the loss was due to congestion, so its state machines drew the wrong conclusion, but they would ultimately recover.
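
A minimal sketch of that detect-and-recover loop, with the register names and the read_reg()/write_reg() accessors invented here for illustration rather than taken from the actual switch code:

/* Hypothetical FDIR sketch around a DMA engine that can falsely report
 * success; register offsets and accessors are stand-ins. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define REG_DMA_STATUS    0x10u  /* "done" bit claimed by the ASIC        */
#define REG_EGRESS_COUNT  0x14u  /* packets actually delivered to egress  */
#define REG_FABRIC_RESET  0x18u  /* write 1 to reset the fabric path      */

static uint32_t read_reg(uint32_t off)              { (void)off; return 0; }
static void     write_reg(uint32_t off, uint32_t v) { (void)off; (void)v;  }

/* Detection: did the packet really make it across the fabric? */
static bool dma_send_verified(uint32_t egress_before)
{
    bool claimed_ok = (read_reg(REG_DMA_STATUS) & 1u) != 0;
    uint32_t egress_after = read_reg(REG_EGRESS_COUNT);

    /* The failure mode: status says "done" but the egress counter never moved. */
    if (claimed_ok && egress_after == egress_before)
        return false;
    return claimed_ok;
}

int main(void)
{
    uint32_t before = read_reg(REG_EGRESS_COUNT);
    /* ... kick off the DMA transfer here ... */
    if (!dma_send_verified(before)) {
        /* Isolation and recovery: reset the fabric path; the lost packet
         * is left for TCP (or the application) to retransmit. */
        write_reg(REG_FABRIC_RESET, 1);
        fprintf(stderr, "fabric path reset after false DMA success\n");
    }
    return 0;
}

The point is the cross-check: the software never takes the ASIC's claim of success at face value; it compares that claim against something else the hardware exposes.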

I started my career working on a NASA FDDI network. Software had gotten so complex that all of its states could no longer be inspected by humans, as had been done on the Shuttle, nor even exhaustively tested by computers. The strategy became commercial off-the-shelf (COTS) because, through "market magic", it was assumed to be fully tested.

I think the same naivety is now applied to open source code. There is no magic here either. Real testing goes way beyond repeating the same simple scenarios over and over again as if they were the only tests that matter.

Networks and distributed systems have bugs. I think a current Linux kernel is around 30M lines of code with thousands of config options. Good luck testing that.

This is beyond complex and not easy. FDIR has to be designed in from the get-go.

Bob

On Oct 23, 2023, at 4:22 PM, Karl Auerbach <karl at cavebear.com> wrote:
>On 10/23/23 2:54 PM, rjmcmahon wrote:
>> Home networks today are embarrassing to me. Our industry is woefully 
>> behind here.
>>
>I would be more expansive.
>
>(Bringing this back to network neutrality - my argument, not clearly
>suggested below, is that "neutrality" is more than bandwidth or
>connectivity; it ought also to include other aspects, including robust
>and repairable service in the face of reasonably foreseeable events.
>By the way, when I was involved in the early days of the net, I worked
>for groups [such as the US Joint Chiefs] who thought that routers being
>vaporized by nuclear explosions were "reasonably foreseeable".)
>
>The lawyer half of me lives in fear of the harm that can come from bad
>code in network devices.  I've seen the growth of strict product
>liability laws in the consumer space (sometimes resulting in those
>silly "do not eat" labels on silica gel packets, but also resulting in
>important steps, like pressure-release closures on cleaning products
>that contain dry sodium hydroxide, or dual braking systems in
>automobiles.)
>
>And the railroad nut in me remembers that Murphy's law is as strong as
>ever.  (Just ask "Why are highway and railroad traffic control signals
>red and green [actually a quite bluish green]?"  [Hint: they originally
>were red and white, and sometimes the red-colored lens would fall out.])
>
>When I was working with the DARPA Robotics Challenge my job was to
>introduce network problems - the kinds of things that can happen in
>real life when a robot operates in a disaster zone.  I could introduce
>a simple change - like increasing the level of lost Ethernet frames
>when a robot went through a door into a (simulated) concrete reactor
>building - and the robot would simply stop or fall over.
>
>I've seen videos of animal surgery performed by remote control over a 
>long distance (50km) network link where the doctors presumed that the 
>net was endlessly flawless.  (I have this mental image of a robotic 
>scalpel overshooting its cut due to a non-idempotent command contained 
>in a packet that was replicated on the net.)
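>
>(A tiny invented illustration of why idempotency matters when the net
>can duplicate a packet: a relative "advance 2 mm" applied twice
>overshoots, while an absolute "go to 12 mm" is harmless to repeat.
>Names and numbers here are made up:)
>
>#include <stdio.h>
>
>static double pos_mm = 10.0;   /* current blade position, invented units */
>
>static void apply_relative(double delta_mm)  { pos_mm += delta_mm; } /* not idempotent */
>static void apply_absolute(double target_mm) { pos_mm = target_mm; } /* idempotent     */
>
>int main(void)
>{
>    apply_relative(2.0);
>    apply_relative(2.0);    /* the same packet, replicated by the network */
>    printf("relative command, duplicated: %.1f mm (overshoot)\n", pos_mm);
>
>    pos_mm = 10.0;
>    apply_absolute(12.0);
>    apply_absolute(12.0);   /* the duplicate is harmless */
>    printf("absolute command, duplicated: %.1f mm\n", pos_mm);
>    return 0;
>}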
>
>And I've seen users of satellites fail to remember that every now and 
>then, from the point of view of a ground station, a satellite may 
>transit across the face of the sun (a highly predictable event) and be 
>temporarily blinded and unable to receive data.
>
>Many of our implementations today are hanging on only because modern
>machines have gobs upon gobs of memory and nobody notices if a couple
>of gigabytes leak or are uselessly allocated for a few minutes.
>
>(For instance, one way to stop a Linux stack is to send it patterns of
>tiny IPv4 fragments that overlap or have gaps so that reassembly is not
>possible (or difficult) and buffers just sit there waiting for a rather
>long timeout before being reclaimed.)
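>
>(A hedged sketch of that fragment pattern, for poking at a lab box you
>own: the addresses are TEST-NET placeholders, root is needed for the
>raw socket, and each datagram gets a first fragment at offset 0 plus a
>"last" fragment at byte offset 32, leaving a gap that can never be
>filled, so the pieces just sit in the reassembly queue until the
>timeout:)
>
>/* Illustration only - aim it at your own test machine. */
>#include <arpa/inet.h>
>#include <netinet/ip.h>
>#include <stdint.h>
>#include <string.h>
>#include <stdio.h>
>#include <unistd.h>
>
>static void send_frag(int s, struct sockaddr_in *dst, uint16_t id,
>                      uint16_t frag_field, const char *data, size_t len)
>{
>    char buf[128] = {0};
>    struct iphdr *ip = (struct iphdr *)buf;
>
>    ip->version  = 4;
>    ip->ihl      = 5;
>    ip->tot_len  = htons(sizeof(*ip) + len);
>    ip->id       = htons(id);
>    ip->frag_off = htons(frag_field);  /* 13-bit offset in 8-byte units + flags */
>    ip->ttl      = 64;
>    ip->protocol = IPPROTO_UDP;
>    ip->saddr    = inet_addr("192.0.2.1");    /* placeholder source          */
>    ip->daddr    = dst->sin_addr.s_addr;      /* checksum left to the kernel */
>    memcpy(buf + sizeof(*ip), data, len);     /* payload content is irrelevant */
>
>    sendto(s, buf, sizeof(*ip) + len, 0, (struct sockaddr *)dst, sizeof(*dst));
>}
>
>int main(void)
>{
>    /* IPPROTO_RAW implies IP_HDRINCL: we supply the IP header ourselves. */
>    int s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
>    if (s < 0) { perror("socket (needs root)"); return 1; }
>
>    struct sockaddr_in dst = { .sin_family = AF_INET };
>    dst.sin_addr.s_addr = inet_addr("192.0.2.2");   /* placeholder lab target */
>
>    char junk[16];
>    memset(junk, 'x', sizeof(junk));
>
>    /* A real stress test would cycle through many more datagram IDs. */
>    for (uint16_t id = 1; id <= 8; id++) {
>        send_frag(s, &dst, id, 0x2000 | 0, junk, sizeof(junk)); /* MF set, offset 0 */
>        send_frag(s, &dst, id, 4, junk, sizeof(junk));          /* offset 32, gap   */
>    }
>    close(s);
>    return 0;
>}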
>
>It seems that everybody and her brother thinks they can write code.
>And they do.  And in our open source world the code they write is often
>protocol code.  Often it is badly written protocol code containing
>monumental flaws, such as use of signed "int" types in C (when an
>unsigned type such as "uint16_t" is needed), failure to recognize that
>number spaces wrap, and assumptions that "everything is in ASCII" or
>that character sequences do not contain null bytes.  (Last time I
>looked some major libraries went down in flames when string data in
>packets happened to contain nulls - the code was using ancient Unix/C
>string routines.)  I once sent several SIP phones into the weeds when I
>sent length fields (in text form) with leading zero characters (e.g.
>050 rather than 50) - some code treated that as octal!
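>
>(Two of those pitfalls, concretely: the seq_newer() trick below is the
>usual serial-number comparison for a 16-bit space that wraps, and the
>strtol() calls show how a "050" length field really does become octal
>40 when code lets the numeric base be guessed:)
>
>#include <stdint.h>
>#include <stdlib.h>
>#include <stdio.h>
>
>/* "Is a newer than b?" in a 16-bit number space that wraps at 65535. */
>static int seq_newer(uint16_t a, uint16_t b)
>{
>    return (int16_t)(uint16_t)(a - b) > 0;  /* unsigned subtract, signed test */
>}
>
>int main(void)
>{
>    /* 5 is "newer" than 65533 because the counter wrapped; a naive
>     * signed comparison gets this backwards. */
>    printf("seq_newer(5, 65533) = %d\n", seq_newer(5, 65533));   /* 1 */
>    printf("naive 5 > 65533     = %d\n", 5 > 65533);             /* 0 */
>
>    /* A length field sent as text: base 0 treats a leading zero as octal. */
>    printf("strtol(\"050\", NULL, 0)  = %ld\n", strtol("050", NULL, 0));   /* 40 */
>    printf("strtol(\"050\", NULL, 10) = %ld\n", strtol("050", NULL, 10));  /* 50 */
>    return 0;
>}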
>
>It would certainly be nice if we had a body of network
>design/implementation rules - similar in concept to the engineering
>design rules used in bridges, aircraft, electrical networks, etc. - for
>use when writing code.  Anyone who wanted to do something outside of
>those rules could do so, but would be strongly "encouraged" to seek the
>advice and oversight of others.
>
>Once the Interop show net was brought to a stop (by infinitely looping
>packets) when two brands of routers had different notions of how to
>expand IPv4 multicast addresses into MAC addresses.  (I can't remember
>the details, but when every light in the NOC turned red everybody in
>the Interop NOC turned to look at me, guessing [incorrectly in this
>instance] that I was the cause.)
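>
>(For reference, the standard expansion - RFC 1112 - puts only the low
>23 bits of the group address behind an 01:00:5e prefix, so 32 different
>groups share each MAC address; plenty of room for two implementations
>to disagree about the corner cases.  A small sketch:)
>
>#include <stdio.h>
>#include <stdint.h>
>#include <arpa/inet.h>
>
>/* RFC 1112 mapping of an IPv4 multicast group to an Ethernet MAC. */
>static void group_to_mac(const char *group, uint8_t mac[6])
>{
>    uint32_t ip = ntohl(inet_addr(group));
>    mac[0] = 0x01; mac[1] = 0x00; mac[2] = 0x5e;
>    mac[3] = (ip >> 16) & 0x7f;   /* only 23 bits of the group survive */
>    mac[4] = (ip >> 8) & 0xff;
>    mac[5] = ip & 0xff;
>}
>
>int main(void)
>{
>    const char *groups[] = { "224.1.2.3", "225.129.2.3" };  /* same MAC! */
>    for (int i = 0; i < 2; i++) {
>        uint8_t m[6];
>        group_to_mac(groups[i], m);
>        printf("%-12s -> %02x:%02x:%02x:%02x:%02x:%02x\n", groups[i],
>               m[0], m[1], m[2], m[3], m[4], m[5]);
>    }
>    return 0;
>}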
>
>It would be nice if we built our network devices so that they each had
>a little introspective daemon that frequently asked "am I healthy, am I
>still connected, are packets still moving through me?"  (For consumer
>devices an answer of "no" could trigger a full device reboot or reset.)
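>
>(A minimal sketch of that introspective daemon, assuming a Linux box
>where /proc/net/dev exposes per-interface counters; the interface name,
>poll interval, and "ask for help" action below are placeholders - a
>real consumer device might kick a hardware watchdog or reboot itself:)
>
>#include <stdio.h>
>#include <string.h>
>#include <unistd.h>
>
>/* Sum of rx+tx packets for one interface, or -1 if it cannot be read. */
>static long long packet_count(const char *ifname)
>{
>    FILE *f = fopen("/proc/net/dev", "r");
>    if (!f) return -1;
>
>    char line[512];
>    long long total = -1;
>    while (fgets(line, sizeof(line), f)) {
>        char name[64];
>        unsigned long long rx_pkts, tx_pkts, skip;
>        /* "  eth0: bytes packets errs drop fifo frame compressed
>         *   multicast bytes packets ..." */
>        if (sscanf(line,
>                   " %63[^:]: %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
>                   name, &skip, &rx_pkts, &skip, &skip, &skip, &skip,
>                   &skip, &skip, &skip, &tx_pkts) == 11 &&
>            strcmp(name, ifname) == 0)
>            total = (long long)(rx_pkts + tx_pkts);
>    }
>    fclose(f);
>    return total;
>}
>
>int main(void)
>{
>    const char *ifname = "eth0";   /* assumption: the interface to watch */
>    long long last = packet_count(ifname);
>    int stalled = 0;
>
>    for (;;) {
>        sleep(30);
>        long long now = packet_count(ifname);
>        stalled = (now >= 0 && now == last) ? stalled + 1 : 0;
>        last = now;
>        if (now < 0 || stalled >= 4) {
>            /* Two minutes with no packets at all (or no interface):
>             * tell whoever supervises us to reset or reboot the box. */
>            fprintf(stderr, "health check failed on %s\n", ifname);
>            return 1;
>        }
>    }
>}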
>
>For larger devices, such as routers, we could have some machinery,
>internal or external, that did a bit of modelling and informed the
>routing machinery of anticipated queue lengths and similar metrics.
>Then the router could monitor itself to check whether it was wobbling
>outside of those anticipated ranges and take appropriate action to
>signal the issue.  (I was once quite surprised to learn that on at
>least one large type of router it was difficult-to-impossible to obtain
>queue length data because so much functionality had been pushed into
>hardware that had few test or measurement points.)
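>
>(A hedged sketch of that self-check: compare observed queue depth
>against an anticipated envelope and only raise a flag after the router
>has wobbled outside it for several samples in a row.  Where the samples
>and the envelope come from - ASIC counters, tc statistics, a traffic
>model - is left abstract, and the numbers below are invented:)
>
>#include <stdio.h>
>#include <stdbool.h>
>
>struct envelope {
>    double expected_pkts;    /* anticipated mean queue depth        */
>    double tolerance_pkts;   /* acceptable wobble around that mean  */
>};
>
>/* True once `sample` has been outside the envelope `persistence` times
> * in a row. */
>static bool out_of_envelope(const struct envelope *e, double sample,
>                            int persistence, int *strikes)
>{
>    bool outside = sample > e->expected_pkts + e->tolerance_pkts ||
>                   sample < e->expected_pkts - e->tolerance_pkts;
>    *strikes = outside ? *strikes + 1 : 0;
>    return *strikes >= persistence;
>}
>
>int main(void)
>{
>    struct envelope wan_q = { .expected_pkts = 40, .tolerance_pkts = 60 };
>    double samples[] = { 25, 55, 180, 240, 310, 500 };  /* invented readings */
>    int strikes = 0;
>
>    for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
>        if (out_of_envelope(&wan_q, samples[i], 3, &strikes))
>            printf("sample %u: depth %.0f is outside the anticipated range\n",
>                   i, samples[i]);
>    return 0;
>}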
>
>My grandfather and father were radio and TV repair guys.  I learned
>from an early age the value of good tools and of looking outside the
>basic operation of a device for symptoms.  (You could often hear a
>failing capacitor or inductor; or you could smell a slowly burning
>resistor.)  Our modern networks and code usually lack that kind of
>observational (and active testing) plane.
>
>I can see a big net neutrality differentiator between providers being 
>"time to detect" and "time to repair".
>
>         --karl--