For those who are outside of the industry, the article is probably not completely accurate. I don't know specifically about the 737 MAX, but for many of their other newer airframes (787, upcoming 777x), Boeing relies on a concept called 'high integrity at the source'. Essentially, two complete copies of the flight computer hardware are put on a single card and they cross-compare their results. If you're looking for a bit of dense reading material on the subject, you might find a related patent application interesting: https://patents.google.com/patent/US9170907
As described, that provides zero protection against software bugs. Both of the redundant lanes are carrying out identical computations on identical data using identical code and will make identical errors if there's any bug. On paper it's more powerful than the non-synchronized system Airbus uses in that it can stop erroneous computations from being used at all, rather than detecting them after the fact, but it wouldn't be able to detect problems like the Qantas Flight 72 accident in which erroneous data with a particular timing happens to trip a latent bug.
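To make the limitation concrete, here's a toy sketch (not Boeing's actual implementation, and `control_law` is made up) of two identical lanes cross-comparing their outputs. A hardware fault in one lane would trip the miscompare, but a bug shared by both lanes sails straight through:

```python
# Toy dual-lane cross-compare: two identical "lanes" run the same code on
# the same data, and a comparator suppresses any mismatched output.
# A fault in one lane is caught; a bug common to both lanes is not.

def control_law(airspeed_kts: float) -> float:
    """Hypothetical control law with a latent bug at one input value."""
    if airspeed_kts == 250.0:      # latent bug: wrong output for this input
        return -1.0
    return airspeed_kts * 0.01

def dual_lane(inp: float) -> float:
    lane_a = control_law(inp)      # lane A
    lane_b = control_law(inp)      # lane B: identical code, identical data
    if lane_a != lane_b:
        raise RuntimeError("lane miscompare: output suppressed")
    return lane_a                  # identical bugs compare equal and pass

print(dual_lane(200.0))   # 2.0  -- healthy case
print(dual_lane(250.0))   # -1.0 -- bug passes the cross-compare undetected
```

Both calls succeed: the comparator only proves the two lanes agree, not that they're right.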
In Airbus's case (and they have been doing full fly-by-wire for a while now), there are at least two completely separate software implementations which run in parallel and cross-compare their results. They also run on redundant flight computers with different hardware architectures.
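The idea (N-version programming) can be sketched like this. Everything here is illustrative, not from any Airbus system: the "spec" is just integer averaging, and the two functions stand in for two independently written implementations:

```python
# Toy N-version programming sketch: two implementations of the same spec,
# written differently on purpose, cross-compared at runtime. A bug in one
# implementation shows up as a disagreement rather than a silent bad output.

def avg_team_a(xs: list[int]) -> int:
    # "Team A": floor of the sum divided by the count
    return sum(xs) // len(xs)

def avg_team_b(xs: list[int]) -> int:
    # "Team B": explicit accumulation plus divmod, same spec, different code
    total = 0
    for x in xs:
        total += x
    quotient, _remainder = divmod(total, len(xs))
    return quotient

def cross_compare(xs: list[int]) -> int:
    a, b = avg_team_a(xs), avg_team_b(xs)
    if a != b:
        raise RuntimeError("implementations disagree: fail safe")
    return a

print(cross_compare([1, 2, 3, 4]))   # 2
```

The point is that a shared bug now requires two independent teams to make the same mistake against the same spec, which is much less likely than one team making it once.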
Boeing probably has something similar for the fly-by-wire fighter jets they are involved in, but their passenger planes are still mainly directly controlled by the pilot.
One of my professors at uni was involved with the flight computers of the Eurofighter and tells the same story: different teams were given identical specifications, but contact between them was forbidden, so they were forced to develop completely different implementations in order to avoid shared bugs that could affect all computers at once.
NPR just did a piece on cosmic rays, saying they cause hardware faults far more often than people realize. In one case a bit flip switched off a passenger jet's autopilot. They've also been blamed for the Toyota unintended acceleration incidents.
They said it's common now in critical systems to use three computers and ignore any single computer that disagrees with the other two.
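That scheme is triple modular redundancy, and the voting step is tiny. A minimal sketch (real avionics typically vote on analog values within tolerances, not on exact equality, and the channel values here are made up):

```python
# Minimal 2-out-of-3 majority voter (triple modular redundancy): accept any
# value that at least two of the three channels agree on, ignoring the one
# dissenter; if all three disagree, there is no majority to trust.

from collections import Counter

def vote(a, b, c):
    counts = Counter([a, b, c])
    value, n = counts.most_common(1)[0]
    if n >= 2:
        return value          # outvote the single disagreeing computer
    raise RuntimeError("no majority: all three channels disagree")

print(vote(7, 7, 9))   # 7 -- the faulty third channel is ignored
```

With two computers you can only detect a disagreement; the third vote is what lets you keep flying through it.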
See above comment. With 2, you don't. With 3, you do.
But if there's a human in the loop and a manual alternate control pathway, detecting a disagreement allows you to cue the manual operator and transfer control to them. Or fall back to a much simpler system of computer aid.
With 1, hardware failures are extremely hard to detect at all, as even your computational checks for internal consistency are subject to mutation.
> See above comment. With 2, you don't. With 3, you do.
Unless all 3 give different results: two failures and one correct computer, with no way to tell which is which.
IIRC the shuttle actually had a 4+1 system: 4 primary computers running as a cohort with voting, and if they couldn't reach consensus, the 1 was a minimal, independently developed backup that could keep the lights on.
You don't need to. You just need to know that the module as a whole has a fault. Reboot the module and let the hot spare take over (all critical functions have a hot spare).
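A toy sketch of that failover pattern. All names here are illustrative, and "reboot" is just a flag reset; the point is that you never diagnose *which* internal lane is wrong, you declare the whole module faulty and switch:

```python
# Toy hot-spare failover: when the active module's self-check trips, the
# whole module is treated as faulty, the hot spare takes over, and the
# failed module is rebooted so it can serve as the spare next time.

class Module:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def compute(self, x: int) -> int:
        if not self.healthy:
            raise RuntimeError(f"{self.name}: internal miscompare")
        return x * 2

def failover_compute(active: Module, spare: Module, x: int):
    try:
        return active.compute(x), active
    except RuntimeError:
        active.healthy = True           # "reboot" the faulted module
        return spare.compute(x), spare  # spare becomes the new active

a, b = Module("FCC-A"), Module("FCC-B")
a.healthy = False                       # inject a fault in the active module
result, active = failover_compute(a, b, 21)
print(result, active.name)              # 42 FCC-B
```

The output keeps flowing across the fault; the only observable change is which module is producing it.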
This also happened with cars during the Toyota unintended acceleration scandal. For a thorough and entertaining treatment, I recommend RadioLab's "Bit Flip" episode: https://www.npr.org/podcasts/452538884/radiolab