> If there is a critical software malfunction in a single gas stove that could result in it filling a house in the middle of a night with gas and there are no other safety precautions then maybe four people could die.
You've made an assumption about the failure pattern. Perhaps thousands of stoves could fail at once, all hitting the same defect. Something like this could happen if, say, the software fails after 10,000 hours of continuous operation. (A bug of exactly this sort once caused a Patriot missile defence system to fail, resulting in the deaths of 28 US soldiers. [0])
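To make the mechanism concrete, here's a rough sketch of the Patriot case (the helper names are mine, the numbers are from the linked analysis): the system counted time in 0.1 s ticks, but 0.1 has no finite binary expansion, so each stored tick undercounted slightly, and the error grew linearly with uptime.

```python
# Rough sketch of the Patriot clock drift (helper names are mine, not from [0]).
# 0.1 s has no finite binary expansion; stored as a 23-bit binary fraction it
# comes out low by roughly 9.5e-8 seconds per tick, per the analysis in [0].
STORED_TENTH = int(0.1 * 2**23) / 2**23
PER_TICK_ERROR = 0.1 - STORED_TENTH          # ~9.54e-8 s

def clock_drift_seconds(uptime_hours: float) -> float:
    """Accumulated clock error after continuous operation."""
    ticks = uptime_hours * 3600 * 10         # one tick every 0.1 s
    return ticks * PER_TICK_ERROR

for hours in (8, 48, 100):
    drift = clock_drift_seconds(hours)
    # A Scud closes at roughly 1.7 km/s, so the tracking gate
    # is off by about drift * 1700 metres.
    print(f"{hours:3d} h uptime: {drift:.3f} s drift (~{drift * 1700:.0f} m)")
```

After 100 hours the clock is about a third of a second off, which at Scud speeds is several hundred metres. The point is that every unit running the same firmware drifts in lockstep, so nothing in testing over short uptimes would catch it.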
> The real problem in these cases is internet-connectivity + software-control
I agree this is a very obvious and severe problem, and it's rather shocking that it's even legal to do this. Avionics software is famously strictly regulated, but the software in road vehicles isn't.
> Without internet-connectivity, your primary failure mode is random chance
It depends. There could be a failure after some number of hours of continuous operation. Perhaps a particular stretch of road could be dangerously mishandled by all self-driving cars of a certain model. There's also a risk that deliberately crafted fake road signs could cause the system to drive dangerously, opening the door to targeted attacks.
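The "fails after N hours of uptime" class is depressingly common, because naive tick counters silently wrap. A minimal illustration (all names hypothetical, not any real vehicle's code):

```python
# Hypothetical sketch: a 32-bit millisecond uptime counter wraps, so any
# naive "elapsed = now - then" comparison goes wrong after ~49.7 days.
UINT32_MAX = 2**32 - 1

def wrap_ms(t: int) -> int:
    """Model a 32-bit tick counter."""
    return t & UINT32_MAX

days_to_wrap = (UINT32_MAX + 1) / (1000 * 60 * 60 * 24)
print(f"counter wraps after ~{days_to_wrap:.1f} days of continuous uptime")

then = wrap_ms(UINT32_MAX - 500)    # 0.5 s before the wrap
now = wrap_ms(then + 1000)          # 0.5 s after the wrap
print(now - then)                   # naive elapsed time: a huge negative number
print((now - then) & UINT32_MAX)    # wrap-aware elapsed time: 1000 ms
```

Counters like this pass every short test and fail only in the field, weeks in, on every unit that has been powered continuously for the same stretch.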
> once you add in internet-connectivity, you now introduce an easy way to induce simultaneous failure
That's a good point. It's similar to how we place enormous trust in Windows Update, except we're talking about a life-and-death system.
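To make the trust concrete, here's a sketch of the bare minimum an over-the-air updater owes its users, assuming a pinned digest distributed out of band (all names and the placeholder value are hypothetical):

```python
# Hypothetical minimal OTA check: refuse firmware whose digest doesn't match
# a pinned value obtained over a separate channel. Real updaters need proper
# signatures and rollback protection on top; this only shows where trust sits.
import hashlib
import hmac

PINNED_SHA256 = "..."  # placeholder: digest published by the vendor out of band

def firmware_is_trusted(blob: bytes) -> bool:
    digest = hashlib.sha256(blob).hexdigest()
    return hmac.compare_digest(digest, PINNED_SHA256)
```

Whoever controls the channel that distributes that pinned digest (or the signing key behind it) can fail every device at once, which is exactly the simultaneity problem.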
> They should not be releasing internet-connected gas stoves unless every engineer on their team is comfortable that their security would be good enough to connect the US nuclear arsenal to the internet since the worst-case consequences are equally bad.
Well, no, they plainly aren't, but I do agree with your general point.
I wonder if we'll see any kind of general software regulation (that is, not specific to a domain such as aviation) to cover this sort of recklessness. Currently, the company behind such a product would only suffer after the fact, were something to go badly wrong.
Yes. I endorse your corrections. I was eliding systemic errors in the interest of brevity.
The main point I was trying to make is that mass failures simultaneous enough that they cannot be corrected between occurrences are relatively rare outside of internet-connected software, because of an access problem. Even the cases you mentioned are generally less dangerous, because you can stop usage once an error is detected. If the brakes on all of your cars stop working after 10,000 hours of operation, it is unlikely that everybody will hit the problem at the same time. A few will encounter it first, which lets you detect the error and then rectify it or recall your cars before it becomes truly catastrophic. (I'll sketch a toy simulation of this below.)

It is genuinely rare for a mass-produced product to contain a catastrophic defect that affects a large percentage of units simultaneously. The only examples I can really think of are drugs with unexpected long-term side effects, or a response to an equally massive external cause like the 1859 Carrington Event, the coronal mass ejection that knocked out telegraph systems across Europe and North America.
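Here's that toy simulation (all parameters invented for illustration): give each vehicle a different daily usage rate and see how spread out the 10,000-hour failures become.

```python
# Toy model (all parameters invented): a defect triggers at 10,000 operating
# hours, but vehicles accumulate hours at different rates, so onset is spread
# out and the heaviest users act as an early-warning population.
import random

random.seed(0)
FLEET_SIZE = 100_000
TRIGGER_HOURS = 10_000

# Assume daily usage is uniform between 0.5 and 4 hours per vehicle.
days_to_failure = sorted(
    TRIGGER_HOURS / random.uniform(0.5, 4.0) for _ in range(FLEET_SIZE)
)

first = days_to_failure[0]
one_percent = days_to_failure[FLEET_SIZE // 100]
print(f"first failure: ~{first / 365:.1f} years in")
print(f"1% of fleet affected: ~{one_percent / 365:.1f} years in")
print(f"warning window before the 1% mark: ~{one_percent - first:.0f} days")
```

Even a window of a few weeks between the first failures and the bulk of the fleet is enough to ground the product or issue a recall; a malicious or botched internet-pushed update offers no such window.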
> The only examples I can really think of are drugs with unexpected long-term side effects, or a response to an equally massive external cause like the 1859 Carrington Event, the coronal mass ejection that knocked out telegraph systems across Europe and North America.
I'd count the mass ejection as a natural disaster rather than the failure of a system, but the harm is just the same, of course.
Related to your medicine example: bad dietary advice could fail in a similarly correlated way. We're straying some way from technological systems here, admittedly.
[0] https://www-users.math.umn.edu/~arnold/disasters/patriot.htm...