If someone does something specifically with knowledge that it is likely to cause harm, there should almost certainly be recourse there. Yes.
If they did something that was a standard part of their duty, such as assembling a burrito using certified ingredients according to the best practices of the organization, then not so much.
I'll go even further: if the company reacted slowly to recall contaminated produce from its shelves after the contamination was discovered, then the company should be held liable for some of the damages that resulted from the delay.
Obviously it gets complicated to tease out which damages happened before discovery. More than that, I care most about healing the people impacted by the contamination as well as possible. If that means we need a cost-of-business fund to make sure people can be attended to in the event of a disaster, then we should have such a fund.
You can get even more fun, though. Let's say you have a detection system that can reject produce when detected contamination passes a threshold. Why would the goal not be for this to fail in a "closed" position, minimizing the risk of contamination, even if that costs the company more in discarded inventory? Do we expect to have everything hand inspected and always signed off by a person, even when it can easily be shown that doing so is both more expensive for the company and carries more risk of accidental contamination?
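The fail-closed idea above can be sketched in a few lines. This is a minimal illustration with hypothetical names (the threshold value, function, and sensor interface are all assumptions, not any real inspection system's API); the point is only that every error path defaults to rejection:

```python
# Hypothetical fail-closed contamination check: any missing or invalid
# sensor reading is treated as contaminated, so a broken detector
# discards inventory instead of waving it through.

CONTAMINATION_THRESHOLD = 0.05  # assumed units: detected contaminant level

def should_reject(sensor_reading):
    """Return True if the item must be discarded."""
    if sensor_reading is None:
        return True  # no data -> assume the worst (fail closed)
    try:
        level = float(sensor_reading)
    except (TypeError, ValueError):
        return True  # garbage data -> assume the worst (fail closed)
    return level >= CONTAMINATION_THRESHOLD

print(should_reject(0.01))   # clean reading: passes
print(should_reject(None))   # sensor failure: rejected
```

A fail-open version would instead return `False` on the error paths, which is cheaper for the company whenever the detector glitches, and exactly the default the next post argues people must be compelled away from.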
We assign fault precisely so that we know who to extract damages from. The burrito assembler is not going to be held liable for bad outcomes when they did everything right.
The mere fact that there are damages to extract means that someone was already damaged, not merely that we want to punish someone. If my neighbor is poisoned by food, I do not have standing to sue in my own capacity because my neighbor was poisoned. At this point, the moral question is whether or not you should be able to extract damages when you can show that you are damaged. Essentially every society has said "yes, of course," even though the specifics of what constitutes damage and recompense differs.
Why do people not universally (or at least generally) tend toward building fail-safe systems? I don't know, but they just do not. They must be compelled to.
Call it original sin or prevarication, the second law of thermodynamics, an evolutionary inclination to save effort, whatever. But humans just default to the opposite of what you're saying.