Can you give a specific example of something that e.g. processes absolutely must support which users absolutely cannot?
Consider that in Linux, processes and threads are implemented via the same abstraction (tasks). This abstraction actually leaks in some unfortunate cases, but it's generally considered "good enough."
The abstraction may be good enough functionally; my comment was about security, not functionality.
In the case you mention, your choice of abstraction may affect your threat model, depending on whether there is shared state and what data requires isolation.
I'm assuming that the underlying isolation mechanism is formally proven (or at least as good as possible). With a single set of reasonable features, it should be able to provide isolation between processes, users, containers and VMs. What am I missing?
For general-purpose operating systems, formal verification of security mechanisms should not be assumed.
I was not talking about ideal security, but about the fact that certain pre-existing mechanisms do not have equivalent security postures, as the parent mentioned. The point isn't that isolation could be achieved with enough work, but that the work has not in fact been done: the various mechanisms are distinct, and their security properties should not be conflated.