By the 'end-to-end principle', redundancy should probably be concentrated at one point in the stack, with the rest of the stack concerned merely with validating integrity. It's unlikely that the optimum balance of resources and loss probability entails redundancy at every level of the stack, from raw HDD bytes up to the global system level.
You move the files, then you verify the integrity of the new copy, and only then do you get rid of the old one. You don't need to redundantly build integrity checks and extra FEC into every level of the system.
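A minimal sketch of that move-verify-delete flow, assuming a local filesystem copy; `verified_move` and `sha256_of` are illustrative names, not any particular tool's API:

```python
import hashlib, os, shutil

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_move(src, dst, retries=3):
    """Copy src to dst, verify the copy end-to-end, then delete the original.
    On a hash mismatch, re-copy rather than trusting the lower layers."""
    src_hash = sha256_of(src)
    for _ in range(retries):
        shutil.copyfile(src, dst)
        if sha256_of(dst) == src_hash:
            os.remove(src)   # original removed only after the new copy checks out
            return True
    return False             # all retries failed; keep the original
```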
Just like the end-to-end principle as applied to networking: you have a single strong integrity check at the furthest endpoint possible, and you don't build integrity & ECC into every level of the stack; you devote those resources to higher performance instead, and simply retransmit from the other endpoint when a file occasionally gets corrupted and the endpoint check catches it.
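The same idea as a hedged sketch of the receiving endpoint: `fetch_verified` is a hypothetical helper, and the URL and expected hash are placeholders; the point is only that corruption anywhere along the path is handled by re-requesting, not by per-hop ECC:

```python
import hashlib
import urllib.request

def fetch_verified(url, expected_sha256, max_attempts=3):
    """End-to-end check at the receiver: intermediate hops aren't trusted to
    guarantee integrity; on a hash mismatch we simply re-request the whole file."""
    for attempt in range(1, max_attempts + 1):
        data = urllib.request.urlopen(url).read()
        if hashlib.sha256(data).hexdigest() == expected_sha256:
            return data
        print(f"attempt {attempt}: corrupted transfer, retransmitting")
    raise IOError(f"{url}: no intact copy after {max_attempts} attempts")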
-_- I never appealed to 'more manual work', nor did I say that. If you refuse to understand my point and want to make up things I did not say, so be it.
Don't be a moron. You knew perfectly well what I meant. (Did I also mean that 'you', a human, should be checking hashsums and FEC by hand for every network packet...?)