On the other hand, one can verify that an algorithm is fair: one can verify the algorithm itself, and one can verify that its inputs are fair. In fact, this is often necessary just to make it work well.
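A minimal sketch of what "verifying fairness" can mean in practice: checking that the decision rule's approval rates are equal across groups (demographic parity). The decision rule, groups, and data here are all hypothetical stand-ins, not any real deployed system.

```python
# Toy decision rule: approve if income comfortably exceeds debt.
# (Hypothetical rule for illustration only.)
def approve(income, debt):
    return income > 2 * debt

# Hypothetical applicants: (group, income, debt)
applicants = [
    ("A", 50, 10), ("A", 30, 20), ("A", 80, 15),
    ("B", 45, 10), ("B", 25, 20), ("B", 90, 15),
]

def approval_rate(group):
    members = [(inc, debt) for g, inc, debt in applicants if g == group]
    return sum(approve(inc, debt) for inc, debt in members) / len(members)

# Demographic parity check: approval rates should be (near) equal
# across groups. With a human decision-maker there is no equivalent
# audit you can run mechanically.
gap = abs(approval_rate("A") - approval_rate("B"))
print(f"approval-rate gap between groups: {gap:.2f}")
```

The point is not this particular metric (there are several competing fairness definitions, and they can conflict), but that for an algorithm the check is mechanical and repeatable.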
With humans one can't, and as a result there is blatant unfairness everywhere: nepotism, racism, corruption. These algorithms can, if allowed, make it very hard for those to exist. Humans cannot be trusted to make decisions about other humans, not even with the best of intentions, because they will still destroy people against the rules in order to protect the rules.
Needless to say, a great many politicians are trying to stop the algorithms. "They can't explain what they've done" is the best argument they can come up with, which is weak. The better argument, imho, is that one cannot predict with certainty what an algorithm will do in any given case. If a neural network were the law, there would be no way to know where the edge of legality lies: it might send Maria Theresa to jail, so to speak, for tying her shoelaces crossed, or some other absurdly small, unique thing.
Besides, having done a bit of law: the reason humans can explain their decisions while algorithms can't be explained is as trivial as it is horrible. Humans simply don't take more than 2 or 3 variables into account, even for critical decisions. That is stupid, for obvious reasons, but it is also why they can explain themselves. I assure you: for a 3-neuron network taking just 3 factors into account, there is no explainability problem either.
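The 3-factor point above can be sketched concretely: a single neuron over 3 inputs is just a weighted sum, and its "explanation" is read straight off the per-factor contributions. Weights, factor names, and the applicant are all hypothetical, chosen only to illustrate the idea.

```python
# A single "neuron" over 3 factors: score = bias + sum(weight * input).
# Hypothetical weights and factors for illustration.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
bias = -1.0

def decide(applicant):
    score = bias + sum(weights[k] * applicant[k] for k in weights)
    return score, score > 0

applicant = {"income": 4.0, "debt": 1.5, "years_employed": 2.0}
score, approved = decide(applicant)

# The full explanation IS the list of per-factor contributions:
for k in weights:
    print(f"{k}: contributes {weights[k] * applicant[k]:+.2f}")
print(f"bias: {bias:+.2f} -> score {score:.2f}, approved={approved}")
```

With 3 weights the decision is as explainable as any human's 2-or-3-variable reasoning; opacity only appears once the model takes vastly more variables into account than a human ever could.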
This is in fact a frequent complaint about outcomes: certain variables would make a seemingly malevolent decision innocent, but those variables can't be, or simply aren't, taken into account. So... boom, the hammer comes down, and a life is destroyed.