AI simply replicates the bias already present in its training data. The downside is that it can amplify that bias if we're not careful; the upside is that we can now analyze the algorithm's bias and fix it, which is much harder to do with biased people.
This was exactly my point -- at what inflection point is the AI's decision sound, versus based on bias baked in from its base creation code (whatever the substrate code is that 'births' an AI -- what do we even call that)? And since an AI is obviously meant to evolve iteratively, at what point is it required to 'check in changes' so that, if a rollback is needed, it can be applied across that AI's entire reach...
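The 'check in changes' idea could look something like a checkpoint registry: every deployed version of the model is snapshotted, so a rollback to a known-good version is always possible. Here's a toy sketch of that (all names here are hypothetical illustrations, not any real system's API):

```python
import copy

class ModelRegistry:
    """Toy checkpoint registry: every 'checked-in' model version is kept
    immutably, so a deployment can be rolled back to a known-good state."""

    def __init__(self):
        self._versions = []  # list of (version_id, parameter snapshot)

    def check_in(self, params):
        """Store an immutable snapshot of the model's parameters."""
        version_id = len(self._versions)
        self._versions.append((version_id, copy.deepcopy(params)))
        return version_id

    def rollback(self, version_id):
        """Recover the snapshot for version_id -- the 'product recall' path."""
        for vid, snapshot in self._versions:
            if vid == version_id:
                return copy.deepcopy(snapshot)
        raise KeyError(f"no such version: {version_id}")

# Usage: check in a baseline, let the model 'evolve', then recall it.
registry = ModelRegistry()
v0 = registry.check_in({"weight": 1.0})   # known-good baseline
v1 = registry.check_in({"weight": 1.7})   # later, drifted version
restored = registry.rollback(v0)
print(restored["weight"])  # 1.0
```

The hard part in practice isn't the mechanism, it's defining "reach": a real rollback would also have to account for every downstream decision the drifted version already made.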
We need a "product recall" method that doesn't involve Blade Runners and campy one-liners...