
I have been thinking recently about the ethics of AI:

At what point do we 'allow' an AI to determine actions based on perceived (programmed) *bias*? And how can one prevent bias from undermining an AI's ability to be deterministic?



All of the conversations I've seen about AI bias recently seem to define "bias" as "any difference between the output and the particular rightthink ordained by whoever's speaking." Nobody cares about making the AI's output correct; they just want it to agree with them.

So the glib answer is "train it on unbiased data". Depending on your philosophy, this translates either to "manually 'fix' anything you see as 'bias' in the training data", or "use a sufficient amount of entirely unmodified raw data along with an algorithm sufficiently insightful to cancel out all of the sources of inaccuracy introduced by the various sources of data and extract ground truth."

But you know if anyone ever does manage the latter, its results will still be decried as 'biased' by anyone who disagrees with them.


I am now convinced this is what is meant by this:

https://en.wikipedia.org/wiki/Ouroboros

except that it's really a warning against letting an AI iterate upon itself without external intervention...


>Nobody cares about making the AI's output correct, they just want it to agree with them.

that is the 4th industrial revolution: the "post-correct" world, where the abundance of information (and of its consumers) and the speed of its production allow for, and result in, the co-existence of multiple truths (kind of like hyperbolic geometry, where through a single point one can draw multiple different lines parallel to a given line), with the information space splitting into multiple feudal-style dukedoms.

Anyway, in general for biases I think we have a D. Rumsfeld situation - the known/expected biases are known, while the AI-driven world will most probably bring new biases that we don't even expect.


AI simply replicates whatever bias the data already has. The downside is that it can amplify that bias if not handled carefully; the upside is that we can now analyze the algorithm's bias and fix it, which is much harder to do with biased people.
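
To make "analyze the algorithm's bias and fix it" a bit more concrete, here is a minimal sketch (the function name, data, and 0.1 tolerance are made-up assumptions for illustration, not any standard tool) that checks one common fairness metric -- the gap in positive-prediction rates between two groups:

    import numpy as np

    def positive_rate_gap(y_pred, group):
        # Difference in positive-prediction rates between group 0 and group 1.
        y_pred = np.asarray(y_pred)
        group = np.asarray(group)
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    # Hypothetical held-out predictions and a binary protected attribute.
    y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    gap = positive_rate_gap(y_pred, group)
    if gap > 0.1:  # arbitrary tolerance, just for the example
        print(f"possible bias: positive-rate gap = {gap:.2f}")

A gap of zero doesn't prove the model is fair, but a large gap is at least something you can measure, audit, and argue about, which is the point being made here.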


But to do this you would need unbiased people. Since those don't exist, this "correction" would just be matching the model's bias to that of whoever does the adjusting.


Nah, you just need enough people to review the fixes. If at the end everyone is unhappy with the results, you're good to go.


There's an infinite number of wrong answers that also anger everyone, so that's not a sufficient metric for correctness.


This was exactly my point -- at what inflection point is the AI's decision sound, versus when might it be based on bias from its base creation code (whatever the substrate code that 'births' an AI is -- WTF do we even call that)? And since an AI is obviously meant to iteratively evolve, at what point is it required to 'check in' its changes, so that if a rollback is needed it can be carried out across everything that AI has touched...

We need a "product recall" method that doesn't involve Blade Runners and campy one-liners...

SERIOUSLY
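
To make the "check in changes" / rollback idea concrete: a rough, purely illustrative sketch (the directory layout, hashing scheme, and function names are assumptions, not any real system) where every self-update writes an immutable, content-addressed checkpoint, so a "product recall" is just re-pointing a CURRENT reference at an older version:

    import hashlib, json, pathlib, time

    STORE = pathlib.Path("checkpoints")  # hypothetical checkpoint directory

    def check_in(weights_blob: bytes, note: str) -> str:
        # Write an immutable, content-addressed checkpoint; return its id.
        STORE.mkdir(exist_ok=True)
        version = hashlib.sha256(weights_blob).hexdigest()[:12]
        (STORE / f"{version}.bin").write_bytes(weights_blob)
        (STORE / f"{version}.json").write_text(
            json.dumps({"version": version, "note": note, "time": time.time()}))
        (STORE / "CURRENT").write_text(version)  # what the system serves now
        return version

    def rollback(version: str) -> None:
        # Point the live system back at an earlier checkpoint.
        if not (STORE / f"{version}.bin").exists():
            raise ValueError(f"unknown checkpoint {version}")
        (STORE / "CURRENT").write_text(version)

    # v1 = check_in(b"...weights...", "nightly self-update")
    # rollback(v1)  # the boring version of a product recall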


What if an AI is unbiased but makes decisions people don't like?



