
Not entirely a fair comparison.

The tiger is dangerous because whether you consider it a sentient, intentional killing machine or a bunch of atoms, it exists in and manipulates the same physical space that you do (indeed, as the tweeted image implicitly points out, it is only a tiger when you consider it at the same sort of physical scale at which we exist).

Software, however, does not have this property. Ultimately it does exist as something in the physical world (voltages on gates, or whatever), but at that level it's equivalent to the "bunch of atoms" view. Software (by itself) does not operate in the physical space that we do, and so it cannot pose the same kind of threats to us as other physical systems do.

The question is therefore a lot more nuanced: what types of control (if any) can (a given piece of) software exert over the world in which we operate? This includes the abstract yet still large scale world of things like finance and record keeping, but it also obviously covers the physical space in which our bodies exist.

Right now, there is very (very) little software that exists as a sentient, intentional threat to us within that space. If and when software becomes able to exert more force on that space, then the "it's just logic and gates and stuff" view will be inappropriate. For now, the main risk from software comes from what other humans will do with it, not from what it will do to us (though smartphones raise issues about even that).



Software has been killing people since at least Therac-25, so "sentience" is a red herring.

The idea of harm from the unemotional application of an unthinking, unfeeling set of rules (which is essentially what algorithms are) predates modern computing by some margin; it is the cliché that Kafka became famous for.


Software doesn't "apply" rules, humans do that.

Yes, the software may be part of the apparatus of a cold unfeeling bureaucracy (private or state), but it is the decision of human beings to accept its output that causes the damage.

I should probably have dropped the term "sentience" - I agree it is not really relevant. I will need to think about examples like Therac-25; I'm not sure how that fits into my ontology right now.


> Software doesn't "apply" rules, humans do that.

I think you're using at least one of those words very differently than me, because to me software is nothing but the application of rules.


When a software system says "this person must have their property foreclosed", it is following rules at several levels - electronics, code, business, legal. But ultimately, it is a human being that makes the choice to "apply" this "rule" i.e. to have consequences in the real world. The software itself cannot do that.


Thanks, that clears up which word we differ on: "apply".

With your usage, you are of course correct.

Given how often humans just do whatever they're told, I don't trust that this will prevent even a strict majority of possible bad real-world actions, but I would certainly agree that it will limit at least some of them.



