
>The 'regulate it like nukes' thing because teenagers might ask it to end the world thing seems pretty far down the line.

It's straight up absurd. AI is going to have no appreciable effect on this, because it would simply be disseminating existing knowledge. You can already look up how to build a nuclear bomb. It's not some kind of big secret that nobody's allowed to know. The same goes for all the other world-ending scenarios. AI can't act on its own in the physical world. Even when it eventually does, it will be bound by the same limitations people are.

I agree that the real dangers AI presents are much more mundane. People being able to do things so much more efficiently is going to cause instability in the lives of those whose jobs it affects. It won't replace everyone, but it will cause problems: people will have to learn new skills, find new jobs, etc.



>AI can't act on its own in the physical world.

If it can earn money or is given money, it can certainly get people to do things in the physical world.


AIs are already acting in the physical world: Any AI (e.g., ChatGPT) changes the configuration of the human brain of those who use it.

Sure, it's a rather philosophical argument currently, but this will change.

If I were a malicious actor, I'd create a chat bot that sneaks in responses that favor my agenda. In that way, I'd make my users the puppets of the AI.


It's not like this hasn't always been happening. It's the reason billionaires buy media companies; fake news, astroturfing, bans of books… does it even matter if it's Andrew Tate or "AI" spewing nonsense?


> I'd create a chat bot that sneaks in responses that are in favor of my agenda

They're already doing this as there's a clear far left bias/lecturing that occurs in the models, especially those put out by OpenAI.


Any proof of this? Example?


Oh yeah, I thought so. This shows who is biased here.

The AI doesn't care or know; it's about the source material it's trained on. It turns out a lot of literature, academic texts, and even material on the internet leans left (and right). So this naturally seeps in.

When you live in a bubble, left or right, you will see the problem everywhere.


> Any AI (e.g., ChatGPT) changes the configuration of the human brain of those who use it.

Can you elaborate more?


As soon as I observe the behavior or output of an agent or AI, the configuration of my brain changes. This is somewhat tautological as observation is connected to memory and thus brain changes.

I guess the more interesting question is whether there are lasting changes. I'm not an expert in learning, memory, or brain development, but even without much knowledge it should be clear that frequent interaction will have lasting effects. For example, ChatGPT is biased in the sense that it does not know about all the books that haven't been digitized. With frequent use, those missing books will be reflected in my brain.

Another example could be a chess training AI. If I train with an AI, I could have blind spots in the landscape of chess skills that are deliberately (or not) excluded. So clearly, this AI would change my brain but not only in a way I intend.


AI alone, perhaps not, but logically AI eventually combines with capabilities from robotics, or is able to reach into the autopilots of cars and planes, drones, automated turrets, missile controls, etc. No?



