
These arguments keep happening because models keep surpassing most people's expectations, and the default reaction right now is to deny the capabilities out of fear.

A large majority on HN has dismissed AGI and model capabilities at every turn since OpenAI was founded a decade ago. The problem is that a universe where models become super powerful is unprecedented, revolutionary, and probably scary, so it is easier to digest it as untrue: "they won't be powerful"; "LLMs couldn't possibly have found the vulnerability that I never could." And every time capabilities level up, there is a refusal to accept the basic facts on the ground.
