I think this post is specifically an answer to yet another "AGI is just around the corner" post that made waves recently.

Fundamentally, I think that many problems in white-collar life are text comprehension problems or basic automation problems. Further, they often don't even need to be done particularly well. For example, we've long decided that it's OK for customer support to suck, and LLMs are now an upgrade over an overseas call center worker who must follow a rigid script.

So yeah, LLMs can be quite useful and will be used more and more. But this is also not the discourse we're having on HN. Every day, there's some AGI marketing headline, including one at #1 right now from OpenAI.



The AI-assisted theoretical physics derivation from GPT? There's literally no mention of AGI in the article, and it's pretty tame, especially considering it's a PR piece by OpenAI.



It's a press release from a vendor that constantly talks about AGI, and it's meant to showcase the capabilities of an unreleased model in an experiment you can't replicate. But my comment was less about the link and more about the discussion, which immediately bifurcated into the "it's done and dusted" and "this is overhyped and LLMs are useless" camps.


