Hacker News

I may be wrong, but from reading stories of people trying AI and having poor experiences, I think AI is useful and effective for some tasks and not for others, and that this is an intrinsic property: it won't get better with bigger models. You need a task which fits well with what AI can do, which is basically auto-complete. If you have a task which does not fit, it's not going to fly.


Right: LLMs have a "jagged frontier". They are really good at some things and terrible at other things, but figuring out WHAT those things are is extremely unintuitive.

You have to spend a lot of time experimenting with them to develop good intuitions for where they make sense to apply.

I expect the people who think LLMs are useless are mostly people who haven't invested that time yet. This happens a lot, because the AI vendors themselves don't exactly advertise their systems as "great at some stuff, terrible at other stuff, and here's how to figure out which is which".



