
LLM introspection is good at giving plausible ideas about prior behavior to consider, but it's just that: plausible.

They do not actually "know" why a prior response occurred; they are just guessing. Important for people to keep in mind.
