I think they are questioning whether human feedback is even necessary to make progress, i.e. whether the premise that RL needs to be RLHF is true.

My (limited) understanding is that LLMs are not capable of escaping their learned distribution by simply feeding on their own output.

But the question is whether the required external (out-of-distribution) "stimulus" needs to come from humans.

Could LLMs design experiments/interventions to get feedback from their environment, the way human scientists do?
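
A rough sketch of the kind of loop I mean, with the caveat that model.generate and model.update are hypothetical stand-ins for a real policy model and its RL update (e.g. a PPO step). The point is that the reward comes from executing the model's output against the environment (here, a test suite), not from a human rater:

    import subprocess
    import tempfile

    def run_tests(candidate_code: str, test_code: str) -> float:
        """Reward = 1.0 if the generated code passes the tests, else 0.0."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate_code + "\n" + test_code)
            path = f.name
        try:
            result = subprocess.run(["python", path],
                                    capture_output=True, timeout=10)
            return 1.0 if result.returncode == 0 else 0.0
        except subprocess.TimeoutExpired:
            return 0.0

    def training_loop(model, tasks, steps=1000):
        for _ in range(steps):
            for prompt, tests in tasks:
                candidate = model.generate(prompt)       # hypothetical API
                reward = run_tests(candidate, tests)     # environment feedback
                model.update(prompt, candidate, reward)  # hypothetical RL step

Whether that kind of signal is rich enough to push the model out of its learned distribution is exactly the open question.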

I have my doubts that this is possible without an inherent causal reasoning capability, but I'm not sure.
