My (limited) understanding is that LLMs cannot escape their learned distribution simply by feeding on their own output.
But the question is whether the required external (out-of-distribution) "stimulus" needs to come from humans.
Could LLMs design experiments/interventions to get feedback from their environment, the way human scientists do?
I doubt this is possible without an inherent causal reasoning capability, but I'm not sure.