Hacker News

You're right of course, but at the point where you're saying "well we can make a Turing machine with the LLM as the transition function by defining some tool calls for the LLM to interact with the tape" it feels like a stretch to call the LLM itself Turing complete.

Also, people definitely talk about them as "thinking" in contexts where they haven't put a harness capable of this around them. And in the common contexts where people do put a harness theoretically capable of this around the LLM (e.g. giving the LLM access to bash), the LLM basically never uses that theoretical capability as the extra memory it would need to actually emulate a Turing machine.

And meanwhile I can use external memory myself in a similar way (e.g. writing things down), but I think I'm perfectly capable of thinking without doing so.

So I persist in my stance that Turing completeness is not the relevant property, and isn't really there.




That's why I specifically didn't call the LLM itself Turing complete, but stated that if you put a loop around an LLM you can trivially make the combined system Turing complete. Maybe I should have been clearer and written "the combined system" instead of "it".
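The "loop around a transition function" construction being argued over here can be sketched concretely. Below, an ordinary lookup table plays the role of the transition function; in the thread's framing you would swap in an LLM call at that one line. All names (`run_tm`, `INC`) and the example table are illustrative, not anyone's actual implementation:

```python
# Minimal Turing machine runner: an unbounded tape (a dict) plus a loop
# around a transition function. The table INC stands in for whatever
# computes the next step -- in this thread's framing, an LLM call.
BLANK = '_'

def run_tm(delta, tape_str, state, head):
    """Run until the 'halt' state; return the non-blank tape contents."""
    tape = {i: c for i, c in enumerate(tape_str)}
    while state != 'halt':
        symbol = tape.get(head, BLANK)
        state, write, move = delta[(state, symbol)]  # the "transition function"
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return ''.join(tape.get(i, BLANK) for i in range(lo, hi + 1)).strip(BLANK)

# Example table: binary increment, starting with the head on the last bit.
INC = {
    ('carry', '1'): ('carry', '0', -1),  # flip 1 -> 0, carry moves left
    ('carry', '0'): ('halt', '1', 0),    # carry absorbed, done
    ('carry', BLANK): ('halt', '1', 0),  # ran off the left edge: new digit
}

print(run_tm(INC, '1011', 'carry', 3))  # -> 1100  (11 + 1 = 12)
```

The point of contention is exactly where the power lives: `delta` alone is just a finite lookup, and only the surrounding loop plus unbounded tape makes the combined system Turing complete.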

But the point is that this is irrelevant, because it is proof that unless human brains exceed the Turing computable, LLMs can at least theoretically be made to think. And that makes pushing the "they're just predicting the next token" argument anti-intellectual nonsense.


I am not sure it is proof, at least not in an interesting way. It's also proof that Magic: The Gathering could theoretically be made to think. Which is true but doesn't tell you anything much about MtG other than that it is a slightly complicated ruleset that has a couple of properties that are pretty common.

I think both sides of this end up proving "too much" in their respective directions.


Yeah, humans and LLMs and a TM transition function are all Turing complete in the same way, but it's also basically a useless fact. You could possibly train a sufficiently motivated rat to compute a TM transition function.


