That’s an interesting approach, but what do you learn from it that is applicable to the next task? Do you find that this eventually boils down to heuristics that generalize to any task? It sounds like it would only work because you already put a lot of effort into understanding the constraints of the specific problem in detail.
I wonder why they fail in this specific way. If you just let them do stuff, everything quickly turns to spaghetti. They seem to overlook obvious opportunities to simplify things, or to spot a pattern and follow through on it. The default seems to be to add more, rather than rework or adjust what’s already in place.
I suspect it has something to do with a) the average quality of code in open-source repos and b) the way the reward signal is applied in RL post-training - does the model ever face the consequences of a brittle implementation for a task?
I wonder if these RL runs can extend over multiple sequential evaluations, where poor design in an early task hampers performance later on, as measured by the number of tokens required to add new functionality without breaking existing functionality.
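To make that concrete, here is a rough sketch of what such a multi-task evaluation loop might look like. Task, run_agent, and the scoring scheme are all hypothetical, invented just to illustrate the idea:

    # Hypothetical sketch of the sequential-evaluation idea: the same repo
    # accumulates tasks, and the reward for task N depends both on the tokens
    # spent and on whether tasks 1..N-1 still pass. All names (Task, run_agent)
    # are made up for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        description: str
        tests: list = field(default_factory=list)  # callables taking the repo state, returning bool

    def sequential_eval(repo, tasks, run_agent, token_budget=50_000):
        score, completed = 0.0, []
        for task in tasks:
            tokens_used = run_agent(repo, task.description, budget=token_budget)
            new_ok = all(t(repo) for t in task.tests)                        # new feature works
            old_ok = all(t(repo) for prev in completed for t in prev.tests)  # nothing regressed
            if new_ok and old_ok:
                # Cheaper solutions earn more, so a brittle early design that
                # makes later tasks expensive gets penalized indirectly.
                score += 1.0 - tokens_used / token_budget
            completed.append(task)
        return score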
Yeah, I've been wondering if the increasing focus on coding RL is going to draw models towards very short-term goals, relative to just learning from open-source code in the wild.
To me this seems like a natural consequence of the next-token prediction model. Within a single generation you can’t “backtrack” once you’ve emitted a token. You can only move forwards. You can iteratively refine (e.g. the agent can one-shot itself repeatedly), but the underlying mechanism is still present.
I can’t speak for all humans, but I tend to code “nonlinearly”, jumping back and forth and typically going from high level (signatures, type definitions) to low level (filling in function bodies). I also do a lot of deletion when I decide a function isn’t actually needed, or when I find a simpler way to phrase a particular section.
Edit: in fact, thinking on this more, code is _much_ closer to a tree than a sequence of tokens. Not sure what to do with that, except maybe to try a tree-based generator that iteratively adds child nodes.
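For what it's worth, here is a toy sketch of what "iteratively adds child nodes" could mean: grow a syntax tree by expanding holes anywhere in the tree (roughly the signatures-first, bodies-later workflow described above), so the order of generation is decoupled from the order of the final text. The grammar and node kinds are made up; this isn't any real model:

    import random

    class Node:
        def __init__(self, kind, hole=False):
            self.kind = kind
            self.children = []
            self.hole = hole  # an unexpanded part of the program

    # Tiny toy "grammar": what children a hole of a given kind may expand into.
    GRAMMAR = {
        "module":    [["func_def", "func_def"]],
        "func_def":  [["signature", "body"]],
        "signature": [[]],                          # leaf
        "body":      [["stmt"], ["stmt", "stmt"]],
        "stmt":      [[]],                          # leaf
    }

    def holes(node):
        found = [node] if node.hole else []
        for c in node.children:
            found.extend(holes(c))
        return found

    def generate(tree):
        # Expand any hole, not necessarily the leftmost one, so high-level
        # structure can be fixed before low-level details are filled in.
        while (hs := holes(tree)):
            h = random.choice(hs)
            h.children = [Node(k, hole=True) for k in random.choice(GRAMMAR[h.kind])]
            h.hole = False
        return tree

    program = generate(Node("module", hole=True))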
This would make sense to me as an explanation when it only outputs code. (And I think it explains why code often ends up subtly mangled when moved during a refactoring: where a human would copy-paste, the agent instead has to “retype” it and often ends up slightly changing formatting, comments, identifiers, etc. A small sketch of the copy-paste approach follows below.)
But for the most part, it’s spending more tokens on analysis and planning than pure code output, and that’s where these problems need to be caught.
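To illustrate the copy-paste point: a move refactor that slices the exact original text and reinserts it verbatim cannot change formatting, comments, or identifiers, whereas regenerating the block token by token can drift. The file paths and helper name below are hypothetical:

    # Sketch: a "move" done by slicing the exact source lines and writing them
    # back out unchanged. Nothing is re-typed, so nothing can drift.
    def move_block(src_path, dst_path, start_line, end_line):
        with open(src_path) as f:
            lines = f.readlines()
        block = lines[start_line - 1:end_line]            # copied verbatim
        remainder = lines[:start_line - 1] + lines[end_line:]
        with open(src_path, "w") as f:
            f.writelines(remainder)
        with open(dst_path, "a") as f:
            f.writelines(block)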
I feel like planning is also inherently not sequential. Typically you plan in broad strokes, then recursively jump in and fill in the details. On the surface it doesn’t seem to be all that much different than codegen. Code is just more highly specified planning. Maybe I’m misunderstanding your point?
ChatGPT is a great name though: you “chat” with the “GPT”, so it’s self-informing (even if you don’t know what a GPT is), and it’s four syllables that roll off the tongue well together.
RSS, by contrast, has no vowels, conveys no information, and looks like an alphabet-soup term you might see at the doctor’s office or on an HR onboarding form at a corpo.
In Japan it's now known colloquially as 「チャッピー」 ("Chappy" or "Chappie"). High praise that it has received such a shortened and personified version so quickly.
As a European, my impression is that things named something-something “Euro” tend to be cheap and low quality. I don’t think it’s possible to build a positive consumer brand around “Eurosky”. I support the cause, though - we probably need to find a catchy word like “Brexit” or “enshittification” to make it salient.
This is almost universally true for every national identity (or however we want to widen the term to include Euro).
If you have a good product, you usually lead with that. "Made in X" becomes one bullet point in the list of things that make you great. If you lead with "made in X" or even make that your entire brand, that's a sign that you probably don't have much else to bring to the table.
The only real exception is foods and beverages, and even there it's questionable.
> Eurosky is a pan-European initiative spearheaded by a coalition of entrepreneurs, technologists and civil society organizations
A Brit, a Belgian, and a German, by the looks of their profiles, which are just their LinkedIn pages.
Posting this to HN feels like some guys trying to do "growth hacking" with Brusselian characteristics.
Honestly I even propose this conjecture: If you are in Europe you will learn about any truly European social media from some other source long before it appears on HN.
For the record, it has now changed again, to “Meta’s AI smart glasses and data privacy concerns”, which is even more milquetoast.
The parent and another comment reacting to this change have also been (artificially, I must assume) sunk from the top to below gems like “Too funny that the subcontractor working for meta is ‘sama’”.