Hacker News | hsaliak's comments

The problem with vibe coded re-writes is that you basically sign off on understanding the generated codebase at that point. Any historical knowledge of the codebase is gone.

This prompt defines the translation as a file-for-file, line-for-line port. Seems like historical knowledge will be fine.

Having dabbled in both Zig and Rust, I'd say they do things so fundamentally differently that an exact line-for-line port like that isn't possible.

The Rust they've written (so far) is highly unidiomatic (and full of unsafe). I can't speak to the Zig part, but it seems plausible to me that it is line-by-line, horrendous Rust.

Whether or not they can clean it up is an interesting question.


Zig can do things with compile-time compute that sit somewhere between Rust's const expressions and proc macros. This isn't something Rust (or most languages) has. So even if we're generous and interpret line-by-line as expression-by-expression, it isn't fully doable.
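For contrast, a minimal sketch of what Rust's side of that spectrum looks like: a `const fn` runs at compile time and its result can be used where a constant is required (like an array length), but unlike Zig's comptime it cannot generate new types or code — that niche is covered by macros instead. The names below are illustrative only.

```rust
// A `const fn` is evaluated at compile time when used in a const context,
// but it can only compute values, not types or code (unlike Zig comptime).
const fn table_size(bits: u32) -> usize {
    1usize << bits
}

// The compile-time result used as an array length — a fixed-size lookup table.
const LUT: [u8; table_size(4)] = [0; table_size(4)];

fn main() {
    assert_eq!(LUT.len(), 16);
    println!("{}", LUT.len()); // prints 16
}
```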

But also, telling an LLM to do a line-by-line translation of a file is _guaranteed to never truly be a line-by-line translation_, due to how LLMs work. That's fine, though: you don't ask for line-by-line because it will actually work line by line, but to "convince" the model not to do the opposite (moving things around wholesale, completely rewriting components based on "guesses" about what they're supposed to do, etc.). In other words, it makes the result more likely to be behavior-compatible (logic bugs included) even though it isn't literally line-by-line. And that lets you fuzz the behavior for discrepancies after the initial step, before doing any larger refactoring that may include bug fixes.
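Fuzzing for behavioral discrepancies can be sketched as a differential test: run the original and the port on the same inputs and compare outputs. The function names here are placeholders; a real setup would call the Zig original through FFI and use a proper fuzzer (e.g. cargo-fuzz), while a tiny LCG stands in for the input generator below.

```rust
// Stand-ins: in reality these would be the Zig original (via FFI) and the port.
fn reference_impl(x: u32) -> u32 { x.rotate_left(3) ^ 0x9e37 }
fn ported_impl(x: u32) -> u32 { x.rotate_left(3) ^ 0x9e37 }

fn main() {
    // Tiny linear congruential generator for cheap, reproducible inputs.
    let mut seed: u64 = 0xdeadbeef;
    for _ in 0..10_000 {
        seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let input = (seed >> 32) as u32;
        assert_eq!(
            reference_impl(input),
            ported_impl(input),
            "behavioral discrepancy on input {input:#x}"
        );
    }
    println!("no discrepancies found");
}
```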

Though tbh I would prefer the initial zig -> terrible-rust pass were done with a deterministic, reproducible, debuggable program instead of an LLM. The LLM can then be used to support incremental refactoring. The initial "bad" transpilation is so much code that using an LLM there seems like a horror story wrt. subtle hallucinations and similar.


If anyone can do it, it's Anthropic. The question is more how long it will take and how many tokens it will burn/how much groundwater.

care to attempt a top 3 differences that someone doing this kind of rewrite should know?

(Would teach me a little about Zig, about which I know 0.)


Wouldn’t call myself an expert in either, but I think 2 things stand out far more than anything else:

1. Rust is effectively as strict as can be in terms of ownership. In Zig you can just allocate some memory and then start slinging pointers (or slices) all over. If you’re doing this then you’re presumably doing it for mutability, and you don’t strictly know where that pointer ends up once you’ve passed it on.

2. Rust’s metaprogramming is split among a couple different things (e.g. traits, macros), whereas Zig’s is unified (comptime). comptime is (at least advertised as) “just normal Zig code”, and Rust macros are a great example of “this doesn’t work at all like the base language”.

#1 boils down to “can the LLM solve the pointer aliasing here?” and #2 is translating between metaprogramming paradigms. It could work, but a line-by-line translation is a pipe dream.


Great answers! Esp. the recap in the last line.

Zig doesn't have a borrow checker. It's basically C, if C had been much better designed.

Line-by-line ports to idiomatic Rust are usually not possible because of the borrow checker and Rust's ownership rules. That's the reason the TypeScript compiler was ported to Go instead of Rust.


You can do this with a bunch of clones. But that makes your software slower and kind of defeats the entire purpose.

It makes the git history a bit more confusing to follow if you want to see old changes, but I'm sure a simple wrapper to check for the zig equivalent files as well wouldn't be very difficult.

The problem is not the forward pass; it's the control/feedback loop when slop is written in response to the forward pass. Perhaps we should give the LLM two specs: one designed for the forward pass and another for the acceptance criteria / backward pass, focused on tests, best practices, and code, so that the output is independently verified?

Nobody knows what to build when everything can be built; there is no moat.

Using CRDT gossip to inform scaling is a clever idea. You are on to something there. Perhaps extract it as a core library/concept from the runtime? I feel that would be generally useful!

Thanks! That’s certainly crossed my mind!

I find parallel agents to be an exception rather than the norm. Maybe I’m the problem? For those exceptional cases, opening a few more terminals gets the job done. It’s unclear to me if this needs to be the primary workflow. My brain naturally does better doing deep work on one problem.


I have historically not used them but I want to start so that some of the spin up / tear down work of doing any particular task can happen in isolation. For example, drafting a change before I start editing, checking out and setting up code from a branch before I do a review, etc.


I am exactly the same, except I am really excited about this update! It’s not so much “in parallel” as being able to easily jump between threads. It allows me to dive into misc investigations in a side thread without derailing the main context where I’m doing the main editing.


I have a coding agent, https://github.com/hsaliak/std_slop, where sessions live in a SQL ledger. So /session [new, clone, deletes, undo] are supported and all sessions are persistent. Cloning lets you 'fork' the context and undo lets you roll back, basically solving the problem you state above.

Sessions are linear though, so you can't do this _while_ an existing session is cooking.

That said, I am excited about this update too, I've been playing with ACP support and Zed's UX was bare bones. I want to run my agent with multiple workers now, and see what happens.


I also have this workflow; it’s like you’re on a side quest or in the main story. Those are not necessarily in parallel.


I made https://github.com/hsaliak/filc-bazel-template, a Bazel target for people who may want to use these two together to make hermetic builds.


Clojure has lousy error messages; agents deal with this well. Clojure is capable of producing some of the most dense code I’ve ever seen, so manual code reviews really start to feel like a bottleneck unless your goal is to level up.


> Clojure is capable of producing some of the most dense code I’ve ever seen, so manual code reviews really start to feel like

For me it's the opposite: the dense code is easier to review, because the proposed changes are almost always smaller and more informative. Contrast that with a change in a typical TypeScript project, where changes propagate across tens of files that you need to jump between just to understand the context. In the time it takes me to ramp up on what the change is, I've already completed the review of a change in a Clojure program.


Couldn't agree more. And I actually kind of like TypeScript, but man, typical TypeScript projects are so verbose and sprawling, it's crazy.


Not to mention that ClojureScript often emits safer code than TypeScript does. Sounds insane and counter-intuitive, but here's the thing: TypeScript removes all the type information from the emitted JS. Clojure, being strongly typed, retains its strong-typing guarantees in the compiled JS code. So all that enormous effort required to deal with complex types in practice feels like bringing kata choreography to a street fight — not utterly useless by itself, but hardly helping in a real fight-or-flight situation. You can impress the attacker with your beautiful dance and even prevent them from attacking you, but that's more like hope than a real strategy.


I would say dense code tends to help code reviews. It's just a bit unintuitive to spend minutes looking at a page of code when you are used to taking a few seconds in more verbose languages.

I also find it easier to just grab the code and interactively play with it, compared to doing that with 40 pages of code.


I've long had the same idea... this one has legs.


Tool output truncation helps a lot and is one of the best ways to reduce context bloat. In my coding agent the context is assembled from SQLite. I suffix the message ID so the truncated tool call can be rehydrated if it's needed, and it works great. My exploration of context management is mostly documented here: https://github.com/hsaliak/std_slop/blob/main/docs/CONTEXT_M...
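A hedged sketch of the truncate-then-rehydrate idea (not the linked repo's actual code; the marker format and `msg_id` are made up): cut the tool output at a byte budget and suffix the stored message id, so the agent can fetch the full text on demand.

```rust
// Truncate `output` to roughly `budget` bytes, appending a handle that
// names the stored message so the full text can be rehydrated later.
fn truncate_with_handle(output: &str, budget: usize, msg_id: u64) -> String {
    if output.len() <= budget {
        return output.to_string();
    }
    // Back off to a char boundary so we never split a UTF-8 sequence.
    let mut cut = budget;
    while !output.is_char_boundary(cut) {
        cut -= 1;
    }
    format!("{}… [truncated; full output: msg:{}]", &output[..cut], msg_id)
}

fn main() {
    let long = "x".repeat(100);
    let short = truncate_with_handle(&long, 16, 42);
    assert!(short.contains("msg:42")); // handle survives truncation
    assert!(short.starts_with(&"x".repeat(16)));
    println!("{short}");
}
```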


This is most certainly vibed with a few optimization-focused prompts. Yes, performance is a feature, but so is lack of risk.

