

When somebody comments on a PR with "Incredible work, Jacob. It is an honor to call you my colleague." then it's safe to assume the contribution is out of the ordinary, pretty much falling outside of the "in all likelihood".

A 3000-line LLM commit is not that.


Also, 95% of those 30k changed lines are fully self-contained inside the aarch64 directory, and of the remaining changes the majority looks like just adding "aarch64" as another item to an existing list. There are a few core changes that to me look like they could be done in their own PRs, but core maintainers also get to decide if they want to apply bureaucracy to their own work.

No description provided. I love this PR. But yeah, try being anyone besides Jacob and submitting that!

> In successful open source projects you eventually reach a point where you start getting more PRs than what you’re capable of processing. Given what I mentioned so far, it would make sense to stop accepting imperfect PRs in order to maximize ROI from your work, but that’s not what we do in the Zig project. Instead, we try our best to help new contributors to get their work in, even if they need some help getting there. We don’t do this just because it’s the “right” thing to do, but also because it’s the smart thing to do.

I feel like if their goal is to prioritize contributors over contributions, it'd also logically follow that they should try to have descriptions where possible? Just to make exploring any set of changes and learning from them easier? I looked it over briefly; no Markdown or similar doc changes there either.

I mean, the changes may well be amazing; it's just that adding a more detailed description of what they are, alongside the considerations made during development, would also be due diligence for new folks or anyone wanting to learn from good code.


How would you differentiate a 3000-line LLM commit made by the best models and good AI processes from a 3000-line commit made by the best human developer?

edit: Okay, I set the bar too high here with "best human developer" and vague "good AI processes". My bad. Yes, LLMs are not quite there yet.


A personal relationship and trust, as seems to be the case here?

By using my brain.

Don't be ridiculous! We don't do that anymore.

Read it?

It's still fairly obvious just by skimming the code. The best AI models are still quite far from the best human developers in ability and especially in code quality.

When the best AI models are the same or better than the best[1] human developers, what then?

We're already at the point talking about best vs. best.


If that happens and we have a way of reliably knowing if some code is produced to that high quality, then I think we probably can accept that AI coding is the only sensible option.

We definitely are not close to that point though and it's unclear if/when we will get there.


It seems to me that people might be arguing from conflicting hidden premises here. "AI Coding" is a spectrum that could mean something as simple as letting the LLM proofread your changes and then act on those with your own human brain, or it could mean just telling the agent what you want and let it rip and tear until it is done.

If I do the latter and submit a PR to something like Zig, I'll certainly be caught doing it and rightfully chastised. If I do the former, my PR will be better without anybody besides myself having any way of knowing how it got better. These days, when I contribute to open source, I probably do something in between.

Blanket banning all of these seems like a bad idea to me. It actively gates people like myself from contributing, because I respect these people and projects that much. It feels like I would be doing something they find disgusting if my work has touched an LLM and I obviously don't want to do that to people I respect. But it's fine, there are plenty of things to do in the world even when some doors are closed.

I do not presume to have any say in the Zig project's well-argued decisions[0]; I'm not really even their user, let alone someone important like a contributor. Their point of preferring human contact is superb, frankly. It's probably a different kind of problem in an open-source project staffed with a lot of remote workers, where human contact is scarce.

[0] https://kristoff.it/blog/contributor-poker-and-ai/


> Blanket banning all of these seems like a bad idea to me. It actively gates people like myself from contributing

in my projects i will reject any contribution that i do not understand. even if the contribution is handwritten by an expert developer. that developer will have to earn my trust like anyone else, like you would have to.

LLM contributions are non-deterministic, which means they can never be trusted.

therefore, if you use LLM to contribute, you can not earn my trust. if you believe that you can not create a meaningful contribution without the use of LLM then you are realizing that you are not skilled enough to understand the code that you contribute. because if you could understand it, then you could write it yourself. i want your personal contributions, not those of your LLM. i want contributions that the submitter actually understands. i want you to earn my trust by showing me that you understand what you are doing. i want you to grow your understanding of my project. none of this happens when you use LLMs.

if you are unable to make a contribution without the help of an LLM then you are not ready to contribute. try looking for smaller issues that you can work on instead until you learned enough to make larger contributions.


> i will reject any contribution that i do not understand

Fair.

> that developer will have to earn my trust like anyone else

What does it take to "earn your trust"?

> LLM contributions are non-deterministic, which means they can never be trusted.

Provably incorrect. LLM contributions can be reviewed, tested, and understood like any other contribution. There's nothing "special" about LLM contributions.

Contributions authored by human brains are also non-deterministic, perhaps if the author was feeling in a slightly different way they'd have formatted the code a bit differently.
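To be concrete about what "non-deterministic" even means here: decoding just samples from a probability distribution over tokens. A toy Python sketch (made-up logits, not any real model's API):

    import math
    import random

    def sample_token(logits: dict[str, float], temperature: float) -> str:
        """Greedy pick at temperature 0, otherwise softmax sampling."""
        if temperature == 0:
            return max(logits, key=logits.get)  # deterministic
        weights = [math.exp(v / temperature) for v in logits.values()]
        return random.choices(list(logits), weights=weights, k=1)[0]

    logits = {"foo": 2.0, "bar": 1.8, "baz": 0.5}
    print([sample_token(logits, 0.0) for _ in range(3)])  # same every run
    print([sample_token(logits, 1.0) for _ in range(3)])  # varies run to run

At the sampling level, temperature 0 is fully reproducible (real serving stacks can still add floating-point noise), and either way trust should attach to the reviewed artifact, not to how its bytes were generated.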

> therefore, if you use LLM to contribute, you can not earn my trust.

The premise is wrong.

> if you believe that you can not create a meaningful contribution without the use of LLM then you are realizing that you are not skilled enough to understand the code that you contribute

What if I believe I can do so without an LLM, but that it could be even better with an LLM?

What if I'm great at understanding code, but terrible at writing it?

Again, this is a premise that you just decided to take as truth, without proof.

> because if you could understand it, then you could write it yourself.

False. I can understand a novel algorithm by reading and studying it, but perhaps I could have not come up with it myself.

> i want you to earn my trust by showing me that you understand what you are doing

I can easily do that even if my contribution involves LLM assistance.

> i want you to grow your understanding of my project

Ditto.

> none of this happens when you use LLMs

False. Why do you think so?

> if you are unable to make a contribution without the help of an LLM then you are not ready to contribute.

Again, this is your opinion and you have no way of proving it. I can prove the opposite.


> What does it take to "earn your trust"?

multiple successful contributions of increasing complexity, among other things.

>> LLM contributions are non-deterministic, which means they can never be trusted.

> Provably incorrect. LLM contributions can be reviewed, tested, and understood like any other contribution. There's nothing "special" about LLM contributions.

read this comment to see what i mean: https://news.ycombinator.com/item?id=47968180

> Contributions authored by human brains are also non-deterministic, perhaps if the author was feeling in a slightly different way they'd have formatted the code a bit differently.

i can tell a human to focus on a certain issue. they will either listen and follow my instructions, or i will reject their contribution. the LLM is almost guaranteed to not follow all my instructions and make changes i didn't ask for. see my comment above.

>> therefore, if you use LLM to contribute, you can not earn my trust.

> The premise is wrong.

how so?

>> if you believe that you can not create a meaningful contribution without the use of LLM then you are realizing that you are not skilled enough to understand the code that you contribute

> What if I believe I can do so without an LLM, but that it could be even better with an LLM?

what you believe is not relevant. only what you can convince me of. you'll have to first show that you actually can work without an LLM before i will consider your contribution.

> What if I'm great at understanding code, but terrible at writing it?

your problem not mine. if you are terrible at writing code but good at understanding it then it's your choice to only do code reviews. you can still make a meaningful contribution that way. i'd even let you write code so you can practice that, but i am not interested in your LLM generated code.

> Again, this is a premise that you just decided to take as truth, without proof.

i don't need proof. i need trust. you need to convince me that your code can be trusted.

>> because if you could understand it, then you could write it yourself.

> False. I can understand a novel algorithm by reading and studying it, but perhaps I could have not come up with it myself.

that's called learning. once you learned it, you can write it. but in order to effectively learn you also have to practice. if you let LLM write all your code then you are not practicing, so you won't improve.

>> i want you to earn my trust by showing me that you understand what you are doing

> I can easily do that even if my contribution involves LLM assistance.

it depends on the level of assistance. i am not ruling out use of AI to do research and learn, just don't let it write the code for you.

>> i want you to grow your understanding of my project

>> none of this happens when you use LLMs

> False. Why do you think so?

as i said above, if you don't practice writing the code yourself you are not learning. not enough at least to satisfy my expectations.

>> if you are unable to make a contribution without the help of an LLM then you are not ready to contribute.

> Again, this is your opinion and you have no way of proving it. I can prove the opposite.

whether you are ready to contribute to my project or not is not something i need to prove. it is a choice based on my preference which depends on the amount of trust you have earned. you can not prove to me that you are ready to contribute. this is not a standardized test that if you pass you automatically qualify. you can only convince me by earning my trust. this is a human decision, based on feelings.


> because if you could understand it, then you could write it yourself.

I accept most things you said there as valid opinions, but this is where the logic goes wrong.

I use LLMs to give me more from the only resource (now that my basic and mid-level needs are largely met) that ultimately matters: time. That means that I need to waste far less time in front of the computer, typing code, and use far more time doing more useful things, like hobbies, art, being with my children.

But as I said before, every project is obviously allowed to make its own rules, and contributors should obey those rules. There are plenty of projects that welcome AI deniers and plenty that prefer AI aficionados.

At least for now. My belief is that one of those groups will fade away like horseback riding did, but we'll see. Perhaps you have heard the famous stages quoted by many different people in different forms: first an idea is ridiculed, then it's attacked, then it's accepted. Some open-source communities have clearly entered the attacking phase in the last year or so.


you are saying that even if you understand the code, using an LLM saves you time writing it. fair enough[*]. the problem on my side still is that if you didn't write the code yourself, i have no evidence that you actually understood it. the only way to prove that you understand the code is to write it yourself. that's where the trust building comes in. you may actually understand the code, but i can't trust that you do.

[*] in my opinion it takes more time to verify that the LLM code is correct than it takes to write it yourself. based on that, if you save time using an LLM then you didn't spend enough time to verify that the code is correct.

> Some open-source communities have clearly entered the attacking phase in the last year or so

i feel it's more like defense, but yes.


How can AI possibly be better than “the best” when the corpus of training data now includes its own slop in addition to all the code by new devs/lazy devs/bad devs scattered all over the internet? Law of averages applies here.

Because LLMs are obviously much more than the sum of their parts.

Oh, which parts are those? Do tell!

Don't use "the corpus"; use thinking, the source code of libraries and existing software, documentation, tools, and best practices.

A billion times faster than a human, no tiring, no miscalculation, no brain farts, no cheating.


The post that inspired this post [0] says:

> So while one could in theory be a valid contributor that makes use of LLMs, from the perspective of contributor poker it’s simply irrational for us to bet on LLM users while there’s a huge pool of other contributors that don’t present this risk factor.

> The people who remarked on how it’s impossible to know if a contribution comes from an LLM or not have completely missed the point of this policy and are clearly unaware of contributor poker.

The point isn't the 3000-line PR; it's whether we think the submitter is going to stick around.

[0] https://kristoff.it/blog/contributor-poker-and-ai/


It seems to be trivially easy for everyone but people heavily invested in LLMs to spot LLM slop.

Jacob is part of the core team, not a random outside contributor.

Very different context: that PR is from a maintainer and trusted member of Zig, who surely discussed the implementation/design internally as well.

I can see brown I can see blue I can see violet sky…


The title is editorialized but why on earth was this even drafted? It's completely absurd.


Even though I knew it was the author, I read this article in the comedian's voice :)


I can't help but wonder how the two feel about each other, both being big names in their own fields. Both enjoyable in their own right, too.


It does sound a bit like something he might have written.


Looks interesting but I don't have a Facebook account :/


The dependency on Facebook is kind of unfortunate, as it excludes those who either don't have a Facebook account or for various reasons do not want to use it to sign up for services. I am myself in the latter category, but went with it as I wanted a single login and saw it as the best alternative at the time; there's some more about that in the 'why Facebook?' section on the site.

Would really like to provide a solution that could work without login, but I'm kinda clueless about how to solve this in a good way: one that keeps the creator from knowing what has been checked off, while allowing others to both check and later, perhaps even on another device, uncheck items :/
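The closest I've gotten to an idea is capability URLs, where the link itself is the credential. A hypothetical, in-memory Flask sketch (every endpoint and name here is made up, just to show the shape):

    import secrets
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    edit_views = {}   # edit_token  -> check_token
    check_views = {}  # check_token -> {"items": [...], "checked": set()}

    @app.post("/lists")
    def create_list():
        edit_token = secrets.token_urlsafe(16)
        check_token = secrets.token_urlsafe(16)
        edit_views[edit_token] = check_token
        check_views[check_token] = {"items": request.json["items"], "checked": set()}
        # The creator keeps edit_url private and hands out share_url only.
        return jsonify(edit_url=f"/edit/{edit_token}", share_url=f"/check/{check_token}")

    @app.get("/check/<token>")
    def view_list(token):
        # Anyone holding the share link sees items plus current check-state.
        v = check_views[token]
        return jsonify(items=v["items"], checked=sorted(v["checked"]))

    @app.post("/check/<token>/<int:item>")
    def toggle_item(token, item):
        # Toggle: checking and later unchecking both work, from any device.
        check_views[token]["checked"].symmetric_difference_update({item})
        return jsonify(checked=sorted(check_views[token]["checked"]))

    @app.get("/edit/<token>")
    def edit_list(token):
        # The edit view deliberately returns items only, never check-state,
        # so the creator doesn't see what has been checked off.
        return jsonify(items=check_views[edit_views[token]]["items"])

The obvious caveat is that nothing technically stops the creator from opening the share link themselves, so keeping them in the dark relies on the edit view never reading check-state, plus a bit of honor system.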


I am slowly getting there. After watching the film Cowspiracy, my wife and I decided to slowly transition, mainly for environmental reasons but also because of the way animals are treated. We eat meat maybe once or twice a week, down from twice daily, and when we do eat meat we try to source the best from small farmers. Transitioning away from meat has been relatively easy; where I personally fall short is dairy. There just isn't a good substitute for cheese, and I find the smell and taste of soya milk awful. Hopefully new products will come along...


Unsweetened cashew milk is the best "veggie milk".

I find soy milk to be so thick it feels like drinking yogurt. Almond milk tastes like dirt to me, but cashew milk is so close to milk that I haven't found any reason to buy actual cow's milk in years.

Agree with you on cheese though. There just isn't good vegan cheese.


Have you tried almond milk?


Nice. Now if only we could get GRRM or Patrick Rothfuss to use this tool.


+1 to Elixir and Phoenix from my side. I have yet to dabble with Phoenix, but the short amount of time I've spent with Elixir has been thoroughly enjoyable. I can't wait to rewrite chunks of code at work into small Phoenix applications.


Elixir and Phoenix are definitely on my dabble list. Falcon (Python) is definitely on my 'use in production' list.


Welcome to the party!


Hi @FollowSteph. You mention that it takes at least 2-3 years to get a good product going. Out of curiosity, how many developers did you have for those initial 3 years?

