Hacker News — grey-area's comments

I know, I recently upgraded and skipped several releases without any issues with some large codebases.

The compatibility guarantee is a massive win; it's exciting to have a boring language to build on that doesn't change much but just gradually gets better.


Really? My experience is that of C, C++, Go, Python, and Rust, Go BY FAR breaks code most often (excepting the Python 2->3 change).

Sure, most of that is not the compiler or standard library, but dependencies. But I'm not talking about some random open-source library (I can't blame the core team for that), but things like protobuf breaking EVERY TIME. Or x/net, x/crypto, or whatever.

But also yes, from random dependencies. It seems that language-culturally, Go authors are fine with breaking changes. Whereas I don't see that with people making Rust crates. And multiple times I've dug out C++ projects that I have not touched in 25 years, and they just work.


The stdlib has been very very stable since the first release - I still use some code from Go 1.0 days which has not evolved much.

The x/ packages are more unstable yes, that's why they're outside stdlib, though I haven't personally noticed any breakage and have never been bitten by this. What breakage did you see?

I think protobuf is notorious for breaking (but more from user changes). I don't use it I'm afraid so have no opinion on that, though it has gone through some major revisions so perhaps that's what you mean?

I don't tend to use much third party code apart from the standard library and some x libraries (most libraries are internal to the org), I'm sure if you do have a lot of external dependencies you might have a different experience.


Well, for C++ the backwards compatibility is even better. Unless you're using `gets()` or `auto_ptr`, old C++ code either just continues to compile perfectly, or was always broken.

Sure, the Go standard library is in some sense bigger, so it's nice of them to not break that. But short of a Python2->3 or Perl5->6 migration, isn't that just table stakes for a language?

The only good thing about Go is that its standard library has enough coverage to do a reasonable number of things. The only good thing. But any time you need to step outside of that, it starts a bit-rotting timer that ticks very quickly.

> though [protobuf] has gone through some major revisions so perhaps that's what you mean?

No, it seems it's broken way more often than that, requiring manual changes.


> But any time you need to step outside of that, it starts a bit-rotting timer that ticks very quickly.

This is not my experience with my own or third party code. I can't remember any regressions I experienced caused by code changes to the large stdlib at all in the last decade, and perhaps one caused by changes to a third party library (sendgrid, who changed their API with breaking changes, not really a Go problem).

A 'bit-rotting timer' isn't very specific or convincing, do you have examples in mind?


>> But any time you need to step outside of that

"That" here refers to the standard library, so:

> I can't remember any regressions I experienced caused by code changes to the large stdlib at all in the last decade

I agree. But I'm saying it's a very low bar, since that's true for every language. But repeating myself I do acknowledge that Go in some senses has a bigger standard library. It's still just table stakes to not break stdlib.

> A 'bit-rotting timer' isn't very specific or convincing, do you have examples in mind?

I don't want to dox myself by digging up examples. But it seems that maybe half the time dependabot or something encourages me to bump versions on a project that's otherwise "done", I have to spend time adjusting to non-backwards-compatible changes.

This is not my experience at all in other languages. And you would expect it to be MORE common in languages where third party code is needed for many things that Go stdlib has built in, not less.

I've made and maintained open-source code continuously since years started with "19", and aside from Java applets, everything else just continues to work.

> sendgrid, who changed their API with breaking changes, not really a Go problem

To repeat: "It seems that language-culturally, Go authors are fine with breaking changes".


I disagree about culture, I’d say that’s the culture of js.

For Go I’d say it’s the opposite and you have obviously been unlucky in your choices which you don’t want to talk about.

But it is not a universal experience. That is the only third party package with breaking changes I have experienced.


Isn't the x for experimental, and therefore breaking API changes are to be expected?

Sure.

To repeat: "It seems that language-culturally, Go authors are fine with breaking changes". I just chose x as examples of near-stdlib, as opposed to appearing to complain about some library made by some random person with skill issues or who had a reasonable opinion that since almost nobody uses the library, it's OK to break compat. Protobuf is another. (Not to mention the GCP libraries, which both break and move URLs, and/or get deprecated for a rewrite every Friday.)

The standard library not breaking is table stakes for a language, so I find it hard to give credit to Go specifically for table stakes.

And it's not like the Go standard library isn't a bit messy. As any library would be, in order to maintain compatibility. E.g. net.Dialer has Timeout (and Deadline), but it also has DialContext, introduced later.

If the Go standard library had managed to maintain table stakes compatibility without collecting cruft, that'd be more impressive. But as those are contradictory requirements in practice, we shouldn't expect that of any language.


That’s not quite the same - you were paying for delivery by a third party, and you still do that when you pay for your email provider.

This is pay to play to contact someone - more akin to donors paying politicians for access at a dinner.


If privacy is the main concern (as it is in most usage of UUIDs) you could just encrypt the integer primary key with something like a Feistel network, and avoid the performance problems of UUIDs while still having opaque public identifiers.

I’d find "Error: failed processing order: context deadline exceeded" just as useful and more concise.

Typically there is only one possible code path if you can identify both ends.


Not in my experience. Usually your call chain has forks. Usually the DoThing function will internally do 3 things and any one of those three things failed and you need a different error message to disambiguate. And four methods call DoThing. The 12 error paths need 12 uniquely rendered error messages. Some people say "that is just stack traces," and they are close. It is a concise stack trace with the exact context that focuses on your code under control.

If you have both the start of the call chain and the end of the call chain mapped you will get a different error response almost every time and it is usually more than enough, so say your chain is:

Do1:...Do10, which then DoX,DoY,DoZ and one of those last 3 failed.

Do you really need Do1 to Do10 to be annotated to know that DoZ failed when called from Do1? I find:

Do1:DoZ failed for reason bar

Just as useful and a lot shorter than: Do1: failed:Do2:failed...Do9 failed:Do10:failed:DoZ failed for reason bar

It is effectively a stack trace stored in strings; why not just embed a proper stack trace in all your errors, if that is what you want?

Your concern with having a stack trace of calls seems a hypothetical concern to me but perhaps we just work on different kinds of software. I think though you should allow that for some people annotating each error just isn't that useful, even if it is useful for you.


This is a really interesting potential problem. I wonder how providers are going to avoid training on slop?

The training data is full of ‘any’ so you will keep getting ‘any’ because that is the code the models have seen.

An interesting example of the training data overriding the context.


Then you add a biome rule to say "no any ever" and the LLM will fix it before claiming the job is done.

This is a fascinating look into code generated by an LLM that is correct in one sense (passes tests) but doesn't meet requirements (painfully slow). Doesn't use is_ipk to identify primary keys, uses fsync on every statement. The problem with larger projects like this even if you are competent is that there are just too many lines of code to read it properly and understand it all. Bravo to the author for taking the time to read this project, most people never will (clearly including the author of it).

I find LLMs at present work best as autocomplete -

The chunks of code are small and can be carefully reviewed at the point of writing

Claude normally gets it right (though sometimes horribly wrong) - this is easier to catch in autocomplete

That way they mostly work as designed and the burden on humans is completely manageable, plus you end up with a good understanding of the code generated. They make mistakes I'd say 30% of the time or so when autocompleting, which is significant (mistakes not necessarily being bugs, but ugly code, slow code, duplicate code or incorrect code).

Having the AI produce the majority of the code (in chats or with agents) takes lots of time to plan and babysit, and is harder to review, maintain and diagnose; it doesn't seem like much of a performance boost, unless you're producing code that is already in the training data and just want to ignore the licensing of the original code.


> This is a fascinating look into code generated by an LLM that is correct in one sense (passes tests) but doesn't meet requirements (painfully slow).

Why isn't requirements testing automated? Benchmarking the speed isn't rocket science. At worst a nightly build should run a benchmark and log it so you can find any anomalies.


I agree go’s error handling feels a bit clunky, though I prefer the local error handling and passing up the chain (if it were a bit more ergonomic) to exceptions, which IMO have a lot of other problems.

The main problems seem to me to be boilerplate and error types being so simplistic (interface just has a method returning a string). Boilerplate definitely seems solvable and a proper error interface too. I tend to use my own error type where I want more info (as in networking errors) but wish Go had an interface with at least error codes that everyone used and was used in the stdlib.

My rule of thumb on annotation is default to no, and add it at the top level. You’ll soon realise if you need more.

How would you fix it if given the chance?


> I agree go’s error handling feels a bit clunky

It should be the same handling as all other types. If it feels clunkier than any other type, you've not found a good design yet. Keep trying new ideas.


Well, two things feel clunky to me; the first is less serious but leads to lots of verbosity:

1. if err != nil is verbose and distracting and happens a lot. I'd prefer something like Ian Lance Taylor's suggestion, where you just return the error if non-nil, vs the standard boilerplate which has to return other values along with the error:

    // Proposed: returns error if non-nil, otherwise continue
    data, err := os.ReadFile(path) ?

    // Current situation
    data, err := os.ReadFile(path)
    if err != nil {
        return x, y, z, err
    }

The second is a problem of culture more than anything but the stdlib is to blame:

2. The errors pkg and error interface have very basic string-based errors. These are used throughout the stdlib and of course in a lot of Go code, so we are forced to interact with them. It also encourages people to string-match on errors to identify them. Yes, you can use your own error types and error interfaces, but this then creates interop problems, and inevitably many pkgs you use return the error interface. I use my own error types, but still have to use error a lot due to the stdlib etc. The wrapping they added and the annotation they encourage are also pretty horrible IMO, returning a bunch of concatted strings.

So these are not things that end users of the language can fix. Surely we can do better than this for error handling?


> if err != nil is verbose and distracting and happens a lot.

if err != nil is no more or less verbose than if x > y. You may have a point that Go could do branching better in general, but that isn't about errors specifically.

If there is something about errors that is happening a lot, then that still calls your design into question. Keep trying new ideas until it isn't happening a lot.

> Surely we can do better than this for error handling?

Surely we can do better for handling of all types? And in theory we can. In practice, it is like the story of generics in Go: Nobody smart enough to figure out a good solution wants to put in the work. Google eventually found a domain expert in generics to bring in as a contractor to come up with a design, but, even assuming Google is still willing to invest a lot of money in the new budget-tightening tech landscape, it is not clear who that person is in this case.

Ian Lance Taylor, as you mention, tried quite hard — with work spanning many years — in both cases to find a solution, which we should commend him for, but that type of design isn't really his primary wheelhouse.


> if err != nil is no more or less verbose than if x > y. You may have a point that Go could do branching better in general, but that isn't about errors specifically.

In practice though, there's not nearly as many cases where someone needs to repeat `if x > y { return x }` a bunch of times in the same function. Whether the issue is "about errors" specifically doesn't really change the relatively common view that it's an annoying pattern. It's not surprising that some people might be more interested in fixing the practical annoyance that they deal with every day even if it's not a solution to the general problem that no one has made progress on for over a decade.


> there's not nearly as many cases where someone needs to repeat `if x > y { return x }` a bunch of times

In my evaluation of a fairly large codebase, if err != nil makes up a small percentage of all if statements. I think you may have a point that branching isn't great, but I'm still not sure that focusing on errors isn't missing the forest for the trees.

> it's an annoying pattern.

But, again, if it is so annoying, why is it the pattern you are settling on? There are all kinds of options here, including exception handlers, which Go also supports and even uses for error handling in the standard library (e.g. encoding/json). If your design is bad, make it better.

> It's not surprising that some people might be more interested in fixing

If they were interested in fixing it, they'd have done so already. The Go team does listen and has made it clear they are looking for solutions. Perhaps you mean some people dream about someone else doing it for them? But, again, who is that person going to be?

Philip Wadler, the guy who they eventually found to come up with a viable generics approach, also literally invented monads. If there was ever someone who might have a chance of finding a solution in this case I dare say it is also him, but it is apparent that not even he is willing/able.


> In my evaluation of a fairly large codebase, if err != nil makes up a small percentage of all if statements. I think you may have a point that branching isn't great, but I'm still not sure that focusing on errors isn't missing the forest for the trees.

I don't agree with the premise that a frustrating pattern has to comprise a large percentage of the instances of the general syntax for people to want to change it. I can tell you do, but I don't think this is something that people will universally agree with, and I'd argue that telling people "you can't have the opinion you have because it doesn't make sense to me" isn't a very effective or useful statement.

> But, again, if it is so annoying, why is it the pattern you are settling on? There are all kinds of options here, including exception handlers, which Go also supports and even uses for error handling in the standard library (e.g. encoding/json). If your design is bad, make it better.

Empirically, people don't seem to think they have better options, or else they'd be using them. If you try to solve someone's problem by giving them a different tool, and they still say they have the problem even with that, you're probably not going to convince them by telling them "you're doing it bad".

> If they were interested in fixing it, they'd have done so already. The Go team does listen and has made it clear they are looking for solutions. Perhaps you mean some people dream about someone else doing it for them? But, again, who is that person going to be?

> Philip Wadler, the guy who they eventually found to come up with a viable generics approach, also literally invented monads. If there was ever someone who might have a chance of finding a solution in this case I dare say it is also him, but it is apparent that not even he is willing/able.

I'd argue there have been plenty of solutions for the specific problem that's being discussed here proposed that are rejected for not being general solutions to the problem that you're describing. My point is that there's a decent number of people who aren't satisfied with this, and would prefer that this is something solved in the specific case rather than the general for the exact reason you pointed out: it doesn't seem like anyone is willing or able to solve it in the general case.

My point is that I think a lot of people want a solution to a specific problem, and you don't want that problem solved unless it solves the general problem that the problem is a specific case of. There's nothing wrong with that, but your objections are mostly phrased as claiming that they don't actually have the problem they have, and I think that's kind of odd. It's totally fair to hold the opinion that solving the specific problem would not be a good idea, but telling people that they don't care about what they care about is just needlessly provocative.


> I can tell you do

I like nice things. You've identified a clear pain point that I agree with. If something can be better, why not make it better? "I am not able to think beyond the end of my nose, therefore we have to stop there" is a silly response.

> Empirically, people don't seem to think they have better options, or else they'd be using them.

I have read many Go codebases that do a great job with errors, without all the frustration you speak of. I've also read codebases where the authors created great messes and I know exactly what you're talking about. That isn't saying Go couldn't improve in any way, but it does say that design matters. If your design sucks, fix it. Don't design your code as if you are writing in some other language.

> My point is that there's a decent number of people who aren't satisfied with this

Including the Go team. Hence why Ian Lance Taylor (who isn't on the core team anymore, granted, but was at the time) went as far as to create a build of Go that exhibited the change he wanted to see. But, once it was tried, we learned it wasn't right.

Nobody has been able to find a design that actually works yet. Which is the same problem we had with generics. Everyone and their brother had half-assed proposals, but all of them fell down to actual use. So, again, who is going to be the person who is able to think about the bigger picture and get it right?

Philip Wadler may be that person. There is unlikely anyone else in the world with a more relevant background. But, if he has no interest in doing it, you can't exactly force him — can you? It is clearly not you, else you'd have done it already. It isn't me either. I am much too stupid for that kind of thing.


> "I am not able to think beyond the end of my nose, therefore we have to stop there" is a silly response.

This is exactly what I mean by needlessly provocative. You're almost directly saying that people who happen to care more about one specific case than you do are stupid or naive rather than having a different technical opinion than you. If you genuinely think that people who disagree with you are stupid or naive, then I don't understand why you'd bother trying to engage with them. If you think they aren't, but their ideas are, I don't think you're going to be effective at trying to educate them by talking down to them like this.

> Nobody has been able to find a design that actually works yet. Which is the same problem we had with generics. Everyone and their brother had half-assed proposals, but all of them fell down to actual use. So, again, who is going to be the person who is able to think about the bigger picture and get it right?

Whether a design "actually works" is dependent on what the actual thing it's trying to solve is, since a design that works for one problem might not solve another. This is still circular; you're defining the problem to be larger than what the proposals were trying to solve, so of course they didn't solve what you're looking for. You're obviously happier with nothing changing if it doesn't solve the general problem, which is a perfectly valid opinion, but you're talking in absolute terms as if anyone who disagrees with you is objectively wrong rather than having a subjectively different view on what the right tradeoff is.

> Philip Wadler may be that person. There is unlikely anyone else in the world with a more relevant background. But, if he has no interest in doing it, you can't exactly force him — can you? It is clearly not you, else you'd have done it already. It isn't me either. I am much too stupid for that kind of thing.

Once again, this is exactly the reason that I'd argue that it's reasonable to consider a solution to a specific subset of the problem than trying to solve it generally. If nobody is capable of solving a large problem, some people will want to solve a small one instead. The issue isn't that I can't personally see beyond the end of my nose, but that unless someone comes up with the solution, it's impossible to tell the difference between whether it's a few hundred yards outside my field of view or light-years away in another galaxy we'll never reach. I'd argue that there should be some threshold where after enough time, it's worth it to stop holding out for a perfect solution and accept one that only solves an immediate obvious problem, and further that we've reached that threshold. You can disagree with that, but condescending to people who don't have the same view as you isn't going to convince anyone, so I don't understand what the point of it is other than if you're just trying to feel smugly superior.


> This is exactly what I mean by needlessly provocative.

This doesn't make sense. You might be mistakenly anthropomorphizing HN?

> so of course they didn't solve what you're looking for.

What I am looking for is irrelevant. They straight up didn't solve the needs of Go. It was not me who rejected them, it was the Go community who rejected them, realizing that they won't work for anyone.

> Once again, this is exactly the reason that I'd argue that it's reasonable to consider a solution to a specific subset of the problem than trying to solve it generally.

The Go project is looking for a subset solution. Nobody knows, even within that subset, of how to make it work.

Which clearly includes you. Me too. Obviously if we had a solution, we'd already be using it. But who?

No matter how much you hope and pray, things cannot magically appear. Someone has to do it.


> This doesn't make sense. You might be mistakenly anthropomorphizing HN?

You're a human talking to other humans. Yes, you're online, but there are still a range of ways you can phrase things, some of which are more polite than others. I don't understand what doesn't make sense about it, although as always you're free to disagree.

> What I am looking for is irrelevant. They straight up didn't solve the needs of Go. It was not me who rejected them, it was the Go community who rejected them, realizing that they won't work for anyone.

Go is not a monolithic community, and for obvious reasons the people making decisions are a much smaller group than the community as a whole. Not everyone in the community will agree with every decision, and my impression is that there's a sizable group of people who would have been happier if one of the proposals had been merged. You're stating it as fact that this isn't the case, and obviously I'm not going to convince you otherwise, but it's clear you don't have any desire to provide any more context because you think your claim is self-evident.

> Which clearly includes you. Me too. Obviously if we had a solution, we'd already be using it. But who?

Sure, if you think that the people who make the language are infallibly able to both know and care about what's good for 100% of Go programmers and every proposal will somehow be either something that will strictly fit what with every single one of them wants or be bad for all of them (regardless of what they say they want). Alternately, maybe there's nuance where different people have competing technical views on what would make sense or disagreeing views on subjective matters, and the lack of a solution having been adopted doesn't mean that it's impossible for someone to think that anything that's been discussed would be a good idea without being objectively wrong. Given that you'd rather refer to any other viewpoint as akin to magical hopes and prayers, you obviously don't think that it's possible anyone else could have something reasonable to say on the issue if it disagrees with your opinions, so I guess we've both been wasting our time here.


> You're a human talking to other humans.

Okay. Let's put that to the test. Describe my human features. What do I look like, sound like?

> and my impression is that there's a sizable group of people who would have been happier if one of the proposals had been merged.

Fair enough. What do they say to the specific criticism that brought rejection?

> Sure, if you think that the people who make the language are infallibly able to both know and care about what's good for 100% of Go programmers

They satisfy 100% of Go programmers, but not all programmers. Those who aren't satisfied are already using another language or have forked Go to make it what they actually need. Even Google uses their own fork, funnily enough. If something doesn't work for you, you can't sensibly continue to use it.


In an HTTP server, top level means the handlers, is that so?

Yes I guess I do annotation in two places - initial error deep in libraries is annotated, this is passed back up to the initial handlers who log and respond and decide what to show users. Obviously that’s just a rule of thumb and doesn’t always apply.

Depends if it can be handled lower (with a retry or default data for example), if it can be it won’t be passed all the way up.

Generally though I haven’t personally found it useful to always annotate at every point in the call chain. So my default is not to annotate and if err return err.

What I like about errors instead of exceptions is they are boring and predictable and in the call signature so I wouldn’t want to lose that.


The database table is someone else’s data. That’s why this company exists and is explained in the article.

They don’t have the option to clean up the data.


Yep, pretty much. Feldera is the engine; we don't control what SQL people throw at us.

There are sometimes reasons this is harder in practice, for example let’s say the business or even third parties have access to this db directly and have hundreds of separate apps/services relying on this db (also an anti-pattern of course but not uncommon), that makes changing the db significantly harder.

Mistakes made early on and not corrected can snowball and lead to this kind of mess, which is very hard to back out of.

