> What you'll find is that your Go app will be extremely efficient and performant.
This is true, for sure. But my experience--and I've written plenty of Go, though I find it unpleasant to write and think about for all the usual reasons that make Go partisans roll their eyes, so I'd rather deal with it as artifacts than actually write it--has been watching lots of developers make mudball codebases in the process (while blogging very heavily about how great it is three weeks in, and rather less a year-plus in). A pursuit of faux-simplicity has led to what I see as reinventing a lot of Java-1.4-era design patterns (because the language itself is essentially that)--and we should remember that design patterns exist to address defects in tooling--that more expressive languages have managed to avoid. The general desire for smaller applications helps to some extent, but software has a tendency to grow, and I don't think Go gives you the tools to effectively manage and maintain that growth. Reasonable minds can differ, of course.
Past that, while I think it's an alright choice for company-internal deployable products--web servers, worker nodes, etc.--I have straight-up problems with it being the new devops tool of choice. Statically linking your SSL library makes you an asshole when it inevitably fails: now an application has to be regression-tested, so that new features and new bugs don't hose you, just because recompiling is the only way to patch Heartbleed 2.0. (Go also discourages program extensibility through components, favoring recompilation instead; the stuff Packer and Terraform do to provide "plugins"--which the same company's Vagrant just did with a `require`--is gross and, to my mind, completely foolish.)
So I wouldn't say it's hype, but I would say it's not all true, either.
> But my experience--and I've written plenty of Go, though I find it unpleasant to write and think about for all the usual reasons that Go partisans roll their eyes and so would rather deal with it as artifacts rather than actually writing it--has led to watching lots of developers make mudball codebases in the process (while blogging very heavily about how great it is three weeks in, and less a year-plus in).
I get what you're saying, and I agree with some of the criticism around the tooling, although I do think much of that owes to the age of the language. There are some specific things in the (sometimes rather cranky) replies to you that I think are incorrect but I'm not going to debate the merits of the language. Use it if you like, don't use it if you don't.
I will say that we've been using Go for nearly all back end services for nearly 3 years now, and we all still think it's pretty great. Our code base has also remained pretty clean, and in fact we're investing more heavily in Go going forward.
reinventing a lot of Java-1.4-era (because the language itself is essentially that) design patterns
Really? Java 1.4 had an extensive and consistent standard library, implicit interfaces, composition instead of inheritance, static builds, and built-in concurrency?
Go is not early Java, it's more like C 2.0, and I don't really see anything faux about the simplicity, it really is pretty simple, perhaps too simple for some tastes, but it's not pretending to be simple, nor is it simplistic. Which specific design patterns did you have in mind?
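To make the "implicit interfaces, composition instead of inheritance" point concrete, here's a minimal sketch; the type and method names (`Describer`, `Logger`, `Server`) are made up for illustration:

```go
package main

import "fmt"

// Any type with a Describe() string method satisfies this
// interface implicitly -- no "implements" declaration needed.
type Describer interface {
	Describe() string
}

// Composition via embedding rather than inheritance: Server
// reuses Logger's methods without a class hierarchy.
type Logger struct{ prefix string }

func (l Logger) Log(msg string) string { return l.prefix + msg }

type Server struct {
	Logger // embedded; Server gains Log as a promoted method
	addr   string
}

func (s Server) Describe() string { return "server at " + s.addr }

func main() {
	s := Server{Logger{"srv: "}, ":8080"}
	var d Describer = s // compiles because Server has Describe()
	fmt.Println(d.Describe())
	fmt.Println(s.Log("started"))
}
```

Java 1.4 had neither structural interface satisfaction nor method promotion via embedding, which is the substance of the comparison being disputed here.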
The comparison offered by the OP was Java 1.4 to Go 1.4.
I think the standard library point is debatable (see the infamous early Java Date class); Java encourages inheritance (so not composition instead of inheritance)--something Gosling later named his biggest regret--and concurrency only got improved tools in Java 2; I'm not sure the story was as good in early Java.
> Also, to be very clear, Java's threads ("green threads") were _originally_ M:N but they switched back to 1:1.
Java's "green threads" were N:1 threads, not M:N threads. While both are sometimes referred to as "green threads", they are very different things. (N:1 tends to give cheap concurrency but no parallelism, M:N tends to give cheap concurrency with as much parallelism as the hardware can handle, but with more overhead than 1:1 threads.)
I don't know if the programming model of threads and channels used by Haskell and Go strictly depends on the existence of M:N threading, but it sure is nicer to use than the model in Java/Rust.
It wasn't immediately obvious from scanning that link. Does it provide channel select? Because that's the hard part. Threadsafe queues are easy, select on them is hard.
It doesn't strictly depend on it, but it is strongly supported by M:N threading since that threading model decouples the supportable degree of concurrency from the supportable degree of parallelism (unlike 1:1), without abandoning parallelism (unlike N:1).
You can do Erlang-style concurrency with 1:1 or N:1 threading models (and there are libraries for languages whose many implementations are N:1 or 1:1 rather than M:N that do that), but it makes most sense when you have M:N.
> It doesn't strictly depend on it, but it is strongly supported by M:N threading since that threading model decouples the supportable degree of concurrency from the supportable degree of parallelism (unlike 1:1), without abandoning parallelism (unlike N:1).
A modern Linux kernel has excellent thread scalability, and you can customize thread stack sizes to improve memory usage. (Thread memory use is kind of independent of 1:1 vs. M:N, honestly; the thing that can improve scalability is relocatable stacks, which is really not the same thing.)
It's instructive to look at the history of M:N versus 1:1 in the days of NPTL and LinuxThreads and compare that to the history of programming languages. In that world it was received wisdom that M:N would be superior for the reasons always cited today, but at the end of the day 1:1 was found to be better in practice, because the Linux kernel is pretty darn good at scheduling and pretty darn fast at syscalls. Nobody advocates M:N anymore in the Linux world; it's universally agreed to be a dead end. Now, to be fair to Golang, relocatable stacks do change the equation somewhat, but (a) as argued above, I think that's really independent of 1:1 vs. M:N; (b) any sort of userspace threading won't get you to the performance of, say, nginx, as once you have any sort of stack per "thread" you've already lost.
> A modern Linux kernel has excellent thread scalability, and you can customize thread stack sizes to improve memory usage.
Cross-platform code can't count on always running on a modern Linux kernel, though.
But, sure, I'd think that M:N (which is never free) is going to be less likely to be worth the cost if you are specifically targeting an underlying platform that reduces the cost of large numbers of native threads and of native thread switching, so that the price paid for user-level threading in the runtime isn't buying you improvements in those areas.
Well, if what you're saying about not needing M:N to get thread scalability is true, I'm probably wrong. I really like being able to run tons of threads and write synchronous code, letting the Haskell RTS swap threads when I block.
My mental model, which probably comes from everyone complaining about Apache's thread-per-request model a few years ago, is that you basically either consume lots of resources with lots of threads, or you write ugly CPS-style code. I view Haskell/Go as giving the best of both worlds, but perhaps they aren't separate worlds after all.
I feel the same about Erlang processes. It's so much easier on my brain to write synchronous code that's preemptively executed and communicating by message passing. Wish Rust gave me that option, speed hit and all, but I understand why it doesn't.
The choice of 1:1 and M:N does not affect the programming model at all. It is simply an implementation detail. M:N scheduling is no more a requirement for CSP than Unix is for TCP.
Maybe I'm using terms incorrectly -- is there a good green thread library for Rust which supports the kind of Erlangy experience I've described? I looked around a couple of months ago but didn't find anything that seemed suitable.
In my experience, kernel threads have a cost that is sufficiently far from that of, for example, Erlang processes (400+ bytes each) that it changes the way you program with them.
In Erlang you don't think twice about spawning a million threads if that models your problem nicely. The same isn't true when you're dealing with kernel threads. (Right?) So I'm interested in green threads because when I have to start thinking about the cost of the threads I'm using, then I'm thinking less about how to most elegantly solve the problem and more about how to satisfy the architecture I'm programming on.
Now, if kernel threads are massively cheap these days and there's no problem spawning a million of them, then I need to take another look at that model.
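For what it's worth, here's what "spawn one per task without thinking twice" looks like in Go; the count is arbitrary, and each goroutine starts with a small growable stack of a few KB rather than a kernel thread's fixed-size stack:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countTo spawns n goroutines that each increment a shared
// counter, then waits for all of them to finish.
func countTo(n int) int64 {
	var wg sync.WaitGroup
	var total int64
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&total, 1)
		}()
	}
	wg.Wait()
	return total
}

func main() {
	// 100k to keep it quick; a million works too, just slower.
	fmt.Println(countTo(100000))
}
```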
So since you're talking about scalability into the millions of threads, I think what you actually want is stackless coroutines rather than M:N threading with separate user-level stacks. If you have 1M threads, even one page of stack for each will result in 4G of memory use. That's assuming no fragmentation or delayed reclamation from GC. Stacks, even when relocating, are too heavyweight for that kind of extreme concurrent load. With a stackless coroutine model, it's easier to reason about how much memory you're using per request; with a stack model, it's extremely dynamic, and compilers will readily sacrifice stack space for optimization behind your back (consider e.g. LICM).
Stackless coroutines are great--you can get to nginx levels of performance with them--but they aren't M:N threading as seen in Golang. Once you have a stack, as Erlang and Go do, you've already paid a large portion of the cost of 1:1 threading.
Coroutines are preemptible only at I/O boundaries or manual synchronization points. Those synchronization points could be inserted by the compiler, but if you do that you're back in goroutine land, which typically isn't better than 1:1. In particular, it seems quite difficult to achieve scalability to millions of threads with "true" preemption, which requires either stacks or aggressive CPS transformation.
A higher cost for context switching. Lots of people work on apps with lots of i/o, and there is a long history of coroutine/callback/green thread architectures beating the pants off of thread per request architectures.
No, the context switching overhead tends to be minimal if you're using a well-tuned kernel. You're doing a context switch to the kernel for I/O in the first place, and in a green thread model you have to do a userspace context switch in addition to the context switch the kernel imposes to get back into your scheduler.
Go is not very much like C. Pointers are very close to reified addresses, but much of their use in C comes from manual memory management; if you have to use GC, it's not clear what you gain from the semantic similarities with C. Furthermore, it is a prescriptive ecosystem that is quite restrictive, and the runtime is not very friendly to FFI. It is not a suitable systems language due to the GC and associated complexities (interfaces should not mandate GC roots).
It's actually more like a combination of ALGOL, Oberon, and practical solutions to problems. It's supposed to be simple, effective, easy to compile, and efficient at run-time. Like Wirth's languages. But with extras to help it go mass market, esp standard library. So, although I often poke that it's not-novel, I at least give credit that designers chose wisely in what to copy for their simple-to-use language.
Pretty much the opposite of the mess that was and is Java.
> Go also discourages program extensibility through components
Can you elaborate on this? Because in my experience with Go, I found that I had to make _far_ more components (assuming this means libraries?) than other languages.
> Statically linking your SSL library makes you an asshole when it inevitably fails and now an application has to be regression-tested so that new features and new bugs don't hose you just because that's the only way to upgrade Heartbleed 2.0
Not sure I understand that last bit. I get that statically linking could be bad(ish) because you would need to know to recompile with a fixed library, but what was that about new features?
With regards to your first question 'cause the sibling gets the second--I'm talking about a plugin architecture. Having to recompile an app to add a third-party feature is, in my world of "the user is more important than the developer," bogus; it means that I can't just use my OS packages if I need anything even remotely out of the ordinary. (nginx is the only thing I regularly use and might want to extend that has that misfeature, but it escapes that annoyance only because I've never needed to add a plugin Ubuntu doesn't by default.) Packer and Terraform attempt to get around this by shipping plugins as separate binaries. It is not the worst solution in the world, but I think it's an unpleasant, unsatisfying experience even when I do my best to separate that from just finding Go fugly in the first place.
It's an unrelated field to my day job of devops/server software, but I wrote a plugin system in .NET and it's super trivial to just suck down an assembly and expose its types to the core logic. I've written the same in Java, and Ruby basically makes it a breeze with a Gemfile and a `require`. It can be slower than Go in some cases (though for the overwhelming majority of tasks not remotely too slow for a tool, as opposed to a high-throughput server or whatever), but it doesn't suck to use, and it's for that reason that while I can understand Go for server stuff, I have a real beef with it intruding on my systems when I need to use it as a user.
I think the parent was saying that new releases would bring new features the end user may not want, in addition to something like a security fix for an included library.
With shared libs, you can keep using an old version if it works for you, while still updating ssl to a fixed version (assuming api compat).
This, precisely. If I have to compile YourApp 1.5 with YourTLSLib instead of just `apt-get upgrade`, I'm going to be flinging flaming karmic poop onto your karmic doorstep.
And not just new features--though SemVer is very often honored in the breach more than the observance--but breaking, sometimes undocumented changes (two Go projects, Packer and Terraform, both come to mind).
You are missing the point. "apt-get upgrade" would just get the latest SSL lib and fix the vulnerability. You wouldn't be required to upgrade the core app in question.
It is, to my mind, much less likely that somebody goes through and recompiles and publishes every statically linked application in the Ubuntu repos the day that the inevitable critical bug is unearthed than somebody recompiling and publishing libimportantthing3.
It is also much more likely that a "minor version bump" that happens to contain the dependency with the bug has regressions or new, untested-in-my-environment features that I must accept as the price of a Go application's upgrade unless I want to start playing with said application's vendored dependencies. Which I don't, which is why libimportantthing3 is a vastly superior choice for software I must use but do not want to adopt and care for.
> design patterns--and we should be reminded that design patterns exist to address defects in tooling
This is a really important point, and I think it's why a number of best-practices in Go are actually not best practices in other languages, and vice versa.
One of the explicit, top-level design goals of Go was to focus on creating top-notch tooling as part of the language. While it is not the only language that has tried to do this from the get-go, it's one of a very small number[0].
Because the tooling was a first-class design goal, a number of the problems that traditional design patterns were created to address are less problematic in Go code[1].
[0] Case-in-point: gofmt, which other languages are now adopting due to its success.
[1] Again, gofmt: there are a number of design patterns and style bikesheds around code format, but really, the most important thing is that there exist a uniform standard. gofmt provides that reliably and a way to enforce that as a pre-commit hook, which has done wonders for eliminating minor style variations that IMHO cause more problems than they solve.
Top-notch tooling? The debugger is nigh unusable. `go get` is a community joke. The compiler is fast mostly because it doesn't check for the things that better compilers do. The race detector is a symptom that not all is well in the CSP house. Other static analysis tools are third-party and limited, where they exist at all. Go really does remind me of JDK 1.4.
I'm sad that your post is downvoted, because (while I might have been a little more pleasant about it) I think you're generally on-point. The debugger is pretty poor, the compiler is pretty poor, there isn't much static analysis (and while the language's youth is an excuse, the profusion of tools for more semantically rich languages like Scala make me skeptical of the excuse).
I didn't mention Java 1.4 just for a lark; it's been long enough since I used it that I don't remember the ecosystem well but I do remember the style of coding being so brutally centered around type assertions and blind casts that Go really does remind me a lot of it.
Perhaps his post was downvoted because he made a strong assertion - "the compiler is fast because it doesn't check for things that other compilers do" without elaborating or providing any evidence. This is the first time that I've seen someone complain about deficiencies in the compiler too, so that adds a little skepticism.
Could you elaborate on the static analysis that you find lacking?
Actually JDK 1.4 was much, much better than Go for tooling.
The debugger was usable. You had realtime memory/CPU profiling tools like JProfiler. There was a workaround (JMXRI) to get JMX working, which is great for production monitoring. Code formatting tools were significantly better than gofmt. You had bug-finding static analysis tools, e.g. FindBugs, which integrated well into build tools.
And of course you had great IDEs like Eclipse (which was actually great back then), NetBeans, JDeveloper which allowed for refactoring, autocompletion and code assistance.
You're right, of course, but what I meant was what it's like to actually type the code in and read other people's code, more than the tooling that the rest of the rant is about. In many ways saying "Go is like JDK 1.4" is a compliment, as JDK 1.4 was older than Golang is now.
I'm working with Golang every day now and the ONLY reason I'm using it is its CSP concurrency model, which is still way behind what Clojure's core.async can do.
> Because the tooling was a first-class design goal, a number of the problems that traditional design patterns were created to address are less problematic in Go code[1].
> [1] Again: gofmt…
What does gofmt have to do with design patterns?
gofmt is about code formatting. Design patterns are about abstraction and expressiveness. A code formatting tool does nothing to address abstraction and expressiveness of the language.
> gofmt is about code formatting. Design patterns are about abstraction and expressiveness.
First, gofmt can do more than code formatting, such as applying simplifying code transformations that are semantically equivalent. Second, code formatting and design patterns are indeed related, because the way code is laid out in text is the way that design patterns are expressed. The layout of code affects how abstractions are presented, which affects which abstractions are easy to reason about and work with.
Finally, I picked gofmt because it's a pretty uncontroversial tool, and one that is so successful that even languages like Rust have adopted or are working something similar. I'm really not interested in starting another flamewar about why Go lacks $FEATURE and therefore $OTHER_LANG is better, because we have had enough of those on HN, don't you think?
> First, gofmt can do more than code formatting, such as applying simplifying code transformations that are semantically equivalent
Which also has nothing to do with design patterns. (At least not if you're limited to the extremely basic -r gofmt rewrite rules.)
> Second, code formatting and design patterns are indeed related, because the way code is laid out in text is the way that design patterns are expressed. The layout of code affects how abstractions are presented, which affects which abstractions are easy to reason about and work with.
No, I don't buy that a code formatting tool obviates the need for design patterns. How does gofmt replace the Visitor pattern (just to pick one at random from the GoF)?
> No, I don't buy that a code formatting tool obviates the need for design patterns. How does gofmt replace the Visitor pattern (just to pick one at random from the GoF)?
Asking if a code formatting tool "obviates the need for design patterns" is the wrong question, because it assumes that there is a need for design patterns in the first place.
I do agree with you that a code formatting tool is not capable of somehow fixing software design and architecture decisions. However, "design patterns" by their GoF meaning exist as to address shortcomings of the languages and tools used: most of their advice does not make sense as soon as you move away from Java into less object-oriented or less procedural languages. To talk about "the need for design patterns" as if they are some sort of mathematical truth is misleading and dangerous.
You do need design patterns in Go. A prime example: the interface that Sort() requires is basically the Strategy pattern. You need it in Go because the language is missing generics. There are many other examples.
In that message, Rob Pike misunderstands what the visitor pattern is for. Go's "type switch" is just chained Java instanceof. Java still benefits from the visitor pattern for (e.g.) compiler transformations, even though it has instanceof.
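For reference, here's the chained-dispatch shape being described; the `Node`/`eval` names are illustrative. The type switch works, but every such switch must be found and updated when a new node type is added--centralizing that dispatch is exactly what Visitor is for:

```go
package main

import "fmt"

type Node interface{}

type Num struct{ v int }
type Add struct{ l, r Node }

// eval dispatches on the concrete type with a type switch --
// functionally the same as chained instanceof checks in Java.
func eval(n Node) int {
	switch t := n.(type) {
	case Num:
		return t.v
	case Add:
		return eval(t.l) + eval(t.r)
	default:
		panic("unknown node type")
	}
}

func main() {
	fmt.Println(eval(Add{Num{1}, Add{Num{2}, Num{3}}}))
}
```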
This is one of the more troubling aspects of the parts of the Go community that I am exposed to--aggressive insularity. It's perhaps cyclical--in the past I've made similar criticisms of Node, and of Ruby before the hype died down and the more aggressively enthusiastic people calmed down or moved on--but it's kind of a pain in the rear at present. That said, Go's deification of a single individual is unique in my experience; of Matz, Guido, Bjarne, Gosling, or Rasmus, I don't know any who are quoted as if the quotation were by itself persuasive. What Rob Pike says about things is not necessarily correct--and sometimes it's not even accurate--but to Go advocates (as distinct from "Go users") it seems, strangely, to immediately enter a sort of canon to be trotted out at every opportunity.
All languages need design patterns; they are about designing how to implement certain functionality.
Languages differ in which design patterns have a native expression in the language, which can be reduced to simple reusable library code, and which require type-it-in-each-time fill-in-the-blanks code recipes.
The fact that Design Patterns were popularized in software development by the GoF book which, among other things, included code recipes to illustrate its patterns, and that the patterns in it tended to be ones for which the popular languages of the day required code recipes (lacking native implementations or the ability to provide general implementations as library code), has unfortunately associated the term with the code recipes, which aren't really the central point of understanding and using patterns.
> How does gofmt replace the Visitor pattern (just to pick one at random from the GoF)?
This would not be the first time that we have had a discussion on HN about this exact design pattern in Go, so it's hardly a random choice. And given past precedent, I think it's best if I end my half of the conversation here and propose to agree to disagree. To quote my previous post,
> I'm really not interested in starting another flamewar about why Go lacks $X and therefore $Y is better, because we have had enough of those on HN, don't you think?
> Statically linking your SSL library makes you an asshole...Heartbleed 2.0.
Go has its own SSL library, crypto/tls, which is not linked to any C libraries and wasn't affected by Heartbleed 1.0. You haven't written much Go if you don't know that.
The argument is specious anyway, there's nothing difficult about building a binary from an old version of your code or just upgrading in most cases. Deploying a new Go binary is always trivial as compared with upgrading, testing, and deploying Python, Ruby, or Java applications with their associated libraries and interpreters.
> Go has its own SSL library, crypto/tls, which is not linked to any C libraries and wasn't affected by Heartbleed 1.0. You haven't written much Go if you don't know that.
The parent poster obviously wasn't saying that crypto/tls was affected by Heartbleed specifically. It was a statement about the security implications of static linking.
I'm not even particularly worried about applications to which I don't have code access. I'm worried about getting off my OS's upgrade track because the minor version of the application I've verified to be usable and correct in my environment is no longer the one I'm going to have because a vendored dependency was upgraded during a release of the application itself rather than as an independent, dynamically linked library.