It's so sad that Java 8 had the chance to really fix the null problem, but gave us only the half-assed `java.util.Optional<T>`. Rather than implementing optional values at the language level, it's just another class tossed into the JRE.
This is perfectly legal code, where the optional wrapper itself is null:
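A minimal sketch of what that looks like (the class and method names are illustrative):

```java
import java.util.Optional;

class NullOptional {
    // Compiles without complaint: the Optional wrapper itself is null
    static Optional<String> find(String key) {
        return null;
    }

    public static void main(String[] args) {
        Optional<String> result = find("anything");
        System.out.println(result == null); // prints "true"
        // result.isPresent() would throw NullPointerException here
    }
}
```

The compiler has no way to rule this out, since `Optional` is an ordinary reference type like any other.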
What do you mean by had a chance? That chance isn't gone. This area is under investigation, and when we have a solution we like, we'll implement it. We can't address all issues at once.
Optional isn't half-assed because it was never envisioned as a general solution to the null problem. It's simply an interface that's useful in streams and other similar cases for method return values, and it does the job it was intended to do reasonably well. It's possible that a solution to nulls in the type system could have obviated the need for Optional, but I don't think that delaying lambdas and streams until such a solution was found would have been a better decision.
I have definitely come to the conclusion, especially after doing a lot of work in Typescript, that class Optional is a big mistake, whether from the JDK or other libraries that preceded it.
First, because of exactly the type of code that the parent commenter showed. I've actually seen this in production code (and shrieked). The fact is that without language-level support, you can end up with the worst of all worlds.
Second, like all things in Java along the lines of "why use 1 character when 10 will do?", the Optional syntax is verbose and annoying to use.
But, most importantly, the fundamental issue is that all classes are optional by default in Java (indeed, that's the problem at hand). Adding an Optional class doesn't really mean you can make any non-nullability assumptions about other classes, especially when using different libraries. The Optional class just added more complexity while simplifying very little.
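To the verbosity point, a sketch of the usual chaining style (the `User`/`Address` types are hypothetical):

```java
import java.util.Optional;

class Verbosity {
    record Address(String city) {}
    record User(Address address) {}

    // Optional-chaining style: wordy, and nothing stops a library from
    // handing you a null Optional anyway
    static String cityOf(User u) {
        return Optional.ofNullable(u)
                .map(User::address)
                .map(Address::city)
                .orElse("unknown");
    }
}
```

Compare `user?.address?.city ?? "unknown"` in TypeScript or Kotlin, where the optionality lives in the type system rather than in a wrapper class.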
> But, most importantly, the fundamental issue is that all classes are optional by default in Java (indeed, that's the problem at hand). Adding an Optional class doesn't really mean you can make any non-nullability assumptions about other classes, especially when using different libraries. The Optional class just added more complexity while simplifying very little.
When pron mentioned "this is actively being worked on", he meant exactly this problem. [1] [2] Java is currently working on changing the dynamics of the language such that classes can in fact be non-nullable. Last I checked in, this was the notion of `primitive` types being added in project Valhalla.
Optional just so happens to be one of the types in the JDK that will be converted over to being a `primitive` type.
Now, what that means for existing code is, I believe, what's being discussed. How do you handle
public Optional<Foo> bar() { return null; }
?
We'll see, it might be something like changing the code to something like this
public Optional<Foo>.val bar() { return Optional.empty(); }
which would guarantee that the optional type returned is in fact never null.
I miss the pre-Rust days. When Haskell was HN's cool pet language, and you had to obtain at least some vague familiarity with the term "monad" to understand half the discussion threads here.
I'm sorry, but I don't think you understand the purpose that "Optional" was intended to serve. And are unduly dismissive simply because it does not serve some larger purpose that was not intended.
I mean, having a monadic API is nice if it's strictly enforced by the ecosystem/culture/type system. But with Optional potentially being null itself, it barely reduces the need for defensive programming and might in fact be worse. For example, when I used a lot of Scala at a past job, Java libraries were scary to use unless you defensively wrapped Java functions with Try/Option/etc.
Whereas with Haskell/Rust/OCaml/etc. you can largely trust type signatures to properly encode the existence of nullability or failure.
> I'm sorry, but I don't think you understand the purpose that "Optional" was intended to serve.
Lol, your response is the equivalent of the old SNL "IT guy" skits: "Silly programmer peasant, you don't even know what a monad is!"
Regardless of what you may think Optional was intended to serve, I have seen its use across large and varied code bases, and it simply does not make the cognitive burden easier for developers.
Again, look at the example given by the GP comment. Of course Optional isn't intended to be used that way. But the fact is, as the compiler doesn't prohibit it, it WILL get used that way, and it doesn't provide strong guarantees about the state of the variable.
> But, most importantly, the fundamental issue is that all classes are optional by default in Java (indeed, that's the problem at hand). Adding an Optional class doesn't really mean you can make any non-nullability assumptions about other classes, especially when using different libraries.
Perhaps it's just me, but I don't equate optional with nullable. An optional value is just the designer specifying that you may or may not specify that value, while nullables are objects which may or may not have been initialized yet.
Even though nullables have been used and abused to represent optional values, I'm not sure it's a good idea to conflate both. It would be like in C++ equating shared pointers with optional types.
> Perhaps it's just me, but I don't equate optional with nullable.
But the main usage of Optional and similar types in mainstream languages is exactly that - making the potential nullness (is that a word?) of the value explicit in the type.
It's like when people say Java had the chance to do generics right (like C#) and then didn't
Yeah technically tomorrow morning Java could fix it, but there have been kingdoms built on the current situation. C# took its lumps years back on breaking things and so there were fewer kingdoms to demolish. And if you fix it, it'll be years before it trickles down to a large portion of devs who don't get to work at the bleeding edge, or even near the edge.
First of all, Java's generics are already superior to C#'s. We've exchanged the minor inconvenience of not being able to have overloads that erase to the same type for the ability to support multiple variance strategies, rather than one that's baked into the runtime. That's why Java has Clojure and Kotlin and Scala and Ruby running on top of it with such great interop. And, as it turns out, when specialisation is really important -- for Valhalla's value types -- then it's invariant and doesn't require demolishing kingdoms. We've also added virtual threads years after C# had async/await, and now they're the ones stuck with that inferior solution, which will remain in the ecosystem even if and when they add user-mode threads.
But in this case, I don't see what additional kingdoms there would be to demolish. No Java code employs nullability types now just as it didn't in 2014. Optional does the limited job it needs to do rather well, and will become even better when we extend pattern matching, so there would be no need to demolish it even if and when we have some solution to nulls. As you can see here (https://docs.oracle.com/en/java/javase/19/docs/api/java.base...) it's used sparingly and in specific situations, definitely not as a general null replacement.
"Java generics is superior to C#" - That's a first for me. C# generics don't have to do boxing of types like java does, and overall have much better type safety between library boundaries, etc...
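The erasure/boxing difference is directly observable; a small sketch:

```java
import java.util.List;

class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = List.of("a");
        List<Integer> ints = List.of(1);   // Integer is boxed; there is no List<int>
        // Type arguments are erased, so both lists share one runtime class:
        System.out.println(strings.getClass() == ints.getClass()); // prints "true"
    }
}
```

In C#, `List<string>` and `List<int>` are distinct reified types at runtime, and `List<int>` stores unboxed ints.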
The guys from Scala.NET did mention .NET generics as one of the reasons they gave up on porting Scala to .NET.
That is also a reason why .NET needed the whole effort to create DLR and the dynamic keyword, while Java only needed to add one additional bytecode invokedynamic.
> First of all, Java's generics are already superior to C#'s. We've exchanged the minor inconvenience of not being able to have overloads that erase to the same type with the ability to support multiple variance strategies rather than one that's baked into the runtime.
Working on the JDK I'm sure you're aware just how revisionist that take is: we got the version of generics that landed because of backwards compatibility concerns.
One of the key selling points going from Pizza to GJ was that the result ran on then current JVM targets without modification.
> C1) Upward compatibility with existing code. Pre-existing code must work on the new system. This implies not only upward compatibility of the class file format, but also interoperability of old applications with parameterized versions of pre-existing libraries, in particular those used in the platform library and in standard extensions.
> C2) Upward source compatibility. It should be possible to compile essentially all existing Java language programs with the new system.
Interop came at the expense of Java (not the JVM) having a worse generics story, since the incredibly onerous constraint of "works with parameterized versions of pre-existing libraries" forced other languages to then go and reinvent the exact same wheel in different ways.
-
> We've also added virtual threads years after C# had async/await, and now they're the ones stuck with that inferior solution, which will remain in the ecosystem even if and when they add user-mode threads.
"Kingdoms to demolish" refers to changing generics, but there are "kingdoms" that have been built which wouldn't be demolished: they still represent a lot of throwaway work from putting off attacking the problem. Case in point, the post we're commenting under.
Nullability specifically is much more subject to the part you left out: "And if you fix it, it'll be years before it trickles down to a large portion of devs who don't get to work at the bleeding edge, or even near the edge."
-
At the end of the day it's always been a tradeoff between backwards compatibility, timeliness, and correctness, but when I switch between Java specifically and C# specifically (not the JVM and the CLR) I find C# did an amazing job providing value upfront rather than hand-wringing, and is not nearly as badly off for it as this comment would imply.
I mean you're looking down on async/await, but it came out a decade ago while Loom is just landing. I guess you're trying to paint that decade of development as "weighing down" whatever .NET comes out with because it'll still exist, but that seems like a stretch at best.
Java had no async/await story and will get a very good alternative. C# had an async/await story and will get an improved alternative (it's already in the experimentation stage according to the .NET team). I'd just much rather have the latter?
> we got the version of generics that landed because of backwards compatibility concerns.
We got the version of generics because of the need for two languages with different variance strategies -- Java 1.4 and Java 5 -- to be compatible, and so precluded baking a particular variance strategy into the runtime. It is true that the goal at the time wasn't to support, say, Scala or Clojure or Ruby specifically, but why does it matter?
> forced other languages to then go and reinvent the exact same wheel in different ways.
I don't think so. Both untyped languages like Clojure and Ruby, as well as existing typed languages such as Haskell, have been ported to the Java platform. Erasure, BTW, is a pretty standard strategy. Even Haskell does not distinguish at runtime between List Int and List String.
> Nullability specifically is much more subject to the part you left out: "And if you fix it, it'll be years before it trickles down to a large portion of devs who don't get to work at the bleeding edge, or even near the edge."
So what? Java is likely to be one of the world's most popular languages 20 years from now. Who would care if we wait to do things right, especially as that has worked very well for Java so far. Who cares now that lambdas arrived in Java 8 and not Java 5?
As for C# vs Java, there's no doubt some developers prefer one over the other -- for some it's Java, for others C# -- but I see absolutely no reason for Java to adopt C#'s strategy. Even if you don't think it's any worse, it certainly hasn't proven to be better. Those who prefer C#'s approach already use either it or Kotlin, so we've got them covered on the Java platform, too.
Yes, which is why in Java the cost isn't free but it's still rather low. Of course, the calculus has changed due to memory hierarchies, which is why Valhalla brings with it specialisation for value types (that outweighs the benefits of erasure for those types, but they're not variant, so it doesn't require baking a variance model into the VM).
> First of all, Java's generics are already superior to C#'s.
That's... highly controversial to say the least. You didn't just lose the ability to overload List<String> and List<Integer>. You've also lost run-time type information and type-safety when doing reflection. And reflection-based metaprogramming cannot be ignored in Java (and to a lesser degree in Kotlin): So much is based on it!
It also makes interfacing with primitives (especially arrays of primitives) quite messy. And you've also lost some possibilities for JIT optimizations, but I'm not sure if C# does them anyway and how much they really matter - so that part may be minor indeed.
Allowing different variance strategies could be a boon for a language-agnostic runtime, but, as other comments mentioned, that's just rewriting history. The JVM wasn't built to be a language-agnostic runtime. The truth is thoroughly documented. It started with a new Java-like language, Pizza, that had to run on the then-current JVM, and continued with a fork of Java called Generic Java. These attempts were independent at first, so they could not change the JVM. Type erasure was chosen because it was the best fit for the existing Java model and had the best performance with the JVM as-it-was[1].
In hindsight it turned out to make writing JVM languages with different variance strategies easier, but it's not a superior design, just a coincidence. Although Martin Odersky was behind both Pizza and Scala, he did not mention variance as a concern back in the 1990s when he chose type erasure. Let's not try to rationalize a JVM-compatibility decision made by a team outside Sun as some great insight by "the brilliant engineers at Sun".
In the end of the day, you can do type erasure just as well on .Net when you need it, the same way you'd do it in Java. Nothing prevents a CLR language from compiling List<T> as List<Object>.
> We've also added virtual threads years after C# had async/await, and now they're the ones stuck with that inferior solution, which will remain in the ecosystem even if and when they add user-mode threads.
Virtual threads are not inherently superior or inferior to the async/await model. It's the old stackful-vs-stackless coroutine, or implicit-vs-explicit concurrency, debate. I've already written about it multiple times in the past[2]:
Stackful pros:
* No "colored functions": I can write asynchronous I/O code the same way I write multi-threaded code or non-blocking code. It's definitely more ergonomic, but also means your code can hold more surprises for you. Not everybody thinks 'Colored Functions' are an absolute advantage, otherwise we wouldn't have Effect Typing.
* Fewer GC allocations: the stack is [re-]allocated in bulk every time a lightweight thread starts (go statement) or whenever it needs to grow. Stackless coroutines typically allocate a smaller object on every call to an async function.
Stackless pros:
* Clear yield points: you always know the points where your coroutine could suspend and control would be moved to another coroutine: whenever you see an await statement. This is why function colors are needed - to be explicit about the control flow.
* Less memory waste and smaller overall allocations: the "compile-time stacks" generated by stackless coroutines are perfectly efficient. There is no unnecessary memory allocated.
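The two styles can be contrasted in Java itself, a sketch assuming JDK 21+ for virtual threads (names are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;

class TwoStyles {
    // Stackful style (virtual threads): ordinary blocking code;
    // suspension happens transparently wherever the thread parks
    static String blockingStyle() throws Exception {
        try (var pool = Executors.newVirtualThreadPerTaskExecutor()) {
            return pool.submit(() -> {
                Thread.sleep(5); // parks the virtual thread, frees the carrier
                return "done";
            }).get();
        }
    }

    // Stackless style (CompletableFuture): each stage is an explicit
    // yield point and allocates a continuation object
    static CompletableFuture<String> asyncStyle() {
        return CompletableFuture
                .supplyAsync(() -> "do")
                .thenApply(s -> s + "ne");
    }
}
```

Note how the blocking version has no visible suspension points, while the staged version marks every one of them.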
I personally have a strong preference for explicitly marking concurrency, just like I would love to distinguish pure function and functions that have outside effects: I think Effect Typing is generally a good thing. Java also went its own way with explicit and mandatory effect marking when it comes to checked exceptions. Unfortunately, it never really got composability right, and using checked exceptions is simply too verbose - but Rust also went for explicit error type signatures, and its solution is generally well-received.
> And reflection-based metaprogramming cannot be ignored in Java (and to a lesser degree in Kotlin): So much is based on it!
Its prevalence has been going down, and we're trying to encourage that trend.
> Allowing different variance strategies could be a boon for a language-agonstic runtime, but, as other comments mentioned, that's just rewriting history.
I don't see how this matters, and I never claimed that supporting, say, Clojure was the intent, but the design was very specifically to allow for multiple variance strategies (because Java 1.4 and Java 5 were two languages with different variance strategies).
> Let's not try to rationalize a JVM-compatibility decision that was done by a team outside Sun as some great insight by "the brilliant engineers at Sun".
I don't think I was doing that, but on the other hand let's not discount three decades and hundreds of decisions that have made and kept Java one of the biggest and longest-lived successes in software history. Call it lucky if you want, but they/we have been consistently lucky for three decades now. During that time, MS have made huge breaking changes to their platform at least three times.
> Nothing prevents a CLR language from compiling List<T> as List<Object>.
And Java could specialise just as Ceylon did. Erasure ended up working so well for Java because the main language on the platform did it. It is that, not some hypothetical ability, that was able to support Clojure and Kotlin well.
> Virtual threads are not inherently superior or inferior to the async/await mode. It's the old stackful coroutine vs. stackless coroutine or implicit-vs-explicit concurrency debate.
Actually, it isn't. User-mode threads might not be superior to syntactic stackless coroutines in every language and on every platform (and certainly not for every use-case), but they are in Java (and could be in .NET). For one, we are now close to achieving zero memory waste in virtual threads compared to async (will be released in a year or so); in fact, we already allocate fewer objects than Kotlin's stackless coroutines [1]. For another, while it is true that the less composable cooperative model (i.e. "clear scheduling points") has some advantages in languages like JS (where a non-cooperative model would break pretty much all existing code), Java and C# already have the non-cooperative model built into the language. It's a question of having two semantically similar yet syntactically incompatible concurrency models or just one.
> Not everybody thinks 'Colored Functions' are an absolute advantage, otherwise we wouldn't have Effect Typing.
I'm not a big believer in effect systems in general, for the simple reason that they've yet to show significant tangible bottom-line benefits (although we'll wait and see). But they're clearly wrong for coroutines in languages that already allow side-effects, like Java. The semantics -- in the precise Hoare-triplet sense -- are identical to regular mutation, and they're less composable. So they introduce an additional incompatible syntactic world with identical semantics, which makes them, at best, a redundant overhead.
> Unfortunately, it never really got composability right, and using checked exceptions is simply too verbose
You are absolutely correct there, but I believe we can fix that, and quite elegantly.
Now, I don't want to give the impression that I think Java is a perfect language or platform; I don't. Mistakes were made along the way, especially early on. But I do think we've managed to do better than others. Python is as popular as Java or perhaps even more so, but it didn't manage to preserve compatibility; .NET never achieved Java's popularity while also failing to preserve compatibility (and several times). The only other languages with a similar backward compatibility story alongside long-lived super-popularity are C and JS, both of which are famous for having a small or nonexistent standard library. So things could have hypothetically been better, but to date no one has actually managed to do better, or even as well.
[1]: The perfect memory efficiency of stackless coroutines is only achievable when you disallow virtual calls and recursion -- as Rust does, although it pays for that dearly -- but neither C# nor Kotlin can or want to do that.
> Optional isn't half-assed because it was never envisioned as a general solution to the null problem. It's simply an interface that's useful in streams and other similar cases for method return values, and it does the job it was intended to do reasonably well.
Regardless of the intent, once it got into the hands of enterprisey java devs, it immediately got used in two ways:
Tri-value nulls: treating null, None and Some in different ways.
Annotation hell: spamming nullability annotations like @Nonnull Optional haphazardly everywhere with no semblance of soundness in sight.
When actually used as intended (by the 90th-percentile coders on the team), it just bludgeons the GC to death, because the JIT does not always do the right thing and elide the extra heap indirections.
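The "tri-value null" abuse looks something like this sketch (the method is illustrative):

```java
import java.util.Optional;

class TriState {
    // Anti-pattern: null, empty, and present treated as three distinct states
    static String describe(Optional<String> value) {
        if (value == null)   return "never set";        // state 1: raw null
        if (value.isEmpty()) return "explicitly empty"; // state 2: None
        return "present: " + value.get();               // state 3: Some
    }
}
```

Once callers start distinguishing the null wrapper from the empty wrapper, every consumer of the API has to defend against both.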
If that happens, there would be language level nullability types that may supersede Optional if they indeed end up solving a strictly more general problem. It's not unheard of to have more general solutions supersede older, more specific ones. For example, the newly added pattern-matching features largely obviate the need for visitors, which are a pretty common pattern in libraries; or virtual threads, which largely obviate the need for many asynchronous APIs (and there are opposite examples, such as j.u.c locks, which followed the language's locks). That's the nature of evolution: sometimes there are surviving vestiges of the past.
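The visitor-replacement point can be sketched with an exhaustive switch over a sealed hierarchy (JDK 21+; the Shape types are illustrative):

```java
// A sealed hierarchy that would traditionally need a Visitor interface
sealed interface Shape permits Circle, Square {}
record Circle(double radius) implements Shape {}
record Square(double side) implements Shape {}

class Area {
    // Pattern matching: no visitor interface, no accept() methods,
    // and the compiler verifies the switch covers every subtype
    static double of(Shape s) {
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Square q -> q.side() * q.side();
        };
    }
}
```

Adding a new `Shape` subtype makes every such switch a compile error until it's handled, which is the exhaustiveness guarantee visitors were used to simulate.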
Yeah, I've heard this kind of stuff in the past. "Just copy async/await from C#, what are you waiting for?". And now C# is stuck with that thing, while we'll soon be getting the proper solution.
Can you point a couple of cases where Java got a better version?
Generics?
Value Types?
Lambdas?
Null Safety?
Async/Await?
Also, do you just assign zero value to everything those features provided in the interim? Async/await has provided ten years of value, and I'm guessing you mean Project Loom... so how much better than async/await does Loom have to be to justify an entire decade of just straight-up missing the value-add entirely, let alone having a better one?
-
I can't reply to comments anymore (thanks dang) but yeah...
Generics at the Java layer are worse; generics at the JVM layer are convenient if you're going to write a language targeting the JVM. That's just an artifact of the fact that the JSR for generics defined backwards compatibility as its top goal, not because they were trying to enable a friendlier runtime for languages that didn't exist yet.
I already addressed Project Loom: it's coming a decade after async/await, after a decade of a strictly worse concurrency story, so unless you're assigning zero value to a decade of having a better situation (which I'd say is disingenuous), I don't think it's a great example.
Also definitely not sure how Java Records are better than C#?
I can! Generics, virtual threads, and records. Java is unburdened with the prevalent and obsolete features of async/await, properties, and its generics allow a flourishing ecosystem in exchange for a minor inconvenience.
> Java is unburdened with the prevalent and obsolete features of async/await, properties, and its generics allow a flourishing ecosystem in exchange for a minor inconvenience.
I don't quite get the "unburdened" part. C# now has records as well[1], and it always had List<Object>. It may also get stackful coroutines as an alternative to stackless coroutines.
You could say that C# now has to deal with a lot of legacy code that's using properties and async/await and cannot be replaced with "superior" records and stackful coroutines[2]. That's all true, but Java hasn't been frozen in a vacuum for the last 20 and 10 years (respectively).
While C# programmers created mountains of legacy code with properties, Java developers churned out mountains of legacy code using "Beans" with getters and setters, auto-generated by the IDE or Lombok. That's strictly worse.
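The two generations side by side (a sketch; `PointBean` is the IDE/Lombok-era style the parent describes):

```java
// Classic "Bean": mutable, boilerplate getters/setters, no value equality
class PointBean {
    private int x;
    private int y;
    public int getX() { return x; }
    public void setX(int x) { this.x = x; }
    public int getY() { return y; }
    public void setY(int y) { this.y = y; }
}

// JDK 16+ record: immutable, with equals/hashCode/toString generated
record Point(int x, int y) {}
```

Two `PointBean`s with the same coordinates are not `equals()`; two `Point`s are, with no code written.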
At the same time during the last 10 years, Java programmers that needed Async I/O didn't just sit around and wait for Project Loom. They wrote callback hells, and then moved on to CompletableFuture or a reactive programming style. And all this code tends to be a lot messier than Async/Await.
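That interim style looks something like this sketch (the names and the pretend lookup are illustrative):

```java
import java.util.concurrent.CompletableFuture;

class PreLoom {
    // The pre-Loom async style: explicit stage chaining, with error
    // handling bolted on per stage rather than plain try/catch
    static CompletableFuture<Integer> fetchAndMeasure(String key) {
        return CompletableFuture
                .supplyAsync(() -> key + "-value") // stand-in for a remote lookup
                .thenApply(String::length)
                .exceptionally(e -> -1);
    }
}
```

With virtual threads the same logic becomes straight-line blocking code in an ordinary try/catch.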
In short, real world Java developers are just as burdened as C# developers by legacy solutions that predate virtual threads and records, and the Java legacy solutions are usually quite a lot uglier.
[1] Although I dislike their decision to permit mutability, their records are at least useful right now, since we don't have to wait another 5 years to get `with` expressions.
[2] I completely agree with you that immutable equality-based types are superior, and that mutable data types probably shouldn't be baked in the language. But I disagree with the stance that stackful coroutines are superior.
Then let me explain. Some programmers (~10% by my educated guess) like languages with lots of features, but most don't; really don't. It seems that being able to serve as a first language is a prerequisite for being super-popular (and Java, Python and JS serve in that role), and having more features is a big hindrance to that. We are more concerned with Java having too many features than too few (and teachers complain about that).
Every additional significant language feature has a cost, and often for every experienced developer it gains, it loses several beginners. The trick is balancing the benefit and the cost, and in Java we do this by adding features slowly, and trying to pick only the ones that give a lot of bang for the buck (which means we often add features relatively late, after they've proven themselves elsewhere). This strategy -- originally laid out by James Gosling in the mid-90s -- works, and no other strategy has so far proven to work better.
> That's strictly worse.
It really isn't for languages seeking to be at the top for the reason I just explained. But even if you think there's some subjectivity there, the objective fact is that no language with a strategy of adding features quickly has performed better in the market than Java. The only languages in the same tier have even fewer features. So there is no objectively measurable metric by which it is worse: languages that add more features more quickly don't fare better (in fact, they seem to fare worse), and no significant differences in productivity or correctness have been found (despite trying).
> and the Java legacy solutions are usually quite a lot uglier
I think they're usually quite a lot nicer, but people have different preferences.
> But I disagree with the stance that stackful coroutines are superior.
Okay. There aren't many things all developers agree on.
> Then let me explain. Some programmers (~10% by my educated guess) like languages with lots of features, but most don't; really don't. It seems that being able to serve as a first language is a prerequisite for being super-popular (and Java, Python and JS serve in that role), and having more features is a big hindrance to that. We are more concerned with Java having too many features than too few (and teachers complain about that).
Programmers want to deliver working code. If that code needs to be asynchronous and handle data, they'll use whatever is easy, acceptable and available to use with their language. That "feature" may be provided by the language, by a compiler, by IDE code generation, by a pre-processor, by a post-processor, by runtime code generation or by a library that is using some idioms.
Most programmers (my educated guess is 80%) don't care very much how they get that done, as long as they can quickly learn it, get stuff done, and keep their managers and co-workers happy. Programmers need to go through a learning curve to learn new features, whether they are library features or language features. A language can "cheat" a little by keeping itself lean and mean and delegating data class creation to IDEs and annotation processors, while letting libraries implement concurrency and even coroutines (see Quasar for Java and Greenlet for Python).
This doesn't remove any complexity for the programmer dealing with these languages. Sometimes it can make the programmer's life even more miserable, since there are multiple competing methods to choose from.
Having led programmer teams working in both C# and Java, I can clearly say that C# programmers are less confused by data classes and concurrency issues. I wouldn't say C# is perfect (C# properties are messy and their async/await implementation is pioneering, but somewhat confusing). I don't like all features that get added to C# either, and I sometimes think Microsoft could exhibit more care here. But I feel less "burdened" when we have to use C# (or Kotlin, Go or to certain degree Rust) for concurrency and asynchronous I/O rather than using Java and JavaScript, which have way too many historical ways of doing the same thing.
I think you're thinking too much from the perspective of a _language developer_. Language developers like less complexity in their languages, and that's totally understandable. But developers don't care so much about that, and despite what you say, I don't think there's good evidence that this is what makes a language extremely successful. The process of a language becoming successful is a lot more complicated than that and I doubt if it can be easily described. After all, the historical data we've got is too sparse. But if we look at languages that did become successful before and after Java, most of them were quite complex compared to their competitors:
PL/I was considered extremely complex and bloated in its time, but it became popular nevertheless, since it had strong corporate backing from IBM. The very same company, incidentally, was one of the largest corporate backers of Java from the mid-1990s.
While C was arguably almost as lean as Pascal (if not as easy to learn), it barely got a foothold in the world where Java later won so decisively: enterprise line-of-business software. These programs tended to be written mostly in COBOL and PL/I on mainframes, and on the PC developers used whatever was available, including assembly. Smalltalk (not a "simple" language for its time, considering the _entire_ package you'd get) also had a short heyday. But by the time Java came out, the world had settled on C++. Sure, C++ back then was not yet the shining paragon of feature creep that it is today, but it was still a rather complex language. The various dialects of object-oriented Pascal were simpler. Modula-3 was simpler. Smalltalk was probably simpler too. But C++ won.
In the defense industry during the 1980s, Ada was queen. The reasons for this are really simple: this was often what the DoD mandated. Just like many corporate IT departments mandated Java years later. Anecdotally, all Ada programmers I met loved the language back then. They didn't lament the massive amount of features (for the time).
Embedded languages are also an interesting case. Small languages like Lua are objectively easier to embed, and you'd think they'd win decisively. Lua is quite popular as an embedded language, but it seems to be on par with Python and losing to JavaScript.
Why did JavaScript become so popular? It's certainly not a simple language. There are many complex features that are not used very often, like proxies, generators and all the various Object functions. There are often several layers of historical cruft or competing solution strategies, such as prototypes vs. Object.create() vs. classes. Or "Error(...)" vs. "new Error(...)", or var vs. let. Or this-binding arrow functions vs. traditional anonymous functions and Function.prototype.bind(). Equality rules and type coercion are famously bonkers[1]. Even from its very inception, JavaScript was more complex than necessary, and made some decisions that kept annoying generations of developers (e.g. vars and prototypes). And it still won, since Brendan Eich got it included in Netscape and Microsoft rushed to copy it with (the frustratingly, randomly incompatible) JScript. JavaScript never made developers happy and still has a certain type of fatigue named after it.
> the objective fact is that no language with a strategy of adding features quickly has performed better in the market than Java
This is deductive reasoning from a single datum, where there are many other social and technological factors that could determine language success. The fact is that before Java, the most successful languages - especially in the same industry Java rules most strongly today (business programs) - had been rather complex: COBOL, PL/I, C++. And to the best of my knowledge, they all had a strategy of indiscriminately adding features at the language level.
I fail to see any strong evidence for your claim, while there is some anecdotal evidence to the contrary.
>> But I disagree with the stance that stackful coroutines are superior.
> Okay. There aren't many things all developers agree on.
That's great, but then it's just opinion. I personally prefer stackless coroutines, but I won't go ahead and say they're strictly superior to stackful coroutines. There's a reason many languages still chose to go with stackless coroutines even though stackful coroutines are far from new. Swift added async/await quite recently, and I'm pretty sure Apple was fully aware of the work done on Project Loom, Go, Lua, Erlang and many others. Microsoft could very well have chosen stackful coroutines for C# and implemented them to be transparent, like virtual threads. gevent was doing the same thing (in a more primitive way) in Python, after all. It's not like stackful coroutines are a new kind of technology we had to sit and wait for years to research properly.
I understand your perspective though I disagree with some of the observations, but I still don't see why any of this should convince us to switch strategies for Java given the following two facts: no other strategy has proven to work better, and claims about added productivity/correctness could not be established. So this all comes down to you liking different things, but that some developers want the opposite of others is a given. No matter what we do, there will be people trying to get us to do the opposite. So even coming to this with a blank slate and knowing nothing about programming, it would be irrational for us to change our strategy.
> This is deductive reasoning from a single datum
I don't think that's true, but even if it were, we don't need evidence to support us not changing our strategy; we need evidence to convince us that we should.
> especially in the same industry Java rules most strongly today (business programs) - have been rather complex: COBOL, PL/I, C++
I disagree. C was still more popular than C++ in the nineties, and compared to the average of its time, COBOL was simpler than C# is today. But even that doesn't matter. Teachers are telling us that if we increased the pace, even more of them would switch to teaching Python.
> It's not like stackful coroutines are a new kind of tecnology we had to sit and wait for years to research properly.
It's not as simple as that. Implementing user-mode threads is drastically more difficult than async/await (and requires deep compiler-backend and runtime changes). Making them competitively efficient also requires a good GC. The .NET team said they weren't sure that adding user-mode threads in a compatible way is even possible in a large existing ecosystem, and they have some bigger challenges than Java (such as pointers into the stack). Languages that target LLVM have additional challenges. The idea is very well known, but good implementations are much harder than async/await, especially in established languages.
> I understand your perspective though I disagree with some of the observations, but I still don't see why any of this should convince us to switch strategies for Java [...]
I'm not really arguing for that. I'm really not advocating any school of language design here. I'm just saying that from the point of view of _the users_ the actual purity of the core language matters less than the state of the ecosystem.
I'm perfectly happy with the current course of Java, especially post Java 8. It just isn't for me, and I wouldn't use it or recommend using it for most types of software I'm dealing with. I can say exactly the same for C++.
There's a place for conservative and highly selective languages like Java, C and Go. As a CTO or a tech lead I generally tend to avoid these languages and I don't think they work great for my organization, but they might work out for other people. It's good to have choice. And it goes both ways: if you need to stay within the JVM and you're not satisfied with Java, you've got Scala, Kotlin and Clojure. Between them, they cover a lot of ground and have very different strategies for adding features.
The most conservative users of Java have their choice though. They don't care very much about the absolute _size_ of the language, but they very much dislike change. And for these businesses, the current pace Java is moving at is still too fast. That's why we still have a great deal of businesses out there running Java 8 with outdated frameworks and libraries. But they're not unhappy with Java, since they could always stay at an older version - and that's exactly what they do.
> I disagree. C was still more popular than C++ in the nineties
I was talking specifically about the industry which is now dominated by Java: business software. C++ strongly dominated two industries throughout the 90s: post-mainframe business software of all kinds and desktop GUI software. C never came close to dominating business software. In the early days of the desktop these were often written in Basic or Pascal, and by the 1990s C++ became the standard object-oriented language, although most of its competitors were simpler. C was probably dominant in early desktop GUI software, but was completely replaced[1] by C++ and other languages by the mid-1990s. C was still dominant in games throughout the 1990s, although C++ would come to dominate that industry later on.
> and compared to the average of the time, COBOL was simpler than C#.
I disagree that COBOL was simpler than earlier languages like FORTRAN or LISP, or contemporaries like ALGOL 60, but it's probably a moot point. COBOL was not a general-purpose language at that point, and its structure was quite unique.
> But even that doesn't matter. Teachers are telling us that if we increase the pace, even more of them would switch to teaching Python.
I'm a bit surprised by this sentiment, since Python has been steadily adding major features since its very inception, while Java only increased its pace rather recently, starting with Java 9. Python started gaining popularity as a teaching language during the period it was moving faster than Java.
> It's not as simple as that. Implementing user-mode threads is drastically more difficult than async/await.
I completely agree with you here. This must have been a major part of the .NET team's rationale. But they could have undertaken a multi-year project to implement it like you did. The end-result probably would have been the same: a variety of community-based projects that try to bring async I/O and M:N concurrency to the language.
[1] Outside of small holdouts like Gtk and Carbon I guess.
When we ask companies why they picked Java, they mention backward compatibility; a good combination of performance, productivity and observability; and sustained popularity, which means a large ecosystem and hiring pool and is a good predictor of future popularity, which, alongside backward compatibility, means there's a good chance that their investment will be preserved.
Unlike runtime features, such as great GC performance and observability tools, specific language features or lack thereof don't usually come up, so I don't know if that has a direct or an indirect effect, and it's certainly possible that had the Java language been less conservative it would have achieved similar success. It's just that no one has ever managed to do that, so there's no reason to change course. (We can argue over C++, but its super-popularity was very short-lived, certainly compared to Java)
A direct effect that we do know about is teaching. Teachers do tell us that they don't like teaching rich languages as a first language. I don't think that the absolute pace matters so much, but rather the complexity of the language compared to alternatives. Teachers pick from a (very small) selection of relevant languages, and language simplicity is one of the important factors (so they tell us). In particular, those who pick Python always mention two factors: its simplicity compared to Java and the ease of getting started, which is why we'll be trying to address that second factor.
> And for these businesses, the current pace Java is moving at is still too fast.
The concrete issue is actually the difficult migration from 8 to 9+, which happened because Java lacked strong encapsulation until JDK 16, libraries depended on JDK internals, and those internals changed with changes to the runtime and libraries. Libraries quickly updated, but some did so with breaking changes of their own, which meant that old products needed a bit of work to upgrade, and some of them didn't have sufficient personnel. They are a minority now, though. This would have happened even with no changes to the language (and there weren't many in 9).
> while Java only increased its pace rather recently, starting with Java 9
More precisely, it returned to its former pace after years of relative stagnation due to diminished resources. Although it may appear faster because partial language features are trickling in every six months rather than in a more complete form every three years. We're trying not to exceed that original pace, and yes, we are selective and conservative, in line with Gosling's original strategy of "a wolf in sheep's clothing" (an innovative runtime wrapped in a conservative language).
.NET's async is far from great, but Loom/vthreads is an impressive achievement in distilling all of the mistakes that .NET made, while learning from none of the good ideas that it (or futures/promises/async/await) had.
I guess he's talking about Project Loom (fibers, Go-style concurrency), which IMHO is a much better solution than async/await, but yeah... it took some 10 years to arrive.
You could have had the Loom experience 20 years ago by just spawning OS threads. Of course, there's a reason that this was discouraged... threads quickly turn into a nightmare to manage safely, especially when they need to interact.
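For contrast, here is a minimal sketch of the thread-per-task style under discussion, in both flavors. The virtual-thread API shown requires JDK 21+; the class and task are made up for illustration, not taken from anyone's code:

```java
import java.util.concurrent.CountDownLatch;

public class LoomSketch {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(2);

        // The classic approach: one OS thread per task. Fine for two tasks,
        // ruinous for a hundred thousand.
        Thread platform = new Thread(done::countDown);
        platform.start();

        // The Loom approach (JDK 21+): the same Thread API, but the thread
        // is cheap and scheduled by the JVM, so per-task threads scale.
        Thread virtual = Thread.ofVirtual().start(done::countDown);

        done.await();
        System.out.println(virtual.isVirtual()); // true
    }
}
```

The point both sides of this thread agree on is visible here: the programming model is identical either way, which is exactly why it was available "20 years ago", and also why it carries the same coordination pitfalls.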
Genuine question: how is managing cross-interacting virtual threads any different or easier than managing interacting threads? I say this as someone who is greatly looking forward to using Loom in production. It's definitely the correct way to go as opposed to async/await.
It's not. That's the problem with the thread API that Loom is so dead set on preserving, and the big improvement that promises/async/await provide over threads.
The issue with OS threads is that their number is limited, not communication.
Futures are good enough for that; if you need more, there's structured concurrency.
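A hedged sketch of the futures-are-enough point, using only java.util.concurrent (the method and values are illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class FutureCombine {
    // Two independent "I/O" results combined explicitly; neither task
    // blocks on the other, and the interaction is stated in one place.
    static CompletableFuture<String> greeting() {
        CompletableFuture<String> hello = CompletableFuture.supplyAsync(() -> "hello");
        CompletableFuture<String> world = CompletableFuture.supplyAsync(() -> "world");
        return hello.thenCombine(world, (h, w) -> h + ", " + w);
    }

    public static void main(String[] args) {
        System.out.println(greeting().join()); // hello, world
    }
}
```

This style works the same whether the underlying threads are platform or virtual; the future is the communication mechanism, the thread is just the worker.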
Increased productivity comes in various ways. A more popular language often has a better ecosystem that helps productivity, and adding lots of language features quickly is a hindrance to huge popularity. This might not be the case for many here, but most programmers prefer fewer features than more, and the most popular languages are also often those that can be taught as a first language, which also requires restraint. Also, languages that add features quickly are often those that add breaking changes more easily, which loses productivity.
So even if some language feature helps with some problem, the assumption that adding it ASAP is the best way to maximise the integral of productivity over time is not necessarily true. Language features also cost some productivity, and the challenge is finding the right balance. Java chooses to follow a strategy that has so far worked really well.
And you could probably say the same for JavaScript and Python, that, together with Java, make up the current topmost tier of super-popular programming languages, which is why I think your conclusion is wrong.
These three languages are often first languages (or, at least, first professional languages), which is a necessary (though insufficient) condition for being super-popular. All of these languages more than make up for the loss of programmers who prefer richer languages with the programmers for whom it's a first (professional) language. Different programmers indeed prefer different languages, but that doesn't mean that the preferences are evenly distributed. My rough personal estimate is a 90-10 split, where the 10% prefer more feature-rich, faster-moving languages. That's a very big minority, but Java addresses it by offering the Java language for the 90%, and supporting other languages on the Java platform for the 10%.
You can also see that while the market is becoming more fragmented, no language is currently threatening those top three (although TypeScript could threaten JS, I think), and no language is doing better than them. I.e. other strategies seem to be doing worse.
So knowing that for every X programmers we win we lose Y no matter what we do, we try to carefully balance the evolution. I can tell you that we are more worried about teachers telling us that Java is getting too many features too quickly, and that it's harder to teach than Python (hence "Paving the Onramp" [1] and other planned features), than about the programmers asking for more features quicker. The former represent a much larger group than the latter. Also, moving more toward the latter group is both easier and less reversible, so it has to be done with great care.
This isn’t just me; the language’s popularity has been steadily falling from a peak in the mid 2000’s. You can quibble about specific numbers, but the overall trend seems to be that languages peak and then slowly fade, and Java is following Fortran, Pascal, and C as languages that simply aren’t keeping up with the shifting demands of the modern workforce.
Java's mid-2000s peak was anomalous, not just for Java, but for any language. I can't think of a language (maybe C in the 80s?) ever dominating so much of the market. The market is now much more fragmented. What you should be asking is, which language is doing better? JS and Python are the only candidates. No single language is currently poised to take its place (although PHP and Ruby came closest), and no language outside that group of three (with the possible exception of TypeScript) seems to be threatening any of them.
So Java isn't as dominant as it was 20 years ago, but no one else is, either. And when you compare Java to the current landscape, you see that it's in a very enviable position, and it's as safe a bet today as it ever was.
Several languages had similar levels of dominance. Fortran was even more dominant in the late 1960’s; then C in the early 90’s, followed by Java in the 2000’s, each had clear dominance.
It’s possible though harder to verify if Pascal reached that level and we might see Python hitting it soon.
> Several languages had similar levels of dominance.
I don't think Fortran was ever quite that dominant, but we can certainly agree that no language has been as dominant since Java.
> It’s possible though harder to verify if Pascal reached that level and we might see Python hitting it soon.
Not even close for either one of these. I was already programming in Pascal's heyday, and it was mostly used in education. It was never very popular in industry. Python is extremely popular, but many of its users are not professional programmers, and it isn't dominant in big server software at all. In the early 2000s, Java dominated servers, clients, and education. I don't think any language is even remotely approaching that today, but JS would be the closest (although still very far).
Java today is the #1 most popular server-side language, certainly for big software, and by a big margin. According to the best data we have [1][2], it's about 1.5x-2x more popular than C#, maintaining the same gap those two languages have had for about 15 years. It's about 7-15x more popular than Go.
So other than JS and Python, all other languages are doing worse than Java, with prospects that look significantly worse than Java's. No language is even coming close to threatening Java's position the way PHP and Ruby once did. Node.JS looked like it could for a while, but then it sank quickly; some thought Go might do it, but while it's certainly interesting and there's much we can learn from it, its growth has stalled.
Comparing Java to itself 20 years ago is unfair, because it was an unusual time for programming. But when you compare it to the competition today, you see that Java is doing spectacularly. I would like to see it taught more in schools, though, where Python has overtaken it.
The thing is even Java’s heyday was far more fragmented than Pascal’s.
Despite what HN suggests, server-side programming isn’t that big. Android is currently a major component of its position; if you don’t include Android then Java isn’t the most popular language.
Java is practically non-existent in client-side web, systems programming, desktop applications, iOS, scientific computing etc. So only something like 20% of programmers are using Java as their primary language, and it might have topped 35% at its peak.
> Despite what HN suggests, server-side programming isn’t that big. Android is currently a major component of its position; if you don’t include Android then Java isn’t the most popular language.
Quite the opposite. If you look at the hiring labs data, iOS and Android combined make up less than 1/3 of the Java market alone. That's not surprising. There are lots of mobile apps, but they don't require that many hours of work.
> So only something like 20% of programmers are using Java as their primary language, and it might have topped 35% at its peak.
But no other single language has better prospects, with only JS and Python in the same game. Java is not as super-dominant as it once was, but it is more dominant than almost any other language in existence. Everyone else is doing worse (or about as well, in the case of Python and JS). Moreover, there currently aren't languages seriously threatening Java's position as PHP and Ruby (and maybe JS with Node.JS) once did.
The observation that the market is more fragmented than before with all languages commanding smaller portions than some did in the past is true. But if you're worried about Java's position, you need to be much more worried about, say, Go's.
The data I looked at suggests that without Android, Java falls behind JavaScript. Also, Python is starting to take over at CS schools, which is usually a sign it’s going to be even more popular in the future.
I suspect Python is going to take over from Java fairly soon, even if it might not reach Java’s mid 2000 dominance. That said, Java never reached the dominance of C, and C never reached the dominance of Fortran so I don’t think such arbitrary benchmarks mean much. In absolute numbers we have far more programmers so the next dominant language is likely to surpass past peaks by that metric.
I don't know if Android matters that much (it's quite small), but regardless, as I've said several times, JS and Python are indeed the only languages with arguably better prospects than Java at the moment. Those who want Java to remain in that top tier should at least understand why, if we'd emulate anyone in any way, we'd try to emulate them rather than languages that are doing so much worse than Java.
But it's also good to remember that both JS and Python have their own issues, that are by no means smaller than Java's, and neither of them currently threatens Java's dominance on the server, and no one else does either (although PHP and Ruby did in the past).
I think you are jumping to your desired conclusion here. Even if we take your claims at face value, there are so many reasons people switch languages, and problematic language evolution is not the only possible reason.
What else would you suggest for the falling popularity of Java from it’s mid 2000’s peak?
Some of this is just fads, but I think languages tend to suit the time period when they are most popular. In 1990 C was a hugely dominant force because it suited the kinds of programs being written and the hardware available. IMO Java was a great compromise for late 90’s hardware, but different tradeoffs are becoming more useful resulting in Python’s rise.
Much of this could be fixed with a better tooling and an overhauled standard library, but basic language pitfalls are still a problem.
There were waves. The first wave of leaving Java was for RoR. Then there was a period where we had all these JVM languages, with Scala, JRuby, and finally Clojure. Then a bit later came Go, which is when I switched.
So why? The first wave was due to the god-awful experience of building web apps on app servers. Remember how tedious that was? I forget the names, but there were so many frameworks. And of course, IMO, things like JSPs and Spring also definitely played a part in motivating the search for more pleasant pastures. So this I would chalk up to the impedance mismatch between Java and browser tech.
Second wave was more about 'language'. FP. DSLs. Rich Hickey! State is not identity! :)
Third wave was, IMO, due to a more seismic shift that claimed more victims than just Java. This was the beginning of the noSQL era, "simplifying", Redis! What a breath of fresh air. Again, not that Redis made people switch languages, but there was a shift in mindset as to how to build software. Java all of a sudden looked like the RDBMS's second cousin.
Then the cloud. JVM startup times. Memory footprint, etc. I stopped following Java's progress after 2008, but sense this was the period when the Java stewards finally were motivated to be more adventurous with new features. But in the meantime, Go ended up being the server side networking champ.
But now, with things like GraalVM, I'm actually excited to switch back to Java as my main lang again.
Concurrent with all this, fads as you mention; amplification of unseasoned voices via blog-sphere that shifted mindshare; and just the basic human need to seek variety.
That’s fair. I would add that the Oracle acquisition of Sun and subsequent missteps shouldn’t be ignored. There was a huge wave of negative publicity and uncertainty, which played off of the perception of Java being the new COBOL with a helping of factory.factory_endless_boilerplate_word_salad.
Java’s fragmentation also didn’t help, as web technology churned through Applets, Servlets, JavaBeans, Spring, JSF, etc., without letting people settle into something that just worked reasonably cleanly. The perception was always that there were tons of legacy options and sometimes multiple hot new fads, creating an endless treadmill where working on the same thing for 4 years left you behind the curve rather than in a productive environment.
By comparison, .NET benefited from the second-mover advantage. Embrace, Extend, Extinguish didn’t work, but uniformity brought its own advantages, and they could always copy and tweak something when it was clearly better.
Finally, there was a perception that the kind of companies using Java were exactly the kind of companies that would soon outsource jobs to India or just underpay and replace everyone with H1Bs.
What is this argument exactly? We're arguing against the concept of features? Why are we talking hypothetically?
Async/await provides immediate user value. If people didn't like it, they could just use threads in a Java-like style. While some dislike the feature, and others push back against the sour grapes, it's a popular feature found in many languages and used by many developers.
Java is popular and so is C# and Javascript, so I can't see how we can draw any conclusions on async/await.
First let me say that since it's established that different programmers have different preferences, the fact that some programmers might prefer a different evolution strategy is no reason to change it, because that will always be true. A reason to change strategy is if some other one has fared better, and none has. The only languages that have arguably fared better than Java are Python and JS, and they have fewer, not more features. So there's simply no good reason to pick a different strategy that has not shown to fare better.
Now, when it comes to async/await, first let's take JS off the table, because JS has no other convenient concurrency feature, and it couldn't add threads because it would have broken most existing JS code (it could add isolates, but they're not sufficiently fine-grained, at least not in the JS world).
If Java had got async/await ten years ago, it would have been burdened with it for decades. It would have provided some value in those ten years, and exacted a cost forever after (albeit a gradually diminishing one). "Just don't use it" works fine for library features, but not so much for foundational language features, because programmers usually don't start with a blank slate, and don't pick the features in the codebase they work on. Therefore, all language features carry a cost, and they carry a higher cost when it comes to beginners, where this matters most.
It's hard to precisely describe what could have been, but I think most would agree that in those ten years Java didn't lose many developers to languages with async/await because they had async/await. It probably lost some developers to Python and JS for other reasons (say, Python is easier to get started with, and JS is easier for those who know it from the browser), and it didn't even lose that many people to Go (Python lost many more to Go than Java did). Considering all that, I think that Java's position in 2022 is better, now that it has virtual threads, than it would have been had it also had async/await (which would have likely also delayed virtual threads).
If I could go back in time knowing what I know now, I would have advocated against adding async/await ten years ago with even higher certainty. Back then I just believed there was a better way; now not only do I know there's a better way, but I also know that not adding async/await didn't cost us much, if anything at all.
Going back to the original topic, Java's primary competitors -- Python and JS -- also don't have a great solution for nulls. So while I would very much like to address this problem, I see no reason to change our game plan in favour of scrambling for a quick solution. We'll tackle this issue like we try to tackle all others: methodically and diligently, and preferably after we've had time to study how well various solutions work elsewhere.
I think Java really dropped the ball on UIs (and other paradigms where you need to pass data across threads) and this is partly because of the terrible threading hoops that need to be jumped for lack of async/await. Kotlin gained traction exactly because Java didn't solve this.
I think it's pretty depressing to hear that you're numb to this.
TBH, I don't think virtual threads even address this use case in a way that isn't just async/await but less sugared syntax. (Although I'm willing to be pleasantly surprised once new libraries pop up)
Not only am I not numb to this, Oracle is now renewing investment on the client. But client side programming does have a clear winner -- JS. Not only Java, but everyone else lost. But whatever Java does, it's clear that no single language is currently on a path to dominance across multiple domains. Java is dominant on the server, JS on the client, and Python in ML and smaller programs. When we compare Java to its historically anomalous dominance in the early oughts, its position today is no doubt worse. But when we compare it to how other languages are doing, no one else seems to be doing much better, and most are doing significantly worse.
Because you don't always control where the code is being called from.
Other compile-time-typed languages (C, C++, C#, TypeScript, Rust, Kotlin) usually let you enforce that as part of the type system, but the Java compiler will silently accept null in place of an Optional and throw an NPE at runtime.
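The failure mode being described can be sketched in a few lines; the class and method names here are made up for illustration:

```java
import java.util.Optional;

public class NullableOptional {
    // Compiles without complaint: the compiler treats Optional like any
    // other reference type, so null is a legal value for it.
    static Optional<String> findUser(boolean broken) {
        return broken ? null : Optional.of("alice");
    }

    public static void main(String[] args) {
        Optional<String> ok = findUser(false);
        System.out.println(ok.isPresent()); // true

        Optional<String> oops = findUser(true);
        // oops.isPresent() would throw NullPointerException here:
        System.out.println(oops == null);   // true
    }
}
```

So the caller ends up null-checking the very wrapper that was supposed to make null-checking unnecessary.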
Right, if only there was a language on the JVM that did that /s
It's really an eye-opener when you compare kotlin and scala, with all their superficial similarities: Where kotlin simply takes the @Nonnull annotation, promotes it to a core language feature and drowns it in convenient function-scope syntactic sugar until it's actually nice to use, scala opts for Option and stacks layer upon layer of architecture trying to make Option somehow disappear. I lost half a decade holding on to plain java snobbishly dismissing kotlin as a second class scala before I finally got converted.
Rather than opting into a slightly incompatible dialect for some but not all code, I would like an IDE that lets me specify what is @Nullable, and quietly inserts @NotNull everywhere else without displaying it. We can keep the boilerplate in the bytecode without rubbing our noses in it.
Yes, it's always been possible to check for nulls at runtime.
Personally I use notNull(..) over @NonNull since it actually fires when you expect it to (as opposed to whenever your framework's dispatcher/interceptor/trigger decides to invoke it).
Isn't @NonNull just syntactic sugar for adding a checkNonNull() call as the first statement of the method at compile time, or am I mistaken? Just like Lombok, it is supposed to generate code that checks null arguments, from what I know.
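For reference, the kind of generated check being described amounts to the standard library's Objects.requireNonNull written by hand. A sketch (the method name and message are illustrative, and whether a given @NonNull annotation actually generates this depends on the tool):

```java
import java.util.Objects;

public class NullChecks {
    // What you'd write by hand, and roughly what a code-generating
    // @NonNull inserts at the top of the method body.
    static String shout(String input) {
        Objects.requireNonNull(input, "input must not be null");
        return input.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(shout("hi")); // HI
        try {
            shout(null);
        } catch (NullPointerException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Note the contrast with @Nullable/@NotNull from annotation packages like JetBrains' or JSR-305: those are advisory metadata for IDEs and static checkers and, on their own, execute nothing at runtime.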
@Nullable/@NotNull is great when the IDE shows the warnings, basically dev time checking. There are also tools to integrate it into your builds for compile time checking.
Right, that's because the system libraries don't have the annotations. That's the biggest issue with it. But it still helps a lot if you're religious about it in your own code.
Scala had the same possibility, but the ecosystem of libraries used Option<T> appropriately so in practice I never thought about null. That might be harder in an older, larger ecosystem like Java...
imo it seemed like there was this phase of "if we pretend null doesn't exist maybe it will go away" that resulted in a bunch of design issues. Beyond Optional, the other big one to me is Map.compute/Map.merge, which subtly breaks the previous interface contract for Map. As annoying as null is, I'd rather have null than have broken apis.
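A small sketch of the contract wrinkle described above, using only java.util classes (the map contents are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class NullInMaps {
    // In compute(), null is overloaded as a command: a null result from
    // the remapping function means "remove the mapping", so these methods
    // can't be used to store a legitimate null value.
    static boolean survivesCompute() {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("a", 1);
        counts.compute("a", (k, v) -> null); // removes the entry for "a"
        return counts.containsKey("a");
    }

    public static void main(String[] args) {
        System.out.println(survivesCompute()); // false — the entry is gone

        // Whereas put() happily stores null, and get() can't distinguish
        // "mapped to null" from "absent" without a containsKey() call.
        Map<String, Integer> m = new HashMap<>();
        m.put("b", null);
        System.out.println(m.get("b"));   // null
        System.out.println(m.get("zzz")); // null, too
    }
}
```

So a map that legitimately stores null values behaves differently under the pre-Java-8 methods than under compute/merge, which is the contract break being complained about.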
> They couldn't have fixed it while retaining backward compatibility.
Opt-ins exist. That's what .net ended up doing: by default the language uses "ubiquitous" nullability, but you can enable reference nullability support at the project or file level.
If you do that, references become non-nullable, and have to be explicitly marked as nullable when applicable.
No, because in C++ bare types are not nullable. In Java, reference types are nullable; Optional being a reference type, Optional<T> can be null, Empty, or have a value, whereas T can be null or have a value.
Also because std::optional does essentially the opposite. It’s a more efficient _ptr rather than a safer one. Optional<T> is strictly less efficient as it implies an additional allocation.
> Optional<T> can be null, Empty, or have a value, whereas T can be null or have a value
Thanks for the explanation. Wow, this sounds like a bit of a mess, one because of the allocation, and also because it seems there are multiple ways something can be semantically null.
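A minimal sketch of those three states in Java (the describe helper is made up for illustration):

```java
import java.util.Optional;

public class OptionalStates {
    // An Optional<T> reference in Java can itself be null -- the wrapper
    // does not remove null from the type system, it adds a third state.
    static String describe(Optional<String> o) {
        if (o == null) return "null wrapper";   // perfectly legal, sadly
        if (o.isEmpty()) return "empty";        // isEmpty() exists since Java 11
        return "value: " + o.get();
    }

    public static void main(String[] args) {
        System.out.println(describe(null));                 // null wrapper
        System.out.println(describe(Optional.empty()));     // empty
        System.out.println(describe(Optional.of("x")));     // value: x
    }
}
```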
OK, so when Valhalla lands I assume Optional will become a value type (or we'll get another class for that, to avoid backward-compatibility issues).
Just a heads up, Scala 3 actually has a compiler flag that makes types exclude 'null' as a valid subtype, so every nullable variable has to have a type signature like String | Null.
We've been using `@Nullable` as Meta does in our code base for many years now (everything else is assumed to be non-null). Initially, we had CheckerFramework actually doing the checks at compile time... but eventually had to remove it because it's unstable, slow, has not kept up with Java evolution (we're on JDK 17, I think Checker still barely works on JDK 11) and also, because our IDE can be configured to flag errors on `probable nullability issue`! That was really a boon for us... it costs nothing at compile time and it's pretty easy to not mess up as the IDE keeps us in check whenever we make any change.
At the same time, we have adopted Kotlin as well. Interestingly, most of our developers prefer to stay on Java... not because they love Java, but because of familiarity and that they seem to think it's "good enough".
Anyway, we haven't had a large rate of NPEs for a very long time and when one actually happens in the wild we're extremely surprised!
That said: looking forward to the specification[1] that's being cooked up as Java absolutely should have had this a long time ago (as mentioned in the post).
> At the same time, we have adopted Kotlin as well. Interestingly, most of our developers prefer to stay on Java... not because they love Java, but because of familiarity and that they seem to think it's "good enough".
I have been using Kotlin on Android for a few years now, and really enjoy the extra expressiveness and conciseness that the language offers ... but I have come to the conclusion that it's much easier to write unreadable Kotlin code than Java code.
One of the things I think makes Kotlin hard to read is scoping functions (apply/also/let/run/with). Based on their names alone, a newcomer can't tell what they do (I had to consult a table for weeks to remember whether each one returned the original object or the last expression, and whether it passed the receiver as it, this, etc.).
I do love these when used at the right place, right time, and "just enough" but I keep coming into contact with code that nests these 4-5 levels deep:
    aaa?.run {
        bbb?.run run2@ {
            ccc?.run {
                ddd?.run {
                    this@run2.foo()
                    // what's in scope here? urgh
                }
            }
        }
    }
Other pathologies I see include taking any method foo that operates on X and making it an extension method X?.foo(). Then the namespace for X gets polluted with all these things.
One of the great things about Kotlin's nullability in the type system is the operators like ?. and ?: ... but often see code like this:
aaa?.bbb?.ccc?.ddd()
Is it safe? Yes, yes it is. It won't crash, but what happens if one of these is null and should not be? There is nothing in Kotlin forcing people to write this way, but I find Java's explicit checking of null made people think more about the "negative case" (throw an exception? etc.).
Like anything else, I guess we could use linting rules, but it comes down to having good taste ... which is not in abundance.
What worries me the most, though, is that in the beginning Kotlin was mostly additive on top of Java. I read that, in biological systems, adding is easier than changing or removing. It seemed like Effective Java + typed Groovy. Now that Java has the "last mover" advantage and adopts the good parts of Kotlin, I am worried it's no longer just additive; they need interop between different implementations (records vs data classes).
In your first example, that's poorly written code that should never make it past code review. I'm also having a hard time thinking of a case where run would even be useful in that situation that wouldn't also fix your second example.
If you need all aaa through ddd to be non-null you return early or you check ahead of time
    val aaa = aaa ?: return
    val bbb = bbb ?: return
    val ccc = ccc ?: return
    val ddd = ddd ?: return
That works even if those three-letter friends happen to be "getter vals" that might change behind your back (the compiler is perfectly aware of the difference).
That's the one gripe I have with Kotlin: those "getter vals" that aren't even remotely related to immutability. It's likely a consequence of building closely on top of Java; there's simply no way an interface could promise to never return two different references from a call on the same object without that promise being a lie. And as usual, Kotlin provides gentle damage mitigation at the syntax level: "val x = x", as in the four lines above.
I would have preferred something even more concise like "val=x" though, where the name shadowing is promoted to explicit intention by not typing the name twice, as in "call that getter and nail its return value to the getter's name for the remainder of our scope". People tend to be reluctant about that name shadowing, unsure whether it's good style or bad; if there were an explicit shortcut, it would be blessed as The Kotlin Way (and, as you said, it's so much better than getting pulled into that let/also nesting abyss that feels clever while writing but turns on you before you're even done).
My workplace is trying to get onto the KMM train and I get sweaty palms trying to imagine people onboarding onto Kotlin from other languages.
I mean I love Kotlin, but as you mention there is just so much syntax. The simplest things can be done 100 different ways and it's hard to say what's idiomatic.
I'm personally going to be ruthless about "syntax simplicity" in code reviews. Otherwise I'm imagining a situation where the dream of shared code really just becomes "Android devs throwing a black-box library over the wall at iOS devs" because only they spend enough time with Kotlin to understand it
Whoa, interesting. I didn't know Kotlin had all those constructs.
In Virgil, a method on an object (or ADT) can declare its return type as "this". Then the method implicitly returns the receiver object. That trick is very useful to allow a chain of calls such as object.foo().bar().baz(). I find it readable and easy to explain:
C# has made it possible to gradually roll out stricter nullability checking as well. The static analysis gets integrated into the regular language analyzers. Incremental migration is the only way to go.
In a similar fashion I was extremely impressed with how Dart approaches this as well and was able to move the entire ecosystem along in about 18 months. https://dart.dev/null-safety
It was a lot of work to get there and we're still not totally there yet. Dart still supports running legacy code that isn't null safe. But I'm really glad we did this and very grateful that Leaf and others on the team were able to design a null safety type system that:
* Defaults to non-nullable.
* Is fully sound.
The latter in particular is very nice because it means the compiler can take advantage of null safety to generate more efficient code, which isn't true in languages like Java, C#, and TypeScript where their null safety type rules deliberately have holes.
Yeah, I did a bit of Dart null safety migration work at Google and I thought they did a good job. (The auto migration tool sometimes made things nullable unnecessarily, and so you had to carefully review its output, but you could always just not use the tool.)
Some rare nice words from me about C#: the not-null facility is great. Its propagation is very basic and should be better, and I've fought the compiler too many times over it, but I really like the compile-time guarantee it gives.
C# has taken it further. New projects give errors if you don't declare variables as nullable when they may be null in some context, which will help static analyzers find these cases. But I see a big risk in it: that people will just init nonsense objects to get around the warnings.
> That people will just init nonsense objects to get around the warnings.
You technically can do "= null!", which means assign null by default, but assume it is not null. This is currently the recommended way to do deserialization, where you know the value is not null, but you have not explicitly filled it.
No, you get a nullable warning for that as well. You also need to change the reference type to nullable. I am convinced that some people will just init the properties with new(), and by that hide the null errors. Easily the hardest bugs to track down are the ones where things fail long after the real error occurred. I think they could have come up with a better solution to the problem.
If you assign it with "= null", you get the warning. If you assign it with "= null!" (note the exclamation mark, the null-forgiving operator), you do not.
It sure can be a problem when people initialize stuff with nonsense to avoid running into nullable warnings. However, accidentally running into null pointer exceptions is still worse than people deliberately footgunning themselves with bad workarounds.
I feel C#'s nullable has helped me personally to avoid a lot of potential bugs and also changed the way I write code in a lot of places - like creating `bool Try...(..., out var)` style APIs instead of "old school" returns-null/throws style stuff, which I think makes a lot of code cleaner and easier to read.
Sometimes nullable can get a little messy and annoying, especially when retrofitting old code to make use of it without breaking existing APIs, and all in all the way C# does it is a clear net win in my opinion.
I think you have to separate nullable types from the global nullable directive. The global nullable directive will just make people return nonsense like the trend some years ago when people started to return empty lists instead of null.
The null problem has been around a very long time. It's been a huge source of errors in programming. I'm glad to see it actively being worked on, even if it is awkwardly being retrofitted on an old language.
I can only look on with disappointment when new (or newish) languages include null as part of the language instead of doing something more sensible like having an Option/Either type. Of course, I won't name names... cough Go cough
It's okay for null to be a part of the language as long as the type system can reason about it (via nullable-types, sum types, etc). Dart, TypeScript, Kotlin are examples. But yeah, Go is the problem-child that doesn't have any way of statically reasoning about nulls
Yes, it's OK to have null in the language in limited cases. I'm fine with null so long as the language enforces checking for null when it could cause a NullPointerException (or that language's equivalent).
Java's biggest failure in null handling was allowing EVERY reference type to be null without a way of specifying that it could be non-null. This basically ensured that null could creep in almost anywhere.
Allowing a type to be null should always be opt-in. It still bothers me that I have to put NOT NULL in almost every SQL column definition because the SQL spec has columns default to being nullable. Ugh. At least SQL does allow me to choose nullable vs non-nullable, unlike Java.
TBF, what Java does was pretty standard back in the 90s when it was released. Though obviously it would be nice if this had been fixed since. Not that there were no grumblings and better alternatives, but it was very much the little-questioned (if questioned at all) norm for procedural languages.
I agree about Go. One quite frustrating scenario I have ran into multiple times is if you unmarshal (parse) some JSON into a struct, you can’t tell if an integer field with value 0 was specified in the JSON input as 0, or missing from the input. AKA there’s no way to differentiate the zero value from undefined for primitives.
There are roundabout hacky ways like parsing the JSON into a map[string]any to check if it’s there, but it’s so ugly and requires so many lines of code.
I also hate this about Go, but its (partial) saving grace here is the `x, err := NewX()` pattern which (at least for me) tends to prevent a decent number of these issues in practice since usually either `x` is non-nil xor `err` is non-nil.
Makes the

    // Java
    name = personService.getPerson(123).getName()

problem less likely, since you'd generally have to write:

    // Go
    person, err := personService.GetPerson(123)
    if err != nil { ... }
    name, err := person.Name()
    if err != nil { ... }
but I think that's part of Go's tradeoffs -- much more likely that errors will be annotated more correctly (i.e., in Go you'd be more likely return an error like "failed to load person: 123" if it was the GetPerson call that failed rather than a generic error that doesn't describe which step failed)
So after working at Google for 6 years (and doing Java and other things prior) I went to Facebook and programmed in Hack for 4 years. Hack is actually a really interesting and convenient language in many ways. One way I really appreciated is how nullability is built into the type system. Having to deal with nullability in Java is actually way more of a chore than any of the verbosity that people normally complain about (eg boilerplate around anonymous classes).
The story I like to tell about how people abuse Java is this:
@nullable Optional<Boolean>
I don't know if this is still the case but you could literally find many instances of that in Google3 (and not all of them were the product of an auto-generated tool). I mean it's a super-large code base. There was a cleanup at one point for the (many) instances of 1204 in the code base.
The way I like to describe this is "for when 3 values for your boolean just aren't enough" (since you obviously now have 4 states: 1) null 2) Not set 3) Set, true 4) Set, false).
I have a few rules that have held up very well over the years, including:
1. If you don't do arithmetic on it and it's not an ID of some kind, it's almost certainly not a number (eg people who store SSNs as INT fields); and
2. It's almost never a boolean. You're almost always better off replacing any boolean with an enum. Not only is this typesafe, but if done right, adding an enum value will cause compiler errors because you've been exhaustive with switches without a default case.
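A sketch of that second rule in Java (the VerificationStatus enum and its labels are made up): with a switch expression (Java 14+) and no default branch, the compiler forces every constant to be handled, so adding a new one breaks the build until it's covered everywhere:

```java
public class EnumOverBoolean {
    // Instead of `boolean verified`, an enum makes every state explicit.
    enum VerificationStatus { UNVERIFIED, VERIFIED }

    static String label(VerificationStatus s) {
        // Switch expressions must be exhaustive: adding a hypothetical
        // PENDING constant is a compile error until it's handled here.
        return switch (s) {
            case UNVERIFIED -> "not yet verified";
            case VERIFIED -> "verified";
        };
    }

    public static void main(String[] args) {
        System.out.println(label(VerificationStatus.VERIFIED));
    }
}
```

With a plain boolean (or a switch with a default case), that compiler assistance is lost and new states silently fall into the wrong branch.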
My experience has been that "migrate on touch" is a reasonable strategy, so if you have to make a change to a file, use the "Code > Convert Java to Kotlin" (control-alt-shift-k)
A reasonable strategy if you never ever merge two branches. If there's a non-homeopathic chance that a parallel change to the file in question might eventually pop up, I'd limit "migrate on touch" to occasions when you do major rework, not just a minor touch. If it's possible to occasionally enforce a "branch singularity moment", I'd go with migrate-on-major-rework until a branch singularity opportunity comes up, and then do the bulk conversion to what I affectionately call "shit kotlin" (the endless procession of exclamation marks that faithfully recreate each and every opportunity where the Java could, in theory, achieve an NPE) in one go. And leave only the cleanup of that mess to "on touch". If it later comes to parallel cleanup, that won't be half as annoying to merge, not even remotely.
What I haven't tried is "migrate on touch" with a strict rule that there must be explicit commits just before and after the conversion (plus a third commit documenting the file rename separately, before or after). That could perhaps work out well - or not help much at all, I don't feel like I could even guess.
But other than that, the intermediate state of partial conversion is surprisingly acceptable to work with, I'm not disagreeing!
IIUC, there's already automated tooling that will do the conversion without producing a giant mess like C/C++-to-Rust does, so the cost is predominantly CPU and not SWE time.
But then you have to use Kotlin, which isn't just Java with nullability types, but a language with quite a different design philosophy, and a language that is increasingly at odds with the evolution of the JDK (partly but not solely because it also targets other platforms, such as Android and JS). It appeals to some but certainly not to all (interestingly, it hasn't significantly affected the portion of Java platform developers using alternative languages, which has remained pretty much constant at about 10% for the past 15 years).
I tried Kotlin, and while I liked it, interop was still somewhat annoying, Java's lambdas are better, using the Java 8 stream API is ugly, and the code ends up being similar enough that I'd rather use Java and avoid the tooling hassles.
The article actually addresses Kotlin. They'd love to switch to it but they just can't do it overnight because they have so much mission critical Java code. So, this is a stop gap solution for legacy code. They published another article some time ago how they are switching to Kotlin: https://engineering.fb.com/2022/10/24/android/android-java-k...
Migrating millions of lines of code is a non trivial effort. They'll be stuck with bits of Java for quite some time. So, this helps make that less painful.
So you're in the exact same case as you were in Java, which was my third point. But the type is a special type to let you know what you're doing is unsafe.
Migrating from Java to Kotlin looks nice and easy on the surface (optionals!), but the lack of checked exceptions will absolutely bite you sooner or later if you are consuming Java code. Better carefully read the docs and source of all your transitive Java dependencies.
I can't think of any popular language that would take more than a few days to get acclimated to as an experienced developer, so that's not a very compelling argument.
It's always the (usually quite bad) tooling, learning about platform/SDK shittiness and pitfalls, and figuring out which parts of the open-source library ecosystem you want to engage with, that takes like 90+% of the time getting decent with a new language, in my experience. Getting comfortable with the language per se takes low tens of hours at most, as you wrote.
Kotlin (the base language) is really not that different from java. I went from 0 to standing up new backend services with limited friction. Coroutines and maybe frontends are a different story. Java doesn't yet have a coroutines equiv so that was a larger hurdle for me.
Most of the changes for me from 10/20+ hours in to now were more about identifying a style that works as effectively as possible. These types of behaviours are normal in all but the most idiomatic languages, so for anyone doing Java dev as their daily language, Kotlin felt very natural (though you really are limited to IntelliJ, since the IDE does a ton of lifting to make your life easy).
Well, C and Scala are some counter examples that immediately come to mind.
Kotlin is probably more similar to Java than any other mainstream language. There’s almost no learning curve there, while going from Java to other “easy” languages like Python requires significantly more time to get used to.
Scala it depends how you want to use it. If you're going for full FP then sure it can take a little bit longer, but you can also just use it like Java+ if you really want...
I think there's a difference here between getting acclimated to scala (for new code, presumably), which is reasonably easy, and getting acclimated to a scala codebase that was already written by someone else.
You can do the first one basically the same way you'd do kotlin, the second one can get pretty hairy if someone decided to bring in a bunch of macro heavy DSLs and syntax extensions.
Ooof, I think most devs can modify an existing code base in a few days. To learn all the idiomatic styles, tradeoffs of the major libraries and different build systems take months, maybe years IMHO.
I can't think of many other languages that will compile into a Java codebase, and be interoprable in both directions, as well as Kotlin. It's a lot quicker to pick up than e.g. Scala IMHO.
I have been writing Java for money for more than ten years and never ever have I had non-trivial problems with null. Less than one percent of the bugs I fixed were caused by nullpointers, less than one percent of write-deploy-test loops were caused by it.
I either have code that can't be null (e.g. getters of lists that create a list if the field is null, outright validation before usage), code where null has a desired meaning that must be handled (new JPA entity with no key yet) or code where the Nullpointer will lead to a client/user/implementor caused bad request style response.
What the hell are you guys doing for you to spend significant amounts of time on NullpointerExceptions?
On the same note, I didn't understand the inclusion of Optional in Java. It always felt like some annoying special-flavor custom Java nuisance like vavr or jooq.
There are patterns that experienced developers will use to avoid issues with nulls. Here are a couple off the top of my head.
1) Reversing string comparisons:
"literal".equals(variable)
instead of
variable.equals("literal")
2) Always initializing lists instead of leaving them to default to null
    public class SomeClass {
        private List<String> someList = new ArrayList<>(); // or Collections.emptyList()
        ...
    }
The problems with NullPointerExceptions can mostly be solved by teaching programmers to use these strategies and teaching them to look for boundaries where nulls can slip in.
The real problem happens when things get scaled up. If you have thousands of developers then it makes sense to take steps to automate the problem away. Also, if the code is used by billions of users then even rare NullPointerExceptions will happen frequently. So it makes perfect sense for Facebook to be the one to work on this.
The slow-down doesn't come from NPEs occurring as bugs.
The slow-down occurs from all the extra thinking that happens once [your teammate] allows nulls into the code "as a feature".
If you assume no nulls, then you get to lean on the type system. A String is actually a String, not an instruction to GOTO the nearest enclosing RuntimeException handler.
And honestly, I'd be mostly happy to work on the no-null assumption, hit a few NPEs, and fix them as I go. But someone else will decide that nulls are OK as values, and now every value is suspect. Which means you can no longer just blindly assert against nulls.
And that's the value-add of Optional. You can use it to deliberately represent things that aren't there, meaning you can go back to treating every null as a bug to squish, rather than a design choice.
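A small sketch of that usage (the phone-book map and method name are hypothetical): an Optional return type says "absence is an expected outcome here", so any raw null elsewhere in the codebase can be treated as a plain bug:

```java
import java.util.Map;
import java.util.Optional;

public class DeliberateAbsence {
    static final Map<Integer, String> PHONE_BOOK = Map.of(1, "555-0100");

    // Returning Optional documents that "no result" is a normal, expected
    // outcome of this lookup -- not an error and not a forgotten null check.
    static Optional<String> phoneFor(int personId) {
        return Optional.ofNullable(PHONE_BOOK.get(personId));
    }

    public static void main(String[] args) {
        // The caller is forced to decide what absence means at the call site.
        System.out.println(phoneFor(2).orElse("<missing>"));
    }
}
```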
Way too many developers are lazy and haven't actually studied the language they're using to adapt their code. So instead you get discussions where the tools are blamed.
Obviously there are a lot of lazy/bad developers and the larger the company/team the more likely you have some of them. The irony is when this comes up in discussions about places like Google and Facebook that have built up a cult of believing they are better than everyone else while seemingly having lots of trouble dealing with these kinds of issues.
I don't hate Optional, but what I find is the same people who can't handle using null also fall into the Optional anti-patterns.
This is just covering up design problems. NPEs show you where you have design deficiencies.
If you have getAccount().getContact().getPhoneNumber() and contact is null, you'll get an NPE. The question shouldn't be: "How do I shove the NPE under the rug for the next 1337 coder to deal with?", the question should be: "How did I initialize an Account without a Contact?"
Static analysis (detecting) is the opposite of "covering up".
> "How did I initialize an Account without a Contact?"
As long as your language lets you do it, your teammates will do it. And in many cases it won't be accidental either. Your teammate will want an Account with its Contact set to null.
Every time I see people "deal" with this problem, it looks like this:
    if (getAccount() != null &&
            getAccount().getContact() != null &&
            getAccount().getContact().getPhoneNumber() != null) {
        // do something
    }
    // don't put an else condition in, just keep going and let the program
    // produce the wrong result in a confusing way when it happens in production
I actually blame rampant code generation and an adamant refusal to even consider object-oriented design for this problem - this sort of case ought to be handled by something like:
getAccount().callContact()
(or whatever you were going to do with the phone number when you got it). Insistence on generating code from SQL schemas or XML DTDs and then writing procedural code makes that pretty much impossible, though.
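A hedged sketch of that tell-don't-ask style (Account and Contact are toy classes, and the returned strings stand in for real behavior): the object that owns the possibly-missing data decides what absence means, so callers never chain getters:

```java
public class TellDontAsk {
    record Contact(String phoneNumber) {}

    static class Account {
        private final Contact contact; // may be null internally

        Account(Contact contact) { this.contact = contact; }

        // The null check lives in exactly one place: inside the class that
        // owns the field. Callers never see the Contact reference at all.
        String callContact() {
            if (contact == null) return "no contact on file";
            return "dialing " + contact.phoneNumber();
        }
    }

    public static void main(String[] args) {
        System.out.println(new Account(null).callContact());
        System.out.println(new Account(new Contact("555-0100")).callContact());
    }
}
```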
With that awful `if` statement, you're calling the functions multiple times. It's negligible with the simple getters Java users use everywhere, but if that `getAccount()` call involved, say, the database, now you're making multiple calls when you don't have to.
C# at least has null propagation and pattern matching that can make that line:
    if (getAccount()?.getContact()?.getPhoneNumber() is string pn) {
        // do something with `pn`
    }
The "idiomatic" C# way would also include properties:
    if (GetAccount()?.Contact?.PhoneNumber is string pn) {
        // do something with `pn`
    }
Class based OOD is way harder than that. There's a reason codebases are full of these train wrecks (the unofficial official name of the pattern). And it's not because of auto generation and/or laziness. It's because, despite what's on the tin, passable - nevermind good - OOD doesn't look anything like how we actually think of objects as humans.
You mention code generation as a problem, but it is actually also a solution for this particular problem. I have really grown fond of MapStruct, where you create an interface declaratively specifying which field maps to what, and it will generate, at compile time, the fast, efficient, correct code you might otherwise write by hand.
It's sad because, while I don't think the second concern is so big, the first half of your comment is deceptively important, and this is a conversation that's really difficult to have.
I've brought it up when using Swift and Kotlin, and it takes so much energy for people to realize that safety operators, which feel really really good to use, accidentally sweep major issues under the rug.
People assume you're missing something when you explain that they need to get more comfortable with force unwrapping (which is intentionally introducing a NPE)
-
But for example, if you have getAccount().getContact().getPhoneNumber() so you can show it in a settings page, you should not throw an NPE. You can easily correct the issue in a non-invariant-breaking way by, say, defaulting to "<missing>" in the UI.
But if you have getAccount().getContact().setSomeFieldWithRealWorldImportance(true), you should either be force unwrapping or returning a Result type, and ensuring someone actually handles it with a hard stop on the user.
The problem is reality is often much more subtle than "doSomethingImportant()".
For example, I worked on an iOS app that was used for medical work, and one habit iOS devs had was wrapping access to weak object references with a null-safety operator:
So they were super comfortable writing things equivalent to viewController?.showSomeDialog()
because in most situations crashing over lifecycle issues is a bad idea... but here we could be creating incorrect output in a highly sensitive situation.
"someDialog" could be "Drug interaction between X and Y detected" for example, but the mental patterns wouldn't detect that a safety operator was hiding that.
It was better for the app to crash than silently hide that information, since at least then it'd be known the app was in an invalid state.
-
Now the problem is here devs start to think "well I'm not making medical apps" or "well I wouldn't make that mistake"...
But go through any moderately large codebase and you will find important user actions that silently fail where a crash would actually serve the user better.
The hard part is convincing people that a hard crash can ever be good, though.
An aside here, but wouldn't it be simpler to eschew the syntactic sugar and get the account first and then if it's not null then get the contact and if it's not null then get the phone number and return it, returning null if any of those conditions fail? Then there's no exception to handle and flow is not broken.
A null phone number would indicate one wasn't found, which could mean the contact record didn't have one, the account didn't have a contact record, or there was no account. If this violates a business rule, then throw a MissingAccount or MissingContact or MissingPhoneNumber exception at the appropriate place that would be more understandable to the end user or could be handled more specifically than a generic null pointer exception.
I'm a C programmer, though, so I'm used to taking the long way around to do things, usually because there's no other choice.
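That "long way around" might look like this in Java (toy Account/Contact records for illustration; each missing link returns null rather than throwing, exactly as described above):

```java
public class LongWayAround {
    record Contact(String phoneNumber) {}
    record Account(Contact contact) {}

    // Check each level explicitly; a null result can mean there was no
    // account, no contact, or no phone number -- the caller decides whether
    // that violates a business rule and deserves a specific exception.
    static String phoneNumberFor(Account account) {
        if (account == null) return null;
        Contact contact = account.contact();
        if (contact == null) return null;
        return contact.phoneNumber(); // may itself be null
    }

    public static void main(String[] args) {
        System.out.println(phoneNumberFor(null));              // null
        System.out.println(phoneNumberFor(new Account(null))); // null
        System.out.println(phoneNumberFor(new Account(new Contact("555-0100"))));
    }
}
```

Note the tradeoff the thread discusses: flow is never broken by an exception, but a null return collapses three distinct causes into one value.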
Thank you for stating this. I've been pulling my hair out trying to comprehend why more focus is not put on that fact. Whether your language is null-safe or not, you have to deal with this. For example, if I have a type that is deserialized from JSON to a Kotlin type and one of the fields is non-nullable but the JSON does not have that field, you're still going to get an error, right? With Java, at least you'll be able to do something about it, not just fail the request at these null-safe touchpoints.
Nobody should have ever claimed the GoF is a complete set of patterns. The GoF themselves vigorously said it was not intended to be.
But it has definitely been raised up to The Official Set Of Design Patterns by a lot of people. I've lost count of the number of languages I've seen someone write about "design patterns" in, and what they mean is they show a complete ported implementation of all the GoF design patterns, including all OO quirks, even in languages where they are manifestly inappropriate, including dynamically typed languages (where many of them apply, but are optimally designed quite differently to account for the huge differences in the type system) and functional languages (when many of the GoF patterns just dissolve into the language, but a whole other set of patterns is necessary).
It is the same old story, later repeated with agile manifesto, REST, and so on.
They became misunderstood and abused by enough people that the terms stopped meaning anything useful in public space. When I hear a company say "we're Agile, we are doing REST" etc. I just roll my eyes and think to myself: unlikely.
Over the years I figured out the right way to use all of these things: as resources with solutions to frequent problems.
So no, I will not preach GoF or DDD or CQRS or Agile or REST or anything else, but I will make sure I understood each one very well on many levels and apply the lessons to my projects.
Wow, just came up with fantastic interview question (just kidding).
But, yeah, this is a good point. I interview senior engineers (mostly Java) a lot and I work with a lot of people like managers, tech leads, architects and senior engineers and I have the feeling that almost nobody has ever actually read any of the things that they are talking about.
I actually have the fricking book on my bookshelf. The first paragraph on the first page of the book starts:
"This book isn't an introduction to object-oriented technology or design. Many books already do a good job of that. This book assumes you are reasonably proficient in at least one object-oriented programming language, and you should have some experience in object-oriented design as well."
At no point it claims to be any kind of software development handbook or complete set of patterns or teaching fundamentals of anything. It is just a collection of "hey guys, see, we figured out this might be useful for ya!".
At a former workplace, someone had a great idea to collect and aggregate the logs of our software from across our test labs. At the time, the idea was a bit novel for a bunch of programmers and testers who shipped and didn't run their own software. The plan was to drive elimination of commonly logged errors through bug fixes or reducing the severity of the messages.
I was a lowly programmer, invited to a meeting of distinguished engineers and senior staff to review some of the findings from the log collections. When NPEs came up, their comment was that 'there should be zero NPEs' and that they would handle them punitively through reports to managers. So, I asked, "What's your recommendation to the developers to prevent the NPEs?". Silence, nothing. Findbugs existed at the time, and I believe JSR303 was in draft. Happy to report they decided to treat it the same as any other error or bug.
It amazes me that nullable references keep getting put into otherwise strongly statically typed languages. In 1965, it was completely understandable. By 1995, the harm should have been well understood.
I always wondered why enums in java were allowed to be null. Seemed like something they could have enforced. Could have just had some sort of 'default' attribute on a value.
Just trying to learn here, not being combative: couldn't the same compiler support have just enforced the non-nullness? At first you think, how do you enforce
someEnum = getAnEnumSomewhere()? But at some point there's got to be an original reference being hard-set to null that the compiler could see, right?
Also, I don't understand your point about universal defaults?
enum Foo { bar default, baz };
Why can't you guard against the default? How is checking a Foo enum against bar different from checking against null?
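For what it's worth, the behavior being discussed is easy to demonstrate: in current Java, any enum-typed reference can legally be null, and switching on it blows up at runtime. The `Foo` enum and the `describe` method below are illustrative, not from any real codebase:

```java
// Any enum-typed reference in Java admits null, and the compiler
// does not object; the failure only surfaces at runtime.
enum Foo { BAR, BAZ }

public class EnumNullDemo {
    // Switching on a null enum reference throws NullPointerException.
    static String describe(Foo f) {
        try {
            switch (f) {
                case BAR: return "bar";
                case BAZ: return "baz";
                default:  return "unknown";
            }
        } catch (NullPointerException e) {
            return "NPE: switched on a null enum";
        }
    }

    public static void main(String[] args) {
        Foo foo = null;                       // compiles without complaint
        System.out.println(describe(foo));    // NPE: switched on a null enum
        System.out.println(describe(Foo.BAR)); // bar
    }
}
```

A hypothetical `default` enum constant, as suggested above, would at least give the reference a well-defined value you could switch on, rather than a crash, though you'd still have to guard against it explicitly.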
Why this solution? Maybe there's an even better solution? We're following the various experiments in the area, and when we have a solution we believe is the right one, we'll provide it, but not sooner than that.
> Java has so many deficiencies in its design that so many frameworks are invented to cover its flaws.
Java the language has its flaws, but the frameworks make the situation worse.
> Just give a real Optional type at a language level. It’s clearly possible in other JVM languages.
It's not the presence of Optional that we need, it's the absence of nulls. And nulls are in all the libraries. And you also have to convince your fellow Java developers that null is bad.
This is perfectly legal code, where the optional wrapper itself is null:
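A minimal sketch of the kind of code being described (the method name `find` is illustrative): nothing stops a method whose return type is `Optional<T>` from returning null, at which point the wrapper's own API can no longer protect the caller.

```java
import java.util.Optional;

public class NullOptionalDemo {
    // Perfectly legal: the Optional reference itself is null.
    static Optional<String> find() {
        return null;    // compiles fine; the worst of both worlds
    }

    public static void main(String[] args) {
        Optional<String> result = find();
        try {
            // Any call on the wrapper NPEs before Optional can help.
            System.out.println(result.isPresent());
        } catch (NullPointerException e) {
            System.out.println("NPE: the Optional itself was null");
        }
    }
}
```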