
It's so sad that Java 8 had the chance to really fix the null problem, but gave us only the half-assed `java.util.Optional<T>`. Rather than implementing optional values at the language level, it's just another class tossed into the JRE.

This is perfectly legal code, where the optional wrapper itself is null:

    Optional<String> getMiddleName() {
        return null;
    }
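A caller that trusts that signature still blows up. A minimal sketch of the failure mode (class and method names are illustrative):

```java
import java.util.Optional;

class MiddleName {
    // The pathological implementation from above: the wrapper itself is null.
    static Optional<String> getMiddleName() {
        return null;
    }

    public static void main(String[] args) {
        try {
            // Looks null-safe, but dereferencing the null wrapper still NPEs.
            String name = getMiddleName().orElse("(none)");
            System.out.println(name);
        } catch (NullPointerException e) {
            System.out.println("NPE despite using Optional");
        }
    }
}
```

So the caller still has to null-check the Optional itself, which defeats the point.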


What do you mean by had a chance? That chance isn't gone. This area is under investigation, and when we have a solution we like, we'll implement it. We can't address all issues at once.

Optional isn't half-assed because it was never envisioned as a general solution to the null problem. It's simply an interface that's useful in streams and other similar cases for method return values, and it does the job it was intended to do reasonably well. It's possible that a solution to nulls in the type system could have obviated the need for Optional, but I don't think that delaying lambdas and streams until such a solution was found would have been a better decision.


I have definitely come to the conclusion, especially after doing a lot of work in Typescript, that class Optional is a big mistake, whether from the JDK or other libraries that preceded it.

First, because of exactly the type of code the parent commenter showed. I've actually seen this in production code (and shrieked). The fact is that without language-level support, you can end up with the worst of all worlds.

Second, like all things in Java along the lines of "why use 1 character when 10 will do?", the Optional syntax is verbose and annoying to use.

But, most importantly, the fundamental issue is that all classes are optional by default in Java (indeed, that's the problem at hand). Adding an Optional class doesn't really mean you can make any non-nullability assumptions about other classes, especially when using different libraries. The Optional class just added more complexity while simplifying very little.


> But, most importantly, the fundamental issue is that all classes are optional by default in Java (indeed, that's the problem at hand). Adding an Optional class doesn't really mean you can make any non-nullability assumptions about other classes, especially when using different libraries. The Optional class just added more complexity while simplifying very little.

When pron mentioned "this is actively being worked on", he meant exactly this problem. [1] [2] Java is currently working on changing the dynamics of the language such that classes can in fact be non-nullable. Last I checked in, this was the notion of `primitive` types being added in project Valhalla.

Optional just so happens to be one of the types in the JDK that will be converted over to being a `primitive` type.

Now, what that means for existing code is, I believe, what's being discussed. How do you handle

    public Optional<Foo> bar() { return null; }
?

We'll see; it might mean changing the code to something like this:

    public Optional<Foo>.val bar() { return Optional.empty(); }

which would guarantee that the optional type returned is in fact never null.

[1] https://github.com/openjdk/valhalla-docs/blob/main/site/desi...

[2] https://github.com/openjdk/valhalla-docs/blob/main/site/earl...


I miss the pre-Rust days. When Haskell was HN's cool pet language, and you had to obtain at least some vague familiarity with the term "monad" to understand half the discussion threads here.

I'm sorry, but I don't think you understand the purpose that "Optional" was intended to serve. And are unduly dismissive simply because it does not serve some larger purpose that was not intended.


I mean, having a monadic API is nice if it's strictly enforced across the ecosystem, culture, and type system. But with Optional potentially being null itself, it barely reduces the need to program defensively, and might in fact be worse. For example, when I used a lot of Scala in a past job, Java libraries were scary to use unless you defensively wrapped Java functions with Try/Option/etc.
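In Java itself that defensive wrap looks something like this (the null-returning lookup is a hypothetical stand-in for a typical library API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class Defensive {
    static final Map<String, String> users = new HashMap<>();

    // A typical pre-Optional Java API: returns null when the key is absent.
    static String findUser(String id) {
        return users.get(id);
    }

    public static void main(String[] args) {
        users.put("1", "alice");
        // ofNullable is the defensive boundary: null collapses to Optional.empty().
        Optional<String> present = Optional.ofNullable(findUser("1"));
        Optional<String> absent  = Optional.ofNullable(findUser("2"));
        System.out.println(present.orElse("?"));
        System.out.println(absent.orElse("?"));
    }
}
```

The wrap has to happen at every library boundary, which is exactly the chore the type system could have eliminated.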

Whereas with Haskell/Rust/OCaml/etc. you can largely trust type signatures to properly encode the existence of nullability or failure.


> I'm sorry, but I don't think you understand the purpose that "Optional" was intended to serve.

Lol, your response is the equivalent of the old SNL "IT guy" skits: "Silly programmer peasant, you don't even know what a monad is!"

Regardless of what you may think Optional was intended to serve, I have seen its use across large and varied code bases, and it simply does not make the cognitive burden easier for developers.

Again, look at the example given by the GP comment. Of course Optional isn't intended to be used that way. But the fact is, as the compiler doesn't prohibit it, it WILL get used that way, and it doesn't provide strong guarantees about the state of the variable.


Yeah the concept of an Optional type is just too complex for other less advanced beings than yourself to understand.

Having the Optional class be nullable is actually genius and makes perfect sense, they are just not bright enough to see it!


> But, most importantly, the fundamental issue is that all classes are optional by default in Java (indeed, that's the problem at hand). Adding an Optional class doesn't really mean you can make any non-nullability assumptions about other classes, especially when using different libraries.

Perhaps it's just me, but I don't equate optional with nullable. An optional value is just the designer specifying that you may or may not specify that value, while nullables are objects which may or may not have been initialized yet.

Even though nullables have been used and abused to represent optional values, I'm not sure it's a good idea to conflate both. It would be like in C++ equating shared pointers with optional types.


> Perhaps it's just me, but I don't equate optional with nullable.

But the main usage of Optional and similar types in mainstream languages is exactly that - making the potential nullness (is that a word?) of the value explicit in the type.


There are multiple ways to represent an optional value, for example:

1) an empty list/array

2) throwing an exception if the return value is missing

3) -1 as a sentinel value

Optional can be used as a better alternative in all these cases. It's not just null.
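For instance, a sentinel-returning lookup can be rewritten so that the "missing" case is visible in the signature (method names here are illustrative):

```java
import java.util.Optional;

class FirstIndex {
    // Sentinel style: -1 means "not found", but nothing in the type says so.
    static int indexOfSentinel(int[] xs, int target) {
        for (int i = 0; i < xs.length; i++) {
            if (xs[i] == target) return i;
        }
        return -1;
    }

    // Optional style: absence is part of the return type.
    static Optional<Integer> indexOf(int[] xs, int target) {
        for (int i = 0; i < xs.length; i++) {
            if (xs[i] == target) return Optional.of(i);
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        int[] xs = {5, 7, 9};
        System.out.println(indexOfSentinel(xs, 4)); // -1
        System.out.println(indexOf(xs, 9).orElse(-1)); // 2
    }
}
```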


It's like when people say Java had the chance to do generics right (like C#) and then didn't

Yeah technically tomorrow morning Java could fix it, but there have been kingdoms built on the current situation. C# took its lumps years back on breaking things and so there were fewer kingdoms to demolish. And if you fix it, it'll be years before it trickles down to a large portion of devs who don't get to work at the bleeding edge, or even near the edge.


First of all, Java's generics are already superior to C#'s. We've exchanged the minor inconvenience of not being able to have overloads that erase to the same type with the ability to support multiple variance strategies rather than one that's baked into the runtime. That's why Java has Clojure and Kotlin and Scala and Ruby running on top of it with such great interop. And, as it turns out, when specialisation is really important -- for Valhalla's value types -- then it's invariant and doesn't require demolishing kingdoms. We've also added virtual threads years after C# had async/await, and now they're the ones stuck with that inferior solution, which will remain in the ecosystem even if and when they add user-mode threads.

But in this case, I don't see what additional kingdoms there would be to demolish. No Java code employs nullability types now just as it didn't in 2014. Optional does the limited job it needs to do rather well, and will become even better when we extend pattern matching, so there would be no need to demolish it even if and when we have some solution to nulls. As you can see here (https://docs.oracle.com/en/java/javase/19/docs/api/java.base...) it's used sparingly and in specific situations, definitely not as a general null replacement.


"Java generics is superior to C#" - That's a first for me. C# generics don't have to do boxing of types like java does, and overall have much better type safety between library boundaries, etc...


The guys from Scala.NET did mention .NET generics as one of the reasons they gave up on porting Scala to .NET.

That is also a reason why .NET needed the whole effort to create DLR and the dynamic keyword, while Java only needed to add one additional bytecode invokedynamic.


.NET has F# and Clojure so not convinced by this


It only reveals how little you know about type system differences between Scala, F# and Clojure.


C# also declined to make the mistake of type erasure.


> First of all, Java's generics are already superior to C#'s. We've exchanged the minor inconvenience of not being able to have overloads that erase to the same type with the ability to support multiple variance strategies rather than one that's baked into the runtime.

Working on the JDK I'm sure you're aware just how revisionist that take is: we got the version of generics that landed because of backwards compatibility concerns.

One of the key selling points going from Pizza to GJ was that the result ran on then current JVM targets without modification.

https://jcp.org/en/jsr/detail?id=14

> C1) Upward compatibility with existing code. Pre-existing code must work on the new system. This implies not only upward compatibility of the class file format, but also interoperability of old applications with parameterized versions of pre-existing libraries, in particular those used in the platform library and in standard extensions.

> C2) Upward source compatibility. It should be possible to compile essentially all existing Java language programs with the new system.

Interop came at the expense of Java (not the JVM) having a worse generics story, since the incredibly onerous constraint of "works with parameterized versions of pre-existing libraries" forced other languages to then go and reinvent the exact same wheel in different ways.

-

> We've also added virtual threads years after C# had async/await, and now they're the ones stuck with that inferior solution, which will remain in the ecosystem even if and when they add user-mode threads.

"Kingdoms to demolish" refers to changing generics, but there are "kingdoms" have been built that wouldn't be demolished: they still represent a lot of throwaway work from putting off attacking the problem. Case in point, the post we're commenting under.

Nullability specifically is much more subject to the part you left out: "And if you fix it, it'll be years before it trickles down to a large portion of devs who don't get to work at the bleeding edge, or even near the edge."

-

At the end of the day it's always been a tradeoff between backwards compatibility, timeliness, and correctness, but when I switch between Java specifically and C# specifically (not the JVM and the CLR) I find C# did an amazing job providing value upfront rather than hand wringing, and is not nearly as worse off for it as this comment would imply.

I mean you're looking down on async/await, but it came out a decade ago while Loom is just landing. I guess you're trying to paint that decade of development as "weighing down" whatever .NET comes out with because it'll still exist, but that seems like a stretch at best.

Java had no async/await story and will get a very good alternative. C# had an async/await story and will get an improved alternative (it's already in the experimentation stage according to the .NET team). I'd just much rather have the latter?


> we got the version of generics that landed because of backwards compatibility concerns.

We got the version of generics we did because of the need for two languages with different variance strategies -- Java 1.4 and Java 5 -- to be compatible, which precluded baking a particular variance strategy into the runtime. It is true that the goal at the time wasn't to support, say, Scala or Clojure or Ruby specifically, but why does it matter?

> forced other languages to then go and reinvent the exact same wheel in different ways.

I don't think so. Both untyped languages like Clojure and Ruby, as well as existing typed languages such as Haskell, have been ported to the Java platform. Erasure, BTW, is a pretty standard strategy. Even Haskell does not distinguish at runtime between List Int and List String.
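The erasure being discussed is easy to observe from Java itself; at runtime both parameterizations are the same class:

```java
import java.util.ArrayList;
import java.util.List;

class Erasure {
    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        List<String> strings = new ArrayList<>();
        // After erasure both are plain ArrayList at runtime;
        // the type arguments exist only at compile time.
        System.out.println(ints.getClass() == strings.getClass()); // true
    }
}
```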

> Nullability specifically is much more subject to the part you left out: "And if you fix it, it'll be years before it trickles down to a large portion of devs who don't get to work at the bleeding edge, or even near the edge."

So what? Java is likely to be one of the world's most popular languages 20 years from now. Who would care if we wait to do things right, especially as that approach has worked very well for Java so far? Who cares now that lambdas were in Java 8 and not Java 5?

As for C# vs Java, there's no doubt some developers prefer one over the other -- for some it's Java, for others C# -- but I see absolutely no reason for Java to adopt C#'s strategy. Even if you don't think it's any worse, it certainly hasn't proven to be better. Those who prefer C#'s approach already use either it or Kotlin, so we've got them covered on the Java platform, too.


> Erasure, BTW, is a pretty standard strategy. Even Haskell does not distinguish at runtime between List Int and List String.

Haskell doesn't do overloading and reflection, so for Haskell type erasure is a pure implementation detail. It doesn't leak out to the programmer.


Yes, which is why in Java the cost isn't free but it's still rather low. Of course, the calculus has changed due to memory hierarchies, which is why Valhalla brings with it specialisation for value types (that outweighs the benefits of erasure for those types, but they're not variant, so it doesn't require baking a variance model into the VM).


> First of all, Java's generics are already superior to C#'s.

That's... highly controversial to say the least. You didn't just lose the ability to overload List<String> and List<Integer>. You've also lost run-time type information and type-safety when doing reflection. And reflection-based metaprogramming cannot be ignored in Java (and to a lesser degree in Kotlin): So much is based on it!
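A sketch of how that leak surfaces: an unchecked cast compiles with only a warning, and the failure shows up far from the cause, at the read site:

```java
import java.util.ArrayList;
import java.util.List;

class ErasureLeak {
    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        ints.add(42);
        // Erasure means this unchecked cast succeeds at runtime...
        @SuppressWarnings("unchecked")
        List<String> strings = (List<String>) (List<?>) ints;
        try {
            // ...and the ClassCastException only fires when an element is read,
            // because the compiler inserts a checkcast at the use site.
            String s = strings.get(0);
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException at the read site");
        }
    }
}
```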

It also makes interfacing with primitives (especially arrays of primitives) quite messy. And you've also lost some possibilities for JIT optimizations, but I'm not sure if C# does them anyway and how much they really matter - so that part may be minor indeed.

Allowing different variance strategies could be a boon for a language-agnostic runtime, but, as other comments mentioned, that's just rewriting history. The JVM wasn't built to be a language-agnostic runtime. The truth is thoroughly documented. It started with a new Java-like language, Pizza, that had to run on the then-current JVM, and continued with a fork of Java called Generic Java. These attempts were independent at first, so they could not change the JVM. Type erasure was chosen because it was the best fit for the existing Java model and had the best performance with the JVM as-it-was[1].

In hindsight it turned out to make writing JVM languages with different variance strategies easier, but it's not a superior design, just a coincidence. Although Martin Odersky was behind both Pizza and Scala, he did not mention variance as a concern back in the 1990s when he chose type erasure. Let's not try to rationalize a JVM-compatibility decision that was made by a team outside Sun as some great insight by "the brilliant engineers at Sun".

At the end of the day, you can do type erasure just as well on .NET when you need it, the same way you'd do it in Java. Nothing prevents a CLR language from compiling List<T> as List<Object>.

> We've also added virtual threads years after C# had async/await, and now they're the ones stuck with that inferior solution, which we'll remain in the ecosystem even if and when they add user-mode threads.

Virtual threads are not inherently superior or inferior to the async/await model. It's the old stackful-vs-stackless-coroutine, or implicit-vs-explicit concurrency, debate. I've already written about it multiple times in the past[2]:

Stackful pros:

* No "colored functions": I can write asynchronous I/O code the same way I write multi-threaded code or non-blocking code. It's definitely more ergonomic, but also means your code can hold more surprises for you. Not everybody thinks 'Colored Functions' are an absolute advantage, otherwise we wouldn't have Effect Typing.

* Fewer GC allocations: the stack is [re-]allocated in bulk every time a lightweight thread starts (go statement) or whenever it needs to grow. Stackless coroutines typically allocate a smaller object on every call to an async function.

Stackless pros:

* Clear yield points: you always know the points where your coroutine could suspend and control would be moved to another coroutine: whenever you see an await statement. This is why function colors are needed - to be explicit about the control flow.

* Less memory waste and less overall allocation size: the "compile-time stacks" generated by stackless coroutines are perfectly efficient. There is no unnecessary memory allocated.

I personally have a strong preference for explicitly marking concurrency, just like I would love to distinguish pure function and functions that have outside effects: I think Effect Typing is generally a good thing. Java also went its own way with explicit and mandatory effect marking when it comes to checked exceptions. Unfortunately, it never really got composability right, and using checked exceptions is simply too verbose - but Rust also went for explicit error type signatures, and its solution is generally well-received.

[1]: https://dl.acm.org/doi/abs/10.5555/647373.724066

[2]: https://news.ycombinator.com/item?id=24361590


> And reflection-based metaprogramming cannot be ignored in Java (and to a lesser degree in Kotlin): So much is based on it!

Its prevalence has been going down, and we're trying to encourage that trend.

> Allowing different variance strategies could be a boon for a language-agonstic runtime, but, as other comments mentioned, that's just rewriting history.

I don't see how this matters, and I never claimed that supporting, say, Clojure was the intent, but the design was very specifically to allow for multiple variance strategies (because Java 1.4 and Java 5 were two languages with different variance strategies).

> Let's not try to rationalize a JVM-compatibility decision that was done by a team outside Sun as some great insight by "the brilliant engineers at Sun".

I don't think I was doing that, but on the other hand let's not discount three decades and hundreds of decisions that have made and kept Java one of the biggest and longest-lived successes in software history. Call it lucky if you want, but they/we have been consistently lucky for three decades now. During that time, MS have made huge breaking changes to their platform at least three times.

> Nothing prevents a CLR language from compiling List<T> as List<Object>.

And Java could specialise just as Ceylon did. Erasure ended up working so well for Java because the main language on the platform did it. It is that, not some hypothetical ability, that was able to support Clojure and Kotlin well.

> Virtual threads are not inherently superior or inferior to the async/await mode. It's the old stackful coroutine vs. stackless coroutine or implicit-vs-explicit concurrency debate.

Actually, it isn't. User-mode threads might not be superior to syntactic stackless coroutines in every language and on every platform (and certainly not for every use-case), but they are in Java (and could be in .NET). For one, we are now close to achieving zero memory waste in virtual threads compared to async (will be released in a year or so); in fact, we already allocate fewer objects than Kotlin's stackless coroutines [1]. For another, while it is true that the less composable cooperative model (i.e. "clear scheduling points") has some advantages in languages like JS (where a non-cooperative model would break pretty much all existing code), Java and C# already have the non-cooperative model built into the language. It's a question of having two semantically similar yet syntactically incompatible concurrency models or just one.

> Not everybody thinks colored functions are an absolute disadvantage, otherwise we wouldn't have effect typing.

I'm not a big believer in effect systems in general, for the simple reason that they've yet to show significant tangible bottom-line benefits (although we'll wait and see). But they're clearly wrong for coroutines in languages that already allow side-effects, like Java. The semantics -- in the precise Hoare-triplet sense -- are identical to regular mutation, and they're less composable. So they introduce an additional incompatible syntactic world with identical semantics, which makes them, at best, a redundant overhead.

> Unfortunately, it never really got composability right, and using checked exceptions is simply too verbose

You are absolutely correct there, but I believe we can fix that, and quite elegantly.

Now, I don't want to give the impression that I think Java is a perfect language or platform; I don't. Mistakes were made along the way, especially early on. But I do think we've managed to do better than others. Python is as popular as Java or perhaps even more so, but it didn't manage to preserve compatibility; .NET never achieved Java's popularity while also failing to preserve compatibility (and several times). The only other languages with a similar backward compatibility story alongside long-lived super-popularity are C and JS, both of which are famous for having a small or nonexistent standard library. So things could have hypothetically been better, but to date no one has actually managed to do better, or even as well.

[1]: The perfect memory efficiency of stackless coroutines is only achievable when you disallow virtual calls and recursion -- as Rust does, although it pays for that dearly -- but neither C# nor Kotlin can or wants to do that.


P.S.

Re "I believe we can fix that, and quite elegantly" -- I believe erasure may well come to the rescue there once again.


Every time this argument comes up I get a sudden urge to eat refried beans for some reason.


> Optional isn't half-assed because it was never envisioned as a general solution to the null problem. It's simply an interface that's useful in streams and other similar cases for method return values, and it does the job it was intended to do reasonably well.

Regardless of the intent, once it got into the hands of enterprisey java devs, it immediately got used in two ways:

Tri-value nulls: treating null, None and Some in different ways.

Annotation hell: spam nullabilty annotations like @Nonnull Optional haphazardly everywhere with no semblance of soundness in sight.

When actually used as intended (by the 90th-percentile coders on the team), it just bludgeons the GC to death, because the JIT doesn't always do the right thing and elide the extra heap indirection.

They’re almost as bad an idea as type erasure!


So then there is language-level Optional and library-level Optional?


If that happens, there would be language level nullability types that may supersede Optional if they indeed end up solving a strictly more general problem. It's not unheard of to have more general solutions supersede older, more specific ones. For example, the newly added pattern-matching features largely obviate the need for visitors, which are a pretty common pattern in libraries; or virtual threads, which largely obviate the need for many asynchronous APIs (and there are opposite examples, such as j.u.c locks, which followed the language's locks). That's the nature of evolution: sometimes there are surviving vestiges of the past.
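Roughly the kind of supersession meant here, sketched with instanceof patterns (Java 16+) standing in for what previously needed a visitor interface plus accept() methods (the shape types are illustrative):

```java
class Shapes {
    interface Shape {}
    record Circle(double r) implements Shape {}
    record Square(double side) implements Shape {}

    // The whole visitor ceremony collapses into a single pattern match.
    static double area(Shape s) {
        if (s instanceof Circle c) return Math.PI * c.r() * c.r();
        if (s instanceof Square q) return q.side() * q.side();
        throw new IllegalArgumentException("unknown shape");
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3))); // 9.0
    }
}
```

Existing visitor-based APIs keep working; they just become the "surviving vestiges" the comment describes.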


This area is under investigation, and when we have a solution we like, we'll implement it

What's to investigate? Just copy what C# did and be done with it.


If you want all the C# features, just use C#. Java is a separate language with its own tradeoffs.


Yeah, I've heard this kind of stuff in the past. "Just copy async/await from C#, what are you waiting for?". And now C# is stuck with that thing, while we'll soon be getting the proper solution.

Thanks, I'd rather wait.


Can you point a couple of cases where Java got a better version?

Generics? Value Types? Lambdas? Null Safety? Async/Await?

Also, do you just assign zero value to everything those features provided in the interim? Async/await has provided ten years of value, and I'm guessing you mean Project Loom... so how much better than async/await does Loom have to be to justify an entire decade of just straight up missing that value, let alone having a better one?

-

I can't reply to comments anymore (thanks dang) but yeah...

Generics at the Java layer are worse; generics at the JVM layer are convenient if you're going to write a language targeting the JVM. That's just an artifact of the fact that the JSR for generics defined backwards compatibility as its top goal, not because they were trying to enable a friendlier runtime for languages that didn't exist yet.

I already addressed Project Loom: it's coming a decade after async/await, after a decade of a strictly worse concurrency story, so unless you're assigning zero value to a decade of having a better situation (which I'd say is disingenuous) I don't think it's a great example.

Also definitely not sure how Java Records are better than C#?


I can! Generics, virtual threads, and records. Java is unburdened with the prevalent and obsolete features of async/await, properties, and its generics allow a flourishing ecosystem in exchange for a minor inconvenience.


> Java is unburdened with the prevalent and obsolete features of async/await, properties, and its generics allow a flourishing ecosystem in exchange for a minor inconvenience.

I don't quite get the "unburdened" part. C# now has records as well[1], and it always had List<Object>. It may also get stackful coroutines as an alternative to stackless coroutines.

You could say that C# now has to deal with a lot of legacy code that's using properties and async/await and cannot be replaced with "superior" records and stackful coroutines[2]. That's all true, but Java hasn't been frozen in a vacuum for the last 20 and 10 years (respectively).

While C# programmers created mountains of legacy code with properties, Java developers churned out mountains of legacy code using "Beans" with getters and setters, auto-generated by the IDE or Lombok. That's strictly worse.

At the same time during the last 10 years, Java programmers that needed Async I/O didn't just sit around and wait for Project Loom. They wrote callback hells, and then moved on to CompletableFuture or a reactive programming style. And all this code tends to be a lot messier than Async/Await.
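The CompletableFuture style being referred to looks roughly like this (the service calls are hypothetical stand-ins):

```java
import java.util.concurrent.CompletableFuture;

class AsyncStyle {
    static CompletableFuture<String> fetchUser() {
        return CompletableFuture.supplyAsync(() -> "alice");
    }

    static CompletableFuture<Integer> fetchScore(String user) {
        return CompletableFuture.supplyAsync(() -> user.length());
    }

    public static void main(String[] args) {
        // Composition happens through combinators rather than plain control flow;
        // on a virtual thread the same logic would be two ordinary blocking calls.
        String result = fetchUser()
                .thenCompose(AsyncStyle::fetchScore)
                .thenApply(score -> "score=" + score)
                .join();
        System.out.println(result); // score=5
    }
}
```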

In short, real world Java developers are just as burdened as C# developers by legacy solutions that predate virtual threads and records, and the Java legacy solutions are usually quite a lot uglier.

[1] Although I dislike their decision of permitting mutability, their records are at least useful right now, since we don't have to wait another 5 years to get with-expressions.

[2] I completely agree with you that immutable equality-based types are superior, and that mutable data types probably shouldn't be baked in the language. But I disagree with the stance that stackful coroutines are superior.


> I don't quite get the "unburdened" part.

Then let me explain. Some programmers (~10% by my educated guess) like languages with lots of features, but most don't; really don't. It seems that being able to serve as a first language is a prerequisite for being super-popular (and Java, Python and JS serve in that role), and having more features is a big hindrance to that. We are more concerned with Java having too many features than too few (and teachers complain about that).

Every additional significant language feature has a cost, and often for every experienced developer it gains, it loses several beginners. The trick is balancing the benefit and the cost, and in Java we do this by adding features slowly, and trying to pick only the ones that give a lot of bang for the buck (which means we often add features relatively late, after they've proven themselves elsewhere). This strategy -- originally laid out by James Gosling in the mid-90s -- works, and no other strategy has so far proven to work better.

> That's strictly worse.

It really isn't for languages seeking to be at the top for the reason I just explained. But even if you think there's some subjectivity there, the objective fact is that no language with a strategy of adding features quickly has performed better in the market than Java. The only languages in the same tier have even fewer features. So there is no objectively measurable metric by which it is worse: languages that add more features more quickly don't fare better (in fact, they seem to fare worse), and no significant differences in productivity or correctness have been found (despite trying).

> and the Java legacy solutions are usually quite a lot uglier

I think they're usually quite a lot nicer, but people have different preferences.

> But I disagree with the stance that stackful coroutines are superior.

Okay. There aren't many things all developers agree on.


> Then let me explain. Some programmers (~10% by my educated guess) like languages with lots of features, but most don't; really don't. It seems that being able to serve as a first language is a prerequisite for being super-popular (and Java, Python and JS serve in that role), and having more features is a big hindrance to that. We are more concerned with Java having too many features than too few (and teachers complain about that).

Programmers want to deliver working code. If that code needs to be asynchronous and handle data, they'll use whatever is easy, acceptable and available to use with their language. That "feature" may be provided by the language, by a compiler, by IDE code generation, by a pre-processor, by a post-processor, by runtime code generation or by a library that is using some idioms.

Most programmers (my educated guess is 80%) don't care very much how they get that done, as long as they can quickly learn it, get stuff done, and keep their managers and co-workers happy. Programmers need to go through a learning curve to learn new features, whether they are library features or language features. A language can "cheat" a little by keeping itself lean and mean and delegating data class creation to IDEs and annotation processors, while letting libraries implement concurrency and even coroutines (see Quasar for Java and Greenlet for Python).

This doesn't remove any complexity for the programmer dealing with these languages. Sometimes it can make the programmer's life even more miserable, since there are multiple competing methods to choose from.

Having led programmer teams working in both C# and Java, I can clearly say that C# programmers are less confused by data classes and concurrency issues. I wouldn't say C# is perfect (C# properties are messy and their async/await implementation is pioneering, but somewhat confusing). I don't like all features that get added to C# either, and I sometimes think Microsoft could exhibit more care here. But I feel less "burdened" when we have to use C# (or Kotlin, Go or to certain degree Rust) for concurrency and asynchronous I/O rather than using Java and JavaScript, which have way too many historical ways of doing the same thing.

I think you're thinking too much from the perspective of a _language developer_. Language developers like less complexity in their languages, and that's totally understandable. But developers don't care so much about that, and despite what you say, I don't think there's good evidence that this is what makes a language extremely successful. The process of a language becoming successful is a lot more complicated than that and I doubt if it can be easily described. After all, the historical data we've got is too sparse. But if we look at languages that did become successful before and after Java, most of them were quite complex compared to their competitors:

PL/I was considered extremely complex and bloated in its time, but it became popular nevertheless, since it had strong corporate backing from IBM. The very same company, incidentally, was one of the largest corporate backers of Java since its early days.

While C was arguably almost as lean as Pascal (if not as easy to learn), it barely got any foothold in the world where Java later won so decisively: enterprise line-of-business software. These programs tended to be written mostly in COBOL and PL/I on mainframes, and on the PC developers used whatever was available, including assembly. Smalltalk (not a "simple" language for its time, considering the _entire_ package you'd get) also had a short heyday. But by the time Java came out, the world had settled on C++. Sure, C++ back then was less than the shining paragon of feature creep that it is today, but it was still a rather complex language. The various dialects of object-oriented Pascal were simpler. Modula-3 was simpler. Smalltalk was probably simpler too. But C++ won.

In the defense industry during the 1980s, Ada was queen. The reasons for this are really simple: this was often what the DoD mandated. Just like many corporate IT departments mandated Java years later. Anecdotally, all Ada programmers I met loved the language back then. They didn't lament the massive amount of features (for the time).

Embedded languages are also an interesting case. Small languages like Lua are objectively easier to embed, and you'd think they'd win decisively. Lua is quite popular as an embedded language, but it seems to be on par with Python and losing to JavaScript.

Why did JavaScript become so popular? It's certainly not a simple language. There are many complex features that are not used very often, like proxies, generators and all the various Object functions. There are often several layers of historical cruft or different solution strategies, such as prototypes vs. Object.create() vs. classes. Or "Error(...)" vs. "new Error(...)", or var vs. let. Or this-binding arrow functions vs. traditional anonymous functions and Function.prototype.bind(). The equality rules and type coercion are famously bonkers[1]. Even from its very inception, JavaScript was more complex than necessary, and made some decisions that kept annoying generations of developers (e.g. vars and prototypes). And it still won, since Brendan Eich got it included in Netscape and Microsoft rushed to copy it with (the frustratingly, randomly incompatible) JScript. JavaScript never made developers happy, and it still has a certain type of fatigue named after it.

> the objective fact is that no language with a strategy of adding features quickly has performed better in the market than Java

This is deductive reasoning from a single datum, where there are many other social and technological factors that could determine language success. The fact is that before Java, the most successful languages - especially in the same industry Java rules most strongly today (business programs) - were rather complex: COBOL, PL/I, C++. And to the best of my knowledge, they all had a strategy of indiscriminately adding features at the language level.

I fail to see any strong evidence for your claim, while there is some anecdotal evidence to the contrary.

> > But I disagree with the stance that stackful coroutines are superior.

> Okay. There aren't many things all developers agree on.

That's great, but then it's just opinion. I personally prefer stackless coroutines, but I won't go ahead and say they're strictly superior to stackful coroutines. There's a reason many languages still chose to go with stackless coroutines even though stackful coroutines are far from new. Swift added async/await quite recently, and I'm pretty sure Apple was fully aware of the work done on Project Loom, Go, Lua, Erlang and many others. Microsoft could very well have chosen stackful coroutines for C# and implemented them to be transparent, like virtual threads. gevent was doing the same thing (in a more primitive way) in Python, after all. It's not like stackful coroutines are a new kind of technology we had to sit and wait for years to research properly.

[1] https://www.destroyallsoftware.com/talks/wat


I understand your perspective though I disagree with some of the observations, but I still don't see why any of this should convince us to switch strategies for Java given the following two facts: no other strategy has proven to work better, and claims about added productivity/correctness could not be established. So this all comes down to you liking different things, but that some developers want the opposite of others is a given. No matter what we do, there will be people trying to get us to do the opposite. So even coming to this with a blank slate and knowing nothing about programming, it would be irrational for us to change our strategy.

> This is deductive reasoning from a single datum

I don't think that's true, but even if it were, we don't need evidence to support us not changing our strategy; we need evidence to convince us that we should.

> especially in the same industry Java rules most strongly today (business programs) - have been rather complex: COBOL, PL/I, C++

I disagree. C was still more popular than C++ in the nineties, and compared to the average of the time, COBOL was simpler than C#. But even that doesn't matter. Teachers are telling us that if we increase the pace, even more of them would switch to teaching Python.

> It's not like stackful coroutines are a new kind of technology we had to sit and wait for years to research properly.

It's not as simple as that. Implementing user-mode threads is drastically more difficult than async/await (and requires deep compiler backend and runtime changes). Making them competitively efficient also requires a good GC. The .NET team said that they weren't sure that adding user-mode threads in a compatible way is possible in a large existing ecosystem, and they have some bigger challenges than Java (such as pointers into the stack). Languages that target LLVM have additional challenges. The idea is very well known, but good implementations are much harder than async/await, especially in established languages.


> I understand your perspective though I disagree with some of the observations, but I still don't see why any of this should convince us to switch strategies for Java [...]

I'm not really arguing for that. I'm really not advocating any school of language design here. I'm just saying that from the point of view of _the users_ the actual purity of the core language matters less than the state of the ecosystem.

I'm perfectly happy with the current course of Java, especially post Java 8. It just isn't for me, and I wouldn't use it or recommend using it for most types of software I'm dealing with. I can say exactly the same for C++.

There's a place for conservative and highly selective languages like Java, C and Go. As a CTO or a tech lead I generally tend to avoid these languages and I don't think they work great for my organization, but they might work out for other people. It's good to have choice. And it goes both ways: if you need to stay within the JVM and you're not satisfied with Java, you've got Scala, Kotlin and Clojure. Between them, they cover a lot of ground and have very different strategies for adding features.

The most conservative users of Java have their choice though. They don't care very much about the absolute _size_ of the language, but they very much dislike change. And for these businesses, the current pace Java is moving at is still too fast. That's why we still have a great deal of businesses out there running Java 8 with outdated frameworks and libraries. But they're not unhappy with Java, since they could always stay at an older version - and that's exactly what they do.

> I disagree. C was still more popular than C++ in the nineties

I was talking specifically about the industry which is now dominated by Java: business software. C++ strongly dominated two industries throughout the 90s: post-mainframe business software of all kinds and desktop GUI software. C never came close to dominating business software. In the early days of the desktop these were often written in Basic or Pascal, and by the 1990s C++ became the standard object-oriented language, although most of its competitors were simpler. C was probably dominant in early desktop GUI software, but was completely replaced[1] by C++ and other languages by the mid-1990s. C was still dominant in games throughout the 1990s, although C++ would come to dominate this industry later on.

> and compared to the average of the time, COBOL was simpler than C#.

I disagree that COBOL was simpler than earlier languages like FORTRAN or LISP, or contemporaries like ALGOL 60, but it's probably a moot point. COBOL was not a general-purpose language at that point, and its structure was quite unique.

> But even that doesn't matter. Teachers are telling us that if we increase the pace, even more of them would switch to teaching Python.

I'm a bit surprised by this sentiment, since Python has been steadily adding major features since its very inception, while Java only increased its pace rather recently, starting with Java 9. Python started gaining popularity as a teaching language during the period it was moving faster than Java.

> It's not as simple as that. Implementing user-mode threads is drastically more difficult than async/await.

I completely agree with you here. This must have been a major part of the .NET team's rationale. But they could have undertaken a multi-year project to implement it like you did. The end-result probably would have been the same: a variety of community-based projects that try to bring async I/O and M:N concurrency to the language.

[1] Outside of small holdouts like Gtk and Carbon I guess.


When we ask companies why they picked Java, they mention backward compatibility; a good combination of performance, productivity and observability; and sustained popularity, which means a large ecosystem and hiring pool and is a good predictor of future popularity. Together with backward compatibility, that means there's a good chance that their investment will be preserved.

Unlike runtime features, such as great GC performance and observability tools, specific language features or lack thereof don't usually come up, so I don't know if that has a direct or an indirect effect, and it's certainly possible that had the Java language been less conservative it would have achieved similar success. It's just that no one has ever managed to do that, so there's no reason to change course. (We can argue over C++, but its super-popularity was very short-lived, certainly compared to Java)

A direct effect that we do know about is teaching. Teachers do tell us that they don't like teaching rich languages as a first language. I don't think that the absolute pace matters so much, but rather the complexity of the language compared to alternatives. Teachers pick from a (very small) selection of relevant languages, and language simplicity is one of the important factors (so they tell us). In particular, those who pick Python always mention two factors: its simplicity compared to Java and the ease of getting started, which is why we'll be trying to address that second factor.

> And for these businesses, the current pace Java is moving at is still too fast.

The concrete issue is actually the difficult migration from 8 to 9+, which happened because Java lacked strong encapsulation until JDK 16, libraries depended on JDK internals, and those internals changed with changes to the runtime and libraries. Libraries quickly updated, but some did so with breaking changes of their own, which meant that old products needed a bit of work to upgrade, and some of them didn't have sufficient personnel. They are a minority now, though. This would have happened even with no changes to the language (and there weren't many in 9).

> while Java only increased its pace rather recently, starting with Java 9

More precisely, it returned to its former pace after years of relative stagnation due to diminished resources. Although it may appear faster because partial language features are trickling in every six months rather than in a more complete form every three years. We're trying not to exceed that original pace, and yes, we are selective and conservative, in line with Gosling's original strategy of "a wolf in sheep's clothing" (an innovative runtime wrapped in a conservative language).


.NET's async is far from great, but Loom/vthreads is an impressive achievement in distilling all of the mistakes that .NET made, while learning from none of the good ideas that it (or futures/promises/async/await) had.


I guess he's talking about project loom (fibers, go style concurrency) which IMHO is a much better solution than async/await, but yeah... took some 10 years to arrive.


During the last 10 years, the Java ecosystem has heavily invested in reactive APIs, so it's not as if Java devs have no options.

And IMO, the semantics of any reactive API are better than just providing await.

Await serializes the async calls instead of running them concurrently. And too few C# devs are aware of/using the Task API.
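Since this is a Java thread, that point can be sketched with CompletableFuture (Java's rough analogue of C#'s Task). The `fetch` helper and its delays are invented for illustration: joining each call before starting the next serializes them, while creating both futures first and then combining them runs the calls concurrently.

```java
import java.util.concurrent.CompletableFuture;

public class AwaitVsCombine {
    // Hypothetical async call: completes with `value` after `delayMs`.
    static CompletableFuture<Integer> fetch(int value, long delayMs) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(delayMs);
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
            return value;
        });
    }

    public static void main(String[] args) {
        // Serialized: the second call doesn't start until the first finishes.
        long t0 = System.nanoTime();
        int a = fetch(1, 100).join();
        int b = fetch(2, 100).join();
        long serialMs = (System.nanoTime() - t0) / 1_000_000; // ~200ms

        // Concurrent: both calls are in flight before we wait on either.
        long t1 = System.nanoTime();
        CompletableFuture<Integer> fa = fetch(1, 100);
        CompletableFuture<Integer> fb = fetch(2, 100);
        int sum = fa.thenCombine(fb, Integer::sum).join();
        long concurrentMs = (System.nanoTime() - t1) / 1_000_000; // ~100ms

        System.out.println("serial=" + serialMs + "ms, concurrent=" + concurrentMs
                + "ms, sums=" + (a + b) + "/" + sum);
    }
}
```

The same trap exists in C# and JS: `await f(); await g();` runs serially, while combining the tasks before awaiting (`Task.WhenAll` / `Promise.all`) runs them concurrently.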


And it saved millions of developer minds from having to deal with mind-bending async/await.

Thanks, I prefer to wait.


Do-notation is much simpler than callback hell


You could have had the Loom experience 20 years ago by just spawning OS threads. Of course, there's a reason that this was discouraged... threads quickly turn into a nightmare to manage safely, especially when they need to interact.
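That blocking, thread-per-task style looks something like this (a sketch with invented work items). The catch is the ceiling on thread count; the promise of Loom is that on Java 21+ you can swap in `Executors.newVirtualThreadPerTaskExecutor()` and keep this exact code shape without the ceiling.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadPerTask {
    // Simulated blocking call (e.g. a network request).
    static int work(int n) throws InterruptedException {
        Thread.sleep(10);
        return n * n;
    }

    public static void main(String[] args) throws Exception {
        // With OS threads, the pool size is the scalability ceiling.
        // On Java 21+, Executors.newVirtualThreadPerTaskExecutor()
        // keeps this exact shape but removes that ceiling.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Callable<Integer>> tasks =
                    List.of(() -> work(1), () -> work(2), () -> work(3));
            int total = 0;
            for (Future<Integer> f : pool.invokeAll(tasks)) {
                total += f.get(); // plain blocking call, no callbacks
            }
            System.out.println(total); // 1 + 4 + 9 = 14
        } finally {
            pool.shutdown();
        }
    }
}
```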


Genuine question: how is managing cross-interacting virtual threads any different or easier than managing interacting threads? I ask this as someone greatly looking forward to using Loom in production. It's definitely the correct way to go as opposed to async/await.


It's not. That's the problem with the thread API that Loom is so dead set on preserving, and the big improvement that promises/async/await provide over threads.


How does async/await improve on managing mutable state across threads of execution (tasks/promises/etc?)


Structured concurrency.


The issue with OS threads is that their number is limited, not communication. Futures are good enough for that; if you need more, then structured concurrency.


I'd rather have the decade of increased productivity.


Increased productivity comes in various ways. A more popular language often has a better ecosystem that helps productivity, and adding lots of language features quickly is a hindrance to huge popularity. This might not be the case for many here, but most programmers prefer fewer features to more, and the most popular languages are also often those that can be taught as a first language, which also requires restraint. Also, languages that add features quickly are often those that add breaking changes more easily, which loses productivity.

So even if some language feature helps with some problem, the assumption that adding it ASAP is the best way to maximise the integral of productivity over time is not necessarily true. Language features also cost some productivity, and the challenge is finding the right balance. Java chooses to follow a strategy that has so far worked really well.


I know vastly more programmers who have abandoned Java than who have switched to it.

That’s not to say it failed. Java is one of the most popular languages, but it does suggest real issues with the current approach.


And you could probably say the same for JavaScript and Python, that, together with Java, make up the current topmost tier of super-popular programming languages, which is why I think your conclusion is wrong.

These three languages are often first languages (or, at least, first professional languages), which is a necessary (though insufficient) condition for being super-popular. All of these languages more than make up for the loss of programmers who prefer richer languages with the programmers for whom it's a first (professional) language. Different programmers indeed prefer different languages, but that doesn't mean that the preferences are evenly distributed. My rough personal estimate is a 90-10 split, where the 10% prefer more feature-rich, faster-moving languages. That's a very big minority, but Java addresses it by offering the Java language for the 90%, and supporting other languages on the Java platform for the 10%.

You can also see that while the market is becoming more fragmented, no language is currently threatening those top three (although TypeScript could threaten JS, I think), and no language is doing better than them. I.e. other strategies seem to be doing worse.

So knowing that for every X programmers you win you lose Y no matter what you do means that we try to carefully balance the evolution. I can tell you that we are more worried about teachers telling us that Java is getting too many features too quickly, and that it's harder to teach than Python (hence "Paving the On-Ramp" [1] and other planned features), than about the programmers asking for more features quicker. The former represent a much larger group than the latter. Also, moving more toward the latter group is both easier and less reversible, so it has to be done with great care.

[1]: https://openjdk.org/projects/amber/design-notes/on-ramp


This isn’t just me, the language’s popularity has been steadily falling from a peak in the mid 2000’s. You can quibble about specific numbers, but the overall trend seems to be languages peak and then slowly fade, and Java is following Fortran, Pascal, and C as languages that simply aren’t keeping up with the shifting demands of the modern workforce.

Ex: https://youtu.be/UNSoPa-XQN0


Java's mid-2000s peak was anomalous, not just for Java, but for any language. I can't think of a language (maybe C in the 80s?) ever dominating so much of the market. The market is now much more fragmented. What you should be asking is, which language is doing better? JS and Python are the only candidates. No single language is currently poised to take its place (although PHP and Ruby came closest), and no language outside that group of three (with the possible exception of TypeScript) seems to be threatening any of them.

So Java isn't as dominant as it was 20 years ago, but no one else is, either. And when you compare Java to the current landscape, you see that it's in a very enviable position, and it's as safe a bet today as it ever was.


Several languages had similar levels of dominance: Fortran was even more dominant in the late 1960s, then C had clear dominance in the early 90s, followed by Java in the 2000s.

It’s possible though harder to verify if Pascal reached that level and we might see Python hitting it soon.


> Several languages had similar levels of dominance.

I don't think Fortran was ever quite that dominant, but we can certainly agree that no language has been as dominant since Java.

> It’s possible though harder to verify if Pascal reached that level and we might see Python hitting it soon.

Not even close for either one of these. I was already programming in Pascal's heyday, and it was mostly used in education. It was never very popular in industry. Python is extremely popular, but many of its users are not professional programmers, and it isn't dominant in big server software at all. In the early 2000s, Java dominated servers, clients, and education. I don't think any language is even remotely approaching that today, but JS would be the closest (although still very far).

Java today is the #1 most popular server-side language, certainly for big software, and by a big margin. According to the best data we have [1][2], it's about 1.5x-2x more popular than C#, maintaining the same gap those two languages have had for about 15 years. It's about 7-15x more popular than Go.

So other than JS and Python, all other languages are doing worse than Java, with prospects that look significantly worse than Java's. No language is even coming close to threatening Java's position as PHP and Ruby once were. Node.JS looked like it could have for a while, but then it sank quickly; some thought Go might do it, but while it's certainly interesting and there's much we can learn from it, its growth has stalled.

Comparing Java to itself 20 years ago is unfair, because it was an unusual time for programming. But when you compare it to the competition today, you see that Java is doing spectacularly. I would like to see it taught more in schools, though, where Python has overtaken it.

[1]: https://www.devjobsscanner.com/blog/top-8-most-demanded-lang...

[2]: https://www.hiringlab.org/2019/11/19/todays-top-tech-skills/


The thing is even Java’s heyday was far more fragmented than Pascal’s.

Despite what HN suggests, server-side programming isn't that big. Android is currently a major component of Java's position; if you don't include Android then Java isn't the most popular language.

Java is practically non-existent on client-side web, systems programming, desktop applications, iOS, scientific computing, etc. So only something like 20% of programmers are using Java as their primary language, and it might have topped 35% at its peak.


> Despite what HN suggests, server-side programming isn't that big. Android is currently a major component of Java's position; if you don't include Android then Java isn't the most popular language.

Quite the opposite. If you look at the hiring labs data, iOS and Android combined make up less than 1/3 of the Java market alone. That's not surprising. There are lots of mobile apps, but they don't require that many hours of work.

> So only something like 20% of programmers are using Java as their primary language and it might have topped 35% at its peak.

But no other single language has better prospects, with only JS and Python in the same game. Java is not as super-dominant as it once was, but it is more dominant than almost any other language in existence. Everyone else is doing worse (or about as well in the case of Python and JS). Moreover, there currently aren't languages that are seriously threatening Java's position as PHP and Ruby (and maybe JS with Node.JS) once were.

The observation that the market is more fragmented than before with all languages commanding smaller portions than some did in the past is true. But if you're worried about Java's position, you need to be much more worried about, say, Go's.


The data I looked at suggests without Android Java falls behind JavaScript. Also, Python is starting to take over at CS schools which is usually a sign it’s going to be even more popular in the future.

I suspect Python is going to take over from Java fairly soon, even if it might not reach Java’s mid 2000 dominance. That said, Java never reached the dominance of C, and C never reached the dominance of Fortran so I don’t think such arbitrary benchmarks mean much. In absolute numbers we have far more programmers so the next dominant language is likely to surpass past peaks by that metric.


I don't know if Android matters that much (it's quite small), but regardless, as I've said several times, JS and Python are indeed the only languages with arguably better prospects than Java at the moment. Those who want Java to remain in that top tier should at least understand why, if we'd emulate anyone in any way, we'd try to emulate them rather than languages that are doing so much worse than Java.

But it's also good to remember that both JS and Python have their own issues, that are by no means smaller than Java's, and neither of them currently threatens Java's dominance on the server, and no one else does either (although PHP and Ruby did in the past).


I think you are jumping to your desired conclusion here. Even if we take your "vastly" at face value, there are so many reasons people switch languages, and problematic language evolution is not the only possible one.


What else would you suggest explains the falling popularity of Java from its mid-2000s peak?

Some of this is just fads, but I think languages tend to suit the time period when they are most popular. In 1990 C was a hugely dominant force because it suited the kinds of programs being written and the hardware available. IMO Java was a great compromise for late 90’s hardware, but different tradeoffs are becoming more useful resulting in Python’s rise.

Much of this could be fixed with a better tooling and an overhauled standard library, but basic language pitfalls are still a problem.


My perception:

There were waves. First wave of leaving Java was for RoR. Then there was a period where we had all these JVM languages, with Scala, JRuby, & finally Clojure. Then a bit later came Go, which is when I switched.

So why? First wave was due to the god-awful experience of building web apps on app servers. Remember how tedious that was? I forget the names, but there were so many frameworks. And of course, imo, things like JSPs and Spring also definitely played a part in motivating the search for more pleasant pastures. So this I would chalk up to the impedance mismatch between Java and the browser tech.

Second wave was more about 'language'. FP. DSLs. Rich Hickey! State is not identity! :)

Third wave was, imo, due to a more seismic shift that claimed more victims than just Java. This was the beginning of the NoSQL era, "simplifying", Redis! What a breath of fresh air. Again, not that Redis made people switch languages, but there was a shift in mindset as to how to build software. Java all of a sudden looked like the RDBMS's second cousin.

Then the cloud. JVM startup times. Memory footprint, etc. I stopped following Java's progress after 2008, but sense this was the period when the Java stewards finally were motivated to be more adventurous with new features. But in the meantime, Go ended up being the server side networking champ.

But now, with things like GraalVM, I'm actually excited to switch back to Java as my main lang again.

Concurrent with all this: fads, as you mention; the amplification of unseasoned voices via the blogosphere, which shifted mindshare; and just the basic human need to seek variety.


That’s fair. I would add that the Oracle acquisition of Sun and subsequent missteps shouldn’t be ignored. There was a huge wave of negative publicity and uncertainty, which played off the perception of Java being the new COBOL, with a helping of factory.factory_endless_boilerplate_word_salad.

Java’s fragmentation also didn’t help, as web technology kept evolving through Applets, Servlets, JavaBeans, Spring, JSF, etc. without letting people settle into something that just worked reasonably cleanly. The perception was always that there were tons of legacy options and sometimes multiple hot new fads, creating an endless treadmill where working on the same thing for 4 years left you behind the curve rather than in a productive environment.

By comparison, .NET benefited from the second-mover advantage. Embrace, Extend, Extinguish didn’t work, but uniformity brought its own advantages, and they could always copy and tweak something when it was clearly better.

Finally, there was a perception that the kind of companies using Java were exactly the kind of companies that would soon outsource jobs to India or just underpay and replace everyone with H-1Bs.


What is this argument exactly? We're arguing against the concept of features? Why are we talking hypothetically?

Async/await provides immediate user value. If people didn't like it they could just use threads in a Java-like style. While some dislike the feature, and others push back against the sour grapes, it's a popular feature found in many languages used by many developers.

Java is popular and so is C# and Javascript, so I can't see how we can draw any conclusions on async/await.


First let me say that since it's established that different programmers have different preferences, the fact that some programmers might prefer a different evolution strategy is no reason to change it, because that will always be true. A reason to change strategy is if some other one has fared better, and none has. The only languages that have arguably fared better than Java are Python and JS, and they have fewer, not more features. So there's simply no good reason to pick a different strategy that has not shown to fare better.

Now, when it comes to async/await, first let's take JS off the table, because JS has no other convenient concurrency feature, and it couldn't add threads because it would have broken most existing JS code (it could add isolates, but they're not sufficiently fine-grained, at least not in the JS world).

If Java had got async/await ten years ago, it would have been burdened with it for decades. It would have provided some value in those ten years, and extracted a cost forever after (albeit a gradually diminishing one). "Just don't use it" works fine for library features, but not so much for foundational language features, because programmers usually don't start with a blank slate, and don't pick the features in the codebase they work on. Therefore, all language features carry a cost, and they carry a higher cost when it comes to beginners, where this matters most.

It's hard to precisely describe what could have been, but I think most would agree that in those ten years Java didn't lose many developers to languages with async/await because they had async/await. It probably lost some developers to Python and JS for other reasons (say, Python is easier to get started with, and JS is easier for those who know it from the browser), and it didn't even lose that many people to Go (Python lost many more to Go than Java did). Considering all that, I think that Java's position in 2022 is better, now that it has virtual threads, than it would have been had it also had async/await (which would have likely also delayed virtual threads).

If I could go back in time knowing what I know now, I would have advocated against adding async/await ten years ago with even higher certainty. Back then I just believed there was a better way; now not only do I know there's a better way, but I also know that not adding async/await didn't cost us much, if anything.

Going back to the original topic, Java's primary competitors -- Python and JS -- also don't have a great solution for nulls. So while I would very much like to address this problem, I see no reason to change our game plan in favour of scrambling for a quick solution. We'll tackle this issue like we try to tackle all others: methodically and diligently, and preferably after we've had time to study how well various solutions work elsewhere.


I think Java really dropped the ball on UIs (and other paradigms where you need to pass data across threads) and this is partly because of the terrible threading hoops that need to be jumped for lack of async/await. Kotlin gained traction exactly because Java didn't solve this.

I think it's pretty depressing to hear that you're numb to this.

TBH, I don't think virtual threads even address this use case in a way that isn't just async/await but less sugared syntax. (Although I'm willing to be pleasantly surprised once new libraries pop up)


Not only am I not numb to this, Oracle is now renewing investment on the client. But client side programming does have a clear winner -- JS. Not only Java, but everyone else lost. But whatever Java does, it's clear that no single language is currently on a path to dominance across multiple domains. Java is dominant on the server, JS on the client, and Python in ML and smaller programs. When we compare Java to its historically anomalous dominance in the early oughts, its position today is no doubt worse. But when we compare it to how other languages are doing, no one else seems to be doing much better, and most are doing significantly worse.


I love this solution. Now you have two kinds of nulls.


I disagree. The problem with null, in my opinion, is that it is the default and can easily be created accidentally.

There's nothing inherently wrong with an "absence of value" value.

Optional.empty is not null; it's no-value.
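A quick sketch of that distinction in plain Java: Optional.empty() is a real object you can safely call methods on, while a null Optional reference blows up like any other null.

```java
import java.util.Optional;

public class EmptyVsNull {
    public static void main(String[] args) {
        Optional<String> empty = Optional.empty();
        // empty is a real object: calling methods on it is safe
        System.out.println(empty.isPresent());    // false
        System.out.println(empty.orElse("none")); // none

        Optional<String> broken = null; // legal, but defeats the point
        // broken.isPresent() would throw a NullPointerException
        System.out.println(broken == null);       // true
    }
}
```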


But Java has the worst of both worlds, because with an Optional you now have three cases -- Optional.empty, a present value, and null -- instead of just two.

In our code base this means we have the rule that Optional is only allowed for return types, so we aren't adding third cases all over the place.


I have never come across code that would pass null in place of an Optional. Why would anyone do that?


Because you don't always control where the code is being called from.

Other statically typed languages (C, C++, C#, TypeScript, Rust, Kotlin) usually let you enforce that as part of the type system, but the Java compiler will silently accept null in place of an Optional and throw an NPE at runtime.
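A minimal sketch of what's being described: javac happily accepts null where an Optional is expected, and the failure only surfaces when the method touches the wrapper.

```java
import java.util.Optional;

public class NullInsteadOfOptional {
    static int length(Optional<String> name) {
        // throws NullPointerException at runtime if name itself is null
        return name.map(String::length).orElse(0);
    }

    public static void main(String[] args) {
        System.out.println(length(Optional.of("Ada"))); // 3
        System.out.println(length(Optional.empty()));   // 0
        try {
            length(null); // compiles without complaint
        } catch (NullPointerException e) {
            System.out.println("NPE only at runtime");
        }
    }
}
```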


Pretty much every JSON parser in java does this by default. Why? Because they were all written before Optional existed.

I have to agree with the Optional hate. I loved it in Scala but after a few months on Kotlin I don't think I can go back.


Jackson does it well.

Any sane JSON library has to support newer Java idioms; if it doesn't, it's not worth using.


Just like JavaScript!


JavaScript also has Option via fp-ts, so it still leads on this front.


Isn't Object itself (and any class) already an optional at the language level?

Either it's null, so has no value, or it's not null so has a value, and you can check for null-ness. What more is needed?

Seems like the opposite, a guaranteed non-null value at core language level, would be novel instead...


Right, if only there was a language on the JVM that did that /s

It's really an eye-opener when you compare Kotlin and Scala, with all their superficial similarities. Where Kotlin simply takes the @Nonnull annotation, promotes it to a core language feature, and drowns it in convenient function-scope syntactic sugar until it's actually nice to use, Scala opts for Option and stacks layer upon layer of architecture trying to make Option somehow disappear. I lost half a decade holding on to plain Java, snobbishly dismissing Kotlin as a second-class Scala, before I finally got converted.


Don't people use @NonNull everywhere now? It's been a few years since I've programmed in Java but even then I feel like that was common practice.


Which @NonNull are you referring to, Lombok's? Or javax.annotation.NonNull, or something else?


“Unannotated types are considered not-nullable”

Defaulting to not-nullable is a great idea. Much less boilerplate.


Rather than opting into a slightly incompatible dialect for some but not all code, I would like an IDE that lets me specify what is @Nullable, and quietly inserts @NotNull everywhere else without displaying it. We can keep the boilerplate in the bytecode without rubbing our noses in it.


Depends on the code base; at the beginning you would prefer the opposite (unannotated types are nullable).


Yes, it's always been possible to check for nulls at runtime.

Personally I use notNull(..) over @NonNull, since it actually fires when you expect it to (as opposed to depending on whether your framework's dispatcher/interceptor decided to invoke the check).


I think he means using @NonNull for compile time checking, not instrumenting it for runtime checking (though that doesn’t hurt either).


Isn't @NonNull just syntactic sugar for adding a null check (e.g. Objects.requireNonNull()) as the first statement of the method at compile time, or am I mistaken? Lombok's version, at least, is supposed to generate code that checks for null arguments, from what I know.
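For Lombok specifically that's roughly right. A hand-written sketch of the kind of check the generated code performs, using the standard Objects.requireNonNull:

```java
import java.util.Objects;

public class NonNullCheck {
    // roughly what Lombok's @NonNull expands to: a runtime check at method entry
    static void bar(String param) {
        Objects.requireNonNull(param, "param is marked non-null but is null");
        System.out.println(param);
    }

    public static void main(String[] args) {
        bar("hello");
        try {
            bar(null);
        } catch (NullPointerException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

Note this is purely a runtime check; most other @NonNull annotations are inert markers that only static-analysis tools act on.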


I tried:

    @Test
    public void foo() {
        bar(null);
    }

    void bar(@NonNull String param) {
        System.out.println(param);
    }
All versions of that annotation let the null right through. I tried with:

    import lombok.NonNull;
    import javax.validation.constraints.NotNull;
    import io.micronaut.core.annotation.NonNull;
    import org.springframework.lang.NonNull;
    import reactor.util.annotation.NonNull;



@Nullable/@NotNull is great when the IDE shows the warnings, basically dev time checking. There are also tools to integrate it into your builds for compile time checking.


I get a little green tick in the top right of my IDE window with the following:

    @Test
    public void foo() {
        final Map<String, Integer> map = new HashMap<>();
        map.put("present", 1);
        bar(map.get("missing"));
    }

    void bar(@NotNull Integer param) {
        System.out.println(param);
    }
"No problems found"


Right, that's because the system libraries don't have the annotations. That's the biggest issue with it. But it still helps a lot if you're religious about it in your own code.


Do the heuristics used for Kotlin-interop work with notNull(...)?


Seems common, though it only helps so much with 3rd party libraries.


In my experience, it's mostly people forced to use Java, whose code is then consumed from Kotlin, who do that.


Scala had the same possibility, but the ecosystem of libraries used Option<T> appropriately so in practice I never thought about null. That might be harder in an older, larger ecosystem like Java...


imo it seemed like there was this phase of "if we pretend null doesn't exist maybe it will go away" that resulted in a bunch of design issues. Beyond Optional, the other big one to me is Map.compute/Map.merge, which subtly breaks the previous interface contract for Map. As annoying as null is, I'd rather have null than broken APIs.
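For what it's worth, a small sketch of the merge/compute null quirks being referred to: a remapping function returning null removes the entry, and merge rejects a null value argument outright, even though HashMap itself happily stores null values via put.

```java
import java.util.HashMap;
import java.util.Map;

public class MergeNulls {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        counts.merge("a", 1, Integer::sum);
        counts.merge("a", 1, Integer::sum);
        System.out.println(counts.get("a")); // 2

        // a remapping function returning null removes the mapping entirely
        counts.compute("a", (k, v) -> null);
        System.out.println(counts.containsKey("a")); // false

        // HashMap accepts null values via put...
        counts.put("b", null);
        System.out.println(counts.containsKey("b")); // true
        // ...but merge rejects a null value argument with an NPE
        try {
            counts.merge("b", null, Integer::sum);
        } catch (NullPointerException e) {
            System.out.println("merge(k, null, f) throws NPE");
        }
    }
}
```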


It may have been hoped for, but it was an unrealistic hope.

They couldn't have fixed it while retaining backward compatibility.


> They couldn't have fixed it while retaining backward compatibility.

Opt-ins exist. That's what .net ended up doing: by default the language uses "ubiquitous" nullability, but you can enable reference nullability support at the project or file level.

If you do that, references become non-nullable, and have to be explicitly marked as nullable when applicable.


The smart thing that C# has though is automatic conversion to non-nullable references, e.g.:

    void foo1(Bar bar)
    {
    ...
    }

    void foo2(Bar? bar)
    {
        if (bar != null)
            foo1(bar);   
    }
Which wouldn't compile if you didn't have the `if` condition.


I'm not a Java person but isn't this just very similar to std::optional in C++?

https://en.cppreference.com/w/cpp/utility/optional


No, because in C++ bare types are not nullable. In Java, reference types are nullable; since Optional is itself a reference type, Optional<T> can be null, empty, or have a value, whereas T can be null or have a value.

Also, std::optional does essentially the opposite. It's a more efficient _ptr rather than a safer one; Optional<T> is strictly less efficient, as it implies an additional allocation.


> Optional<T> can be null, Empty, or have a value, whereas T can be null or have a value

Thanks for the explanation. Wow, this sounds like a bit of a mess, both because of the allocation and because there are multiple ways something can be semantically null.


OK, so when Valhalla lands I assume Optional will become a value type (or we will get another class for that, to avoid backward-compatibility issues).


Do you feel that a language has to have something at the language level to prevent NPEs?

In my experience, Scala does pretty well without it.

I guess is your point that the language should make it impossible to write bad code, not just make it easy to write good code?


Just a heads up: Scala 3 actually has a compile flag that makes types exclude `Null` as a valid subtype, so every nullable variable needs a type signature like String | Null.


One can abuse anything, but should one? (And that error should be caught by any IDE or static analysis tool.)

Optionals are quite nice and we use them where appropriate; they play nicely with streams, which is a nice bonus.


Optional is not just another class. The control flow in the compiler will check that optionals are checked.


I wonder if the plan is to add syntactic sugar for Optional<T> in a future version.


It wouldn’t be compatible with regular nullable references.


    @NonNull
    Optional<@NonNull String> getMiddleName() {
        return null; // error
    }


Or even just setting the default to non-nullable with any static analysis checker.



