I certainly hope that's where we're headed. The more we can tell the compiler about our code, the better.
It might also be interesting to provide hints like "this int should only ever be in the range 1-30" :)
That's in all the Pascal-family languages - Pascal, Modula, Ada.
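Rust has no built-in range types like Ada's `type Day is range 1 .. 30;`, but you can approximate the idea with a newtype whose constructor enforces the invariant. A minimal sketch (the `Day` type and bounds are made up for illustration):

```rust
// A range-restricted integer, Rust style: the check lives in a
// constructor rather than in the type system itself.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Day(u8); // invariant: value is always in 1..=30

impl Day {
    fn new(n: u8) -> Option<Day> {
        if (1..=30).contains(&n) { Some(Day(n)) } else { None }
    }
}

fn main() {
    assert!(Day::new(15).is_some());
    assert!(Day::new(31).is_none()); // out of range: rejected at runtime
    println!("ok");
}
```

The difference from Pascal/Ada is that the compiler doesn't know about the range, so it can't elide checks or catch violations statically; it only guarantees that every `Day` that exists went through `new`.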
"Perhaps the future of software isn't "rewrite everything in Rust", but instead we end up annotating existing C/C++ with borrow checking information."
Trying to retrofit this onto C++ runs into the problem that too often you're lying to the language to get something done, like extracting a raw pointer to send to a system call. You have to break a lot of existing code to make this work. Which means a new, safer C++-like language. There are about five of those to choose from, none of which are used much.
Rust was right to make a clean break. But then they just got too weird, with their own brand of functional programming and their own brand of type system.
I'd love to hear your thoughts on Rust having an in-house type system and functional programming paradigm. It doesn't seem all that different from comparable-era languages like Swift (maybe through inspiration) or Haskell. Macros are, IMO, where things go a little wonky, but once you learn you can actually trust the compiler it's easy to let go.
Surprised to see that you put Swift in the same bucket as Haskell, since Swift is to me the prime example of successfully balancing power with usability in language design.
This is where languages like Haskell, Rust and others fail: the programming language nerds take over and design a language for themselves, making it weirder and weirder as time passes and new arcane features are added.
This stuff has been out there for a long time, but it's never become popular outside of safety-critical or high-integrity systems. Examples include:
Fun stuff to learn about in theory, but, unless you're working at a place where formal verification is part of the culture, good luck selling these tools to upper management.
Ada checks all arithmetic - the compiler can elide bounds checks if it can statically verify that no out of range error can happen, but if it can’t verify statically you get bounds checks at runtime.
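A rough Rust analogue of that "check at runtime unless statically provable" behavior is slice indexing, which is bounds-checked at runtime but can be elided when the compiler can prove the index is in range (e.g. when iterating):

```rust
fn main() {
    let xs = [10, 20, 30];

    // Iterator form: the compiler knows iteration stays in range,
    // so no per-element runtime bounds check is needed.
    let sum: i32 = xs.iter().sum();

    // Fallible lookup: an out-of-range index becomes None
    // instead of a crash.
    assert_eq!(xs.get(5), None);

    // `xs[5]` would compile but panic at runtime, roughly analogous
    // to Ada raising Constraint_Error when static proof isn't possible.
    println!("sum = {}", sum);
}
```

The division of labor is the same: statically provable accesses cost nothing, everything else gets a runtime check rather than silent out-of-range behavior.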
Contracts come in several forms. Some are checked at compile time. Some are checked at run time. For some, the check is too expensive to actually run, so it's left in as documentation. Compilers are expected to have a switch to turn off run-time checking for production (just like assert).
Contracts were designed so that they can be checked statically.
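That "switch it off for production, just like assert" idea can be sketched with Rust's `debug_assert!`, which is checked in debug builds and compiled out in release builds, much like C's `assert` under `NDEBUG`. The function here is a made-up example, not anything from a contracts proposal:

```rust
// A precondition expressed as a debug-only runtime check:
// documented in the source, enforced in debug builds,
// free in release builds.
fn apply_discount(price: u32, percent: u32) -> u32 {
    debug_assert!(percent <= 100, "percent must be 0..=100");
    price - price * percent / 100
}

fn main() {
    println!("{}", apply_discount(200, 25)); // prints 150
}
```

A real contracts system goes further by giving these conditions a form that static analyzers (or the compiler) can discharge ahead of time, instead of only trapping at runtime.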
Pedantically you are correct: checking contracts is expected to be too expensive to actually do as part of a build. C++ expects that static analysis will check contracts as well (static analysis might take 10 hours to check what compiles in 10 minutes).