Advertising prominently with "AI native" seems necessary today, at least for some folks. To me, that's kind of off-putting, since it doesn't really say anything.
Can any of the AI enthusiasts here explain why, or what is meant by
> As a compiled, statically-typed language, it's also ideal for agentic programming.
It's been really interesting to see all the desperation on the hero pages of all these products and services ever since AI came into prominence. The funniest for me was opening the IBM DB2 product page and seeing it labeled as an 'AI database'. Hysterical.
> why, or, what is meant by
More errors caught at compile time means an agent can quickly check its work statically, without relying on unit tests and other runtime checks.
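A rough sketch of that fast static check in Python (the function name is made up for illustration; a real agent harness would shell out to the actual compiler or type checker):

```python
import ast

def static_check(source: str) -> list[str]:
    """Parse code without running it, returning any errors found.

    A stand-in for "run the compiler" in an agent loop: cheap, fast,
    and requires no test suite at all.
    """
    try:
        ast.parse(source)
    except SyntaxError as exc:
        return [f"line {exc.lineno}: {exc.msg}"]
    return []

# The agent can verify a patch in milliseconds, before any tests run.
print(static_check("def f(x) return x"))   # syntax error reported
print(static_check("def f(x): return x"))  # -> []
```

The point is that the feedback arrives immediately and mechanically, which is exactly what an agent iterating in a loop needs.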
Current LLMs have been trained on extensive libraries of past code. Therefore, LLMs will, for the foreseeable future, work better for established languages than for new ones, especially languages with a lot of open source code available, like Python. That's a big problem for newcomers without any existing code to train LLMs on.
Thus this desperate "AI native" marketing is probably necessary even to be considered relevant in an "agentic" world. Whether it's enough, only time will tell.
Python+ruff+pycheck and TypeScript are compiled to bytecode instead of machine code. They're not statically typed in the Rust sense. And yet I've watched models crank out good, valid code in both of those without them being either strictly "compiled" or "statically typed". Turns out AI couldn't care less about those properties as long as you have good tooling to quickly check the code and iterate.
yes, except it's more ... on the same lines, just to hammer the point home:
it's web 2, it's SaaS, it's the latest weekly, er, sorry, daily, hottest JS framework, it's the latest rap / punk / hippie / dreadlock / crewcut / swami / grunge / guru hairstyle, it's agile, it's functional programming, it's OOP, it's OOAD, it's UML, it's the Unix philosophy, it's Booch notation, it's CASE tools, ... going back even further, it's structured programming, it's high-level languages, it's assemblers, it's veganism, it's the keto diet, it's the Atkins diet, it's the paleo diet, it's cholesterol is bad, no, it's good, etc etc etc.
In other words, it's the equivalent of your common-or-garden-variety teenager proclaiming that this new thing they just found is gr8 and all else is shite, only to jump on the next bandwagon next week, next month, or, more rarely, next year.
I don't know what they meant by it, and I share your opinion that "AI native" is somewhat meaningless for a programming language like this.
Regarding compilation and static typing, it's extremely helpful to be able to detect issues at compile time when doing agentic programming. That way, you don't run into as many problems at runtime, which of course the agent has more difficulty addressing. Unit tests can help bridge the gap somewhat but not entirely.
What's not stated on their website is that Mojo is likely a bad choice for agentic programming simply because there isn't much Mojo training data yet.
I've recently used Claude to write quite a bit of Mojo (https://github.com/boxed/TurboKod) and I can say quite confidently that Claude writes deprecated Mojo syntax a lot, but the compiler tells it and it fixes it pretty fast too. The only reason I notice is that I watch Claude while it's working and see the compilation warnings (and sometimes Claude is lazy and doesn't compile, so I have to catch it).
But yeah, writing Mojo 1.0 code correctly even after seeing the errors might take a new training round, so the next or even next-next models.
Have you used the Mojo syntax skill with modern LLMs? It is updated to latest Mojo and I can say nearly 100% of my code is written by AI, with good quality, and the compiler helping it too.
Because a coding agent (when instructed well) will try to make a piece of code work in a loop. Static typing and compilation help in that process (no more undefined variables discovered at runtime, for instance). But that's not bulletproof at all, as most of us know.
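That loop can be sketched roughly like this (`check` and `revise` are made-up stand-ins for the compiler and the model; real harnesses are messier but have the same shape):

```python
from typing import Callable

def fix_until_clean(source: str,
                    check: Callable[[str], list[str]],
                    revise: Callable[[str, list[str]], str],
                    max_rounds: int = 5) -> str:
    """Repeatedly check code and feed the errors back until it passes.

    check  -> returns compiler/type-checker errors (empty list = clean)
    revise -> asks the model to rewrite the code given those errors
    """
    for _ in range(max_rounds):
        errors = check(source)
        if not errors:        # statically clean: the loop can stop here
            return source
        source = revise(source, errors)
    return source             # give up after max_rounds, as real harnesses do

# Toy demonstration with stubs instead of a real compiler/model:
check = lambda s: [] if s.endswith(";") else ["missing semicolon"]
revise = lambda s, errs: s + ";"
print(fix_until_clean("let x = 1", check, revise))  # -> "let x = 1;"
```

The stricter the `check` step (compilation, static types), the more bugs the loop can drive out before anything ever runs, which is exactly the argument being made above.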
Many comments here on your creation, PeakSlab, but not yet any dedicated praise. I didn't know it, but I have to say it is really cool and innovative! The performance of the dictionary is indeed superb, and I will definitely bookmark this for future reuse. So, in a nutshell: thanks for sharing!
> I'm sure it was very difficult to program in machine code, but if now (or soon) anyone can just write software using a LLM without any sort of learning it changes everything. LLMs can plan and create something usable from simple instructions or ideas, and they will only get better.
Did you read the section "Power to the People?"? In it, the author dismantles your thesis with powerful, highly plausible arguments.
1. You don't have to be an LLM expert to get good, consistent results with LLMs.
My best vibe-code process after years of using LLMs is to have Claude Code create a plan file and then cycle it through Codex until Codex finds nothing more to review, then have an agent implement it. This process is trivial yet produces amazing results.
It's solved by better and better harnesses.
2. You don't have to write technical specs. The LLM does that for you. You just tell it "I want the next-tab button to wrap back to the first one" and it generates a technical plan. Natural language is fine.
3. Software that seems to work only to fail down the line in production is already how software works today. With LLMs you can paste the stacktrace or user bug email and it will fix it.
This is why vibe-coding works. Instead of simulating in your head how an app will run by looking at its code, you run the app and tell the LLM what isn't working correctly. The app spec is derived iteratively through a UX feedback loop.
4. I don't understand TFA's goalposts, but letting people who are only interested in the LLM process (rather than the software craftsmanship) create software would be a huge democratization of software.
This sounds like someone who has never had to write serious software.
> 1. You don't have to be an LLM expert to get good, consistent results with LLMs.
You don't get good, consistent results with LLMs, expert or not.
> 2. You don't have to write technical specs. The LLM does that for you. You just tell it "I want the next-tab button to wrap back to the first one" and it generates a technical plan. Natural language is fine.
Try this, have Claude write a section in your specs titled "Performance Optimizations" and see the gibberish it will come up with. Fluffy lists with no actually useful content specific to the project. This is a severe problem with LLM-driven speccing I have encountered uncountable times. I now rarely allow them to touch the specs document.
> 3. Software that seems to work only to fail down the line in production is already how software works today. With LLMs you can paste the stacktrace or user bug email and it will fix it.
And pretty soon you have a big ball of mud. But I guess if the rate of bugs accelerates, the LLMs can also "fix" them faster.
> This is why vibe-coding works. Instead of simulating how an app will run in your head looking at its code, you run the app and tell the LLM what isn't working correctly. The app spec is derived iteratively through a UX feedback loop.
I should tell you about the markdown viewer, with specific features I want, that I have tried to build using only LLM vibe-coding, and how none of them are able to do it.
> This sounds like someone who have never had to write serious software.
Why the insult? You never know who you're talking to on HN.
Your points have to do with process failure, not intractable LLM limitations. Most of which already apply to human-conceived software.
Your "Performance Optimizations" bit exemplifies this since you baked in the assumption that it will have no connection with your project. Well, why not? You need to figure out how to use your source code and relevant data as ground truth when working with LLMs.
A markdown viewer is on the simpler side of things I've built with LLMs, so this too suggests that you have a weak process. A common mistake is to expect LLMs to one-shot everything (the spec, the plan, or the actual impl). Instead you should use LLMs to review-revise-cycle one of those until it's refined, ideally the spec/plan since impl is derived from it. You will have much better and consistent results.
I recommend finding an engineer you respect/trust that has found a way to build good software with LLMs, and then tap them for their process.
Thanks for your response. I did not mean to insult; my mild jab was meant to draw attention to the idea that using LLMs for serious production software is a whole different game than using them for casual software.
You said
> Your "Performance Optimizations" bit exemplifies this since you baked in the assumption that it will have no connection with your project. Well, why not?
OK, I am talking from experience. Using LLMs for speccing is almost useless above certain complexity levels; what you get is an assemblage of the most average points you can imagine, the kinds of things almost every project in the category you are working on will address without any thought. Ask it to spec auth for a specific design, and all you'll get is: cookie-based login, input validation, password hashing, etc, etc. Which you don't need an LLM for. Nothing like an actual in-depth design. Even asking them to update specs based on discussions is hit or miss.
> A markdown viewer is on the simpler side of things I've built with LLMs, so this too suggests that you have a weak process. A common mistake is to expect LLMs to one-shot everything (the spec, the plan, or the actual impl). Instead you should use LLMs to review-revise-cycle one of those until it's refined, ideally the spec/plan since impl is derived from it. You will have much better and consistent results.
But what you are describing is NOT vibe-coding. I have no doubt I could build the viewer I want (which by the way is not your usual plain vanilla markdown viewer, but one with some very specific features) with LLM assistance. My point is: if you can't even vibe code your way to this specific viewer, how are you supposed to vibe code serious software?
Indeed, the declining quality of Claude Code is, I suspect, testament to the fact that vibe-coding any sufficiently complex piece of software does not work in the long run.
Oh, I see. I'll grant whatever you take vibe-code to mean since that seems to be the hang-up -- vibe-code prob suggests there's no process at all.
My point is that the planning and implementation phases are basically unsupervised, and all the human work goes into the planning phase.
Yet I've noticed that over time, I'm not even needed in the planning phase because a simple revision loop on a plan file produces a really good plan. My role is mostly to decide what the agents should do next and driving the revision loop by hand (mostly because it's the best place for me to follow what's happening).
I've been getting really good results, though I've also developed a simple process that ensures the LLMs aren't relying on their internal model but rather on external resources, which is critical.
While I think the author is entirely right about 'natural language programming' in the current day, if LLMs (or some other AI architecture) continue to improve, it is easy to believe touching code could become unnecessary for even large projects. Consider that this is what software company executives do all the time: outline a high-level goal (a software product) to their engineering director, who largely handles the details. We just don't yet know whether LLMs will ever manage that level of intelligence and independence in open-ended tasks. And, to expand on that, I don't know that intelligence is necessarily the bottleneck for this goal. They can clearly tackle even large engineering tasks, but common complaints are that they miss important architectural context or choose a suboptimal solution. Maybe with better training, context handling, and documentation, these things will cease to be problems.
I have indeed missed the arguments so powerful that they dismantle my thesis.
Would there even be a debate in the tech community if such unassailable arguments existed?
The author is entirely entitled to his opinion, just as I am allowed to disagree with him (not sure why I am also downvoted). The good thing is, if I'm right, we will see it in less than 10 years.
In fact, AI might be the opposite of a managerial "silver bullet". The more we automate what is repetitive, the less predictability remains overall. Things can get more productive on average, but managing them becomes harder, as productivity amplifies risks.
> Many CLI tools, SDKs, and frameworks collect telemetry data by default.
Any of those is using a dark pattern, and rather than exploring new ways to opt out, you should look for, and spend your energy on, an alternative that respects your freedoms upfront.
I am currently enjoying WYSIWYG editing with GNU TeXmacs for long-form or scientific text. Both the concept and the tool are amazingly capable and a breath of fresh air after all the LaTeX, Markdown, Org, …
Almost nobody uses TeXmacs because those who might be interested need LaTeX and its packages, and this is not LaTeX. (In the future those authors might all be using Typst, but not this thing.)
Gone are the days of deterministic programming, when computers simply carried out the operator’s commands because there was no other option but to close or open the relays exactly as the circuitry dictated. Welcome to the future of AI; the future we’ve been longing for and that will truly propel us forward, because AI knows and can do things better than we do.
I had this funny moment when I realized we went full circle...
"INTERCAL has many other features designed to make it even more aesthetically unpleasing to the programmer: it uses statements such as "READ OUT", "IGNORE", "FORGET", and modifiers such as "PLEASE". This last keyword provides two reasons for the program's rejection by the compiler: if "PLEASE" does not appear often enough, the program is considered insufficiently polite, and the error message says this; if it appears too often, the program could be rejected as excessively polite. Although this feature existed in the original INTERCAL compiler, it was undocumented.[7]"
"PLEASE COME FROM" is one of the eldritch horrors of software development.
(It's a "reverse goto". As in, it hijacks control flow from anywhere else in the program behind your unsuspecting back who stupidly thought that when one line followed another with no visible control flow, naturally the program would proceed from one line to the next, not randomly move to a completely different part of the program... Such naivety)
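For the curious, the effect can be simulated in a few lines of Python (a toy interpreter, nothing like real INTERCAL): after each statement runs, the interpreter checks whether some other line has declared "COME FROM" it and, if so, silently jumps there.

```python
def run(program, come_from):
    """Toy interpreter for a COME FROM-style language.

    `program` is a list of (label, action) pairs; `come_from` maps a
    label to the index control is hijacked to *after* that labeled
    statement executes.
    """
    out, pc = [], 0
    while pc < len(program):
        label, action = program[pc]
        out.append(action)
        # The eldritch part: control leaves this line not because the
        # line says so, but because another line said "COME FROM" it.
        pc = come_from.get(label, pc + 1)
    return out

program = [("a", "start"), ("b", "middle"), ("c", "end")]
# The statement at index 2 declares COME FROM "a": once "a" has run,
# control is yanked straight to index 2, silently skipping "middle".
print(run(program, {"a": 2}))  # -> ['start', 'end']
```

Reading the program top to bottom gives no hint that "middle" is skipped; you have to know what every other line comes from, which is precisely the horror.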
> "PLEASE COME FROM" is one of the eldritch horrors of software development.
The most enigmatic control flow statements in INTERCAL, however, remain PLEASE GIVE UP and DO ABSTAIN FROM – a most exalted celebration of pure logic and immaculate reason.
This. It should become a general rule for any non-trivial use of LLMs in a professional setting.