Throughout my career in computing I have heard people claim that the solution to the software problem is automatic programming. All that one has to do is write the specifications for the software, and the computer will find a program [...]
The oldest paper known to me that discusses automatic programming was written in the 1940s by Saul Gorn when he was working at the Aberdeen Proving Ground. This paper, entitled “Is Automatic Programming Feasible?” was classified for a while. It answered the question positively.
At that time, programs were fed into computers on paper tapes. The programmer worked the punch directly and actually looked at the holes in the tape. I have seen programmers “patch” programs by literally patching the paper tape.
The automatic programming system considered by Gorn in that paper was an assembler in today’s terminology. All that one would have to do with his automatic programming system would be to write a code such as CLA, and the computer would automatically punch the proper holes in the tape. In this way, the programmer’s task would be performed automatically by the computer.
In later years the phrase was used to refer to program generation from languages such as IT, FORTRAN, and ALGOL. In each case, the programmer entered a specification of what he wanted, and the computer produced the program in the language of the machine. In short, automatic programming always has been a euphemism for programming with a higher-level language than was then available to the programmer. Research in automatic programming is simply research in the implementation of higher-level programming languages.
> automatic programming always has been a euphemism for programming with a higher-level language than was then available to the programmer
And it seems to me that progress is going in the opposite direction than "they" want. Every time you move up the abstraction stack, you're surrendering some decision-making to the lower levels. If the underlying technologies guess right every time, you have no need to understand what they're doing. The first time they guess wrong, you have to spend a lot of time understanding not only how the lower layers work, and not only why they did the "wrong" thing in this one instance, but how to fiddle correctly with the layer you're operating at to get the lower layers to behave properly. You can work quickly with the high-level abstractions only as long as you understand the lower levels reasonably well.
Optimal machine learning requires a good understanding of memory cache hierarchies, parallel instructions and complexity theory - not to mention the statistics and calculus that it's formed on. And "optimal" isn't some trivial "save a few seconds" but often "return an answer within the lifetime of the universe".
Security is also something to be mindful of. A lot of my work as a professional vulnerability researcher is just using my low-level knowledge to circumvent higher-level abstractions people usually ignore. The abstractions aren't slowing down, and I fear that soon only a few will be able to peek into all of the levels needed to provide reasonable security. Whenever I see a system built with "automated" technologies, I usually start there to find flaws. To truly utilize high-level abstractions, it is useful to actually understand what provides them.
I feel like I was lucky to have started learning computers when I did in the late 80's. There weren't nearly so many "time saving" abstractions back then, so if you wanted to see anything happen, you had to have a good understanding of what was going on under the hood. Although it was at times frustrating back then to put so much effort into something as simple as drawing a circle on a screen, I'm fortunate that I was forced to spend so much time internalizing the details - I don't know if I would have the patience to learn it all if I was starting right now if I could see that shortcuts existed.
It really depends on the specific person. I started programming in 2006 at 21yo and I don't feel like I missed anything at all. Spent a lot of time reading and researching things anyway. Higher abstractions don't necessarily mean lower complexity.
Hmm - I don't see many people tinkering with machine code; even compiler bugs are rare, and programmers don't normally need to understand what their compilers do.
If you're working with HPC applications, your choice of compiler - and sometimes the higher-level specifications you write in C, C++, or (gasp) Fortran - often does demand that you think about what your compilers are doing, or choose compilers that think better for you.
If you're fine with lower performance (which is reasonable for a lot of application cases, so I almost entirely agree with you), you certainly don't have to deal with this.
These folks are working on bootstrapping a full Linux distribution solely from a small amount of machine code plus all the source code of the distribution:
I find a lot of useful abstractions end up getting implemented twice: once in a "magical" way where you rely on the runtime to manage it according to a bunch of cobbled-together ad-hoc rules, then again in a "principled" way where it's under the programmer's control and can be reasoned about, but still (almost) as usable as if it were working by magic.
e.g. ad-hoc exceptions -> errors as plain values, but with "railway-oriented programming" techniques that make them as easy to work with as exceptions
e.g. runtime-managed garbage collection -> rust-style borrow checker ad-hoc in the compiler -> haskell/idris-style linearity in the type system
e.g. "magic" green-threading -> explicit-but-easy async/await
e.g. behind-the-scenes MVCC in databases -> explicit event sourcing
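The first pair above - exceptions replaced by errors as plain values - can be sketched in a few lines. This is a minimal, hypothetical Python illustration of the "railway-oriented" style (the `Ok`/`Err` names and the `then` combinator are invented for the example, loosely echoing Rust's `Result`): success values flow along one track through each step, while an error short-circuits the rest of the chain, so error handling stays as ergonomic as exceptions but remains an ordinary value the programmer can reason about.

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar, Union

T = TypeVar("T")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err:
    reason: str

Result = Union[Ok, Err]

def then(result: Result, f: Callable) -> Result:
    # The "railway": Ok values flow through f, Err values bypass it.
    return f(result.value) if isinstance(result, Ok) else result

def parse_int(s: str) -> Result:
    try:
        return Ok(int(s))
    except ValueError:
        return Err(f"not an integer: {s!r}")

def reciprocal(n: int) -> Result:
    return Ok(1 / n) if n != 0 else Err("division by zero")

# Chained without any try/except at the call site:
good = then(parse_int("4"), reciprocal)   # Ok(value=0.25)
bad = then(parse_int("0"), reciprocal)    # Err(reason='division by zero')
```

The same shape appears, in principled form, in Haskell's `Either` monad and Rust's `?` operator.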
> The first time they guess wrong, you have to spend a lot of time understanding not only how the lower layers work, and not only why they did the "wrong" thing in this one instance, but how to fiddle correctly with the layer you're operating at to get the lower layers to behave properly.
A sister team at my company is quite proud of its rules engine that allows non-programmers to quickly implement business policies using its DSL in a web UI.
With the years and postmortems gone by it has grown half-assed attempts at version control, code review, unit testing, deployment pipelines, etc. Now it’s very obviously just a shitty, hobbled software development environment. It’s used primarily because the team that owns it is aggressive about blocking design reviews for “duplication of effort” if you propose to use normal software development tools anywhere near its domain.
>version control, code review, unit testing, deployment pipelines, etc.
This is absolutely one of my biggest pain-points with "no-code" solutions. Even trying to track revisions to something relatively simple like a word document over time is a big pain compared to tracking revisions to source code or a configuration file. Trying to get a grip on how people are fiddling with a no-code product from the audit logs is incredibly difficult, never mind trying to track down a change from 1 year ago. Often changes won't appear on audit logs at all or they won't be explained in enough detail and the format of the logs will have little resemblance to how things are actually configured. You can use some no-code solution to modify a SQL query under the hood and the audit log will just say "THIS USER CHANGED THIS QUERY" and that's all the detail you get! It's frequently difficult to explain to your peers how you're going to change a system without showing them a bunch of screenshots and going "Well I'm going to tick this box and move the green rectangle over here and link it to the orange oval". Rolling back changes can often be impossible without rolling back EVERY change between now and when the first incorrect change was made.
I use code, and these problems just don't happen! It's only when people are using some wonderful "user-friendly" solution that things get so jacked up.
Whenever someone is championing a "no code" or "configuration driven" solution for business processes all of these things you just described are alarm bells ringing in my head.
Especially when it is anything involving finances, even tangentially.
Making it easy for non-engineers to change business rules on the fly without a code deploy sounds nice in theory, until you think about it for a few more minutes.
What if a no-code system was built atop a text-based language that lived in a regular file-system workspace? The "no-code" part would just be a fancy IDE, but the code would still exist to be edited directly, tracked with git, whatever else.
You'd need to make sure that edits in that fancy IDE can be sensibly diffed/merged at the level of that text-based language. I've never seen good diff/merge for a graphical format, so I think this kind of "no-code" ends up being just code.
I'm doing something like this for a system that's configured via web-interface. It has a stable, readable export format which I'm tracking with Git. So we can actually have a diff and reviews of the changes.
They store so much, why not a changelog? It baffles me.
The other thing is there may be no way for me to, say, programmatically get a list of all the forms associated with a certain table and the fields they contain (short of browser automation, which is something I use not infrequently with no-code systems), and the APIs that do exist are almost complete, but not quite.
I'm having flashbacks to attempting keystroke automation which works 90% reliably until a field is added via an update and breaks EVERYTHING until you workaround it.
I was almost bitten by a project like this, despite ample protests the whole way long that "configuration" was just becoming a "crappy programming language". All too often I still have to deal with someone who thinks because logic hinges on a value in a JSON file, they've magically abstracted away the concern of what it is that the code should be doing.
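That "logic hinging on a value in a JSON file" pattern is easy to demonstrate. In this hypothetical sketch (the rule fields and discount domain are invented for illustration), the JSON carries conditionals, operators, and operands, and the Python loop is an interpreter for them - meaning the "configuration" is a programming language in all but name, just one without version control, tests, or a debugger:

```python
import json

# A hypothetical "configuration" file for discount rules. The field
# names are invented; real systems bury far more logic in such files.
RULES_JSON = """
[
  {"if_field": "quantity", "op": ">=", "value": 10, "discount": 0.10},
  {"if_field": "loyalty_years", "op": ">=", "value": 5, "discount": 0.05}
]
"""

OPS = {">=": lambda a, b: a >= b, "<": lambda a, b: a < b}

def total_discount(order: dict) -> float:
    # This loop interprets the rules: the "configuration" has
    # conditionals, operators, and operands, so the concern of what
    # the code should do has not been abstracted away - only moved.
    discount = 0.0
    for rule in json.loads(RULES_JSON):
        if OPS[rule["op"]](order[rule["if_field"]], rule["value"]):
            discount += rule["discount"]
    return discount

print(total_discount({"quantity": 12, "loyalty_years": 6}))  # roughly 0.15
```

Every new `op` or field the business asks for grows the interpreter, until it is a crappy programming language with extra steps.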
Man, I'm not from the Java world, but recently I had to deal with it for a project and had to mess with Ant builds, which I had never dealt with before. I was totally baffled. I just do not understand why anyone would subject themselves to such levels of pain.
The above describes what is today called program synthesis, i.e. generating a program from a complete specification. There is a parallel discipline, of inductive programming, or program induction, which is the generation of programs from incomplete specifications, which usually means examples (specifically, positive and negative examples of the inputs and outputs of the target program). Together, program synthesis and program induction comprise the automatic programming field, which has advanced a little bit since the 1980's, I dare say.
Inductive programming is very much a branch of machine learning and that's the reason most haven't heard of it (i.e. it's machine learning that is not deep learning). The main approaches are Inductive Functional Programming (IFP) and Inductive Logic Programming (ILP, which I study for my PhD). IFP systems learn programs in functional programming languages, like Haskell, and ILP systems learn programs in logic programming languages like Prolog. And that is why neither is used in industry.
That is to say, both approaches work just fine - but they're not going to be adopted anytime soon (if I may be a bit of a pessimist) because most programmers lack the background to understand them, and that background can't be replaced by a large dataset.
A quick introduction to Inductive Programming can be found in the Wikipedia articles:
I suggest to follow the links to the article's sources and to search for IFP and ILP systems separately. Two prominent representatives are Magic Haskeller (IFP) and Aleph (ILP):
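To make the idea concrete, here is a deliberately naive Python sketch of program induction by enumeration: search over compositions of primitive functions until one is consistent with every input/output example. The primitive library is invented for illustration, and real IFP systems like MagicHaskeller use far more sophisticated search over far richer spaces - but the core loop is the same "find a program that fits the examples" idea.

```python
from itertools import product

# A tiny library of primitive functions to compose. Invented for
# illustration; real IFP/ILP systems search vastly richer spaces.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def induce(examples, max_depth=3):
    """Return the shortest composition of primitives consistent with
    every (input, output) example, or None if none is found."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(program(i) == o for i, o in examples):
                return names
    return None

# Learning from positive input/output examples only:
print(induce([(1, 4), (2, 6), (3, 8)]))  # ('inc', 'double')
```

The hard research problems are exactly what this sketch ignores: pruning the exponential search space, handling negative examples and noise, and inventing new primitives along the way.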
Neat! Thanks for posting this. I bumped into Genetic Programming a few times when I was studying evolutionary algorithms. Program induction seems bigger, more modern, and more sophisticated.
Some of the IFP systems actually use evolutionary algorithms, I believe - ADATE in particular, though I can't find an actual instance of it online, or any other information beyond a mistitled paper:
I see where you're going here. It might help if we construct a specific syntax and grammar for writing specifications for software. It'll be helpful for all of us if we can make our specifications as clear and efficient as possible. We could even call this combination of a syntax and grammar a programming language.
Just imagine a world where we have these programming languages to help us encode the specifications for our software. Wild! :)
That’s a specification that gives you no knowledge how to solve the problem. You can definitely have precise specifications that do not directly correlate to machine instructions.
I like this answer because it says it will require someone with specific knowledge and specific training doing a specific job anyway...
Also "anyway" a specification is always incomplete. It's part of the programmers' job to fill the gaps with something sensible, and when they have no idea what to do, to point out that some corner case was overlooked.
This is where programmers sometimes start to feel like Lieutenant Columbo: at first everyone is nice, but people become more and more irritated as the pesky cop asks more and more embarrassing questions.
A key distinction here is whether you could write your no-code system using your no-code system. If your no-code system is self-hosting, then you've made a higher-level programming language. Otherwise, it's a tool that lives on some dimension other than "programming language".
This is a good insight. So, clearly, higher-level languages have been an enormous success. But "no-code", as we mean the term today, has still (mostly) been a failure.
So then what do we actually mean by "no-code" these days? I think "no-code", today, has the unspoken implication of "graphical". Okay, so why are "graphical coding" systems (languages?) mostly unsuccessful? There are some clear exceptions like Unreal Engine's Blueprints and certain WYSIWYG web editors like Squarespace, but the great majority of programming is still done in what comes down to, at the end of the day, text files. There may be more and more elaborate editors built atop these text files, but the "bones" of the code is always available, and never too far out of reach.
My pet theory is that this last bit is the differentiator. In a totally graphical programming environment, the programmer never has to be exposed to the underlying format directly. This perhaps encourages proprietary stacks, where the GUI is all that's known and the substance of the "code" itself may never even be made available to those who seek it out.
Maybe this arrangement keeps these "languages" siloed, and therefore keeps them from gaining real traction. It's hard for a thriving ecosystem to evolve around a closed format. Competing tools, editors, compilers, open transfer via regular files, translation, etc are all stifled. You end up totally dependent on one company for all of your needs. For some projects the value-proposition still works out; for most it doesn't.
If so, here's my proposal: instead of focusing on all-in-one no-code environments, focus on creating graphical tooling for existing languages. Or even, creating new (text-based!) programming languages that are designed from the get-go with advanced (graphical?) tooling in mind, while still having that real, open, core source-of-truth underneath it.
We've seen echoes of this already: Rust's features would make it nearly unusable without its (excellent) user-friendly tooling. Design apps like Sketch can output React source code. Pure-functional languages like Clojure really thrive with an editor that has inline-eval. I think if "no-code" is ever going to catch on for real, it needs to be less afraid of code.
On the other hand: is the value-add really the graphical interface, or is it the "height" of the language?
In the latter case, maybe it's more important that we explore even-higher-level languages, and set aside the graphical part as a distraction. Or, maybe we combine the two goals: create higher-level-languages that also lend themselves to graphical UIs, but are still grounded in formalized text underneath.
The common denominator in textual coding is text itself. There were historically some differences by locale, encoding, and line ending, but we have managed to converge on a sufficiently encompassing standard with UTF-8 text that the rest can reasonably-if-imperfectly be dealt with by the text processing code. And this is the source of friction: our text editors and terminal emulators are great - so great that it's hard to get out of the path dependency involved in utilizing them. It isn't just the interactive cursor, or copy-paste, or search-and-replace, or even regular expressions... it's decades of accumulated solutions that can handle every conceivable text problem.
Every time we go to some other way of describing a document we lose all that text editing infrastructure, which is such a huge setback that no alternate solution has yet hit a widespread critical mass, only specific niches in specific domains.
Which doesn't make text good, it makes it worse-is-better!
As I see it, the probable way forward would not be in language design, but in formats and editors. Breaking through the dependency will be a slow process.
I think there's real value add from a language that's designed to work in an IDE-first way, that uses the GUI not to replace text but to enhance it. The best example I know of this is Scala's implicits: they're not visible in the code itself, but in an IDE you can see where they're coming from (e.g. ctrl-shift-P in IntelliJ), so they're a great way to express concerns that are secondary but not completely irrelevant (e.g. audit logging - basically anything you'd consider using an "aspect" or "decorator" for). Another example is type inference (which a lot of languages have): your types aren't written in the program text, but they're available reliably when editing, so you can use them to express secondary information.
People on HN seem to think programming languages should be designed for a text editor as the primary way of editing, and I think that's a mistake that holds programming culture back (and is why we see these "graphical" languages go too far in the other direction, because the only way to get people to take a proper language editor seriously is to make a language that's impossible to edit in a text editor). Having a canonical plain text representation is important. Having good diff/merge is important. Editor customizability is important. But if you embrace the IDE and build an IDE-first language, without abandoning those parts, you can get a much better editing experience.
> This perhaps encourages proprietary stacks, where the GUI is all that's known and the substance of the "code" itself may never even be made available to those who seek it out.
These kinds of full-featured IDEs tend to lead to knowledge gaps, in my experience. I'm not really a Java programmer, but I've helped more than one Java developer (usually junior ones, to be fair) with build issues. This is because when I use Java, I write it using vim and either `javac` and a Makefile, or else the command line interface of a Java build tool (ant, maven, gradle, etc). That has given me a good understanding of the Java build process, which I've found many users of Eclipse or IntelliJ do not have.
As such, I think your proposal may have legs, because it could allow less technical users access to the programming environment, while retaining a text-based interface for more expert users.
I can't agree more. When I started working on Slashdiv, I had looked at the existing NoCode platforms and came to the conclusion that code is the most succinct representation of logic and graphical tools will have to get users to jump through hoops to create basic logic (programmer here, no surprises!).
Where visual tools help is in UI building and layouts. That's exactly what I built Slashdiv for: create a UI and output React code.
http://web.stanford.edu/class/cs99r/readings/parnas1.pdf