
> The programmer is in the unique position that his is the only discipline and profession in which such a gigantic ratio, which totally baffles our imagination, has to be bridged by a single technology. He has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before. Compared to that number of semantic levels, the average mathematical theory is almost flat. By evoking the need for deep conceptual hierarchies, the automatic computer confronts us with a radically new intellectual challenge that has no precedent in our history.

I am no biographer of Dijkstra’s, so is he being unrealistic about programmers here, or did he not have exposure to what a mathematician would consider Mathematics (Wikipedia entry claiming him a mathematician or no)?



> He has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.

This is not really true. There were complex multilayered systems before computers. In large systems, Western Electric #5 Crossbar was comparable to a large real-time program. General Railway Signal's NX system had the first "intelligent" user interface. But that level of complexity was very rare.

Both mechanical design and electronic design are harder than program design. The number of people who did really good mechanism design is tiny. There were only two good typesetting machines, over most of a century - Mergenthaler's Linotype and Lanston's Monotype. Everybody else's machine was a dud. In the printing telegraph/Teletype business, Howard Krum and Ed Kleinschmidt designed the good ones, and the other twenty or so designs over many decades were much inferior. There have been lathes for centuries, but all modern manual lathes strongly resemble Maudslay's design from 1800.

There are far more good programmers than there were good mechanism designers or electronics engineers. Programming is not really that hard by the standards of other complex engineering.


> Both mechanical design and electronic design are harder than program design. The number of people who did really good mechanism design is tiny. There were only two good typesetting machines, over most of a century - Mergenthaler's Linotype and Lanston's Monotype.

That’s a good example and of course it immediately brings to mind TeX, which is an equally monumental if not greater achievement. Certainly there’s no denying that TeX has considerably higher dimensionality than the pre-computerized hot type setting machines. Especially when you include all the ancillary stuff like Metafont.

Also recall that Dijkstra was a systems programmer in his industry career. He was well aware of the complexity of the computing hardware of the day—which was cutting edge electronic design. The semaphore wasn’t invented as a cute mathematical trick; he needed it to get hardware interrupts to work properly. Something which THE managed and Unix, among others, never quite did (although it did get to mostly good enough if you don’t mind minefields).

> There are far more good programmers than there were good mechanism designers or electronics engineers. Programming is not really that hard by the standards of other complex engineering.

Most programmers are incapable of writing a correct binary search, let alone something the size and complexity of TeX with only a handful of relatively minor errors. Programmers capable of that level of intellectual feat are indeed few and far between. I suspect they’re more rare than competent EEs or MEs.

Most programmers are more comparable to the guys cleaning the typesetters, not the ones designing them.


> TeX, which is an equally monumental if not greater achievement.

TeX didn't come out of nowhere. It's a successor to the macro-based document formatting system which began with RUNOFF and went through roff, nroff, tbl, eqn, MM, troff, ditroff, and groff. The last remaining usage of those tools seems to be UNIX-type manual pages. There was so much cruft a restart was required.


Linotype didn’t come out of nowhere either. Printers used to cast type manually.

And don’t just gloss over TeX’s astounding correctness. It’s a truly remarkable feat of the human intellect to design something so large with so few errors.


For those who haven't seen Knuth's own analysis of his errors while writing TeX, it's well worth reading his 1989 article "The Errors of TeX" [1] and glancing through the full chronological list of errors [2].

[1] https://yurichev.com/mirrors/knuth1989.pdf

[2] https://ctan.math.utah.edu/ctan/tex-archive/info/knuth-pdf/e...


> Most programmers are incapable of writing a correct binary search

If you exclude the programmers who are incapable of writing FizzBuzz (who I would consider "not programmers", no matter what job title they managed to acquire), then I'm pretty sure your statement is false.

If you mean "could sit down and write one that worked the first time without testing", then yes, you could be right. But could not write one at all? I don't buy it.
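For reference, the FizzBuzz bar being discussed is roughly this. A sketch in C; `fizzbuzz` is an illustrative helper name, not any standard API:

```c
#include <stdio.h>

/* Classic FizzBuzz: for 1..100, print "Fizz" for multiples of 3,
 * "Buzz" for multiples of 5, "FizzBuzz" for multiples of both,
 * and the number itself otherwise. fizzbuzz() returns the token
 * for one value, writing plain digits into the caller's buffer. */
const char *fizzbuzz(int i, char *buf, int len)
{
    if (i % 15 == 0) return "FizzBuzz";
    if (i % 3 == 0)  return "Fizz";
    if (i % 5 == 0)  return "Buzz";
    snprintf(buf, (size_t)len, "%d", i);
    return buf;
}
```

Printing the sequence is then just `char buf[16]; for (int i = 1; i <= 100; i++) puts(fizzbuzz(i, buf, sizeof buf));` — a few branches and a loop, with no tricky boundary conditions at all, which is the point of the contrast with binary search below.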


> If you exclude the programmers who are incapable of writing FizzBuzz

FizzBuzz really is trivial. Binary search, on the other hand, is deceptively tricky[1]. It was well over a decade from its discovery to the first correct published implementation! No doubt if asked to write it, you'd look it up and say that's trivial, all the while double checking Internet sources to avoid the many subtle pitfalls. You might even be familiar with one of the more famous errors[2] off the top of your head. And even then the smart money at even odds would be to bet against your implementation being correct for all inputs.

And if you had to do it just from a specification with no outside resources? Much harder. At least unless you know how to formally construct a loop using a loop invariant and show monotonic progress toward termination each iteration. Which brings us back to the original submission. There are some programs that are pretty much impossible to prove correct by testing, but that can, relatively easily, be shown to be correct by mathematical reasoning. Since this is a comment on a submission by Dijkstra, here[3] is how he does it in didactic style.
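To make the invariant talk concrete, here is a sketch in ordinary C rather than Dijkstra's guarded-command notation (my own illustrative code; `bsearch_int` is a hypothetical name). The invariant and the termination measure are spelled out in the comments, and the midpoint is computed in the overflow-safe form:

```c
#include <stddef.h>

/* Binary search over a sorted int array.
 * Returns the index of key, or -1 if absent. */
ptrdiff_t bsearch_int(const int *a, ptrdiff_t n, int key)
{
    ptrdiff_t lo = 0, hi = n;   /* invariant: key, if present, lies in a[lo..hi) */
    while (lo < hi) {           /* termination: hi - lo strictly decreases       */
        ptrdiff_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)
            lo = mid + 1;       /* everything at or below mid is < key */
        else if (a[mid] > key)
            hi = mid;           /* everything at or above mid is > key */
        else
            return mid;
    }
    return -1;                  /* interval empty: key is not present  */
}
```

The proof obligations are exactly the ones named above: the invariant holds on entry (the whole array is the candidate interval), each branch preserves it, and hi - lo shrinks every iteration, so the loop terminates.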

> If you mean "could sit down and write one that worked the first time without testing", then yes, you could be right. But could not write one at all? I don't buy it.

Yes that's what "correct" means. Code that only works sometimes is not correct.

[1] https://reprog.wordpress.com/2010/04/19/are-you-one-of-the-1...

[2] https://research.google/blog/extra-extra-read-all-about-it-n...

[3] https://www.cs.utexas.edu/~EWD/ewd12xx/EWD1293.PDF


It wouldn’t surprise me if there were fewer programmers who know of the overflow edge case (https://research.google/blog/extra-extra-read-all-about-it-n...) and will think of it when writing a binary search than programmers who know of it but will forget to prevent it in the moment.

If the implementation language doesn’t automatically prevent that problem (and that is fairly likely), the latter group would likely introduce a bug there.
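The edge case from the linked post, sketched in C (`mid_unsafe`/`mid_safe` are illustrative names, not from any real API):

```c
#include <limits.h>

/* With fixed-width signed int, low + high can exceed INT_MAX even
 * when both are valid array indices, making the textbook midpoint
 * computation undefined behavior in C (it threw an exception-free
 * wrong answer in the famous Java case). */
int mid_unsafe(int low, int high)
{
    return (low + high) / 2;        /* overflows when low + high > INT_MAX */
}

/* The rearranged form stays in range: assuming low <= high,
 * high - low never overflows. */
int mid_safe(int low, int high)
{
    return low + (high - low) / 2;
}
```

Languages with arbitrary-precision integers sidestep this entirely, which is what "automatically prevent that problem" alludes to.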


> There are far more good programmers than there were good mechanism designers or electronics engineers.

But that doesn't mean a thing. The barrier to entry is much smaller.


Yeah, if I wanted to just get started with electronics engineering, the easiest, cheapest way would be… to use software. Programming / digital engineering is lower-stakes than physical stuff.


I think he was arguing that if programming consists of symbol manipulation and proofs, then writing proofs for large programs involves far more symbol manipulation (though probably of a more tedious nature) than most proofs written by mathematicians in the past. That is made worse by the need to precisely describe each step so that a machine could conceivably execute it, instead of being able to skip over proof steps considered “obvious” to a human mathematician.

I think he was especially thinking of mathematical logic - he referred to programming as Very Large Scale Application of Logic several times in his writing.


For the peanut gallery:

If you ever want to see what it's like for a mathematician to not hand wave anything away, look at excerpts from Whitehead & Russell's Principia Mathematica (no, not the Newton book).

It takes 362 pages (depending on edition) to get to 1+1=2

https://archive.org/details/principia-mathematica_202307/pag...

Of course, just like real mathematicians, in our everyday work we stand on the shoulders of giants, reusing prior foundational work (I've yet to personally write a bootloader, OS, or language+compiler, and I include 3p libraries), and then hope that any bugs in our proofs are caught during peer review. Like in math, sometimes peer review for our code ends up being a rubber stamp, or our code/proofs aren't that elegant, or they work for the domain we're using them in but harbor latent bugs/logic errors which may cause inconsistencies or require a restriction of domain to work properly (e.g. code that only works with ASCII, or a theorem that only holds for compact sets).

And of course, the similarities aren't a coincidence

https://en.m.wikipedia.org/wiki/Curry%E2%80%93Howard_corresp...


Note: There are today completely formal systems of proofs which are much more concise than what Russell had. You can now prove 1+1=2 after a few pages of rules (say, of Martin-Löf type theory), a couple of definitions (+, 1 and 2) and a very short proof by calculation.

In practice, one would use a proof assistant, which is like a programming language, such as Agda. Then it is just the definitions and the proof is just a call to the proof checker to compute and check the result.


For example, here it is in Coq (not using -- but instead replicating explicitly -- the built-in definitions of 0, 1, 2, +, or =).

  Inductive eq' {T: Type} (x: T) : T -> Prop :=
    | eq_refl': eq' x x.

  Inductive nat' :=
    | zero
    | succ (n: nat').

  Definition one := succ zero.
  
  Definition two := succ (succ zero).
  
  Fixpoint plus' (a b: nat') : nat' :=
    match a with
      | zero => b
      | succ x => succ (plus' x b)
      end.
  
  Theorem one_plus_one_equals_two : eq' (plus' one one) two.
  Proof.
    apply eq_refl'.
  Qed.
As you allude to, there's also the underlying type theory and associated logical apparatus, which in this case give meaning to "Type", "Prop", "Inductive", "Definition", "Fixpoint", and "Theorem" (the latter of which is syntactic sugar for "Definition"!), and allow the system to check that the claimed proof of the theorem is actually valid. I haven't reproduced this here because I don't know what it actually looks like or where to find it. (I've never learned or studied the substantive content of Coq's built-in type theory.)

We would also have to be satisfied that the definitions of eq', one, two, and plus' above sufficiently match up with what we mean by those concepts.


Don’t forget the source code of Coq itself. What you have written here is not a formal proof, it’s a series of instructions to Coq requesting it to generate the proof for you. To know that you’ve proved it you need to inspect the output, otherwise you’re trusting the source code of Coq itself.


>> He has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before. Compared to that number of semantic levels, the average mathematical theory is almost flat.

> ... is he being unrealistic about programmers here, or does he not have exposure to what a mathematician would consider Mathematics?

As with any sweeping statement, Dijkstra's assertion is not universally applicable to all programmers. However, for some definition of sufficiently skilled programmer, it is correct if one considers the subset of mathematics applicable to provably correct programs. To wit:

https://bartoszmilewski.com/2014/10/28/category-theory-for-p...


You’ve given a hint of the complexity on the programmer’s side of things but for Dijkstra’s claim to hold we also need to take a look at the mathematician’s. I think most people who are not mathematicians have no idea what they’re working on.

Take for example a single theorem: Classification of Finite Simple Groups [1]. This one proof, the work of a hundred mathematicians or so, is tens of thousands of pages long and took half a century to complete.

Fermat’s Last Theorem [2] took 358 years to prove and required the development of vast amounts of theory that Fermat himself could scarcely have imagined.

[1] https://en.wikipedia.org/wiki/Classification_of_finite_simpl...

[2] https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem


> This one proof, the work of a hundred mathematicians or so, is tens of thousands of pages long and took half a century to complete.

Now compare that to google3 or any other large software. It’s absolutely tiny. A paltry edifice in comparison both in pages and man hours as well as mathematical complexity. Boolean structures get monstrously huge.

On the subject of proofs and the verbosity of traditional mathematical methods, this note[1] is interesting. It provides two fun little examples of shorter than normal proofs.

It amuses me that just as mathematicians persisted in writing out “is equal to” for decades after Recorde gave us =, there will probably continue to be holdouts who write out “if and only if” instead of using ≡ for many years to come.

[1] https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD107...


I also think "if and only if" is only surpassed by "iff" in badness of notation, though I'd like to put in a good word for classic '='. There's always some implicit inference being made about what kind of equality we're considering, and calling two formulas of logic "equal" if they have the same (truth) value in all cases is reasonable in my book.

https://en.wikipedia.org/wiki/Logical_biconditional has some nice comparisons of notation, including a mention that my old friend George Boole also used '='.


Maths has also evolved to be complex now, with the proof of the four-color theorem being a computer proof. So the statement is not as true anymore. But there was a short era when proving that your "static" program was correct was impossible, because the number of possible input combinations was astronomical (the bits needed just to enumerate them would approach the size of the earth). At the same time, almost any single mathematical theorem could be verified by a person to be correct over the course of a couple of years.


https://celebratio.org/Haken_W/article/794/

It was first published in 1976. It is _highly_ unlikely Dijkstra didn't know about it.


Dijkstra’s degree was in mathematics.

And yes the average mathematical theory is indeed flat compared to a large monolith like google3.


All right, but the average program is also flat compared to google3. So that argument doesn't prove much.



