You move the complexity to the system instead of the automaton.
An ant is an easily understood organism behaviorally; however, when ants work in unison according to a very simple set of operating conditions, they generate incredibly complex behavior.
Most people want to design complex automatons that understand the entire solution space instead of designing simple automatons that interact with other simple automatons to converge on a reasonable approximation of the solution.
You can see this in the article itself, where it describes C code as clear. Yes, it's very clear to anyone reading C exactly what assembly instructions will be produced, but it's clear as mud as to what the code is actually accomplishing. I tend to care more about what the code does than how it does it, which is why I prefer functional languages, where I have no clue what assembly will be produced but a good idea of what is actually being accomplished.
Apparently C supports something called a string but all I ever see is a pointer to a char.
The author is not saying "write C in Python," he's saying "Aim for C-style abstractions in Python." i.e. build the codebase around the bread-and-butter: function calls, iteration, reference-passing and mutability, and not around abuse of the numerous higher-level abstractions available (meta-programming, yield, exec()...). It's a lesson learned regularly by programmers as they gain experience and move towards conservatism, after having seen all their silver bullet attempts go up in flames.
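A minimal sketch of what that bread-and-butter style might look like (hypothetical names; the clever alternative it stands in for would reach for exec() or metaclasses to do the same bookkeeping):

```python
# Plain functions, explicit iteration, and mutation of a
# passed-in structure -- the "C-style abstractions" in question.
def tally(counts, words):
    """Mutate `counts` in place, adding one per word."""
    for word in words:
        counts[word] = counts.get(word, 0) + 1

counts = {}
tally(counts, ["spam", "eggs", "spam"])
print(counts)  # {'spam': 2, 'eggs': 1}
```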
A similar tenet holds in the opposite direction, for the performance junkies: with experience, the tendency is to move away from locking in specs early and doing heavy micro-optimization, and toward writing the initial code in a way that is intentionally easy to modify.
> The author is not saying "write C in Python," he's saying "Aim for C-style abstractions in Python." i.e. build the codebase around the bread-and-butter: function calls, iteration, reference-passing and mutability, and not around abuse of the numerous higher-level abstractions available (meta-programming, yield, exec()...). It's a lesson learned regularly by programmers as they gain experience and move towards conservatism, after having seen all their silver bullet attempts go up in flames.
Amen to that. I've been programming in Python for 7 years now, but lately I've started to see more and more of my fellow Pythonistas dropping "yields" all over the place, using "generator this / generator that" etc. I thought I was either getting too old or maybe not smart enough, because to me Python is 99% about writing and manipulating nice hash tables (dictionaries, as they are called) and lists (I love those list comprehensions).
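The everyday Python being described is just this kind of thing (a made-up example):

```python
# Dictionaries and list comprehensions, nothing fancier.
prices = {"apple": 3, "banana": 1, "cherry": 5}

# Pick out the names of everything under 4.
cheap = [name for name, p in prices.items() if p < 4]
print(sorted(cheap))  # ['apple', 'banana']
```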
Again, maybe it's just me; I just felt that adding another layer of abstraction would get between me and the data I'm trying to manipulate when I write my programs.
Worst programming advice ever IMHO. As Sir Isaac Newton said: "If I have seen further it is only by standing on the shoulders of giants." If you think and program in simple abstractions, you'll be unable to solve complex problems.
Newton is a great choice for the example, as he invented calculus, which is an incredible abstraction. I tend to look at the lambda calculus in the same way. Yes, calculus is more complex than algebra, but once you understand it you can solve much more challenging problems because of the abstraction inherent in the notation.
To me, the author is spot on here. It is only through experience that you begin to see all the ways in which complexity can creep into a system, and why each and every one of them is bad.
Unfortunately, almost every codebase I work on right now is more complex than it needs to be.
To me, the maxim "The simplest possible solution that does what is needed, but no more" helps a lot.
he's not saying c code is clear; he's saying that using c code can help you avoid complexity because many things you might attempt in another language are simply too expensive.
nor is he saying (as someone else said) that you should avoid higher level constructs in other languages. it's quite ok to use those, as long as you keep things simple.
simplicity does not map 1-to-1 on language features and is hard to explain because it's very context dependent. it's more "YAGNI" than "avoid yield in python".
for example, here's something from Python i noticed just yesterday. i had some code that expanded a particular kind of data and i wanted to let the user do something with the data. so i added a callback - on each step in the expansion i pass the user the relevant chunk of data through the callback.
but how do i handle errors? i had a use-case that needed to handle errors, but they didn't fit with the callback. so my first thought was to make the callback an object and have a method to handle exceptions. but that looked complicated.
then i realised that i was passing the callback a list of values. if, instead, i passed a generator, then the exceptions, when they occurred, would be raised inside the callback! so by "using yield" i kept things simple.
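a minimal sketch of the pattern described above (all names hypothetical): because the callback consumes a generator rather than a pre-built list, an error raised during expansion surfaces inside the callback's own loop, where it can be handled.

```python
def expand(chunks):
    # Expansion step: yields results lazily instead of building a list.
    for chunk in chunks:
        if chunk < 0:
            raise ValueError("bad chunk: %d" % chunk)
        yield chunk * 2

def callback(values):
    # The user's callback: it can catch expansion errors itself,
    # because they are raised as it iterates over the generator.
    seen = []
    try:
        for v in values:
            seen.append(v)
    except ValueError:
        seen.append("error")
    return seen

print(callback(expand([1, 2, -1, 3])))  # [2, 4, 'error']
```

no error-handling object, no extra method: the exception simply travels through the generator into the consumer.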
Agreed. If a language is more expressive, it becomes simpler. I don't think being low level is the same as being simple. Declarative languages tend to look more like English than C or Assembly does. Looking more like English seems like a pretty powerful indicator of simplicity. It is certainly easier to read.
Layers of indirection (abstractions) aren't bad; bad abstractions are bad. A good abstraction should protect you from having to keep lots of implementation details in your head. And C and arrays are not the epitome of simplicity. An array is a terrible way to represent most state; the OO facilities of abstract data types and encapsulation are much better. Complexity is the enemy, but "weaker tools" like C and arrays are no solution to it.
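A tiny illustration of the point (a made-up example): hide the representation behind a small abstract data type instead of passing a raw array around and making every caller know the indexing scheme.

```python
class Counter2D:
    """A grid of counts; the flat-array representation stays hidden."""

    def __init__(self, width, height):
        self._cells = [0] * (width * height)
        self._width = width

    def bump(self, x, y):
        # Callers never see the y*width+x index arithmetic.
        self._cells[y * self._width + x] += 1

    def at(self, x, y):
        return self._cells[y * self._width + x]

g = Counter2D(3, 2)
g.bump(2, 1)
print(g.at(2, 1))  # 1
```

If the representation later changes (say, to a dict of coordinates), no caller has to change.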
But in practice, how many bad abstractions are there? From what I understand the Go team sees VMs as bad abstractions and they're ubiquitous.
The Go compiler can be fast, essentially as fast as a VM. Go can already run ~3x slower than C and they haven't put much effort into optimization. V8 is ~5x slower with a larger team working longer, 50,000 LOC compared to ~12,000 LOC of Go. To get the security of a VM they discourage the use of low level stuff by putting it into an unsafe package. This makes the binaries compatible with NativeClient, which actually checks for malicious programs. Go code can be sent as text and compiled quickly enough client side.
So they get many of the features of a VM while cutting down on complexity with some clever tradeoffs. The language is not for everyone's tastes, but their fundamental approach will surely lead somewhere.
In practice people make bad abstractions constantly and build huge legacy on top of them. Lately people have been trying to clean up the mess by starting from a pretty low level, like Go and Redis.
Go is designed for distributed systems that can't afford a garbage collection hit. A function is running on one machine and calls out for data from 10 other machines - if one of those hiccups due to garbage collection (i.e. it responds 1 minute later instead of in 10ms), then the function fails, or is delayed, tying up resources.
This is my understanding of why Google is developing it, though I'm not too familiar with it.
... That experience taught me a lot about what really matters in programming. It is not about solving puzzles and being the brightest kid in the class. It is about realizing that the complexity of software dwarfs even the most brilliant human; that cleverness cannot win.
I like this. "The only weapons we have are simplicity and convention." Well said. I think we need to espouse a new software paradigm: DFS-oriented programming. (Dead F*ing Simple-oriented programming.)
Yes, complexity is the enemy. However, I'm afraid the answer isn't that simple. If I understood you, you advocate using a low level language, mainly because that makes it hard to "get fancy." But by doing so, you are forcing complexity to go up significantly in a different way. For example, how much harder would it be to code a make tool like Rake in C than in Ruby? Yes, C might handcuff the programmer from getting too fancy, but it will result in 2 or 5 or 10 times more lines of code, and that in and of itself is complexity -- which, as you correctly stated, is the enemy.
Not sure if more lines are by definition more complex. Often they are, but check many Ruby (and Rails) gems and apps; they hide a lot at the expense of easy comprehension. That's why it is often referred to by inexperienced programmers as 'magic'; stuff in Rails works nicely if it works. If it doesn't work, you get to wade through miserable stacks of metaprogramming (with the worst debugger ever made). Give me C or Haskell (yes, I know, big difference; I'm just skilled at both) any day, complexity-wise. It's more about the programmer than the language, of course, but the Ruby community generally seems to suffer from making stuff non-transparent.
Similarly, I've found that keeping in mind the need that the code addresses helps keep you focused on what is core vs non-core. It also helps create a useful product that someone will want.
Apart from not having complexity in the first place, the next best thing is to recognize that software is a theory, and try to improve your understanding. Treat the first version as a learning experience for understanding the problem: "Plan to throw one away; you will, anyhow." (Brooks)
Totally agree that complexity is evil. Who hasn't seen code bloated beyond all recognition to the point where it has to be rewritten? Or fantastically complex engines where it was impossible to debug all the edge cases? The list goes on and on...
That said, I wouldn't want to write Python like it's C. Every tool has its best practices. It's funny he made that statement; Python is probably the simplest OO language there is.
I like OCaml for these reasons. It's actually a very simple language and has a C-like elegance. If you already know a few programming languages and can "think functionally", you can pick it up in about a week.
C's major win, in its time, was context-independence: people could drop into a 200,000-line C project and have a good idea of what the code they're looking at is doing. And if C code has been written well, it's not hard to look further if needed. This is not really possible when metaprogramming features like self-modifying code and insane macros are pulled out. Lisp is great in the right hands, but I've seen undisciplined "rock star" programmers produce write-only code in it and that's ugly.
OCaml also has this context-independence, but it has most of the power of a language like Lisp; it's statically typed and functional, and it's pretty much the minimally complex language that has that power. It loses on libraries, but as far as the language itself goes, it's one of the best languages out there.
I'm an OCaml fan as well, and having written in a number of languages I can say that complexity in and of itself is probably not the enemy. I think the problem is more related to modularity. For instance, I don't think it is the best policy to make a skip list implementation as simple as possible, because it is self-contained and will likely be used by many of my programs in the future. Complexity only becomes a problem when it is no longer possible for a developer to fit all the pieces of the program in her mind at once. Abstraction and modularity can generally help with this.
Just because you personally cannot handle macros does not make the code "write-only". As I remember, your CL skill is minimal, and your macro skill even smaller.
I am the "rock star" programmer referred to here, and all my code has been easily worked on by three other people, well versed in CL, with only compliments for readability.