Today, with terabyte hard drives, gigabytes of RAM, and broadband connections, when is binary size a more important factor than both execution speed and ease of development? Especially when the binary size difference is probably not huge?
Shouldn't the advice of this article just be "use exceptions"?
> when is binary size a more important factor than both execution speed and ease of development?
Binary size (or maybe more accurately in this case, binary code layout) can be highly relevant for speed due to the instruction cache.
As for ease of development, C++ exceptions have issues here as well: some C++ libraries aren't exception safe, and practically no C libraries are. This is something you need to worry about whenever you pass a function pointer into a library, because an exception-unsafe function may end up higher in the call stack. Propagating an exception up through it is potentially extremely dangerous.
That said, using exceptions can still be a good idea, especially if your code doesn't need to be portable or if you know the platforms in advance, and you are careful about passing around function pointers. All you need to do is ensure that any of your code that might be called from third-party code with questionable exception semantics won't throw or propagate any exceptions, e.g. by installing a catch-all exception handler in it.
Binary working set sizes are often lower with exceptions than without, because exception handling code can be moved elsewhere by the compiler. Error-checking code, on the other hand, cannot be so easily detected, and hence moved.
I think the C++ implementation of exceptions has a lot to answer for though, in poisoning too many developers on the concept. It really is an awful implementation.
From the Gentoo wiki: "-Os is very useful for large applications, like Firefox, as it will reduce load time, memory usage, cache misses, disk usage etc. Code compiled with -Os can be faster than -O2 or -O3 because of this. It's also recommended for older computers with a low amount of RAM, disk space or cache on the CPU. But beware that -Os is not as well tested as -O2 and might trigger compiler bugs."
I believe Apple compiles a lot (or all?) of their stuff with -Os.
How many hundreds of megabytes or gigabytes is our operating system installation?
Size matters a lot because hard drives are stinkin' snails when compared to CPU and RAM. All that stuff needs to be loaded from somewhere, and while SSDs have changed the scheme a bit, there's still a major gap between storage and memory.
On my current platform (STM32L micro) we'll have 256K flash, 48K RAM, no hard drive. It's very reasonable to use C++ on such a processor but exception handling might not be something you want to pay for.
It is not. Except for trivial things, using C++ on a system with 48 KB of RAM makes no sense (48 KB is a decent amount of memory for plain C, but not for C++).
Symbian uses their own kind-of-exceptions (called TRAP, I think) and I've heard that the decision not to use C++ exceptions was founded on binary size constraints.
My 486 built in 1993 had 8 KB cache, 4 MB RAM, and a 120 MB HD.
My desktop built in 2009 has 2 MB cache, 2 GB RAM, and a 250 GB HD.
Okay: cache grew 256x (8 doublings), RAM 512x (9 doublings), and disk roughly 2000x (11 doublings), so the cache has lagged behind by one to three doublings compared to the other storage types. But that's still pretty close to proportional in a world of exponential gains.
Memory speeds have increased much more slowly than processors have, so the cost of page faults, bad locality, etc. have grown proportionally worse over time.
> My 486 built in 1993 had 8 KB cache,
> My desktop built in 2009 has 2 MB cache
That "2 MB" is either L2 or L3, which your 486 didn't have.
The L1 on your desktop is not much larger than the only cache that your 486 had.
As frequency increases, the length of the path that a signal can travel in one clock decreases. Fortunately, cycle-time decreases have been accompanied by transistor-size decreases, so the net result is that L1 sizes have been roughly constant.
I bet that your 1993 486 had 256 KB of L2 cache on the motherboard, so from 256 KB to 2 MB is less than a 10x increase, versus 500x for the RAM and roughly 2000x for the hard disk size.
The value of cache does not increase linearly with size. You also run into latency issues, so having an L4 cache on a modern motherboard would have little value.
The value of cache memory is in its hit ratio. If increasing cache size by 50% raises the hit rate from 95% to 99%, it's worth it, as the 5% of cache misses could cut CPU performance in half.
Here's a different take, then, and probably harder to verify, but I am guessing is true:
Cache utilization has increased much more than RAM or HD, not just because programs are handling more data but also because of increases in program size and number of programs being run simultaneously.
Your hard drive is probably not full... RAM could be, depends on your workload... but I bet most caches are churning like mad, more than they used to be.