I really don't understand the negativity here. I sense a very dismissive tone, but most of the complaints are implementation details, or that this has been tried before (so what?).
I think anybody who really thinks about it would have to agree modern OSes are a disgusting mess.
-- Why does an 8 core mac have moments that it is so busy I can't even click anything but only see a pinwheel? It's not the hardware. No app should have the capability, even if it tried, to slow down the OS/UI (without root access).
-- Yes, it should be a database design, with permissions.
-- Yes, by making it a database design, all applications get the ability to share their content (i.e. make files) in a performant searchable way.
-- Yes, permissions is a huge issue. If every app were confined to a single directory (docker-like) then backing up an app, deleting an app, terminating an app would be a million times easier. Our OSes will never be secure until they're rebuilt from the ground up.
[Right now Windows lets apps store garbage in the 'registry' and Linux strews your apps' data throughout /var/etc, /var/log, /app/init, .... These should all be materialized views (i.e. symlinks).]
-- Mac Finder is cancer. If the OS were modularizable it'd be trivial for me, a software engineer, to drop-in a replacement (like you can with car parts).
-- An event-driven architecture gives me exact tracking of when events happened. I'd like a full record of every time a certain file changes: if file changes can't happen without an event, and all events are indexed in the DB, then I have perfect auditability.
-- I could also assign permission events (throttle browser CPU to 20% max, pipe all audio from spotify to removeAds.exe, pipe all UI notifications from javaUpdater to /dev/null)
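The database-plus-events idea above can be sketched in miniature. This is a toy with hypothetical table names, using SQLite as a stand-in for an OS-level store; nothing here is a real OS API:

```python
import sqlite3

# In-memory stand-in for an OS-level object store.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE files (
    path    TEXT PRIMARY KEY,
    owner   TEXT NOT NULL,
    content BLOB
);
CREATE TABLE events (
    id   INTEGER PRIMARY KEY AUTOINCREMENT,
    path TEXT NOT NULL,
    kind TEXT NOT NULL,          -- 'create' or 'update'
    at   TEXT DEFAULT (datetime('now'))
);
-- Every write is forced through a trigger, so no change can
-- happen without leaving an indexed event behind.
CREATE TRIGGER log_create AFTER INSERT ON files
    BEGIN INSERT INTO events (path, kind) VALUES (NEW.path, 'create'); END;
CREATE TRIGGER log_update AFTER UPDATE ON files
    BEGIN INSERT INTO events (path, kind) VALUES (NEW.path, 'update'); END;
""")

db.execute("INSERT INTO files VALUES ('notes.txt', 'alice', 'v1')")
db.execute("UPDATE files SET content = 'v2' WHERE path = 'notes.txt'")

# The audit trail: every change to notes.txt, in order.
history = db.execute(
    "SELECT kind FROM events WHERE path = 'notes.txt' ORDER BY id").fetchall()
print([kind for (kind,) in history])  # ['create', 'update']
```

The point of the trigger is that auditing isn't opt-in: an app can't touch a file without the event being recorded and queryable.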
I understand the "Well, who's gonna use it?" question, but it's circular reasoning. "Let's not get excited about this, because nobody will use it, because it won't catch on, because nobody got excited about it." If you get an industry giant behind it (Linus, Google, Carmack) you can absolutely reinvent a better wheel (e.g. Git, Chrome) and displace a huge market share in months.
> -- Why does an 8 core mac have moments that it is so busy I can't even click anything but only see a pinwheel? It's not the hardware. No app should have the capability, even if it tried, to slow down the OS/UI (without root access).
Back in 1999 I saw a demo of Nemesis at the Cambridge Computer Lab: a multithreaded OS that was designed to resist this kind of thing. Their demo was opening up various applications with a video playing in the corner and pointing out that it never missed a frame.
Even back then I understood that this was never going to make it to the mainstream.
> If the OS were modularizable it'd be trivial for me, a software engineer, to drop-in a replacement
You can do shell replacements and shell extensions on Windows. You can replace whatever you want on Linux. Non-customisability of MacOS is a Jobsian deliberate choice.
> event-driven architecture
Windows is actually rather good at this.
> all applications get the ability to share their content
> Back in 1999 I saw a demo of Nemesis at the Cambridge Computer Lab: a multithreaded OS that was designed to resist this kind of thing. Their demo was opening up various applications with a video playing in the corner and pointing out that it never missed a frame.
Yes, but Nemesis was a proper real time OS. The video-playing application had asked the OS for a guarantee that it would get X MB/s of disc bandwidth, and that it would have Y ms of CPU time every Z ms. The scheduler then gave that application absolute priority over everything else running while inside those limits, in order to make that happen.
This isn't hard. However, it conflicts with the notion of fair access to resources for all. The OS can only give a real-time guarantee to a limited number of processes, and it cannot rescind that guarantee. Why should one application get favourable access to resources just because it was the first one to reserve them all? How does the OS tell a genuine video-playing application from an application that wants to slow down the OS/UI?
This is why applications need special privileges (i.e. usually root) in order to request real-time scheduling on Linux. It's complicated.
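The admission-control side of that bargain is simple to sketch. Here is a toy utilization test in the spirit of the reservation scheme described above ("Y ms of CPU every Z ms"); the function name and reservation format are made up for illustration, not Nemesis's actual interface:

```python
# Each reservation asks for `slice_ms` of CPU in every `period_ms` window.
def admit(reservations, new_slice_ms, new_period_ms):
    """Accept a new real-time reservation only if total CPU demand
    stays at or below 100%; once granted, it cannot be rescinded."""
    demand = sum(s / p for s, p in reservations)
    demand += new_slice_ms / new_period_ms
    return demand <= 1.0

granted = [(10, 40)]           # video app: 10 ms every 40 ms (25% of a core)
print(admit(granted, 20, 40))  # another 50% of demand -> still fits
print(admit(granted, 40, 50))  # asking for 80% on top -> rejected
```

This also makes the fairness problem concrete: the test says nothing about who deserves the reservation, only whether it fits, which is exactly why the first application to reserve everything wins.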
Nemesis also did some nifty stuff with the GUI - individual applications were given responsibility to manage a set of pixels on the screen, and would update those pixels directly. This was specifically to avoid the problems inherent in the X-windows approach of high-priority tasks being funnelled through a lower-priority X server.
> Even back then I understood that this was never going to make it to the mainstream.
Back in 1998[1][2] Apple demoed the then-beta versions of OS X doing exactly that: multiple video streams playing without missing frames, being resized into the dock while still playing [3] (a feature that is no longer present), and even resisting hijacking by a bomb app. It all worked back then and it still does today.
> Non-customisability of MacOS is a Jobsian deliberate choice.
Also, there are multiple Finder replacement apps for the Mac; the thing is, nobody cares, because the Finder is good enough for most people.
Containerized apps are a step in the right direction, and I like the concept Mac/iOS uses for app installation, although I've seen users lose an app they've just run far too often.
> Why does an 8 core mac have moments that it is so busy I can't even click anything but only see a pinwheel?
Take a moment to consider your expectations of that operating system, and the expectations upon software used thirty years ago.
Thirty years ago it was uncommon for users to expect more than one application to run at once, so scheduling resource use wasn't an issue. Now your PC has all manner of processes doing work while you go about your business: applications are polling for updates from remote servers, media players are piping streams to a central mixer and attempting to play them simultaneously and seamlessly, your display is drawn with all manner of tricks to improve visual appeal and the _feeling_ of responsiveness, and your browser is doing all of this over again in the dozens of tabs you have open simultaneously.
So once in a while a resource lock is held for a little too long, or an interrupt just happens to cause a cascade of locks that block your system for a period, or you stumble across a corner-case that wasn't accounted for by the developers.
Frankly, it's nothing short of a miracle that PCs are able to operate as well as they do, despite our best efforts to overload them with unnecessary work.
And yes, I too hate Electron, but in all my decades of working on PCs I can't really recall a time that was as... Actually, BeOS was pretty f'n great.
How about 15 years ago? I was doing everything you describe as part of my daily computer use (a few browser windows open, a text editor running, a multimedia player, an IM and mail client running, etc) and had the same performance and usability frustrations.
The main difference is that if I render a video now, I'll do it in 4K instead of 640x480, and that if I download a game, it's 50GB instead of 500MB. But scaling in that direction is expected; my machine isn't any more stable, and nothing else has improved in the way of the examples described in the article.
If I showed my mobile phone to 2002 me, they'd be extremely impressed. If I showed the form factor and specs of my laptop, they'd be extremely impressed. But if I showed them how I use my desktop OS? The only cool thing would be Dropbox, I think.
I don't think responsiveness of the system is an unreasonable expectation. App A should not be able to perform a denial-of-service on app B simply by being badly programmed.
Yes, we do all kinds of tricks with context switching to make it seem like a computer is doing a whole bunch of things at the same time.
But if we visualized the activity in human terms, it would be an assembly line that is constantly switching between cars, trucks, scooters and whatnot at a simply astonishing rate.
Multicore processors literally run several things at the same time. Even a single core can literally run several instructions at the same time thanks to instruction level parallelism, in addition to reordering instructions, predicting and speculatively executing branches, etc. The processor also has a cache subsystem which is interacting with the memory subsystem on behalf of the code -- but this all works in parallel with the code. Memory operations are executed as asynchronously as possible in order to maximize performance.
What's more, outside a processor, what we call "a computer" is actually a collection of many interconnected systems all working in parallel. The northbridge and southbridge chips coordinate with the instructions running on the CPU, but they're not synchronously controlled by the CPU, which means they are legitimately doing other things at the same time as the CPU.
When you read something off disk, your CPU sends a command to an IO controller, which sends a command to a controller in the disk, which sends a command to a motor or to some flash chips. Eventually the disk controller gets the data you requested and the process goes back the other way. Disks have contained independent coprocessors for ages; "IDE" stands for "Integrated Drive Electronics", and logical block addressing (which requires on-disk smarts) has been standard since about 1996.
Some part of your graphics card is always busy outputting a video signal at (say) 60 FPS, even while some other part of your graphics card is working through a command queue to draw the next frame. Audio, Ethernet, wifi, Bluetooth, all likewise happen simultaneously, with their own specialized processors, configuration registers, and signaling mechanisms.
Computers do lots of things simultaneously. It's not an illusion caused by rapid context switching. Frankly, the illusion is that anything in the computer is linear :-)
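As a small userland illustration of genuinely simultaneous execution (assuming a multicore machine; a process pool sidesteps Python's GIL, so each task really does get its own core):

```python
import concurrent.futures

def busy_sum(n):
    # CPU-bound work: no I/O waits for a scheduler to hide behind.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Each task runs in its own process; on a multicore machine they
    # execute at the same time, not interleaved on one core.
    with concurrent.futures.ProcessPoolExecutor() as pool:
        results = list(pool.map(busy_sum, [100_000] * 4))
    assert len(set(results)) == 1  # same input, same answer, four processes
```

With enough cores, the four tasks finish in roughly the time one would take alone, which no amount of context switching on a single core could achieve.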
Following up on that last thought, about how linearity is the illusion: this talk explains in detail the Herculean effort expended by hardware and language designers to create something comprehensible on top of the complexities of today's memory subsystems. It's three hours and focused on how it affects C++, but IMO it's well worth the time and accessible to anyone who has some idea of what happens at the machine code level.
Absolutely. You can get systems which don't perform the illusion for you, like e.g. the Tilera manycore processors, and they've not taken off because they're a pain to program.
> I think anybody who really thinks about it would have to agree modern OSes are a disgusting mess.
And if you think about it just a bit longer, you conclude that if all modern operating systems are a disgusting mess, then being a disgusting mess optimizes for survival somehow. And until you figure that piece out, you're never going to design something that's viable.
Haven't we figured that out, though? They're disgusting messes because they got to where they are through incremental additions, without getting rid of the vestigial parts. That's no reason to believe that being a disgusting mess is literally a requirement for survival.
You are misusing the idea of evolution. Evolution requires a sufficient number of generations and of individuals, and I don't think either is present in this context.
I cannot speak about macOS, but I can about Windows. I've used Windows since I was 5 years old, for a bit more than 10 years. I've used Windows XP, Vista and 7. Then I've switched to Linux.
Windows is frankly terrible. I've also tried Windows 8, and I have Windows 10 in a virtual machine (with plenty of resources). It's true that it has too many layers, too many services, too much in the way of doing everything. It cannot run for 15 minutes without some service crashing or some window becoming unresponsive.
Truth be told, the same thing happened, to a lesser degree, when I used Ubuntu (the first distribution I tried). The experience was more pleasant overall, but the OS still felt too bloated.
My journey among Linux distributions led me to Arch Linux. I've been using it for a few years now, and all I can say is that it's been exceptional. 99% of the time the package upgrades just work (and don't take 2 hours like on Ubuntu), I've yet to experience an interface freeze, and I'm extremely productive with the workflow I came up with. My environment is extremely lean: the first thing I did was to replace desktop environments, which just slow you down, with window managers (at the moment I'm using bspwm, and it's the best I've tried so far, even better than i3wm). Granted, the downside is that you have to be somewhat well-versed in the art of Unix and Linux, but I would say that in most cases it's just one more skill added to the skill set.
All of this to say that, in my opinion, un-bloated OSes are already here. The messiest component on my system is undoubtedly the kernel, but what can you do? Surely you cannot expect to have a kernel tailored to the computers released in the last year.
Agreed. It's always bizarre to me that people make the choice to use bloated, crappy OSes and then complain about the very existence of these options as though they're compelled to use them. I've used Linux as my primary system since the first year of college, and that's taken me all the way from a snazzy, effects-laden setup to my current stripped down Openbox setup with no panels, controlled largely by keyboard shortcuts.
Not that my system is perfect, obviously, but bloat on your system that's not due to something you're explicitly running is a concern I just can't relate to at all, and frankly, I don't know why people put up with it.
> but most of the complaints are implementation details
... because the answer to a lot of the posed questions are those implementation details.
It's a bit like saying "There should be peace in the middle east. The details of the politics there are largely irrelevant. They should just make peace there, then it would be better".