Once, just for fun, I had an SGI do window management, another one serve up fonts, a Linux box serving apps via NFS, and yet another headless SGI running a shared app displayed on yet another Linux machine's X server session.
It was surreal. Launch that app, and all those machines contributed to its interaction.
Plus, it made that instance of RH Linux look like an SGI. 4dwm has a distinct look and feel.
For the most part, running over a 100T network, one could not tell all that was happening. (Was a high end CAD app.)
Later, I did run that same app remotely, using its Windows NT build and the Exceed X server for Win NT. Cool.
DEC or HP (I no longer remember which) wrote a cool demo that had a fish swim across as many displays as you gave it. It properly rendered the fish split across display edges. With walls of displays being so common now it doesn't sound like much, but at the time it was a pretty slick demo of the X Window System and its network capability.
No, this predated all that. I saw the demo in 1991 or 1992; you had a simple flat file with a list of displays to use, or could supply it on the command line. The program itself had to do all the heavy lifting to determine how much of each fish to put on each display.
That chapter was dumb even when it came out. X is awesome and it's amazing how well it has stood the test of time. I am not looking forward to the day when I won't be able to do half the things I do today with my EXWM setup because we needed to have "every frame perfect".
Agree to disagree then. X is shit. I remember first reading that chapter in 1998, and it resonated then just as well as it does now. X is shit. It is the weirdest abstraction of network "transparency" which inevitably manages to require state on both ends of the network connection -- it isn't robust. How well does your X session do when there is a network hiccup?
The protocol is round-trip heavy. Even with xcb, X clients need to make a tremendous number of round-trips, especially early on. There have been mitigations for X for low bandwidth links, but X networking has never been usable on high latency links. Ever forward X on an airplane? I pity you.
The drawing model of X is an antiquated relic. Server side rendering is pretty much useless, such that X has basically become a sort of bastardized inefficient VNC.
There's a reason no widely used screen remoting or thin-client system uses the X design. It was a poor idea then, and it is still deficient today.
That reinvention happened years ago, and it is in the likes of Citrix, which has existed for years now. It is used widely in industries outside of software development.
What I really mean by "reinvent" is the multi-user graphical computing model itself.
We've done a stellar job pushing a desktop out to someone. I've used a variety of these things, from the basic VNC on up, and they are pretty great. The better ones are damn tough to differentiate from local UX. People are running CAD on stuff like Citrix and it works. Better than one might expect too.
That's more like plucking out the killer use case and optimizing it. Great move, worth it, not bad, etc...
There are a lot of ways to use computers and applications, though.
X, warts and all, did offer up pretty fantastic flexibility and usability (when apps were well realized).
A few things:
Today we've got cloud computing, local computing, mobile computing, wearables. Lots of displays, more coming.
Thinking about it from the X POV, sort of how they thought about it back then, that looks an awful lot like:
Application server, desktop computer, other computer. They envisioned more than one person using a single computer. Multiple people using multiple computers, multiple people using modest computers to do UX with one or more powerful ones.
And on that point, I used to use multiple powerful computers with a single modest one all the time.
Maybe rethinking it again, like they did with X back in the day, would push the boundaries some. Who knows what people would do?
Someone walks into a room, has a presentation on their wearable, and pushes its display to a nearby machine, or both display and UX. They keep their data where it lives, on their device, but can benefit from the more robust UX.
Mobile computing devices are getting really powerful. Run an app on one from a laptop, easily, again, perhaps keeping data local to the mobile, for whatever reasons.
Just spitballing here, but the general thought I was trying to convey, without realizing it earlier, is maybe rethinking things from the basics forward, allowing for a lot of options, possible options, just might shake out other ways to do things, use all these crazy devices.
I'm thinking you may not have actually used RDP over a medium-to-low bandwidth or medium-to-high latency connection. Because RDP is still usable and X is a laggy mess.
Xlib, the original C binding to the protocol, is not great in that situation, because it makes many asynchronous requests look synchronous. Most toolkits are pretty terrible in that situation. libxcb instead of Xlib can be a great improvement, as it exposes what can be asynchronous (though it often isn't used well).
I regularly ran X over ISDN back when that was a thing. Framemaker wasn't exactly snappy, but it was usable. Lighter stuff was fine. Watching contemporary RDP keep up over 64k was painful.
With server side font rendering all but forgotten, among other things, any modern app will struggle over a low bandwidth link. And X has always been crap for latency; even with xcb, it ends up being very round-trip heavy. Modern technology has addressed the bandwidth problem, but has a much bigger challenge with the latter.
Modern RDP can do seamless remote audio and full 60fps video and 3D graphics, on Windows servers. Linux server stuff like libvirtd just uses it as a glorified VNC.
I like being able to run a Linux desktop in a VM on Windows and "extend" to another monitor with VM X apps rendering to a Cygwin Xorg instance. Wayland can't do that.
Yup. In my humble opinion, the people who built X really did think through what multi user graphical computing meant.
I like X. Took me a while to understand it, but once I did, I used it hard and appreciated what it allowed me to do.
Doing stuff like dual or multi head computers. Multi users each with their own keyboard and mouse can happen with X. And if one needs another user, doing it via network just works.
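As a hedged sketch of what such a multi-seat setup can look like (the identifiers, device names, and layout names here are illustrative, not taken from any specific system), classic Xorg lets you define one ServerLayout per seat, each binding a screen to its own keyboard and mouse:

```
# Hypothetical two-seat xorg.conf fragment. Each ServerLayout ties one
# screen to its own input devices; a separate X server runs per seat.
Section "ServerLayout"
    Identifier  "seat0"
    Screen      "Screen0"
    InputDevice "Keyboard0" "CoreKeyboard"
    InputDevice "Mouse0"    "CorePointer"
EndSection

Section "ServerLayout"
    Identifier  "seat1"
    Screen      "Screen1"
    InputDevice "Keyboard1" "CoreKeyboard"
    InputDevice "Mouse1"    "CorePointer"
EndSection
```

A server instance is then started per seat (e.g. `Xorg :0 -layout seat0` and `Xorg :1 -layout seat1`), and the usual access controls (`xhost`/`xauth`) determine which users may open windows on which display. Modern distributions more commonly delegate seat assignment to systemd-logind, but the protocol-level story is the same.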
> I like being able to run a Linux desktop in a VM on windows and "extend" to another monitor with VM X apps rendering to a Cygwin Xorg instance. Wayland can't do that.
There is no reason wayland couldn't do that, other than that your compositor doesn't support the concept of virtual screens. I can't say that this is a very high-priority feature for any compositors at the moment.
It won't be matched unless someone sees enough of a need to make it happen. Wayland the protocol doesn't have an explicit goal of doing everything that X did. The design is different.
As far as I can tell, the original vision of X was to unify all the proprietary Unix vendors with a standard windowing protocol. The network capabilities happened because they wanted to support dumb terminals connected to mainframes. Most of those proprietary Unix vendors are now gone. Currently it's not clear to me what role X is supposed to have, if any.
Wayland (the protocol) supports the necessary bits and it's up to the compositors if they want to go the extra mile to support everything.
My own personal take is that most compositors won't because X does not represent multi user graphical computing in the way that most people use it today. That role is fulfilled by web applications.
The compositor would have to talk to the virtual GPU driver to allocate additional surfaces on the host and make them available to the guest.
The next best alternative currently is looking glass[0]. Linux host with GPU1, Windows guest with GPU2 via pass-through and then copy the framebuffer from the guest to the host.
Agreed. And I may be showing my age, but I learned C and C++ on single MIPS boxes (R3000 processors) shared between 20 users, all on X terminals. The thought of sharing one 32-bit processor running at 25MHz between 20 people running any kind of graphical environment is pretty wild by today's standards.
I am just gonna ramble for a minute. Plz either indulge or ignore. I miss this stuff! Thanks.
I would have loved to profile that little system. It's crazy how lean those older systems really were.
A 30MHz R4K could play MP3 files streamed via NFS (90 percent CPU utilization, not headless, with a local desktop logged in and active). I had one, at that clock speed useless by the time I got it, doing just that, because why not?
After reading your comment, I realize I had a machine just playing a compressed audio file with the resources needed to serve 20 of you a useful computing environment!
What kind of network were you on? Was it even ethernet? 10T?
My first multi-user experience was horrible! It was on a 386/25 running Xenix, and it had capacity for 16 users via either 19200 or 9600 baud serial and VT100 terminals... I had 12 users at peak on that one, and most of them ran multi-user WordPerfect and some MRP thing, Symix, I think.
I got the job because I read all the manuals. First sysadmin post and it was a pig. Learned a ton though! Enough to realize I could have a lot of fun on the better UNIXes, and scored a position working with SGI computers.
(in that timeframe those computers with all the spiffy software so well engineered were like computing from the future, so much fun)
Another time, while doing sysadmin on an SGI Irix multi-cpu Origin system serving some 30 users doing high end cad, I did some data logging.
That system had 16GB RAM, 4 R12K CPUs with super fat caches, dual homed 1000T Ethernet, BTW. I forget the clock. Probably 400MHz.
I logged user activity every few seconds for roughly a week. This was an effort to see whether overall system speed was impacting productivity. I had a few users grumble during a peak time, so...
Learned a few things:
The system itself could perform near max capability when there were 4 concurrent user CAD demands. OS overhead was a few percent tops, in almost all circumstances. This was not in and of itself any kind of issue. Irix ran pretty damn lean.
Once in a blue moon, the system would page, but the Irix file caches are gold. This never really was an issue either. Maybe for a few minutes at a time during a serious push tops.
A few times per week, usually on a Friday, they would max it out, and more than 4 concurrent, high demand sessions would be present. Deffo slowdown.
Among the users, the average full compute demand was roughly 2 minutes. Some instances of this were longer for power users working on more substantial models. 5 to 10 minutes. These were sporadic, during the day. The reason was simple, and it was a model recompute. Happens as users roll up all their changes.
Interestingly, light duty users, doing detail drawings, minor model updates, sustaining engineering type stuff, used the system heavily, perhaps 60 percent of their workday, but rarely hit peak single core compute for more than a few seconds at a time. Was hard to even see these. They did not really do big recomputes, but they did do the occasional big drawing render, and those were as bad as the power users'. But not even daily.
Most users hit the system both harder and less frequently. 20 to 30 percent of their day, but they did post up regular ~2 minute single core max compute demands. These people were working on subsystems, and would often copy one, modify, update, check it in, done, next.
The power users hit the ~5 to 10 minute peak compute and used the system half the day. They were working at the entire product level. Big demands on RAM, CPU, Network. Multiple times daily.
What I did was put local compute resources in front of the power users and restructure the network to give them a quieter, higher throughput subnet.
There were a few of these users. They did become network bound, as the data transfer was significant compared to everyone else running on the main server with its high throughput disks (15K RPM Cheetah drives). This impacted their get-going times, but they only did that occasionally. Once things got cached locally, non-issue overall. They adjusted how they worked and marginalized it. The biggest saving was honestly to just stay logged in, screen locked, etc...
The rest of the users reported a significant speed up after these things were done.
Now, here's the thing!
A second round of data logging actually did show the power users improve. And they responded by burying those boxes, then opening up a second session on the main one to continue. LOL, fine. It was good enough before, so I said nothing. They were power users working at the product level, and gains there paid off nicely in terms of total project capacity per year. No worries. There to help, blah, blah. But I did chuckle about them very rapidly figuring out how things worked.
(They used the excellent virtual desktop feature in the 4dwm window manager common to SGI computers. One click and they were viewing their local session, one more click and they were viewing their remote one, next.)
But everyone else demonstrated no real change!
The data just was not there to support a big gain, and it totally was not there after the power users did their thing by running two sessions anyway.
So, what was it?
Turned out to be network. When I put the local compute where it needed to be, we moved some switches and had inadvertently improved load balancing at the physical layer. Some users, prior to this, were basically blocking others.
Latency went down for everyone, and average throughput went up, and that proved to be the single most gratifying improvement for them, and it was just faster interactive response. No effective change in actual compute / time unit available, nor demanded from them.
The parallels today are obvious. Get that page response time down. Effective throughput, what users can actually get done, may bump up a little, but the vast majority won't use it.
But, their feel, perception will ramp right up to the positive.
This is true even when it comes at the expense of other things maybe being slower, taking longer. Just get them feedback NOW, and they will feel better about it all anyway.
Interesting story. Some design issues don't change!
We were on Ethernet, old fashioned 10Mbps Ethernet (I think), with physical "taps" into a shared medium and terminators on the end. This is easy to remember, as I recall the class doofus one day sitting there absently chucking an object up and down in his hand while people angrily wondered why a whole string of Xterms weren't working. The duty programmer storms into the room, confiscates the item, and plugs it in again.
This same guy, a couple years later, manages to helpfully try to remove "." files from someone's directory while logged in as root, and manages to remove all of the honours students' home directories.
I was in a used bookstore a few years ago, and ran across a book titled something like "User Interface Design with MOTIF". Well, it had been a number of years since I last touched X-Windows (with intent to draw pixels, anyway), and I started leafing through it.
Looking for user interface examples.
Leaf, leaf, nothing.
Flip, flip, flipflipflip . . . oh, we're at the index. Nothing.
Over 300 pages and no pictures. Zero diagrams. No user interface principles or discussion about good or bad practices. It was all descriptions of APIs. There was nothing about user interface design in there at all.
IMHO, that's all you really need to know about X-Windows.
> IMHO, that's all you really need to know about X-Windows.
That there was at least one poor, misleadingly titled book about MOTIF?
I've had many X-related textbooks over the years, and frankly the documentation was exceptionally good and felt like a total waste on something as byzantine as X.
(An excerpt from another longer reply I just posted):
From: npg@East (Neil Groundwater - Sun Consulting)
Date: Wed, Jun 27, 1990
Subject: Humor from Dennis Ritchie (at USENIX)
(Actually Dennis's latter remark was attributed by him to Rob Pike)
from Unix Today 6/25 page 5.
"..., Ritchie reminded the audience that Steve Jobs stood at the same podium
a few years back and announced that X was brain-dead and would soon die. "He
was half-right," Ritchie said. "Sometimes when you fill a vacuum, it still
sucks."
And yet, here we are, over a quarter-century later, and that "disaster" is still allowing our programs to run on one computer and display the result on another computer on the other side of the world.
The 'Unix Hater's Handbook' is supposedly a humorous work. But it was mainly used in the 1990s by non-Unix companies as anti-Unix propaganda. Unfortunately for them, all of today's main computer/software companies either use Unix or Unix-like operating systems (Apple:BSD Unix, Google:Android, IBM and Others:Linux, Microsoft:WSL). https://en.wikipedia.org/wiki/The_Unix-Haters_Handbook
As someone who has used Unix for 35+ years, I feel the book is great. It actually does more to explain why things are the way they are in UNIX than any other book I've read, pro-Unix or not. I don't agree with everything it covers, and many of the complaints in the book have been addressed over the years, but it is definitely still relevant today.
To be fair: the UHH is a legitimately great book, a fun read, and absolutely something every modern developer should know. It's opinionated in exactly the right way, even if some of its conclusions turn out to have been wrong in hindsight.
But as for X11: yeah. You can complain all you want about microdesign issues, but longevity beats all. I can run an unmodified X11R1 client from 1987 against the default install of pretty much any newly installed Linux/BSD box and it still works, something not true of literally any other contemporary GUI toolkit of the era.
Generally I agree. However, I would say this with regard to iOS (not macOS) and Android: while underneath they are Darwin and Linux based respectively, almost all interaction with the OS (including by most devs) is done through the iOS and Android APIs. They could replace what is underneath with something very different and you wouldn't be any the wiser. There isn't really much to separate something like Ubuntu and macOS in terms of what they are (other than polish).
In Jaron Lanier's book "You Are Not a Gadget", he says the open source community is very good at making clones of existing things but not at inventing new things. I would argue that programmers in general are very good at making clones of existing things. This has allowed Unix-like operating systems to be ubiquitous, as they are easy to copy and good enough.
dmr's foreword is my favorite part though. As others have said, a lot of the criticism in the book has been addressed over the years. I came late to the party, so I missed out on a lot of the early bugs and misfeatures of UNIX.
No it isn't. Almost nobody uses raw X over long distances because it is unusable. Everybody uses something else like VNC, RDP or NX. Note that NX actually moved away from the X protocol and is now just a VNC-like protocol, i.e. it only deals with pixels. Every modern remote desktop protocol works that way because it is better.
One of the fun things about working at Sun back in the day was that there were folks trying to fix some of the issues that were legitimately raised by the UHH.
That said, I shared Don's disgust with X. The difference between running X on a workstation and SunTools felt like a 300% slowdown.
And yet today I sit here typing into a window that is being displayed by an X server running on a Windows OS on a computer that is acting pretty much like an X terminal for my Linux server. It has warts, true, but it's still flying, as they would say on Firefly.
I have often wondered what the world would be like if we had decided to move the GPU into the monitor and ran a parameterized display system over HDMI rather than something that looks like a cancerous NTSC composite video signal.
>One of the fun things about working at Sun back in the day was that there were folks trying to fix some of the issues that were legitimately raised by the UHH.
Like opening core files (which have gigantic gaps of unmapped zeros) by loading them into the XView text editor when you double click on them in the XView file manager? ;)
That's actually how core dumps grow and reproduce and evolve, like those wasps that lay their eggs in other insects and turn them into zombies, by embedding themselves in an unwitting host process and making it core dump even bigger!
That disaster is the major reason why UNIX FOSS clones will never get beyond single-digit market share on desktop or mobile computing devices.
You can have all the fun you want with your beloved X; I'd rather use modern windowing platforms and their composition engines.
Also, UNIX underpinnings are completely irrelevant on Android, iOS and ChromeOS.
As for macOS and Windows, the UNIX underpinnings are there to cater to the GNU/Linux crowd that buys Macs and Windows devices instead of buying GNU/Linux hardware.
> But it was mainly used in the 1990s by non-Unix companies as anti-Unix propaganda.
Some strange revisionism there, as few of the authors of UHH were involved with Windows, which is what ended up completely eating commercial Unix's lunch... a win that was not so much due to Microsoft's competence as their competitors' stupidity.
Meanwhile, calling Mac OS X or Android Unix-like just because they have some tools in common (Mac OS X is only vaguely a BSD, and launchd seems to get some love from the same people that have a conniption fit over systemd)... none of those systems use X for sure. Desktop Linux is a rounding error, and the part of Linux that gets used heavily for servers is the reason that WSL is a thing (to ease development). Yeah, that has basically fuck all to do with the disasters in SysV/commercial Unix... not too many people are using SysV IPC on Linux, and cgroups, epoll, and io_uring hardly remind me of the miserable APIs of now ancient Unix.
> allowing our programs to run on one computer and display the result on another computer on the other side of the world.
I feel sorry for anyone having to use X over a link of that latency. Especially if their internet service has a tendency to go down. How many X clients handle loss of connection better than just terminating?
And are there really that many people using remote X as compared to RDP or Citrix? I would be surprised to find out that remote X is widely used in 2020 (if you can even call its brief heyday the 90s, while commercial UNIX was losing its balls to Windows NT due to vendor incompetence)... and I would risk the age discrimination suits and fire the IT team.
X was already an ugly obsolete piece of crap in the 90s when it was foisted upon us by all the major commercial Unix vendors. It should have rightfully died with SGI, Sun, and/or DEC.
A lot of the criticisms have been addressed though. Basically no one outside of enthusiasts uses X any more; Android and Chrome OS use an entirely different graphics system. The filesystems have been replaced. Shells have been replaced and/or obviated for most users.
Really, and this is an OK choice, but is a choice all the same:
What happened is others settled on a subset of multi user graphical computing.
That is not replacing X. It is something different, less.
Maybe less is more too. I have an open mind.
However that goes, X nailed multi user graphical computing. Pretty much any crazy thing you can think of can be done on X.
Want to make a computer that several people use locally, each with their own screen, keyboard, mouse? X can do that, and permit those users to view and interact with any combination of each others display data.
One says, hey can you sort this for me? Another says, "sure", and a window pops up on their screen...
Now, replicate that same setup, each user has their own powerful computer, maybe with a common, shared display among them, like a wall display or something.
No problem. X does that too, and chances are only the sysadmin differs. The apps will have no clue.
It may be that we just do not work that way, or maybe not enough know how, but let's be clear here.
X is not being replaced. A subset at best is being put forward.
I think that means we just are not really doing multi user graphical computing. And that is OK.
X has never supported reliable screen locking. If the screen locker program crashes or is killed because the OS ran out of memory, the X server is unprotected.
It's an unpleasant feeling to come back to an unlocked screen in a shared building.
Wow, can you tell me where I can find out about how to do those things, like:
> Want to make a computer that several people use locally, each with their own screen, keyboard, mouse? X can do that, and permit those users to view and interact with any combination of each others display data.
> One says, hey can you sort this for me? Another says, "sure", and a window pops up on their screen...
Is there literature or something about setting up complex things like this in X? I'd be interested in learning more.
Btw, I have shells, keyboards, mice, pens for my Android device and use all of them.
I think some of the X capability would be amazing on mobile.
We have hobbled mobile. Not completely as I know I can do really powerful things still, and when I do people gawk and point and usually ask a lot of questions.
I remain unconvinced this is all a good idea. We will see in a decade or two.
Here is one advantage of X that was notable, even today:
Keeping user data out of user hands.
A high end CAD application I used to do sales, service, sysadmin for used X and the SUID permission bit in UNIX to accomplish keeping data out of users hands.
On my bigger systems, I would run one application image, shared by a lot of users. This app, when run by a user, had the data permission and "ran as" data admin. The user has no permission.
All access to the data was through the app. And the users had no access to the app or data, just a remote X session.
I had tens of users interacting on big designs that way. No worries.
Secondly, I could, where needed, spawn a second copy of the app to run locally, served up to (or even just run directly by) needy users, and everything still worked fine. I just had the app run as a different user, and the access barrier remained intact.
But the big win was one application, one data repository, many users via X. I made a scripted login for them. They would launch it, do their thing, and that's it.
All I needed was a respectable X server for whatever box they ran locally.
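A minimal sketch of that permission trick (the paths and names here are made up for illustration, and the current user stands in for the dedicated data-admin account): the repository is readable only by its owning account, and the SUID bit on the application binary makes it execute with the owner's identity no matter who launches it, so all data access funnels through the app.

```shell
# Hypothetical layout. The repository directory is owner-only,
# so ordinary users cannot touch the data directly:
mkdir -p cadsite/data
chmod 700 cadsite/data

# The application binary (a stand-in here) gets mode 4755: the leading 4
# is the SUID bit, so the app runs as its owner and reaches the data
# on the users' behalf:
cp /bin/true cadsite/cadapp
chmod 4755 cadsite/cadapp

ls -l cadsite/cadapp   # mode reads -rwsr-xr-x; the 's' is the SUID bit
```

In the real deployment the binary would be owned by the data-admin account and users would reach it only through a remote X session, so neither the app image nor the repository ever leaves the server.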
I could admin that setup, and did, on a free JUNO dialup from all over the country.
Context matters. The notable part today isn't hacker protection. I'm sorry I implied that.
It's user error / protection. And at the time, getting that on anything that required data move around? Horrors!
We've got that today, though it took an amazing amount of thrashing about to get where I was then. And it's more secure in the adversarial sense too. Still, a lot of data has to move around, and it's still a PITA when it does.
The better tools today are rapidly converging on that same basic model: serve apps to users through display-only protocols.
At the time when X was first being developed, there simply were no open source alternatives. I used a few of the closed source ones and they were nice enough, but generally available on only a single platform (e.g., HP), which made them fairly useless.
Even if Wayland succeeds, it'll be a long time before it has truly replaced X. In reality, I think it's more likely that both will be (or already have been) been run over by web apps.
The window system needs to be displayed somehow too. What works for the window system works for the browser too. There's no need for an additional layer of window system once you have a browser that can draw on the screen itself. A web browser can make a much better scriptable window manager / desktop environment all by itself than anything that's possible with X-Windows.
If you want to be pedantic, I didn't call it "X- Windows". I called it "X-Windows". Nobody ever puts a space between the "X-" and the "Windows". The title of the article we're discussing that I wrote and named uses the term "X-Windows" because I specifically told the editors of the book to spell the name that way on purpose, to annoy X fanatics. That fact was stated in the last sentence of the article.
So what is there about drawing on the screen and handling input events and handling network requests that a window system can do, but a web browser can't? Why does there need to be a window system, if you already have a web browser?
Does your phone have a web browser? Does it also have a window system? How often do you open and close icons and move and resize windows around on your phone, compared to how often you browse the web on your phone? Name one thing a window system can do that a web browser can't these days.
For example, is this a web browser or a window system?
You should read up on the history of window management and alternative designs for window systems and interactive graphical user interfaces. Things weren't always the way they are now, and there are many different ways of doing things that are a hell of a lot better than the status quo.

Back before everybody blindly imitated Google and Facebook and Apple and Microsoft because they didn't know any better and never experienced anything different, there were a lot of interesting original ideas. But now everybody's into Cargo Cult programming and interface design: blindly imitating shallow surface appearances, over-reacting to the latest trendy craze (like flat design) that was itself an over-reaction to the previous trendy craze (like skeuomorphism), without ever looking deeper into the reasons, or god forbid scientific studies and research and user testing, or further back than a few months into the past, and never understanding why things are the way they are, or how and why they got to be that way.
To illustrate my point, and lead you to enlightenment (and I don't mean the Enlightenment window manager):
Here is one of Gosling's earlier papers about NeWS (originally called "SunDew"), published in 1985 at an Alvey Workshop, and the next year in an excellent Springer Verlag book called "Methodology of Window Management" that is now available online for free. [1]
Chapter 5: SunDew - A Distributed and Extensible Window System, by James Gosling [2]
Another interesting chapter is Warren Teitelman's "Ten Years of Window Systems - A Retrospective View". [3]
Also, the Architecture Working Group Discussion [4] and Final Report [5], and the API Task Group [6] have a treasure trove of interesting and prescient discussion between some amazing people.
F R A Hopgood, D A Duce, E V C Fielding, K Robinson, A S Williams
29 April 1985
This is the Proceedings of the Alvey Workshop at Cosener's House, Abingdon that took place from 29 April 1985 until 1 May 1985. It was input into the planning for the MMI part of the Alvey Programme.
The Proceedings were later published by Springer-Verlag in 1986.
5. SunDew - A Distributed and Extensible Window System
James Gosling
SunDew is a distributed, extensible window system that is currently being developed at SUN. It has arisen out of an effort to step back and examine various window system issues without the usual product development constraints. It should really be viewed as speculative research into the right way to build a window system. We started out by looking at a number of window systems and clients of window systems, and came up with a set of goals. From those goals, and a little bit of inspiration, we came up with a design.
GOALS
A clean programmer interface: simple things should be simple to do, and hard things, such as changing the shape of the cursor, should not require taking pliers to the internals of the beast. There should be a smooth slope from what is needed to do easy things, up to what is needed to do hard things. This implies a conceptual organization of coordinated, independent components that can be layered. This also enables being able to improve or replace various parts of the system with minimal impact on the other components or clients.
Similarly, the program interface probably should be procedural, rather than simply exposing a data structure that the client then interrogates or modifies. This is important for portability, as well as hiding implementation details, thereby making it easier for subsequent changes or enhancements not to render existing code incompatible. [...]
DESIGN SKETCH
The work on a language called PostScript [1] by John Warnock and Charles Geschke at Adobe Systems provided a key inspiration for a path to a solution that meets these goals. PostScript is a Forth-like language, but has data types such as integers, reals, canvases, dictionaries and arrays.
Inter process communication is usually accomplished by sending messages from one process to another via some communication medium. They usually contain a stream of commands and parameters. One can view these streams of commands as a program in a very simple language. What happens if this simple language is extended to being Turing-equivalent? Now, programs do not communicate by sending messages back and forth, they communicate by sending programs which are elaborated by the receiver. This has interesting implications on data compression, performance and flexibility.
What Warnock and Geschke were trying to do was communicate with a printer. They transmit programs in the PostScript language to the printer which are elaborated by a processor in the printer, and this elaboration causes an image to appear on the page. The ability to define a function allows the extension and alteration of the capabilities of the printer.
This idea has very powerful implications within the context of window systems: it provides a graceful way to make the system much more flexible, and it provides some interesting solutions to performance and synchronization problems. SunDew contains a complete implementation of PostScript. The messages that client programs send to SunDew are really PostScript programs. [...]
4. Ten Years of Window Systems - A Retrospective View
Warren Teitelman
4.1 INTRODUCTION
Both James Gosling and I currently work for SUN and the reason for my wanting to talk before he does is that I am talking about the past and James is talking about the future. I have been connected with eight window systems as a user, or as an implementer, or by being in the same building! I have been asked to give a historical view and my talk looks at window systems over ten years and features: the Smalltalk, DLisp (Interlisp), Interlisp-D, Tajo (Mesa Development Environment), Docs (Cedar), Viewers (Cedar), SunWindows and SunDew systems.
The talk focuses on key ideas, where they came from, how they are connected and how they evolved. Firstly, I make the disclaimer that these are my personal recollections and there are bound to be some mistakes although I did spend some time talking to people on the telephone about when things did happen. [...]
The membership of the Architecture Working Group was as follows:
George Coulouris (Chairman). James Gosling. Alistair Kilgour. David Small. Dominic Sweetman. Tony Williams. Neil Wiseman.
[...] The possibility of allowing the client process to download a procedure to be executed in response to a specific class of input events was discussed, and felt to be desirable in principle. However, more work was needed to establish the practicality in general of programmable window managers. The success of Jim Gosling's SunDew project would be an indicator, but it was felt that it would be fruitful to initiate a UK investigation into this issue. John Butler pointed out in discussion that in the Microsoft MS-Windows system an input event received by a client process could be sent back to the window manager for interpretation by one of a set of translation routines. [...]
[...] There was a strong feeling that, at this stage in their development, window managers need to be very flexible. The downloading-of-procedures idea in James Gosling's work was seen as a nice way to achieve this. In this context protection issues were seen to be important. There need to be some limits on loading arbitrary code, especially since the window manager has in some sense the status of an operating system in that it must be reliable and not crash. One idea for achieving protection was through the use of applicative languages which are by their nature side-effect free. [...]
21.4 DISCUSSION
Teitelman: Referring to point (3) in your list, can you characterize the conditions under which a window manager would refuse requests from a client? It feels so soft that the user might feel uneasy. Is the window manager surly? Is it the intention that requests are honoured most of the time, and that failure is rare?
Gosling: Yes, but failure should be handled gracefully.
Bono: I think that there are two situations which arise from the same mechanism. The first is occasional failure such as a disk crash. The program environment should be robust enough to deal with it. The other situation is where device independence is written into the system. What happens if a colour device is used to run the program today, where a black and white device was used yesterday? This may show up in the same mechanism, so you cannot say that it is rare.
Gosling: When an application makes a request, it should nearly always be satisfied. The application program can inspect the result to see if it is satisfied exactly. If it asks for pink and it doesn't get it, it should be able to find out what it did get. Only then should the application deal with the complex recovery strategy that it may need. We need some sort of strategy specification. What sort of strategy should we use to select a font or colour if there is no exact match? What feature is more important in matching a 10 point Roman font, its size or its typeface? At CMU, if you point at a thing and want 14 point Roman you may get 14 point Cyrillic, which is not very useful. On point (7), are you implying a dynamic strategy, or one determined at system configuration?
Gosling: Harold (Thimbleby) is all for downline loading this. In reality this is not usually very easy. GKS adopts a compromise - an integer is used to select a predefined procedure. As you may only have 32 bits, this does not give you many Turing machines. Something of that flavour would not be a bad idea.
Cook: Justify synchrony in point (2).
Gosling: This is mostly a matter of complexity of program. Not many languages handle asynchrony very well. If we have Cedar or Mesa then this is possible.
Teitelman: How we do it in Cedar is that the application is given the opportunity to take action. In Mesa we require that the application catches the signal and takes any action. In the absence of the application program intervening, something sensible should be done, but it may impose a little bit more of a burden on the implementer.
Gosling: In Unix software there is no synchronization around data objects. In Cedar/Mesa there are monitors which continue while the mainline code is running; there are no notions of interrupt routines.
Teitelman: This is a single address space system. We are unlikely to see this in Unix systems.
Newman: How realistic is it to design an interface using your criteria?
Gosling: Bits and pieces already appear all over the place. The CMU system deals with most of this OK, but is poor on symmetry. The SUN system is good for symmetry, but not for synchrony. It is terrible on hints, and has problems with redraw requests. There is no intrinsic reason why we can't deal with all of these though. The problem is dealing with them all at the same time.
Williams: A point that I read in the SunWindows manual was that once a client has done a 'create window' then the process will probably get a signal to redraw its windows for the first time.
Gosling: Right, but it's a case of maybe rather than will. Some programs may redraw and redraw again if multiple events aren't handled very well, and give screen flicker.
Hopgood: Do you have a view on the level of interface to the window manager?
Gosling: Clients don't want to talk to the window manager at all, but should talk to something fairly abstract. Do you want to talk about this as the window manager as well? The window manager shouldn't implement scroll bars, or buttons or dialogues, we need another name for the thing which handles the higher level operations.
>That fact was stated in the last sentence of the article.
I know, that's what I was referring to. Myself, I spell it "X-Windows" to annoy anti-fanatics.
I'm confused about your point about web browsers. Web browsers do not handle any sort of graphics or display on my phone. There, they're an app; they handle graphics and display within the app. It's the same thing for Chrome OS. Some people seem to think that Chrome OS is a giant web browser. It's not. It's essentially a regular Linux distribution.
I was using Linux for years before Facebook existed or Apple was considered a legitimate computing company again.
I'm pretty confused about your comparisons to web browsers or Windows 93. I would consider those more in the realm of window managers, if even that.
It seems that this entire discussion has people confused about window systems and window managers. If you want to write direct to a frame buffer, go for it I guess.
The comment I replied to said X and Wayland would be 'overrun by web apps' and I'm not sure what that means. ChromeOS is not a web app and neither is chrome.
Clarifying my comment, if I was to create (say) a tool today to monitor my network, with graphical charts, push buttons, etc., it's almost inconceivable that I would choose to write it using an X toolkit of some kind, even if it were only meant to run on Linux desktops.
Instead, I would toss up a web server and write the front end using web toolkits, etc. I think a lot of others would too, and in that sense, desktop app development is being overrun by the web (plus JavaScript, DOM, CSS, etc.).
It's true that web browsers have to run on something, but on some platforms, that something looks a lot more like bare metal. I think it's conceivable that X and Wayland will head into the sunset as tech like emacs has. That doesn't mean I don't like them--I do--but realistically their future is unclear.
ChromeOS is not a web-based OS in the sense of being a 'web app', and neither is its display or windowing system. I don't think anyone here is using the same definition of 'web app'.
It is a Web app juggler OS without any trace of X.
In any case it was just an example; there are plenty of others where X's existence is meaningless, replaced by Web apps, the HTML 5 variant of Citrix, IoT interfaces, or the whole set of SaaS applications without any native presence on FOSS UNIX clones.
It's like the Eight Megabytes And Constantly Swapping joke for Emacs. These days a minimal Emacs is more like 18MB RSS rather than 8 but that's still tiny compared to modern apps.
I bet 99% of that 12MB is graphical assets: different sized icons for different display geometries. I always found asset management for multiple platforms to be most annoying when developing for mobile.
This reminds me of when people started to realize Microsoft Foundation Class (MFC) was blowing up and were bemoaning code-bloat. In 1997.
The very mention of MFC gave me shivers down my spine. The macro horror show and indecipherable error messages...
Thankfully, I never had to do much MFC professionally, but holy shit, if you inadvertently messed with one of the generated macros, good luck. I remember it sometimes being easier to throw away everything and start from scratch than debug the horrible compiler errors.
I don't know about your phone, but mine, which is not new, has 6GB of RAM, and the latest Samsung S20 flagship has 12GB. In comparison to the total RAM, it's much better nowadays.
I have been using X-Windows in that timeframe - on VMS workstations. The biggest problem was indeed that X-Windows was mostly a place to run xterms on. But there were a few native GUI applications and they were quite nice. The other big problem was that Motif wasn't freely available; you had to pay license costs. If not for that, "Unix on the desktop" might have actually happened. And as time progressed, Motif wasn't unbearably slow any more.
I remember those days too, the lack of a good GUI toolkit felt like such a big deal. Funny that nowadays I find myself using applications that don't use a UI toolkit, Emacs (compiled without toolkit), st, mupdf, games that do their own UI. The only application I have running that uses GTK is chromium.
The point I was trying to make was that there were too few GUI applications at all; most of the applications I was running were just terminal applications. A widely available GUI toolkit might have helped this. The funniest example was a chemical database package, which was VT100-based, running inside an xterm, but for graphical input/output it would spawn a separate window based on X toolkit graphics. Beyond that, those expensive workstations (like $20k in today's money) were mostly running xterm. While technically far superior to the PCs of that time - there were also Win3.1 machines around, running a dreadful VT100 emulation - they made little out of it, because there was so little software. So they were eventually replaced with PCs at a fraction of the price.
Ironically, even though it was the time of Linux 1.0, I suggested they should run Linux, as they would get excellent X Window System support; unfortunately the suggestion wasn't taken up.
"don't call it 'X-Windows'. It is 'X', or 'The X Window System', if you must."
This was in an environment where upper management wanted to take away the desktop workstations from non-developers in favor of X terminals. "We move all that work to the server now; you don't need all that expense on all these desks. Server-side is the future."
Good freaking luck if you were Tech Support or QA.
The fun part: we were spending as much on the X terminals and supporting equipment as Sparcstations would have cost us at our great corporate rate (my desktop was on its third generation by then).
Unsurprisingly, this was merely an early checkpoint in what became a tailspin.
"The right graphical client/server model is to have an extensible server. Application programs on remote machines can download their own special extension on demand and share libraries in the server. Downloaded code can draw windows, track input events, provide fast interactive feedback, and minimize network traffic by communicating with the application using a dynamic, high-level protocol."
Huh, that's surprisingly prescient.
Applications on remote machines can download their own special JavaScript on demand and share code in the server. Downloaded code can draw windows, track input events, provide fast interactive feedback, and minimize network traffic by communicating with the application using HTTP.
PostScript was amazing. I've even seen a Z-machine V3 interpreter on rec.games.int-fiction - enough to run Zork and old versions of Curses!. But writing the interpreter is easier than finishing that last-mentioned game.
When I first saw X, I had already grown up with PCs, so the whole concept of running the “main program” on another computer just to display it on my computer seemed so illogical I thought I must just be misunderstanding it. I mean, I have a computer right here - why not run the program on this computer?

However, if you think about it, the X philosophy was actually mostly correct (or at least more useful than it initially seemed); they just did it in a clunky way. We’ve been slowly migrating to web applications, which are the same basic concept: the program runs on the web server and your browser displays the results. This actually is better from a security standpoint (the server can access data that isn’t distributed to the client) and a maintenance standpoint (I can deploy a new version to the server and it’s instantaneously available to all clients). Running a “program” remotely and displaying the results locally is so ubiquitous now that my teenage kids are confused by the entire concept of desktop applications.

What I’d love to see, and wish I had the free time to implement, is a Linux UI based on something like SVGALib that doesn’t use X at all; just enough to run a terminal and a browser (and maybe an email client and a word processor if somebody wanted to write one).
I'd like to dig into this point for a moment. Why was it necessary to have a client/server architecture for a windowing system? Was it because X was developed before practical GUI alternatives were on the market? Did the computing power of institutional mainframes exceed that of the user-facing terminal systems (clients)? As time passed and "client-only" windowing systems came onto the scene, could the client/server design choice not have been revisited for greater efficiency?
I think there's kind of two aspects to that (why client/server):
1. The X protocol was roughly like an evolution of the VT-like protocols which were used for text-based terminals back in the day. Those protocols supported cursor control, simple character-based graphics, and later, simple vector graphics. X was an evolution of that to deal with bitmapped displays, mice, etc.
2. Hardware was pretty expensive. Giving everyone in the office an X terminal was far cheaper than giving everyone a workstation. Again, that was an evolution of VT terminals vs. minicomputers.
Even X terminals weren't cheap. In the early 90's a monochrome 15" terminal could be a couple thousand dollars while the workstation was at least 10x more.
The classical use case is EDA software, which still largely runs on Linux. I can run SimVision (a Cadence tool) on a large server with ~96 cores and ~1TB of RAM, and the spawned windows integrate (almost) seamlessly with my multi-monitor MacBook Pro setup. That software only supports Solaris (recent releases dropped that, I believe), specific versions of Red Hat Linux, and Windows. Large simulations easily consume all server resources and require an SSD RAID to function effectively. This also isn't software you want to have to install or maintain yourself; plus it needs access to proprietary information protected by NDA, which is not allowed to be copied to private hardware.
The modern equivalent of SVGALib is Linux's Direct Rendering Infrastructure (DRI).
If you just want a fullscreen app, you can run it on top of DRI.
For "a terminal and a browser" at the same time you have to create some sort of protocol for windowed display, and an implementation of one side of that protocol that provides window management, and apps would implement the other side.
People did that. They named the protocol "Wayland", the window management implementation "Weston", and a number of graphical toolkits can target Wayland (GTK, Qt, SDL...). Ref: https://wayland.freedesktop.org/faq.html#heading_toc_j_2
As a sidenote, if you want to run such a program standalone, fullscreen, can you do that without running a "server" or a "window manager"? IIRC, programs intended for Wayland can run fullscreen on top of plain DRI (or perhaps only SDL provides this facility).
The simplest thing I know of is the minimal Cage compositor for wayland, that only supports running a single fullscreen program: https://github.com/Hjdskes/cage
With some hackery, you could probably build this as a backend for GTK/Qt that would run the server in the same process. But I don't know how useful that would really be.
I think the first successful implementation of a client-server OS (and the first with wide adoption in the banking industry) was BTOS/CTOS, developed by Burroughs (BTOS) and then continued by Convergent Technologies (CTOS). It was sweet to work with - modular hardware and a built-in debugger on the CLI!
A disaster, and yet I remember reading about attempts to replace it when I was still in University, and now here I am over a decade into my career and Wayland still isn't ready for primetime, especially if you use a video card from the #1 producer of performance video cards on the planet. Small use case I guess. X lets you do forwarding over the network and maybe sometime in the 2020's Wayland will have some sort of remote desktop solution? Still waiting on that to be ironed out...
Replacing X feels like the nuclear fusion project of the Linux world, constantly receding into the future even as it maintains its promise of fixing all our problems When It's Done(tm).
Yes, the quality of these varies, pick the one that is right for you. X11 forwarding is not without issues either. With VNC you'll probably also want to tunnel the connection over SSH to secure it rather than rely on server authentication anyway.
This is not universally good, but I would never open a port to a VNC server anyway, encrypted or not. Tunneling a passwordless, unencrypted protocol through an SSH tunnel is still fine, though.
You can use NVIDIA cards with Wayland, you just can't use NVIDIA's proprietary driver with wlroots (which is by no means the only Wayland compositor). Wayland is the default in many distributions, and if you're using one of the more common desktop environments, you'll get by just fine when using the proprietary driver.
The UNIX Haters Handbook was funny at the time but doesn't make much sense these days unless you understand all the rival hacker groups and cultures that were largely eventually unified under "Free Software" and, later, "Open Source". Even the LISPers have come in from the cold!
Much of this criticism of X was prescient for things to come. Autotools, dependency problems, modularity issues, compatibility, "DLL-hell", etc. Many of these problems don't have solutions today that are orders of magnitude better than then.
On the other hand, one thing that's really holding X back, especially on handhelds, is 'Myth: X is "Device Independent"'.
If the difference between a 75dpi screen and a 100dpi workstation screen was too much, today's "retina" displays really struggle to run a usable X. You're heavily reliant on the toolkits (Qt, GTK, etc) to scale things for you and they mostly understand about font scaling but not about widget scaling.
Lots of distributions that might want to use it, for example Maemo Leste, Ubuntu Touch or Gemian, are having to do a lot of UI work and hand-pick applications for "porting" to the screens and input devices (touch) that they want to use them on.
This means that Linux-on-the-smartphone is not yet in any position to give Android or iOS a run for its money, because it really struggles to effectively leverage its existing app ecosystem.
Even ignoring security models, X struggles to provide a compelling vision for handheld devices.
Don't even mention Wayland: that has other economic incentives that are not yet properly aligned for widespread success (including X's app ecosystem ones).
> If the difference between a 75dpi screen and a 100dpi workstation screen was too much, today's "retina" displays really struggle to run a usable X.
I've been running X11 on retina displays since 2014 without much of an issue. The only big offenders were chrome and firefox. Nowadays everything just works.
...but I've had trouble with the defaults for almost everything I've tried: xterm, libreoffice, browsers. Under KDE the menus are too small by default. Under non-Gnome-or-KDE window managers or desktop environments almost everything is rendered too small.
The chances are that I can't just install an application with `apt-get` and have it do something sensible by default without a lot of fiddling and customisation.
This forces fonts to scale, but what about bitmaps? They will scale if the increased text size causes the widget to grow (and also affect parent widgets in this manner). But pure icons aren't scaled - indeed, on your screenshot, there are some icons that are noticeably smaller than they should be proportional to the text.
It works if you have more or less the same pixel density across all screens. As soon as you mix very different ones, X becomes essentially unusable. Microsoft and Apple have solved that issue years ago.
It would be solvable with a compositing X server, like Xsgi.
Unfortunately, X.Org is based on lowest-common-denominator code, and nobody has dared to make a redesign that would really split from the X.Org mission of "providing a code dump for vendors to start working".
Legacy applications are somewhat blurry when moved from high-dpi to low-dpi, but anything modern works perfectly well. In my experience, X with current gnome or KDE is completely unusable in the same scenario. Either everything is huge on the low-dpi screen, or tiny on the high-dpi screen.
There is a third option: xrandr lets you choose different resolutions per-screen. Nothing ends up too large or too small, you just don't get the benefit of the high-dpi screen.
This makes it completely pointless to even have a high-dpi screen. I think it is a pretty common use-case to have a new 4K monitor together with older FHD ones, or a high-dpi laptop with a low-dpi external screen. X really shows its age in relying on everything having roughly the same pixel density.
Not completely; I do this while docked and the laptop screen is further away, when high-dpi doesn't matter as much, and go full-res when mobile.
> or a high-dpi laptop with a low-dpi external screen
Yep, that's where I am right now, which is why I wanted to mention the third option - our IT support people upgrading everyone's laptops didn't even know this could be done.
Exactly. Many people seem to read it as "the system named X Window", but, as you say, it's actually "the window system named X". The Linux Operating System is an operating system and the X Window System is a window system.
Maybe part of the reason is the Title Case convention in English? (And the other part may be the name of Microsoft's window system leading the mind along this path.)
You made my day! I'm so delighted to hear you're pissed that people call it X-Windows. ;) Thank you for letting me know my long-term project of always calling it X-Windows worked perfectly as planned. I've been systematically calling it X-Windows to piss people off since June 1988, when I read the article "Things That Happen When You Say ‘X Windows’" in my copy of Volume 1, Number 2 (June 1988) of the “XNextEvent” newsletter, “The Official Newsletter of XUG, the X User’s Group” (which I quoted above):
>Don wrote the chapter on the X-Windows Disaster. (To annoy X fanatics, Don specifically asked that we include the hyphen after the letter "X", as well as the plural of the word "Windows," in his chapter title.)
I started programming X10 in June 1986. Here's my first X10 program, a pie menu test application (I was calling them "theta menus" then):
After that I modified the X10 "uwm" window manager to support pie menus, and integrated it with FORTH so I could interactively program and extend the window manager in FORTH, and use it to implement and perform an experiment comparing pie menus with linear menus.
Then I moved on from X10 to NeWS (and from FORTH to PostScript), because NeWS was so much better than X10 or X11, and supported round windows and non-terrible graphics.
At Sun in 1991 I helped write an ICCCM X11 window manager in PostScript so NeWS could manage X-Windows. It supported non-rectangular tabbed window frames, pie menus in round windows, multiple displays, multiple rooms, scrolling virtual desktops (with an iconic map that let you scroll your view around), and many other nice features that other X window managers couldn't do at the time.

Plus it was faster and more efficient than any X window manager, because it actually ran in the window server instead of running in a separate process and communicating over the network. It could immediately and synchronously handle events, provide feedback, draw rubber-band lines in the overlay, pop up and track menus immediately, perfectly support quick mouse-ahead gestures, and move and resize windows instantly - all without context switching, network traffic, grabbing the server, freezing event distribution, lagging behind, or missing events, instead of being asynchronous, laggy, and flaky and dropping events like all other outboard networked X11 window managers do by design. (This was extremely important when running on a slow diskless Sun 3/50 paging over the network.) The idea of running something as crucial and interactive as the window manager in a separate process using an asynchronous protocol is quite insane, inefficient, wasteful, and flaky, but that's how X was designed to be.
In 1992 I made a pie menu window manager called "piewm" based on "tvtwm", and I also ported SimCity to X11, using TCL/Tk, redesigning it to be a multi player networked game, and encountering and solving all kinds of problems trying to make X11 play a game, support multiple users, do fast shared memory graphical animation, mix sounds with a network audio mixer, and support different types of screens and devices (both black and white and color). Those experiences were what motivated me to write the Unix-Haters Handbook chapter on the X-Windows Disaster, and compare my experiences with NeWS and X-Windows.
>The "piewm" X11 window manager with pie menus. When I was a research programmer at CMU, faced with using X11, I realized that I needed pie menus to help me move my windows around, so I went shopping around for a reasonable window manager, and since there weren't any, I chose the least unreasonable one at the time, "tvtwm", and hacked it up with an updated version of the old X10 "uwm" pie menus, and called it "piewm". The source code is available here: piewm.tar.Z.
>The TCL/Tk pie menu widget. I accidentally ported the HyperLook version of SimCity to X11, making it multi player in the process, using the TCL/Tk toolkit, which was the only X11 toolkit that didn't suck. I needed some fancy looking graphical pie menus, so I made a TCL/Tk pie menu widget, whose window shape could be shrink-wrapped around iconic labels. The source code is available here: tkpie.tar.gz. I recorded an X11 SimCity demo, showing pie menus in action. SimCity for X11 won a "best product of 1992" award from Unix World!
Here's how fun it is trying to draw the SimCity map efficiently with X-Windows: if only a few pixels have changed (which is a very common case), instead of sending the entire image, it sorted the changed pixels by color, then used XDrawPoints to draw each batch of different-colored pixels that changed. Changing colors is expensive, so if you changed colors for every dot it would be very slow, but X-Windows lets you draw a bunch of dots at any position in the same color efficiently. So SimCity finds the pixels that changed, and if there are few enough of them, it sorts them by color, then goes through setting each color and drawing every pixel that changed to that color with XDrawPoints. If more than max_pix (256) pixels changed, it just sends the whole image.

Yes, that was the most efficient way to do it if the server didn't support shared memory or was across the network on a different host, since the X protocol and graphics API is so awkward and inefficient. And even if you have the shared memory extension in your server, X still requires that you ALSO write code to support the case of no shared memory, since it still needs to be able to run over the network.

Decades later, when I updated the code to support bit depths other than an 8-bit color palette, computers and networks were astronomically faster, the code was too tricky for its own good, and SimCity on the OLPC was usually running locally with shared memory anyway, so I punted, commented out that code, and just sent the image every time.
/*
* Sending the whole image is 108108 bytes.
* Sending points is 4.4 bytes per point.
* One image is as big as 24570 points.
* But we have to sort these dang things.
*/
#define MAX_PIX 256
int max_pix = MAX_PIX;
[...]
/* TODO: Fix this. I disabled this incremental drawing code for now since it seems to be buggy. */
/* Sort the changed pixels by their color */
qsort(pix, different, sizeof (struct Pix), (int (*)())CompareColor);
/* Draw the points of each color that have changed */
points = (XPoint *)malloc(sizeof (XPoint) * different);
last = 0; pts = 0;
for (i = 0; i <= different; i++) {
    if ((i == different) ||
        (pix[i].color != pix[last].color)) {
        XSetForeground(view->x->dpy, view->x->gc, pix[last].color);
        XDrawPoints(view->x->dpy, view->pixmap, view->x->gc,
                    points, pts, CoordModeOrigin);
        if (i == different)
            break;
        pts = 0;
        last = i;
    }
    points[pts].x = pix[i].x;
    points[pts].y = pix[i].y;
    pts++;
}
free(points);
The map editor "DrawOverlay" code also has a lot of fun hairy optimizations to draw fast over the network: it measures how long each technique takes in different situations and uses the most efficient one (drawing the lines every time, or caching them in an offscreen pixmap overlay).
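The measure-and-choose idea can be sketched in a few lines. This is a hedged illustration, not SimCity's actual DrawOverlay code: all names here (`draw_fn`, `choose_strategy`, the dummy draw functions) are hypothetical, and the real code calls Xlib where the stand-ins below just burn CPU.

```c
#include <time.h>

/* Hypothetical stand-ins for the two overlay-drawing strategies; the real
 * SimCity code talks to Xlib here instead. */
typedef void (*draw_fn)(void);

static void draw_cheap(void) {
    /* e.g. blitting a cached offscreen pixmap: nearly free */
}

static void draw_costly(void) {
    /* e.g. re-drawing every line each frame: simulated busy work */
    volatile double sink = 0;
    for (int i = 0; i < 100000; i++)
        sink += i;
    (void)sink;
}

/* Time one strategy over `frames` iterations, returning elapsed CPU seconds. */
static double time_strategy(draw_fn draw, int frames) {
    clock_t start = clock();
    for (int i = 0; i < frames; i++)
        draw();
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

/* Trial-run both strategies briefly, then commit to the cheaper one. */
static draw_fn choose_strategy(draw_fn a, draw_fn b, int trial_frames) {
    return (time_strategy(a, trial_frames) <= time_strategy(b, trial_frames))
        ? a : b;
}
```

The point is that the decision is made empirically at runtime, which is exactly what you want when the cost of an X operation depends on whether the server is local, remote, or on the far side of a slow link.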
When I was timing and optimizing SimCity to make it run as fast as possible on X-Windows, I noticed that when you ran it super-duper flat-out fast, skipping screen updates to run the simulator in a tight loop, a huge amount of time was wasted simply redrawing the date field, since it changed many times per screen refresh. So I implemented a special custom TCL/Tk widget just for displaying the date, which only lazily updates at a fixed frequency, and knows how to draw the text in an offscreen pixmap and "fake blur" the parts of the date that are changing quickly by overprinting the letters and digits in gray. (You can't easily blur or blend text in X-Windows with an 8 bit color mapped display, so I did the next best thing that was possible and efficient, and it got the point across that time was moving very fast, without slowing the computer down or making it unresponsive.)
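The lazy, fixed-frequency update trick is simple to sketch. This is a hypothetical minimal version, not the actual Tk widget code: remember when you last drew, and skip any redraw request that arrives before the interval has elapsed.

```c
#include <stdbool.h>

/* Track when we last actually redrew, and refuse to redraw again until
 * `interval_ms` has elapsed.  (Hypothetical names; the real widget hooked
 * into Tk's timer machinery instead of taking explicit timestamps.) */
typedef struct {
    long last_draw_ms;   /* timestamp of the last real redraw */
    long interval_ms;    /* minimum gap between redraws */
} throttle;

/* Returns true (and records the time) only when a redraw is due. */
static bool throttle_should_draw(throttle *t, long now_ms) {
    if (now_ms - t->last_draw_ms < t->interval_ms)
        return false;
    t->last_draw_ms = now_ms;
    return true;
}

/* Simulate 11 update requests 50 ms apart with a 100 ms throttle:
 * only every other request actually results in a redraw. */
static int demo_draw_count(void) {
    throttle t = { -1000, 100 };
    int draws = 0;
    for (long now = 0; now <= 500; now += 50)
        if (throttle_should_draw(&t, now))
            draws++;
    return draws;   /* draws happen at t = 0, 100, 200, 300, 400, 500 */
}
```

However fast the simulator spins, the widget pays for at most one redraw per interval, which is why the tight simulation loop stopped being dominated by date-field repaints.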
> To annoy X fanatics, Don specifically asked that we include the hyphen after the letter "X,", as well as the plural of the word "Windows," in his chapter title.
This seems to imply that the correct name is "X-Windows" without the hyphen and plural, i.e. "X Window", but that's just as wrong (and much uglier) - the actual name of the window system is "X".
As a getting-names-right and things-making-sense fanatic, I'm more annoyed by "X Window" than "X-Windows".
Maybe that was the point, though....? Meta-trolling by getting people who wanted to use the correct name to apply the reverse transformation and incorrectly call it "X Window"? :)
>Don wrote the chapter on the X-Windows Disaster. (To annoy X fanatics, Don specifically asked that we include the hyphen after the letter "X,", as well as the plural of the word "Windows," in his chapter title.
I published a better version on Medium that has a few typos fixed, and includes a flyer that was handed out at the X-Windows conference, and an article I found in my archives from Volume 1, Number 2 (June 1988) of the “XNextEvent” newsletter, “The Official Newsletter of XUG, the X User’s Group”:
Official Dangerous Virus Notice Distributed at the X-Windows Conference
Official Notice
Post Immediately
X x
X x
X x
X
x X
x X
x X
Dangerous Virus!
First, a little history. The X window system escaped from Project Athena at MIT where it was being held in isolation. When notified, MIT stated publicly that “MIT assumes no responsibility…”. This is a very disturbing statement. It then infiltrated Digital Equipment Corporation, where it has since corrupted the technical judgement of this organization.
After sabotaging Digital Equipment Corporation, a sinister X consortium was created to find a way to use X as part of a plan to dominate and control interactive window systems. X windows is sometimes distributed by this secret consortium free of charge to unsuspecting victims. The destructive cost of X cannot even be guessed.
X is truly obese — whether it’s mutilating your hard disk or actively infesting your system, you can be sure it’s up to no good. Innocent users need to be protected from this dangerous virus. Even as you read this, the X source distribution and the executable environment is being maintained on hundreds of computers, maybe even your own.
Digital Equipment Corporation is already shipping machines that carry this dreaded infestation. It must be destroyed.
This is what happens when software with good intentions goes bad. It victimizes innocent users by distorting their perception of what is and what is not good software. This malignant window system must be destroyed.
Ultimately DEC and MIT must be held accountable for this heinous software crime, brought to justice, and made to pay for a software cleanup. Until DEC and MIT answer to these charges, they both should be assumed to be protecting dangerous software criminals.
Don’t be fooled! Just say no to X.
X-Windows: …A mistake carried out to perfection.
X-Windows: …Dissatisfaction guaranteed.
X-Windows: …Don’t get frustrated without it.
X-Windows: …Even your dog won’t like it.
X-Windows: …Flaky and built to stay that way.
X-Windows: …Complex non-solutions to simple non-problems.
X-Windows: …Flawed beyond belief.
X-Windows: …Form follows malfunction.
X-Windows: …Garbage at your fingertips.
X-Windows: …Ignorance is our most important resource.
X-Windows: …It could be worse, but it’ll take time.
X-Windows: …It could happen to you.
X-Windows: …Japan’s secret weapon.
X-Windows: …Let it get in your way.
X-Windows: …Live the nightmare.
X-Windows: …More than enough rope.
X-Windows: …Never had it, never will.
X-Windows: …No hardware is safe.
X-Windows: …Power tools for power fools.
X-Windows: …Putting new limits on productivity.
X-Windows: …Simplicity made complex.
X-Windows: …The cutting edge of obsolescence.
X-Windows: …The art of incompetence.
X-Windows: …The defacto substandard.
X-Windows: …The first fully modular software disaster.
X-Windows: …The joke that kills.
X-Windows: …The problem for your problem.
X-Windows: …There’s got to be a better way.
X-Windows: …Warn your friends about it.
X-Windows: …You’d better sit down.
X-Windows: …You’ll envy the dead.
----
Things That Happen When You Say ‘X Windows’
I was digging through some old papers, and ran across a 15 year old “XNextEvent” newsletter, “The Official Newsletter of XUG, the X User’s Group”, Volume 1 Number 2, from June 1988. Here’s an article that illustrates how far the usage of the term “X Windows” has evolved over the past 15 years. (Too bad The Window System Improperly Known as X Windows itself hasn’t evolved.)
Someone on slashdot asks, “Why is it still called X-Windows?”. Predictably, the first reply says: “It isn’t. It’s called ‘The X Window System.’ Or simply ‘X’. ‘X Windows’ is a misnomer.”
He didn’t ask why it is “X-Windows”. He asked why it’s called “X-Windows”. You’re wrong that it isn’t called “X-Windows”. It is! It’s just that it isn’t “X-Windows”. Being something is independent of being called something.
The answer to the question ‘Why is it still called X-Windows?’ is: It’s still called X-Windows in order to annoy the X-Windows Fanatics, who take it upon themselves to correct you every time you call it X-Windows. That’s why it’s called X-Windows.
The following definitive guide to the consequences of saying “X Windows” is from the June 1988 “XNextEvent” newsletter, “The Official Newsletter of XUG, the X User’s Group”, Volume 1 Number 2:
Things That Happen When You Say ‘X Windows’
THE OFFICIAL NAMES
The official names of the software described herein are:
X
X Window System
X Version 11
X Window System, Version 11
X11
Note that the phrases X.11, X-11, X Windows or any permutation thereof, are explicitly excluded from this list and should not be used to describe the X Window System (window system should be thought of as one word).
The above should be enough to scare anyone into using the proper terminology, but sadly enough, it’s not. Recently, certain people, lacking sufficient motivation to change their speech patterns, have fallen victim to various ‘accidents’ or ‘misfortunes’. I’ve compiled a short list of happenings, some of which I have witnessed, others which remain hearsay. I’m not claiming any direct connection between their speech habits and the reported incidents, but you be the judge… And woe betide any who set the cursed phrase into print!
You are forced to explain toolkit programming to X neophytes.
Bob Scheifler says, “You should know better than that!”
The Power Supply (and unknown boards) on your workstation mysteriously give up the ghost.
Ditto for the controller board for the disk on your new Sun.
Your hair falls out.
xmh refuses to come up in a useful size, no matter what you fiddle.
You inexplicably lose both of your complete Ultrix Doc sets.
R2 won’t build.
Bob Scheifler says “Type ‘man X’”.
Your nifty new X screen saver just won’t go away.
The window you’re working in loses input focus. Permanently.
> For some perverse reason that's better left to the imagination, X insists on calling the program running on the remote machine "the client." This program displays its windows on the "window server."
Um, what? By your own description, X's terminology is correct and straightforward:
> The idea was to allow a program, called a client, to run on one computer and allow it to display on another computer that was running a special program called a window server.
The "service being provided" is "display a window," and that is what the client connects to. There is no inversion here.
The inversion is at a higher level of the conceptual stack, where a user generally thinks of their own computer as a client in all things. When they connect out to another machine, that's the server.
So if you ssh to another machine (a server) and then use a program on it that now displays on your machine, the direction of the connection is absolutely "you're the server, they're the client" but it inverts the user's intuition of what is in which role.
I am not confused about the metaphor. Or at least, I haven't been since like.. 1997 or so.
I am explaining why it confuses people. It's pretty undeniable that it does, and telling people that they're wrong about their confusion is shockingly unhelpful.
I'm not telling you you're wrong; simply stating a fact. You are looking at it the wrong way.
Projects/solutions often don't make sense unless you look at them from the perspective of the implementers, and like it or not, there is nothing wrong with the reversal of client and server in reference to X.

The rendering to be done requires that your particular local hardware be controlled and orchestrated by X-aware software. It makes no sense for the remote machine to be burdened with the implementation details of every prospective display setup it could potentially have to talk to. The better approach is to define an abstract API through which the remote machine can just speak "X" and leave the details of display to the server on the other end. Thus the reversal of client and server makes sense: the server translates your local interactions and display state into "X" protocol messages, while the remote client translates the raw display interactions into the implementation details of how what is displayed on your local machine interacts with the system on the remote machine.
You're getting confused by Client and Server meaning something in particular, when that isn't really the case. X isn't doing just one thing. The X server provides the service of allowing the client to draw on your hardware. The remote client is also a server in that it provides an interface through which actions on your local system are translated into useful functionality on the remote system; but that is a fundamentally different perspective on what X does than viewing it as a rendering mechanism alone. X does both things.

As a rendering technology, the "X client" is remote and the "X server" is local. As a binding between I/O events and program functionality, you can absolutely view the "X server" running on your machine as, behavior-wise, a client of the "X client" on the remote machine; but it is still a server in the sense of knowing what your specific hardware setup is and how to drive it, because the "X client" on the remote end is blissfully unaware, and is only there to translate what your "X server" reports into meaningful state changes on the remote host.

Client/server is not an atomic concept in computing. You can be both, depending on how you look at the system. It's a bit like a duck-rabbit.
Is it complicated? Yes. Why? Because something that powerful is pretty much guaranteed to be, and because we apply terminology that is generally used in a much more straightforward manner elsewhere to a system that, taken as a whole, adopts a bidirectional client/server relationship across different subsets of its functionality: allowing graphical interfaces to be used between machines with wildly different characteristics, without burdening the "source" (remote) machine with the responsibility of driving the local machine's hardware.
Again, there is nothing wrong with the approach, except that yes, at first blush it is unpleasant to wrap your head around terminology-wise, because the conventional usage of the terms runs in both directions within the same software. I don't expect most people to grok X the first time. You just have to dig in and get uncomfortably familiar with the rat's nest of abstractions that makes remote graphical computing possible.
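One concrete way to see which side dials out: an X client reads $DISPLAY (e.g. "remotehost:10.0") and opens a TCP connection to the server at port 6000 plus the display number. Here is a hedged sketch of that parsing; the function names are hypothetical, and real Xlib also handles Unix-domain sockets, screen numbers, DECnet syntax, and more.

```c
#include <stdio.h>
#include <string.h>

/* Parse a DISPLAY string like "remotehost:10.0" into a hostname and the
 * TCP port an X client would connect to (6000 + display number).  Returns
 * 0 on success, -1 on a malformed string.  Deliberately simplified: real
 * Xlib also knows about Unix-domain sockets, launchd, "::" DECnet syntax,
 * and screen numbers. */
static int parse_display(const char *display, char *host, size_t hostlen,
                         int *port) {
    const char *colon = strrchr(display, ':');
    int dpy;
    if (colon == NULL || sscanf(colon + 1, "%d", &dpy) != 1)
        return -1;
    size_t n = (size_t)(colon - display);   /* may be 0: empty host = local */
    if (n >= hostlen)
        return -1;
    memcpy(host, display, n);
    host[n] = '\0';
    *port = 6000 + dpy;
    return 0;
}

/* Convenience wrapper that makes the direction obvious at a glance: the
 * client is the side that dials out to host:port. */
static int display_port(const char *display) {
    char host[256];
    int port;
    return parse_display(display, host, sizeof host, &port) == 0 ? port : -1;
}
```

The program doing the connecting is, by any ordinary networking definition, the client; the process listening on port 6000+n and owning the screen is the server, no matter which machine a user happens to be sitting at.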
> X-Windows started out as one man's project in an office on the fifth floor of MIT's Laboratory for Computer Science. A wizardly hacker, who was familiar with W, a window system written at Stanford University as part of the V project, decided to write a distributed graphical display server.
An overview of how this came to be:
"A Political History of X" - Keith Packard (LCA 2020)
Yeah sure, having CAD display on a PC with an X11 server hooked to a Unix workstation over dialup is a complete disaster and total waste of resources compared to uber apps of today like Slack
Bill Joy referred to X-Windows as "Rasterop on Wheels".
I gave a NeWS/Pie Menus/HyperTies/Emacs demo to Steve Jobs once, on the trade show floor at the Educom conference, right after he finally released the NeXT Machine, in November of 1988.
Sun was letting me demo NeWS and the stuff we were developing at the UMD Human Computer Interaction Lab on a workstation at their booth, and NeXT's booth was right across the aisle, so Ben Shneiderman rope-a-doped him and dragged him over for a demo. He jumped up and down and yelled "That sucks! That sucks! Wow, that's neat. That sucks!"
I figure a 3:1 sucks:neat ratio was pretty good for him comparing something different than his newborn baby NeXT Step, which critics had taken to calling NeVR Step, since it had been vaporware for so long until then.
When I tried to explain to him how flexible NeWS was, he told me "I don't need flexibility -- I got my window system right the first time!"
Here's Ben's account (I love how he gently put it that "Jobs had little patience with the academic side of things" -- and by "engage" he meant "jump up and down and yell"):
Date: Tue, 1 Nov 88 07:55:48 EST
From: Ben Shneiderman <ben@mimsy.umd.edu>
To: hcil@tove.umd.edu
Subject: steve jobs visit
I just couldn't resist telling this story....
On Tuesday I was at the EDUCOM conference in DC and Steve Jobs was showing
NeXT...I was quite impressed...a nice step forward, improving, refining
and developing good ideas. I invited him to see our Hyperties SUN version
that we were showing at the SUN booth. The gang managed to get the new
NeWS version working and the Space Telescope looked great...Jobs spent
about a half hour with us going from positive comments such as "Great!"
to "That sucks"...he had a terrific sensitivity to the user interface
issues and could articulate his reasons wonderfully.
On Wednesday he came out to the lab and spent time looking at a few more
of our demos...the students (and me too) were delighted to have him as
a visitor.
What impresses me is that he took his ideas and really put them to work...
pushing back the frontier a bit further. His system really works and
has much attractive in the hardware and software domains. Jobs had little
patience with the academic side of things, but was very ready to engage
over interface issues.
Do check out NeXT...there are things to criticize, but I did come away
more impressed and pleased than ready to pick at the flaws.
-- Ben
Here's some old email from the Sun internal NeWS mailing list I saved:
From: npg@East (Neil Groundwater - Sun Consulting)
Date: Wed, Jun 27, 1990
Subject: Humor from Dennis Ritchie (at USENIX)
(Actually Dennis's latter remark was attributed by him to Rob Pike)
from Unix Today 6/25 page 5.
"..., Ritchie reminded the audience that Steve Jobs stood at the same podium
a few years back and announced that X was brain-dead and would soon die. "He
was half-right," Ritchie said. "Sometimes when you fill a vacuum, it still
sucks."
I never thought about it from this person's perspective because I never had to program X beyond the graphics and OS classes I took in the early 90's. (Motif's insane verbosity makes code look awful.)
I've been using X + MWM since 1991 (AIX and Solaris, anyone?). It is still everywhere, and I wonder how it managed to persist for so long.
If you have any ANGSTFUL Motif code, comments, documentation, or resources, please share them with me! Here is some of the stronger stuff I've found. Note: this is only for official Open Software Foundation Motif inspired Angst. If you're experiencing TCL/Tk That Only Looks Like Motif But Doesn't Suck Angst, then you should stop whining and fix the problem yourself, if somebody else hasn't already.