That reinvention happened years ago, in the likes of Citrix -- which has been around for a long time now and is used widely in industries outside of software development.
What I really mean by "reinvent" is the multi-user graphical computing model itself.
We've done a stellar job pushing a desktop out to someone. I've used a variety of these things, from the basic VNC on up, and they are pretty great. The better ones are damn tough to differentiate from local UX. People are running CAD on stuff like Citrix and it works. Better than one might expect too.
That's more like plucking out the killer use case and optimizing it. Great move, worth it, not bad, etc...
There are a lot of ways to use computers and applications, though.
X, warts and all, did offer up pretty fantastic flexibility and usability (when apps were well realized).
A few things:
Today we've got cloud computing, local computing, mobile computing, wearables. Lots of displays, more coming.
Thinking about it from the X POV, sort of how they thought about it back then, that looks an awful lot like:
Application server, desktop computer, other computers. They envisioned more than one person using a single computer; multiple people using multiple computers; multiple people using modest computers as the UX front end for one or more powerful ones.
And on that point, I used to use multiple powerful computers with a single modest one all the time.
Maybe rethinking it again, like they did with X back in the day, would push the boundaries some. Who knows what people would do?
Someone walks into a room, has a presentation on their wearable, and pushes its display to a nearby machine, or both display and UX. They keep their data where it lives, on their device, but can benefit from the more robust UX.
Mobile computing devices are getting really powerful. Run an app on one from a laptop, easily, again, perhaps keeping data local to the mobile, for whatever reasons.
Just spitballing here, but the general thought I was trying to convey, without realizing it earlier, is that rethinking things from the basics forward, allowing for a lot of possible options, just might shake out other ways to do things and to use all these crazy devices.
I'm thinking you may not have actually used RDP over a medium-to-low bandwidth or medium-to-high latency connection, because in that setting RDP is still usable while X is a laggy mess.
Xlib, the original C binding to the protocol, is not great in that situation, because it makes many inherently asynchronous requests look synchronous. Most toolkits are pretty terrible in that situation. Using libxcb instead of Xlib can be a great improvement, as it exposes what can be asynchronous (though it often isn't used well).
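To make the blocking-vs-pipelined difference concrete, here's a toy latency model (my own illustrative sketch, not the real Xlib or xcb API): Xlib-style calls wait a full round trip per request, while xcb-style cookies let requests go out back to back and replies get collected later.

```python
# Toy model of request/reply latency over a remote link.
# Only network round-trip time is modeled; bandwidth and server
# processing time are ignored.

def blocking_ms(n_requests: int, rtt_ms: float) -> float:
    # Xlib-style: each call blocks on its reply before the next
    # request can be issued, so round trips add up serially.
    return n_requests * rtt_ms

def pipelined_ms(n_requests: int, rtt_ms: float) -> float:
    # xcb-style: all requests are sent without waiting (cookies),
    # and the replies stream back after roughly one round trip.
    return rtt_ms

if __name__ == "__main__":
    n, rtt = 200, 50  # e.g. 200 startup requests over a 50 ms RTT link
    print(f"blocking:  {blocking_ms(n, rtt) / 1000:.2f} s")
    print(f"pipelined: {pipelined_ms(n, rtt) / 1000:.2f} s")
```

With those (made-up but plausible) numbers, the blocking pattern spends ten full seconds on round trips alone, while the pipelined pattern spends a twentieth of a second -- which is why a round-trip-heavy toolkit feels laggy even on a fast pipe.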
I regularly ran X over ISDN back when that was a thing. Framemaker wasn't exactly snappy, but it was usable. Lighter stuff was fine. Watching the RDP of that era try to keep up over 64k was painful.
With server-side font rendering all but forgotten, among other things, any modern app will struggle over a low-bandwidth link. And X has always been bad with latency; even with xcb, it ends up being very round-trip heavy. Modern technology has addressed the bandwidth problem, but latency remains a much bigger challenge.
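The bandwidth side of that trade-off is easy to see with some back-of-the-envelope arithmetic (illustrative numbers I've picked, not measurements): drawing a line of text via a small server-side text request versus shipping the client-rendered glyphs as raw pixels.

```python
# Rough byte-count comparison: server-side text drawing vs. pushing
# the rendered result as an uncompressed 32-bit image.

def text_request_bytes(text: str) -> int:
    # Assumed X-style cost: a small fixed request header plus
    # roughly one byte per character.
    return 24 + len(text)

def pixel_bytes(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    # Raw, uncompressed RGBA bitmap of the rendered text.
    return width * height * bytes_per_pixel

line = "The quick brown fox jumps over the lazy dog"
as_text = text_request_bytes(line)               # a few dozen bytes
as_pixels = pixel_bytes(width=430, height=16)    # ~27 KB uncompressed
print(as_text, as_pixels, as_pixels // as_text)
```

Compression narrows the gap considerably in practice, which is how modern remoting gets away with shipping pixels -- but no amount of compression removes a round trip, which is why latency is the harder problem.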
Modern RDP can do seamless remote audio and full 60fps video and 3D graphics, on Windows servers. Linux server stuff like libvirtd just uses it as a glorified VNC.
The people who created X thought multi user graphical computing all the way through.
We may decide that should not be a thing.
I think we will struggle, some time will pass, and then it will be a thing, meaning X will just get reinvented.