Moreover, if you change the UI, people might notice changes that aren't even there. For example, we recently added an upload speed indicator ( http://woobius-dev.posterous.com/connection-information ), and as a result people have commented that the application feels faster.
Conversely, when we allowed people to interact with the data that was already loaded without waiting for it to refresh (via a clever sync mechanism), the first impression was that it was slower, because things appeared to change unexpectedly while people were using the app. We had to add loading spinners back in, so that users would be aware the application was still doing something, before people actually felt it was faster.
So, don't make UI changes just for the heck of it - UI changes need to be as carefully considered as under-the-hood changes.
This is a great paper on how progress bars affect our model of the computation. It looks at how our natural model of time is non-linear and how this can be used to make us feel that the system is faster.
The gist of the conclusion (from my super brief skimming) seemed to be that users hate long pauses as you get close to the end of a progress bar, and are much more tolerant of them at the beginning. If possible, you should optimize the progress bar to get the possibly slow and sticky stuff over with at the beginning, and move smoothly and quickly as you reach the conclusion.
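If you can't reorder the actual work, you can at least pace the display. Here's a minimal sketch of non-linear pacing in Python -- my own illustration, not a function from the paper: an exponent above 1 makes the shown bar lag early and accelerate toward 100%, so any stalls land where users tolerate them most.

```python
def displayed_progress(actual: float, k: float = 1.8) -> float:
    """Map true completion (0.0-1.0) to the fraction shown on the bar.

    With k > 1 the bar crawls at first and speeds up near the end
    (its slope, k * actual**(k - 1), is near 0 at the start and
    equals k at the finish), so early stalls barely register while
    the final stretch feels brisk. k < 1 does the opposite.
    """
    return actual ** k

# At 50% real progress the bar shows only ~29%, leaving 71% of the
# bar's travel for the second half of the work:
# displayed_progress(0.5) -> 0.287...
```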
Only if it's entirely unpredictable. Most of the things happening while most progress bars are, er, progressing (or not) are pretty predictable. I bet that at least 95% of bad progress bars could be improved considerably; whether it would be worth the trouble is of course a different question.
But how often is a progress bar actually related to "progress" in the simplistic sense, like sorting through a bunch of identical items?
More often we use progress bars to represent progress through a multi-step process, each step of indeterminate duration. This, IMHO, is a misuse of the progress bar.
A far better way to do it is to show the user the steps you are moving through, and have an individual progress bar for each (if performance is expected to be linear), or simply an activity indicator. Check/grey items out as they become completed.
[edit] Another note: a lot of the time progress bars are not backed up by correct code. For example, if you are, say, compressing a bunch of files, the progress bar should not represent how many files you have finished, but rather how much of the total data size you have processed. Too often progress bars are used to signify a quantity that is not closely correlated with TIME.
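As a rough sketch of size-weighted progress (Python; `compress_one` and `on_progress` are hypothetical callables standing in for your actual worker and UI hook):

```python
import os

def compress_all(paths, compress_one, on_progress):
    """Compress each file, reporting progress by bytes processed
    rather than by files completed, so the bar tracks elapsed time
    even when one file dwarfs all the others."""
    total_bytes = sum(os.path.getsize(p) for p in paths) or 1  # avoid /0
    done_bytes = 0
    for path in paths:
        compress_one(path)                     # hypothetical per-file worker
        done_bytes += os.path.getsize(path)
        on_progress(done_bytes / total_bytes)  # fraction in [0.0, 1.0]
```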
Exposing the steps and having each one get its own progress bar, or "reset" the shared one, is always preferable to me as a user, both because I like feeling like I know what my software is doing (even -- or, in some cases, especially -- if it is behind the veneer of plain speech and a nice UI), and because the finer granularity feeds my boredom and often does "make it seem quicker." If you have something to look at and track, you don't feel like it could take forever. Letting people see the light at the end of the tunnel /always/ makes it more bearable. Giving them "sub-tunnels" helps further, I think.
Another example of this is online checkout paths. Obviously the fewer pages and fewer form fields a user has to fill out, the better, but even if it's just a single page and a confirmation page, I /always/ prefer checkout paths that list the number of steps and pages at the top and indicate how far along in the process I am. It's always so exasperating, when shopping online, to think you're about to make a purchase, only to be taken to an unexpected "confirm everything you just did" page, followed by two order confirmation pages. It's not the same as an installer or other crunching process, but granulated progress bars get you a surprising amount of mileage in many linear processes you put your users through.
> Don't bother improving your product unless it results in visible changes the user can see, find, and hopefully appreciate.
That's ridiculous and blatantly incorrect. Users may not notice it, but that doesn't mean the development team won't find it easier to understand, or that the support staff won't find it easier to troubleshoot. It is possible to write the most convoluted code for any product that functions in exactly the same way as the cleanest, easiest-to-understand code. The user will not notice the difference, but which one would you rather be involved with?
Also, there are a lot of code improvements you can make that maybe nobody notices today but that will certainly avoid catastrophes in the future. For example, hardening security. Most users won't notice or care about your security policies when you first release your app, but once your product starts to gain traction, you should spend time improving security on a constant basis. Users do not care if you still use MD5 for hashes, but you might as well go ahead and rewrite your code to use a better hashing algorithm already.
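As a minimal sketch of that kind of migration (Python with the third-party `bcrypt` package; the function name and the assumption of unsalted hex MD5 hashes are mine, purely for illustration), you can upgrade legacy hashes transparently the next time each user logs in:

```python
import hashlib

import bcrypt  # third-party: pip install bcrypt

def verify_and_upgrade(password: str, stored: str) -> tuple[bool, str]:
    """Check a password against a stored hash. If the record still
    holds a legacy (unsalted, hex) MD5 digest and the password
    matches, return a fresh bcrypt hash so the caller can overwrite
    the old record -- users get upgraded silently as they log in."""
    if stored.startswith("$2"):  # already bcrypt ("$2a$"/"$2b$" prefix)
        return bcrypt.checkpw(password.encode(), stored.encode()), stored
    if hashlib.md5(password.encode()).hexdigest() == stored:
        new_hash = bcrypt.hashpw(password.encode(), bcrypt.gensalt())
        return True, new_hash.decode()
    return False, stored
```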
There's also the financial aspect. As business needs change, maybe you can adapt your code to perform better on cheaper hardware while giving up some potential features you had planned on. I ran a moderately popular website that could handle 1-2m visitors/day on some beefy hardware, but seeing as how most of our traffic came from embeds, I was able to switch hosts and platform and satisfy the critical load using very cheap yet stable hosting. The users did not see any performance or feature change, but I saved $700/month.
So yeah, I would definitely improve code if it has substantial, long-lasting benefits, even when those benefits aren't immediately obvious to end-users.
I think the author is aware of this but his "Don't bother improving..." statement seems to be a result of him trying to add more punch to his post. Seems to be a recent trend with his articles.
I think that if you drop the semantics and take the article for what it's talking about -- new front-end features you specifically want your users to use, or at least to investigate and become aware of -- then he is wholly right: you should give the UI a bump, tiny or big, if you want someone to notice.
Too often a programmer will drop a new button into a list of buttons and then, two weeks later, incredulously ask everyone why they haven't found it yet.
(Of course, a UI bump doesn't always just mean covering it with new gloss. Simply moving elements around, creating spacing, or changing the colors or the weighting/priority of items in a list can be all the change you need. Obviously not any one of those things arbitrarily... Any piece of UI, regardless of how pretty or "artistic" it is, should always have an intent, and the intent in this case is to do something that gets people to investigate your new functionality.)
I think the author's Windows calculator example is a great case for this. Over time more functions have been added to that thing, but when it's literally just another button added to the grid, few people are going to notice it and investigate. Some will surely come across it because they'll need to perform some mathematical function for the first time, but nobody will actively investigate the calculator for the new stuff, because nothing and nobody has told them to.
Conversely, if you crack open calculator for the first time on a new PC or new OS install and see that its appearance has been cleaned up across the board, or that a row of buttons has been newly split out and given its own breathing space in the window layout, you might actually look at them and discover what they do.
I don't think that this article applies to code improvements under the hood, or even really to improvements to existing functions that streamline the user experience. This is especially true if those user paths are well traveled: if users are already there, you don't need to bump anything to tell them to go there -- they'll figure it out.
It's when new things are added, or when existing-but-underutilized things are beefed up, that adding some new gloss or shifting your design's visual priorities a little bit can go a long way.
I imagine people who routinely had to subtract 10.2 from 10.21 would notice.
This title should be something like "Don't expect much of a buzz if you don't change the UI". Swombat's comment perfectly highlights how UI affects human psychology. If people feel a difference then they will talk about it.
I never thought about this, and it may be a good idea to make UI changes so improvements are more visible, but he just goes way overboard with his last statement:
> Don't bother improving your product unless it results in visible changes the user can see, find, and hopefully appreciate.
I would say improve your product even if the changes aren't visible, but make UI changes every now and then so users feel improvements are being made.
Right now, UI is only a fiction made up for the user by the programmers. Usually, it is an accurate fiction, but there is certainly no guarantee. There are only a few currents in UI that connect users directly to what's actually happening. CRUD interfaces are one of these. Unfortunately, they kinda fit their name. Morphic is another one of these, and I think it does quite a bit better than CRUD. There's also the grandaddy: the command line.
A suspension/steering system/drivetrain can interface a driver with a whole lot of complicated machinery. Why can't we do this with UIs?
Umm, ok, a UI is a fiction made up for the user by the programmer...
And how exactly is a steering wheel not a fiction made up for the user by the engineer?
Just because the steering wheel is physically connected to the car's wheels doesn't make it any more "real". It's still an abstraction on a complicated piece of machinery (in that case, a differential and power steering system for turning the wheels of the car). A browser is no different - it's an abstraction on a complicated piece of software.
In the case of something like rack-and-pinion steering, there is a direct physical connection between the steering wheel the driver is holding and the wheels. If done well, this kind of high-fidelity connection can let the driver directly experience the physics of slinging their car around a turn. In this case the "user" has a very direct connection to those physics. (There's an actual physical linkage!) In many cases, an engineer can add something like power steering on top and still maintain this high fidelity.
In lots of software, the connection is very indirect. It's as if one were sitting on a hydraulic platform, watching rendered 3D models on screens, driving an actual car by remote control. If this were implemented well, it might feel great, but if it were implemented badly, it would absolutely suck -- just like a lot of UIs do. Heck, even something much simpler like badly implemented power steering can still suck.
In essence, there are often too many layers of abstraction in lots of software. At each level, you can lose some fidelity, and the end result is discomfort for the user. There is no "physics" involved, but there are often underlying models which are not represented with high fidelity. (Relations are a prime example.)
(I was watching an account by an Israeli Mirage pilot who also flew the F-16. He felt there was some "immediacy" missing from the "fly by wire" F-16.)
I think your point falls down because it appears to only apply to physical devices. In which case, the only shortcoming of software (all software) is that it's not physical... Can you come up with an example of an interface to a non-physical device (whether software or something else) that doesn't have such a "shortcoming"?
Sorry, you are just missing my point. Of course, software always involves abstractions. Another way to put it is:
* Not all abstractions are created equal
* Not all implementations of abstractions are equal
* Not all presentations of abstractions are equal
* All else equal, fewer abstractions are better
(Your rebuttals seem to rest on the opposite being true.)
Returning to my original post -- UIs are a fiction created for the user. The salient point here is that the range of possible fidelity is huge. You can write a piece of historical fiction that's very close to actual events, or you can totally butcher the facts. The range of freedom afforded programmers allows them to think they can get away with way more than they actually can.
I get that, but your point was that "UI is only a fiction made up for the user by the programmers". Do you mean that this is true of any interface that doesn't physically link the user to a physical device? If so, the implications of this point for software seem a little thin, since software, being abstract, will always be non-physical...
With physical systems, you can have fidelity to something underlying, like physics. Layers of abstraction can degrade this fidelity. With completely software systems, there can still be something underlying with which you can represent with different levels of fidelity. There too, levels of abstraction can degrade this fidelity.
Generalize, please. Don't mistake your not getting a general principle for it not being there. (If you are trying to prove the null hypothesis, you use different approaches. Trying the same one N times is missing the point.)
"UI is only a fiction made up for the user by the programmers". Do you mean that this is true of any interface that doesn't physically link the user to a physical device?
What I mean is that the worst sins possible in software aren't even limited by the laws of physics. Often in software, there is apparently nothing elegant underlying the UI, like physical laws or mathematics. Sometimes there isn't even "common sense." Sometimes even basic psychological principles like Object Permanence are missing or inconsistent.
Exactly which of the general principles I've been harping on doesn't pertain also to software systems?
Also, physical systems like the steering wheel are presentations of things we evolved to grok, like physics. Programmers should choose abstractions with qualities that also make them easy to grok.