This article puts Nielsen in the corner of "technically correct", but the influence he had on me, at least, was a strong focus on being "empirically correct", i.e. doing actual tests (with humans) on what kind of things work to convey information. He did this to the detriment of "looking good", which is why his stuff ended up looking "hopelessly outdated", but I think he was on the right side of the fight.

Back then, it felt like he was one of the rare few people who was actually focused on serving the needs of the user. Those were the days when too many sites thought it was a good idea to show a Flash splash screen before entering a site, and designers seemed to have a grudge against text that was big enough for a normal person to read.

Teenage me thought there was NOTHING cooler than a flashy splash page and those micro bitmap fonts a la "silkscreen".

Who am I kidding, I still think it's awesome.

8pt Tahoma is the GOAT and I miss it desperately.

I remember vividly when Windows (XP I think?) introduced a new kind of font smoothing that messed with the look of those fonts. In hindsight, I feel like that moment was part of the catalyst toward Web 2.0-style designs. Screens started to get bigger, sites became higher resolution as bandwidth increased, and the tiny pixel font started to be both less relevant (you could fit more, larger text onscreen) and less beautiful (it rendered differently with font smoothing).

IIRC this shift also coincided with the shift toward WordPress, including a more homogeneous set of pre-packaged "themes", and away from custom CMSes (or no CMS at all) and the OG blogging "scripts" like Greymatter and b2.

8pt Tahoma, lowercase, and using colons for decoration, like this:

  :: news :: contact :: last updated 2000-07-31 ::

Yep! I'm guilty of continuing to use the double-colon separators to this very day. Just shipped an internal app for my company a few months ago that utilizes them in page titles.

> 8pt Tahoma is the GOAT and I miss it desperately.

So good that it's a bug when 8pt Tahoma looks off: https://github.com/jdan/98.css/issues/10

> IIRC this shift also coincided with the shift toward WordPress, including a more homogeneous set of pre-packaged "themes", and away from custom CMSes (or no CMS at all) and the OG blogging "scripts" like Greymatter and b2.

Shout-out to Geeklog, Textpattern, and the monstrosity that was PHPNuke.

Search for Artwiz under Unix. Same feelings.

Well, the screen resolutions and pixel densities of that time also meant those micro bitmap fonts weren't so micro.

I miss it too.

A serious lack of anti-aliasing contributed too.

I miss my 90s / 00s active desktop with random gifs of battlemechs walking around.

It seems the next battle we'll have to fight is for fonts that actually present enough information to the user to disambiguate "Weird Al" from "Weird AI". Seems like we used to have these things called "serifs" but modern design knows nothing of such heresies.

Actually, Weird Al could see so far into the future that he called himself that on purpose.

Most splash screens had a "skip" button though. If you were visiting the website frequently, you as the user could always bookmark the internal page that the intro screen pointed to.

That 2Advanced flash intro tho...

I can still hear the music

I always had more respect for Nielsen’s lineage of human-computer interaction than I did for Nielsen himself. At the time I remember thinking how neither designers nor classic HCI people (or programmers) really got the web. Nielsen was at least focused on the web, but the problem is that he was fixated on user expectations for a brand new medium without recognizing that it was early days and would inevitably evolve. He would say stuff like “hyperlinks should always be blue and underlined” because that’s what users expect, without realizing that at that point in time we were still so early in the adoption of the web that it made no sense to apply such rigid rules.

Ben Shneiderman's the "hyperlinks should always be blue" guy. ;)

https://blog.mozilla.org/en/internet-culture/why-are-hyperli...

https://news.ycombinator.com/item?id=29897811

Seriously, while he was the first to use blue for links in HyperTIES, there was a historical context (like the IBM PC's color palette), and he never meant it in a "640k ought to be enough for anybody" way. His reasons for recommending blue are based on empirical studies, measuring visibility, comprehension, retention, etc.

Blue is good not just because users recognize it (they didn't in 1983), but for how it stands out, because of how the human visual system works. He was originally a fan of cyan aka "light blue".

Ben Shneiderman wrote:

>"Red highlighting made the links more visible, but reduced the user’s capacity to read and retain the content of the text… blue was visible, on both white and black backgrounds and didn’t interfere with retention,"

>"We conducted approximately 20 empirical studies of many design variables which were reported at the Hypertext 1987 conference and in array of journals and books. Issues such as the use of light blue highlighting as the default color for links, the inclusion of a history stack, easy access to a BACK button, article length, and global string search were all studied empirically.”

>"My students conducted more than a dozen experiments (unpublished) on different ways of highlighting and selection using current screens, e.g. green screens only permitted, bold, underscore, blinking, and I think italic(???). When we had a color screen we tried different color highlighted links. While red made the links easier to spot, user comprehension and recollection of the content declined. We chose the light blue, which Tim adopted."

HyperTIES Discussions from Hacker News:

https://donhopkins.medium.com/hyperties-discussions-from-hac...

Ahh, memories. Ben was the advisor for my Master's thesis...

> He would say stuff like “hyperlinks should always be blue and underlined” because that’s what users expect, without realizing that at that point in time we were still so early in the adoption of the web that it made no sense to apply such rigid rules.

I always remember recommendations from Nielsen as (a) backed by some testing with real users, (b) temporal, i.e. “at this time users expect…” and (c) only focused on usability, that is, in practice there are other things to consider like design, performance, etc.

I will say that most of this nuance gets rounded to a Boolean like most advice.

In creating documents with hyperlinks for training students, I have found blue underlined still catches the most fish; for example, some students do not realize that accordion-style content can be clicked to reveal more content if it is not blue and underlined. I have tested icons, highlighting, and different colors of underlining.

I think part of the issue is that early users of the internet were more tech-savvy, and now internet users are simply "anyone with a phone"—in a sense we're going backwards because a higher percentage of users are not learning/adapting to attempts at new approaches/standards.

Honestly, I believe that the Web would have been better had we stuck to those expectations more diligently and evolved more slowly and thoughtfully. That one can does not imply that one should.

Blue links and purple visited links were fine. And now on most sites there is no differentiation, and it’s sometimes difficult to tell what is a link, and a lot of sites don’t even bother linking. This is not an improvement!

Blue and purple links wouldn't be visible on any website that chose to use those as background colors (or any range of background colors where the contrast would have been too low to be visible).

The web at the time was an "anything goes" multimedia format, not a dry digital paperback or textbook where all the content had to fit within the publisher's specifications to limit printing, weight and distribution costs.

Nowadays, most browsers have a "reading mode" that can flatten the content into something that satisfies those Nielsen conditions though.

> any range of background colors

Backgrounds should only be #808080

I don't disagree with the opinion, but what individual experts think does not factor in much when you have a groundswell of adoption like the web did. At that point people are going to hack whatever they can on top of it, and there are too many varied interests to have any central control, and so things just evolve well beyond the intent or control of any individual mind or architect.

For me, usability mattered a lot and I saw how a lot of the web design experimentation was falling short, but Nielsen was just too backwards looking. We needed forward thinking UX rooted specifically in web culture, and that's what we got through the Zeldmans, Veens, and 37signals of the era.

> Blue links and purple visited links were fine.

Red when active
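For the nostalgic, a minimal sketch of re-imposing the classic trio on a page (the hex values approximate the old default browser colors, and injecting a style element is just one way to do it):

  // Hypothetical nostalgia snippet: restore the classic default link colors.
  const style = document.createElement("style");
  style.textContent = `
    a:link    { color: #0000EE; text-decoration: underline; } /* classic blue */
    a:visited { color: #551A8B; }                             /* classic purple */
    a:active  { color: #EE0000; }                             /* classic red */
  `;
  document.head.appendChild(style);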

“hyperlinks should always be blue and underlined”

this honestly makes life so much easier...

I read it more as "blue and underlined" because if we all do that, users have a chance at learning what to expect. With an implied: once they are confident, we can be much more flexible.

Why didn't he say the same thing about links as he did about headers:

> he was saying that each browser should define how headers would be displayed to their users.

And let the user define the color and underline style?

I took a number of courses from NNGroup over the years, including from Nielsen and Tog (I don’t think Don Norman ever gave classes).

It taught me great respect for usability.

Designers hated Nielsen.

Yes, to be fair, Nielsen essentially has had the last laugh. Simple navigation, consistency, fast loading times, and ruthless minimalism won out, and the full Flash intro page is a relic.

The full Flash intro page is only a relic because Apple dropped support for Flash. Now, so many designers use a full-page video that autoplays, and prevent text from loading until every bit of bloated JavaScript finishes downloading and executing.

It's a different package, but it's the same junk.

My biggest pet peeve these days is a nav bar that takes up too much vertical space and follows me when I scroll. Usually these are mobile-first designs, but especially on a phone: when I rotate my iPhone 13 to view wider content, I've got like two lines of text visible!

I despise the full-background, 4K video pages.

Makes connecting from a bad cell signal a royal pain.

But some of the dependency libraries can be almost as bad.

I don't like 1MB pages just so a button can be animated.

"Simple navigation, consistency, fast loading times, and ruthless minimalism"

Modern websites have none of those. It's all pop ups asking you to subscribe and/or give feedback before you have even had a chance to read anything, content that jumps around as images (ads) load, and huge blobs of JavaScript. I feel like the web has regressed massively in the last few years

In the long run, Flash was a blip on the web. 2004-2010 tops.

NNGroup "best practices" have been obsolete for at least 15 years, because the purpose of a website is no longer about displaying free information. Websites have become a fully commercial enterprise focused on conversion, so every trick in the book is used:

- Infinite scroll and autoplaying video on social media and blogspam sites

- Layouts shifting after content loads because of the JavaScript ad delay

- "Other Articles you might like" blocks in the middle of an article

- "Subscribe to our email newsletter" popups/modals everywhere

- "You are reading 1 of x free articles" dickbars

and that's just scratching the surface.

I think most practical designers saw the value of what Nielsen was showing but hated how he completely eschewed aesthetics. Fortunately the advent of CSS and the need for responsive mobile design forced everyone to learn how to integrate functionality with aesthetics.

> Designers hated Nielsen.

Several of the designers I worked with liked him, in as much as he gave them research to back them in their arguments with clients that the site should actually be usable.

It is still one of the high points of my career that I was part of a team that shipped an internet banking application that worked well in the then-current major browsers of IE 6 and Navigator 4, but also worked in Lynx and on a Palm Pilot browser.

We've now degenerated to the point that "engineers" demand Chrome everywhere.

I don’t think designers hated Nielsen. I was doing web design at the time, and the general sentiment seemed to be: “Sure, he’s probably right—but the client wants it done their way instead, so…”

Still, his bite-sized advice stuck around and continues to shape the conversation. That’s where everyone learned about Fitts’ Law, Hick’s Law, optimal text column widths, the value of usability testing with just a few users, and the deep shame you should feel for making text hard to read. He may not have invented those ideas, but his articles popularized them. And because he was one of the few doing serious usability research and publishing it online, his authoritative voice gave those ideas real weight that designers could leverage to make the case to their bosses and clients.

Certain designers may have hated Nielsen, but their users hated them, and they have more users hating them than Nielsen has designers hating him, and users matter much more than designers, so I think he came out way ahead.

Bruce Tognazzini is the OG GUI Guru of '80s user interface design!

https://asktog.com/atc/about-bruce-tognazzini/

Tog not only invented and implemented, but also deeply rationalized and documented, a lot of great user interface techniques, like the "mile high menu bar", which partially exploits Fitts' Law (in the "up" direction), but made more sense on the original single small Mac screens. (Pie menus, meanwhile, more fully exploit Fitts' law, in "all" directions, and work great on large screens, giving you even more "leverage".)
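For reference, Fitts' law models target acquisition time as (in the Shannon formulation):

  T = a + b \log_2(D/W + 1)

where D is the distance to the target, W is its width along the axis of motion, and a and b are empirically fitted constants. Pinning the menu bar to the screen edge makes the effective W unbounded in the vertical direction, collapsing the pointing task to one dimension; pie menus do the same in every direction at once.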

https://www.joelonsoftware.com/2000/04/27/designing-for-peop...

>When the Macintosh was new, Bruce “Tog” Tognazzini wrote a column in Apple’s developer magazine on UI. In his column, people wrote in with lots of interesting UI design problems, which he discussed. These columns continue to this day on his web site. They’ve also been collected and embellished in a couple of great books, like Tog on Software Design, which is a lot of fun and a great introduction to UI design. (Tog on Interface was even better, but it’s out of print.)

>Tog invented the concept of the mile high menu bar to explain why the menu bar on the Macintosh, which is always glued to the top of the physical screen, is so much easier to use than menu bars on Windows, which appear inside each application window. When you want to point to the File menu on Windows, you have a target about half an inch wide and a quarter of an inch high to acquire. You must move and position the mouse fairly precisely in both the vertical and the horizontal dimensions.

>But on a Macintosh, you can slam the mouse up to the top of the screen, without regard to how high you slam it, and it will stop at the physical edge of the screen – the correct vertical position for using the menu. So, effectively, you have a target that is still half an inch wide, but a mile high. Now you only need to worry about positioning the cursor horizontally, not vertically, so the task of clicking on a menu item is that much easier.

>Based on this principle, Tog has a pop quiz: what are the five spots on the screen that are easiest to acquire (point to) with the mouse? The answer: all four corners of the screen (where you can literally slam the mouse over there in one fell swoop without any pointing at all), plus, the current position of the mouse, because it’s already there.

>The principle of the mile-high menu bar is fairly well known, but it must not be entirely obvious, because the Windows 95 team missed the point completely with the Start push button, sitting almost in the bottom left corner of the screen, but not exactly. In fact, it’s about 2 pixels away from the bottom and 2 pixels from the left of the screen. So, for the sake of a couple of pixels, Microsoft literally “snatches defeat from the jaws of victory”, Tog writes, and makes it that much harder to acquire the start button. It could have been a mile square, absolutely trivial to hit with the mouse. For the sake of something, I don’t know what, it’s not. God help us.

Another great technique he documented in the original Apple Human Interface Guidelines was the "drag delay" of popping up "pull right" submenus, to mitigate a problem that linear menus have, but pie menus don't. People keep forgetting and re-inventing it in sometimes better, sometimes worse ways, but he invented and implemented it for the original Mac, then most importantly documented it in the first edition of Apple's 1987 Human Interface Guidelines, and the Mac UI still supports it. It's the kind of thing nobody notices if it works well: invisibly built into the toolkit, with nobody appreciating how much thought and nuance went into it, and it deserves a lot of user testing and iteration to get right. (Or you could just use pie menus and not have that problem! ;)

https://news.ycombinator.com/item?id=39210672

>aidenn0 on Jan 31, 2024, on: Kando: The Cross-Platform Pie Menu

>>For example, while moving horizontally to a sub-menu, you can easily cross the width of a single line since it's not easy to move your mouse absolutely steady horizontally (in pro graphic apps you'd usually hold a Shift for that), so instead of moving to a sub-menu, you switch to another item. In a Pie menu that's much harder since as you move further the menu's area increases, so the tolerance is higher

>This is why properly implemented context menus don't strictly require you to move in a straight line. Implementations vary; I just tried it with the Firefox context menu on Linux and found that, once the submenu was open, I could move the cursor quickly to the submenu on any path, even taking a diagonal line to the most extreme options in it. I have also seen implementations where you had an ever-widening path you could take as the cursor moved closer to the submenu, making the active area of the currently selected parent item trapezoidal.

>DonHopkins on Feb 2, 2024

>That astonishingly clever technique was invented by Bruce "Tog" Tognazzini and described in the first edition of the Apple's 1987 Human Interface Guidelines (page 87, "drag delay").

https://news.ycombinator.com/item?id=32961306

https://archive.org/details/applehumaninterf00appl

https://andymatuschak.org/files/papers/Apple%20Human%20Inter...

>>Two delay values enable submenus to function smoothly, without jarring distractions to the user. The submenu delay is the length of time before a submenu appears as the user drags the pointer through a hierarchical menu item. It prevents flashing caused by rapid appearance-disappearance of submenus. The drag delay allows the user to drag diagonally from the submenu title into the submenu, briefly crossing part of the main menu, without the submenu disappearing (which would ordinarily happen when the pointer was dragged into another main menu item). This is illustrated in Figure 3-42.

>Implementations certainly do vary, but the point is that it's essentially a weird magical non-standardized behavior that isn't intuitively obvious to users why or how or when it's happening. It's extremely difficult to implement correctly (there's not even a definition of what correct means), and requires a whole lot of user testing and empirical measurements and iterative adjustments to get right (which nobody does any more, not even Apple like they did in the old days of Tog). Many gui toolkits don't support it, and most roll-yer-own web based menu systems don't. So users can't expect it to work, and they're lucky when it works well.

>Pie menus geometrically avoid this problem by popping up sub-menus centered on the cursor with each item in a different direction, so no magic invisible submenu tracking kludges are necessary. Don't violate the Principle of Least Astonishment!

https://en.wikipedia.org/wiki/Principle_of_least_astonishmen...

>I think it's important for users to intuitively understand how the computer is going to interpret their gesture, without astonishment, and for the computer to provide high fidelity unambiguous instantaneous feedback of how it will interpret any gesture.

>I like how Ben Shneiderman defined "Direct Manipulation" as involving "continuous representation of objects of interest together with rapid, reversible, and incremental actions and feedback".

https://en.wikipedia.org/wiki/Direct_manipulation_interface

>>In computer science, human–computer interaction, and interaction design, direct manipulation is an approach to interfaces which involves continuous representation of objects of interest together with rapid, reversible, and incremental actions and feedback. As opposed to other interaction styles, for example, the command language, the intention of direct manipulation is to allow a user to manipulate objects presented to them, using actions that correspond at least loosely to manipulation of physical objects. An example of direct manipulation is resizing a graphical shape, such as a rectangle, by dragging its corners or edges with a mouse.

>Those ideals also apply to pie menus. Pie menus should strive to provide as much direct feedback as possible, via tracking callbacks, previewing the reversible effect of the currently selected item (possibly even using the distance as a parameter), so you can easily use them without ever popping up the menu.

>For both novice and expert users, the directly obvious geometric way pie menus track and respond to input is more intuitively comprehensible, predictable, reliable, and most importantly REVERSIBLE than traditional gesture recognition (like Palm Graffiti, or StrokePlus.net) or "magical" kludges like the submenu hack.

>With pie menus there's a sharp crisp line between every possible gesture, that you can see on the screen.

>But with a gesture / handwriting recognition system, you wonder where is the dividing line between "u" and "v"? The neural net (or whatever) is a black box to the user (and even the programmer). Some gestures are too close together. And most gestures are useless syntax errors. And there's no way to cancel or change a gesture once you've started. And there's no way to learn the possible gestures.

>But with complex magical invisible submenu hacks, you wonder if it's based on how long you pause, how fast you move, where you move, what is the shape, why can't I see it, how does it change, what if you pause, what if my computer is lagging, what if I go back, what if I didn't want the submenu, how do I make it go away, why can't I select the item I want, what do I do?

>But with pie menus, if you make a mistake or it doesn't behave like you expect, you can at least see and understand what went wrong (you were on the wrong side of the line) and change it (move back into the slice you meant to select). No fuzzy gray area or no-man's-land or magic hand waving. And the further out you move, the more "leverage" and precision you have.

>The area and shape of each item target area should not be limited or defined by the font height and the width of the longest label. It should be maximized, not limited, to encompass the entire screen, all the way out to the edges, like the slices of a pie menu. If you move far enough, it's practically impossible to make a mistake, as the target gets wider and wider, so you can even use pie menus during an earthquake or car chase.
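To make that geometry concrete, here's a minimal sketch (hypothetical names, ignoring feedback and previews) of resolving a pointer position to a pie slice. Since selection depends only on angle past a small dead zone, the effective target grows as you move outward, which is exactly the "leverage" described above:

  // Map a pointer position (relative to the menu center) to a slice index.
  // Returns null inside the central dead zone, which acts as a cancel area.
  function pickSlice(dx: number, dy: number, itemCount: number, deadRadius = 8): number | null {
    if (Math.hypot(dx, dy) < deadRadius) return null;
    const slice = (2 * Math.PI) / itemCount;
    let angle = Math.atan2(dx, -dy);              // 0 at "up", increasing clockwise
    if (angle < 0) angle += 2 * Math.PI;
    return Math.floor(((angle + slice / 2) % (2 * Math.PI)) / slice);
  }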

> the "drag delay" of popping up "pull right" submenus

Funny enough, this was actually removed in the early versions of OS X: https://arstechnica.com/gadgets/1999/12/macos-x-dp2/#:~:text...

But today it seems to be back.
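For anyone implementing this today, a common modern variant of Tog's idea is the "safe triangle": keep the open submenu alive while the pointer stays inside the triangle between its previous position and the submenu's near edge. A minimal sketch (hypothetical names; real implementations combine this with the HIG's time-based delays):

  type Pt = { x: number; y: number };

  // Signed area test: orientation of the triangle (o, a, b).
  function cross(o: Pt, a: Pt, b: Pt): number {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
  }

  // True if p is inside (or on the edge of) triangle (a, b, c), any winding.
  function inTriangle(p: Pt, a: Pt, b: Pt, c: Pt): boolean {
    const d1 = cross(a, b, p), d2 = cross(b, c, p), d3 = cross(c, a, p);
    return !((d1 < 0 || d2 < 0 || d3 < 0) && (d1 > 0 || d2 > 0 || d3 > 0));
  }

  // On each pointer move: defer switching the highlighted parent item while
  // the cursor is still inside the triangle aimed at the open submenu.
  function shouldKeepSubmenu(prev: Pt, now: Pt, nearTop: Pt, nearBottom: Pt): boolean {
    return inTriangle(now, prev, nearTop, nearBottom);
  }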

He was (still is, I believe) NOT a fan of the Dock: https://www.asktog.com/columns/044top10docksucks.html

He was a great teacher, as well. Not sure if he still gives classes.

I didn't believe that Discount Usability Engineering was useful until we tried it. I was absolutely blown away by the results and have continued the practice for every design and re-design. Thank you Mr. Nielsen.

The old UseIt.com https://web.archive.org/web/19990125092506/http://useit.com/ will forever live rent-free in my brain.

> i.e. doing actual tests (with humans) on what kind of things work to convey information.

E.g., "Why You Only Need to Test with 5 Users":

* https://www.nngroup.com/articles/why-you-only-need-to-test-w...
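The arithmetic behind that claim: Nielsen and Landauer model the number of usability problems found by n test users as

  \text{found}(n) = N (1 - (1 - L)^n)

where N is the total number of problems and L ≈ 0.31 is the average proportion a single user uncovers. With n = 5, 1 − 0.69^5 ≈ 0.84, i.e. about 85% of the problems, which is where the "5 users" rule of thumb comes from.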

It depends on how you define the "fight."

In the Nielsen days, two things were happening:

1. People were creating quirky, whimsical, odd corners of the internet for nobody but themselves. Art.

2. Entrepreneurs were starting to build sophisticated web applications for other people, i.e. customers.

Nielsen's dogma was excellent for the latter, and disastrous for the former.

History has been kind to Nielsen in the way that the modern web has lost most/all of its charm for the sake of answering the question "but how does it make money?"

He did a book ("Designing Web Usability" I think) with an unconventional layout and it clearly hadn't been user-tested as it had a flaw (text too close to the binding) that made it ironically hard to use.

I think he was on point with a lot of stuff, but I've been a bit jaded ever since!

I thought the same thing of his website when he first hit the scene. Great info, but the design was so bad it made it difficult to read. It was quick though, and today’s reader view would have fixed that issue. Being usable doesn’t mean zero design; everything needs to work together.

It’s kind of ironic that Nielsen’s site and even his book layouts were often frustrating to use. But maybe that proves his own points.


Yeah, strong agree here. Nielsen brought a certain weight of rigor to the debate back in those days which made sense to the way I wanted to think about web design as an engineer. I don't really think there's a "winner" or "most right" person amongst the trio, but Nielsen's ethos appealed to me more than the others mentioned.

Same. A pretty button is useless if it's not where you expect to find it. You can always make it "less dated" later by changing colors and stuff, but the usability is the most important part.

The Flash 2 screenshot in the article looks dated. But the experience of using it wouldn't change a bit even if it got less '90s-looking buttons and looked "modern".

I realise I’m judging the book (and possibly the authors) by the cover, but Nielsen’s book cover is objectively more readable.

It’s also probably the only one that would still look new, or current, if it was released today

It seems like one is stuck in the past for good (well, the author and the book I suppose).

Fully agree. And we all know beautiful but totally broken UIs and UXes, and "ugly" but extremely functional ones, where the sheer functionality actually makes them beautiful.

And UIs that are neither beautiful nor functional (looking at you, Salesforce, Oracle, SAP, and many other "Enterprise" applications).

If any of these were functional, most users wouldn't care about the visual appeal. NN were correct, but apparently their message didn't reach that particular sphere of web application developers.

If an 'Enterprise' application's website were functional, people would be able to navigate to assistance when using the app. That would cost money, in competent support techs or in improving the product itself, neither of which is as easy as just being anti-competitive and monopolistic.

I was referring to the web applications themselves, rather than the marketing, documentation, or support websites. In large vendors, those tend to come from a different part of the organisation, so are often superior to the products themselves.

The productivity drain of a poor UI is largely felt by the customers' employees, while the vendor benefits from sales of professional services and premium support contracts.

On the web, the user is rarely a monolith. For a lot of websites (as compared to, say, business software or automobiles), the user could be everyone and anyone. They may all have different mental models, expectations, abilities, etc.

This is important to keep in mind when focusing on user-centered design for a general-purpose website. You need a testing pool representative of your users (or of who you want your users to be), and you need to figure out what to do if conflicts among users emerge during testing, etc. It might be obvious, and you can probably still fit it into a framework, but what I'm getting at is that it is less empirical than it might seem at first pass. There is still an art to user-centered design, and if you keep this in mind, your designs don't have to look hopelessly outdated.

> On the web, the user is rarely a monolith.

Usability folks have understood this for decades. Alan Cooper was writing about defining multiple separate personas [1] to represent different cohorts of your userbase in the 90s.

> what I'm getting at is that it is less empirical than it might seem at first pass.

I would argue that it is still exactly as empirical. You just have to be careful how you aggregate your data and don't try to reduce things to too few clusters. Otherwise you end up making the classic mistake of offering a single T-shirt size at your conference that mostly only fits men because they are the majority of attendees.
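A toy illustration of that aggregation trap (made-up numbers, nothing empirical):

  // Two cohorts of attendees with clearly different sizes (hypothetical data).
  const cohortA = [36, 37, 38];
  const cohortB = [44, 45, 46];
  const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / xs.length;

  console.log(mean([...cohortA, ...cohortB])); // 41: one "average" size that fits neither group
  console.log(mean(cohortA), mean(cohortB));   // 37 and 45: the sizes you actually need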

> There is still an art to user centered design,

Agreed. No amount of analysis will do your synthesis for you. You still have to make.

[1]: https://en.wikipedia.org/wiki/Persona_(user_experience)

When I imagine what he would think of the current internet it’s really mind-boggling.


Nielsen was mostly concerned with labeling himself a “guru” to boost his consultancy firm. The idea of user-driven design goes all the way back to the late 60s with the rise of Participatory Design.