So again, a set of random thoughts. But it culminates with wondering whether the official story behind at least one of the major UI changes of the 21st Century isn't... bollocks.
History of the GUI
So let's go back to 1983. Apple releases a computer called the Lisa. It's slow, expensive, and has a few hardware flaws, notably the stretched pixels of the screen, which seemed fine during development but became an obvious problem later on. But to many, it's the first glimpse of the modern GUI. Drop-down menus. Click and double click. Icons. Files represented by icons. Windows representing documents. Lisa Office was, by all accounts (I've never used it), the pioneer that set the stage for everything that came afterwards. Apple is often accused of stealing from Xerox, and certainly the base concepts came from Doug Engelbart's pioneering work and Xerox's subsequent development of office workstations, but the Lisa neatly fixed most of the issues and packaged everything in a friendly box.
The Mac came out a year later, and while it's often described as a low-cost version of the Lisa, that's not really fair to the Mac team, who were developing their system, for the most part, at the same time as the Lisa, with the two teams swapping ideas with one another. The Mac contained hardware changes, such as 1:1 pixels, that never made it into the Lisa, cut a sizable number of things down so they'd work on lower-capacity hardware, and introduced an application-centric user interface compared to the Lisa's more document-centric approach.
Meanwhile Microsoft and Digital Research tried their hands at the same thing, Microsoft influenced primarily by the Lisa and DR by the Mac. DR's GEM system came out almost exactly a year after the Mac, and Microsoft's Windows, after a lot of negotiations and unusual development choices, came out nearly a year after that.
The cat was out of the bag, and from 1985 onwards virtually every 16/32-bit computer after the Macintosh came with a Mac/Lisa-inspired GUI. There are too many to name, and I'd offend fans of {$random 1980s 16/32 bit platform} by omitting their favorite if I tried to list them all, but there were a lot of choices and a lot of great and not-so-great decisions made. Some were close to the Mac, others were decidedly different, but everyone, from NeXT to Commodore, adopted the core concepts: windows, icons, pointer, mouse, scrollbars, drop-downs, and so on.
By the early 1990s, most mainstream GUIs, Windows and NEXTSTEP excepted, were very similar, and in 1995, Microsoft's Windows 95 brought Windows to within spitting distance of that common user interface. The Start menu, the task bar, and the decision to put menus at the top of every window instead of the top of the screen distinguished Windows from the others, but it was close enough that someone who knew how to use an Amiga, ST, or Mac could use a Windows PC, and vice versa, without effort.
Standardization
But what made these similarly behaving UIs useful wasn't cross-platform standardization, but cross-application standardization. Even before Windows 95, there was an apex to the learning curve that everyone could reach. If you knew how to use Microsoft Excel, and you knew what a word processor was, you could use Microsoft Word. You could also use WordPerfect. You could also use Lotus 1-2-3, at least the Windows version, when it finally came out.
This was because, despite differences in features, you operated each in the same way. The core applications built a UI from similar components. Each document had its own window. The menus were ordered in approximately the same way. The File menu allowed you to load and save, the Edit menu allowed block copying, and if there was a Format menu, it allowed you to switch from roman to italics, and so on. Tools? You could reasonably guess what was there.
Underneath the menus, or in the Mac's case usually as moveable “palettes”, were toolbars, which were frequently customizable. The blank sheet of paper on one let you create a new document, the picture of the floppy disk saved it. The toolbar with the bold B button, the underlined U button, and a drop-down with a list of fonts let you quickly adjust formatting, so you didn't have to go into the menus for the most common options.
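To make that shared vocabulary concrete, here's a minimal sketch of the menu-and-toolbar skeleton practically every 1990s application shared. It uses Python's Tkinter purely as a convenient stand-in, and the labels are illustrative rather than copied from any particular program:

```python
import tkinter as tk

root = tk.Tk()
root.title("Any 1990s application")

# The shared vocabulary: File loads and saves, Edit does block operations,
# Format changes styling. Every mainstream app arranged things roughly this way.
menubar = tk.Menu(root)

file_menu = tk.Menu(menubar, tearoff=0)
for label in ("New", "Open...", "Save", "Save As..."):
    file_menu.add_command(label=label)
menubar.add_cascade(label="File", menu=file_menu)

edit_menu = tk.Menu(menubar, tearoff=0)
for label in ("Cut", "Copy", "Paste"):
    edit_menu.add_command(label=label)
menubar.add_cascade(label="Edit", menu=edit_menu)

format_menu = tk.Menu(menubar, tearoff=0)
for label in ("Bold", "Italic", "Font..."):
    format_menu.add_command(label=label)
menubar.add_cascade(label="Format", menu=format_menu)

root.config(menu=menubar)

# The toolbar duplicates the most common menu commands, exactly as described above.
toolbar = tk.Frame(root, relief=tk.RAISED, borderwidth=1)
for label in ("New", "Save", "B", "I", "U"):
    tk.Button(toolbar, text=label).pack(side=tk.LEFT)
toolbar.pack(side=tk.TOP, fill=tk.X)

root.mainloop()
```

Swap the widget toolkit for whatever your platform shipped with and you've described Word, WordPerfect, 1-2-3, and most of their competitors.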
The fact was, all programs worked that way. It's hard to believe in 2024, because most developers have lost sight of why that was even useful. To an average dev in 2024, doing the same thing as another program is copying it. But to a dev in 1997, it meant you could release a complex program to do complex, difficult-to-understand things that people already knew how to use.
Microsoft breaks everything
You may have noticed that's just not true any more. Technically both, say, Chrome and Firefox still have the regular drop-down menus, but they've gone to great lengths to hide them, and encourage people to use an application-specific “hamburger menu” instead. And neither has a toolbar. The logic is something like “save screen space and also make it work like the mobile version”, but nobody expects the latter, and “saving screen space” is... well, an argument for another time.
(Side note: I've been arguing for a long time among fellow nerds that the Mac's “top of screen” rather than “top of window” approach to menus is the superior option (I'm an old Amiga user), and explained Fitts's Law to them and how putting a menu at the top of the screen makes it easy to quickly select menu options, while trying to do the same when the menu is at the top of a window is fiddly, and usually the response comes back “Oh so you're saying it saves screen space? Pffft, who needs to, we all have 20” monitors now”, and I shake my head in annoyance at the fact nobody reads anything any more, not even things they think they're qualified to reply to. Dumbasses. No wonder GUIs have gone to hell. Anywho...)
Anyway, while it's kind of relevant that nerds don't appear to understand why UIs are designed the way they are, and aren't interested in finding out, that's not the point I was making, which was obviously this: if “we all have 20” monitors now, so we have plenty of space” is some kind of excuse for wasting desktop space, then refusing to have a menu in the first place can't be justified by “saving screen space” either.
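For anyone who does read things: Fitts's Law says the time to hit a target grows with an “index of difficulty”, commonly written as log2(D/W + 1), where D is the distance to the target and W is its size along the direction of travel. A menu on the screen edge lets you slam the pointer against the edge, so its effective depth is enormous. A rough back-of-the-envelope sketch, with made-up pixel values purely for illustration:

```python
from math import log2

def fitts_id(distance_px: float, width_px: float) -> float:
    """Index of difficulty (Shannon formulation): ID = log2(D/W + 1).
    Movement time is roughly a + b * ID, so a higher ID means a slower target."""
    return log2(distance_px / width_px + 1)

# Illustrative numbers only: the pointer starts 400 px away from the menu bar.
in_window_menu   = fitts_id(400, 20)    # a ~20 px strip you can overshoot
screen_edge_menu = fitts_id(400, 2000)  # the edge stops the pointer, so the
                                        # effective target depth is huge
print(f"menu at top of window: ID = {in_window_menu:.2f} bits")
print(f"menu at screen edge:   ID = {screen_edge_menu:.2f} bits")
```

Nothing in that arithmetic has anything to do with saving screen space.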
But Google and Mozilla are just following a trend. The trend wasn't set by either (though they're intent on making things worse), and wasn't even set by the iDevices when Apple announced them (though those have given devs excuses to make custom UIs for their apps.) It was set by Microsoft, in 2007, with the introduction of The Ribbon.
The Ribbon is an Office 2007 feature in which menus and toolbars were thrown out and replaced by a hard-coded, giant, toolbarish thing. Functions are very, very roughly categorized, and then you have to scan the entire thing to find the one you want on the ribbon tab you think it might appear on, because they've been laid out in no particular order.
It is, hands down, the single worst new UX element ever introduced in the post-1980s history of GUIs. Not only do you now have to learn each application that uses it from scratch, because your knowledge of how similar applications work no longer applies, but you can spend your whole life not realizing basic functionality exists, because it's hidden behind a ribbon component that's not immediately relevant.
And learning how to use Excel, and knowing how a word processor works (maybe you used Office 2003?), bring you no closer to knowing how to use Microsoft Word if it's a ribbon version.
Microsoft was roundly criticized for this UI innovation, and a good thing too, but rather than responding to the criticism, Microsoft decided to dig in its heels and wait for everything to blow over. It published usability studies that claimed the Ribbon was more productive, though it's unclear how that could possibly be true, and claimed the design was based on earlier usability studies showing that users always used the toolbar and almost never used the menus, for everything!
Well, no sugar, Sherlock. Most toolbars are set up by default to carry the most frequently used features. And for many menu options, users remember the keyboard shortcuts and use those instead. So of course people rarely dig into the menus. The menus are there to make every feature easy to find, not just the most frequently used ones, so it stands to reason they'd be used rarely if they're only being used to find infrequently used functionality!
My personal theory, though, is that this wasn't a marketing department making a bad choice and standing by it to save face. This was a deliberate decision by Microsoft to push through a UI change that would intentionally make even Office harder to use. After all, where would the harm have been in supporting both user interfaces? Chrome and Firefox do it, and there was nothing in the Ribbon that couldn't have been triggered by a menu.
Anti-trust and the importance of Office
The work that led to the Ribbon almost certainly started shortly after Microsoft's anti-trust problems concluded, during a phase when the company was under even more anti-trust scrutiny. Until the Bush administration took over in 2001, Microsoft had been butting heads with the DoJ, culminating in Judge Jackson's findings of fact that Microsoft had illegally used its market position to force out competitors.
While Microsoft's issues with Digital Research/Caldera (DR DOS) and IBM (OS/2) were highlights of the findings, the issues that had sparked intervention were related to its attempts to dominate web browsers and its decision to integrate a web browser into the operating system. Microsoft had made that decision in order to own the web, tying what should have been an open standard into the Windows APIs. By 1999, Internet Explorer had an even more extreme hold on Internet usage than Chrome does today, with many websites banning other browsers and many more simply broken in anything that wasn't IE. These weren't obscure websites nobody needed to use, either; I personally recall being blocked from various banking and governmental websites at the time.
In 2000, the courts ruled in favor of breaking Microsoft up into an applications company and an operating systems company. In 2001, this was overturned, but a sizable part of the appeals court's reasoning related to the judge's conduct rather than the merits of the case. Bush's DoJ stopped pushing for a break-up in late 2001, largely in line with Bush's opposition to anti-trust actions, and Microsoft was given more or less free rein, outside of an agreement to be more open with its APIs.
From Microsoft's point of view, “winning” the anti-trust case must have been bittersweet because of the way it was won. The findings of fact were upheld throughout the legal proceedings, and Microsoft only avoided break-up because they successfully wound up the judge enough for him to behave unprofessionally, and because they waited out the clock and were able to get the end of the legal case overseen by a more sympathetic government. There were no guarantees the same thing would happen next time.
It's not clear exactly when Microsoft started to judge Office as more important than Windows to its future, but certainly in the 2000s we saw the beginnings of a change in attitude that made it clear Microsoft was trying to find a way forward that was less reliant on Windows. Office was the most obvious second choice – most corporate PCs run Office, as do a sizable number of non-corporate PCs. Even Macs run Office. Office had a good reputation, and it was (and is) extremely powerful. And because of its dominance of the wordprocessing and spreadsheet markets, the files it generated were themselves a form of lock-in. If you wanted to interact with another Word user, you needed to use Word. There were third-party wordprocessors that tried to support Word's file format, but it turned out supporting the format was only half the problem: if your wordprocessor didn't have the exact same document model that Word did, it would never be able to successfully import a Word document, or export one that would look the same in Word as it did in your third-party wordprocessor.
But until 2006, it wasn't certain that Office's dominance could keep resting on file incompatibility. In 2000, Microsoft had switched to a more open file format, and in 2006, under pressure from the EU, it published the complete specification. Critics at the time complained the specification was too complicated (it runs to around 6,000 pages), but bear in mind it covers the formats for every application under the Office umbrella.
Two decades later, compatibility from third-party applications remains elusive, most likely because of those internal document-model conflicts. But it wasn't clear in the early 2000s that even publishing the Office file formats would prove insufficient to let rivals interoperate within the Office ecosystem.
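(For the curious, the post-2006 format isn't even hard to open: a .docx file is just a ZIP archive, with the main body stored as XML in word/document.xml. A quick sketch, with a hypothetical example.docx standing in for a real file, shows how easy the easy half is; the hard half is reproducing Word's document model once you've read the markup.)

```python
import zipfile
import xml.etree.ElementTree as ET

W_NS = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

# A .docx file is just a ZIP archive; the body lives in word/document.xml.
with zipfile.ZipFile("example.docx") as docx:   # hypothetical file name
    body_xml = docx.read("word/document.xml")

# Pulling out the raw text of every run (w:t element) is trivial...
root = ET.fromstring(body_xml)
text = "".join(node.text or "" for node in root.iter(W_NS + "t"))
print(text)

# ...but faithfully reproducing layout, styles, fields, tables, and tracked
# changes means replicating Word's internal document model, which is exactly
# where third-party importers come unstuck.
```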
The importance of UI balkanization
So, faced with the belief that third parties were about to create Office clones that would cut away a significant chunk of Microsoft's revenue, and knowing that it could no longer use the operating system to simply force people onto whatever applications it wanted them to buy, Microsoft took an extreme and different approach – destroying the one other aspect of interoperability that users need in order to move from one application to another: familiarity.
As I said above, in the late 1990s, if you knew Word, you knew how to use any wordprocessor. If you knew Excel, and you knew about wordprocessing, you could use Word. The standardization of menus and toolbars had seen to that.
To kill customers' ability to move from a Microsoft wordprocessor to a non-Microsoft one, Microsoft needed to undermine that standardization. In particular, it needed a user interface with no standard, intuitive way to find advanced functionality. Introducing such a kludgy, unpleasant user interface was unpopular, but Microsoft had the power to impose it in the early 2000s, since its office suite was a near monopoly. Customers would buy Office with the Ribbon anyway, because they didn't have any choice. And with the right marketing, the changes could even be made to sound like a positive.
Hence the Ribbon. Until you actually try to use it, it doesn't look unfriendly, which makes it very easy to market. And for perhaps the most popular wordprocessing features, it's no worse than a toolbar. But learning it doesn't help you learn the user interface of any other application. Anyone introduced to wordprocessing through the Ribbon version of Word will have no idea how to use LibreOffice, even if LibreOffice has a ribbon. The user interface will have to be relearned.
Note that Microsoft didn't merely introduce the Ribbon as an optional user interface. Firefox and Chrome, to this day, still retain the ability to bring up a traditional menu in addition to their hamburger menus, because they know end users benefit from it. It's just, inexplicably, hidden (in Firefox, tap the ALT key!). But in Word there is no menu; there's nothing to make it easier for end users to transition to the Ribbon, or to keep doing things the way they always did, despite the ease with which Microsoft could have implemented that.
We forgot everything
Microsoft's success in foisting the Ribbon on the world basically messed up user interfaces from that point onwards. With the sacred cow of interoperable user interfaces slaughtered, devs started to deprecate standardization and introduce “new” ways of doing things that ignored why the old ways had been developed in the first place. Menus have been replaced with buttons, scrollbars have been replaced by... what the hell are those things... and there's little or no logic behind any of the changes beyond “it's new, so it doesn't look old”. Many of the changes have been implemented to be “simpler”, but in most cases the aesthetic is all that's been simplified; finding the functionality a user wants is harder than ever before.
It would have helped if devs had realized at the time that Microsoft had done this for all the wrong reasons. It's not as if most of them trust Microsoft or believe it does things for the right reasons.