#?.info

Yes, another showerthought-inspired TL;DR blog. But “DR” is in the name of the blog, so it's not exactly as if I'm bothering you with this.

Anyway. Microkernels. Good idea? Or bad idea? Torvalds was right... or wrong?

Well, both. Let's have a talk about what they are first.

In the 1980s, microkernels were considered THE FUTURE. Virtually every respected academic pointed out that shoving lots of complicated code into one program that your entire computer relied upon was insanely stupid. They argued that it should be relatively easy to split the kernel into “servers”: single-purpose programs that managed specific parts of the system. For example, you might have a Disk Operating System that applications use to access files. That Disk Operating System might talk to Handler processes that manage the file systems on each device. And those in turn might talk to the Device Drivers. Each part of what in modern terms would be a kernel becomes a program that can, at worst, crash without taking down the rest of the system. The servers – the DOS, the File System Handlers, the Device Drivers – would all be processes running under a very simple kernel (the microkernel! Geddit?!) that would schedule processes, manage memory, and provide ways for the servers to communicate with one another.

(Microkernels should not be confused with hypervisors, which are thin kernels intended to run multiple operating systems. Much of the early hype about microkernels did overlap, though, with advocates pointing out that in theory you could create “personalities”. In the example above, in addition to the Disk Operating System, you could have a completely different API server, one providing a view of the world that looked like Unix. And that server could talk to its own set of handlers, or to the same set.)

Academics generally agreed that microkernels were the only acceptable design for modern operating systems. Against this were more traditional operating systems, Unix being an obvious example, which had a single monolithic kernel with all the file systems and device drivers compiled into it. (Systems like CP/M and MS-DOS weren't really advanced enough to be part of the discussion.)

Microkernels enter the real world

Academia brought us MINIX and Mach during the 1980s. Mach was the basis of several commercial projects such as MkLinux and, more successfully, XNU (the kernel of NEXTSTEP and Mac OS X), but those commercial projects weren't microkernels; they were always hybrid kernels – kernels where most of the servers were integrated into a single space where they could freely communicate with one another at the cost of security.

The commercial world in turn tried to implement the concept but inevitably failed. Many readers will have read my description of how microkernels work above, with its mentions of a “DOS” and “handlers” and “device drivers”, and immediately thought of AmigaOS, which was structured like a microkernel-based system but wasn't truly one. At first sight it's easy to see why: the Amiga had no memory management chip, so it literally wasn't possible to sandbox the different components. But in reality the problems were deeper than that. AmigaOS demonstrated that you could get good performance out of a microkernel-style operating system if the different components could quickly and easily communicate with one another. In AmigaOS, a device driver could talk to a handler just by sending it, via the kernel, the address of, say, where it had just loaded some data from disk. Suddenly that handler had 512 bytes of data available to do with whatever it needed to do. But that's not compatible with how memory management is done on modern CPUs. Modern CPUs are about sandboxing processes: sending 512 bytes from one process to another means rather more than simply sending a four-byte address, it involves either reconfiguring the memory map of both processes to see the same 512-byte block of RAM, or asking the kernel to copy that data byte by byte. These are expensive operations. AmigaOS only worked because there was no memory management as we know it, just a giant shared block of memory everything used. And because memory was shared, a crash by one device driver could, actually, take the entire system down, rather than just affect access to the device involved.
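
To make that concrete, here's a rough C sketch of the difference (illustrative only – the names and types are mine, not the real AmigaOS exec API):

/* Illustrative only: why message passing is cheap in one shared address
   space and expensive between MMU-isolated processes.                   */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct io_message {        /* hypothetical "sector read finished" message  */
    uint8_t *data;         /* pointer into the single shared address space */
    size_t   length;       /* e.g. 512 bytes of freshly loaded disk data   */
};

/* AmigaOS-style: the driver just hands the handler a pointer. Constant
   cost, no copying -- the handler reads the same 512 bytes in place.     */
void notify_handler_shared(struct io_message *msg,
                           void (*handler)(uint8_t *, size_t))
{
    handler(msg->data, msg->length);
}

/* What process isolation forces on you: the kernel must either remap
   pages into the receiver's address space or copy the payload across,
   plus the context switches either way.                                  */
void notify_handler_isolated(const uint8_t *data, size_t length,
                             uint8_t *receivers_copy)
{
    memcpy(receivers_copy, data, length);
}

On a 68000 with one flat, shared memory map the first version is all you need; once every process has its own page tables, you're stuck with some variant of the second.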

This expense in the end crippled a series of other commercial projects that almost certainly looked like a good idea at the time: elegant, modular, exactly the type of thing every programmer starts to develop only to realize will never work once they start coding. A big question for me in the 1980s was why Acorn lumbered the amazing ARM-based computers they created with a crappy, third-rate operating system descended from Acorn's 8-bit BBC OS, “MOS”. The answer is... they did try to create a modern microkernel-based OS for it, called ARX, and immediately got stuck. Despite running on one of the world's fastest microcomputer environments, the system had performance issues that its creators couldn't get around. The moment the elegant design hit reality, it failed, and Arthur/MOS's “good enough” environment was expanded into RISC OS, which used cooperative multitasking and other kludges to make something useful out of a woefully underpowered base.

On the other side of the Atlantic, other companies enthusiastically writing next-generation operating systems had the same issues. Apple started, then co-funded, then walked away from, Taligent. DEC was keeping Dave Cutler busy with MICA, which didn't go anywhere. Finally there was Microsoft, which was working on a more traditional system with IBM (OS/2) and which, for various reasons, hired Dave Cutler away from DEC after MICA's cancellation to develop Windows NT.

The latter came closer than any other commercial microkernel-based operating system to achieving some level of success. In practice though, Microsoft didn't feel comfortable making Windows NT its primary operating system, despite high levels of compatibility from NT 4 onwards, until the early 2000s, at which point the system was no longer a classic microkernel system, with many essential services, including the graphics drivers (!), integrated into the main kernel.

So why the failures?

At first sight, it's easy to blame the failure of microkernels on performance issues. But that's not actually what happened. There are two bigger issues. The first was that most commercial microkernel projects were part of a much bigger attempt to build an elegant, well-designed operating system from scratch; the microkernel was only one component.

But the second was modern memory management. At some point in the 1980s, the major makers of microcomputer CPUs started to release advanced, secure memory management systems for their existing CPUs. Motorola and Intel both grafted virtual memory onto their existing architectures by allowing operating systems to rearrange the addressable memory as needed. This was all that was needed for Unix to work, and Unix was considered the most advanced operating system a personal computer user would want to run.

And yes, Unix somehow managed to be both a very big deal and an irrelevance in the personal computing world. Microsoft, acknowledging that MS-DOS 1.0 was little more than a CP/M-like program loader, saw DOS's future as converging with Xenix, its Unix variant. The press described anything with multitasking, from AmigaOS to OS-9, as “Unix-like”, no matter how unlike Unix it was, because Unix was seen as The Future.

So from the point of view of the big CPU makers, a simple memory remapping system was “good enough” for the most advanced operating systems envisaged as running on their chips. There was another factor behind both Intel and Motorola designing MMUs this way: Motorola had designed a very successful 32-bit ISA for its CPUs that programmers adored. Intel's segmented approach had proven to be a failure, propped up only by IBM's decision to include the 8088 in its PC. Intel was focusing on making a pure 32 bit ISA for its next generation of processors, while Motorola saw no need to change its ISA, and saw MMUs as something that could be bolted on to the architecture of a 68000-based system. By the time it became important, neither saw any value in taking a risk and introducing architectures that would integrate memory management with their ISAs.

Why is this important? Well, go back to the AmigaOS description earlier. In the Amiga, the pseudo-microkernel was fast because servers only needed to send each other addresses to transmit large amounts of data between them. On the 68000 ISA there is no way to graft security onto this system – you can't validate a pointer or the memory it points to. But in the mid-1960s and early 1970s, hardware memory management systems were devised that allowed exactly this kind of thing. The approach is called capability addressing. Capabilities are pointers to blocks of memory, typically with permissions associated with them (much like a file). Creating new capabilities is a privileged operation; you can't just use some pointer arithmetic to create one. Storing a capability in memory requires that the CPU have some way to flag that value as being a capability, typically an extra bit for every word of memory. This way programs can load and store capabilities in memory without risking reading normal data as a pointer or vice versa.
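
If a toy model helps, here's the idea in C (my own illustration; real capability hardware encodes this differently, and keeps the tag as a hidden per-word bit rather than a struct field): a “pointer” that carries its own bounds and permissions, which unprivileged code can narrow but never forge.

/* Toy model of capability addressing -- purely illustrative, not any
   real hardware design.                                                */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum { CAP_READ = 1, CAP_WRITE = 2 };

typedef struct {
    uintptr_t base;    /* start of the block this capability grants access to */
    size_t    length;  /* size of the block                                   */
    unsigned  perms;   /* CAP_READ | CAP_WRITE                                */
    bool      tag;     /* "this really is a capability" -- hardware-maintained,
                          out-of-band in a real machine                       */
} capability;

/* Unprivileged code may narrow a capability it already holds...        */
capability cap_restrict(capability c, size_t offset, size_t length, unsigned perms)
{
    capability out = c;
    if (!c.tag || offset + length > c.length || (perms & ~c.perms)) {
        out.tag = false;            /* hardware would clear the tag: unusable */
    } else {
        out.base   = c.base + offset;
        out.length = length;
        out.perms  = perms;
    }
    return out;
}

/* ...but every load/store is checked against the capability it goes through. */
bool cap_store_byte(capability c, size_t offset, uint8_t value)
{
    if (!c.tag || !(c.perms & CAP_WRITE) || offset >= c.length)
        return false;               /* real hardware would trap here          */
    *(uint8_t *)(c.base + offset) = value;
    return true;
}

Pass one of these to a file system handler and it can touch exactly that 512-byte buffer and nothing else – which is precisely the property the AmigaOS design was missing.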

A capability architecture would be perfect for an operating system like AmigaOS. It would, with relatively small modifications, be secure. The different servers would be able to communicate by passing capabilities instead of pointers. If one crashes, it wouldn't be able to write to memory not allocated to it because it wouldn't have any capabilities in its memory space that point at that data.

The problem, of course, is that no popular CPUs support capabilities, and most of those that did were also considered failures. Intel tried to produce such a system in the very early 1980s, the iAPX 432, which was not part of its 80x86 family. It was chronically slow. And the 1980s were not a time to produce such a chip: the extra bit required for each 32-bit (at minimum) pointer would have been considered cost prohibitive at a time when computers came with hundreds of kilobytes of RAM.

It would be remiss of me not to mention that there was also another theoretical possibility: managed code. In managed code, programs are compiled to an intermediate language which can be proven “secure” – that is, unable to access resources it hasn't been given direct access to. The two most famous examples are the Java Virtual Machine and .NET. Both systems have problems however: their garbage collectors require that the memory of the machines they're running on be locked for indeterminate amounts of time while they account for what's in use (a process called “marking”), though it's worth mentioning that Rust's alternative approach to memory safety, which avoids a garbage collector entirely, suggests a VM could be built with better real-time behavior. Another problem was that during the 1980s C became the standard applications development language, with personal computers not being taken seriously unless they were capable of running it: but the high-level approach of a VM is at serious odds with C's low-level memory management, making it impossible to create an efficient C compiler for such an environment.
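
For a flavour of that conflict, this is the kind of perfectly legal C that no verifiably “secure” intermediate language can admit (a deliberately silly example of mine, not from any real codebase):

/* Conjure a pointer out of an arbitrary integer and write through it.
   A verifier for a "secure" intermediate language has no way to prove
   this touches only memory the program was actually granted.           */
#include <stdint.h>

void poke(uintptr_t address, uint8_t value)
{
    *(volatile uint8_t *)address = value;
}

1980s applications and drivers did this sort of thing routinely (poking video memory, hardware registers); an IL that admits it can't be proven safe, and an IL that bans it can't compile a lot of real-world C.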

So, TL;DR, it wasn't that microkernels were wrong, it's that the technology choices of the 1980s and 1990s, the time when it was most important, made microkernels inefficient and difficult to implement. By the time memory prices had fallen to a point that a CPU architecture optimized for microkernels would have been viable, the world had standardized on operating systems and system architectures that weren't compatible with the concept.

The failures of the 1980s were mostly because developers were being overly ambitious and didn't have the right architectures to work with in the first place.

All of which is a shame. I'd love my primary OS to be like AmigaOS is/was, but with security.

I have no raw figures; I have tried to get them from the Internet. All I have are memories. And the most vivid is of a colleague distributing Ubuntu CDs at work during the mid-2000s. Ubuntu was the next big thing, it was said. And when I tried it, I had to admit, I couldn't blame anyone for saying so.

Ubuntu in the 2000s was a first class OS. It had the following massive features:

  • It ran on pretty much anything powerful enough. The installer was first rate, booted in the most trying conditions, and installed an image with your wireless, sound and accelerated video all ready to use. Well, OK, ATI cards needed that fglrx thing to get acceleration, and I can't remember exactly how you installed it, but I do know it wasn't hard.
  • It ran GNOME 2. For those who are wondering why that was a good thing, GNOME 2 was basically an intuitive user interface that was somewhat more obvious and consistent than Windows, and maybe a small step back from Mac OS X. It was customizable but...
  • ...it had sane defaults everywhere. The default Ubuntu desktop at that time was easy to understand.

Did you have to drop into the command line to do anything? That depended. You sometimes did in the same way you sometimes have to with Windows or Mac OS X. You have an obscure set of technical conditions, and you need to debug something or configure something equally obscure, and just like Mac OS X and Windows you'd have to use the “nerd” user interface. But an “average” user who just wanted a web browser and office suite would not ever need to do that.

So it wasn't surprising that, anecdotally (like I said, it seems to be rough getting any concrete figures – Statcounter claims a 0.65% market share for “Linux” in 2009, but I don't trust them as far as I can throw them, and more importantly they have no pre-2009 figures online, making it hard to show growth during that period; it's also contradicted by other information I'm finding on the web), Ubuntu started to drive installs of GNU/Linux. People really seemed to like it. I even heard of major figures in the Mac world switching at the time. Ubuntu was the OS everyone wanted; it “just worked”.

So what happened in the 2010s to halt this progress? Did everything change? Yes.

And by everything, I mean Ubuntu.

Ubuntu decided to change its user interface from GNOME 2 to Unity. In part this was driven by the GNOME team themselves who, for whatever reason, decided the GNOME 2 user interface was obsolete and they should do something different.

I'm not necessarily opposed to this thinking, except the “obsolete” part, but what neither party (Canonical, authors of Ubuntu and the Unity user interface, and the GNOME team) did was go about this with an understanding of the impact on existing users. Namely:

  • The user interfaces they proposed were in most cases radically different from GNOME 2. So existing users wanting to upgrade would find they would literally have to learn how to use their computers again.
  • The user interfaces proposed only partially used the paradigms that everyone had gotten used to and trained on during the 1990s. GNOME 3 in particular switched to a search model for almost everything. Unity was a little more standard, but launching infrequently used applications in both environments was confusing. These user interfaces were only slightly closer to what had become standard in the 1990s than the new mobile touchscreen UIs that doubtless had influenced their authors.

To understand how massive a problem this was, look at Apple and Microsoft's experience with user interface refreshes.

Apple does it right

Let's start with Apple, because Apple didn't fail:

In the 1990s and early 2000s, Apple switched from their 1980s MacOS operating system to the NEXTSTEP-derived Mac OS X. NEXTSTEP and MacOS were nothing alike from a user interface point of view, making shipping NEXTSTEP with new Macs a non-starter. So Apple took pains to rewrite the entire NEXTSTEP user interface system to make it look and feel as close as possible to contemporary MacOS.

The result was Rhapsody. Rhapsody had some “feel” issues in the sense of buttons not quite responding the same way they did in MacOS, some things were in a different place, and running old MacOS applications felt clumsy, but a MacOS user could easily switch to Rhapsody and while they would be aware they were running a new operating system, they knew how to use it out of the box.

Rhapsody was well received by those who got it (it was released in beta form to developers, and sold for a while as Mac OS X Server 1.0), but from Apple's point of view, they still had time to do better. So they gave the operating system's theme an overhaul, creating Aqua. But the overhaul was far more conservative than people give Apple credit for:

  • If something was recognizably a button in Rhapsody/MacOS, it was recognizably a button in Aqua.
  • If something was a window in Rhapsody/MacOS, it was recognizably a window in Aqua.
  • If you did something by dragging it or clicking it or poking your tongue out at it in Rhapsody/MacOS, you'd do the same thing in Aqua.
  • If it was in the top left corner in Rhapsody/MacOS, it was in the top left corner in Aqua. Positions generally stayed the same.

...and so on. The only major new user interface element they added was a dock. Which could even be hidden if the user didn't like it.

So the result, when Apple finally rolled this out, was an entirely new operating system with a modern user interface that looked fantastic and was completely, 100% usable by people used to the old one.

Microsoft “pulls a Ubuntu/GNOME” but understands how to recover

In some ways saying Apple did it right and Microsoft didn't is unfair, because Microsoft has done operating system upgrades correctly more times than you might imagine. And they even once managed a complete GNOME-style UI overhaul that actually succeeded: replacing Windows 3.x's UI with Windows 95's UI. They were successful that time, though, for a variety of reasons:

  • Windows 3.x was really hard to use. Nobody liked it.
  • The new Windows 95 user interface was a composite UI based upon Mac OS, Amiga OS, GEM, Windows 1.x, OS/2, and so on. It was instantly familiar to most people who had used graphical mouse-driven user interfaces before.
  • In 1995, there were still people using DOS. Windows 3.x was gaining acceptance but wasn't universally used.

Since then, from 1995 to 2012, Microsoft managed to avoid making any serious mistakes with the user interface. They migrated NT to the 95 UI with Windows NT 4. They gave it an (in my view ugly) refresh with Windows XP, which was a purely visual clean-up similar to, though not as radical as, the Rhapsody-to-Aqua user interface changes I noted above. But like Rhapsody to Aqua, no serious changes in the user interface paradigm were made.

They did the same thing with Vista/7, creating a clean, composited UI that was really quite beautiful, yet, again, kept the same essential paradigms, so a Windows 95 user could easily switch to Windows 7 without having to relearn anything.

Then Microsoft screwed up. Convinced, as many in the industry were at the time, the future was touch user interfaces and tablets, they released Windows 8, which completely revamped the user interface and changed how the user interacted with the computer. They moved elements around, they made things full screen, they made things invisible.

Despite actually being very nice on a tablet, and despite PC manufacturers pushing 2 in 1 devices hard on the back of Windows 8's excellent touch screen support, users revolted and refused to have anything to do with it.

Windows 8 generated substantial panic at Microsoft, resulting in virtually all the user interface changes being reverted in Windows 10, its major successor. Windows 10 itself was rushed out, with early versions being buggy and unresponsive. But compared to Windows 7, the user interface changes were far less radical. It retained the Windows 7 task bar, the start menu, and buttons were where you'd expect them. A revised preferences system was introduced that... would have been controversial if it wasn't for the fact that earlier versions of Windows had a fragmented collection of half-written preferences systems anyway. A notifications bar was introduced, but it wasn't particularly radical.

But windows, buttons, etc, all operated the same way they did in Windows 7 and its predecessors.

What is NOT the reason Ubuntu ceased to be the solution in the 2010s

Amazingly, I've heard the argument Ubuntu failed because the underlying operating system is “too nerdy”. It isn't. It's no more nerdy than Mac OS X, which was based on a similar operating system.

Mac OS X is based on a kernel called XNU, which in turn is based on a kernel called Mach, that's been heavily modified, and a userland that's a combination of – let's call it user interface code – and BSD. There are some other small differences like the system to manage daemons (in old school BSD this would have been bsdinit), but nothing that you'd immediately notice as an end user.

All versions of GNU/Linux, including Ubuntu, are based on a kernel called Linux, and a userland that's a combination of the GNU project, some other projects like X11 (which provides the core windowing system), and some GNU projects like GNOME (which does the rest of the UI). There are multiple distribution-specific changes to things like, well, the system to manage daemons.

So both are a kernel (XNU or Linux), a userland (BSD or GNU), and then some other stuff that was bolted on.

XNU and Linux are OS kernels designed as direct replacements for the Unix kernel. They're open source, and they exist for slightly different reasons: XNU's Mach underpinnings were an academic research project, and Linux was Linus Torvalds' effort to get a Unix-like system of his own (developed under MINIX and paired with GNU tools) running on his 386 computer.

BSD and GNU are similar projects that ultimately did the same things as each other but for very different reasons. They're both rewrites of Unix's userland, that started as enhancements, and ultimately became replacements. In BSD's case it's just a project to enhance Unix that grew into a replacement because of frustration at AT&T's inability to get Unix out to a wider audience. In GNU's case, it was always the plan to have it replace Unix, but it started as an enhancement because it's easier to build a replacement if you don't have to do the whole thing at once.

So... that's all nerd stuff, right? Sure. But dig into both OSes and you'll find they're pretty much built the same way: a nice friendly user interface bolted onto Unix-like underpinnings that'll never be friendly to non-nerds. So saying Ubuntu failed because it's too nerdy is silly. Mac OS X would have failed for the same reason if that were true. The different origins of the two do not change the fact they're similar implementations of the same underlying concept.

So what did Ubuntu do wrong and what should it have done?

The entire computer industry at this point seems to be obsessed with changing things for the sake of doing so, to make it appear they're making progress. In reality, changes should be small, and cosmetic changes are better for both users and (for want of a better term) marketing reasons than major paradigm changes. The latter is bad for users, and doesn't necessarily help “marketing” as much as marketing people think it helps them.

Ubuntu failed to make GNU/Linux take off because it clumsily changed its entire user interface in the early 2010s for no good reason. This might have been justifiable if:

  • The changes were cosmetic as they were for the user interfaces in Windows 95 vs XP vs Vista/7 vs 10/11, and Rhapsody vs Aqua. They weren't.
  • The older user interface it was replacing was considered user unfriendly (like the replacement of Windows 3.1's with 95.) It was, in fact, very popular and easy to use.
  • The older user interface prevented progress in some way. If this is the reason, the apparent progress GNOME 3+ and Unity enabled has yet to be identified.
  • The older user interface was harder for users migrating from other systems to get used to than its replacements. This is laughably untrue.

Radically changing a user interface is a bad idea. It makes existing users leave unless forced to stay. And unless it's sufficiently closer to the other user interfaces people are using, it won't attract new users. It was a colossal misstep on GNOME and Canonical's part.

GNOME 3/Unity should, to put it bluntly, have had the same fundamental paradigm as GNOME 2. Maybe with an optional dock, but not the dock-and-search focused system they put in instead.

Where both teams should have put their focus is simple modernization of the look, with larger changes focused on less frequently used parts of the system, or on internals needed to attract developers. I'm not particularly pro-Flatpak (and Snap can die a thousand deaths), but making it easier to install third party applications (applications not in repositories) would also have addressed some of the few areas where other operating systems did better than Ubuntu. There's a range of ways of doing this that don't involve sandboxing things and forcing developers to ship and maintain all the dependencies of their applications, such as:

  • Identifying a core subset of packages that will only ever be replaced by backward-compatible versions for the foreseeable future and will always be installed by default, and encouraging static linking for libraries outside that subset, even making static linking the default. (glibc and the GTK libraries are obvious examples of the former, libraries that should be fully supported going forward with complete backward compatibility, while more obscure libraries and those that have alternatives – image file parsers would be an example – should be statically linked by default.)
  • Supporting signed .DEBs
  • Making it easy to add a third party repository while sandboxing it (to ensure only relevant packages are ever loaded from it) and authenticating the identity of the maintainer at the time it's added. (Canonical's PPA system is a step in the right direction, but it does force the repos to be hosted by them.)
  • Submitting kernel patches that allow for more userland device drivers (giving them a stable ABI)

Wait! This is all “nerd stuff”. But non-nerds don't need to know it; from their perspective they just need to know that if they download an application from a website, it'll “just work”, and continue to work when they install GreatDistro 2048.1 in 24 years.

What is NOT the solution?

The solution is not an entirely different operating system, because any operating system that gets the same level of support as GNU/Linux will find itself making the same mistakes. To take an example off the top of my head – no particular reason to select this one except that it's a well regarded open source OS that's not GNU/Linux – ooh, how about Haiku, the OS inspired by BeOS?

Imagine Haiku becoming popular. Imagine who will be in charge of it. Will these people be any different to those responsible for GNOME and Canonical's mistakes?

No.

Had Haiku been the basis of Ubuntu in the 2000s, it's equally possible that Haiku would have suffered an unnecessary user interface replacement “inspired” by the sudden touch screen device craze. Why wouldn't it? It happened to GNOME and Ubuntu. It happened to Windows, for crying out loud. Haiku didn't go there not because it's inherently superior, but because it was driven by BeOS-loving purists in the time period in question. If Haiku became popular, it wouldn't be driven by BeOS-loving purists any more.

Frankly, I don't want Haiku to become popular, for that reason: it'd ruin it. I'd love, however, for using fringe platforms to be more practical...

Been using this today:

https://cambridgez88.jira.com/wiki/spaces/OZVM/overview

The Z88 was the last computer released by Sir Clive Sinclair (under the name Cambridge Computer, as Amstrad by then had bought the rights to the Sinclair name). The Z88 was an A4-paper (that's “like Letter-size but sane” to 'murricans) sized slab-style laptop computer. By slab-style I mean the screen and keyboard were built into a single rectangular slab; it didn't fold like a modern laptop. It was Z80 based, had solid state memory, and a 640x64 monochrome (supertwist LCD) display which looked gorgeous. There was 32k of battery-backed RAM, but my understanding is functionality was very limited unless you put in a RAM expansion – limited base RAM being, the Spectrum aside, something of a Sinclair trademark. In classic Sinclair style it had a rubber “dead flesh” keyboard, though there was a justification given – the keyboard was “quiet” – and that probably was legitimately a selling point.

Sir Clive had a dream dating back to the early 1980s that everyone should have a portable computer that was their “main” computer. The idea took shape during the development of the ZX81, and was originally the intended use of the technologies that went into the QL. Some of the weirder specifications of the QL, such as its 512x256 screen being much wider than the viewable area of most TVs, came from Sinclair's original intention to use a custom CRT with a Fresnel lens set up as the main display for the machine. Early on it was found that the battery life of the portable computer designed around the ZX83 chips was measured in minutes, and the idea was discarded. (I believe, from Tebby's account, that the ZX83 chips remained unchanged because they started to have difficulty getting new ULA designs tested.)

So... after selling up to Amstrad, Sinclair tried one last time and made a Z80-based machine. He discarded both Microdrives (which weren't energy efficient, and I suspect belonged to Amstrad at this point) and his cherished flat-screen CRT technologies (which were widely criticized) and finally adopted LCDs. And at that point it looks like everything came together. There were still issues – the machine needed energy-efficient static RAM, which did (and does) cost a small fortune, so the Z88 had limited storage in its base form. Flash was not a thing in 1988 and EEPROMs were expensive and limited, but more conventional EPROMs (which used UV light to reset them) were affordable storage options.

So, with a combination wordprocessor/spreadsheet (Pipedream), BASIC, a calendar/clock, and file management tools, the computer was finally genuinely useful.

I never got a Z88 as I was still a teenager at the time and the cost was still out of my league. When I got my QL it was 80GBP (on clearance at Dixons) which I just had enough savings for. Added a 25GBP monitor a few months later. But that gives you some idea of the budget I was on during the height of the original computer boom.

Anywho, IIRC the Z88 ended up being around 200GBP and the media was even more expensive, which would have been a hell of a gamble for me at the time given that, despite Sir Clive's intentions, it was far from a desktop replacement. It had limited programmability – it came with BBC BASIC (not SuperBASIC, as Amstrad now had the rights to that) but otherwise development was expensive. And a 32K Z80-based computer in 1988 was fairly limited.

But I really would have gotten one had I had the money. I really loved the concept.

The emulator above comes as a Java package that requires an older version of Java to run. It wouldn't start under OpenJDK 17 (which comes with Debian 12), but I was able to download JDK 6 from Oracle's Java archive (https://www.oracle.com/java/technologies/javase-java-archive-javase6-downloads.html), which ran fine from the directory I installed it into without having to mess with environment variables.

Anyway, a little glimpse into what portable computing looked like in the 1980s, pre-smartphones and clamshell laptops.

See also:

There's also the ill-fated Commodore LCD, a 6502 KERNAL-based system designed by Bil Herd. It wasn't a slab, having a fold-out screen, but it was similar in concept. It was killed by an idiotic Commodore manager who asked Radio Shack whether Commodore should enter the market with a cheap laptop, and who believed the Radio Shack executive he spoke to when said exec told him there wasn't a market. Radio Shack was, of course, selling the TRS-80 Model 100 at the time, and making money hand over fist.

Final comment: these types of slab computer weren't the only “portable computers” in the 1980s. Excluding luggables (which weren't true portables in the sense they couldn't run without a mains power connection), and a few early attempts at clamshell laptops, there were also “pocket computers”. Made mostly by Casio and Sharp, these were miracles of miniaturization, usually with only a few kilobytes of memory at most and a one- or two-line alphanumeric LCD display. I had a Casio PB-80, which had about 500 bytes of usable memory. (IIRC they called bytes “steps”, reflecting the fact these things were designed by their manufacturers' programmable calculator divisions.) They did have full versions of BASIC, and arguably their modern successors are graphing calculators. These devices were nice, but their lack of any communications system or any way to load/save to external media made them limited for anything beyond really simple games and stock calculator functions.

So again, a set of random thoughts. But it culminates with wondering whether the official story behind at least one of the major UI changes of the 21st Century isn't... bollocks.

History of the GUI

So let's go back to 1983. Apple releases a computer called the Lisa. It's slow, expensive, and has a few hardware flaws, notably the stretched pixels of the screen, which seemed OK when they were designing it but caused obvious problems later on. But to many, it's the first glimpse of the modern GUI. Drop down menus. Click and double click. Icons. Files represented by icons. Windows representing documents. Lisa Office was, by all accounts (I've never used it), the pioneer that set the stage for everything that came afterwards. Apple is often accused of stealing from Xerox, and certainly the base concepts came from Doug Engelbart's pioneering work and Xerox's subsequent development of office workstations, but the Lisa neatly fixed most of the issues and packaged everything in a friendly box.

The Mac came out a year later, and while the Mac is often described as a low cost version of the Lisa, that's not really fair to the Mac team. They were developing their system, for the most part, at the same time as the Lisa, and the two teams swapped ideas with one another. The Mac contained hardware changes, such as 1:1 pixels, that never made it into the Lisa, cut a sizable number of things down so they'd work on lower capacity hardware, and introduced an application-centric user interface compared to the Lisa's more document-centric approach.

Meanwhile Microsoft and Digital Research tried their hands at the same thing, Microsoft influenced primarily by the Lisa, and DR by the Mac, with the latter's GEM system coming out almost exactly a year after the Mac, and Microsoft's Windows, after a lot of negotiations and unusual development choices, coming out nearly a year after that.

The cat was out of the bag, and virtually every 16/32-bit computer after the Macintosh came with a Mac/Lisa-inspired GUI from 1985 onwards. There are too many to name, and I'd offend fans of {$random 1980s 16/32 bit platform} by omitting their favorite if I tried to list them all. But there were a lot of choices and a lot of great and not-so-great decisions made; some were close to the Mac, others were decidedly different, though everyone, from NeXT to Commodore, adopted the core concepts: windows, icons, pointers, mice, scrollbars, drop downs, and so on.

By the early 1990s, most mainstream GUIs, Windows and NEXTSTEP excepted, were very similar, and in 1995, Microsoft's Windows 95 brought Windows to within spitting distance of that common user interface. The start menu, the task bar, and the decision to put menus at the top of every window instead of the top of the screen distinguished Windows from the others, but it was close enough that someone who knew how to use an Amiga, ST, or Mac could use a Windows PC and vice versa without effort.

Standardization

But what made these UIs acting in a similar way useful wasn't cross-platform standardization, but cross-application standardization. Even before Windows 95, there was an apex to the learning curve that everyone could reach. If you knew how to use Microsoft Excel, and you knew what a word processor was, you could use Microsoft Word. You could also use WordPerfect. You could also use Lotus 1-2-3 – at least the Windows version, when it finally came out.

This was because, despite differences in features, you operated each in the same way. The core applications built a UI from similar components. Each document had its own window. The menus were ordered in approximately the same way. The File menu allowed you to load and save, the Edit menu allowed block copying, and if there was a Format menu, it allowed you to change roman text to italics, etc. Tools? You could reasonably guess what was there.

Underneath the menu bar – or, in the Mac's case, usually as moveable “palettes” – were toolbars, which were frequently customizable. The blank sheet of paper on one let you create a new document; the picture of the floppy disk let you save it. The toolbar with the bold B button, the underlined U button, and a drop down with a list of fonts let you quickly adjust formatting. So you didn't have to go into the menus for the most common options.

The fact was all programs worked that way. It's hard to believe in 2024, because most developers have lost sight of why that was even useful. To an average dev in 2024, doing the same thing as another program is copying it. But to a dev in 1997, it meant you could release a complex program to do complex, difficult-to-understand things that people already knew how to use.

Microsoft breaks everything

You may have noticed that's just not true any more. Technically both, say, Chrome and Firefox still have the regular drop down menus, but they've gone to great lengths to hide them, and to encourage people to use an application-specific “hamburger menu” instead. And neither has a toolbar. The logic is something like “save screen space and also make it work like the mobile version”, but nobody expects the latter, and “saving screen space” is... well, an argument for another time.

(Side note: I've been arguing for a long time among fellow nerds that the Mac's “top of screen” rather than “top of window” approach to menus is the superior option (I'm an old Amiga user), and I've explained Fitts's Law to them, and how putting a menu at the top of the screen makes it easy to quickly select menu options while doing the same when the menu is at the top of a window is fiddly. Usually the response comes back “Oh so you're saying it saves screen space? Pffft, who needs to, we all have 20” monitors now”, and I shake my head in annoyance at the fact nobody reads anything any more, not even things they think they're qualified to reply to. Dumbasses. No wonder GUIs have gone to hell. Anywho...)

Anyway, while it's kind of relevant that nerds don't appear to understand why UIs are designed the way they are and aren't interested in finding out, that's not the point I was making. The point was that if “we all have 20” monitors now so have plenty of space” is some kind of excuse for wasting desktop space, then refusing to have a menu in the first place can't be justified on space-saving grounds.

But Google and Mozilla are just following a trend. The trend wasn't set by either (though they're intent on making things worse), and wasn't even set by the iDevices when Apple announced them (though those have given devs excuses to make custom UIs for their apps.) It was set by Microsoft, in 2007, with the introduction of The Ribbon.

The Ribbon is an Office 2007 feature where menus and toolbars have been thrown out and replaced by a hard-coded, giant, toolbar-ish thing. Things are very, very roughly categorized, and then you have to scan the entire thing to find the function you want on the ribbon tab you think it might appear on, because the controls have been placed in no particular order.

It is, hands down, the single worst new UX element ever introduced in the post-1980s history of GUIs. Not only do you now need to learn how to use an application that uses it, because your knowledge of how other similar applications work no longer applies, but you can spend your whole life not realizing basic functionality exists because it's hidden behind a component in the ribbon that's not immediately relevant.

And learning how to use Excel, and knowing how a word processor works (maybe you used Office 2003?) brings you no closer to knowing how to use Microsoft Word if you use a ribbon version.

Microsoft was roundly criticized for this UI innovation, and a good thing too, but Microsoft decided, rather than responding to the criticism, to dig in their heels and wait for everything to blow over. They published usability studies that claimed it was more productive, though it's unclear how that could possibly be true. The claim was also made that it was based upon earlier usability studies: users, it was claimed, always used the toolbar and almost never used the menus, for everything!

Well, no sugar, Sherlock. Most toolbars are set up by default to have the most frequently used features on them. And for many of the menu options, users remember the keyboard shortcuts, so they use those. So of course people will rarely dig into the menus. The menus are there to make it easy to find every feature, not just the most frequently used ones, so it stands to reason they'd be rarely used if they're only being used to find infrequently used functionality!

My personal theory, though, is that this wasn't a marketing department making a bad choice and wanting to stand by it to save face. This was a deliberate decision by Microsoft to push through a UI change that would intentionally make even Office harder to use. After all, where would the harm have been in supporting both user interfaces? Chrome and Firefox do it, and there was nothing in the Ribbon that couldn't have been triggered by a menu.

Anti-Trust and the importance of Office

The work that led to the Ribbon almost certainly started shortly after Microsoft's anti-trust problems concluded, during a phase when the company was under even more anti-trust scrutiny. Until the 2001 Bush administration, Microsoft had been butting heads with the DoJ, culminating in Judge Jackson's findings of fact that Microsoft had illegally used its market position to force out competitors.

While Microsoft's issues with Digital Research/Caldera (DR DOS) and IBM (OS/2) were highlights of the findings, the issues that had sparked intervention were related to Microsoft's attempts to dominate web browsers and its decision to integrate the web browser into the operating system. Microsoft had made that decision in order to own the web, to tie what should have been an open standard into the Windows APIs. By 1999, Internet Explorer had an even more extreme hold on Internet usage than Chrome does today, with many websites banning other browsers and many others simply broken in browsers that weren't IE. These weren't obscure websites nobody needed to use either; I personally recall being blocked from using various banking and governmental websites at the time.

In 2000, the courts ruled in favor of a break-up of Microsoft into an applications company and an operating system company. In 2001, this was overturned, but a sizable part of the reason the appeals court did so related to the judge's conduct rather than the merits of the case. Bush's DoJ stopped pushing for a break-up in late 2001, largely in line with Bush's opposition to anti-trust actions, and Microsoft was given more or less free rein, outside of an agreement to be more open with its APIs.

From Microsoft's point of view, “winning” the anti-trust case must have been bittersweet because of the way it was won. The findings of fact were upheld throughout the legal proceedings, and Microsoft only avoided break-up because they successfully wound up the judge enough for him to behave unprofessionally, and because they waited out the clock and were able to get the end of the legal case overseen by a more sympathetic government. There were no guarantees the same thing would happen next time.

It's not clear exactly when Microsoft started to judge Office as being more important than Windows to their future, but certainly in the 2000s we saw the beginning of changes of attitude that made it clear Microsoft was trying to find a way forward that was less reliant on Windows. Office was the most obvious second choice – most corporate PCs run Office, as do a sizable number of non-corporate PCs. Even Macs run Office. Office had a good reputation, it was (and is) extremely powerful. And because of its dominance of the wordprocessing and spreadsheets market, the files it generated were themselves a form of lock-in. If you wanted to interact with another Word user, you needed to use Word. There were third party wordprocessors that tried to support Word's file format, but it turned out supporting the format was only half the problem: if your word processor didn't have the exact same document model that Word did, then it would never be able to successfully import a Word document or export one that would look the same in Word as it would in your third party wordprocessor.

But until 2006, Office's dominance due to file incompatibility wasn't certain. In 2000, Microsoft had switched to a more open file format, and in 2006, under pressure from the EU, had published the complete specification. Critics at the time complained it was too complicated (the entire format is 6,000 pages), but bear in mind this includes the formats for all applications under the Office umbrella.

Two decades later, compatibility from third party applications remains elusive, most likely because of those internal document model conflicts. But it wasn't clear in the early 2000s that even publishing the Office file formats wouldn't be enough to allow rivals to interoperate within the Office ecosystem.

The importance of UI balkanization

So, faced with the belief that third parties were about to create office clones that would cut a significant chunk of Microsoft's revenue, and knowing that they couldn't use the operating system any more to just force people to use whatever applications Microsoft wanted users to buy, Microsoft took an extreme and different approach – destroying the one other aspect of interoperability that is required for users to move from one application to another – familiarity.

As I said above, in the late 1990s, if you knew Word, you knew how to use any wordprocessor. If you knew Excel, and you knew about wordprocessing, you could use Word. The standardization of menus and toolbars had seen to that.

To kill the ability of customers to move from a Microsoft wordprocessor to a non-Microsoft wordprocessor, Microsoft needed to undermine that standardization. In particular, it needed a user interface where there was no standard, intuitive way to find advanced functionality. While introducing such a kludgy, unpleasant user interface was unpopular, Microsoft had the power to impose such a thing in the early 2000s, as its office suite was a near monopoly. Customers would buy Office with the ribbon anyway, because they didn't have any choice. And with the right marketing, Microsoft could even make it sound as if the changes were a positive.

Hence the Ribbon. Until you actually try to use it, it doesn't look unfriendly, making it very easy to market. And for, perhaps, the most popular wordprocessing features, it's no worse than a toolbar. But learning it doesn't help you learn the user interface of any other application. Anyone introduced to wordprocessing through the Ribbon version of Word will have no idea how to use LibreOffice, even if LibreOffice has a ribbon. The user interface will have to be relearned.

Note that Microsoft didn't merely introduce the Ribbon as an optional user interface. Firefox and Chrome, to this day, still have the ability to bring up a traditional menu in addition to their hamburger menu because they know end users benefit from it. It's just, inexplicably, hidden (use the ALT key!) But in Word, there is no menu, there's nothing to make it easier for end users to transition to the ribbon or keep doing things the way they always did, despite the ease with which Microsoft could have implemented that.

We forgot everything

Microsoft's success foisting the Ribbon on the world basically messed up user interfaces from that point onwards. With the sacred cow of interoperable user interfaces slaughtered, devs started to deprecate standardization and introduce “new” ways to do things that ignored why the old ways had been developed in the first place. Menus have been replaced with buttons, scrollbars have been replaced by... what the hell are those things... and there's little or no logic behind any of the changes beyond “it's new so it doesn't look old”. Many of the changes have been implemented to be “simpler”, but in most cases the aesthetic is all that's been simplified; finding the functionality a user wants is harder than ever before.

It would help if devs had realized at the time Microsoft had done this for all the wrong reasons. It's not as if most trust Microsoft or believe they do things for the right reasons.

I started watching a lot of videos on retrocomputing recently. Well, the era they call retro I call “when I learned what I know now”. The 1980s was a fun time, as far as computers were concerned. There was variety, and computer companies were trying new things.

The most jarring thing I watched, though, was a review of the Timex Sinclair 2068, essentially the US version of the Sinclair Spectrum, which – as you'd imagine from the subject – was a very American view of why that computer failed. And the person reviewing the 2068 felt it failed because it represented poor value compared to... the Commodore VIC 20?

Now I've spent some time thinking about it, I think I understand the logic. But it wasn't easy. You see, when I was growing up the schoolyard arguments were not about the ZX Spectrum vs the VIC 20, but its vastly superior sibling, the Commodore 64. And both sides had a point, or so it seemed at the time.

The principal features of the ZX Spectrum were:

  • A nice BASIC. That was considered kind of important then, even in a world where actually the primary purpose of the computer was gaming. Everyone understood that in order for people to get to the point they were writing games in the first place, the computer had to be nice to program.
  • 48k of RAM, of which 41-42k was available to programmers.
  • A fixed, single, graphics mode of 256x192, with each 8x8 pixel block allowed to use two colours picked from a palette of... I want to say 16 but I can't remember for sure.
  • An awful keyboard. There was a revision called the Spectrum+ that had a slightly better keyboard based on the Sinclair QL's (but not really like the QL's, the QL's had a lighter feel to it.)
  • A beeper type sound device, driven directly by the CPU
  • Loading and saving from tape.
  • A single crude expansion slot that was basically the Z80's pins on an edge connector.

The Commodore VIC 20 had 5k of RAM, 3.5k available. It had a single raw text mode, 22x24 IIRC, with each character position allowed to have two colours. It did allow characters to be user defined. BASIC was awful. Expansion was sort of better: it had a serial implementation of IEEE488 that was broken, a cartridge port, and a serial port. Like the Spectrum, it was designed to load and save programs primarily from tape. Despite the extra ports, it just wasn't possible to do 90% of the things a Spectrum could do, so I'm baffled the reviewer saw fit to compare the two. They were only similar in terms of price – and in the UK the VIC 20 was way cheaper than the Spectrum.

The Commodore 64, on the other hand, was, on paper, superior:

  • OK, BASIC wasn't. It was the same version as the VIC 20.
  • 64k of RAM. Now we're getting somewhere.
  • A mix of graphics and text modes, including a “better than ZX Spectrum” mode which used a similar attribute system for 8x8 blocks of pixels, but had a resolution of 320x200 and which supported sprites. And programmers could also drop the resolution to 160x200 and have four colours per 8x8 cell.
  • A great keyboard
  • A dedicated sound processor, the famous SID
  • Loading and saving from tape.
  • That weird serial implementation of IEEE488 that the VIC 20 had, with the bug removed... but with a twist.
  • Cartridge, and a port for hooking up a modem. And a monitor port. And, well, ports.

So if the C64 was so much technically better, why the schoolyard arguments? Other than kids “not knowing” because they didn't understand the technical issues, or wanting to justify their parents getting the slightly cheaper machine? Well, it was because the details mattered.

  • Both systems had BASIC, but Commodore 64 BASIC was terrible.
  • The extra 16k of RAM was a nice-to-have, but in the end both machines were in the same ballpark. (Oddly the machine in the UK considered to be superior to both, the BBC Micro, only had 32k.)
  • Programmers loved the 160x200 four-colour mode. It meant there was less “colour clash”, an artifact issue resulting from limiting the palette per character cell. But oddly, the kids were split on that: most preferred higher resolution graphics over fewer colour clash issues. So even though the Commodore 64 was superior technologically, it was encouraging programmers to do things that were unpopular. One factor there was that most kids were hooking the computer up to their parents' spare TV, which was usually monochrome.
  • The keyboard really didn't matter, to kids. Especially given the computer was being used to play games, and Sinclair's quirky keyword input system and proto-IDE was arguably slightly more productive for BASIC programming than a “normal” keyboard in a world full of new typists.
  • Both computers loaded and saved from tape, but the Spectrum used commodity cassette recorders and loaded and saved programs at a higher speed, around 1500bps vs 300bps.
  • The IEEE488 over serial thing was... just not under consideration. Floppy drives were an expensive luxury that didn't take off until the 16 bit era in the UK when it came to home computers. But, worse, the Spectrum actually ended up being the better choice if random access storage was important to you. Sinclair released a system called the ZX Microdrive, similar to the American stringy-floppy concept (except smaller! Comparable to 2-3 full size SD cards stacked on top of one another), where the drives and interface for the Spectrum came to less than 100GBP (and additional drives were somewhere in the region of 50GBP.) The Commodore floppy drives, on the other hand, cost 300-500GBP each. Worse, they were slower than they'd been on the VIC 20 (about as slow as the cassette drive no less!), despite the hardware bug being fixed, because the computer couldn't keep up with the incoming data.
  • Cartridge ports should also have been a point in Commodore's favour, but for some reason cartridges were very expensive compared to software on tape. (I didn't learn until the 2000s that cartridges were actually cheaper to make.)
  • The other ports were for things kids just weren't interested in. Modems? In Britain's metered phone call system they just weren't going to be used by anyone under the age of 25. Monitors? TVs are cheaper and you can watch TV shows on them!

Over time many of these issues were resolved. Fast loaders improved the Commodore 64 software loading times, though the Spectrum had them too. But in the mean time, the kids didn't see the two platforms as “Cheap Spectrum vs Technically Amazing C64”; they were seen as equals, and to be honest, I don't think it was completely unfair in that context that they were seen that way. There's no doubt the C64, with its sound and sprites, was the superior machine, but the slow cassette interface and the expensive and broken peripheral system undermined the machine. As did programmers using features the kids didn't like.

Go across the pond and, sure, nobody would compare the TS2068 with the C64. Americans weren't using tape drives with their C64s. But I'm still not sure why they'd compare the TS2068 to the VIC 20 either.

The Spectrum benefited from its fairly lightweight, limited spec. Not only did that let it undercut the more advanced C64 on price, it also meant it didn't launch with as many unsolvable hardware bugs. The result was that Sinclair and third parties could sell the add-ons needed to make the Spectrum equal or better its otherwise technically superior rivals, and the entire package still ended up costing less. In the mean time, the feature set on launch was closer to what the market – kids who just wanted a cheap computer to hook up to their parents' spare TV set to play games – wanted.

All of which said, the TS2068 probably didn't fail because Americans were comparing it to the VIC 20, so much as it being released late and the home computer market being already decided by that point. Word of mouth mattered and nobody would have been going into a computer store in 1984 undecided about what computer to buy. Timex Sinclair had already improved the TS2068 over the Spectrum by adding a dedicated sound chip, and could have added sprites, and maybe even integrated the microdrives into the system, and fixed the keyboard, and not added much to the cost (the microdrives were technologically simpler than cassette recorders, so I suspect would have cost under $10 each to add) and the system would still have bombed. It was too late, the C64 and Apple II/IBM PC dominated the popular and high ends of the US market respectively, there wasn't any space for another home computer.

Finally set up WriteFreely to do my long-form blogging, which, hopefully, will mean I can write longer stuff of the type most people will skip over. Once I figured out why it didn't work the first time, it seems to work fine. My own platform is one I want to share with friends, so there are multiple complications: it's behind a reverse proxy, and I'm using Keycloak to supply SSO.

The only issue I have with what I've configured is that registration is still a “process”: you don't automatically get dropped into the system the first time you log in with OpenID Connect.

For those interested, my Keycloak OpenID-Connect configuration required the following:

[app]
...
single_user           = false
open_registration     = true
disable_password_auth = true

[oauth.generic]
client_id          = (client id from Keycloak)
client_secret      = (Client secret from Keycloak)
host               = https://(keycloak prefix)/realms/(realm)
display_name       = Virctuary Login
callback_proxy     = 
callback_proxy_api = 
token_endpoint     = /protocol/openid-connect/token
inspect_endpoint   = /protocol/openid-connect/userinfo
auth_endpoint      = /protocol/openid-connect/auth
scope              = profile email
allow_disconnect   = false
map_user_id        = preferred_username
map_username       = preferred_username
map_display_name   = name
map_email          = email

In the above, (client id) and (client secret) are from the configuration I set up in Keycloak's client configuration for WriteFreely. For the Keycloak prefix, if you haven't reverse proxied the /auth part of Keycloak URIs away, then you'll need that part to look something like domain/auth, otherwise just domain, eg:

host = https://login.example.social/auth/realms/example/
host = https://login.example.social/realms/example/

In terms of use, I'm still getting used to WriteFreely. The formatting takes some getting used to: it's a mixture of raw HTML (the fixed font blocks above are in HTML <PRE> tags) and Markdown. In theory Markdown supports fixed font blocks too, but I can't get it to work. The fact you can always resort to raw HTML is good though, and only an issue if you actually need to use < anywhere...

One other thing: for some reason WriteFreely's installation instructions include this block in their example reverse proxy configuration:

location ~ ^/(css|img|js|fonts)/ {
    root /var/www/example.com/static;
    # Optionally cache these files in the browser:
    # expires 12M;
}

This breaks everything. Either remove it, or introduce some smart caching for those paths. Another default configuration snafu is that the built-in configurator has WriteFreely listening on localhost if you tell it you're using a reverse proxy, but there's absolutely no reason for it to assume the reverse proxy is on the same computer. So when you edit your config afterwards, change “bind” from localhost to [::] if you're using an external reverse proxy.