Rustaceans cometh

In the mid-to-late twentieth century, I expected software to become a significant driver of wealth creation. Better automation, software-configured products, advanced business management systems, and the internet looked as if they would supercharge productivity and make everyone richer.

I can still feel the excitement of building customisable features into new products. Hard-wired functions became variables a user could tweak. If the user wanted to change a setting, they simply jacked a portable computer into the product and changed it. More importantly, settings could be changed without exposing sensitive components to the hands of a technician. “Service-induced failure” was the popular euphemism for “the technician broke it.” A critical goal was to eliminate anything that required technicians to open up a product after it had left the factory. “After-sales service” was costing our customers far too much in dollars and lost time.

Back then, software bugs were things to be eliminated. A bug that found its way into the market was a genuine threat: Customers would not put up with buggy embedded software, any more than they would tolerate radio equipment that didn’t work or cars that kept breaking down.

Computer Apps Got a Free Pass

By the mid-1980s, software companies were already selling desktop computer apps with no warranty: If your new word processor or spreadsheet was full of bugs, too bad. Computer application software manufacturers managed to exempt themselves from the rules that applied to other manufacturers. Depressingly, as computer systems became ever more complex, software bugs became more common.

As a hardware designer, part of my job was to keep up with developments in quality control. As software became more important, I often asked colleagues and others about software quality management. How did they assure themselves that the software they were writing was bug-free and would function as intended? Without exception, everyone I spoke to seemed to equate testing with quality control.

Everything I have learned about quality control contradicts that idea. No amount of testing will fix a badly designed product. This was one of the most important lessons from Toyota’s disruption of the US car market during the 1960s and 70s. Cars like the Toyota Corolla were designed from the ground up to do their intended job without breaking down or falling apart. Most importantly, Toyota taught manufacturing engineers that once you’ve sold a product, it must stay sold. Customers don’t want to bring it back next week for an “upgrade”. It’s hard enough to get them to bring it in for regular maintenance such as oil changes. Testing was only one part of a comprehensive waste-elimination strategy designed to save money by preventing quality problems such as assembly errors and warranty failures.

Just Testing Is Not Quality Control

The “testing-is-quality-control” idea completely ignores the way embedded software was designed in the early 1980s. We used techniques such as programming standards, precise data structures, and clearly defined product specifications to eliminate software bugs. Those projects were fairly simple two-layer systems: Assembly code produced with a macro-assembler, running on a stand-alone processor with hard-wired internal code.

Larger systems need more advanced quality control tools, in addition to the basics. Computer applications keep getting bigger and more complex. In 2025, global IT spending reached US$5.6 trillion, almost five percent of all the wealth created that year. With that much money at stake, I would have expected a constant stream of press releases hyping the latest all-conquering quality control system for software.

Still, when I ask software engineers, or search the internet, for information on the latest software quality control techniques, I find nothing new. Most computer failures don’t even make the news. And for those that do, the offending manufacturers get brownie points for rolling out a “patch”. The higher the stakes, it seems, the less attention is paid to the consequences of failure. Software manufacturers stay in business. Programmers still get paid. I’m getting the impression that software professionals no longer see bugs as “failures”.

Meanwhile, the human cost of bad software is adding up. The British Post Office’s Horizon point-of-sale system is the most egregious documented example, with software bugs leading to wrongful imprisonments and suicides over a sixteen-year period. Robert N. Charette reported last year that Horizon was still in use.

Where Are the Good Apps?

Computer applications should be able to deliver real economic and social benefits.

That will not happen while software manufacturers are being paid to roll out systems containing hundreds of thousands of bugs, and then getting paid again for “upgrades” that contain most of the old bugs plus a swarm of new ones.

Organised criminals need only bribe the right bureaucrat to gain control of AI systems such as Claude Mythos, which can find and exploit security bugs in any new online computer system within seconds of it going online.

The situation has been so grim that I started thinking about a post-internet world in which computers would be nothing more than toys for bureaucrats and criminals. How long, I wondered, until smart people unplug and go offline?

Rust Awakens

Imagine my delight when I stumbled upon an April 2026 blog post that totally transformed my thinking. Written by Evan Johnson and Justin Cappos, computer science professors at New York University, the post drew my attention to the kind of quality tools I had been looking for.

According to these professors, a memory-safe computer language such as Rust can eliminate memory management bugs, including some that enable cyber-criminals to break into online systems. Johnson & Cappos also talked up “formal verification”. The goal of formal verification is to prove, in the mathematical sense, that a block of code will work as intended, and only as intended.* That’s built-in quality.
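To make the memory-safety point concrete, here is a minimal sketch of my own (not from the professors’ post) showing Rust’s compiler refusing a use-after-move, one of the classic memory management blunders:

```rust
fn main() {
    let greeting = String::from("hello");
    let stolen = greeting; // ownership of the String moves to `stolen`

    // Uncommenting the next line would be a use-after-move. rustc
    // rejects it at compile time (error[E0382]), so this class of
    // memory bug never ships to a customer:
    // println!("{greeting}");

    println!("{stolen}"); // fine: `stolen` is now the sole owner
}
```

The compiler, not a test suite, enforces the rule: code that could touch moved or freed memory simply does not build.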

Why Not Use AI?

AI manufacturers have been hyping their products’ ability to find software bugs automatically and propose solutions. The trouble is that every bug fix needs human input. While the bugs are being fixed, AI cyber-criminals are doing their thing.

Johnson & Cappos point out that: “AI bug scanners treat the symptom, not the cause.” That’s not new. Quality managers and manufacturing engineers have known that since Deming, Ishikawa, and their contemporaries explained why effective quality control saves money, increases profits, and produces better grocery-getters.**

I was rapt to read that some software companies seem to be serious about software quality. According to Johnson & Cappos, Amazon Web Services, Cloudflare, and Google are using formal verification. DARPA and Mozilla, among others, are using Rust. Bug-free software is the only realistic response to AI cyber-crime, and it will eliminate wasteful bugs and software “upgrades”. Software doesn’t wear out. “Software maintenance” is a euphemism for “fixing the bugs we have found since the last upgrade.”
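For a taste of what formal verification looks like in practice, here is a sketch using Kani, the open-source model checker AWS built for Rust. The clamp_reading function and its safe range are invented for this illustration; only the #[kani::proof] harness and kani::any() are real Kani machinery:

```rust
/// Clamp a raw sensor reading into a safe operating range.
/// (A hypothetical function, invented for this sketch.)
fn clamp_reading(raw: i32) -> i32 {
    raw.clamp(-100, 100)
}

fn main() {
    println!("{}", clamp_reading(250)); // prints 100
}

// Proof harness: checked with `cargo kani`, not `cargo test`.
#[cfg(kani)]
#[kani::proof]
fn reading_always_in_range() {
    // `kani::any()` stands for every possible i32, not a sample.
    let raw: i32 = kani::any();
    let clamped = clamp_reading(raw);
    // Kani proves this assertion for all 2^32 inputs, or produces a
    // concrete counterexample. A passing proof is a mathematical
    // guarantee, not a batch of spot checks.
    assert!((-100..=100).contains(&clamped));
}
```

That quantifier is the whole difference: a test checks the inputs someone thought of; the proof covers every input the type allows.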

Formal verification and memory-safe languages won’t solve all the software industry’s quality problems. They are small steps in the right direction.

Quality control can be challenging. Getting a product right the first time takes more effort than merely throwing a prototype together. However, project teams that take too long to build a prototype usually fail. It’s a difficult trade-off, but it’s worth the effort. Increased profitability is pretty much a side-effect. The big payoff is happier customers and a better reputation. What manufacturer would not want a reputation like Toyota’s?

As Johnson & Cappos wrote: “The lasting solution is software that doesn’t produce vulnerabilities in the first place.”


We are technorg


Feature Image: Rustaceans Cometh, starring Ferris, by Karen Rustad Tölva, with background by Peter Schulz on Unsplash.

* A system may need several levels of provability. The kspline, for example, is a provable algorithm. Practical implementation (in software) requires a separate level of proof.

** It’s hard to find a better grocery-getter than the Toyota Corolla.
