The late Arvind was my occasional teaching colleague at MIT. For about the last ten years I have been part of the teaching staff for the undergraduate computer systems class (which used to be called 6.033, then semi-pointlessly renumbered to 6.1800). For at least a couple of years, Arvind was likewise on the teaching staff for that class. When we overlapped, we had the occasional exchange about computer systems or how best to teach the students.
However, my clearest memory of Arvind was decades further back, when he was teaching the graduate computer architecture class, and I was one of his students. One of his memorably confident assertions in that class was that the Intel instruction-set architecture was dead. An “architecture” in this sense is a specification of what a particular “family” of different computers share, so that the same programs can run on all of them. The models of the family may differ in size or speed, but the common architecture means that a program that runs on one will run on the others as well.
At the point where he made this claim, in 1984 or 1985, Intel was by far the dominant commercial microprocessor architecture. Predicting its demise was a clear attention-grabber. It was also wrong.
The Intel architecture is still with us some 40 years later (cue Monty Python: “I’m not quite dead yet!”). However, my point here isn’t that he made a bold prediction that was spectacularly incorrect. Indeed, I think he was right from a strictly technical perspective. In terms of what was understood at the time and sensible technical analysis, Intel’s architecture was a hopeless mess in ways that seemed impossible to fix. He explained that clearly to his audience of eager young computer science students, and it all made sense at the time.
What was missing from that analysis was the significance of non-technical factors. There was, and still is, an enormous amount of capital associated with the Intel architecture. The Intel architecture was also impressively profitable (it’s not as profitable now). That capital and those profits could fund substantial spending on smart people to find ways to stave off technical demise. When billions of dollars are at stake, as they certainly were at Intel, it’s remarkable how much money can be found and applied to solve difficult problems.
Accordingly, whenever I encounter some kind of technical competition today, I remember to also consider the economic aspects. It’s rarely enough to observe that an incumbent has a conspicuously inferior technology. It’s important to understand what other moves might be possible: whether the incumbent could pivot to the new technology, or could spend large sums of money on fixing their inferior technology. Even in the 1980s, semiconductor manufacturing was already one of the most capital-intensive businesses in the world. Indeed, building new generations of semiconductor manufacturing capacity appears to be more constrained by financial issues than by technology per se.
A purely technical analysis of competitive position may be reasonably accurate in situations where there is no dominant player, or where the industry is not very capital intensive. But in Intel’s situation, where the industry was highly capital intensive with a single dominant vendor, the technical issues were not primary. Instead, the dominant player could invest huge sums – which were still only a small fraction of the overall capital involved – to retain their competitive standing, despite an apparently inferior technology base.
Of course, in some ways all that work has just been deferring the inevitable. Intel chips are not relevant to mobile devices, for reasons that were well understood even 40 years ago, and other well-capitalized companies like Arm have sprung up to take those markets. The scale of the mobile computing market dwarfs that of larger machines, so Intel is now a factor only in a smaller part of the computing market. Likewise, Nvidia dominates in the GPUs that are most relevant to modern AI, and again Intel’s weaknesses in relevant dimensions were apparent even 40 years ago.
But the company and its primary architecture are not gone. Considering Arvind’s prediction of the mid-80s, it’s striking that the Intel architecture outlived him – a cautionary tale about the potential superiority of capital over technical concerns.