The release of Intel's 8086 microprocessor in 1978 was a watershed moment for personal computing. The DNA of that chip is likely at the center of whatever computer--Windows, Mac, or Linux--you're using to read this, and it helped transform Intel from merely one of many chip companies to the world's largest.
What's most surprising about the tremendous success of the 8086, though, is how little people expected of it when it was first conceived. The history of this revolutionary processor is a classic tale of how much a small team of bright engineers can accomplish when they're given the freedom to do their jobs in innovative ways.
When development of the 8086 began in May 1976, Intel executives never imagined its spectacular impact. They saw it as a minor stopgap project. They were pinning the company's hopes on a radically different and more sophisticated processor called the 8800 (later released as the iAPX 432). In an era when most chips still used 8-bit data paths, the 8800 would leapfrog all the way up to 32 bits. Its advanced multitasking capabilities and memory-management circuitry would be built right into the CPU, allowing operating systems to run with much less program code.
But the 8800 project was in trouble. It had encountered numerous delays as Intel engineers found that the complex design was difficult to implement with then-current chip technology. And Intel's problems didn't stop there--it was being outflanked by Zilog, a company started by former Intel engineers. Zilog had quickly captured the midrange microprocessor market with its Z80 CPU. Released in July 1976, it was an enhanced clone of Intel's successful 8080--the processor that had effectively launched the personal-computer revolution. Intel had yet to come up with an answer to the Z80.
Enter the Architect
Picking Stephen Morse to design the new chip was a surprising choice: He was a software engineer. Previously, CPU design at Intel had been the domain of hardware engineers alone. "For the first time, we were going to look at processor features from a software perspective," says Morse. "The question was not 'What features do we have space for?' but 'What features do we want in order to make the software more efficient?'" That software-centric approach proved revolutionary.
Although the 8086 was Morse's pet project, he didn't work alone. Joining Morse's team were other Intel employees, including Bill Pohlman, Jim McKevitt, and Bruce Ravenel, all of whom were essential in bringing the 8086 to market in the summer of 1978.
Beyond laying down some basic requirements--that the 8086 be compatible with software written for the popular 8080 chip and that it be able to address 128KB of memory--Intel leadership stayed out of Morse's way. "Because nobody expected the design to live long, no barriers were placed in my way, and I was free to do what I wanted," he says.
Lackluster Release
Upon its release, Morse's creation hardly took the computing world by storm. The midrange personal-computer market was saturated with cookie-cutter business machines based on the Z80 and running CP/M, the OS du jour of the late 1970s. The 8086 first appeared in a few unremarkable PCs and terminals. It gained a bit of a foothold in the portable computer market (in the form of the 80C86). Eventually it found acceptance in the microcontroller and embedded-applications market, most notably in the NASA Space Shuttle program, which uses 8086 chips to control diagnostic tests on its solid-rocket boosters to this day. (The space agency buys electronic relics on eBay to scavenge for the processors.)
In March 1979, Morse left Intel. Then a series of seemingly unremarkable events conspired to make the 8086 an industry standard.
A few weeks after Morse's departure, Intel released the 8088, which Morse calls "a castrated version of the 8086" because it narrowed the 8086's 16-bit external data bus to 8 bits. Internally it was still a 16-bit processor, but it sent each 16-bit word out over the bus in two 8-bit cycles, making it compatible with the cheaper 8-bit support chips and systems that were still common.
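The difference between the two bus widths can be sketched as a toy illustration in Python. This models only the order of byte transfers, not real bus timing or signaling, and the function names are invented for this example:

```python
def bus_cycles_8088(word: int) -> list[int]:
    """Split a 16-bit word into the two 8-bit transfers an 8088
    performs on its 8-bit bus: low byte first, then high byte."""
    assert 0 <= word <= 0xFFFF
    low = word & 0xFF          # bits 0-7 go out in the first cycle
    high = (word >> 8) & 0xFF  # bits 8-15 follow in the second cycle
    return [low, high]

def bus_cycles_8086(word: int) -> list[int]:
    """An 8086's full 16-bit bus moves the same word in one cycle."""
    assert 0 <= word <= 0xFFFF
    return [word]

print(bus_cycles_8088(0x1234))  # two cycles: [0x34, 0x12]
print(bus_cycles_8086(0x1234))  # one cycle:  [0x1234]
```

The trade-off IBM later exploited is visible here: the 8088 takes twice as many bus cycles per word, but everything on the other side of the bus can be an inexpensive 8-bit part.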
Two years later, IBM began work on the model 5150, the company's first PC to consist only of low-cost, off-the-shelf parts. It was a novel concept for IBM, which previously emphasized its proprietary technology to the exclusion of all others.
Obviously, an off-the-shelf system demanded an off-the-shelf microprocessor. But which to choose? IBM decided early on that its new machine required a 16-bit processor, and narrowed the choices down to three candidates: the Motorola 68000 (the powerful 16-bit processor at the heart of the first Macintosh), the Intel 8086, and its "castrated" cousin, the Intel 8088.
According to David J. Bradley, an original member of the IBM development team, the company eliminated the Motorola chip from consideration because IBM was more familiar and comfortable with Intel processors. Tipping the scales was the fact that Microsoft had a ready and working BASIC interpreter available for the 8086 and, since it shared the same base code, the 8088.
IBM then had to choose between the 8086 and the 8088. Ultimately, the decision came down to the simple economics of reducing chip count. IBM selected the 8088, a decision that allowed the company to build cheaper machines because it could use fewer ROM modules and less RAM, Bradley says.
In a sense, though, it didn't matter which of the Intel chips IBM chose. Both were built on the same underlying 8086 design created by Stephen Morse.