How Intel Unintentionally Became the CPU Leader

We often imagine that major technological successes are born from a clear strategic vision, flawlessly executed with tactical precision. But in reality, they are frequently the result of improvisation, contingency, and even accidents. 

Intel is a striking example.


In 1968, Gordon Moore, Robert Noyce (the co-inventor of the integrated circuit), and Andy Grove left Fairchild Semiconductor to found Intel - continuing the long tradition of "betrayals" that shaped the early semiconductor industry.

Their initial vision was to leverage their expertise in integrated circuits and increasing chip density to dominate the emerging DRAM memory market.

The notion of creating a general-purpose microprocessor was not central to Intel’s strategy - so much so that it wasn't even mentioned in Intel’s 1971 IPO filings, even though the 4004 was only months away from launch.

Andy Grove, Robert Noyce, and Gordon Moore at Intel's 10th anniversary in 1978

The 4004

The story of the first microprocessor - the Intel 4004 - begins not inside Intel, but in Japan. Busicom, a calculator manufacturer, launched what came to be known as the "Busicom Project," aiming to design a chipset for its new 141-PF calculator using large-scale integration (LSI) technology.

In April 1968, Busicom engineer Masatoshi Shima, under the supervision of Tadashi Tanba, began work on a seven-chip custom LSI system that included a three-chip CPU, with specialized components like adders, multipliers, and ROM. The architecture was specifically tailored for decimal computation.

Busicom 141-PF

Seeking to expand beyond calculators into teller machines and billing systems, Busicom decided to generalize the design and approached U.S. companies - including Mostek and Intel - for manufacturing.

Intel secured the contract thanks to its pioneering silicon gate MOS process. When Shima traveled to Intel in June 1969 to present the schematics, Intel engineers, lacking experience in logic design, requested simplification.

Inspired by Sharp's Tadashi Sasaki, Intel's Ted Hoff proposed a radical reduction to just four chips, including a single-chip CPU. While Hoff's concept lacked detailed technical specifications, Shima developed the architecture and, together with Intel’s Stanley Mazor, translated it into a working design.

Busicom 141-PF's motherboard

Shima’s contributions were pivotal. He integrated a 10-bit static shift register for I/O, enhanced the instruction set, refined RAM behavior, optimized data flow, and defined a decimal-centric functional specification.

By late 1969, Intel and Busicom had finalized the four-chip architecture. However, when Shima returned in early 1970, he found the project stalled.

Hoff had moved on, and Federico Faggin - recently hired from Fairchild Semiconductor, where he had led the development of silicon gate MOS - took over. Faggin brought the project to life, with Shima working alongside him at Intel from April to October 1970.

Eventually, Busicom sold the rights to the 4004 to Intel, retaining only rights for its use in calculators.

The 4004

The 8008

The story of the 8008 - Intel’s second microprocessor - is just as serendipitous.

In 1969, Computer Terminal Corporation (CTC) was developing a successor to its Datapoint 3300 terminal and sought to reduce heat and power consumption by implementing a CPU on a single chip. CTC co-founder Gus Roche approached Intel with a design, but Bob Noyce hesitated.

He warned that while a processor might sell one chip per machine, memory chips could sell by the hundreds. Moreover, Intel worried that offering its own CPU could alienate customers using Intel memory.

Datapoint 3300

Despite reservations, Intel signed a $50,000 development contract. Around the same time, Stan Mazor of Intel and a CTC engineer discussed a simpler, programmable CPU instead of a fixed-function logic chipset.

The result was the 1201 design, later built by both Intel and Texas Instruments (TI). TI’s samples were buggy, and Intel’s were delayed. CTC, running out of time, built the Datapoint 2200 using discrete TTL logic instead.

When the 1201 finally arrived in late 1971, CTC had already moved on and declined to pay the development fee - leaving Intel with the rights.

Datapoint 2200

Intel renamed the chip the 8008 and launched it in April 1972, pricing it at $120 (about $900 in today’s dollars).

Though the chip was unrelated to the 4004 in design, the name positioned it as a logical successor.

The 8008 eventually found success, leading to the 8080 and the x86 architecture.

8008 Architecture

Early adopters included Bill Pentz's team at California State University, Sacramento, which developed the Sac State 8008 microcomputer - possibly the first complete system of its kind, featuring a PROM-based disk operating system, color display, modem, printer, and keyboard.

In the UK, EMI’s team under Tom Spink used a pre-release 8008 to prototype a sophisticated microcomputer with fault recovery, a direct screen printer, and a PROM-based OS, but management canceled it before commercialization. 

The 8008 powered early commercial non-calculator microcomputers like the SCELBI (U.S.), Micral N (France), and MCM/70 (Canada). It also appeared in Hewlett-Packard’s 2640 terminal family. In 1973, Intel released INTERP/8, a simulator written by Gary Kildall in FORTRAN IV, enabling developers to test 8008 software without hardware.
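To give a sense of what such an instruction-set simulator does, here is a minimal, purely illustrative fetch-decode-execute loop in Python for a made-up three-instruction machine. It is a sketch of the general technique only, not a description of INTERP/8 or of the real 8008 instruction set.

    # A toy instruction-set simulator: fetch, decode, execute.
    # The three instructions (LOAD, ADD, PRINT) and the single accumulator
    # are invented for this example; they are not the 8008's instruction
    # set, and this is not how INTERP/8 was actually written.

    def run(program):
        acc = 0                            # accumulator register
        pc = 0                             # program counter
        while pc < len(program):
            op, arg = program[pc]          # fetch
            if op == "LOAD":               # decode + execute
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "PRINT":
                print(acc)
            pc += 1                        # step to the next instruction
        return acc

    # "Run" a tiny program with no hardware at all: load 2, add 3, print 5.
    run([("LOAD", 2), ("ADD", 3), ("PRINT", 0)])

The principle is the one INTERP/8 offered 8008 developers: software stands in for the chip, so programs can be written and tested before the hardware is in hand.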

Micral N

The Grand Future Plan: the iAPX 432

By the mid-1970s, Intel recognized the transformative potential of microprocessors. However, the 4004, 8008, and 8080 were all shaped by external constraints or derived from customer designs.

None represented a clean-slate, forward-looking vision.

To address this, Intel launched an ambitious internal project in 1976: the iAPX 432.

iAPX 432

Dubbed a "micromainframe," the 432 was revolutionary in scope. It abandoned traditional register-based architectures in favor of a stack machine model, and was designed from the ground up for high-level languages - especially Ada.

Its features included hardware support for object-oriented programming, garbage collection, multitasking, and memory protection. It aimed to dramatically reduce the amount of code required to build modern systems.
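The stack machine model is easiest to see side by side with the register style it replaced. The sketch below is generic, illustrative Python, not actual iAPX 432 code or its instruction encoding: in the register style, intermediate results live in named registers; in the stack style, operators implicitly pop their operands and push their result, which maps naturally onto compiled high-level-language expressions.

    # Evaluating a*b + c two ways (illustration only, not iAPX 432 code).

    def register_style(a, b, c):
        # Register machine: the compiler assigns intermediates to registers.
        r1 = a * b
        r2 = r1 + c
        return r2

    def stack_style(a, b, c):
        # Stack machine: push operands; each operator pops its inputs
        # and pushes its result, with no programmer-visible registers.
        stack = []
        stack.append(a)
        stack.append(b)
        stack.append(stack.pop() * stack.pop())   # MUL
        stack.append(c)
        stack.append(stack.pop() + stack.pop())   # ADD
        return stack.pop()

    assert register_style(2, 3, 4) == stack_style(2, 3, 4) == 10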

Intel saw the iAPX 432 as the future. The 8086 project, launched a year later, was considered a stopgap - an evolutionary path that would bridge the gap until the 432 matured.

But history had other plans.

Released in late 1981, the iAPX 432 proved to be a technical behemoth - and a commercial failure.

Its complexity, microcode overhead, and modest performance made it impractical for most use cases. Critics described it as an over-engineered monster, an attempt to include every fashionable computing concept without a coherent architectural vision.

Meanwhile, IBM selected Intel's 8088 - a version of the 8086 with an 8-bit external bus - for its new personal computer, the IBM PC 5150. This single decision would define Intel's future.

What Intel viewed as a temporary, backward-compatible architecture became the foundation of the most successful processor family in computing history.

IBM PC 5150

Ironically, the architecture Intel once considered obsolete became the cornerstone of its dominance, while the grand vision of the iAPX 432 faded into obscurity.


This is a far cry from the linear success stories we are so often told:

  • Intel aimed at the memory market and even feared losing customers by competing with them on processors.

  • The first microprocessor (the 4004) grew out of a customer order - and out of buying back the commercial rights from that customer.

  • The second processor (the 8008) also came from a customer order, one the client ultimately walked away from, leaving the rights to Intel.

  • The grand vision of a 100% in-house Intel CPU, the iAPX 432, was a crushing failure.

  • Triumph came from a product line that was expected to fade away; the success of the PC and its compatibles turned it into a dominant product - and a cash machine - for decades.

 


Guillaume Crinon

Director, IoT Business Strategy


Nvidia comes next 😀 Hadn't seen Bitcoin coming...

Bruno MUSSARD

IoT Security Standards & Regulations at STMicroelectronics


A great narrative that debunks the myth of perfect commercial & technology strategy. Thanks for sharing.

Eduard Drusa

CMRX real-time OS | RTOS Innovator | Cybersecurity for Embedded Software | Tinkerer | Vintage Cars | Vintage Computers | Software is not a crankshaft


> "Critics described it as an over-engineered monster, an attempt to include every fashionable computing concept without a coherent architectural vision." I think this is a common trait of pretty much every Intel in-house CPU design. Even the x86 itself became something that could be described as this. The result is that x86 carries some ancient stuff that is rarely or never used and bothers everyone with it (e.g. segmentation). i960 could be seen as a major exception to this rule if not for binary incompatible sub-variants which only supported part of the spec. Something recently seen with AVX512 and its successors. Even ARM understood that exceptional flexibility is not worth breaking binary compatibility and after having very flexible cores where you could stack whatever you wanted (remember ARMv6TDMIJ et al?) they consolidated the offering to just a handful of Cortex cores with predictable features and binary compatibility. Once I've been told that if ARM Cortex did not happen, Atmel could become the dominant 32-bit architecture for embedded.

Stéphane Dalbera

Founder & Manager of Atopos (MoCap & 3D CGI)


And since I’m finishing on the IBM PC, a brief look at the OS side, which would be the key to Microsoft’s fortune.

