4. Computer Evolution
• Manual: human-powered simple calculating device
• Mechanical: calculating device with a mechanical process, though still human powered
• Electro-Mechanical: mechanical calculating machine powered by electric motors
• Electronic: fully electronic computer
5. Earliest Computer
• Original calculations were computed by humans, whose job title was "computer".
• These human computers were typically engaged in the calculation of mathematical expressions.
• The calculations of this period were specialized and expensive, requiring years of training in mathematics.
• The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century.
6. Tally Stick
A tally stick was an ancient memory aid device to record and document
numbers, quantities, or even messages.
The original credit cards
7. Abacus
Invented in Babylonia around 2400 B.C.
Used to perform basic arithmetic operations
The form we're familiar with was first used in China in around 500 B.C.
9. Napier's Bones
Invented by John Napier; published in 1617
Allowed the operator to multiply, divide, and calculate square and cube roots by moving the rods around and placing them in specially constructed boards
11. Slide Rule
• Invented by William Oughtred in 1622
• Based on Napier's idea about logarithms
• Used primarily for multiplication, division, roots, logarithms, and trigonometry
• Not normally used for addition or subtraction
13. Mechanical Computer
• A mechanical computer is a computer built from mechanical components such as levers and gears rather than electronic components. The most common examples are adding machines and mechanical counters, which use the turning of gears to increment output displays. More complex examples could carry out multiplication and division (Friden used a moving head which paused at each column) and even differential analysis. One model, the Ascota 170 accounting machine sold in the 1960s, calculated square roots.
• Mechanical computers can be either analog, using smooth mechanisms such as curved plates or slide rules for computations, or digital, which use gears.
14. Pascaline
Invented by Blaise Pascal in 1642
It was limited to addition and subtraction
It was too expensive
16. Jacquard Loom
Mechanical loom invented by Joseph Marie Jacquard and patented in 1804
An automatic loom controlled by punched cards
18. Arithmometer
• A mechanical calculator invented by Thomas de Colmar in 1820
• The first reliable, useful, and commercially successful calculating machine
• The machine could perform the four basic arithmetic functions
• The first mass-produced calculating machine
19. Difference Engine and Analytical Engine
An automatic, mechanical calculator designed to tabulate polynomial functions
Invented by Charles Babbage (the "Father of the Computer") in 1822 and 1834
The first mechanical computer
Per Georg Scheutz's third difference engine
21. Ada Lovelace
The first computer programmer
In 1840, Augusta Ada King, Countess of Lovelace (née Byron), suggested to Babbage that he use the binary system
She wrote programs for the Analytical Engine
25. Electro-Mechanical Computer
• Early electrically powered computers constructed from switches and relay logic rather than vacuum tubes (thermionic valves) or transistors (from which later electronic computers were constructed) are classified as electro-mechanical computers. These varied greatly in design and capabilities, with some later units capable of floating-point arithmetic. Some relay-based computers remained in service after the development of vacuum-tube computers, where their slower speed was compensated for by good reliability. Some models were built as duplicate processors to detect errors, or could detect errors and retry the instruction. A few models were sold commercially with multiple units produced, but many designs were experimental one-off productions.
26. Tabulating Machine
Invented by Herman Hollerith in 1890 to assist in summarizing information and accounting
A mechanical tabulator based on punched cards
Capable of tabulating statistics and recording or sorting data
Used in the 1890 U.S. census
Hollerith's Tabulating Machine Company later became International Business Machines (IBM)
29. Z1
• The first freely programmable computer.
• Created by Konrad Zuse in Germany from 1936 to 1938.
• Programming the Z1 required the user to insert punch tape into a punch tape reader, and all output was also generated through punch tape.
30. Harvard Mark 1
• Also known as the IBM Automatic Sequence Controlled Calculator (ASCC).
• Conceived by Howard H. Aiken and built by IBM; presented to Harvard in 1944.
• One of the first large-scale electro-mechanical computers.
33. Electronic Digital Computer
• A digital electronic computer is a computer that is both electronic and digital. Examples include the IBM PC, the Apple Macintosh, and modern smartphones.
• When computers that were both digital and electronic appeared, they displaced almost all other kinds of computers, including analog and mechanical computers.
34. Atanasoff-Berry Computer (ABC)
Purely electronic circuit elements soon
replaced their mechanical and
electromechanical equivalents, at the same
time that digital calculation replaced
analog. Machines such as the Z3, the
Atanasoff–Berry Computer, the Colossus
computers, and the ENIAC were built by
hand, using circuits containing relays or
valves (vacuum tubes), and often used
punched cards or punched paper tape for
input and as the main (non-volatile) storage
medium.
Add-subtract module
35. ENIAC
• ENIAC stands for Electronic Numerical
Integrator and Computer.
• It was the first electronic general
purpose computer.
• Completed in 1946.
• Developed by John Presper Eckert
and John W. Mauchly.
37. UNIVAC I
• The UNIVAC I (UNIVersal Automatic
Computer 1) was the first commercial
computer.
• Designed by J. Presper Eckert and
John Mauchly.
38. EDVAC
• EDVAC stands for Electronic Discrete Variable Automatic Computer.
• One of the earliest stored-program computers.
• Described in John von Neumann's 1945 report First Draft of a Report on the EDVAC; completed in 1949.
• It has a memory to hold both a stored program and data.
39. Osborne 1
• The first portable computer.
• Released in 1981 by the Osborne
Computer Corporation
40. The First Computer Company
• The first computer company was the
Electronic Controls Company.
• Founded in 1949 by J. Presper Eckert
and John Mauchly.
45. First Generation
• The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms.
• They were very expensive to operate, and in addition to using a great deal of electricity, they generated a lot of heat, which was often the cause of malfunctions.
• First-generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time.
• Input was based on punched cards and paper tape, and output was displayed on printouts.
46. Second Generation
• Transistors replaced vacuum tubes and ushered in the second generation of computers.
• One transistor replaced the equivalent of 40 vacuum tubes, allowing computers to become smaller, faster, cheaper, more energy-efficient, and more reliable.
• They still generated a great deal of heat that could damage the computer.
• Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words.
• Second-generation computers still relied on punched cards for input and printouts for output.
• These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology.
47. Third Generation
• The development of the integrated circuit was the hallmark of the third generation of computers.
• Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
• Much smaller and cheaper compared with second-generation computers.
• It could carry out instructions in billionths of a second.
• Users interacted with third-generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory.
• Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
48. Fourth Generation
• The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip.
• As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet.
• Fourth-generation computers also saw the development of GUIs, the mouse, and handheld devices.
49. Fifth Generation
• Based on Artificial Intelligence (AI).
• Still in development.
• The use of parallel processing and superconductors is helping to make artificial intelligence a reality.
• The goal is to develop devices that respond to natural-language input and are capable of learning and self-organization.
• Some applications, such as voice recognition, are already in use today.
#3:A computer is an electronic machine that accepts information (data), processes it according to specific instructions, and provides the results as new information
A computer is a digital electronic machine that can be programmed to carry out sequences of arithmetic or logical operations (computation) automatically. Modern computers can perform generic sets of operations known as programs. These programs enable computers to perform a wide range of tasks.
According to the Oxford English Dictionary, the first known use of computer was in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. During the latter part of this period women were often hired as computers because they could be paid less than their male counterparts.[1] By 1943, most human computers were women.[2]
The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine".[3]
#5:The term "computer", in use from the early 17th century (the first known written reference dates from 1613),[1] meant "one who computes": a person performing mathematical calculations, before electronic computers became commercially available. Alan Turing described the "human computer" as someone who is "supposed to be following fixed rules; he has no authority to deviate from them in any detail."[2] Teams of people, often women from the late nineteenth century onwards, were used to undertake long and often tedious calculations; the work was divided so that this could be done in parallel. The same calculations were frequently performed independently by separate teams to check the correctness of the results.
Since the end of the 20th century, the term "human computer" has also been applied to individuals with prodigious powers of mental arithmetic, also known as mental calculators.
Origins in sciences
Astronomers in Renaissance times used that term about as often as they called themselves "mathematicians" for their principal work of calculating the positions of planets. They often hired a "computer" to assist them. For some men, such as Johannes Kepler, assisting a scientist in computation was a temporary position until they moved on to greater advancements. Before he died in 1617, John Napier suggested ways by which “the learned, who perchance may have plenty of pupils and computers” might construct an improved logarithm table.[3]: p.46
The National Advisory Committee for Aeronautics (NACA) was a United States federal agency founded on March 3, 1915, to undertake, promote, and institutionalize aeronautical research.[1] On October 1, 1958, the agency was dissolved and its assets and personnel were transferred to the newly created National Aeronautics and Space Administration (NASA). NACA is an initialism, i.e., pronounced as individual letters, rather than as a whole word[2] (as was NASA during the early years after being established).[3]
Among other advancements, NACA research and development produced the NACA duct, a type of air intake used in modern automotive applications, the NACA cowling, and several series of NACA airfoils,[4] which are still used in aircraft manufacturing.
During World War II, NACA was described as "The Force Behind Our Air Supremacy" due to its key role in producing working superchargers for high altitude bombers, and for producing the laminar wing profiles for the North American P-51 Mustang.[5] NACA also helped in developing the area rule that is used on all modern supersonic aircraft, and conducted the key compressibility research that enabled the Bell X-1 to break the sound barrier.
#7:The abacus (plural abaci or abacuses), also called a counting frame, is a calculating tool which has been used since ancient times. It was used in the ancient Near East, Europe, China, and Russia, centuries before the adoption of the Hindu-Arabic numeral system.[1] The exact origin of the abacus has not yet emerged. It consists of rows of movable beads, or similar objects, strung on a wire. They represent digits. One of the two numbers is set up, and the beads are manipulated to perform an operation such as addition, or even a square or cubic root.
In their earliest designs, the rows of beads could be loose on a flat surface or sliding in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation. Abacuses are still made, often as a bamboo frame with beads sliding on wires. In the ancient world, particularly before the introduction of positional notation, abacuses were a practical calculating tool. The abacus is still used to teach the fundamentals of mathematics to some children, for example, in Russia.
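To make the bead arithmetic concrete, here is a small illustrative sketch in Python (the function name and the digit-list representation are assumptions for illustration, not a historical procedure): each rod holds one decimal digit as a bead count, and addition moves beads, exchanging ten beads on a rod for a single carry on the next rod.

```python
# Illustrative sketch: each rod holds one decimal digit as a bead count
# (least significant rod first); addition moves beads, exchanging ten
# beads on a rod for one carry bead on the next rod.

def abacus_add(a_digits, b_digits):
    """Add two numbers held as per-rod digit lists, least significant first."""
    rods = [0] * (max(len(a_digits), len(b_digits)) + 1)
    for i, d in enumerate(a_digits):
        rods[i] += d
    for i, d in enumerate(b_digits):
        rods[i] += d
    for i in range(len(rods) - 1):
        rods[i + 1] += rods[i] // 10  # ten beads become one carry on the next rod
        rods[i] %= 10
    return rods

# 275 + 486 = 761, written least significant digit first
assert abacus_add([5, 7, 2], [6, 8, 4]) == [1, 6, 7, 0]
```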
#8:Left Image: These mathematical tables from 1925 were distributed by the College Entrance Examination Board to students taking the mathematics portions of the tests
Mathematical tables are lists of numbers showing the results of a calculation with varying arguments. Trigonometric tables were used in ancient Greece and India for applications to astronomy and celestial navigation, and continued to be widely used until electronic calculators became cheap and plentiful, in order to simplify and drastically speed up computation. Tables of logarithms and trigonometric functions were common in math and science textbooks, and specialized tables were published for numerous applications.
History and use
The first tables of trigonometric functions known to be made were by Hipparchus (c.190 – c.120 BCE) and Menelaus (c.70–140 CE), but both have been lost. Along with the surviving table of Ptolemy (c. 90 – c.168 CE), they were all tables of chords and not of half-chords, that is, the sine function.[1] The table produced by the Indian mathematician Āryabhaṭa (476–550 CE) is considered the first sine table ever constructed.[1] Āryabhaṭa's table remained the standard sine table of ancient India. There were continuous attempts to improve the accuracy of this table, culminating in the discovery of the power series expansions of the sine and cosine functions by Madhava of Sangamagrama (c.1350 – c.1425), and the tabulation of a sine table by Madhava with values accurate to seven or eight decimal places.
Tables of common logarithms were used until the invention of computers and electronic calculators to do rapid multiplications, divisions, and exponentiations, including the extraction of nth roots.
Mechanical special-purpose computers known as difference engines were proposed in the 19th century to tabulate polynomial approximations of logarithmic functions – that is, to compute large logarithmic tables. This was motivated mainly by errors in logarithmic tables made by the human computers of the time. Early digital computers were developed during World War II in part to produce specialized mathematical tables for aiming artillery. From 1972 onwards, with the launch and growing use of scientific calculators, most mathematical tables went out of use.
One of the last major efforts to construct such tables was the Mathematical Tables Project that was started in the United States in 1938 as a project of the Works Progress Administration (WPA), employing 450 out-of-work clerks to tabulate higher mathematical functions. It lasted through World War II.[citation needed]
Tables of special functions are still used. For example, the use of tables of values of the cumulative distribution function of the normal distribution – so-called standard normal tables – remains commonplace today, especially in schools, although the use of scientific and graphical calculators is making such tables redundant.
Creating tables stored in random-access memory is a common code optimization technique in computer programming, where the use of such tables speeds up calculations in those cases where a table lookup is faster than the corresponding calculations (particularly if the computer in question doesn't have a hardware implementation of the calculations). In essence, one trades computing speed for the computer memory space required to store the tables.
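A minimal Python sketch of this table-lookup optimization (the table size and function name are assumed for the example): the sines of whole degrees are computed once into memory, after which each query is a single index operation rather than a fresh calculation.

```python
import math

# Precompute the table once; afterwards each query is a single list index.
SINE_TABLE = [math.sin(math.radians(d)) for d in range(360)]

def fast_sin_degrees(d):
    """O(1) table lookup instead of recomputing the sine each time."""
    return SINE_TABLE[d % 360]

assert abs(fast_sin_degrees(30) - 0.5) < 1e-12
```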
#9:Napier's bones is a manually-operated calculating device created by John Napier of Merchiston, Scotland, for the calculation of products and quotients of numbers. The method was based on lattice multiplication, and was also called 'rabdology', a word invented by Napier. Napier published his version in 1617,[1] printed in Edinburgh and dedicated to his patron Alexander Seton.
Using the multiplication tables embedded in the rods, multiplication can be reduced to addition operations and division to subtractions. Advanced use of the rods can extract square roots. Napier's bones are not the same as logarithms, with which Napier's name is also associated, but are based on dissected multiplication tables.
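The reduction described above can be sketched in Python (an illustrative reconstruction, not Napier's exact procedure): each rod is a lookup table of single-digit multiples, and a multi-digit product is assembled from lookups, place shifts, and additions.

```python
# Illustrative reconstruction: a rod is a lookup table of single-digit
# multiples, so a full product is built from lookups, shifts, and additions.

def rod(d):
    """Napier's rod for digit d: the multiples 1*d through 9*d."""
    return [d * k for k in range(1, 10)]

def times_single_digit(a, k):
    """a * k using only rod lookups, place shifts, and addition."""
    if k == 0:
        return 0
    total, place = 0, 1
    for d in reversed([int(ch) for ch in str(a)]):
        total += rod(d)[k - 1] * place  # table lookup, shifted to its column
        place *= 10
    return total

def multiply_with_rods(a, b):
    """Multi-digit multiplication as a sum of shifted partial products."""
    total = 0
    for ch in str(b):
        total = total * 10 + times_single_digit(a, int(ch))
    return total

assert multiply_with_rods(425, 37) == 425 * 37
assert multiply_with_rods(425, 307) == 425 * 307
```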
#11:The slide rule is a mechanical analog computer[3][4] which is used primarily for multiplication and division, and for functions such as exponents, roots, logarithms, and trigonometry. It is not typically designed for addition or subtraction, which is usually performed using other methods. Maximum accuracy for standard linear slide rules is about three decimal significant digits, while scientific notation is used to keep track of the order of magnitude of results.
Slide rules exist in a diverse range of styles and generally appear in a linear, circular or cylindrical form, with slide rule scales inscribed with standardized graduated markings. Slide rules manufactured for specialized fields such as aviation or finance typically feature additional scales that aid in specialized calculations particular to those fields. The slide rule is closely related to nomograms used for application-specific computations. Though similar in name and appearance to a standard ruler, the slide rule is not meant to be used for measuring length or drawing straight lines.
At its simplest, each number to be multiplied is represented by a length on a pair of parallel rulers that can slide past each other. As the rulers each have a logarithmic scale, it is possible to align them to read the sum of the numbers' logarithms, and hence calculate the product of the two numbers.
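In code, this principle amounts to a couple of lines (a hedged Python sketch; the function name is assumed): adding the logarithmic "lengths" of the two factors and reading the combined length back off the scale yields their product.

```python
import math

def slide_rule_multiply(a, b):
    # Sliding one log scale along the other adds the two "lengths";
    # reading the combined length back off the scale gives the product.
    combined_length = math.log10(a) + math.log10(b)
    return 10 ** combined_length

print(slide_rule_multiply(2.0, 3.0))  # ~6.0, to slide-rule accuracy
```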
The English mathematician and clergyman Reverend William Oughtred and others developed the slide rule in the 17th century based on the emerging work on logarithms by John Napier. Before the advent of the electronic calculator, it was the most commonly used calculation tool in science and engineering.[5] The slide rule's ease of use, ready availability, and low cost caused its use to continue to grow through the 1950s and 1960s, even as electronic computers were being gradually introduced. The introduction of the handheld electronic scientific calculator around 1974 made slide rules largely obsolete, and most suppliers left the business.
#13:A mechanical computer is a computer built from mechanical components such as levers and gears rather than electronic components. The most common examples are adding machines and mechanical counters, which use the turning of gears to increment output displays. More complex examples could carry out multiplication and division—Friden used a moving head which paused at each column—and even differential analysis. One model, the Ascota 170 accounting machine sold in the 1960s calculated square roots.
Mechanical computers can be either analog, using smooth mechanisms such as curved plates or slide rules for computations; or digital, which use gears.
Mechanical computers reached their zenith during World War II, when they formed the basis of complex bombsights including the Norden, as well as the similar devices for ship computations such as the US Torpedo Data Computer or British Admiralty Fire Control Table. Noteworthy are mechanical flight instruments for early spacecraft, which provided their computed output not in the form of digits, but through the displacements of indicator surfaces. From Yuri Gagarin's first manned spaceflight until 2002, every manned Soviet and Russian spacecraft Vostok, Voskhod and Soyuz was equipped with a Globus instrument showing the apparent movement of the Earth under the spacecraft through the displacement of a miniature terrestrial globe, plus latitude and longitude indicators.
Example of Mechanical Computer
The Norden Mk. XV, known as the Norden M series in U.S. Army service, is a bombsight that was used by the United States Army Air Forces (USAAF) and the United States Navy during World War II, and the United States Air Force in the Korean and the Vietnam Wars. It was an early tachometric design that directly measured the aircraft's ground speed and direction, which older bombsights could only estimate with lengthy manual procedures. The Norden further improved on older designs by using an analog computer that continuously recalculated the bomb's impact point based on changing flight conditions, and an autopilot that reacted quickly and accurately to changes in the wind or other effects.
The Antikythera mechanism (/ˌæntɪkɪˈθɪərə/ AN-tih-kih-THEER-ə) is an Ancient Greek hand-powered orrery, described as the oldest example of an analogue computer[1][2][3] used to predict astronomical positions and eclipses decades in advance.[4][5][6] It could also be used to track the four-year cycle of athletic games which was similar to an Olympiad, the cycle of the ancient Olympic Games.[7][8][9]
This artefact was among wreckage retrieved from a shipwreck off the coast of the Greek island Antikythera in 1901.[10][11] On 17 May 1902, it was identified as containing a gear by archaeologist Valerios Stais.[12] The device, housed in the remains of a wooden-framed case of (uncertain) overall size 34 cm × 18 cm × 9 cm (13.4 in × 7.1 in × 3.5 in),[13][14] was found as one lump, later separated into three main fragments which are now divided into 82 separate fragments after conservation efforts. Four of these fragments contain gears, while inscriptions are found on many others.[13][14] The largest gear is approximately 13 centimetres (5.1 in) in diameter and originally had 223 teeth.[15]
#14:Pascal's calculator (also known as the arithmetic machine or Pascaline) is a mechanical calculator invented by Blaise Pascal in 1642. Pascal was led to develop a calculator by the laborious arithmetical calculations required by his father's work as the supervisor of taxes in Rouen.[2] He designed the machine to add and subtract two numbers directly and to perform multiplication and division through repeated addition or subtraction.
Pascal's calculator was especially successful in the design of its carry mechanism, which adds 1 to 9 on one dial, and carries 1 to the next dial when the first dial changes from 9 to 0. His innovation made each digit independent of the state of the others, enabling multiple carries to rapidly cascade from one digit to another regardless of the machine's capacity. Pascal was also the first to shrink and adapt for his purpose a lantern gear, used in turret clocks and water wheels. This innovation allowed the device to resist the strength of any operator input with very little added friction.
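A rough Python sketch of this cascading carry (an illustration of the idea, not Pascal's mechanism itself): each dial holds 0 through 9, and a rollover from 9 to 0 pushes one count into the neighboring dial, so carries ripple across any number of digits.

```python
# Illustrative sketch of the cascading carry: each dial holds 0-9, and a
# rollover from 9 to 0 pushes one count into the next dial, so a single
# turn can ripple across any number of digits.

def add_one_at(dials, pos):
    """Increment the dial at `pos`; dials are least significant first."""
    while pos < len(dials):
        dials[pos] += 1
        if dials[pos] < 10:   # no rollover: the carry stops here
            return
        dials[pos] = 0        # 9 -> 0 rollover...
        pos += 1              # ...carries one count into the next dial

dials = [9, 9, 9, 0]          # represents 0999, least significant first
add_one_at(dials, 0)          # one turn of the units dial
assert dials == [0, 0, 0, 1]  # the carry cascades across all three nines
```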
Pascal designed the machine in 1642.[3] After 50 prototypes, he presented the device to the public in 1645, dedicating it to Pierre Séguier, then chancellor of France.[4] Pascal built around twenty more machines during the next decade, many of which improved on his original design. In 1649, King Louis XIV of France gave Pascal a royal privilege (similar to a patent), which provided the exclusive right to design and manufacture calculating machines in France. Nine Pascal calculators presently exist;[5] most are on display in European museums.
Many later calculators were either directly inspired by, or shaped by the same historical influences which led to, Pascal's invention. Gottfried Leibniz invented his Leibniz wheels after 1671, after trying to add an automatic multiplication feature to the Pascaline.[6] In 1820, Thomas de Colmar designed his arithmometer, the first mechanical calculator strong enough and reliable enough to be used daily in an office environment. It is not clear whether he ever saw Leibniz's device, but he either re-invented it or utilised Leibniz's invention of the step drum.
#15:The stepped reckoner, also known as Leibniz calculator, was a mechanical calculator invented by the German mathematician Gottfried Wilhelm Leibniz around 1672 and completed in 1694.[1] The name comes from the translation of the German term for its operating mechanism, Staffelwalze, meaning "stepped drum". It was the first calculator that could perform all four arithmetic operations.[2]
Its intricate precision gearwork, however, was somewhat beyond the fabrication technology of the time; mechanical problems, in addition to a design flaw in the carry mechanism, prevented the machines from working reliably.[3][4]
Two prototypes were built; today only one survives in the National Library of Lower Saxony (Niedersächsische Landesbibliothek) in Hanover, Germany. Several later replicas are on display, such as the one at the Deutsches Museum, Munich.[5] Despite the mechanical flaws of the stepped reckoner, it suggested possibilities to future calculator builders. The operating mechanism, invented by Leibniz, called the stepped cylinder or Leibniz wheel, was used in many calculating machines for 200 years, and into the 1970s with the Curta hand calculator.
The Curta is a hand-held mechanical calculator designed by Curt Herzstark.[1] It is known for its extremely compact design: a small cylinder that fits in the palm of the hand. It was affectionately known as the "pepper grinder" or "peppermill" due to its shape and means of operation; its superficial resemblance to a certain type of hand grenade also earned it the nickname "math grenade".[2]
Curtas were considered the best portable calculators available until they were displaced by electronic calculators in the 1970s.[1]
#16:The Jacquard machine (French: [ʒakaʁ]) is a device fitted to a loom that simplifies the process of manufacturing textiles with such complex patterns as brocade, damask and matelassé.[3] The resulting ensemble of the loom and Jacquard machine is then called a Jacquard loom. The machine was patented by Joseph Marie Jacquard in 1804,[4][5][6][7] based on earlier inventions by the Frenchmen Basile Bouchon (1725), Jean Baptiste Falcon (1728), and Jacques Vaucanson (1740).[8] The machine was controlled by a "chain of cards"; a number of punched cards laced together into a continuous sequence.[9] Multiple rows of holes were punched on each card, with one complete card corresponding to one row of the design.
Both the Jacquard process and the necessary loom attachment are named after their inventor. This mechanism is probably one of the most important weaving innovations as Jacquard shedding made possible the automatic production of unlimited varieties of complex pattern weaving. The term "Jacquard" is not specific or limited to any particular loom, but rather refers to the added control mechanism that automates the patterning. The process can also be used for patterned knitwear and machine-knitted textiles, such as jerseys.[10]
This use of replaceable punched cards to control a sequence of operations is considered an important step in the history of computing hardware, having inspired Charles Babbage's Analytical Engine.
Importance in computing
The Jacquard head used replaceable punched cards to control a sequence of operations. It is considered an important step in the history of computing hardware.[24] The ability to change the pattern of the loom's weave by simply changing cards was an important conceptual precursor to the development of computer programming and data entry. Charles Babbage knew of Jacquard machines and planned to use cards to store programs in his Analytical Engine. In the late 19th century, Herman Hollerith took the idea of using punched cards to store information a step further when he created a punched card tabulating machine which he used to input data for the 1890 U.S. Census. A large data processing industry using punched-card technology was developed in the first half of the twentieth century—dominated initially by the International Business Machine corporation (IBM), with its line of unit record equipment. The cards were used for data, however, with programming done by plugboards.
Some early computers, such as the 1944 IBM Automatic Sequence Controlled Calculator (Harvard Mark I) received program instructions from a paper tape punched with holes, similar to Jacquard's string of cards. Later computers executed programs from higher-speed memory, though cards were commonly used to load the programs into memory. Punched cards remained in use in computing up until the mid-1980s.
#18:The arithmometer (French: arithmomètre) was the first digital mechanical calculator strong enough and reliable enough to be used daily in an office environment. This calculator could add and subtract two numbers directly and could perform long multiplications and divisions effectively by using a movable accumulator for the result.
Patented in France by Thomas de Colmar in 1820[1] and manufactured from 1851[2] to 1915,[citation needed] it became the first commercially successful mechanical calculator.[3] Its sturdy design gave it a strong reputation for reliability and accuracy[4] and made it a key player in the move from human computers to calculating machines that took place during the second half of the 19th century.[5]
Its production debut of 1851[2] launched the mechanical calculator industry[3] which ultimately built millions of machines well into the 1970s. For forty years, from 1851 to 1890,[6] the arithmometer was the only type of mechanical calculator in commercial production, and it was sold all over the world. During the later part of that period two companies started manufacturing clones of the arithmometer: Burkhardt, from Germany, which started in 1878, and Layton of the UK, which started in 1883. Eventually about twenty European companies built clones of the arithmometer until the beginning of World War I.
#19:A difference engine is an automatic mechanical calculator designed to tabulate polynomial functions. It was designed in the 1820s, and was first created by Charles Babbage. The name, the difference engine, is derived from the method of divided differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients. Many of the mathematical functions commonly used in engineering, science, and navigation, including logarithmic and trigonometric functions, can be approximated by polynomials, so a difference engine can compute many useful tables of numbers.
The principle of a difference engine is Newton's method of divided differences. If the initial value of a polynomial (and of its finite differences) is calculated by some means for some value of x, the difference engine can calculate any number of nearby values, using the method generally known as the method of finite differences. For example, consider the quadratic polynomial p(x) = 2x^2 - 3x + 2 with the goal of tabulating the values p(0), p(1), p(2), p(3), p(4), and so forth. The table below is constructed as follows: the second column contains the values of the polynomial, the third column contains the differences of the two left neighbors in the second column, and the fourth column contains the differences of the two neighbors in the third column:

x    p(x)    1st difference    2nd difference
0      2
1      1          -1
2      4           3                 4
3     11           7                 4
4     22          11                 4
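The same tabulation is easy to express as a short Python sketch (illustrative; the function name is assumed): seed the engine with p(0) and its initial differences, then generate each further value using additions alone, exactly as the method of finite differences requires.

```python
def difference_engine(value, diffs, steps):
    """Tabulate a polynomial from p(0) and its initial finite differences.

    value: p(0); diffs: [first difference, second difference, ...], with the
    highest-order difference constant for a polynomial.
    """
    out = [value]
    diffs = list(diffs)
    for _ in range(steps):
        value += diffs[0]             # next tabulated value: one addition
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]  # roll each difference column forward
        out.append(value)
    return out

# p(0) = 2, initial first difference p(1) - p(0) = -1, constant second difference 4
print(difference_engine(2, [-1, 4], 4))  # [2, 1, 4, 11, 22]
```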
Scheutzian calculation engine
Per Georg Scheutz's third difference engine
Inspired by Babbage's difference engine in 1834, Per Georg Scheutz built several experimental models. In 1837 his son Edward proposed to construct a working model in metal, and in 1840 finished the calculating part, capable of calculating series with 5-digit numbers and first-order differences, which was later extended to third-order (1842). In 1843, after adding the printing part, the model was completed.
In 1851, funded by the government, construction of the larger and improved (15-digit numbers and fourth-order differences) machine began, and finished in 1853. The machine was demonstrated at the World's Fair in Paris, 1855 and then sold in 1856 to the Dudley Observatory in Albany, New York. Delivered in 1857, it was the first printing calculator sold.[17][18][19] In 1857 the British government ordered the next Scheutz's difference machine, which was built in 1859.[20][21] It had the same basic construction as the previous one, weighing about 10 cwt (1,100 lb; 510 kg).[19]
#21:Augusta Ada King, Countess of Lovelace (née Byron; 10 December 1815 – 27 November 1852) was an English mathematician and writer, chiefly known for her work on Charles Babbage's proposed mechanical general-purpose computer, the Analytical Engine. She was the first to recognise that the machine had applications beyond pure calculation, and to have published the first algorithm intended to be carried out by such a machine. As a result, she is often regarded as the first computer programmer.[2][3][4]
First computer program
Lovelace's diagram from "note G", the first published computer algorithm
In 1840, Babbage was invited to give a seminar at the University of Turin about his Analytical Engine. Luigi Menabrea, a young Italian engineer and the future Prime Minister of Italy, transcribed Babbage's lecture into French, and this transcript was subsequently published in the Bibliothèque universelle de Genève in October 1842. Babbage's friend Charles Wheatstone commissioned Ada Lovelace to translate Menabrea's paper into English. She then augmented the paper with notes, which were added to the translation. Ada Lovelace spent the better part of a year doing this, assisted with input from Babbage. These notes, which are more extensive than Menabrea's paper, were then published in the September 1843 edition of Taylor's Scientific Memoirs under the initialism AAL.[72]
Insight into potential of computing devices
In her notes, Ada Lovelace emphasised the difference between the Analytical Engine and previous calculating machines, particularly its ability to be programmed to solve problems of any complexity.[77] She realised the potential of the device extended far beyond mere number crunching. In her notes, she wrote:
[The Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine...Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.[78][79]
This analysis was an important development from previous ideas about the capabilities of computing devices and anticipated the implications of modern computing one hundred years before they were realised. Walter Isaacson ascribes Ada's insight regarding the application of computing to any process based on logical symbols to an observation about textiles: "When she saw some mechanical looms that used punchcards to direct the weaving of beautiful patterns, it reminded her of how Babbage's engine used punched cards to make calculations."[80] This insight is seen as significant by writers such as Betty Toole and Benjamin Woolley, as well as the programmer John Graham-Cumming, whose project Plan 28 has the aim of constructing the first complete Analytical Engine.[81][82][83]
According to the historian of computing and Babbage specialist Doron Swade:
Ada saw something that Babbage in some sense failed to see. In Babbage's world his engines were bound by number...What Lovelace saw...was that number could represent entities other than quantity. So once you had a machine for manipulating numbers, if those numbers represented other things, letters, musical notes, then the machine could manipulate symbols of which number was one instance, according to rules. It is this fundamental transition from a machine which is a number cruncher to a machine for manipulating symbols according to rules that is the fundamental transition from calculation to computation—to general-purpose computation—and looking back from the present high ground of modern computing, if we are looking and sifting history for that transition, then that transition was made explicitly by Ada in that 1843 paper.[2]
#26:The tabulating machine was an electromechanical machine designed to assist in summarizing information stored on punched cards. Invented by Herman Hollerith, the machine was developed to help process data for the 1890 U.S. Census. Later models were widely used for business applications such as accounting and inventory control. It spawned a class of machines, known as unit record equipment, and the data processing industry.
The term "Super Computing" was used by the New York World newspaper in 1931 to refer to a large custom-built tabulator that IBM made for Columbia University.[1]
1890 census
The 1880 census had taken eight years to process.[2] Since the U.S. Constitution mandates a census every ten years to apportion both congressional representatives and direct taxes among the states, a combination of larger staff and faster-recording systems was required.
Changes from the 1880 census included the larger population, the number of data items to be collected from individuals, the Census Bureau headcount, the volume of scheduled publications, and the use of Hollerith's electromechanical tabulators. The net effect of these changes was to reduce the time required to process the census from eight years for the 1880 census to six years for the 1890 census.[3]
In the late 1880s Herman Hollerith, inspired by conductors using holes punched in different positions on a railway ticket to record traveler details such as gender and approximate age, invented the recording of data on a machine-readable medium. Prior uses of machine-readable media had been for lists of instructions (not data) to drive programmed machines such as Jacquard looms. "After some initial trials with paper tape, he settled on punched cards..."[3] Hollerith used punched cards with round holes, 12 rows, and 24 columns. The cards measured 3-1/4 inches by 6-5/8 inches.[4] His tabulator used electromechanical solenoids to increment mechanical counters. A set of spring-loaded wires were suspended over the card reader. The card sat over pools of mercury, pools corresponding to the possible hole positions in the card. When the wires were pressed onto the card, punched holes allowed wires to dip into the mercury pools, making an electrical contact[5][6] that could be used for counting, sorting, and setting off a bell to let the operator know the card had been read. The tabulator had 40 counters, each with a dial divided into 100 divisions, with two indicator hands; one which stepped one unit with each counting pulse, the other which advanced one unit every time the other dial made a complete revolution. This arrangement allowed a count of up to 9,999. During a given tabulating run, counters could be assigned to a specific hole or, by using relay logic, to a combination of holes, e.g. to count married females.[7] If the card was to be sorted, a compartment lid of the sorting box would open for storage of the card, the choice of compartment depending on the data in the card.[8]
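As an illustration of the counter arrangement just described (a speculative Python sketch, not Hollerith's design documentation), the two-hand dial can be modeled as a units hand stepping once per counting pulse and a second hand advancing once per full revolution, giving a count of up to 9,999.

```python
# Speculative sketch of one counter: a dial of 100 divisions with two hands,
# one stepping per counting pulse and one advancing per full revolution,
# allowing a count of up to 9,999 as described above.

class DialCounter:
    def __init__(self):
        self.units_hand = 0   # steps one division per counting pulse
        self.revolutions = 0  # advances once per complete revolution

    def pulse(self):
        self.units_hand += 1
        if self.units_hand == 100:  # a full revolution of the first hand
            self.units_hand = 0
            self.revolutions = (self.revolutions + 1) % 100

    def reading(self):
        return self.revolutions * 100 + self.units_hand

counter = DialCounter()
for _ in range(250):  # one pulse per hole sensed
    counter.pulse()
assert counter.reading() == 250
```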
Hollerith's method was used for the 1890 census. Clerks used keypunches to punch holes in the cards entering age, state of residence, gender, and other information from the returns. Some 100 million cards were generated and "the cards were only passed through the machines four times during the whole of the operations."[4] According to the U.S. Census Bureau, the census results were "... finished months ahead of schedule and far under budget."[9]
Following the 1890 census
The advantages of the technology were immediately apparent for accounting and tracking inventory. Hollerith started his own business as The Hollerith Electric Tabulating System, specializing in punched card data processing equipment.[10] In 1896 he incorporated the Tabulating Machine Company. In that year he introduced the Hollerith Integrating Tabulator, which could add numbers coded on punched cards, not just count the number of holes. Punched cards were still read manually using the pins and mercury pool reader. 1900 saw the Hollerith Automatic Feed Tabulator used in that year's U.S. census. A control panel was incorporated in the 1906 Type 1.[11]
In 1911, four corporations, including Hollerith's firm, were amalgamated (via stock acquisition) to form a fifth company, the Computing-Tabulating-Recording Company (CTR). The Powers Accounting Machine Company was formed that same year and, like Hollerith, with machines first developed at the Census Bureau. In 1919 the first Bull tabulator prototype was developed. Tabulators that could print, and with removable control panels, appeared in the 1920s. In 1924 CTR was renamed International Business Machines (IBM). In 1927 Remington Rand acquired the Powers Accounting Machine Company. In 1933 The Tabulating Machine Company was subsumed into IBM. These companies continued to develop faster and more sophisticated tabulators, culminating in tabulators such as 1949 IBM 407 and 1952 Remington Rand 409. Tabulating machines continued to be used well after the introduction of commercial electronic computers in the 1950s.
Many applications using unit record tabulators were migrated to computers such as the IBM 1401. Two programming languages, FARGO and RPG, were created to aid this migration. Since tabulator control panels were based on the machine cycle, both FARGO and RPG emulated the notion of the machine cycle and training material showed the control panel vs. programming language coding sheet relationships.
#28:A Turing machine is a mathematical model of computation describing an abstract machine[1] that manipulates symbols on a strip of tape according to a table of rules.[2] Despite the model's simplicity, it is capable of implementing any computer algorithm.[3]
The machine operates on an infinite[4] memory tape divided into discrete cells,[5] each of which can hold a single symbol drawn from a finite set of symbols called the alphabet of the machine. It has a "head" that, at any point in the machine's operation, is positioned over one of these cells, and a "state" selected from a finite set of states. At each step of its operation, the head reads the symbol in its cell. Then, based on the symbol and the machine's own present state, the machine writes a symbol into the same cell, and moves the head one step to the left or the right,[6] or halts the computation. The choice of which replacement symbol to write and which direction to move is based on a finite table that specifies what to do for each combination of the current state and the symbol that is read.
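The operation just described fits in a few lines of Python (a minimal sketch; the rule table shown, which increments a binary number, is a hypothetical example rather than anything from the text): a finite table maps the pair (state, read symbol) to a symbol to write, a head movement, and the next state.

```python
from collections import defaultdict

# Hypothetical rule table for illustration: increment a binary number whose
# head starts on its least significant bit. Each entry maps
# (state, read symbol) -> (symbol to write, head move, next state).
RULES = {
    ("carry", "1"): ("0", -1, "carry"),  # 1 plus carry -> 0, carry moves left
    ("carry", "0"): ("1", 0, "halt"),    # 0 plus carry -> 1, done
    ("carry", "_"): ("1", 0, "halt"),    # past the left edge: write a new digit
}

def run(tape, head, state="carry"):
    cells = defaultdict(lambda: "_", enumerate(tape))  # unbounded tape, blank = "_"
    while state != "halt":
        write, move, state = RULES[(state, cells[head])]
        cells[head] = write
        head += move
    low, high = min(cells), max(cells)
    return "".join(cells[i] for i in range(low, high + 1)).strip("_")

print(run("1011", head=3))  # 1011 (11) incremented -> 1100 (12)
```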
The Turing machine was invented in 1936 by Alan Turing,[7][8] who called it an "a-machine" (automatic machine).[9] It was Turing's Doctoral advisor, Alonzo Church, who later coined the term "Turing machine" in a review.[10] With this model, Turing was able to answer two questions in the negative:
Does a machine exist that can determine whether any arbitrary machine on its tape is "circular" (e.g., freezes, or fails to continue its computational task)?
Does a machine exist that can determine whether any arbitrary machine on its tape ever prints a given symbol?[11][12]
Thus by providing a mathematical description of a very simple device capable of arbitrary computations, he was able to prove properties of computation in general—and in particular, the uncomputability of the Entscheidungsproblem ('decision problem').[13]
Turing machines proved the existence of fundamental limitations on the power of mechanical computation.[14] While they can express arbitrary computations, their minimalist design makes them unsuitable for computation in practice: real-world computers are based on different designs that, unlike Turing machines, use random-access memory.
Turing completeness is the ability for a system of instructions to simulate a Turing machine. A programming language that is Turing complete is theoretically capable of expressing all tasks accomplishable by computers; nearly all programming languages are Turing complete if the limitations of finite memory are ignored.
In computability theory, a system of data-manipulation rules (such as a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing-complete or computationally universal if it can be used to simulate any Turing machine (devised by English mathematician and computer scientist Alan Turing). This means that this system is able to recognize or decide other data-manipulation rule sets. Turing completeness is used as a way to express the power of such a data-manipulation rule set. Virtually all programming languages today are Turing-complete.
A related concept is that of Turing equivalence – two computers P and Q are called equivalent if P can simulate Q and Q can simulate P. The Church–Turing thesis conjectures that any function whose values can be computed by an algorithm can be computed by a Turing machine, and therefore that if any real-world computer can simulate a Turing machine, it is Turing equivalent to a Turing machine. A universal Turing machine can be used to simulate any Turing machine and by extension the computational aspects of any possible real-world computer.[NB 1]
To show that something is Turing-complete, it is enough to show that it can be used to simulate some Turing-complete system. No physical system can have infinite memory, but if the limitation of finite memory is ignored, most programming languages are otherwise Turing-complete.
#29:The Z1 was a motor-driven mechanical computer designed by Konrad Zuse from 1936 to 1937, which he built in his parents' home from 1936 to 1938.[1][2] It was a binary electrically driven mechanical calculator with limited programmability, reading instructions from punched celluloid film.
The Z1 was the first freely programmable computer in the world to use Boolean logic and binary floating-point numbers; however, it was unreliable in operation.[3][4] It was completed in 1938 and financed completely from private funds. This computer was destroyed in the bombardment of Berlin in December 1943, during World War II, together with all construction plans.
The Z1 was the first in a series of computers that Zuse designed. Its original name was "V1" for VersuchsModell 1 (meaning Experimental Model 1). After WW2, it was renamed "Z1" to differentiate from the flying bombs designed by Robert Lusser.[5] The Z2 and Z3 were follow-ups based on many of the same ideas as the Z1.
Design
Diagrams from Zuse's May 1936 patent for a binary switching element using a mechanism of flat sliding rods. The Z1 was based on such elements.
The Z1 contained almost all the parts of a modern computer, i.e. control unit, memory, micro sequences, floating-point logic, and input-output devices. The Z1 was freely programmable via punched tape and a punched tape reader.[6] There was a clear separation between the punched tape reader, the control unit for supervising the whole machine and the execution of the instructions, the arithmetic unit, and the input and output devices. The input tape unit read perforations in 35-millimeter film.[7]
The Z1 was a 22-bit floating-point value adder and subtractor, with some control logic to make it capable of more complex operations such as multiplication (by repeated additions) and division (by repeated subtractions). The Z1's instruction set had eight instructions and it took between one and twenty-one cycles per instruction.
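How an adder/subtractor can cover the more complex operations is easy to sketch in Python (illustrative, assuming non-negative integers rather than the Z1's floating-point words): multiplication by repeated addition and division by repeated subtraction.

```python
# Illustrative sketch with non-negative integers (the Z1 itself worked on
# 22-bit binary floating-point words): multiplication by repeated addition,
# division by repeated subtraction.

def multiply(a, b):
    """a * b using addition only."""
    acc = 0
    for _ in range(b):
        acc += a
    return acc

def divide(a, b):
    """Quotient and remainder of a / b using subtraction only."""
    q = 0
    while a >= b:
        a -= b
        q += 1
    return q, a

assert multiply(6, 7) == 42
assert divide(45, 6) == (7, 3)
```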
The Z1 had a 16-word floating point memory, where each word of memory could be read from – and written to – the control unit. The mechanical memory units were unique in their design and were patented by Konrad Zuse in 1936. The machine was only capable of executing instructions while reading from the punched tape reader, so the program itself was not loaded in its entirety into internal memory in advance.
The input and output were in decimal numbers, with a decimal exponent and the units had special machinery for converting these to and from binary numbers. The input and output instructions would be read or written as floating-point numbers. The program tape was 35 mm film with the instructions encoded in punched holes.
#30:The Harvard Mark I, or IBM Automatic Sequence Controlled Calculator (ASCC), was a general-purpose electromechanical computer used in the war effort during the last part of World War II.
One of the first programs to run on the Mark I was initiated on 29 March 1944[1] by John von Neumann. At that time, von Neumann was working on the Manhattan Project, and needed to determine whether implosion was a viable choice to detonate the atomic bomb that would be used a year later. The Mark I also computed and printed mathematical tables, which had been the initial goal of British inventor Charles Babbage for his "analytical engine" in 1837.
The Mark I was disassembled in 1959, but portions of it were displayed in the Science Center as part of the Harvard Collection of Historical Scientific Instruments until being moved to the new Science and Engineering Complex in Allston, Massachusetts in July 2021.[2] Other sections of the original machine had much earlier been transferred to IBM and the Smithsonian Institution.
The original concept was presented to IBM by Howard Aiken in November 1937.[3] After a feasibility study by IBM engineers, the company chairman Thomas Watson Sr. personally approved the project and its funding in February 1939.
Howard Aiken had started to look for a company to design and build his calculator in early 1937. After two rejections,[4] he was shown a demonstration set that Charles Babbage’s son had given to Harvard University 70 years earlier. This led him to study Babbage and to add references of the Analytical Engine to his proposal; the resulting machine "brought Babbage’s principles of the Analytical Engine almost to full realization, while adding important new features."[5]
The ASCC was developed and built by IBM at their Endicott plant and shipped to Harvard in February 1944. It began computations for the U.S. Navy Bureau of Ships in May and was officially presented to the university on August 7, 1944.[6]
Design and construction
The ASCC was built from switches, relays, rotating shafts, and clutches. It used 765,000 electromechanical components and hundreds of miles of wire, comprising a volume of 816 cubic feet (23 m³) – 51 feet (16 m) in length, 8 feet (2.4 m) in height, and 2 feet (0.61 m) deep. It weighed about 9,445 pounds (4.7 short tons; 4.3 t).[7] The basic calculating units had to be synchronized and powered mechanically, so they were operated by a 50-foot (15 m) drive shaft coupled to a 5 horsepower (3.7 kW) electric motor, which served as the main power source and system clock. From the IBM Archives:
The Automatic Sequence Controlled Calculator (Harvard Mark I) was the first operating machine that could execute long computations automatically. A project conceived by Harvard University’s Dr. Howard Aiken, the Mark I was built by IBM engineers in Endicott, N.Y. A steel frame 51 feet long and 8 feet high held the calculator, which consisted of an interlocking panel of small gears, counters, switches and control circuits, all only a few inches in depth. The ASCC used 500 miles (800 km) of wire with three million connections, 3,500 multipole relays with 35,000 contacts, 2,225 counters, 1,464 tenpole switches and tiers of 72 adding machines, each with 23 significant numbers. It was the industry’s largest electromechanical calculator.[8]
A patch is a set of changes to a computer program or its supporting data designed to update, fix, or improve it.[1] This includes fixing security vulnerabilities[1] and other bugs, with such patches usually being called bugfixes or bug fixes.[2] Patches are often written to improve the functionality, usability, or performance of a program. The majority of patches are provided by software vendors for operating system and application updates.
#33:The Lehmer sieve is an example of a digital non-electronic computer, specialized for finding primes and solving simple Diophantine equations. When digital electronic computers appeared, they displaced all other kinds of computers, including analog computers and mechanical computers.
In computer science, a digital electronic computer is a machine that is both an electronic computer and a digital computer. Examples of digital electronic computers include the IBM PC, the Apple Macintosh, and modern smartphones. When computers that were both digital and electronic appeared, they displaced almost all other kinds of computers, but computation has historically been performed in various non-digital and non-electronic ways: the Lehmer sieve is an example of a digital non-electronic computer, analog computers are examples of non-digital computers that can be electronic (using analog electronics), and mechanical computers are examples of non-electronic computers (which may or may not be digital).
An example of a computer which is both non-digital and non-electronic is the ancient Antikythera mechanism found in Greece. All kinds of computers, whether they are digital or analog, and electronic or non-electronic, can be Turing complete if they have sufficient memory. A digital electronic computer is not necessarily a programmable computer, a stored program computer, or a general purpose computer, since in essence a digital electronic computer can be built for one specific application and be non-reprogrammable.
The term digital was first suggested by George Robert Stibitz and refers to a signal, such as a voltage, being used not to directly represent a value (as it would in an analog computer) but to encode it. In November 1937, Stibitz, then working at Bell Labs (1930–1941),[67] completed a relay-based calculator he later dubbed the "Model K" (for "kitchen table", on which he had assembled it), which became the first binary adder.[68] Typically, signals have two states: low (usually representing 0) and high (usually representing 1), though sometimes three-valued logic is used, especially in high-density memory. Modern computers generally use binary logic, but many early machines were decimal computers. In these machines, the basic unit of data was the decimal digit, encoded in one of several schemes, including binary-coded decimal (BCD), bi-quinary, excess-3, and two-out-of-five code.
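To make two of those encodings concrete, here is a small illustrative Python sketch. The digit-to-pattern rules are the standard ones for BCD and excess-3; the function names are ours.

    def to_bcd(n):
        # Binary-coded decimal: each decimal digit becomes its own 4-bit group.
        return " ".join(format(int(d), "04b") for d in str(n))

    def to_excess3(n):
        # Excess-3: each decimal digit d is stored as the 4-bit pattern for d + 3.
        return " ".join(format(int(d) + 3, "04b") for d in str(n))

    print(to_bcd(1946))      # 0001 1001 0100 0110
    print(to_excess3(1946))  # 0100 1100 0111 1001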
The mathematical basis of digital computing is Boolean algebra, developed by the British mathematician George Boole in his work The Laws of Thought, published in 1854. His Boolean algebra was further refined in the 1860s by William Jevons and Charles Sanders Peirce, and was first presented systematically by Ernst Schröder and A. N. Whitehead.[69] In 1879, Gottlob Frege developed the formal approach to logic and proposed the first logic language for logical equations.[70]
In the 1930s, working independently, the American electronic engineer Claude Shannon and the Soviet logician Victor Shestakov both showed a one-to-one correspondence between the concepts of Boolean logic and certain electrical circuits, now called logic gates, which are ubiquitous in digital computers.[71] They showed[72] that electronic relays and switches can realize the expressions of Boolean algebra. This thesis essentially founded practical digital circuit design. In addition, Shannon's paper gives a correct circuit diagram for a 4-bit digital binary adder.[71]: pp.494–495
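The sketch below illustrates the same idea in Python rather than in relays: a 1-bit full adder built only from AND, OR, and XOR expressions, rippled into a 4-bit binary adder of the kind Shannon diagrammed. This is a conceptual model with names of our choosing, not his actual circuit.

    def full_adder(a, b, carry_in):
        s = a ^ b ^ carry_in                        # XOR gives the sum bit
        carry_out = (a & b) | (carry_in & (a ^ b))  # AND/OR give the carry
        return s, carry_out

    def add4(x, y):
        # Add two 4-bit numbers by rippling the carry through four full adders.
        carry, result = 0, 0
        for i in range(4):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result | (carry << 4)  # fifth bit is the final carry

    print(bin(add4(0b1011, 0b0110)))  # 0b10001, i.e. 11 + 6 = 17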
#34:The Atanasoff–Berry computer (ABC) was the first automatic electronic digital computer.[1] Limited by the technology of the day and by problems with its execution, the device has remained somewhat obscure. The ABC's priority is debated among historians of computer technology, because it was neither programmable nor Turing-complete.[2] Conventionally, the ABC would be considered the first electronic ALU (arithmetic logic unit) – a component integrated into every modern processor's design.
Its unique contribution was to make computing faster by being the first to use vacuum tubes for arithmetic calculations. Before this, slower electro-mechanical methods were used by Konrad Zuse's Z1 and by the simultaneously developed Harvard Mark I. The first electronic, programmable, digital machine,[3] the Colossus computer of 1943–1945, used tube-based technology similar to the ABC's.
It was not programmable, which distinguishes it from more general machines of the same era, such as Konrad Zuse's 1941 Z3 (or earlier iterations) and the Colossus computers of 1943–1945. Nor did it implement the stored-program architecture, first implemented in the Manchester Baby of 1948, required for fully general-purpose practical computing machines.
Add-subtract module (reconstructed) from Atanasoff–Berry computer
The machine was, however, the first to implement:
• Using vacuum tubes, rather than wheels, ratchets, mechanical switches, or telephone relays, allowing for greater speed than previous computers
• Using capacitors for memory, rather than mechanical components, allowing for greater speed and density
The memory of the Atanasoff–Berry computer was a system called regenerative capacitor memory, which consisted of a pair of drums, each containing 1600 capacitors that rotated on a common shaft once per second. The capacitors on each drum were organized into 32 "bands" of 50 (30 active bands and two spares in case a capacitor failed), giving the machine a speed of 30 additions/subtractions per second. Data was represented as 50-bit binary fixed-point numbers. The electronics of the memory and arithmetic units could store and operate on 60 such numbers at a time (3000 bits).
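The capacity and speed figures above fit together as simple arithmetic; the back-of-envelope check below is our reading of the description, not period documentation.

    # Two drums, each with 32 bands of 50 capacitors (30 bands active),
    # rotating on a common shaft once per second.
    drums = 2
    active_bands = 30
    bits_per_band = 50

    numbers = drums * active_bands        # 60 fifty-bit numbers in the store
    total_bits = numbers * bits_per_band  # 3000 bits of working memory
    print(numbers, total_bits)            # 60 3000

    # With one operation per active band per one-second revolution, the
    # machine performs the quoted 30 additions/subtractions per second.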
The alternating-current power-line frequency of 60 Hz served as the primary clock rate for the lowest-level operations.
#35:ENIAC (/ˈɛniæk/; Electronic Numerical Integrator and Computer)[1][2] was the first programmable, electronic, general-purpose digital computer, completed in 1945.[3][4] There were other computers that had these features, but the ENIAC had all of them in one package. It was Turing-complete and able to solve "a large class of numerical problems" through reprogramming.[5][6]
Although ENIAC was designed and primarily used to calculate artillery firing tables for the United States Army's Ballistic Research Laboratory (which later became a part of the Army Research Laboratory),[7][8] its first program was a study of the feasibility of the thermonuclear weapon.[9][10]
ENIAC was completed in 1945 and first put to work for practical purposes on December 10, 1945.[11]
ENIAC was formally dedicated at the University of Pennsylvania on February 15, 1946, having cost $487,000 (equivalent to $5,900,000 in 2020), and was heralded as a "Giant Brain" by the press.[12] It was on the order of one thousand times faster than electro-mechanical machines; this computational power, coupled with general-purpose programmability, excited scientists and industrialists alike. The combination of speed and programmability allowed thousands more calculations per problem. Since ENIAC calculated a trajectory in 30 seconds that took a human 20 hours (72,000 seconds, a ratio of 2,400 to 1), one ENIAC could replace 2,400 humans.[13]
ENIAC was formally accepted by the U.S. Army Ordnance Corps in July 1946. It was transferred to Aberdeen Proving Ground, Maryland in 1947, where it was in continuous operation until 1955.
Reliability
ENIAC used common octal-base radio tubes of the day; the decimal accumulators were made of 6SN7 flip-flops, while 6L7s, 6SJ7s, 6SA7s and 6AC7s were used in logic functions.[32] Numerous 6L6s and 6V6s served as line drivers to drive pulses through cables between rack assemblies.
Several tubes burned out almost every day, leaving ENIAC nonfunctional about half the time. Special high-reliability tubes were not available until 1948. Most of these failures, however, occurred during the warm-up and cool-down periods, when the tube heaters and cathodes were under the most thermal stress. Engineers reduced ENIAC's tube failures to the more acceptable rate of one tube every two days. According to an interview in 1989 with Eckert, "We had a tube fail about every two days and we could locate the problem within 15 minutes."[33] In 1954, the longest continuous period of operation without a failure was 116 hours—close to five days.
Programmers
Programmers Betty Jean Jennings (left) and Fran Bilas (right) operate ENIAC's main control panel at the Moore School of Electrical Engineering. (U.S. Army photo from the archives of the ARL Technical Library)
Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Meltzer, Fran Bilas, and Ruth Lichterman were the first programmers of the ENIAC. They were not, as computer scientist and historian Kathryn Kleiman was once told, "refrigerator ladies", i.e., models posing in front of the machine for press photography.[42] Nevertheless, some of the women did not receive recognition for their work on the ENIAC in their lifetimes.[19] After the war ended, the women continued to work on the ENIAC. Their expertise made their positions difficult to replace with returning soldiers. The original programmers of the ENIAC were neither recognized for their efforts nor known to the public until the mid-1980s.[43]
These early programmers were drawn from a group of about two hundred women employed as computers at the Moore School of Electrical Engineering at the University of Pennsylvania. The job of computers was to produce the numeric result of mathematical formulas needed for a scientific study, or an engineering project. They usually did so with a mechanical calculator. The women studied the machine's logic, physical structure, operation, and circuitry in order to not only understand the mathematics of computing, but also the machine itself.[19] This was one of the few technical job categories available to women at that time.[44] Betty Holberton (née Snyder) continued on to help write the first generative programming system (SORT/MERGE) and help design the first commercial electronic computers, the UNIVAC and the BINAC, alongside Jean Jennings.[45] McNulty developed the use of subroutines in order to help increase ENIAC's computational capability.[46]
Herman Goldstine selected the programmers, whom he called operators, from the computers who had been calculating ballistics tables with mechanical desk calculators, and a differential analyzer prior to and during the development of ENIAC.[19] Under Herman and Adele Goldstine's direction, the computers studied ENIAC's blueprints and physical structure to determine how to manipulate its switches and cables, as programming languages did not yet exist. Though contemporaries considered programming a clerical task and did not publicly recognize the programmers' effect on the successful operation and announcement of ENIAC,[19] McNulty, Jennings, Snyder, Wescoff, Bilas, and Lichterman have since been recognized for their contributions to computing.[47][48][49] Three of the current (2020) Army supercomputers Jean, Kay, and Betty are named for Jean Bartik (Betty Jennings), Kay McNulty, and Betty Snyder respectively.[50]
The "programmer" and "operator" job titles were not originally considered professions suitable for women. The labor shortage created by World War II helped enable the entry of women into the field.[19] However, the field was not viewed as prestigious, and bringing in women was viewed as a way to free men up for more skilled labor. Essentially, women were seen as meeting a need in a temporary crisis.[19] For example, the National Advisory Committee for Aeronautics said in 1942, "It is felt that enough greater return is obtained by freeing the engineers from calculating detail to overcome any increased expenses in the computers' salaries. The engineers admit themselves that the girl computers do the work more rapidly and accurately than they would. This is due in large measure to the feeling among the engineers that their college and industrial experience is being wasted and thwarted by mere repetitive calculation".[19]
Following the initial six programmers, an expanded team of a hundred scientists was recruited to continue work on the ENIAC. Among these were several women, including Gloria Ruth Gordon.[51] Adele Goldstine wrote the original technical description of the ENIAC.[52]
#37:The UNIVAC I (Universal Automatic Computer I) was the first general-purpose electronic digital computer design for business applications produced in the United States. It was designed principally by J. Presper Eckert and John Mauchly, the inventors of the ENIAC. Design work was started by their company, the Eckert–Mauchly Computer Corporation (EMCC), and was completed after the company had been acquired by Remington Rand (which later became part of Sperry, now Unisys). In the years before successor models of the UNIVAC I appeared, the machine was simply known as "the UNIVAC".[1]
The first UNIVAC was accepted by the United States Census Bureau on March 31, 1951, and was dedicated on June 14 that year.[2][3] The fifth machine (built for the U.S. Atomic Energy Commission) was used by CBS to predict the result of the 1952 presidential election: with a sample of a mere 5.5% of the voter turnout, it famously predicted an Eisenhower landslide. (The 1952 United States presidential election, held on Tuesday, November 4, 1952, saw Republican Dwight D. Eisenhower win a landslide victory over Democrat Adlai Stevenson II, ending a string of Democratic Party wins that stretched back to 1932.)
Major physical features
UNIVAC I used about 5,000 vacuum tubes,[17] weighed 16,686 pounds (8.3 short tons; 7.6 t),[18] consumed 125 kW, and could perform about 1,905 operations per second running on a 2.25 MHz clock. The Central Complex alone (i.e., the processor and memory unit) was 4.3 m by 2.4 m by 2.6 m high. The complete system occupied more than 35.5 m² (382 ft²) of floor space.
#38:EDVAC (Electronic Discrete Variable Automatic Computer) was one of the earliest electronic computers. It was built by the Moore School of Electrical Engineering at the University of Pennsylvania.[1][2]: 626–628 Along with ORDVAC, it was a successor to the ENIAC. Unlike ENIAC, it was binary rather than decimal, and it was designed to be a stored-program computer.
ENIAC inventors, John Mauchly and J. Presper Eckert, proposed the EDVAC's construction in August 1944. A contract to build the new computer was signed in April 1946 with an initial budget of US$100,000. EDVAC was delivered to the Ballistic Research Laboratory in 1949. The Ballistic Research Laboratory became a part of the US Army Research Laboratory in 1952.
Functionally, EDVAC was a binary serial computer with automatic addition, subtraction, multiplication, programmed division and automatic checking with an ultrasonic serial memory[3] having a capacity of 1,024 44-bit words. EDVAC's average addition time was 864 microseconds and its average multiplication time was 2,900 microseconds.
Impact on future computer design
John von Neumann's famous EDVAC monograph, First Draft of a Report on the EDVAC, proposed the main enhancement to its design, the "stored-program" concept that we now call the von Neumann architecture: storing the program in the same memory as the data. The British computers EDSAC at Cambridge and the Manchester Baby were the first working computers to follow this design, and the great majority of computers made since have followed it as well. Having the program and data in different memories is now called the Harvard architecture, to distinguish it.
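A minimal Python sketch of the stored-program idea follows. The three-instruction machine here is invented purely for illustration (it is not EDVAC's instruction set), but it shows the essential point: instructions and data sit in one memory, and a single fetch-decode-execute loop walks through them.

    # Toy stored-program machine. Instructions and data share one memory;
    # in a real von Neumann machine both would be indistinguishable bit patterns.
    memory = [
        ("LOAD", 6),   # acc <- memory[6]
        ("ADD", 7),    # acc <- acc + memory[7]
        ("STORE", 8),  # memory[8] <- acc
        ("HALT", 0),
        0, 0,          # unused
        20, 22,        # data at addresses 6 and 7
        0,             # result lands at address 8
    ]

    acc, pc = 0, 0
    while True:
        op, addr = memory[pc]  # fetch the instruction at the program counter
        pc += 1
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            break

    print(memory[8])  # 42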
#39:The Osborne 1 is the first commercially successful portable computer, released on April 3, 1981 by Osborne Computer Corporation.[1] It weighs 24.5 lb (11.1 kg), cost US$1,795, and runs the CP/M 2.2 operating system. It is powered from a wall socket, as it has no on-board battery, but it is still classed as a portable device since it can be hand-carried when the keyboard is closed.
The computer shipped with a large bundle of software that was almost equivalent in value to the machine itself, a practice adopted by other CP/M computer vendors. Competitors quickly appeared, such as the Kaypro II.
Architecture
The 64 KB main memory is made of four rows of eight type 4116 dynamic RAM chips, each with 16,384 bits. Memory is shared, with 60 KB available for software and 4 KB reserved for video memory. No parity is provided and no provision for memory expansion exists on the motherboard. The boot program loader and significant parts of the BIOS are stored in a 4 kilobyte EPROM, which is bank-switched. A second EPROM is used as a fixed character generator, providing 96 upper and lower case ASCII characters and 32 graphic symbols; the character generator is not accessible to the CPU. The eighth bit of an ASCII character is used to select underlined characters. Serial communications are through a memory-mapped Motorola MC6850 Asynchronous Communications Interface Adapter (ACIA); a jumper on the motherboard allows the MC6850 to be set for either 300 and 1200 baud or 600 and 2400 baud communications, but other bit rates are not available.[17]
The floppy disk drives are interfaced through a Fujitsu 8877 disk controller integrated circuit, a second-source of the Western Digital 1793. The parallel port is connected through a memory-mapped Motorola MC6821 Peripheral Interface Adapter (PIA) which allows the port to be fully bidirectional; the Osborne manuals state that the port implemented the IEEE-488 interface bus but this is rarely used. The parallel port uses a card-edge connector etched on the main board, exposed through a hole in the case; any IEEE-488 or printer cable has to be modified for the Osborne.[17]
The diskette drives installed in the Osborne 1 are Siemens FDD 100-5s (MPI drives were also used later), which were actually manufactured in California by GSI, a drive maker that the German firm had purchased. They use a custom controller board that Osborne produced, which among other things has a single connector for the power and data lines. The FDD 100-5 was trouble-prone, as Osborne's quality control was lacking, and many of the controller boards have soldering defects. In addition, the drive cable is not keyed and can easily be installed upside-down, which shorts out components in the computer. There are also problems with the drive head going past track 0 and getting stuck in place. The combined power/data cable also has a tendency to overheat.[17]
The video system uses part of the main memory and TTL logic to provide video and sync signals to an internal 5-inch monochrome monitor. The same signals are provided on a card-edge connector for an external monitor; both internal and external monitors display the same video format.[17] The internal monitor's viewing area is specified as 3.55 inches horizontal and 2.63 inches vertical, giving a diagonal of only about 4.42 inches. Osborne also offered a 12-inch GM-12 external monitor.
The processor, memory, floppy controller, PIA, ACIA and EPROMs are interconnected with standard TTL devices.[17]
The Osborne 1 has bank-switched memory. Unusually for a system based on the Z80, all I/O is memory-mapped, and the Z80 I/O instructions are used only to select memory banks. Bank 1 is "normal" mode, where user programs run; it includes a 4 KB area at the top of the address space which is video memory. Bank 2 is called "shadow": the first 4 KB of this address space is the ROM, and 4 KB is reserved for the on-board I/O ports (the disk controller, the keyboard, the parallel-port PIA, the serial-port ACIA, and a second PIA chip used for the video system). All memory above the first 16 KB is the same memory as in Bank 1. This is the mode of the system at power-up, because this is where the boot ROM is mapped. Bank 3 has only 4 KB by 1 bit of memory, used solely to hold the "dim" attribute of the video system.
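As a rough model of that scheme (our simplified reconstruction, not Osborne firmware; it collapses the I/O region and Bank 3 for brevity), the Python sketch below shows how selecting a bank changes what a read at a low address returns:

    # Simplified model of the Osborne 1's bank switching. In hardware the
    # bank select is done with Z80 OUT instructions; here it is a method call.
    class BankedMemory:
        def __init__(self):
            self.ram = bytearray(64 * 1024)  # Bank 1: normal 64 KB user RAM
            self.rom = bytearray(4 * 1024)   # low 4 KB of Bank 2: boot ROM
            self.bank = 2                    # power-up starts in "shadow" mode

        def select(self, bank):
            self.bank = bank                 # stands in for a Z80 I/O write

        def read(self, addr):
            if self.bank == 2 and addr < 0x1000:
                return self.rom[addr]        # shadow mode: ROM answers low reads
            return self.ram[addr]            # otherwise ordinary RAM answers

    mem = BankedMemory()
    mem.rom[0] = 0xC3                        # pretend the ROM starts with a jump
    print(hex(mem.read(0x0000)))             # 0xc3 -> boot ROM is visible
    mem.select(1)                            # switch to "normal" mode
    print(hex(mem.read(0x0000)))             # 0x0  -> RAM is visible instead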
Operating system[edit]
The computer runs the CP/M 2.2 operating system.[4] A complete listing of the ROM BIOS is in the Osborne technical manual.[17]
Games
ADVENT (Colossal Cave Adventure) running on an Osborne Computer c. 1982
Since, like most CP/M systems, the Osborne's display does not support bit-mapped graphics, games are typically character-based, such as Hamurabi, or text adventures (the 1982 game Deadline, for example, was packaged in a dossier-type folder and came on two 5¼-inch diskettes). Compiled and MBASIC-interpreted versions of Colossal Cave Adventure are available for the Osborne. Some type-in games use the Osborne's character-mode graphics.[20]
#40:The Eckert–Mauchly Computer Corporation (EMCC) (March 1946 – 1950) was founded by J. Presper Eckert and John Mauchly. It was incorporated on December 22, 1947. After building the ENIAC at the University of Pennsylvania, Eckert and Mauchly formed EMCC to build new computer designs for commercial and military applications. The company was initially called the Electronic Control Company, changing its name to Eckert–Mauchly Computer Corporation when it was incorporated. In 1950, the company was sold to Remington Rand, which later merged with Sperry Corporation to become Sperry Rand, and survives today as Unisys.
#42:Digital data, in information theory and information systems, is information represented as a string of discrete symbols, each of which can take on one of only a finite number of values from some alphabet, such as letters or digits. An example is a text document, which consists of a string of alphanumeric characters. The most common form of digital data in modern information systems is binary data, which is represented by a string of binary digits (bits), each of which can have one of two values, either 0 or 1.
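A tiny Python illustration of that point, using the ASCII encoding (one possible alphabet-to-bits mapping) to show a text string as the bit string it is stored as:

    # Every character maps to a fixed-width group of bits; here, 8-bit ASCII.
    text = "Hi"
    bits = " ".join(format(byte, "08b") for byte in text.encode("ascii"))
    print(bits)  # 01001000 01101001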
Digital data can be contrasted with analog data, which is represented by a value from a continuous range of real numbers. Analog data is transmitted by an analog signal, which not only takes on continuous values, but can vary continuously with time, a continuous real-valued function of time. An example is the air pressure variation in a sound wave.
The word digital comes from the same source as the words digit and digitus (the Latin word for finger), as fingers are often used for counting. The mathematician George Stibitz of Bell Telephone Laboratories used the word digital in 1942 in reference to the fast electric pulses emitted by a device designed to aim and fire anti-aircraft guns.[1] The term is most commonly used in computing and electronics, especially where real-world information is converted to binary numeric form, as in digital audio and digital photography.
#44:There are five generations of computers:
• First generation – 1946–1958
• Second generation – 1959–1964
• Third generation – 1965–1970
• Fourth generation – 1971–today
• Fifth generation – today and into the future