The word “computer” was first used in 1613 to describe a human who performed calculations. The meaning remained essentially unchanged until the end of the 19th century, when machines whose primary task was to calculate began to appear.
There is no easy answer to the question, “When was the first computer invented?” because there are many different classifications of computers. The first mechanical computing machine was designed by Charles Babbage in 1822. However, it bears little resemblance to what most people would consider a computer today.
Much of the rapid progress of computer development, and the contributions of the many scientists behind it, never made it into our textbooks.
To fill in those gaps, we have gathered a few notable inventions in computer science, from the first mechanical machines to the microchip era, presenting a panoramic view of the monumental inventions that have shaped the digital landscape.
27. First Computer: “Difference Engine” – 1821
Charles Babbage (often called the father of the computer) started working on the Difference Engine based on the principle of finite differences. The engine used only arithmetic addition, removing the need for multiplication and division, which are more difficult to implement mechanically. It was designed strictly to calculate and tabulate polynomial functions.
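To see the principle in action, here is a small, hedged modern sketch (not anything Babbage wrote): once the initial value and the differences of a polynomial are seeded, every further table entry can be produced by repeated addition alone, which is exactly what the Difference Engine mechanized.

```python
# Hypothetical sketch: tabulating p(x) = 2x^2 + 3x + 5 by finite differences.
# Only additions are performed once the starting value and differences are known.
def tabulate(initial_value, differences, steps):
    value = initial_value
    diffs = list(differences)          # [first difference, second difference, ...]
    for _ in range(steps):
        yield value
        value += diffs[0]              # next table entry by addition
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]   # roll the differences forward, again by addition

# p(0) = 5, p(1) = 10, p(2) = 19  ->  first difference 5, second difference 4
print(list(tabulate(5, [5, 4], 6)))    # [5, 10, 19, 32, 49, 70]
```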
The project was commissioned by the British government. However, due to its high production cost, funding was halted midway, and the machine was never completed.
26. First General Purpose Computer: Analytical Engine – 1834
Charles Babbage went on to develop a more ambitious machine, the first general-purpose programmable computing engine, later called the Analytical Engine. It had many of the essential features found in the modern digital computer.
The machine was programmable using punched cards; the engine had a “Store” (where numbers and intermediate results could be held) and a separate “Mill” (where the arithmetic operations were performed).
The engine could also perform direct multiplication and division, parallel processing, microprogramming, iteration, latching, conditional branching, and pulse shaping, though Babbage never used these terms. Unfortunately, like the Difference Engine, this machine was never completed.
25. First Computer Program – 1843
The world’s first computer programmer, Ada Lovelace, translated a paper on Babbage’s Analytical Engine by the Italian mathematician Luigi Menabrea, publishing her translation with extensive notes in 1843. She understood how to make the machine do the things computers do, and in her notes she described the sequence of operations that would program it to calculate Bernoulli numbers.
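As a rough modern illustration of what her program computed (this is not Lovelace’s actual sequence of operations, only the standard recurrence for the same numbers):

```python
# Hedged sketch: Bernoulli numbers from the recurrence
#   sum_{j=0}^{m} C(m+1, j) * B_j = 0   for m >= 1,  with B_0 = 1.
from fractions import Fraction
from math import comb

def bernoulli(n):
    B = [Fraction(1)]                                  # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))                         # solve the recurrence for B_m
    return B

print([str(b) for b in bernoulli(6)])   # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42']
```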
Ada was a visionary mathematician – she knew numbers could be used to represent more than just quantities. She predicted that machines like the Analytical Engine could be used to produce graphics, compose music, and advance science.
24. Tabulating Machine – 1884
In the late 19th century, Herman Hollerith worked on his idea of a machine that could punch and count cards. He developed a machine that could record statistics by electrically reading and sorting punched cards.
Hollerith founded the Tabulating Machine Company in New York in 1896, which later grew into IBM. The machine was a success in the US and drew even more attention in Europe, where it was widely adopted for various statistical purposes.
23. First Analog Computer: Differential Analyzer – 1930
The first modern analog computer was developed by MIT engineer Vannevar Bush. In essence, it was an analog calculator that could solve certain classes of differential equations, which are common in physics and engineering applications.
The machine generated practical solutions, although they were only approximate.
In this machine, shaft rotation represented variables, and multiplication and addition were done by feeding values into gears. A knife-edge wheel rotating at varying radii on a circular table performed the integration. To solve a differential equation, several such mechanical integrators were interconnected.
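A rough digital analogue of those interlinked integrators (a sketch only; the real machine integrated continuously with wheel-and-disc mechanisms) chains two numerical integrators to solve y'' = -y:

```python
# Hedged sketch: two "integrators" in series solve y'' = -y step by step,
# loosely mirroring how the Differential Analyzer interconnected its
# mechanical integrators to solve a differential equation.
import math

dt = 0.001
y, dy = 1.0, 0.0                  # initial conditions: y(0) = 1, y'(0) = 0
for _ in range(int(2 * math.pi / dt)):
    ddy = -y                      # the equation being solved
    dy += ddy * dt                # first integrator: y'' -> y'
    y += dy * dt                  # second integrator: y' -> y
print(y)                          # close to 1.0, since y(t) = cos(t) after one full period
```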
22. First Working Programmable Computer: Z3 – 1941
Konrad Zuse, inventor and computer pioneer, designed the first of his series of Z computers in 1936. The Z1 was completely mechanical and worked for only a few minutes at a time, operating on binary (Boolean) logic implemented with mechanical switching elements. Improved technology over the following years led to the Z2 and, eventually, the Z3.
The Z3 was built with around 2,600 relays and used a 22-bit word length. Program code was stored on punched film; thus, no rewiring was necessary to change programs.
Z3 was a secret project of the German government, put to use by the German Aircraft Research Institute in order to perform statistical analyses of wing flutter. The original machine was destroyed in 1943 during an Allied bombardment of Berlin.
21. First Electronic Computer: ABC – 1942
Atanasoff-Berry Computer (ABC) was designed and built by John Vincent Atanasoff and his assistant, Clifford E. Berry. It was the first machine to use capacitors for storage (as in current RAM) and was capable of performing 30 simultaneous operations.
The ABC was designed to solve systems of linear equations with up to 29 unknowns. The computer was not programmable; however, it pioneered important elements of modern computing, including binary arithmetic and electronic switching elements.
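For a sense of the underlying technique, here is a compact modern sketch of solving a linear system by elimination (standard Gaussian elimination; the ABC’s drum-based procedure eliminated variables between pairs of equations rather than working exactly this way):

```python
# Hedged sketch: Gaussian elimination with partial pivoting and back-substitution.
def solve(A, b):
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]      # build augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]           # partial pivoting
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]         # eliminate variable `col` from row r
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                    # back-substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 2x + y = 5,  x + 3y = 10  ->  x = 1, y = 3
print(solve([[2, 1], [1, 3]], [5, 10]))               # [1.0, 3.0]
```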
20. First Programmable Electronic Computer: Colossus – 1943
Colossus was the world’s first electronic, programmable computer created by Tommy Flowers. It was used by the British to read secret German messages (encrypted by the Lorenz cipher) during World War II.
Colossus was not meant to decrypt entire messages on its own. Instead, it analyzed the statistical frequencies of characters in intercepted traffic to find probable wheel settings for the Lorenz machines; once those settings were known, the messages could be deciphered.
These computers remained highly secret until the 1970s. After the war, all the Colossus machines were dismantled and their designs destroyed, so the developers and engineers who built them went unrecognized for decades. In 2007, a team of researchers completed a working reconstruction of Colossus.
19. First Computer Network – 1940
Between 1940 and 1946, George Stibitz and his team developed a series of machines that used telephone technology and electromechanical relays. These machines could serve more than one user. They soon became obsolete because they were based on slow mechanical relays rather than electronic switches.
Today, the dominant basis of data communication is packet switching: the ARPANET (Advanced Research Projects Agency Network) was an early packet-switching network and the first network to implement the TCP/IP protocol suite (adopted network-wide in 1983). Both became the technical foundation of the Internet.
18. First Trackball – 1946/1952
A related pointing device, the trackball, was invented in 1946 by Ralph Benjamin for a fire-control radar plotting system named the Comprehensive Display System (CDS). Benjamin’s earlier project used analog computers to predict the future position of target aircraft based on input points entered by a user with a joystick.
He realized that a more efficient input device was needed, so he invented what his team called a roller ball. The new device used a ball to control the X–Y coordinates of a cursor on the screen. It was patented in 1947 but kept a military secret.
Another early trackball was built in 1952 for Canada’s DATAR project by British electrical engineer Kenyon Taylor and his coworkers Fred Longstaff and Tom Cranston. It was similar in concept to Benjamin’s device.
The trackball used four disks to pick up motion, two each for the X and Y coordinates. A digital computer calculated the tracks and transferred the resulting data to other ships in a task force using pulse-code modulation radio signals. The design was not patented, as it was a secret military project as well.
17. First General Purpose Programmable Electronic Computer: ENIAC – 1946
Electronic Numerical Integrator And Computer (ENIAC) was a Turing-complete digital machine that could solve a wide range of numerical problems through reprogramming. It was primarily used to calculate artillery firing tables and helped with computations for the feasibility of the thermonuclear weapon.
By the end of its operation in 1955, ENIAC contained 7,200 crystal diodes, 17,468 vacuum tubes, 10,000 capacitors, 70,000 resistors, and over 5 million hand-soldered joints. It was roughly 8 × 3 × 100 feet in size, weighed 30 tons, and consumed 150 kW of electricity.
It used card readers for input and a card punch for output. The computer was on the order of one thousand times faster than electromechanical machines.
16. First Complete High-Level Language: Plankalkül – 1948
The German computer scientist Konrad Zuse, creator of the first relay computer, started working on a high-level programming language in 1941, developing ideas about how his machines (such as the Z4) could be programmed in a more powerful way.
Plankalkül is a typed, high-level imperative programming language with a wide range of features: non-recursive functions, local variables, assignment operations, conditional statements, a WHILE construct for iteration, logical operations, fundamental data types, and more. It was not comprehensively published until a 1972 paper, and its first compiler was not built until 1998.
15. First Stored-Program Electronic Digital Computer: SSEM – 1948
The SSEM (Manchester Small-Scale Experimental Machine), nicknamed Baby, executed its first program on 21 June 1948. That program was written by Tom Kilburn, who built the machine together with his mentor, Frederic Williams. It was the first working machine to contain all the elements essential to a modern computer.
The SSEM featured a 32-bit word length, a single-address instruction format, a memory of 32 words, and a computing speed of around 1.2 milliseconds per instruction. Bits were stored as spots of charge on a CRT phosphor, written and read as 1 or 0 by an electron beam. All arithmetic operations except subtraction and negation were implemented in software.
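Because the hardware offered only subtraction and negation, even addition had to be synthesized in software. A tiny illustrative sketch of the trick (modern Python, not SSEM machine code):

```python
# Hedged sketch: building addition from the two operations the SSEM hardware
# provided, since a + b = a - (-b).
def negate(x):
    return -x                        # stands in for the hardware "negate"

def subtract(a, b):
    return a - b                     # stands in for the hardware "subtract"

def add(a, b):
    return subtract(a, negate(b))    # software addition: a - (-b)

print(add(7, 5))                     # 12
```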
14. First Assembler: Initial Orders – 1949
An assembler translates programs written in assembly language into the machine code that a computer can execute. The first assembler was designed for EDSAC (Electronic Delay Storage Automatic Calculator).
The Initial Orders, written by David Wheeler, consisted of 31 instructions hard-wired on uniselectors, a mechanical read-only memory. The second version occupied the full 41 words of read-only memory and included facilities for relocation (then called “coordination”) to ease the use of subroutines.
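To make the role of an assembler concrete, here is a toy, entirely hypothetical example (the mnemonics and opcodes are invented and have nothing to do with EDSAC’s order code): it resolves labels in one pass and encodes mnemonics and operands in a second.

```python
# Hedged toy assembler: pass 1 records label addresses, pass 2 translates
# "MNEMONIC operand" lines into numeric machine words (opcode in the high bits).
OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3, "JUMP": 0x4, "HALT": 0xF}

def assemble(source):
    lines = [ln.split(";")[0].strip() for ln in source.splitlines()]
    lines = [ln for ln in lines if ln]                # drop comments and blank lines
    labels, instructions = {}, []
    for ln in lines:                                  # pass 1: collect label addresses
        if ln.endswith(":"):
            labels[ln[:-1]] = len(instructions)
        else:
            instructions.append(ln)
    words = []
    for ln in instructions:                           # pass 2: encode each instruction
        parts = ln.split()
        operand = 0
        if len(parts) > 1:
            operand = labels[parts[1]] if parts[1] in labels else int(parts[1])
        words.append((OPCODES[parts[0]] << 8) | operand)
    return words

demo = """
LOAD 10
loop:
ADD 11
JUMP loop
HALT
"""
print([hex(w) for w in assemble(demo)])   # ['0x10a', '0x20b', '0x401', '0xf00']
```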
13. First Personal Computer: Simon – 1950
Edmund Berkeley’s Simon was a relay-based computer designed for the educational purpose of demonstrating the concept of a digital computer. Users entered data via punched paper tape or five keys on the front panel, and programs ran from standard paper tape. The registers and ALU held only 2 bits each, which is why the machine could not perform any significant practical computation.
Besides serving for data entry, the punched tape acted as memory storage. Instructions were executed in sequence as the machine read them from the tape. Simon could perform four operations: addition, negation, greater-than comparison, and selection. Output was provided by five lamps.
12. First Real-Time Graphics Display Computer: AN/FSQ-7 – 1951
The AN/FSQ-7, developed by IBM, was by far the largest computer ever built. It consisted of two computers, based on the Whirlwind II design, installed in a four-story building.
It was a command and control system used in the air defense network. Using the ATABE (Automatic Target and Battery Evaluation) algorithm, it calculated one or more predicted interception points for assigning aircraft or CIM-10 Bomarc missiles to intercept an intruder.
With over 60,000 vacuum tubes and a power draw of 3,000 kilowatts, the machine executed 74,000 instructions per second. Each machine supported more than 100 users. One computer was kept operating while the other stood by on hot standby, which resulted in high uptime (about 99 percent).
11. First Compiler for Electronic Computer: A-0 System – 1951
A compiler is a special program that converts high-level language into machine code. Grace Hopper wrote the Arithmetic Language version 0 (the A-0 system) for the UNIVAC I; it converted a sequence of subroutine codes and arguments into machine code.
The subroutines were identified by a numeric code, and the arguments were placed directly after each subroutine code. A-0 turned these specifications into machine language that could be fed into the computer to execute the program.
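A loose modern sketch of that idea (the codes and subroutines here are hypothetical, not Hopper’s actual A-0 catalogue): numeric codes select prebuilt subroutines, their arguments follow each code, and results can be chained from one call to the next.

```python
# Hedged toy in the spirit of A-0: each numeric code picks a prebuilt
# subroutine, and the values following the code are its arguments.
SUBROUTINES = {
    1: lambda a, b: a + b,      # code 1: add
    2: lambda a, b: a * b,      # code 2: multiply
    3: lambda a: print(a),      # code 3: print
}

def run(spec):
    """spec is a flat list: code, args..., code, args..., ..."""
    last, i = None, 0
    while i < len(spec):
        func = SUBROUTINES[spec[i]]
        argc = func.__code__.co_argcount
        args = [last if a is None else a for a in spec[i + 1 : i + 1 + argc]]
        last = func(*args)               # None in the spec means "previous result"
        i += 1 + argc
    return last

run([1, 2, 3,  2, None, 10,  3, None])   # computes (2 + 3) * 10 and prints 50
```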
10. First Open Source Software: A-2 System – 1953
Grace Murray Hopper at the UNIVAC keyboard
The A-0 system later evolved into A-2, which was developed at the UNIVAC division of Remington Rand, released to customers by the end of 1953, and subsequently extended as ARITH-MATIC. Users were provided with the source code for A-2 and invited to send their enhancements back to UNIVAC.
9. First Autocode: Glennie’s Autocode – 1952
In the 1960s, the term “autocode” was used generically to refer to any high-level programming language that used a compiler. Alick Glennie developed the first Autocode for the Mark 1 computer at the University of Manchester; it is considered the first compiled programming language.
Glennie’s primary goal was to make the abstruse code of the Mark 1 machine comprehensible. Although the resulting language was much more organized and clearer than the machine code, it was still very much machine-dependent.
The second Autocode for the Mark 1 was developed by R.A. Brooker in 1955. Unlike the first one, it was almost machine-independent and had floating-point arithmetic. However, it allowed only one operation per line, and it had no way to define user subroutines.
8. First Popular High-Level Language: FORTRAN – 1957
FORTRAN code on a punched card
FORTRAN (FORmula TRANslator) was developed by a team led by John Backus at IBM. It dominated the programming field from the mid to late 20th century. For over five decades, it has been widely used in scientific and engineering fields, including computational fluid dynamics, finite element analysis, computational chemistry, and computational physics.
The goal during the design of Fortran was to create a language that is easy to learn, machine-independent, versatile for various applications, and enables the expression of complex mathematical equations in a way similar to regular algebraic notation.
Since it was easier to code in, programmers could write programs about five times faster than before, while execution efficiency was only around 20 percent lower.
7. First Computer Mouse – 1964
The computer mouse was invented by Douglas Engelbart with the assistance of Bill English and was patented on 17th November 1970. It was just a tiny piece of a much larger project aimed at augmenting human intellect.
Engelbart wanted to develop a system that could interact with the information displayed on the screen. More specifically, he wanted to use some sort of hardware that could move a cursor on the screen. There were several devices available at that time, including lightpens and joysticks. However, he was looking for something more efficient.
The patent referred to the device as an “X-Y Position Indicator for a Display System.” The mouse became widely associated with graphical user interfaces after it was used with the Xerox Alto computer system in 1973.
It is quite strange that the inventor of one of the most popular computer interface devices never received any royalties for his invention. He obtained the patent as an assignor of SRI, and SRI licensed it to Apple for around $40,000. Engelbart himself received nothing.
6. First Touchscreen – 1965
E. A. Johnson described his work on the capacitive touchscreen (with no pressure sensitivity) in an article titled “Touch display – a novel input/output device for computers.” That article included a diagram describing a touchscreen concept that is still in use today.
A couple of years later, Johnson elaborated on the concept with photographs and further diagrams in “Touch Displays: A Programmed Man-Machine Interface,” published in the journal Ergonomics in 1967. The idea was adopted by air traffic controllers in the United Kingdom and remained in use until the 1990s.
The first resistive touchscreen was developed by the American inventor George Samuel Hurst, who received US patent 3,911,215 for it in 1975.
5. First Commercial Personal Computer: Programma 101 – 1965
Produced by the Italian company Olivetti, the Programma 101 could perform the four basic arithmetic functions (addition, subtraction, multiplication, and division) and calculate absolute values, square roots, and fractional parts. It featured memory registers, 16 conditional jump instructions, an alphanumeric programming language, and internal memory.
It also featured magnetic cards and stored routines, which could be used without programming knowledge. Unlike earlier computers, which were very expensive and hard to use, the P101 was economical and relatively easy to use.
Priced at $3,200, it sold over 44,000 units. It could print programs and results onto a roll of paper tape similar to cash-register or calculator paper.
4. First Object-Oriented Programming Language: Simula – 1967
Simula was developed by Ole-Johan Dahl and Kristen Nygaard at the Norwegian Computing Center. It retains the spirit of the ALGOL 60 programming language. Simula is the name of two simulation languages – Simula I and Simula 67.
Simula 67 introduced objects, classes, subclasses, inheritance, virtual procedures, coroutines, and garbage collection. It was used in a wide range of applications, including process modeling, algorithms, VLSI design, and computer graphics. The concepts of Simula 67 were later reimplemented in C++, C#, Object Pascal, Java, and many other languages.
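The concepts Simula 67 introduced are easy to recognize in any modern object-oriented language. A minimal sketch in Python (not Simula syntax) showing a class, a subclass, inheritance, and a virtual (overridable) procedure:

```python
# Hedged illustration of Simula 67's core ideas, expressed in Python.
class Vehicle:
    def __init__(self, speed):
        self.speed = speed

    def describe(self):                 # "virtual procedure": subclasses may override it
        return f"a vehicle moving at {self.speed} km/h"

class Car(Vehicle):                     # subclass inheriting from Vehicle
    def __init__(self, speed, wheels=4):
        super().__init__(speed)
        self.wheels = wheels

    def describe(self):                 # override; late binding selects this version
        return f"a {self.wheels}-wheeled car moving at {self.speed} km/h"

for obj in (Vehicle(5), Car(120)):
    print(obj.describe())               # each object dispatches to its own implementation
```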
3. First Public Packet-Switched Computer Network: ARPANET – 1969
ARPANET access points
ARPANET (Advanced Research Projects Agency Network) laid the foundation for what would later become the modern Internet. It was the pioneering computer network developed by the US Department of Defense’s Advanced Research Projects Agency.
ARPANET was initially launched in 1969, connecting four major research institutions: the University of California, Santa Barbara, the University of California, Los Angeles, the University of Utah, and the Stanford Research Institute. The first message sent over the network was intended to be the word “LOGIN,” but the system crashed after only the first two letters had been transmitted.
ARPANET gradually expanded its network to include more research institutions and military installations. The number of connected hosts grew steadily, creating a decentralized and resilient network architecture.
2. First Microprocessor: Intel 4004 – 1971
Design work on the world’s first microprocessor started in April 1970 and was completed in January 1971 under the leadership of Federico Faggin. The chip was smaller than a human thumbnail.
It is a 4-bit CPU with a clock speed of 740 kHz, built from 2,300 transistors on a 10-micron process, and capable of carrying out 60,000 operations per second. In terms of computing performance, this tiny chip was roughly as powerful as ENIAC.
The Intel 4004 uses separate storage for programs and data (a Harvard architecture); however, unlike most Harvard designs, it uses a single multiplexed 4-bit bus for transferring 12-bit addresses, 8-bit instructions, and 4-bit data words. It can directly address 5,120 bits of RAM and 32,768 bits of ROM and supports a 3-level-deep internal subroutine stack.
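Put in more familiar units (simple arithmetic on the bit counts above, not additional specification data):

```python
# Converting the 4004's directly addressable capacities from bits to bytes.
ram_bits, rom_bits = 5_120, 32_768
print(ram_bits // 8, "bytes of data RAM")      # 640 bytes
print(rom_bits // 8, "bytes of program ROM")   # 4096 bytes (4 KB)
```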
The first commercial product to use the microprocessor was the Busicom 141-PF calculator.
1. C Programming Language – 1972
Developed by Dennis Ritchie at Bell Labs, the C programming language has played a crucial role in the development of modern computing. It was written to be easily ported across different computer architectures (machine-independent).
This portability has contributed to the widespread adoption of C for developing system-level software and cross-platform applications. Unlike most other languages at the time, C was designed with a focus on simplicity and efficiency so programmers could express concepts clearly and concisely.
The simplicity of the C language makes it ideal for low-level programming, system development, and applications where performance is critical.
In 1983, the American National Standards Institute standardized the C language to ensure consistency across various implementations of C compilers, foster a uniform programming environment, and ease code portability.
C played an important role in developing operating systems (including Unix, which was mostly written in C). It also profoundly influenced several programming languages, including C++, Objective-C, C#, and Java.
Despite being over five decades old, C is still a fundamental and relevant programming language.
More to Know
What’s the significance of studying the history of computer science inventions?
It provides a multifaceted view through which we can understand the evolution of computer technology, its societal impact, and the continuous pursuit of innovation. There are many reasons to delve into the historical landscape of computer science inventions:
- It can help us identify recurring patterns and trends that have shaped the field over time
- It shows how different generations approached and solved complex problems
- It is replete with ethical dilemmas, successes, and failures.
- Insights from past successes and failures can serve as a guide for shaping the direction of future innovations
What are the main types of computers?
Based on size and processing power, computers are commonly categorized into four groups:
- Microcomputers include desktops, laptops, smartphones, gaming consoles, and programmable calculators
- Minicomputers are suitable for businesses and labs, and are embedded in high-end equipment like hospital CAT scanners.
- Mainframes can process massive amounts of data and respond to millions of users at a time. They are used in large corporations, banks, and government agencies.
- Supercomputers are used for carrying out resource-intensive tasks, such as weather forecasting, complex scientific computations, nuclear simulations, etc.
What’s the difference between artificial intelligence and machine learning?
AI is a broad branch of computer science concerned with developing smart machines that can perform tasks that require human intelligence. It aims to mimic human cognitive functions such as learning and problem-solving.
Machine learning is an application of artificial intelligence. It refers to algorithms and technologies that enable computers to detect patterns, make decisions, and improve those decisions through experience (using historical data without being explicitly programmed).
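As a minimal, toy illustration of “learning from historical data” (a sketch, not a production technique): a nearest-neighbour classifier makes a decision about a new input purely by consulting past examples, with no explicitly programmed rule.

```python
# Hedged toy example of machine learning: 1-nearest-neighbour classification.
# The "model" is just the stored history; predictions reuse past experience.
def nearest_neighbour(history, query):
    """history: list of ((features...), label); query: (features...)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(history, key=lambda item: distance(item[0], query))
    return label

# historical data: (hours studied, hours slept) -> exam outcome
history = [((1, 4), "fail"), ((2, 5), "fail"), ((6, 7), "pass"), ((8, 6), "pass")]
print(nearest_neighbour(history, (7, 6)))   # pass
print(nearest_neighbour(history, (1, 6)))   # fail
```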
What is brain-inspired computing?
Brain-inspired computing aims to process unstructured and noisy data with extreme energy efficiency. The goal is to enable machines to reproduce some of the coordination mechanisms and cognitive abilities of the human brain, and perhaps eventually to exceed human performance on certain tasks.
Various neuromorphic chips have already been developed to mimic neuro-biological architectures present in the nervous system. They use physical artificial neurons (made of silicon) to process data and perform calculations. Qualcomm’s Zeroth, IBM’s TrueNorth, and Stanford’s Neurogrid are some of the most popular examples of such chips.