It is anyone’s guess how far computer technology will advance, and as software applications have become vital to virtually every aspect of modern life, it is just as hard to predict how fully integrated into daily living technology will become. Recent decades have brought technological change at a pace unmatched at any other time in history, each wave of innovation building on the last. Speed itself is a driving motive: since the Industrial Revolution, humankind has sought ever greater efficiency in all realms of life, from production at the factory to cooking meals at home. Today the mediating force between technology and humans is software.
Through instructions written in programming languages and translated into machine code, software gives individuals access to the complicated interaction of input and output technology, memory, and processing; that is, individualized control over the hardware components. Electronic data management proved useful in its earliest commercial applications, such as employee payroll and airline reservations, but even the earliest pioneers in software development could never have predicted the personal computer and its full range of applications in the home. Software, though ubiquitous now, emerged only gradually in the computer industry, and it took the contributions of a number of brilliant minds for it to evolve into the products and services we take for granted today.
Mechanical methods of computation were forever changed by the
advent of electronics, and electronic computation was forever changed by the
versatility that software provided. The early problem in electronic computation was distinguishing among distinct numerical quantities, a problem that led to the development of an electronic pulse technique built on the simple distinction between off and on, 0 and 1: binary code (Glass 1998). Before programs began
to be written to make the most of electronic computing power, the computer
industry was dominated by engineers developing hardware. The history of the
computer industry extends at least as far back as Edwin Howard Armstrong, who
in the early twentieth century improved radio transmission with a receiver
called the “three-electrode valve (or triode)…[that] was to be the seed of
modern electronics, computers and the Internet” (Evans 2004).
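For readers unfamiliar with the idea, a brief illustrative sketch in Python (a modern aside, not part of the historical account) shows how ordinary numbers reduce to exactly those strings of off/on states:

    # Each digit is one of the two states an electronic pulse can take: off (0) or on (1).
    for number in (2, 7, 42):
        print(number, "->", format(number, "b"))
    # Prints: 2 -> 10, 7 -> 111, 42 -> 101010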
In the first couple of decades of the twentieth century,
three distinct entrepreneurial forces combined to form the behemoth electronics
and computer company that has operated for nearly a century: International
Business Machines or, simply, IBM. First, the critically important punch-card
tabulating machine company of Herman Hollerith was absorbed by Charles Flint’s
Computer-Tabulating-Recording Company (CTR) in 1911. Second, Thomas Watson, the
man who would eventually transform CTR into IBM, cut his teeth at John
Patterson’s National Cash Register Company. Patterson was an intense and
distinguished salesperson who used rallying slogans, an emphasis on sales and
service, and technological innovation to create “America’s first national sales
force” (Evans 2004). And third, Watson’s unique ability to unite the divided CTR, combined with his sales and marketing experience, helped him transform the company through a focus on engineering and technology, such that by the time of the New Deal, IBM was positioned to lead the nation in mechanical computation products.
From the 1930s to the 1950s, punch cards became the driving
force of corporate America as they were used in virtually every office
accounting machine. Software pioneer Raymond Houghton recalls the “punched
little rectangular holes in [decks of] cards” that were read by computing
machines without operating systems well into the 1960s (Glass 1998). Cards were
notated with programming languages such as IBM’s FORTRAN (FORmula TRANslation) and the Defense Department-sponsored COBOL (COmmon Business Oriented Language), and they relied on translation programs such as compilers and assemblers to make computing more efficient. Compilers “automated the process of selecting and reusing code to create programs,” while assemblers were “program[s] that translated between a more recognizable assembly notation and machine code” (Yost 2005). These slow, sometimes unreliable language-programming methods were, in essence, precursors to modern digital software programming techniques.
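The idea behind those early tools survives in every modern language. As a loose present-day analogy rather than the FORTRAN or COBOL toolchain described above, the short Python sketch below uses the standard dis module to show how a single human-readable statement is translated into lower-level instructions before it runs; the payroll-style function is invented purely for illustration.

    import dis

    def gross_pay(hours, rate):
        # The kind of calculation early payroll applications performed.
        return hours * rate

    # Print the lower-level instructions Python's own compiler generates
    # from the human-readable source line above.
    dis.dis(gross_pay)

Running the sketch prints a small table of instructions, a modern echo of the translation step that compilers and assemblers performed for punch-card programmers.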
With pressure from emerging competitors in the field, Thomas
Watson steered his company into electronics and hired new engineers en
masse to develop IBM’s own mainframe calculating machines. The early
massive mainframe computation machines were composed of tons of steel and
glass, hundreds of thousands of parts, and thousands of vacuum tubes and clunky
relays. Early electronic relays, such as those on the University of
Pennsylvania’s Pentagon-sponsored Electronic Numerical Integrator and Computer
(ENIAC), had to be manually plugged and unplugged, one by one, according to the specific computations being programmed. Watson’s son, Thomas Watson Jr., later observed ENIAC in action and was not convinced such immense, unreliable machines could ever be useful in business applications. Yet he would eventually lead IBM into a dominant position in the computer industry as mainframes evolved and software became an allied but independent field.
At the time, however, ENIAC’s designers got the attention of
Prudential Insurance (a major client of punch-card technology that was finding
it increasingly difficult to store millions of programming and archived cards)
and the U.S. Census Bureau when they proposed digital computation with magnetic
tape technology for storage. According to Paul E. Ceruzzi of the Smithsonian,
this step was the critical one leading to the development of programming “as something
both separate from and as important as hardware design” (Evans 2004). In the
1950s, computer hardware technology improved with the development of
magnetic-core memory, transistorized circuits instead of vacuum tubes, and
random-access storage. Software programs were needed to manage complex operations such as those of the growing airline industry, which handled the massive “flow of bookings, cancellations, seat assignments, availability of seats, [and] connecting flights” among other such “complications.” The need for computer software was becoming painfully evident (Evans 2004).
Birth of “Software” and the Interactive Minicomputer
According to Jeffrey R. Yost, the term “software” was created
in the late 1950s and was soon adopted throughout the industry (2005). Coined
by statistician John Tukey, the term became a catchall, user-friendly term for
the work of computer programmers who were using terminology ranging from
“computer program” to “code.” The American Heritage New Dictionary of
Cultural Literacy describes software as “[t]he programs and instructions
that run a computer, as opposed to the actual physical machinery and devices
that compose the hardware.” Meanwhile, The Free On-Line Dictionary of
Computing adds that software is divided into two primary types: system
software and program applications. System software includes general program
execution processes such as compilers and, most recognizably, the disk
operating system (DOS), which has evolved in form in IBM PC-style computers
within the last two decades from the ubiquitous Microsoft DOS prompt (MS-DOS)
to stylish Windows-based platforms from Windows 2000 to Windows Vista.
Similarly, Apple has seen countless new releases from the Apple DOS 3.1 of 1977
to the OS X series of recent years. Program applications include everything
else, from gaming to multimedia to scientific applications. Finally, software combines lines of source code written by humans with the work of compilers and assemblers, which translate that code into executable machine code (Dictionary.com).
At the Massachusetts Institute of Technology in 1955, a
project called the TX-0 was given to Ken Olsen. The project hoped to develop smaller research computers out of tiny, powerful transistor technology. MIT programmer Wesley Clark designed the TX-0 and, with Olsen’s methodical and persistent management, helped develop the foundation of Olsen’s dream: “a
reliable computer…accessible by one person, inexpensive and low powered,
but…compact, fast, and exciting” (Evans 2004). After MIT, Olsen and his
assistant Harlan Anderson obtained venture capital to found the Digital
Equipment Corporation (DEC) to develop interactive minicomputers to sell on the
open market. Computer models such as DEC’s Programmed Data Processor series
used a concept called “open architecture” to allow personalized software to run
everything from submarines to refineries to neon displays at Times Square. DEC
used the millions of dollars gained by going public in 1966 to enter into the
field of networking by developing “standardized technologies and communication
protocols.” IBM’s machines lacked the networking capacity other companies had begun to develop, and the company lost most of its market share. IBM re-entered the playing field in 1976 by developing minicomputers of its own, entering a field that so many had not believed in: the personal computer (Evans 2004).
Advanced Hardware for Complex Applications
As early as 1939, scientists such as William Shockley
theorized that diminutive semiconductors would replace vacuum tubes. Indeed,
all of modern electronics is based on Shockley’s ideas. Semiconductors can
handle electronic pulses at the rate of billions of times per second, instead
of the 10,000-times-per-second speed of the clunky and precarious vacuum tubes.
Fairchild Semiconductor entered the market to compete with Shockley
Semiconductor, and soon Fairchild became known for an innovation in
semiconductors that is now familiar around the world: the use of silicon.
Silicon, “a commonplace mineral that constitutes 90 percent
of the earth’s surface,” was first used by Fairchild for U.S. Air Force rockets
in transistors that needed to withstand intense heat. Additional elements were
combined with silicon on flattened transistors to create the first integrated
circuits capable of handling multiple devices and increasingly complex software
applications. “Silicon Valley” was born as innumerable high-tech companies
emerged on the scene, congregating at the southern end of California’s San Francisco Bay Area. Perhaps most notably, Integrated Electronics, or Intel, was founded, and new advances in memory chips and microprocessors allowed computers
to handle software light years more complex than the single mathematical
computations of the original mainframes (Evans 2004).
The Innovators of the Digital Age
Microsoft's MS-DOS was directly modeled on a now lesser-known
operating system called CP/M that was developed by University of Washington
graduate Gary Kildall’s Digital Research (DRI). Kildall’s work was essential to
Bill Gates and Microsoft (which was originally founded to sell the Beginner’s
All-Purpose Symbolic Instruction Code (BASIC) programming language interpreter
for hobbyists to write their own programs), but so were the early personal
computer developments of Apple and its subsequent graphical user interface
(GUI) that preceded Windows. It is Kildall’s work, nevertheless, that truly
shaped Microsoft and much about modern computing. Evans theorizes that had
Kildall had his way, the personal computer industry would have had access to
multitasking windows-style platforms much sooner and the entire industry would
be much more advanced today. Still, Kildall is credited with the ideas that
were “the genesis of the whole third-party software industry” (2004).
Gary Kildall’s style of programming helped drive the
transition from mechanical computing into digital computing. Kildall developed
open language programming years before IBM’s PC, and a number of months before
Apple. In short, before microcomputers even existed, Kildall authored a
programming language “for a microcomputer operating system and the first floppy
disk operating system” (Evans 2004). Intel’s microprocessors were already
running everything from microwaves to watches, but Kildall imagined them in
home computers running software that would drive networks and wouldn’t be
bogged down by hardware compatibility issues. His Programming Language for
Microcomputers (PL/M) evolved into the Control Program for Microcomputers
(CP/M), which contained the first PC prompt, wherein Kildall could open and store files in directories, work that is now done seemingly automatically as users click and drag files through virtual space on the computer desktop.
Next, Kildall’s basic input/output system (BIOS) could be easily
changed by programmers to adapt to their specific hardware. Kildall’s software
advancements were easily adapted into clone systems, though Kildall had largely
retained licensing rights to his software through encoded copyright and
encryption techniques. One operating system, however, Tim Paterson’s Quick ’n’ Dirty Operating System (QDOS), was developed for Rod Brock’s Seattle Computer Products. QDOS, according to Evans, “was yet another one of
the rip-offs of the CP/M design” that would not have necessarily mattered had
IBM’s business arrangements not aligned with those of Bill Gates. Spurred by
the success of Steve Jobs and Steve Wozniak’s Apple products from the late
1970s and 1980s, IBM entered the field of microcomputers. Bill Gates seized the
opportunity of Kildall’s delayed CP/M-86 (being designed for the faster Intel
chip IBM had decided upon) and purchased Paterson’s operating system in order
to strike a deal (2004).
The trouble was, Kildall had already made arrangements with
IBM, and he thought he had successfully negotiated a share of the market for CP/M
upon the release of IBM’s new personal computer in 1981. But the final price
point of CP/M was six times that of Microsoft’s PC-DOS, effectively flushing
CP/M out of the market. Kildall had been betrayed. Ironically, only Kildall
knew the limitations of CP/M and PC-DOS. His plans for multitasking operating software would have revolutionized the industry at that time, but the IBM-Microsoft partnership dominated the American market and evolved at its own pace. Meanwhile, Kildall kept his operation afloat with his European
offices, which embraced the multitasking capacities of his MP/M OS.
While Kildall went on to innovate in areas from CD-ROMs to
computer networking, DRI combined the graphic display technology of Atari with
the expertise of former Microsoft programmer Kay Nishi and cloned the
single-tasking MS-DOS with their DR-DOS. Upon entering the market, DR-DOS not
only drove down Microsoft’s price point, but also fixed a number of MS-DOS
bugs. This move helped lead to Novell’s acquisition of DRI in 1991 for $120
million. Gates missed the opportunity to acquire DRI for $10 million a few
years earlier but, oddly enough, his investment in the ideas of Steve Jobs in
1996 helped Apple enter successful new fields of digital innovation such as the
iPod and music downloading software, a field that, of course, Microsoft soon
entered. Perhaps most importantly, Microsoft proved the power of owning the
operating system. After years of working with IBM as the provider of the
software for its hardware, Microsoft surpassed IBM (Evans 2004).
From Personal to Global Network: The World Wide Web
Early computer networking technology was commercialized
especially for use in corporations or other large organizations. Local area
networks (LANs) allowed both the internal exchange of information and the
sharing of peripheral devices such as expensive printers. Educational
institutions were especially likely to take advantage of personal computing, but
through the 1980s individuals were snapping up computers for home use at an
ever-increasing rate. The early computer hobbyists who facilitated the
expansion of the industry into one that made software applications both
desirable and accessible to broader audiences now saw these standalone tools
become centers for information and communication. Computer software has become integral to virtually every industry, from drafting software for architects to editing software for filmmakers and everything in between. Data and word
processing aside, software has quickly evolved from videodiscs and CD-ROMs
containing entire encyclopedias to Internet browsing software allowing access
to entire networks of libraries.
LANs evolved into WANs (wide area networks), and as linked local area networks multiplied, particularly among universities, the seeds of the Internet sprouted (Yost 2005). With Internet Service
Providers (ISPs), the computer became larger than the little box in which it
was contained. Browsing software such as Netscape’s Navigator (written by Marc
Andreessen and Eric Bina in 1993, and now owned by AOL), Microsoft’s Internet
Explorer, and Mozilla’s Firefox became the means for connecting to a digitized
multimedia world now largely powered by Google, which daily
guides hundreds of millions of users through billions of pages on the World
Wide Web.
The World Wide Web began with Sir Tim Berners-Lee’s
fusion of the U.S. Defense Department’s Internet, which linked research
centers, and hypertext, which allows quick navigation among documents. The now
ubiquitous tools of the Internet devised by Berners-Lee include HTML (HyperText
Markup Language, the formatting language of Web pages), a communication protocol (the HyperText Transfer Protocol, or HTTP), and individually accessible Web addresses (called Uniform Resource Locators, or URLs). Most importantly,
Berners-Lee made “the Web a decentralized network” that could be accessed and
contributed to by anyone with a connection (Evans 2004). Software, once an
esoteric sparkle in the hardware engineer’s eye, has been democratized, and its
applications in the modern, digital world seem infinite.
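As a rough illustration of how Berners-Lee’s three inventions fit together, the short Python sketch below takes a hypothetical Web address apart: the scheme names the protocol (HTTP), the host names the server, and the path points to an HTML document. The address and file name are invented for the example.

    from urllib.parse import urlsplit

    # A made-up URL, used only to label its parts.
    url = "http://www.example.com/history/software.html"

    parts = urlsplit(url)
    print(parts.scheme)  # 'http' -> the transfer protocol used to request the page
    print(parts.netloc)  # 'www.example.com' -> the server hosting the document
    print(parts.path)    # '/history/software.html' -> the HTML document itself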