Wednesday, November 18, 2009

Computer Languages - First-generation Language, Second-generation Language, Third-generation Language, Fourth-generation Language

A computer language is the means by which instructions and data are transmitted to computers. Put another way, computer languages are the interface between a computer and a human being. There are various computer languages, each with differing complexities. For example, the information that is understandable to a computer is expressed as zeros and ones (i.e., binary language). However, binary language is incomprehensible to humans. Computer scientists find it far more efficient to communicate with computers in a higher level language.


Block-structured language

Block-structured language grew out of research leading to the development of structured programming. Structured programming is based on the idea that any computer program can be written using only three arrangements of statements, called sequence, selection, and iteration. In a sequential arrangement, each programming instruction (statement) is executed one after the other. This order is vital: the execution of the second statement depends on the prior execution of the first. The selection arrangement offers more flexibility, with choices typically made using an IF...THEN...ELSE structure. Iteration, also known as the loop structure, specifies how many times a block of statements is repeated. In other words, a command can be executed a number of times until the task is completed.
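
To make the three arrangements concrete, here is a minimal sketch in Python (a modern language chosen purely for illustration; it is not one of the languages discussed in this article). The statements run in sequence, an if...else makes a selection, and a loop iterates until the task is complete:

    # Sequence: these statements execute one after the other, in order.
    numbers = [4, 8, 15, 16, 23, 42]
    total = 0

    # Iteration (loop structure): repeat the body once for each item.
    for n in numbers:
        # Selection: choose one of two branches, like IF...THEN...ELSE.
        if n % 2 == 0:
            total = total + n   # even values are added
        else:
            total = total - n   # odd values are subtracted

    print(total)   # prints 32; the result depends on every statement running in order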

PASCAL, ALGOL, and MODULA-2 are examples of block-structured languages. Examples of non-block structured languages are BASIC, FORTRAN, and LISP. Refinements of BASIC and FORTRAN produced more structured languages.

Block-structured languages rely on modular construction. A module is a related set of commands. Each module in a block-structured language typically begins with a "BEGIN" statement and ends with an "END" statement.



Source: http://science.jrank.org/pages/1697/Computer-Languages.html

Second-generation language

Assembly or assembler language was the second generation of computer language. By the late 1950s, this language had become popular. Assembly language consists of letters and symbols rather than strings of zeros and ones, which makes programming much easier than working directly in binary. As an added programming aid, assembly language makes use of mnemonics, or memory aids, which are easier for the human programmer to recall than numerical codes.

Second-generation language arose because of the programming efforts of Grace Hopper, an American computer scientist and Naval officer. Hopper developed FLOW-MATIC, a language that made programming easier for the researchers using the UNIVAC computer in the 1950s. FLOW-MATIC used an English-based vocabulary rather than the on-off switch language the computer understood, and it was one of the first "high-level" computer languages. A high-level computer language is one that is easier for humans to use but which can still be translated by another program (called a compiler) into language a computer can interpret and act on.




Second Generation Computers

By 1948, the invention of the transistor greatly changed the computer's development. The transistor replaced the large, cumbersome vacuum tube in televisions, radios, and computers. As a result, the size of electronic machinery has been shrinking ever since. The transistor was at work in the computer by 1956. Coupled with early advances in magnetic-core memory, transistors led to second generation computers that were smaller, faster, more reliable, and more energy-efficient than their predecessors. The first large-scale machines to take advantage of this transistor technology were early supercomputers, Stretch by IBM and LARC by Sperry-Rand. These computers, both developed for atomic energy laboratories, could handle an enormous amount of data, a capability much in demand by atomic scientists. The machines were costly, however, and tended to be too powerful for the business sector's computing needs, thereby limiting their attractiveness. Only two LARCs were ever installed: one in the Lawrence Radiation Labs in Livermore, California, for which the computer was named (Livermore Atomic Research Computer), and the other at the U.S. Navy Research and Development Center in Washington, D.C. Another addition in second generation computers was the introduction of assembly language. When assembly language replaced machine language, abbreviated programming codes replaced long, difficult binary codes (Gersting 35).

Throughout the early 1960's, there were a number of commercially successful second generation computers used in businesses, universities, and government, from companies such as Burroughs, Control Data, Honeywell, IBM, Sperry-Rand, and others. These second generation computers were also of solid state design, and contained transistors in place of vacuum tubes. They also contained all the components we associate with the modern day computer: printers, tape storage, disk storage, memory, and stored programs. One important example was the IBM 1401, which was universally accepted throughout industry, and is considered by many to be the Model T of the computer industry. By 1965, most large businesses routinely processed financial information using second generation computers (Gersting 218).

It was the stored program and programming language that gave computers the flexibility to finally be cost effective and productive for business use. The stored program concept meant that instructions to run a computer for a specific function (known as a program) were held inside the computer's memory, and could quickly be replaced by a different set of instructions for a different function. A computer could print customer invoices and minutes later design products or calculate paychecks. More sophisticated high-level languages such as COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator) came into common use during this time, and have expanded to the current day. These languages replaced cryptic binary machine code with words, sentences, and mathematical formulas, making it much easier to program a computer. New types of careers (programmer, analyst, and computer systems expert) and the entire software industry began with second generation computers (Gersting 131).

The Computer Chronicles


In the beginning ...
A generation refers to the state of improvement in the development of a product. This term is also used for the different advancements of computer technology. With each new generation, the circuitry has gotten smaller and more advanced than in the previous generation. As a result of the miniaturization, the speed, power, and memory of computers have increased proportionally. New discoveries are constantly being developed that affect the way we live, work, and play.

The First Generation: 1946-1958 (The Vacuum Tube Years)
The first generation computers were huge, slow, expensive, and often undependable. In 1946, two Americans, J. Presper Eckert and John Mauchly, built the ENIAC electronic computer, which used vacuum tubes instead of the mechanical switches of the Mark I. The ENIAC used thousands of vacuum tubes, which took up a lot of space and gave off a great deal of heat, just like light bulbs do. The ENIAC led to other vacuum tube computers like the EDVAC (Electronic Discrete Variable Automatic Computer) and the UNIVAC I (UNIVersal Automatic Computer).

The vacuum tube was an extremely important step in the advancement of computers. The vacuum tube was a close relative of the light bulb invented by Thomas Edison and worked in a very similar way. Its purpose was to act as an amplifier and a switch. Without any moving parts, vacuum tubes could take very weak signals and make them stronger (amplify them). Vacuum tubes could also stop and start the flow of electricity instantly (switch). These two properties made the ENIAC computer possible.

The ENIAC gave off so much heat that it had to be cooled by gigantic air conditioners. However, even with these huge coolers, vacuum tubes still overheated regularly. It was time for something new.
The Second Generation: 1959-1964 (The Era of the Transistor)
The transistor computer did not last as long as the vacuum tube computer had, but it was no less important in the advancement of computer technology. In 1947, three scientists, John Bardeen, William Shockley, and Walter Brattain, working at AT&T's Bell Labs, invented what would replace the vacuum tube forever. This invention was the transistor, which functions like a vacuum tube in that it can be used to relay and switch electronic signals.

There were obvious differences between the transistor and the vacuum tube. The transistor was faster, more reliable, smaller, and much cheaper to build than a vacuum tube. One transistor replaced the equivalent of 40 vacuum tubes. These transistors were made of solid material, chiefly silicon, an abundant element (second only to oxygen) found in beach sand and glass. They were therefore very cheap to produce. Transistors were found to conduct electricity faster and better than vacuum tubes. They were also much smaller and gave off virtually no heat compared to vacuum tubes. Their use marked a new beginning for the computer. Without this invention, space travel in the 1960's would not have been possible. However, a new invention would further advance our ability to use computers.
The Third Generation: 1965-1970 (Integrated Circuits - Miniaturizing the Computer)
Transistors were a tremendous breakthrough in advancing the computer. However, no one could have predicted that thousands, and now millions, of transistors (circuits) could be packed into such a small space. The integrated circuit, sometimes referred to as the semiconductor chip, packs a huge number of transistors onto a single wafer of silicon. Robert Noyce of Fairchild Corporation and Jack Kilby of Texas Instruments independently discovered the amazing attributes of integrated circuits. Placing such large numbers of transistors on a single chip vastly increased the power of a single computer and lowered its cost considerably.

Since the invention of integrated circuits, the number of transistors that can be placed on a single chip has doubled every two years, shrinking both the size and cost of computers even further and further enhancing their power. Most electronic devices today use some form of integrated circuits placed on printed circuit boards (thin pieces of bakelite or fiberglass that have electrical connections etched onto them), sometimes called a motherboard.

These third generation computers could carry out instructions in billionths of a second. The size of these machines dropped to the size of small file cabinets. Yet, the single biggest advancement in the computer era was yet to be discovered.
The Fourth Generation: 1971-Today (The Microprocessor)
This generation can be characterized by both the jump to monolithic integrated circuits (millions of transistors put onto one integrated circuit chip) and the invention of the microprocessor (a single chip that could do all the processing of a full-scale computer). By putting millions of transistors onto a single chip, computers could perform more calculations at greater speed. Because electricity travels about a foot in a billionth of a second, the smaller the distance a signal has to travel, the greater the speed of the computer.
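
A rough back-of-the-envelope calculation, using only the one-foot-per-nanosecond figure quoted above, shows why shrinking the circuitry raises the possible speed. The function name and the round-trip assumption below are illustrative, not taken from the original article:

    # Signals travel roughly 1 foot (about 30 cm) per nanosecond.
    FEET_PER_NANOSECOND = 1.0

    def clock_ceiling_hz(distance_feet):
        # Illustrative upper bound: assume a signal must cross the
        # circuit and return once per clock cycle.
        round_trip_ns = 2 * distance_feet / FEET_PER_NANOSECOND
        return 1e9 / round_trip_ns

    print(clock_ceiling_hz(1.0))      # 500000000.0  -> about 500 MHz for a 1-foot path
    print(clock_ceiling_hz(1 / 12))   # 6000000000.0 -> about 6 GHz for a 1-inch path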

However, what really triggered the tremendous growth of computers and their significant impact on our lives was the invention of the microprocessor. Ted Hoff, employed by Intel (Robert Noyce's new company), invented a chip the size of a pencil eraser that could do all the computing and logic work of a computer. The microprocessor was made to be used in calculators, not computers. It led, however, to the invention of personal computers, or microcomputers.

It wasn't until the 1970's that people began buying computers for personal use. One of the earliest personal computers was the Altair 8800 computer kit. In 1975 you could purchase this kit and put it together to make your own personal computer. In 1977 the Apple II was sold to the public, and in 1981 IBM entered the PC (personal computer) market.

Today we have all heard of Intel and its Pentium® Processors and now we know how it all got started. The computers of the next generation will have millions upon millions of transistors on one chip and will perform over a billion calculations in a single second. There is no end in sight for the computer movement.

The First Generation Computers

Bendix G-15 Computer

It is the Bendix G-15 General Purpose Digital Computer, a First Generation computer introduced in 1956.


Why this interest in the Bendix G-15?

Against the odds, the Western Australian branch of The Australian Computer Museum Inc has rescued one from the scrap heap. That's it, over on the right.

It is in pretty good condition, considering its age, and we hope one day we can get it working again. We also have various programming, operating and technical manuals, and schematics. They have been scanned and you can download them here.

This web site started life in 1998 as a sort of begging letter, seeking more information about the maintenance procedures. We have since been told that there was no formal maintenance manual and that our documentation is complete so far as maintaining the machine is concerned. Still, if you can help with some of the other items we are missing or add anything at all to our store of knowledge about the Bendix G-15, please get in touch with me, David Green.

First Generation Computers.

The first generation of computers is said by some to have started in 1946 with ENIAC, the first 'computer' to use electronic valves (i.e., vacuum tubes). Others would say it started in May 1949 with the introduction of EDSAC, the first stored program computer. Whichever, the distinguishing feature of the first generation computers was the use of electronic valves.

My personal take on this is that ENIAC was the World's first electronic calculator and that the era of the first generation computers began in 1946 because that was the year when people consciously set out to build stored program computers (many won't agree, and I don't intend to debate it). The first past the post, as it were, was the EDSAC in 1949. The period closed about 1958 with the introduction of transistors and the general adoption of ferrite core memories.

OECD figures indicate that by the end of 1958 about 2,500 first generation computers were installed world-wide. (Compare this with the number of PCs shipped world-wide in 1997, quoted as 82 million by Dataquest).

Two key events took place in the summer of 1946 at the Moore School of Electrical Engineering at the University of Pennsylvania. One was the completion of the ENIAC. The other was the delivery of a course of lectures on "The Theory and Techniques of Electronic Digital Computers". In particular, they described the need to store the instructions to manipulate data in the computer along with the data. The design features worked out by John von Neumann and his colleagues and described in these lectures laid the foundation for the development of the first generation of computers. That just left the technical problems!

One of the projects to commence in 1946 was the construction of the IAS computer at the Institute for Advanced Study at Princeton. The IAS computer used a random access electrostatic storage system and parallel binary arithmetic. It was very fast when compared with the delay line computers, with their sequential memories and serial arithmetic.

The Princeton group was liberal with information about their computer and before long many universities around the world were building their own, close copies. One of these was the SILLIAC at Sydney University in Australia.

I have written an emulator for SILLIAC. You can find it here, along with a link to a copy of the SILLIAC Programming Manual.

First Generation Technologies

In 1946 there was no 'best' way of storing instructions and data in a computer memory. There were four competing technologies for providing computer memory: electrostatic storage tubes, acoustic delay lines (mercury or nickel), magnetic drums (and disks?), and magnetic core storage.

A high-speed electrostatic store was the heart of several early computers, including the computer at the Institute for Advanced Study in Princeton. Professor F. C. Williams and Dr. T. Kilburn, who invented this type of store, described it in Proc. I.E.E. 96, Pt. III, 40 (March, 1949). A simple account of the Williams tube is given here.

The great advantage of this type of "memory" is that, by suitably controlling the deflector plates of the cathode ray tube, it is possible to redirect the beam almost instantaneously to any part of the screen: random access memory.

Acoustic delay lines are based on the principle that electricity travels at the speed of light while mechanical vibrations travel at about the speed of sound. So data can be stored as a string of mechanical pulses circulating in a loop, through a delay line with its output connected electrically back to its input. Of course, converting electric pulses to mechanical pulses and back again uses up energy, and travel through the delay line distorts the pulses, so the output has to be amplified and reshaped before it is fed back to the start of the tube.

The sequence of bits flowing through the delay line is just a continuously repeating stream of pulses and spaces, so a separate source of regular clock pulses is needed to determine the boundaries between words in the stream and to regulate the use of the stream.

Delay lines have some obvious drawbacks. One is that the match between their length and the speed of the pulses is critical, yet both are dependent on temperature. This required precision engineering on the one hand and careful temperature control on the other. Another is a programming consideration. The data is available only at the instant it leaves the delay line. If it is not used then, it is not available again until all the other pulses have made their way through the line. This made for very entertaining programming!
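
The timing problem described above is easy to see in a toy model. The sketch below is a deliberately simplified Python illustration (not a model of any particular machine): a word can only be read at the instant it emerges from the line, so a badly placed operand costs a whole extra circulation.

    # Toy delay-line memory: words circulate in a fixed loop and are
    # only accessible at the instant they reach the output end.
    class DelayLine:
        def __init__(self, words):
            self.words = list(words)   # contents circulating in the line
            self.head = 0              # index of the word now at the output

        def step(self):
            # One word time passes; the next word reaches the output
            # (after notional re-amplification and reshaping).
            self.head = (self.head + 1) % len(self.words)

        def read(self, address):
            # Wait until the requested word comes around to the output.
            waited = 0
            while self.head != address:
                self.step()
                waited += 1
            return self.words[address], waited

    line = DelayLine(["w0", "w1", "w2", "w3"])
    print(line.read(3))   # ('w3', 3) -- three word times spent waiting
    print(line.read(0))   # ('w0', 1) -- the next word is available almost at once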

A mercury delay line is a tube filled with mercury, with a piezo-electric crystal at each end. Piezo-electric crystals, such as quartz, have the special property that they expand or contract when the electrical voltage across the crystal faces is changed. Conversely, they generate a change in electrical voltage when they are deformed. So when a series of electrical pulses representing binary data is applied to the transmitting crystal at one end of the mercury tube, it is transformed into corresponding mechanical pressure waves. The waves travel through the mercury until they hit the receiving crystal at the far end of the tube, where the crystal transforms the mechanical vibrations back into the original electrical pulses.

Mercury delay lines had been developed for data storage in radar applications. Although far from ideal, they were an available form of computer memory around which a computer could be designed. Computers using mercury delay lines included the ACE computer developed at the National Physical Laboratory, Teddington, and its successor, the English Electric DEUCE.

A good deal of information about DEUCE (manuals, operating instructions, program and subroutine codes and so on) is available on the Web and you can find links to it here.

Nickel delay lines take the form of a nickel wire. Pulses of current representing bits of data are passed through a coil surrounding one end of the wire. They set up pulses of mechanical stress due to the 'magnetostrictive' effect. A receiving coil at the other end of the wire is used to convert these pressure waves back into electrical pulses. The Elliott 400 series, including the 401, 402, and 403, used nickel delay lines. Much later, in 1966, the Olivetti Programma 101 desktop calculator also used nickel delay lines.

The magnetic drum is a more familiar technology, comparable with modern magnetic discs. It consisted of a non-magnetic cylinder coated with a magnetic material, and an array of read/write heads to provide a set of parallel tracks of data round the circumference of the cylinder as it rotated. Drums had the same program optimisation problem as delay lines.

Two of the most (commercially) successful computers of the time, the IBM 650 and the Bendix G-15, used magnetic drums as their main memory.

The Massachusetts Institute of Technology Whirlwind 1 was another early computer, and building started in 1947. However, the most important contribution made by the MIT group was the development of the magnetic core memory, which they later installed in Whirlwind. The MIT group made their core memory designs available to the computer industry, and core memories rapidly superseded the other three memory technologies.

Where Does the Bendix G-15 Fit In?

Table 1 shows, in chronological order between 1950 and 1958, the initial operating date of computing systems in the USA. This is not to suggest that all of these computers were first generation computers, or that no first generation computers were made after 1958. It does give a rough guide to the number of first generation computers made.

Bendix introduced their G-15 in 1956. It was not the first Bendix computing machine. They introduced a model named the D-12, in 1954. However, the D-12 was a digital differential analyser and not a general purpose computer.

We don't know when the last Bendix G-15 was built, but about three hundred of the computers were ultimately installed in the USA. Three found their way to Australia. The one we have was purchased by the Department of Main Roads in Perth in 1962. It was used in the design of the Mitchell Freeway, the main road connecting the Northern suburbs to the city.

The G-15 was superseded by the second generation (transistorised) Bendix G-20.

Table 2 shows the computers installed or on order, in Australia, about December 1962. The three Bendix G-15s were in Perth (Department of Main Roads), Sydney (A.W.A. Service Bureau) and Melbourne (E.D.P Pty Ltd).

Overview of the G-15

The Bendix G-15 was a fairly sophisticated, medium size computer for its day. It used a magnetic drum for internal memory storage and had 180 tube packages and 300 germanium diode packages for logical circuitry. Cooling was by internal forced air.

Storage on the Magnetic Drum comprised 2160 words in twenty channels of 108 words each. Average access time was 4.5 milliseconds. In addition, there were 16 words of fast-access storage in four channels of 4 words each, with average access time of 0.54 milliseconds; and eight words in registers consisting of 1 one-word command register, 1 one-word arithmetic register, and 3 two-word arithmetic registers for double-precision operations.

A 108-word buffer channel on the magnetic drum allowed input-output to proceed simultaneously with computation.

Word size was 29 bits, allowing single-precision numbers of seven decimal digits plus sign during input-output (twenty-nine binary digits internally), and double-precision numbers of fourteen decimal digits plus sign during input-output (fifty-eight binary digits internally).
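
As a quick sanity check of the figures quoted above (purely illustrative arithmetic, not taken from the Bendix documentation):

    # 29-bit words: one bit of sign leaves 28 bits of magnitude.
    word_bits = 29
    print(2 ** (word_bits - 1))          # 268435456 -> comfortably holds 7 decimal digits
    # Double precision uses two words: 58 bits, 57 of them for the magnitude.
    print(2 ** (2 * word_bits - 1))      # about 1.4e17 -> holds 14 decimal digits

    # Total drum capacity from the figures above: 20 channels x 108 words.
    drum_words = 20 * 108
    print(drum_words)                    # 2160 words
    print(drum_words * word_bits / 8)    # 7830.0 -> under 8 kilobytes of main memory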

Each machine language instruction specified the address of the operand and the address of the next instruction. Double-length arithmetic registers permitted the programming of double-precision operations with the same ease as single-precision ones.

An interpreter called Intercom 1000 and a compiler called Algo provided simpler alternatives to machine language programming. Algo followed the principles set forth in the international algorithmic language, Algol, and permitted the programmer to state a problem in algebraic form. The Bendix Corporation claimed to be the first manufacturer to introduce a programming system patterned on Algol.

The basic computation times, in milliseconds, were as follows (including the time required for the computer to read the command prior to its execution). The time range for multiplication and division represents the range between single decimal digit precision and maximum precision.

Fifth generation computer

The PIM/m-1 machine.

The Fifth Generation Computer Systems project (FGCS) was an initiative by Japan's Ministry of International Trade and Industry, begun in 1982, to create a "fifth generation computer" (see history of computing hardware) which was supposed to perform much calculation using massive parallel processing. It was to be the end result of a massive government/industry research project in Japan during the 1980s. It aimed to create an "epoch-making computer" with supercomputer-like performance and usable artificial intelligence capabilities.

The term fifth generation was intended to convey the system as being a leap beyond existing machines. Computers using vacuum tubes were called the first generation; transistors and diodes, the second; integrated circuits, the third; and those using microprocessors, the fourth. Whereas previous computer generations had focused on increasing the number of logic elements in a single CPU, the fifth generation, it was widely believed at the time, would instead turn to massive numbers of CPUs for added performance.

Opinions about its outcome are divided: Either it was a failure, or it was ahead of its time.


History

In the late 1960s and early '70s, there was much talk about "generations" of computer hardware — usually "three generations".

  1. First generation: Vacuum tubes. Mid-1940s. IBM pioneered the arrangement of vacuum tubes in pluggable modules. The IBM 650 was a first-generation computer.
  2. Second generation: Transistors. 1956. The era of miniaturization begins. Transistors are much smaller than vacuum tubes, draw less power, and generate less heat. Discrete transistors are soldered to circuit boards, with interconnections accomplished by stencil-screened conductive patterns on the reverse side. The IBM 7090 was a second-generation computer.
  3. Third generation: Integrated circuits (silicon chips containing multiple transistors). 1964. A pioneering example is the ACPX module used in the IBM 360/91, which, by stacking layers of silicon over a ceramic substrate, accommodated over 20 transistors per chip; the chips could be packed together onto a circuit board to achieve unheard-of logic densities. The IBM 360/91 was a hybrid second- and third-generation computer.

Omitted from this taxonomy is the "zeroth-generation" computer based on metal gears (such as the IBM 407) or mechanical relays (such as the Mark I), and the post-third-generation computers based on Very Large Scale Integrated (VLSI) circuits.

There was also a parallel set of generations for software:

  1. First generation: Machine language.
  2. Second generation: Assembly language.
  3. Third generation: Structured programming languages such as C, COBOL and FORTRAN.
  4. Fourth generation: Domain-specific languages such as SQL (for database access) and TeX (for text formatting)

Background and design philosophy

Throughout these multiple generations up to the 1980s, Japan had largely been a follower in the computing arena, building computers following U.S. and British leads. The Ministry of International Trade and Industry (MITI) decided to attempt to break out of this follow-the-leader pattern, and in the mid-1970s started looking, on a small scale, into the future of computing. They asked the Japan Information Processing Development Center (JIPDEC) to indicate a number of future directions, and in 1979 offered a three-year contract to carry out more in-depth studies along with industry and academia. It was during this period that the term "fifth-generation computer" started to be used.

Prior to the 1970s, MITI guidance had successes such as an improved steel industry, the creation of the oil supertanker, the automotive industry, consumer electronics, and computer memory. MITI decided that the future was going to be information technology. However, the Japanese language, in both written and spoken form, presented and still presents major obstacles for computers. These hurdles could not be taken lightly. So MITI held a conference and invited people from around the world to help them.

The primary fields for investigation from this initial project were:

  • Inference computer technologies for knowledge processing
  • Computer technologies to process large-scale data bases and knowledge bases
  • High performance workstations
  • Distributed functional computer technologies
  • Super-computers for scientific calculation

The project imagined a parallel processing computer running on top of massive databases (as opposed to a traditional filesystem) using a logic programming language to define and access the data. They envisioned building a prototype machine with performance between 100M and 1G LIPS, where a LIPS is a Logical Inference Per Second. At the time typical workstation machines were capable of about 100k LIPS. They proposed to build this machine over a ten year period, 3 years for initial R&D, 4 years for building various subsystems, and a final 3 years to complete a working prototype system. In 1982 the government decided to go ahead with the project, and established the Institute for New Generation Computer Technology (ICOT) through joint investment with various Japanese computer companies.
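
A back-of-the-envelope comparison, using only the figures quoted above, shows how ambitious the target was (illustrative arithmetic only):

    # Workstations of the day versus the FGCS prototype target.
    workstation_lips = 100_000            # roughly 100k logical inferences per second
    target_low, target_high = 100e6, 1e9  # 100M to 1G LIPS

    print(target_low / workstation_lips)   # 1000.0  -> at least a 1,000-fold speed-up
    print(target_high / workstation_lips)  # 10000.0 -> up to a 10,000-fold speed-up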

Implementation

So ingrained was the belief that parallel computing was the future of all performance gains that the Fifth-Generation project generated a great deal of apprehension in the computer field. Having taken over the consumer electronics field during the 1970s and apparently doing the same in the automotive world, the Japanese had a reputation for invincibility in the 1980s. Soon parallel projects were set up in the US as the Strategic Computing Initiative and the Microelectronics and Computer Technology Corporation (MCC), in the UK as Alvey, and in Europe as the European Strategic Program of Research in Information Technology (ESPRIT), as well as ECRC (European Computer Research Centre) in Munich, a collaboration between ICL in Britain, Bull in France, and Siemens in Germany.

Five running Parallel Inference Machines (PIM) were eventually produced: PIM/m, PIM/p, PIM/i, PIM/k, PIM/c. The project also produced applications to run on these systems, such as the parallel database management system Kappa, the legal reasoning system HELIC-II, and the automated theorem prover MGTP, as well as applications to bioinformatics.

Failure

The FGCS Project did not meet with commercial success for reasons similar to the Lisp machine companies and Thinking Machines. The highly parallel computer architecture was eventually surpassed in speed by less specialized hardware (for example, Sun workstations and Intel x86 machines). The project did produce a new generation of promising Japanese researchers. But after the FGCS Project, MITI stopped funding large-scale computer research projects, and the research momentum developed by the FGCS Project dissipated. It should be noted, however, that MITI/ICOT embarked on a Sixth Generation Project in the 1990s.

A primary problem was the choice of concurrent logic programming as the bridge between the parallel computer architecture and the use of logic as a knowledge representation and problem solving language for AI applications. This never happened cleanly; a number of languages were developed, all with their own limitations. In particular, the committed choice feature of concurrent constraint logic programming interfered with the logical semantics of the languages.[1]

Another problem was that existing CPU performance quickly pushed through the "obvious" barriers that experts perceived in the 1980s, and the value of parallel computing quickly dropped to the point where it was for some time used only in niche situations. Although a number of workstations of increasing capacity were designed and built over the project's lifespan, they generally found themselves soon outperformed by "off the shelf" units available commercially.

The project also suffered from being on the wrong side of the technology curve. During its lifespan, Apple Computer introduced the GUI to the masses; the internet enabled locally stored databases to become distributed; and even simple research projects provided better real-world results in data mining. Moreover, the project found that the promises of logic programming were largely negated by the use of committed choice.

At the end of the ten year period the project had spent over 50 billion yen (about US $400 million at 1992 exchange rates) and was terminated without having met its goals. The workstations had no appeal in a market where general purpose systems could now take over their job and even outrun them. This is parallel to the Lisp machine market, where rule-based systems such as CLIPS could run on general-purpose computers, making expensive Lisp machines unnecessary.[2]

Although the project can be considered a failure, many of the approaches envisioned in the Fifth-Generation project, such as logic programming distributed over massive knowledge-bases, are now being re-interpreted in current technologies. The Web Ontology Language (OWL) employs several layers of logic-based knowledge representation systems, while many flavors of parallel computing proliferate, including multi-core architectures at the low end and massively parallel processing at the high end.

Timeline

  • 1982: the FGCS project begins and receives $450,000,000 worth of industry funding and an equal amount of government funding.
  • 1985: the first FGCS hardware, known as the Personal Sequential Inference Machine (PSI), and the first version of the Sequential Inference Machine Programming Operating System (SIMPOS) operating system are released. SIMPOS is programmed in Kernel Language 0 (KL0), a concurrent Prolog variant with object-oriented extensions.
  • 1987: a prototype of truly parallel hardware called the Parallel Inference Machine (PIM) is built using several PSIs connected in a network. The project receives funding for 5 more years. A new version of the kernel language, Kernel Language 1 (KL1), which looks very similar to "Flat GDC" (Flat Guarded Definite Clauses), is created, influenced by developments in Prolog. The operating system written in KL1 is renamed the Parallel Inference Machine Operating System, or PIMOS.

A Brief History of the Computer (B.C. – 1993 A.D.)

In The Beginning…

The history of computers starts out about 2000 years ago, at the birth of the abacus, a wooden rack holding two horizontal wires with beads strung on them. When these beads are moved around, according to programming rules memorized by the user, all regular arithmetic problems can be done. Another important invention around the same time was the Astrolabe, used for navigation. Blaise Pascal is usually credited with building the first digital computer in 1642. It added numbers entered with dials and was made to help his father, a tax collector. In 1671, Gottfried Wilhelm von Leibniz invented a computer that was built in 1694. It could add, and, after changing some things around, multiply. Leibniz invented a special stepped gear mechanism for introducing the addend digits, and this is still being used. The prototypes made by Pascal and Leibniz were not used in many places, and were considered weird until a little more than a century later, when Thomas of Colmar (A.K.A. Charles Xavier Thomas) created the first successful mechanical calculator that could add, subtract, multiply, and divide. A lot of improved desktop calculators by many inventors followed, so that by about 1890, the range of improvements included:

  • Accumulation of partial results
  • Storage and automatic reentry of past results (A memory function)
  • Printing of the results

Each of these required manual installation. These improvements were mainly made for commercial users, and not for the needs of science.


Babbage

While Thomas of Colmar was developing the desktop calculator, a series of very interesting developments in computers was started in Cambridge, England, by Charles Babbage (after whom the computer store Babbage’s, now GameStop, is named), a mathematics professor. In 1812, Babbage realized that many long calculations, especially those needed to make mathematical tables, were really a series of predictable actions that were constantly repeated. From this he suspected that it should be possible to do these automatically.

He began to design an automatic mechanical calculating machine, which he called a difference engine. By 1822, he had a working model to demonstrate. With financial help from the British government, Babbage started fabrication of a difference engine in 1823. It was intended to be steam powered and fully automatic, including the printing of the resulting tables, and commanded by a fixed instruction program. The difference engine, although having limited adaptability and applicability, was really a great advance. Babbage continued to work on it for the next 10 years, but in 1833 he lost interest because he thought he had a better idea: the construction of what would now be called a general-purpose, fully program-controlled, automatic mechanical digital computer. Babbage called this idea an Analytical Engine. The ideas of this design showed a lot of foresight, although this couldn’t be appreciated until a full century later. The plans for this engine required a decimal computer operating on numbers of 50 decimal digits (or words) and having a storage capacity (memory) of 1,000 such digits. The built-in operations were supposed to include everything that a modern general-purpose computer would need, even the all-important conditional control transfer capability that would allow commands to be executed in any order, not just the order in which they were programmed. The Analytical Engine was to use punched cards (similar to those used in a Jacquard loom), which would be read into the machine from several different reading stations. The machine was supposed to operate automatically, by steam power, and require only one person in attendance. Babbage’s computers were never finished. Various reasons are given for his failure, the most common being the lack of precision machining techniques at the time. Another speculation is that Babbage was working on a solution of a problem that few people in 1840 really needed to solve. After Babbage, there was a temporary loss of interest in automatic digital computers. Between 1850 and 1900 great advances were made in mathematical physics, and it came to be understood that most observable dynamic phenomena can be described by differential equations (which meant that most events occurring in nature can be measured or described in one equation or another), so that easy means for their calculation would be helpful. Moreover, from a practical view, the availability of steam power caused manufacturing (boilers), transportation (steam engines and boats), and commerce to prosper and led to a period of a lot of engineering achievements. The designing of railroads, and the making of steamships, textile mills, and bridges required differential calculus to determine such things as:

Even the assessment of the power output of a steam engine needed mathematical integration. A strong need thus developed for a machine that could rapidly perform many repetitive calculations.

Use of Punched Cards by Hollerith

A step towards automated computing was the development of punched cards, which were first successfully used with computers in 1890 by Herman Hollerith and James Powers, who worked for the U.S. Census Bureau. They developed devices that could read the information that had been punched into the cards automatically, without human help. Because of this, reading errors were reduced dramatically, work flow increased, and, most importantly, stacks of punched cards could be used as easily accessible memory of almost unlimited size. Furthermore, different problems could be stored on different stacks of cards and accessed when needed. These advantages were seen by commercial companies and soon led to the development of improved punch-card machines created by International Business Machines (IBM), Remington (yes, the same people that make shavers), Burroughs, and other corporations. These computers used electromechanical devices in which electrical power provided mechanical motion, like turning the wheels of an adding machine. Such systems included features to:

  • feed in a specified number of cards automatically
  • add, multiply, and sort
  • feed out cards with punched results

As compared to today’s machines, these computers were slow, usually processing 50 to 220 cards per minute, each card holding about 80 decimal numbers (characters). At the time, however, punched cards were a huge step forward. They provided a means of I/O, and memory storage on a huge scale. For more than 50 years after their first use, punched card machines did most of the world’s first business computing, and a considerable amount of the computing work in science.
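
Using just the figures quoted above, the implied data rates are tiny by modern standards (illustrative arithmetic only):

    # 50-220 cards per minute, each holding about 80 characters.
    for cards_per_minute in (50, 220):
        chars_per_second = cards_per_minute * 80 / 60
        print(cards_per_minute, round(chars_per_second, 1))
    # prints: 50 66.7  and  220 293.3 -> under 300 characters per second of input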

Electronic Digital Computers

The start of World War II produced a large need for computer capacity, especially for the military. New weapons were made for which trajectory tables and other essential data were needed. In 1942, John P. Eckert, John W. Mauchly, and their associates at the Moore School of Electrical Engineering of the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC (Electronic Numerical Integrator And Computer). The size of ENIAC’s numerical “word” was 10 decimal digits, and it could multiply two of these numbers at a rate of 300 per second, by finding the value of each product from a multiplication table stored in its memory. ENIAC was therefore about 1,000 times faster than the previous generation of relay computers. ENIAC used 18,000 vacuum tubes, about 1,800 square feet of floor space, and consumed about 180,000 watts of electrical power. It had punched card I/O, 1 multiplier, 1 divider/square rooter, and 20 adders using decimal ring counters, which served as adders and also as quick-access (.0002 seconds) read-write register storage. The executable instructions making up a program were embodied in the separate “units” of ENIAC, which were plugged together to form a “route” for the flow of information.

These connections had to be redone after each computation, together with presetting function tables and switches. This “wire your own” technique was inconvenient (for obvious reasons), and only with some latitude could ENIAC be considered programmable. It was, however, efficient in handling the particular programs for which it had been designed. ENIAC is commonly accepted as the first successful high-speed electronic digital computer (EDC) and was used from 1946 to 1955. A controversy developed in 1971, however, over the patentability of ENIAC’s basic digital concepts, the claim being made that another physicist, John V. Atanasoff, had already used basically the same ideas in a simpler vacuum-tube device he had built in the 1930’s while at Iowa State College. In 1973 the courts found in favor of the company using the Atanasoff claim.

The Modern Stored Program EDC

Fascinated by the success of ENIAC, the mathematician John von Neumann undertook, in 1945, an abstract study of computation that showed that a computer should have a very simple, fixed physical structure, and yet be able to execute any kind of computation by means of a proper programmed control without the need for any change in the unit itself. Von Neumann contributed a new awareness of how practical, yet fast computers should be organized and built. These ideas, usually referred to as the stored-program technique, became essential for future generations of high-speed digital computers and were universally adopted.

The stored-program technique involves many features of computer design and function besides the one that it is named after. In combination, these features make very-high-speed operation attainable. A glimpse may be provided by considering what 1,000 operations per second means. If each instruction in a job program were used once in consecutive order, no human programmer could generate enough instructions to keep the computer busy. Arrangements must be made, therefore, for parts of the job program (called subroutines) to be used repeatedly in a manner that depends on the way the computation goes. Also, it would clearly be helpful if instructions could be changed as needed during a computation to make them behave differently.

Von Neumann met these two needs by providing a special type of machine instruction, called a conditional control transfer, which allowed the program sequence to be interrupted and resumed at any point, and by storing all instruction programs together with data in the same memory unit, so that, when needed, instructions could be arithmetically changed in the same way as data. As a result of these techniques, computing and programming became much faster, more flexible, and more efficient. Regularly used subroutines did not have to be reprogrammed for each new program, but could be kept in “libraries” and read into memory only when needed. Thus, much of a given program could be assembled from the subroutine library.
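
The sketch below is a toy Python illustration of these two ideas (the instruction names and memory layout are invented for the example, not taken from any real machine): instructions and data live in the same memory, and a conditional control transfer lets the program loop and branch.

    # Toy stored-program machine. Instructions and data share one memory,
    # and JUMP_IF_NEG is the "conditional control transfer".
    memory = [
        ("LOAD", 7),          # 0: acc = memory[7]         (the counter)
        ("ADD", 8),           # 1: acc = acc + memory[8]   (add the constant -1)
        ("STORE", 7),         # 2: memory[7] = acc
        ("JUMP_IF_NEG", 5),   # 3: if acc < 0, transfer control to instruction 5
        ("JUMP", 0),          # 4: otherwise repeat the loop
        ("HALT", 0),          # 5: stop
        ("NOP", 0),           # 6: unused
        3,                    # 7: data -- a loop counter
        -1,                   # 8: data -- the constant -1
    ]

    acc, pc = 0, 0
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc = acc + memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JUMP":
            pc = arg
        elif op == "JUMP_IF_NEG" and acc < 0:
            pc = arg
        elif op == "HALT":
            break

    print(memory[7])   # -1: the program looped until the counter went negative

Because program and data share the same memory, the STORE instruction could just as easily overwrite an instruction, which is exactly the sense in which instructions could be changed like data.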

The all-purpose computer memory became the assembly place in which all parts of a long computation were kept, worked on piece by piece, and put together to form the final results. The computer control survived only as an “errand runner” for the overall process. As soon as the advantage of these techniques became clear, they became standard practice.


The first generation of modern programmed electronic computers to take advantage of these improvements appeared in 1947. This group included computers using random-access memory (RAM), which is a memory designed to give almost constant access to any particular piece of information. These machines had punched-card or punched-tape I/O devices and RAMs of 1,000-word capacity with access times of 0.5 microseconds (0.5 x 10^-6 seconds). Some of them could perform multiplications in 2 to 4 microseconds.

Physically, they were much smaller than ENIAC. Some were about the size of a grand piano and used only 2,500 electron tubes, far fewer than required by the earlier ENIAC. The first-generation stored-program computers needed a lot of maintenance, reached probably about 70 to 80% reliability of operation (ROO), and were used for 8 to 12 years. They were usually programmed in machine language, although by the mid 1950’s progress had been made in several aspects of advanced programming. This group of computers included EDVAC and UNIVAC, the first commercially available computers.

Advances in the 1950’s

Early in the 1950’s, two important engineering discoveries changed the image of the electronic-computer field, from one of fast but unreliable hardware to an image of relatively high reliability and even more capability. These discoveries were the magnetic core memory and the transistor circuit element. These technical discoveries quickly found their way into new models of digital computers. RAM capacities increased from 8,000 to 64,000 words in commercially available machines by the 1960’s, with access times of 2 to 3 milliseconds. These machines were very expensive to purchase or even to rent and were particularly expensive to operate because of the cost of expanding programming. Such computers were mostly found in large computer centers operated by industry, government, and private laboratories, staffed with many programmers and support personnel.

This situation led to modes of operation enabling the sharing of the high potential available. One such mode is batch processing, in which problems are prepared and then held ready for computation on a relatively cheap storage medium. Magnetic drums, magnetic-disk packs, or magnetic tapes were usually used. When the computer finishes with a problem, it “dumps” the whole problem (program and results) on one of these peripheral storage units and starts on a new problem. Another mode for fast, powerful machines is called time-sharing. In time-sharing, the computer processes many jobs in such rapid succession that each job runs as if the other jobs did not exist, thus keeping each “customer” satisfied. Such operating modes need elaborate executive programs to attend to the administration of the various tasks.
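
The time-sharing idea described above can be sketched as a simple round-robin loop. This is a deliberately simplified Python illustration; the job names and time slice are invented for the example:

    from collections import deque

    # Each "customer" job gets a short slice of processor time in turn,
    # so every job makes steady progress as if it had the machine to itself.
    jobs = deque([("payroll", 5), ("invoices", 3), ("inventory", 4)])
    time_slice = 2   # units of work done per turn

    while jobs:
        name, remaining = jobs.popleft()
        work = min(time_slice, remaining)
        remaining -= work
        print(name, "did", work, "unit(s),", remaining, "left")
        if remaining > 0:
            jobs.append((name, remaining))   # back of the queue for another turn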

Advances in the 1960’s

In the 1960’s, efforts to design and develop the fastest possible computer with the greatest capacity reached a turning point with the LARC machine, built for the Livermore Radiation Laboratories of the University of California by the Sperry-Rand Corporation, and the Stretch computer by IBM. The LARC had a base memory of 98,000 words and multiplied in 10 microseconds. Stretch was made with several degrees of memory having slower access for the ranks of greater capacity, the fastest access time being less than 1 microsecond and the total capacity in the vicinity of 100,000,000 words. During this period, the major computer manufacturers began to offer a range of capabilities and prices, as well as accessories such as:

  • Consoles
  • Card Feeders
  • Page Printers
  • Cathode-ray-tube displays
  • Graphing devices

These were widely used in businesses for such things as:

  • Accounting
  • Payroll
  • Inventory control
  • Ordering Supplies
  • Billing

CPU’s for these uses did not have to be very fast arithmetically and were usually used to access large numbers of records on file, keeping these up to date. By far the largest number of computer systems were sold for the simpler uses, such as hospitals (keeping track of patient records, medications, and treatments given). They were also used in libraries, such as the National Medical Library retrieval system, and in the Chemical Abstracts System, where computer records on file now cover nearly all known chemical compounds.

More Recent Advances

The trend during the 1970’s was, to some extent, moving away from very powerful, single – purpose computers and toward a larger range of applications for cheaper computer systems. Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution systems, now used computers of smaller capability for controlling and regulating their jobs.

In the 1960’s, the problems in programming applications were an obstacle to the independence of medium sized on-site computers, but gains in applications programming language technologies removed these obstacles. Applications languages were now available for controlling a great range of manufacturing processes, for using machine tools with computers, and for many other things. Moreover, a new revolution in computer hardware was under way, involving shrinking of computer-logic circuitry and of components by what are called large-scale integration (LSI) techniques.

In the 1950s it was realized that “scaling down” the size of electronic digital computer circuits and parts would increase speed and efficiency and thereby improve performance, if only a way could be found to do this. About 1960, photo printing of conductive circuit boards to eliminate wiring became more developed. Then it became possible to build resistors and capacitors into the circuitry by the same process. In the 1970’s, vacuum deposition of transistors became the norm, and entire assemblies, with adders, shifting registers, and counters, became available on tiny “chips.”

In the 1980’s, very large scale integration (VLSI), in which hundreds of thousands of transistors were placed on a single chip, became more and more common. Many companies, some new to the computer field, introduced programmable minicomputers supplied with software packages in the 1970s. The “shrinking” trend continued with the introduction of personal computers (PC’s), which are programmable machines small enough and inexpensive enough to be purchased and used by individuals. Many companies, such as Apple Computer and Radio Shack, introduced very successful PC’s in the 1970s, encouraged in part by a fad in computer (video) games. In the 1980s some friction occurred in the crowded PC field, with Apple and IBM staying strong. In the manufacturing of semiconductor chips, the Intel and Motorola Corporations were very competitive into the 1980s, although Japanese firms were making strong economic advances, especially in the area of memory chips.

By the late 1980s, some personal computers were run by microprocessors that, handling 32 bits of data at a time, could process about 4,000,000 instructions per second. Microprocessors equipped with read-only memory (ROM), which stores constantly used, unchanging programs, now performed an increased number of process-control, testing, monitoring, and diagnosing functions, like automobile ignition systems, automobile-engine diagnosis, and production-line inspection duties. Cray Research and Control Data Inc. dominated the field of supercomputers, or the most powerful computer systems, through the 1970s and 1980s.

In the early 1980s, however, the Japanese government announced a gigantic plan to design and build a new generation of supercomputers. This new generation, the so-called “fifth” generation, is using new technologies in very large integration, along with new programming languages, and will be capable of amazing feats in the area of artificial intelligence, such as voice recognition.

Progress in the area of software has not matched the great advances in hardware. Software has become the major cost of many systems because programming productivity has not increased very quickly. New programming techniques, such as object-oriented programming, have been developed to help relieve this problem. Despite difficulties with software, however, the cost per calculation of computers is rapidly lessening, and their convenience and efficiency are expected to increase in the near future. The computer field continues to experience huge growth. Computer networking, computer mail, and electronic publishing are just a few of the applications that have grown in recent years. Advances in technologies continue to produce cheaper and more powerful computers, offering the promise that in the near future computers or terminals will reside in most, if not all, homes, offices, and schools.