John von Neumann's principles. Computer generations. Classification of modern computers

Von Neumann Principles (Von Neumann Architecture)


In 1946, J. von Neumann, H. Goldstine and A. Burks, in their joint article, outlined new principles for the construction and operation of computers. The first two generations of computers were subsequently built on the basis of these principles. Later generations brought some changes, although von Neumann's principles remain relevant today.

In fact, Neumann managed to summarize the scientific developments and discoveries of many other scientists and formulate something fundamentally new on their basis.

Von Neumann's principles

    Use of the binary number system in computers. The advantage over the decimal number system is that devices can be made quite simple, and arithmetic and logical operations in the binary number system are also performed quite simply.

    Computer software control. The operation of the computer is controlled by a program consisting of a set of commands. Commands are executed sequentially one after another. The creation of a machine with a stored program was the beginning of what we call programming today.

    Computer memory is used to store not only data but also programs. In this case, both program commands and data are encoded in the binary number system, i.e. their recording method is the same. Therefore, in certain situations, the same actions can be performed on commands as on data.

    Computer memory cells have addresses that are numbered sequentially. At any time, you can access any memory cell by its address. This principle opened up the possibility of using variables in programming.

    Possibility of conditional jump during program execution. Despite the fact that commands are executed sequentially, programs can implement the ability to jump to any section of code.

The most important consequence of these principles is that now the program was no longer a permanent part of the machine (like, for example, a calculator). It became possible to easily change the program. But the equipment, of course, remains unchanged and very simple.

By comparison, the program of the ENIAC computer (which did not have a stored program) was determined by special jumpers on the panel. It could take more than one day to reprogram the machine (set jumpers differently). And although programs for modern computers can take years to write, they work on millions of computers after a few minutes of installation on the hard drive.

How does a von Neumann machine work?

A von Neumann machine consists of a storage device (memory), an arithmetic logic unit (ALU), a control unit (CU), and input and output devices.

Programs and data are entered into memory from the input device through the arithmetic logic unit. All program commands are written to adjacent memory cells, while the data to be processed can be contained in arbitrary cells. In any program, the last command must be the halt command.

A command consists of an indication of which operation should be performed (from the operations possible on the given hardware), the addresses of the memory cells storing the data on which the specified operation is to be performed, and the address of the cell where the result should be written (if it needs to be saved in memory).
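As an illustration, such a command can be sketched as a simple record; this is a minimal sketch assuming an invented three-address format (the operation name and cell numbers below are made up, not any real machine's instruction set):

```python
# A hypothetical three-address command, as described above: the operation to
# perform, the addresses of the two operand cells, and the address of the
# result cell. The field names are illustrative assumptions.
from collections import namedtuple

Command = namedtuple("Command", ["op", "addr1", "addr2", "result_addr"])

# "Add the contents of cells 10 and 11 and write the sum to cell 12."
add_cmd = Command(op="ADD", addr1=10, addr2=11, result_addr=12)
print(add_cmd)
```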

The arithmetic logic unit performs the operations specified by the instructions on the specified data.

From the arithmetic logic unit, results are sent to memory or to an output device. The fundamental difference between memory and an output device is that in memory data is stored in a form convenient for processing by the computer, while it is sent to output devices (printer, monitor, etc.) in a form convenient for a person.

The control unit controls all parts of the computer. It sends the other devices signals telling them "what to do", and receives from them information about their status.

The control unit contains a special register (cell) called the "program counter". After the program and data are loaded into memory, the address of the first instruction of the program is written to the program counter. The control unit reads from memory the contents of the cell whose address is held in the program counter and places it in a special device, the "command register". The control unit determines the operation of the command, fetches from memory the data whose addresses are specified in the command, and controls its execution. The operation is performed by the ALU or other computer hardware.

As a result of executing any command, the program counter increases by one and therefore points to the next command of the program. When it is necessary to execute a command that is not the next one in order but is separated from the current one by some number of addresses, a special jump command contains the address of the cell to which control must be transferred.
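The fetch-execute cycle with a program counter, as just described, can be sketched in a few lines of Python; this is a toy model under an assumed instruction set (ADD, JUMP, STOP), not a real machine:

```python
# A minimal sketch of the von Neumann cycle: fetch the command the program
# counter points to, execute it, and advance (or jump). The memory layout and
# the commands are invented for illustration.
memory = {
    0: ("ADD", 10, 11, 12),   # mem[12] = mem[10] + mem[11]
    1: ("JUMP", 3),           # load 3 into the program counter
    2: ("ADD", 12, 12, 13),   # skipped because of the jump
    3: ("STOP",),
    10: 2, 11: 3, 12: 0, 13: 0,
}

pc = 0  # program counter: address of the first command
while True:
    command = memory[pc]      # fetch the contents of the cell the counter points to
    op = command[0]
    if op == "STOP":
        break
    if op == "ADD":
        _, a, b, res = command
        memory[res] = memory[a] + memory[b]
        pc += 1               # ordinary command: the counter moves to the next cell
    elif op == "JUMP":
        pc = command[1]       # jump command: the counter receives the target address

print(memory[12])  # -> 5
```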

Von Neumann's principles

The principle of memory homogeneity

Commands and data are stored in the same memory and are externally indistinguishable in memory. They can only be recognized by the method of use; that is, the same value in a memory cell can be used as data, as a command, and as an address, depending only on the way it is accessed. This allows you to perform the same operations on commands as on numbers, and, accordingly, opens up a number of possibilities. Thus, by cyclically changing the address part of the command, it is possible to access successive elements of the data array. This technique is called command modification and is not recommended from the standpoint of modern programming. More useful is another consequence of the principle of homogeneity, when instructions from one program can be obtained as a result of the execution of another program. This possibility underlies translation - the translation of program text from a high-level language into the language of a specific computer.
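The last consequence, one program producing the instructions of another, can be shown with a deliberately trivial sketch; the one-line "source language" here is an assumption for illustration, not a real translator:

```python
# One program generates the text of another program and then executes it,
# illustrating that code can be produced and handled as ordinary data.
expression = "3 + 4 * 2"                      # "high-level" input text
generated_source = f"result = {expression}"   # program text built as data

namespace = {}
exec(generated_source, namespace)             # the generated text runs as code
print(namespace["result"])                    # -> 11
```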

The addressing principle

Structurally, the main memory consists of numbered cells, and any cell is available to the processor at any time. Binary codes of commands and data are divided into units of information called words and stored in memory cells; to access them, the numbers of the corresponding cells (addresses) are used.

Program control principle

All calculations provided for by the algorithm for solving the problem must be presented in the form of a program consisting of a sequence of control words - commands. Each command prescribes some operation from a set of operations implemented by the computer. Program commands are stored in sequential memory cells of the computer and are executed in a natural sequence, that is, in the order of their position in the program. If necessary, using special commands, this sequence can be changed. The decision to change the order of execution of program commands is made either based on an analysis of the results of previous calculations, or unconditionally.

Binary coding principle

According to this principle, all information, both data and commands, is encoded with binary digits 0 and 1. Each type of information is represented by a binary sequence and has its own format. A sequence of bits in a format that has a specific meaning is called a field. In numeric information, there is usually a sign field and a significant digits field. In the command format, two fields can be distinguished: the operation code field and the addresses field.
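A minimal sketch of such a field layout, assuming an invented 16-bit command format (a 4-bit operation code field and two 6-bit address fields; the widths are arbitrary choices for illustration):

```python
# Packing and unpacking the fields of a hypothetical 16-bit command word.
OPCODE_BITS, ADDR_BITS = 4, 6

def encode(opcode, addr1, addr2):
    # place the operation code in the high bits, the two addresses below it
    return (opcode << (2 * ADDR_BITS)) | (addr1 << ADDR_BITS) | addr2

def decode(word):
    mask = (1 << ADDR_BITS) - 1
    return word >> (2 * ADDR_BITS), (word >> ADDR_BITS) & mask, word & mask

word = encode(0b0011, 10, 42)
print(f"{word:016b}")   # the whole command as one binary sequence
print(decode(word))     # -> (3, 10, 42)
```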

Another truly revolutionary idea, the importance of which is difficult to overestimate, is the "stored program" principle proposed by von Neumann. Initially, the program was set by installing jumpers on a special patch panel. This was a very labor-intensive task: for example, it took several days to change the program of the ENIAC machine (while the calculation itself could not last more than a few minutes - the vacuum tubes failed). Von Neumann was the first to realize that a program could also be stored as a series of zeros and ones, in the same memory as the numbers it processed. The absence of a fundamental difference between the program and the data made it possible for the computer to form a program for itself in accordance with the results of its calculations.

Von Neumann not only put forward the fundamental principles of the logical structure of a computer but also proposed a structure that was reproduced during the first two generations of computers. The main blocks according to von Neumann are the control unit (CU) and the arithmetic logic unit (ALU) (usually combined into a central processor), memory, external memory, and input and output devices. The design diagram of such a computer is shown in Fig. 1. It should be noted that external memory differs from input and output devices in that data is entered into it in a form convenient for the computer but inaccessible to direct perception by a person. Thus, a magnetic disk drive belongs to external memory, while the keyboard is an input device and the display and printer are output devices.

Fig. 1. Computer architecture built on von Neumann principles. Solid lines with arrows indicate the direction of information flows; dotted lines indicate control signals from the processor to other computer nodes

In modern computers, the control unit and the arithmetic logic unit are combined into one unit, the processor, which converts information coming from memory and external devices (this includes retrieving instructions from memory, encoding and decoding them, performing various operations, including arithmetic ones, and coordinating the operation of computer nodes). The functions of the processor are discussed in more detail below.

The memory stores information (data) and programs. The storage system in modern computers is "multi-tiered" and includes random access memory (RAM), which stores the information the computer is working with directly at a given time (the executable program, part of the data it needs, some control programs), and external storage devices (ESDs) of much larger capacity than RAM, but with significantly slower access (and a significantly lower cost per byte of stored information). The classification of memory devices does not end with RAM and ESDs: certain functions are also performed by cache (ultra-fast memory), ROM (read-only memory) and other subtypes of computer memory.

In a computer built according to the described scheme, instructions are sequentially read from memory and executed. The number (address) of the memory cell from which the next program command will be fetched is indicated by a special device, the command counter in the control unit. Its presence is also one of the characteristic features of the architecture in question.

The fundamentals of the architecture of computing devices developed by von Neumann turned out to be so fundamental that they received the name “von Neumann architecture” in the literature. The vast majority of computers today are von Neumann machines. The only exceptions are certain types of systems for parallel computing, in which there is no program counter, the classical concept of a variable is not implemented, and there are other significant fundamental differences from the classical model (examples include streaming and reduction computers).

Apparently, a significant deviation from the von Neumann architecture will occur as a result of the development of the idea of fifth-generation machines, in which information processing is based not on calculations but on logical inference.

The first adding machine, capable of performing addition and subtraction, was built by the famous French scientist and philosopher Blaise Pascal. The main element in it was the gear wheel, whose invention in itself became a key event in the history of computing technology. It is worth noting that evolution in the field of computing technology is uneven and spasmodic: periods of accumulation of strength are replaced by breakthroughs in development, after which a period of stabilization begins, during which the achieved results are used practically and, at the same time, knowledge and strength are accumulated for the next leap forward. After each revolution, the process of evolution reaches a new, higher level.

In 1671, the German philosopher and mathematician Gottfried Leibniz created an adding machine based on a gear wheel of a special design - the Leibniz wheel. Unlike the machines of his predecessors, Leibniz's adding machine performed all four basic arithmetic operations. This period then ended, and for almost a century and a half humanity accumulated strength and knowledge for the next round of evolution of computing technology. The 18th and 19th centuries were a time when various sciences, including mathematics and astronomy, developed rapidly. They often involved tasks that required time-consuming and labor-intensive calculations.

Another famous person in the history of computing was the English mathematician Charles Babbage. In 1823, Babbage began working on a machine for calculating polynomials which, more interestingly, in addition to directly performing calculations, was supposed to output the results - print them on a negative plate for photographic printing. It was planned that the machine would be powered by a steam engine. Due to technical difficulties, Babbage was unable to complete this project. Here, for the first time, the idea arose of using an external (peripheral) device to output calculation results. Note that another scientist, Georg Scheutz, did implement the machine conceived by Babbage in 1853 (it turned out even smaller than planned). Babbage probably enjoyed the creative search for new ideas more than turning them into something material. In 1834, he outlined the principles of operation of another machine, which he called the "Analytical Engine". Technical difficulties again prevented him from fully realizing his ideas; Babbage was only able to bring the machine to the experimental stage. But it is the idea that is the engine of scientific and technological progress. Babbage's Analytical Engine was the embodiment of the following ideas:

Production process control. This idea was borrowed from the Jacquard loom, whose operation was controlled by a special paper tape: the pattern of the fabric changed depending on the combination of holes on the tape. This tape became the predecessor of information carriers familiar to us all, such as punched cards and punched tapes.

Programmability. The machine was likewise controlled by a special paper tape with holes. The order of the holes on it determined the commands and the data processed by these commands. The machine had an arithmetic unit and memory. Its commands even included a conditional jump command, which changed the course of calculations depending on intermediate results.

Countess Ada Augusta Lovelace, who is considered the world's first programmer, took part in the development of this machine.

Charles Babbage's ideas were developed and used by other scientists. Thus, in 1890, on the threshold of the 20th century, the American Herman Hollerith developed a machine that worked with data tables (the first Excel?). The machine was controlled by a program on punched cards and was used in the 1890 US Census. In 1896, Hollerith founded the company that was the predecessor of the IBM Corporation. With the death of Babbage, another break came in the evolution of computing technology, which lasted until the 1930s. Subsequently, the entire development of mankind became unthinkable without computers.

In 1938, the center of development briefly shifted from America to Germany, where Konrad Zuse created a machine that, unlike its predecessors, operated with binary rather than decimal numbers. This machine was still mechanical, but its undoubted advantage was that it implemented the idea of processing data in binary code. Continuing his work, Zuse created an electromechanical machine in 1941, whose arithmetic unit was built on relays. The machine could perform floating-point operations.

Overseas, in America, work was also underway during this period to create similar electromechanical machines. In 1944, Howard Aiken designed a machine called the Mark-1. Like Zuse's machine, it worked on relays. But because it was clearly created under the influence of Babbage's work, it operated with data in decimal form.

Naturally, due to the high proportion of mechanical parts, these machines were doomed to obsolescence.

Four generations of computers

By the end of the thirties of the 20th century, the need for automation of complex computing processes increased greatly. This was facilitated by the rapid development of such industries as aircraft manufacturing, nuclear physics and others. From 1945 to the present day, computer technology has gone through 4 generations in its development:

First generation

The first generation (1945-1954) - vacuum tube computers. These are prehistoric times, the era of the emergence of computer technology. Most of the first generation machines were experimental devices and were built to test certain theoretical principles. The weight and size of these computer dinosaurs, which often required separate buildings for themselves, have long become a legend.

Beginning in 1943, a group of specialists led by J. Mauchly and J. P. Eckert in the USA began to design a computer based on vacuum tubes rather than electromagnetic relays. This machine was called ENIAC (Electronic Numerical Integrator and Computer), and it worked a thousand times faster than the Mark-1. ENIAC contained 18 thousand vacuum tubes, occupied an area of 9 x 15 meters, weighed 30 tons and consumed 150 kilowatts of power. ENIAC also had a significant drawback: it was controlled using a patch panel, it had no program memory, and setting up a program required hours or even days of connecting wires in the right way. The worst of all its shortcomings was its horrific unreliability, since about a dozen vacuum tubes would fail in a day of operation.

To simplify the process of setting up programs, Mauchly and Eckert began to design a new machine that could store a program in its memory. In 1945, the famous mathematician John von Neumann was brought into the work and prepared a report on this machine, in which he clearly and simply formulated the general principles of the functioning of universal computing devices, i.e. computers. Meanwhile ENIAC, the first operational machine built on vacuum tubes, was officially put into operation on February 15, 1946. It was used to solve some problems prepared by von Neumann and related to the atomic bomb project. It was then transported to the Aberdeen Proving Ground, where it operated until 1955.

ENIAC became the first representative of the 1st generation of computers. Any classification is conditional, but most experts agreed that generations should be distinguished based on the elemental base on which the machines are built. Thus, the first generation appears to be tube machines.

It is necessary to note the enormous role of the American mathematician von Neumann in the development of first-generation technology. It was necessary to understand the strengths and weaknesses of ENIAC and make recommendations for subsequent developments. The report by von Neumann and his colleagues H. Goldstine and A. Burks (June 1946) clearly formulated the requirements for the structure of computers. Many of the provisions of this report came to be called von Neumann's principles.

The first projects of domestic computers were proposed by S.A. Lebedev and B.I. Rameev in 1948. In 1949-51, MESM (Small Electronic Calculating Machine) was built according to S.A. Lebedev's design. The first test run of a prototype took place in November 1950, and the machine was put into operation in 1951. MESM worked in the binary system with a three-address command system, and the calculation program was stored in an operational storage device. Lebedev's machine, with parallel word processing, was a fundamentally new solution. It was one of the first computers in the world and the first on the European continent with a stored program.

First-generation computers also include BESM-1 (Large Electronic Calculating Machine), whose development under the leadership of S.A. Lebedev was completed in 1952; it contained 5 thousand vacuum tubes and could work without failure for 10 hours. Its performance reached 10 thousand operations per second (Appendix 1).

Almost simultaneously, the Strela computer (Appendix 2) was designed under the leadership of Yu.Ya. Bazilevsky; in 1953 it was put into production. Later the Ural-1 computer appeared (Appendix 3), which marked the beginning of a large series of Ural machines, developed and put into production under the leadership of B.I. Rameev. In 1958, the first-generation computer M-20 (with a speed of up to 20 thousand operations/s) was put into serial production.

First-generation computers had speeds of several tens of thousands of operations per second. Ferrite cores were used as internal memory, while ALUs and control units were built on vacuum tubes. The speed of the computer was determined by its slowest component, the internal memory, and this reduced the overall effect.

The first generation computers were oriented towards performing arithmetic operations. When trying to adapt them to analysis tasks, they turned out to be ineffective.

There were no programming languages as such yet, and programmers coded their algorithms in machine instructions or assembly language. This complicated and slowed down the programming process.

By the end of the 1950s, programming tools underwent fundamental changes: a transition was made to the automation of programming using universal languages and libraries of standard programs. The use of universal languages led to the emergence of translators.

Programs were executed task by task, i.e. the operator had to monitor the progress of each task and, when it ended, initiate the next one.

Second generation

In the second generation of computers (1955-1964), transistors were used instead of vacuum tubes, and magnetic cores and magnetic drums, the distant ancestors of modern hard drives, began to be used as memory devices. All this made it possible to sharply reduce the size and cost of computers, which then began to be built for sale for the first time.

But the main achievements of this era belong to the field of software. On second-generation computers, what is now called an operating system first appeared. At the same time, the first high-level languages were developed: Fortran, Algol, Cobol. These two important improvements made writing computer programs much easier and faster; programming, while remaining a science, acquired the features of a craft.

Accordingly, the scope of computer applications expanded. Now it was no longer only scientists who could count on access to computing technology; computers were used in planning and management, and some large firms even computerized their accounting, anticipating the fashion by twenty years.

Semiconductors became the elemental base of the second generation. Without a doubt, transistors can be considered one of the most impressive miracles of the 20th century.

A patent for the discovery of the transistor was issued in 1948 to the Americans J. Bardeen and W. Brattain, and eight years later they, together with the theorist W. Shockley, became Nobel Prize laureates. The switching speeds of the very first transistor elements turned out to be hundreds of times higher than those of tube elements, as did their reliability and efficiency. For the first time, memory on ferrite cores and thin magnetic films came into wide use, and inductive elements, parametrons, were tested.

The first on-board computer for installation on an intercontinental missile, Atlas, was put into operation in the United States in 1955. The machine used 20 thousand transistors and diodes and consumed 4 kilowatts. In 1961, ground-based Burroughs computers controlled the space flights of Atlas rockets, and IBM machines controlled the flight of astronaut Gordon Cooper. Computers also controlled the flights of the unmanned Ranger spacecraft to the Moon in 1964, as well as the Mariner spacecraft to Mars. Soviet computers performed similar functions.

In 1956, IBM developed floating magnetic heads on an air cushion. Their invention made it possible to create a new type of memory - disk storage devices, whose importance was fully appreciated in the following decades of the development of computing technology. The first disk storage devices appeared in the IBM 305 RAMAC machine (Appendix 4). It had a stack of 50 magnetically coated metal disks that rotated at a speed of 1,200 rpm. The surface of each disk contained 100 tracks for recording data, each holding 10,000 characters.

The first mass-produced mainframe computers with transistors were released in 1958 simultaneously in the USA, Germany and Japan.

The first minicomputers appeared (for example, the PDP-8 (Appendix 5)).

In the Soviet Union, the first tubeless machines, "Setun", "Razdan" and "Razdan-2", were created in 1959-1961. In the 1960s, Soviet designers developed about 30 models of transistor computers, most of which went into mass production. The most powerful of them, Minsk-32, performed 65 thousand operations per second. Entire families of machines appeared: "Ural", "Minsk", BESM.

The record holder among second-generation computers was BESM-6 (Appendix 6), which had a speed of about a million operations per second - one of the highest in the world. The architecture and many technical solutions of this computer were so progressive and so far ahead of their time that it was used successfully almost up to the present day.

Especially for the automation of engineering calculations, the MIR (1966) and MIR-2 (1969) computers were developed at the Institute of Cybernetics of the Academy of Sciences of the Ukrainian SSR under the leadership of Academician V.M. Glushkov. An important feature of the MIR-2 machine was the use of a television screen for visual monitoring of information and a light pen, with which data could be corrected directly on the screen.

The construction of such systems, which included about 100 thousand switching elements, would simply have been impossible with tube technology. Thus, the second generation was born in the depths of the first, adopting many of its features. However, by the mid-1960s the boom in transistor production reached its maximum - the market became saturated. The assembly of electronic equipment was a very labor-intensive and slow process that did not lend itself well to mechanization and automation. Thus, the conditions were ripe for a transition to a new technology that could accommodate the increasing complexity of circuits by eliminating the traditional connections between their elements.

Third generation

Finally, in the third generation of computers (1965-1974), integrated circuits began to be used for the first time - entire devices and units of tens and hundreds of transistors made on a single semiconductor crystal (what are now called microcircuits). At the same time, semiconductor memory appeared, which is still used in personal computers as RAM to this day. Priority in the invention of integrated circuits, which became the elemental base of third-generation computers, belongs to the American scientists J. Kilby and R. Noyce, who made this discovery independently of each other. Mass production of integrated circuits began in 1962, and in 1964 the transition from discrete to integrated elements began to proceed rapidly. The aforementioned ENIAC, measuring 9 x 15 meters, could in 1971 have been assembled on a wafer of 1.5 square centimeters. The transformation of electronics into microelectronics had begun.

During these years, computer production acquired an industrial scale. IBM, which had become the leader, was the first to implement a family of computers - a series fully compatible with each other, from the smallest, the size of a small closet (nothing smaller was made at the time), to the most powerful and expensive models. The most widespread in those years was the System/360 family from IBM, on the basis of which the ES series of computers was developed in the USSR. In 1973, the first computer of the ES series was released, and from 1975 the models ES-1012, ES-1032, ES-1033 and ES-1022 appeared, followed later by the more powerful ES-1060.

As part of the third generation, a unique machine, ILLIAC-IV, was built in the USA; in its original version it was planned to use 256 data processing devices made on monolithic integrated circuits. The project was later changed because of its rather high cost (more than $16 million): the number of processors had to be reduced to 64, and the design switched to integrated circuits with a low degree of integration. A shortened version of the project was completed in 1972; the nominal speed of ILLIAC-IV was 200 million operations per second. For almost a year, this computer held the record for computing speed.

Back in the early 60s, the first minicomputers appeared - small, low-power computers affordable for small firms or laboratories. Minicomputers represented the first step towards personal computers, prototypes of which were released only in the mid-70s. The well-known family of PDP minicomputers from Digital Equipment served as the prototype for the Soviet SM series of machines.

Meanwhile, the number of elements and connections between them that fit in one microcircuit was constantly growing, and in the 70s, integrated circuits already contained thousands of transistors. This made it possible to combine most of the computer components into a single small part - which is what Intel did in 1971, releasing the first microprocessor, which was intended for desktop calculators that had just appeared. This invention was destined to produce a real revolution in the next decade - after all, the microprocessor is the heart and soul of our personal computer.

But that is not all - truly, the turn of the 1960s and 1970s was a fateful time. In 1969, the first global computer network was born - the embryo of what we now call the Internet. And in the same 1969, the Unix operating system appeared, soon followed by the C programming language; both had a huge impact on the software world and still maintain their leading positions.

Fourth generation

Another change in the element base led to a change of generations. In the 1970s, work was actively underway on large-scale and very-large-scale integrated circuits (LSI and VLSI), which made it possible to place tens of thousands of elements on a single chip. This led to a further significant reduction in the size and cost of computers. Working with software became more user-friendly, which led to an increase in the number of users.

In principle, with such a degree of integration of elements, it became possible to try to create a functionally complete computer on a single chip. Appropriate attempts were made, although they were mostly met with an incredulous smile. Probably, there would be fewer of these smiles if it were possible to foresee that it was this idea that would cause the extinction of mainframe computers in just a decade and a half.

However, in the early 1970s Intel released the 4004 microprocessor (MP). And if before that there had been only three directions in the world of computing (supercomputers, mainframes and minicomputers), now a fourth was added: microprocessors. In general, a processor is a functional unit of a computer designed for the logical and arithmetic processing of information based on the principle of microprogram control. By hardware implementation, processors can be divided into microprocessors (which fully integrate all processor functions on one chip) and processors of low and medium integration, in which these functions are distributed across a large number of chips.

So, the first microprocessor 4004 was created by Intel at the turn of the 70s. It was a 4-bit parallel computing device, and its capabilities were severely limited. The 4004 could perform four basic arithmetic operations and was initially used only in pocket calculators. Later, its scope of application was expanded to include use in various control systems (for example, to control traffic lights). Intel, having correctly foreseen the promise of microprocessors, continued intensive development, and one of its projects ultimately led to major success, which predetermined the future path of development of computer technology.

This was the project to develop the 8-bit 8080 processor (1974). This microprocessor had a fairly developed command system and could perform division. It was used in the Altair personal computer, for which the young Bill Gates wrote one of his first BASIC interpreters. Probably, it is from this moment that the 5th generation should be counted.

Fifth generation

The transition to fifth-generation computers implied a transition to new architectures aimed at creating artificial intelligence.

It was believed that the fifth generation computer architecture would contain two main blocks. One of them is the computer itself, in which communication with the user is carried out by a unit called the “intelligent interface”. The task of the interface is to understand text written in natural language or speech, and translate the problem statement thus stated into a working program.

Basic requirements for 5th generation computers:

    Creation of a developed human-machine interface (speech recognition, image recognition);

    Development of logic programming for creating knowledge bases and artificial intelligence systems;

    Creation of new technologies in the production of computer equipment;

    Creation of new computer architectures and computing systems.

New technical capabilities of computing technology were expected to expand the range of tasks to be solved and make it possible to move on to the task of creating artificial intelligence. One of the components necessary for creating artificial intelligence is knowledge bases (databases) in various areas of science and technology. Creating and using databases requires high-speed computing systems and a large amount of memory. General-purpose computers are capable of performing high-speed calculations, but are not suitable for high-speed comparison and sorting operations on large volumes of records, usually stored on magnetic disks. To create programs that fill, update and work with databases, special object-oriented and logical programming languages were created that provide the greatest capabilities compared to conventional procedural languages. The structure of these languages requires a transition from the traditional von Neumann computer architecture to architectures that take into account the requirements of the tasks of creating artificial intelligence.

The class of supercomputers includes computers that have the maximum performance at the time of their release, or the so-called 5th generation computers.

The first supercomputers appeared already among second-generation computers (1955-1964, see second generation computers); they were designed to solve complex problems that required high calculation speeds. These were the LARC from UNIVAC, the Stretch from IBM and the CDC-6600 (the CYBER family) from Control Data Corporation. They used parallel processing methods (increasing the number of operations performed per unit of time), instruction pipelining (when, during the execution of one instruction, the next is read from memory and prepared for execution), and parallel processing with a complex processor structure consisting of a matrix of data processors and a special control processor that distributes tasks and controls the flow of data in the system. Computers that run several programs in parallel using several microprocessors are called multiprocessor systems. Until the mid-1980s, the list of the world's largest supercomputer manufacturers included Sperry Univac and Burroughs. The former is known, in particular, for its UNIVAC-1108 and UNIVAC-1110 mainframes, which were widely used in universities and government organizations.

Following the merger of Sperry Univac and Burroughs, the combined company UNISYS continued to support both mainframe lines while maintaining upward compatibility in each. This is a clear indication of the immutable rule that supported the development of mainframes: preserving the functionality of previously developed software.

Intel is also famous in the world of supercomputers: its Paragon multiprocessor computers became just as classic in the family of distributed-memory multiprocessor structures.

Von Neumann principles

In 1946, J. von Neumann, H. Goldstine and A. Burks, in their joint article, outlined new principles for the construction and operation of computers. The first two generations of computers were subsequently produced on the basis of these principles. Later generations brought some changes, although von Neumann's principles are still relevant today. In fact, von Neumann managed to summarize the scientific developments and discoveries of many other scientists and formulate fundamentally new principles on their basis:
1. The principle of representing and storing numbers.
The binary number system is used to represent and store numbers. The advantage over the decimal number system is that the bit is easy to implement, large-capacity bit memory is quite cheap, devices can be made quite simple, and arithmetic and logical operations in the binary number system are also quite simple.
2. The principle of computer program control.
The operation of the computer is controlled by a program consisting of a set of commands. Commands are executed sequentially one after another. Commands process data stored in computer memory.
3. Stored program principle.
Computer memory is used not only to store data, but also programs. In this case, both program commands and data are encoded in the binary number system, i.e. their recording method is the same. Therefore, in certain situations, you can perform the same actions on commands as on data.
4. The principle of direct memory access.
Computer RAM cells have sequentially numbered addresses. At any time, you can access any memory cell by its address.
5. The principle of branching and cyclic calculations.
Conditional jump commands allow you to implement a transition to any section of code, thereby providing the ability to organize branching and re-execute certain sections of the program.
The most important consequence of these principles is that now the program was no longer a permanent part of the machine (like, for example, a calculator). It became possible to easily change the program. But the equipment, of course, remains unchanged and very simple. By comparison, the program of the ENIAC computer (which did not have a stored program) was determined by special jumpers on the panel. It could take more than one day to reprogram the machine (set jumpers differently).
And although programs for modern computers can take months to develop, their installation (installation on a computer) takes several minutes, even for large programs. Such a program can be installed on millions of computers and run on each of them for years.

Applications

Appendix 1

Computer BESM-1

Appendix 2

Computer “Strela”

Appendix 3

Computer “Ural-1”

Appendix 4

IBM 305 RAMAC

Appendix 5

Minicomputer PDP-8

Appendix 6

Computer BESM-6


The construction of the vast majority of computers is based on the following general principles, formulated in 1945 by the American scientist John von Neumann (Figure 8.5). These principles were first published in his proposals for the EDVAC machine. This computer was one of the first stored program machines, i.e. with a program stored in the machine's memory, rather than read from a punched card or other similar device.

Figure 8.5 - John von Neumann, 1945

1. Program control principle. It follows from this principle that the program consists of a set of commands that are executed by the processor automatically, one after another, in a certain sequence.

A program is retrieved from memory using a program counter. This processor register sequentially increases the address of the next instruction stored in it by the instruction length.

And since the program commands are located in memory one after another, a chain of commands is thereby organized from sequentially located memory cells.

If, after executing a command, it is necessary to move not to the next one, but to some other memory cell, conditional or unconditional jump commands are used, which enter the number of the memory cell containing the next command into the command counter. Fetching commands from memory stops after reaching and executing the “stop” command.

Thus, the processor executes the program automatically, without human intervention.

According to John von Neumann, a computer should consist of a central arithmetic-logical unit, a central control unit, a storage device, and an information input/output device. A computer, in his opinion, should work with binary numbers and be electronic (not electrical); perform operations sequentially.

All calculations prescribed by the algorithm for solving the problem must be presented in the form of a program consisting of a sequence of control words-commands. Each command contains instructions for the specific operation being performed, the location (addresses) of the operands and a number of service characteristics. Operands are variables whose values ​​are involved in data transformation operations. A list (array) of all variables (input data, intermediate values ​​and calculation results) is another integral element of any program.

To access programs, instructions and operands, their addresses are used. The addresses are the numbers of computer memory cells intended for storing objects. Information (command and data: numeric, text, graphic, etc.) is encoded with binary digits 0 and 1.



Therefore, various types of information located in computer memory are practically indistinguishable; their identification is possible only when the program is executed, according to its logic, in context.

2. The principle of memory homogeneity. Programs and data are stored in the same memory. Therefore, the computer does not distinguish between what is stored in a given memory cell - a number, text or a command. You can perform the same actions on commands as on data. This opens up a whole range of possibilities. For example, a program can also be processed during its execution, which allows rules for obtaining some of its parts to be set in the program itself (this is how the execution of cycles and subroutines is organized). Moreover, the commands of one program can be obtained as the results of the execution of another program. Translation methods are based on this principle: translating program text from a high-level programming language into the language of a specific machine.

3. The addressing principle. Structurally, main memory consists of numbered cells; any cell is available to the processor at any time. This implies the ability to assign names to memory areas so that the values stored in them can later be accessed or changed during program execution using the assigned names.

Von Neumann's principles can be implemented in practice in many different ways. Here we present two of them: a computer with bus organization and a computer with channel organization. Before describing the principles of computer operation, we introduce several definitions.

The architecture of a computer is its description at some general level, including a description of user programming capabilities, command systems, addressing systems, memory organization, etc. The architecture determines the principles of operation, the information links and the interconnection of the main logical nodes of a computer: the processor, RAM, external storage and peripheral devices. A common architecture of different computers ensures their compatibility from the user's point of view.

Computer structure is a set of its functional elements and connections between them. The elements can be a wide variety of devices - from the main logical nodes of a computer to the simplest circuits. The structure of a computer is graphically represented in the form of block diagrams, with the help of which you can describe the computer at any level of detail.

The term computer configuration is also used very often; it means the layout of a computing device with a clear definition of the nature, quantity, relationships and main characteristics of its functional elements. The term computer organization defines how the computer's capabilities are implemented.

A command is the collection of information necessary for the processor to perform a specific action when executing a program.

A command consists of an operation code, containing an indication of the operation to be performed, and several address fields, containing indications of the locations of the command's operands.

The method of calculating an address from the information contained in the address field of a command is called the addressing mode. The set of commands implemented in a given computer forms its command system.
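How an addressing mode turns an address field into an operand location can be sketched briefly; the three modes below are common textbook ones chosen for illustration, not the command system of any particular computer:

```python
# A hypothetical decoder: given an addressing mode and the command's address
# field, return the memory address of the operand.
registers = [0] * 8

def operand_address(mode, field):
    if mode == "direct":             # the field is the memory address itself
        return field
    if mode == "register-indirect":  # the field names a register holding the address
        return registers[field]
    if mode == "indexed":            # the field is a base added to an index register
        return field + registers[1]
    raise ValueError(f"unknown mode: {mode}")

registers[1] = 5    # index register
registers[2] = 30   # register holding an address
print(operand_address("direct", 12))            # -> 12
print(operand_address("register-indirect", 2))  # -> 30
print(operand_address("indexed", 12))           # -> 17
```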

Turing machine

A Turing machine (TM) is an abstract executor (abstract computing machine). It was proposed by Alan Turing in 1936 to formalize the concept of an algorithm.

A Turing machine is an extension of a finite state machine and, according to the Church-Turing thesis, is capable of imitating any executor (by specifying transition rules) that implements a step-by-step computation process in which each step is sufficiently elementary.

The structure of a Turing machine

A Turing machine includes a tape, unlimited in both directions (Turing machines with several infinite tapes are possible), divided into cells, and a control device (also called a read-write head), which can be in one of a set of states. The number of possible states of the control device is finite and precisely specified.

The control device can move left and right along the tape, and read and write symbols of some finite alphabet into the cells. A special blank symbol is singled out; it fills all the cells of the tape except those (a finite number) in which the input data is written.

The control device operates according to transition rules, which represent the algorithm realized by the given Turing machine. Each transition rule instructs the machine, depending on the current state and the symbol observed in the current cell, to write a new symbol into that cell, move to a new state and move one cell to the left or right. Some Turing machine states can be labeled as terminal, and a transition to any of them means the end of the work, a halt of the algorithm.

A Turing machine is called deterministic if each combination of state and tape symbol in the table corresponds to at most one rule, and non-deterministic if there is a "state - tape symbol" pair for which there are two or more instructions.

Description of the Turing machine

A specific Turing machine is defined by listing the elements of the set of letters of the alphabet A, the set of states Q, and the set of rules by which the machine operates. The rules have the form: q_i a_j → q_i1 a_j1 d_k (if the head is in state q_i and the letter a_j is written in the observed cell, then the head goes to state q_i1, a_j1 is written in the cell instead of a_j, and the head makes the movement d_k, which has three options: one cell to the left (L), one cell to the right (R), or staying in place (N)). For every possible configuration there is exactly one rule (for a non-deterministic Turing machine there can be more rules). There are no rules only for the final state; once the machine reaches it, it stops. In addition, the final and initial states, the initial configuration on the tape and the location of the machine's head must be specified.

Example of a Turing machine

Consider, as an example, a TM for multiplying numbers in the unary number system. An entry of the form "q_i a_j → q_i1 a_j1 R/L/N" should be understood as follows: q_i is the state in which the rule is applied, a_j is the symbol in the cell where the head is located, q_i1 is the state to go to, a_j1 is what must be written in the cell, and R/L/N is the movement command. A simulator sketch is given below.
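Since the full multiplication table is lengthy, the sketch below is limited to a simulator for rules of this form, together with a much simpler illustrative machine: it appends one stroke to a unary number (the rules and symbols here are assumed examples, not the multiplication machine itself):

```python
# A minimal Turing machine simulator for rules (state, symbol) -> (new state,
# new symbol, move), with "_" as the blank symbol.
def run_turing_machine(rules, tape, state, final_state, head=0):
    cells = dict(enumerate(tape))            # unwritten cells hold the blank
    while state != final_state:
        symbol = cells.get(head, "_")
        state, new_symbol, move = rules[(state, symbol)]
        cells[head] = new_symbol
        head += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# An illustrative machine: skip the strokes of the input, then write one more.
rules = {
    ("q0", "|"): ("q0", "|", "R"),  # move right over the input strokes
    ("q0", "_"): ("q1", "|", "N"),  # first blank: write a stroke and halt
}
print(run_turing_machine(rules, "|||", "q0", "q1"))  # -> "||||"
```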

Computer architecture by John von Neumann

Von Neumann architecture is the well-known principle of joint storage of commands and data in computer memory. Computing systems of this kind are often referred to as "von Neumann machines", but the correspondence between these concepts is not always unambiguous. In general, when people speak of the von Neumann architecture, they mean the principle of storing data and instructions in one memory.


Processor types

A microprocessor is a device consisting of one or more large integrated circuits (LSIs) that performs the functions of a computer's processor. A classic computing device consists of an arithmetic unit (AU), a control unit (CU), a storage unit (SU) and an input-output (I/O) device.

Intel Celeron 400, Socket 370, in a plastic PPGA case, top view.

There are processors of various architectures.

CISC (Complex Instruction Set Computing) is a processor design concept characterized by the following set of properties:

· a large number of commands of different format and length;

· introduction of a large number of different addressing modes;

· complex instruction encoding.

A CISC processor has to deal with more complex instructions of unequal length. A single CISC instruction can execute faster, but processing several CISC instructions in parallel is more difficult.

Facilitating the debugging of assembler programs entails cluttering the microprocessor unit with nodes. To improve performance, the clock frequency and the degree of integration must be increased, which necessitates improved technology and, as a result, more expensive production.


RISC (Reduced Instruction Set Computing) is a processor with a reduced instruction set. The command system is simplified: all commands have the same format with simple encoding. Memory is accessed via load and store commands; the remaining commands are of the register-register type. A command entering the CPU is already divided into fields and does not require additional decoding.

Part of the crystal is freed up to accommodate additional components. The degree of integration is lower than in the previous architectural variant, so high performance can be achieved at lower clock speeds. Commands clutter RAM less, and the CPU is cheaper. These architectures are not software-compatible, and debugging RISC programs is more difficult. This technology can be implemented in a form software-compatible with CISC technology (for example, superscalar technology).

Because RISC instructions are simple, fewer logic gates are needed to execute them, which ultimately reduces the cost of the processor. But most software today is written and compiled specifically for Intel CISC processors. To use the RISC architecture, current programs must be recompiled and sometimes rewritten.

Clock frequency

Clock frequency is an indicator of the speed at which commands are executed by the central processor.
A clock cycle is the period of time required to perform an elementary operation.

In the recent past, the clock speed of a central processor was identified directly with its performance: the higher the clock speed of the CPU, the more productive it was thought to be. In practice, processors with different frequencies can have the same performance, because they can execute a different number of instructions in one clock cycle (depending on the core design, bus bandwidth and cache memory).
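The point can be shown with simple arithmetic; the frequencies and instructions-per-cycle (IPC) figures below are invented for illustration:

```python
# Performance depends on both clock frequency and how many instructions are
# completed per clock cycle (IPC); the numbers are made-up examples.
def instructions_per_second(clock_hz: float, ipc: float) -> float:
    return clock_hz * ipc

cpu_a = instructions_per_second(3.0e9, 1.0)  # 3 GHz, 1 instruction per cycle
cpu_b = instructions_per_second(2.0e9, 2.0)  # 2 GHz, 2 instructions per cycle
print(cpu_a, cpu_b)  # the lower-clocked CPU B comes out faster: 3e9 vs 4e9
```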

The processor clock speed is proportional to the system bus frequency.

Bit depth

Processor capacity is a value that determines the amount of information that the central processor is capable of processing in one clock cycle.

For example, if the processor is 16-bit, this means that it is capable of processing 16 bits of information in one clock cycle.

I think everyone understands that the higher the processor bit depth, the larger volumes of information it can process.

Typically, the higher the processor capacity, the higher its performance.

Currently, 32- and 64-bit processors are used. The bit width of a processor does not mean that it is obliged to execute commands of exactly the same bit size.
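What a fixed word width means in practice can be sketched as follows; the 16-bit width and the sample value are arbitrary choices for illustration:

```python
# A 16-bit processor handles 16-bit words in one clock cycle, so a 32-bit
# value has to be processed as two halves.
WORD_BITS = 16
MASK = (1 << WORD_BITS) - 1              # 0xFFFF, the largest 16-bit value

value = 0x12345678                       # a 32-bit value
low_word = value & MASK                  # first 16-bit portion:  0x5678
high_word = (value >> WORD_BITS) & MASK  # second 16-bit portion: 0x1234
print(hex(high_word), hex(low_word))
```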

Cache memory

First of all, let's answer the question, what is cache memory?

Cache memory is a high-speed computer memory designed for temporary storage of information (code of executable programs and data) needed by the central processor.

What data is stored in cache memory?

Most frequently used.

What is the purpose of cache memory?

The fact is that RAM performance is much lower than CPU performance. As a result, the processor waits for data to arrive from RAM, which reduces the performance of the processor and hence of the entire system. Cache memory reduces this waiting time by storing the data and executable code that the processor accesses most often (the difference between cache memory and RAM is that the speed of cache memory is tens of times higher).

Cache memory, like regular memory, has a capacity. The higher the cache memory capacity, the larger volumes of data it can work with.

There are three levels of cache memory: cache memory first (L1), second (L2) and third (L3). The first two levels are most often used in modern computers.

Let's take a closer look at all three levels of cache memory.

The first-level cache is the fastest and most expensive memory.

The L1 cache is located on the same chip as the processor, operates at the CPU frequency (hence its speed), and is used directly by the processor core.

The capacity of the first level cache is small (due to its high cost) and is measured in kilobytes (usually no more than 128 KB).

L2 cache is a high-speed memory that performs the same functions as the L1 cache. The difference between L1 and L2 is that the latter has lower speed but larger capacity (from 128 KB to 12 MB), which is very useful for performing resource-intensive tasks.

The L3 cache is located on the motherboard. It is significantly slower than L1 and L2, but faster than RAM. Naturally, the capacity of L3 is greater than that of L1 and L2. A third-level cache is found in very powerful computers.
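The benefit of caching can be sketched with a simple average-access-time model; the latencies below are assumed values for illustration, not measurements of any particular CPU:

```python
# Average memory access time with a cache, using assumed latencies.
CACHE_LATENCY_NS = 1.0   # assumed L1 access time
RAM_LATENCY_NS = 60.0    # assumed RAM access time

def average_access_ns(hit_rate):
    # Hits are served by the cache; misses fall through to RAM.
    return hit_rate * CACHE_LATENCY_NS + (1 - hit_rate) * RAM_LATENCY_NS

for hit_rate in (0.0, 0.90, 0.99):
    print(f"hit rate {hit_rate:.0%}: {average_access_ns(hit_rate):.1f} ns")
# Even a modest hit rate turns ~60 ns accesses into a few ns on average.
```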

Number of Cores

Modern processor manufacturing technologies make it possible to place more than one core in a single package. Several cores significantly increase processor performance, but n cores do not give an n-fold performance increase. Another problem with multi-core processors is that today relatively few programs are written to take advantage of several cores.

A multi-core processor, first of all, enables multitasking: the work of applications is distributed between the processor cores, so each individual core can run its own application.
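A minimal sketch of distributing independent tasks across cores using Python's standard multiprocessing module; note that the speedup is less than n-fold because of startup and coordination overhead:

```python
# Distribute independent CPU-bound tasks across cores with the standard library.
from multiprocessing import Pool

def heavy_task(n):
    # Stand-in for a CPU-bound computation.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8
    with Pool() as pool:          # by default, one worker per available CPU
        results = pool.map(heavy_task, jobs)
    print(len(results), "tasks done")
```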

Motherboard structure

Before choosing a motherboard, you should at least superficially study its structure. It is worth noting, though, that the exact location of the sockets and other parts of the motherboard does not play a special role.

The first thing you should pay attention to is the processor socket. This is a small square recess with a fastener.

For those familiar with the term "overclocking", it is worth checking for a double heatsink. Motherboards often lack one, so anyone who intends to overclock their computer should make sure this element is present on the board.

Elongated PCI-Express slots are designed for video cards, TV tuners, audio and network cards. Video cards require high bandwidth and use PCI-Express X16 connectors. For other adapters, PCI-Express X1 connectors are used.

Expert advice! PCI slots with different bandwidths look almost the same. Look especially carefully at the connectors and read the labels underneath them to avoid sudden disappointments at home when installing a video card.

Smaller connectors are intended for RAM sticks. They are usually colored black or blue.

The board's chipset is usually hidden under the heatsink. This element is responsible for the joint operation of the processor and other parts of the system unit.

The small square connectors on the edge of the board are used to connect the hard drive. On the other side there are connectors for input and output devices (USB, mouse, keyboard, etc.).

Manufacturer

Many companies produce motherboards, and it is almost impossible to single out the best or worst of them. Almost any company's board can be called high-quality; often even unknown manufacturers offer good products.

The secret is that all boards are equipped with chipsets from two companies: AMD and Intel. Moreover, the differences between the chipsets are insignificant and play a role only when solving highly specialized problems.

Form factor

In the case of motherboards, size matters. The standard ATX form factor is found in most home computers. The large size, and, consequently, the presence of a wide range of slots, allows you to improve the basic characteristics of the computer.

The smaller mATX version is less common. Possibilities for improvement are limited.

There is also mITX. This form factor is found in budget office computers. Improving performance is either impossible or makes no sense.

Often processors and boards are sold as a set. However, if the processor was purchased previously, it is important to ensure that it is compatible with the board. By looking at the socket, the compatibility of the processor and motherboard can be determined instantly.

Chipset

The connecting link of all components of the system is the chipset. Chipsets are manufactured by two companies: Intel and AMD. There is not much difference between them. At least for the average user.

Standard chipsets consist of a north bridge and a south bridge. The newest Intel models consist of a north bridge only. This was not done to save money, and it does not reduce the chipset's performance in any way.

The most modern Intel chipsets consist of a single bridge, since most of the controllers (the DDR3 RAM controller, PCI-Express 3.0 and some others) are now located in the processor.

AMD analogues are built on a traditional two-bridge design. For example, the 900 series is equipped with the SB950 south bridge and the 990FX (990X, 970) north bridge.

When choosing a chipset, you should start from the capabilities of the north bridge. Northbridge 990FX can support simultaneous operation of 4 video cards in CrossFire mode. In most cases, such power is excessive. But for fans of heavy games or those who work with demanding graphics editors, this chipset will be most suitable.

The slightly stripped-down version of the 990X can still support two video cards at the same time, but the 970 model works exclusively with one video card.

Motherboard Layout

Structurally, a motherboard includes:

· data processing subsystem;

· power supply subsystem;

· auxiliary (service) blocks and units.

The main components of the motherboard data processing subsystem are shown in Fig. 1.3.14.

1 – processor socket; 2 – front-side bus; 3 – north bridge; 4 – clock generator; 5 – memory bus; 6 – RAM connectors; 7 – IDE (ATA) connectors; 8 – SATA connectors; 9 – south bridge; 10 – IEEE 1394 connectors; 11 – USB connectors; 12 – Ethernet network connector; 13 – audio connectors; 14 – LPC bus; 15 – Super I/O controller; 16 – PS/2 port; 17 – parallel port; 18 – serial ports; 19 – Floppy Disk connector; 20 – BIOS; 21 – PCI bus; 22 – PCI connectors; 23 – AGP or PCI Express connectors; 24 – internal bus; 25 – AGP/PCI Express bus; 26 – VGA connector.

FPM (Fast Page Mode) is a type of dynamic memory.
Its name reflects its principle of operation: the module allows faster access to data located on the same page as the data transferred in the previous cycle.
These modules were used in most 486-based computers and early Pentium systems around 1995.

EDO (Extended Data Out) modules appeared in 1995 as a new type of memory for computers with Pentium processors.
This is a modified version of FPM.
Unlike its predecessors, EDO begins fetching the next block of memory at the same time it sends the previous block to the CPU.

SDRAM (Synchronous DRAM) is a type of random access memory fast enough to be synchronized with the processor frequency, eliminating wait states.
The microcircuits are divided into two blocks of cells so that while accessing a bit in one block, preparations are in progress for accessing a bit in another block.
If the time to access the first piece of information was 60 ns, all subsequent intervals were reduced to 10 ns.
Starting in 1996, most Intel chipsets began to support this type of memory module, making it very popular until 2001.

SDRAM can operate at 133 MHz, which is almost three times faster than FPM and twice as fast as EDO.
Most computers with Pentium and Celeron processors released in 1999 used this type of memory.

DDR (Double Data Rate) was a development of SDRAM.
This type of memory module first appeared on the market in 2001.
The main difference between DDR and SDRAM is that instead of doubling the clock speed to speed things up, these modules transfer data twice per clock cycle.
Now this is the main memory standard, but it is already beginning to give way to DDR2.
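The effect of transferring data twice per clock is easy to verify with arithmetic; a standard 64-bit module width is assumed:

```python
# Peak bandwidth = bus width (bytes) * clock (MHz) * transfers per clock -> MB/s.
BUS_WIDTH_BYTES = 8  # a standard 64-bit memory module

def peak_bandwidth_mb_s(clock_mhz, transfers_per_clock):
    return BUS_WIDTH_BYTES * clock_mhz * transfers_per_clock

print(peak_bandwidth_mb_s(133, 1))  # SDRAM at 133 MHz: 1064 MB/s
print(peak_bandwidth_mb_s(133, 2))  # DDR at the same clock: 2128 MB/s
```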

DDR2 (Double Data Rate 2) is a newer variant of DDR that should theoretically be twice as fast.
DDR2 memory first appeared in 2003, and chipsets supporting it appeared in mid-2004.
This memory, like DDR, transfers two sets of data per clock cycle.
The main difference between DDR2 and DDR is the ability to operate at significantly higher clock speeds, thanks to improvements in design.
But the modified operating scheme, which makes it possible to achieve high clock frequencies, at the same time increases delays when working with memory.

DDR3 SDRAM (double data rate synchronous dynamic random access memory, third generation) is a type of random access memory used in computing as RAM and video memory.
It replaced DDR2 SDRAM memory.

DDR3 has a 40% reduction in energy consumption compared to DDR2 modules, which is due to the lower (1.5 V, compared to 1.8 V for DDR2 and 2.5 V for DDR) supply voltage of the memory cells.
Reducing the supply voltage is achieved through the use of a 90-nm (initially, later 65-, 50-, 40-nm) process technology in the production of microcircuits and the use of Dual-gate transistors (which helps reduce leakage currents).

DIMMs with DDR3 memory are not mechanically compatible with DDR2 memory modules (the key is located in a different place), so DDR2 cannot be installed in DDR3 slots. This is done to prevent mistaken installation of one type instead of the other, since the two types differ in their electrical parameters.

RAMBUS (RIMM)

RAMBUS (RIMM) is a type of memory that appeared on the market in 1999.
It is based on traditional DRAM but with a radically changed architecture.
The RAMBUS design makes memory access more intelligent, allowing pre-access to data while slightly offloading the CPU.
The main idea used in these memory modules is to receive data in small packets but at a very high clock speed.
For example, SDRAM can transfer 64 bits of information at 100 MHz, and RAMBUS can transfer 16 bits at 800 MHz.
These modules did not become successful as Intel had many problems with their implementation.
RDRAM modules appeared in the Sony Playstation 2 and Nintendo 64 game consoles.
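The wide-and-slow versus narrow-and-fast trade-off from the SDRAM/RAMBUS example above is simple arithmetic:

```python
# Bandwidth = bus width (bits) / 8 * transfer rate (MHz) -> MB/s.
def bandwidth_mb_s(width_bits, rate_mhz):
    return width_bits // 8 * rate_mhz

print(bandwidth_mb_s(64, 100))  # SDRAM: 64 bits at 100 MHz = 800 MB/s
print(bandwidth_mb_s(16, 800))  # RAMBUS: 16 bits at 800 MHz = 1600 MB/s
```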

RAM stands for Random Access Memory - memory that is accessed by address. Successive accesses can target any addresses, so any address (or "cell") can be accessed independently of the others.

Static memory (SRAM) is memory built from static switches. It stores information as long as power is supplied. Typically, at least six transistors are required to store one bit in an SRAM circuit. SRAM is used in small systems (up to several hundred KB of RAM) and where access speed is critical (such as the cache inside processors or on motherboards).

Dynamic memory (DRAM) originated in the early 1970s. It is based on capacitive elements: we can think of DRAM as an array of capacitors controlled by switching transistors. Only one transistor-capacitor pair is needed to store one bit, so DRAM has greater capacity than SRAM (and is cheaper).
DRAM is organized as a rectangular array of cells. To access a cell, the row and the column containing that cell must be selected. Typically the high part of the address points to the row and the low part points to a cell within the row (the "column"). Historically (because of slow speeds and small IC packages in the early 1970s), the address is supplied to the DRAM chip in two phases over the same lines: first the chip receives the row address, and a few nanoseconds later the column address is transmitted on the same lines. The chip then reads the data and places it on its outputs; during a write cycle, the data arrives at the chip together with the column address. Several control lines are used to drive the chip:

· RAS (Row Address Strobe) - latches the row address and activates the entire chip;

· CAS (Column Address Strobe) - latches the column address;

· WE (Write Enable) - indicates that the current access is a write;

· OE (Output Enable) - opens the buffers used to transfer data from the memory chip to the host (processor).
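A sketch of the two-phase addressing just described: the high bits of a flat address select the row (sent with RAS), the low bits select the column (sent with CAS); the array size here is invented:

```python
# Split a flat address into DRAM row and column parts.
# Assume a hypothetical 1024 x 1024 array: 10 row bits + 10 column bits.
COLUMN_BITS = 10

def split_address(addr):
    row = addr >> COLUMN_BITS               # high bits: sent first, with RAS
    col = addr & ((1 << COLUMN_BITS) - 1)   # low bits: sent next, with CAS
    return row, col

print(split_address(0))          # (0, 0)
print(split_address(1_048_575))  # (1023, 1023) - the last cell of the array
```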
FP DRAM

Since each access to classic DRAM requires transferring two addresses, it was too slow for 25 MHz machines. FP (Fast Page) DRAM is a variant of classic DRAM in which the row address does not have to be transferred on every access cycle. As long as the RAS line is active, the row remains selected, and individual cells of that row can be selected by passing only the column address. So, while the memory cells are the same, the access time is shorter because in most cases only one address-transfer phase is needed.

EDO (Extended Data Out) DRAM is a variant of FP DRAM. In FP DRAM, the column address must remain valid during the entire data-transfer period: the data buffers are activated, by the active level of the CAS signal, only while the column address is being transmitted, and data must be read from the memory data bus before the new column address reaches the chip. EDO memory keeps the data in its output buffers after the CAS signal returns to the inactive state and the column address is removed. The address of the next column can therefore be transmitted in parallel with reading the data, allowing consecutive accesses to partially overlap. While EDO memory cells are the same speed as FP DRAM cells, sequential access can be faster, so EDO should be somewhat faster than FP, especially for bulk access (as in graphics applications).

Video RAM can be based on any of the DRAM architectures listed above. In addition to the "normal" access mechanism described above, VRAM has one or two special serial ports, which is why it is often referred to as dual-port or triple-port memory. The serial ports contain registers that can hold the contents of a whole row: data can be transferred between an entire row of the memory array and a register in a single access cycle, and can then be read from or written to the serial register in chunks of any length. Because the register is made of fast static cells, access to it is very fast, usually several times faster than to the memory array. In the most typical application, VRAM serves as the screen memory buffer: the parallel port (the standard interface) is used by the processor, while the serial port transmits data about the points on the display (or reads data from a video source).

WRAM is a proprietary memory architecture developed by Matrox and (if memory serves) Samsung or MoSys. It is similar to VRAM but allows faster access by the host. WRAM was used on Matrox's Millenium and Millenium II graphics cards (but not on the modern Millenium G200).

SDRAM is a complete redesign of DRAM, introduced in the 1990s. The "S" stands for Synchronous: SDRAM implements a completely synchronous (and therefore very fast) interface. Inside, SDRAM contains (usually two) DRAM arrays. Each array has its own Page Register, which is somewhat like the serial-access register of VRAM. SDRAM operates much more intelligently than regular DRAM: the entire circuit is synchronized with an external clock signal, and on each clock tick the chip receives and executes a command transmitted along the command lines. The command-line names remain the same as in classic DRAM chips, but their functions are only similar to the originals. There are commands for transferring data between the memory array and the page registers, and for accessing data in the page registers. Access to a page register is very fast - modern SDRAMs can transfer a new word of data every 6..10 ns.

Synchronous Graphics RAM is a variant of SDRAM designed for graphics applications. The hardware structure is almost identical, so in most cases SDRAM and SGRAM are interchangeable (see the Matrox G200 cards: some use SD, others SG). The difference is in the functions performed by the page register. SGRAM can write multiple locations in a single cycle (which allows very fast color fills and screen clearing) and can write only selected bits within a word (the bits are selected by a bit mask stored by the interface circuit). So SGRAM is faster in graphics applications, although not physically faster than SDRAM in "normal" use. The additional features of SGRAM are used by graphics accelerators; I think the screen-clearing and Z-buffer capabilities in particular are very useful.

RAMBUS (RDRAM)

RAMBUS (a trademark of RAMBUS, Inc.) began development in the 1980s, so it is not new. Modern RAMBUS technology combines old but very good ideas with today's memory production technology. RAMBUS is based on a simple idea: take any good DRAM, build a static buffer into the chip (as in VRAM and SGRAM), and provide a special, electronically configurable interface operating at 250..400 MHz. The interface is at least twice as fast as SDRAM's, and while the random access time is usually slower, sequential access is very, very fast. Remember that when 250 MHz RDRAMs were introduced, most DRAMs operated at 12..25 MHz. RDRAM requires a special interface and very careful physical placement on the PCB. Most RDRAM chips look quite different from other DRAMs: all the signal lines are on one side of the package (so they are the same length), with only four power lines on the other side. RDRAMs are used in graphics cards based on Cirrus 546x chips. We will soon see RDRAMs used as main memory in PCs.

Hard drive device.

The hard drive contains a set of platters, most often metal disks coated with a magnetic material (gamma iron oxide, barium ferrite, chromium oxide, etc.) and connected to each other by a spindle (shaft, axis).

The disks themselves (approximately 2 mm thick) are made of aluminum, brass, ceramic or glass.

Both surfaces of each disk are used for recording; 4-9 platters are used. The shaft rotates at a high constant speed (3600-7200 rpm).

The rotation of the disks and the radial movement of the heads are carried out by two electric motors.

Data is written or read using write/read heads, one on each surface of the disk. The number of heads is equal to the number of working surfaces of all disks.

Information is recorded on the disk in strictly defined places - concentric circles called tracks. The tracks are divided into sectors; one sector holds 512 bytes of information.

Data exchange between RAM and the disk drive is carried out in whole clusters. A cluster is a chain of sequential sectors (1, 2, 3, 4, …).

A special motor, using an arm, positions the read/write head over a given track (moving it in the radial direction).

When the disk rotates, the head finds itself above the desired sector. Obviously, all heads move simultaneously, reading information from identical tracks on different disks.

Hard drive tracks with the same serial number on different disks are called a cylinder.

The read/write heads move along the surface of the platter. The closer the head is to the surface of the disk without touching it, the higher the permissible recording density.
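Given this geometry, the raw capacity of a drive follows directly; the numbers below are the classic ATA CHS limits (16383 cylinders, 16 heads, 63 sectors per track):

```python
# Raw capacity of a drive under classic CHS (cylinder/head/sector) geometry.
SECTOR_BYTES = 512

def chs_capacity_bytes(cylinders, heads, sectors_per_track):
    return cylinders * heads * sectors_per_track * SECTOR_BYTES

capacity = chs_capacity_bytes(cylinders=16383, heads=16, sectors_per_track=63)
print(capacity / 1e9, "GB")  # about 8.4 GB - the classic CHS addressing limit
```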

Hard drive interfaces.

IDE (ATA – Advanced Technology Attachment) is a parallel interface for connecting drives; after the advent of SATA it was renamed PATA (Parallel ATA). It was previously used to connect hard drives but has been supplanted by the SATA interface and is currently used to connect optical drives.

SATA (Serial ATA) – a serial interface for exchanging data with drives; a 7-pin connector is used for the data connection. Like PATA, the original standard is obsolete and is used mainly for optical drives. The original SATA standard (SATA150) provided a throughput of 150 MB/s (1.2 Gbit/s).

SATA 2 (SATA300). The SATA 2 standard doubled the throughput to 300 MB/s (2.4 Gbit/s) and operates at 3 GHz. SATA and SATA 2 are compatible with each other; however, for some models it is necessary to set the modes manually by repositioning jumpers.

SATA 3, although according to the specifications it is more correct to call it SATA 6Gb/s. This standard doubled the data transfer speed to 6 Gbit/s (600 MB/s). Other positive innovations include NCQ (Native Command Queuing) control and commands for continuous data transfer for a high-priority process. Although the interface was introduced in 2009, it is not yet particularly popular among manufacturers and is not often found in stores. Besides hard drives, this standard is used in SSDs (solid-state drives). It is worth noting that in practice the bandwidth of the different SATA versions makes little difference for hard drives: the real read/write speed of disks does not exceed about 100 MB/s, and the higher bandwidth only affects the transfer between the controller and the drive's cache.
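The relation between the Gbit/s and MB/s figures above comes from SATA's 8b/10b line encoding, which sends 10 bits on the wire for every data byte:

```python
# SATA uses 8b/10b encoding: every data byte is sent as 10 bits on the wire.
def sata_payload_mb_s(line_rate_gbit_s):
    return line_rate_gbit_s * 1000 / 10  # Gbit/s -> MB/s after 8b/10b overhead

print(sata_payload_mb_s(1.5))  # SATA:   150.0 MB/s
print(sata_payload_mb_s(3.0))  # SATA 2: 300.0 MB/s
print(sata_payload_mb_s(6.0))  # SATA 3: 600.0 MB/s
```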

SCSI (Small Computer System Interface) - a standard used in servers where increased data transfer speed is required.

SAS (Serial Attached SCSI) is the generation that replaced the SCSI standard, using serial data transmission. Like SCSI, it is used in workstations. SAS controllers are also compatible with SATA drives.

CF (Compact Flash) – Interface for connecting memory cards, as well as for 1.0 inch hard drives. There are 2 standards: Compact Flash Type I and Compact Flash Type II, the difference is in thickness.

FireWire is an alternative interface to the slower USB 2.0, used to connect portable hard drives. It supports speeds up to 400 Mbit/s, but the real speed is lower: when reading and writing, the maximum is about 40 MB/s.

Types of video cards

Modern computers (laptops) are available with various types of video cards, which directly affect performance in graphics programs, video playback, and so on.

There are currently three basic types of adapters in use, which can also be combined.

Let's take a closer look at the types of video cards:

  • integrated;
  • discrete;
  • hybrid;
  • two discrete;
  • Hybrid SLI.

An integrated graphics card is the inexpensive option. It has no video memory or graphics processor of its own: with the help of the chipset, graphics are processed by the central processor, and RAM is used instead of video memory. Such a scheme significantly reduces the performance of the computer in general and of graphics processing in particular.

It is often used in budget PC or laptop configurations. It allows you to work with office applications and to view and edit photos and videos, but playing modern games is impossible; only older titles with minimal system requirements are available.

All modern computers, despite the time that has passed, work on the principles proposed by the American mathematician John von Neumann (1903-1957), who also made a significant contribution to the development and application of computers. He was the first to formulate the principles on which a computer operates:

1. The principle of binary coding: all information in a computer is presented in binary form, as combinations of 0s and 1s.

2. The principle of memory homogeneity: both programs and data are stored in the same memory. The computer does not distinguish what is stored in a given memory cell: numbers, text or commands may be located there. Therefore, the same actions can be performed on commands as on data.

3. The principle of memory addressability: structurally, main memory consists of numbered cells, and any memory cell is accessible to the CPU (central processing unit) at any time. Hence it is possible to assign names to memory blocks for more convenient interaction between memory and the CPU.

4. The principle of sequential program control: a program consists of a set of instructions that are executed by the CPU sequentially one after another.

5. The principle of conditional branching: commands are not always executed strictly one after another; conditional branch commands can change the sequence of execution depending on the values of the stored data (a toy machine illustrating all five principles is sketched below).
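These five principles can be seen working together in a minimal sketch of a toy von Neumann machine (the three-command instruction set here is invented for illustration): commands and data share one numbered memory, execution is sequential, and a conditional jump can break the sequence:

```python
# A toy von Neumann machine: commands and data live in one numbered memory,
# commands are executed sequentially, and a conditional jump alters the flow.
# The three-command instruction set is invented for illustration.

memory = [
    ("DEC", 7),        # 0: decrement the value in cell 7
    ("JNZ", (7, 0)),   # 1: if cell 7 is not zero, jump to address 0
    ("HALT", None),    # 2: stop the machine
    None, None, None, None,
    3,                 # 7: data - a counter stored right next to the program
]

pc = 0  # program counter: the address of the current command
while True:
    op, arg = memory[pc]
    if op == "HALT":
        break
    if op == "DEC":
        memory[arg] -= 1                        # commands operate on memory cells
        pc += 1                                 # sequential execution
    elif op == "JNZ":
        cell, target = arg
        pc = target if memory[cell] != 0 else pc + 1  # conditional jump

print(memory[7])  # 0 - the loop ran until the counter reached zero
```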

Classification of modern computers

Modern computers are divided into embedded microprocessors, microcomputers (personal computers), mainframe computers, and supercomputers - computing complexes with several processors.

Microprocessors are processors implemented as integrated electronic circuits. Microprocessors can be built into phones, televisions and other appliances, machines and devices.

The processors and RAM of all modern microcomputers, all the units of large computers and supercomputers, and all programmable devices are implemented on integrated circuits.

Microprocessor performance amounts to several million operations per second, and the capacity of modern RAM units is several million bytes.

Microcomputers are full-fledged computing machines that have not only a processor and RAM for data processing, but also input/output and information storage devices.

Personal computers are microcomputers that have electronic-screen display devices, data input/output devices in the form of a keyboard, and possibly devices for connecting to computer networks.

Microcomputer architecture is based on the use of a system backbone - an interface device to which the processor and RAM units, as well as all input/output devices, are connected.

Using the backbone makes it possible to change the composition and structure of a microcomputer - to add input/output devices and extend the functionality of the computer.

Long-term storage of information in modern computers is carried out using electronic, magnetic and optical media - magnetic disks, optical disks and flash memory units.

The architecture of modern computers requires long-term memory that holds files, software packages, databases and the controlling operating systems.

Mainframe computers are high-performance computers with a large amount of external memory. They are used as servers for computer networks and for large data storage facilities.

Mainframe computers are used as the basis for corporate information systems serving industrial corporations and government agencies.

A supercomputer is a multiprocessor computer with a complex architecture that has the highest performance and is used to solve super-complex computing problems.

Supercomputer performance amounts to tens and hundreds of thousands of billions of computing operations per second. The number of processors in supercomputers keeps growing, and their architecture is becoming ever more complex.
