• The role of computer technology in human life. The history of the development of computer technology. The role and importance of computer technology: plan

Introduction. The role and significance of computer technology in modern society. Areas of application of personal computers.

There are many definitions of the scientific discipline of computer science. One of them: computer science is the science of methods of representing, accumulating, transmitting and processing information using a computer; it is the science of information activity and information processes. The science of computer science cannot be separated from the study of the computer itself, since the discipline has been tied to the machine from the moment of its origin.

Computer science is a scientific discipline with a wide range of applications. Its main directions are: the development of computer systems and software; information theory, which studies the processes associated with the transmission, reception, transformation and storage of information; artificial intelligence methods, which make it possible to create programs for solving problems that require a certain intellectual effort when performed by a person (logical inference, learning, speech understanding, visual perception, games, etc.); system analysis, which consists of analyzing the purpose of a designed system and establishing the requirements it must meet; methods of computer graphics, animation and multimedia; telecommunications, including global computer networks; and a variety of applications covering manufacturing, science, education, medicine, trade, agriculture and other fields of activity.

The term computer science thus refers to a set of disciplines that study the properties of information, as well as methods of representing, accumulating, processing and transmitting information by technical means. The theoretical basis of computer science is formed by a group of fundamental sciences: information theory, the theory of algorithms, mathematical logic, the theory of formal languages and grammars, combinatorial analysis, etc. Computer science includes the following sections: computer architecture, operating systems, database theory, programming technology and others.

The modern era is characterized as the era of global information technologies: previously accumulated information is gradually being transferred into digital form and stored in global information networks; new information is produced digitally using a computer; information networks are emerging that span workplaces and home computers. The field of study of computer science also includes information systems intended to assist specialists and managers in decision-making, as well as artificial intelligence systems.

To use new information technologies, the following are required: 1. the introduction of computers and office equipment; 2. user participation in the information process; 3. an accessible interface; 4. the use of application packages; 5. access to databases over networks; 6. the use of telecommunications.

In computer technology there is a periodization of the development of electronic computers. A computer is assigned to one generation or another depending on the type of its main elements or on the technology by which they are manufactured. The boundaries between generations are blurred in time, since machines of different types were actually produced simultaneously; for an individual machine, however, the question of which generation it belongs to is usually resolved quite simply.

In 1833, the English scientist Charles Babbage, who was engaged in compiling navigation tables, developed a design for the "Analytical Engine". According to his plan, it was to be a giant program-controlled calculating machine. Babbage's machine also included arithmetic and storage devices, and it became the prototype of future computers. But it relied on far from perfect components: for example, it used gears to store the digits of a decimal number. Babbage failed to implement his project because the technology of the day was not advanced enough, and the Analytical Engine was forgotten for a time. A hundred years later Babbage's machine attracted the attention of engineers. At the end of the 1930s the German engineer Konrad Zuse developed the first binary digital machine, the Z1. It made extensive use of electromechanical relays, that is, mechanical switches actuated by electric current. In 1941 Zuse created the Z3, a machine completely controlled by software. In 1944 the American Howard Aiken, at one of the IBM plants, built the Mark-1, a powerful machine for its time. It used mechanical elements (counting wheels) to represent numbers and electromechanical relays for control.

Generations of computers. It is convenient to describe the history of the development of computers using the idea of generations of computers. Each generation is characterized by its design features and capabilities. The division of computers into generations is conditional, since machines of different levels were produced at the same time.

First generation. A sharp leap in the development of computer technology occurred in the 1940s, after the Second World War, and it was associated with the advent of qualitatively new electronic devices: vacuum tubes, which worked much faster than circuits based on electromechanical relays, so relay machines were quickly displaced by more productive and reliable electronic computers. The use of computers significantly expanded the range of problems that could be solved. Tasks that were simply impossible before became feasible: calculations of engineering structures, calculations of planetary motion, ballistic calculations, etc. The first such computer was created in the USA and was called ENIAC. The machine contained about 18 thousand vacuum tubes and many electromechanical relays, and about 2 thousand tubes failed every month. ENIAC, like other early computers, had a serious drawback: the executable program was not stored in the machine's memory but was set up laboriously with external jumpers. In 1945 the famous mathematician and theoretical physicist John von Neumann formulated the general principles of operation of universal computing devices. According to von Neumann, a computer should be controlled by a program with sequential execution of commands, and the program itself should be stored in the machine's memory. The first computer with a program stored in memory was built in England in 1949. In 1951 a computer was created in the USSR under the leadership of S. A. Lebedev, the country's foremost designer of computing technology. Computers were constantly improved, and by the mid-1950s their performance had risen from several hundred to several tens of thousands of operations per second. However, the vacuum tube remained the least reliable element of the computer, and the use of tubes began to slow the further progress of computing technology.
Subsequently, semiconductor devices replaced the tubes, completing the first stage of computer development. Computers of this stage are usually called first-generation computers. First-generation computers occupied large computer rooms, consumed a lot of electricity and required cooling with powerful fans. Programs for these computers had to be written in machine code, and this could only be done by specialists who knew the details of the computer's structure.

Second generation. Computer developers have always followed progress in electronic technology. When semiconductor devices replaced vacuum tubes in the mid-1950s, the conversion of computers to semiconductors began. Semiconductor devices (transistors, diodes) were, firstly, much more compact than their tube predecessors; secondly, they had a significantly longer service life; thirdly, the energy consumption of semiconductor computers was significantly lower. With the introduction of digital elements based on semiconductor devices, the creation of second-generation computers began. Thanks to the more advanced element base, relatively small computers could be built, and a natural division of computers into large, medium and small took place. In the USSR, the Razdan and Nairi series of small computers were developed and widely used. The Mir machine, developed in 1965 at the Institute of Cybernetics of the Academy of Sciences of the Ukrainian SSR, was unique in its architecture: it was intended for engineering calculations performed on the computer by the user himself, without the help of an operator. Medium computers included the domestic machines of the Ural, M-20 and Minsk series. But the record-holder among domestic machines of this generation, and one of the best in the world, was the BESM-6 ("large electronic calculating machine", model 6), created by the team of Academician S. A. Lebedev. The performance of the BESM-6 was two to three orders of magnitude higher than that of small and medium-sized computers, amounting to more than 1 million operations per second. Abroad, the most common second-generation machines were the Elliott (England) and Siemens (Germany) computers.

Third generation. The next change of computer generations occurred at the end of the 1960s, when semiconductor devices in computer circuits were replaced with integrated circuits. An integrated circuit (microcircuit) is a small plate of silicon crystal on which hundreds or thousands of elements are placed: diodes, transistors, capacitors, resistors, etc. The use of integrated circuits made it possible to increase the number of electronic elements in a computer without increasing its physical dimensions. Computer speed increased to 10 million operations per second. In addition, it became possible for ordinary users, and not just electronics specialists, to write computer programs. In the third generation, large series of computers appeared, differing in performance and purpose. These included the family of large and medium-sized IBM 360/370 machines developed in the USA. In the Soviet Union and in the CMEA countries, similar series of machines were created: the ES EVM (Unified System of Computers, large and medium-sized machines), the SM EVM (System of Small Computers) and "Elektronika" (a micro-computer system).

Fourth generation. As microcircuits were improved, their reliability and the density of the elements placed on them increased. This led to the emergence of large-scale integrated circuits (LSI), containing several tens of thousands of elements per square centimeter. On the basis of LSI, computers of the next, fourth generation were developed. Thanks to LSI, it became possible to fit an electronic circuit as large as a computer processor onto one tiny silicon chip. Single-chip processors later became known as microprocessors. The first microprocessor was created by Intel (USA) in 1971. It was the 4-bit Intel 4004, which contained 2,250 transistors and performed about 60 thousand operations per second. Microprocessors laid the foundation first for microcomputers and then for personal computers, that is, computers aimed at a single user. The era of personal computers (PCs) had begun. In addition to personal computers, there are other, much more powerful computer systems. The influence of personal computers on people's understanding of computing turned out to be so great that the term "electronic computing machine" gradually disappeared from everyday use, and its place was firmly taken by the word "computer".

Fifth generation. Starting from the mid-1990s, high-performance computers began to use very-large-scale integrated circuits (VLSI), containing hundreds of thousands of elements per square centimeter. Many experts began to talk about fifth-generation computers. A characteristic feature of fifth-generation computers should be the use of artificial intelligence and natural-language communication. It is assumed that fifth-generation computers will be easy to operate: the user will be able to give commands to the machine by voice. The transition to fifth-generation computers implied a transition to new architectures aimed at creating artificial intelligence. It was believed that the fifth-generation computer architecture would contain two main blocks. One of them is the computer itself, in which communication with the user is carried out by a unit called the "intelligent interface". The task of the interface is to understand text written in natural language, or speech, and to translate the problem statement expressed in this way into a working program. Basic requirements for fifth-generation computers: creation of a developed human-machine interface (speech recognition, image recognition); development of logic programming for creating knowledge bases and artificial intelligence systems; creation of new technologies for the production of computer equipment; creation of new computer architectures and computing systems.

Classification of computer technology. There are many different types of computers, including: supercomputers, mainframes, servers, desktops, workstations, laptops and ultraportables.

Supercomputers. Nowadays, computers with enormous computing power are commonly called supercomputers. Supercomputers differ from servers, which are needed for prompt processing of requests; they also differ from mainframes, which likewise have high performance but are used for simultaneous work with many users. A supercomputer may be dedicated to a single program that requires powerful resources: weather modeling, simulation of technological processes in production, nuclear test calculations. The most advanced processors in Russia today are the MCST R1000 (four cores, 1 GHz) and the hybrid six-core Elbrus-2C+; both chips are manufactured using a 90 nm process. By the end of 2012 the company is expected to release the quad-core Elbrus-4S processor, manufactured using a 65 nm process, and in 2015 MCST plans to complete the development of an eight-core processor under a government contract with the Ministry of Industry and Trade. Currently the main market for these processors is the defense sector; one of the largest projects where they are used is air defense systems.

Servers. Servers are high-performance computers used in businesses and other organizations. Servers serve many end users or clients.

Desktop computers. There are different types of desktop computers with different capabilities. Desktop computers support a variety of connection types, video options and a wide variety of peripherals.

Workstations. Workstations are high-powered commercial computers. They are designed for specialized professional applications, such as running design programs like CAD (computer-aided design) packages. Workstations are used to create 3D graphics, animation and virtual reality simulations. In addition, they can be used as control stations for telecommunications or medical equipment. Like servers, workstations typically come with multiple CPUs, plenty of RAM and multiple high-capacity, high-speed disks. Workstations usually have very powerful graphics capabilities and a large monitor or several monitors.

Portable devices. In addition to desktop computers of various types, there are many portable electronic devices. They vary in size, power and graphics capabilities. This category includes: laptop (notebook) computers; tablet PCs; pocket PCs; personal digital assistants (PDAs).

Personal computers. The appearance of the PC was prepared by the entire previous history of computer development. In the beginning, computers occupied huge halls, consumed a lot of energy and created a lot of noise. Then computers became smaller and began to work more efficiently, but still required separate rooms. The most powerful computers were housed in separate complexes called computing centers. In those not very distant times (the 1970s), few people could imagine a compact computer that would fit on a desktop. Engineers and scientists could only dream of such a machine, and it would have been difficult to explain to ordinary people why such a computer was needed at all. The first sign was a computer designed in 1971; outwardly it resembled a car radio with indicator lights and switches rather than a familiar personal computer. From 1971 to 1974, different PC models were created by different companies, but due to the limited capabilities of these computers there was little interest in them. Users and manufacturers really became interested in personal computers in 1974, when the American company MITS developed the Altair computer based on the Intel 8080 microprocessor. This personal computer was much more convenient than its predecessors and had more capabilities. A much more advanced model of a personal computer was developed in 1976 by two young Americans, Steve Wozniak and Steve Jobs. They named their computer Apple and quickly launched its production and sale. Thanks to the low price (about $500), they sold about 100 computers in the first year. The following year they released the Apple II model, which had a motherboard, display and keyboard and looked something like a TV set. The number of PC customers began to number in the hundreds and thousands. Personal computers were improving rapidly: in 1978, a flexible magnetic disk (floppy disk) with a diameter of 5.25 inches (1 inch = 2.54 cm) was designed for storing information. Through the efforts of Motorola, the Motorola 68000 microprocessor was created in 1979; it surpassed its competitors in speed, performance and capabilities for working with graphics programs.

In 1980, a hard magnetic disk appeared in personal computers; however, it held only 5 MB of data. The first PCs were 8-bit and looked more like an expensive toy than a serious computer. This continued until a computer giant that specialized in the manufacture of large computers, IBM, entered the personal computer industry. In 1981, IBM released a very successful 16-bit computer. It was built around the Intel 8088 microprocessor, clocked at 4.77 MHz, and used the MS-DOS operating system. This model was called the IBM PC. From then on, the development of PCs proceeded at a very rapid pace: IBM created a new model every year. In 1983, the PC XT model appeared, followed by the more advanced and productive PC AT. They quickly conquered the PC market and became de facto standards that competing firms tried to imitate. IBM did not create its personal computer from scratch but used components from other manufacturers (primarily the Intel microprocessor). At the same time, it made no secret of how the computer's components should connect and interact with each other. As a result, other companies could join in the creation and improvement of the computer: the architecture of IBM PC computers turned out to be "open". IBM computers acquired numerous "clones", that is, families of computers similar to the IBM PC. Later, computers supporting the IBM PC standard began to be called simply "personal computers". Over the years, PCs have lived up to their name: for many people they have become a necessary part of leisure, a tool for business and research. In addition to IBM-compatible PCs, there is another family of personal computers, the Macintosh. These computers trace their lineage back to the already mentioned Apple model; they were produced by Apple Computer. The architecture of Macintosh computers, unlike that of the IBM PC, was not open. Therefore, despite graphics capabilities that were more advanced than those of the IBM PC, Macs were unable to conquer as large a market: the number of Macs is tens of times smaller than the number of IBM PC-compatible computers.

The main trend in the development of computer technology at present is the further expansion of the scope of application of computers and, as a consequence, the transition from individual machines to systems of machines: computing systems and complexes of various configurations with a wide range of functionality and characteristics. The most promising of these, computer networks, focus not so much on computational processing of information as on communication services: e-mail, teleconferencing systems and information and reference systems. In the development of computers themselves, super-powerful machines (supercomputers) and miniature and subminiature PCs have had a significant and stable priority in recent years. As already mentioned, research is under way to create sixth-generation computers based on a distributed neural architecture: neurocomputers. In particular, neurocomputers can use existing specialized microprocessors, transputers, which are network microprocessors with built-in communication links. The widespread introduction of multimedia tools, primarily audio and video means of input and output, should make it possible to communicate with a computer in natural language. The new technical capabilities of computer technology are expected to expand the range of problems that can be solved and make it possible to move on to the task of creating artificial intelligence.

One of the components necessary for creating artificial intelligence is knowledge bases (databases) in various areas of science and technology. Creating and using databases requires high-speed computing systems and a large amount of memory. General-purpose computers are capable of performing high-speed calculations, but they are not suited to high-speed comparison and sorting operations on large volumes of records, which are usually stored on magnetic disks. To create programs that fill and update databases and work with them, special object-oriented and logical programming languages were created that provide greater capabilities than conventional procedural languages. The structure of these languages requires a transition from the traditional von Neumann computer architecture to architectures that take into account the requirements of the tasks of creating artificial intelligence.

Test questions. 1. Explain the basic concepts of computer science. 2. On what principles are new information technologies based? 3. What device is called a computer? 4. List the characteristics by which computers are classified. 5. How are computers classified by purpose?

Section 1. General composition and structure of PCs and computing systems. Principles of building computers and computing systems. The backbone-modular principle; general functional diagram.

Modern computers were preceded by half a century of development, which is divided into computer generations. While the list of functional blocks itself has remained virtually unchanged for more than half a century, the methods of their connection and interaction have undergone a certain evolution. Computer architecture is a description of the structure and principles of operation of a computer, its technical organization. The basic principles for constructing a universal computer were outlined by John von Neumann in 1946, and a universal computer built according to them appeared in 1949. The diagram shows the functional structure of a first- and second-generation computer.

Functional diagram according to the von Neumann principle. Computer devices: 1. The ALU, the arithmetic-logic unit, for performing arithmetic and logical operations. 2. The control unit, for executing programs. 3. RAM, for storing programs and data. 4. External devices (VU), the input/output devices.

The operation of the computer is as follows: using an input device, a program is entered into RAM; the control unit reads the contents of a memory cell and executes the command, then reads the contents of the next one. The order of execution can be forcibly changed using jump commands. The ALU and the control unit are combined into a single processor. The diagram makes it clear that the center of such a design is the processor: firstly, it controls all devices, and secondly, all information flows pass through it. The described scheme has a fundamental drawback: the processor is overloaded. By fully regulating the exchange between all devices, it is often forced to wait passively for the end of input from slow devices (usually those containing mechanical parts), which significantly reduces the efficiency of the system as a whole.
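To make the stored-program operation just described more concrete, here is a minimal sketch in Python of the fetch-and-execute cycle: program and data share one memory, the control unit fetches a cell, decodes and executes it, and a jump command forcibly changes the order of execution. The tiny instruction set and the example program are invented purely for illustration; a real machine does all of this in hardware.

```python
# A minimal sketch of the von Neumann fetch-execute cycle (illustrative only;
# the instruction names and the example program are invented).

def run(memory):
    """memory: a list of cells shared by the program and its data."""
    acc = 0   # accumulator inside the ALU
    pc = 0    # program counter inside the control unit
    while True:
        op, arg = memory[pc]        # fetch the next command from RAM
        pc += 1
        if op == "LOAD":            # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":           # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":         # memory[arg] <- acc
            memory[arg] = acc
        elif op == "JUMP":          # forced change of the execution order
            pc = arg
        elif op == "HALT":
            return acc

# Program: add the numbers in cells 5 and 6, store the result in cell 7.
program = [
    ("LOAD", 5), ("ADD", 6), ("STORE", 7), ("HALT", 0), ("HALT", 0),
    2, 3, 0,
]
print(run(program))   # prints 5
```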

Computers with a channel organization. The contradiction between the ever-increasing performance of the processor and the relatively low speed of exchange with external devices became clearly noticeable already during the heyday of second-generation computing technology. Therefore, when designing the next, third generation, engineers took special measures to "offload" the processor and free it from detailed I/O management. Third-generation computers had a functional diagram with a channel organization. In addition to the already familiar set of devices (central processor, memory, input/output devices), a computer with a channel organization includes devices called channels. A channel is a specialized processor that carries out all the work of managing the controllers of external devices and exchanging data between main memory and external devices. Devices are grouped by their characteristic speed and connected to the appropriate channels. "Fast" devices (for example, magnetic disk drives) are connected to selector channels; such a device receives the selector channel for its exclusive use for the entire duration of a data exchange operation. "Slow" devices are connected to multiplex channels. A multiplex channel is divided (multiplexed) among several devices, so simultaneous data exchange with several devices is possible. Both the central processor and any of the channels can access the RAM. To control the order of access there is a RAM controller: it determines the priority discipline when several devices access memory simultaneously. The CPU has the lowest priority, and among the channels the slow ones have the higher priority. Thus, priority is inversely proportional to the frequency with which a device accesses memory. Although the organization of the computer becomes considerably more complicated, the input/output architecture is simplified and data exchange operations become simpler. The channel is essentially a specialized "intelligent" direct memory access controller; it can inform the processor about its state using interrupts. All external device controllers are connected to "their" channels using a standard interface. The freedom to connect external devices is preserved thanks to the standard interface protocol, while it becomes possible to group devices by their characteristics. In a computer with a channel organization, the processor is almost completely freed from the routine work of organizing input/output: control of the external device controllers and of data exchange is taken over by the channel. The presence of multiple data paths eliminates the difficulties associated with blocking of a single data path (system bus), which increases the exchange speed. All this makes it possible to exchange data with external devices in parallel with the main computational work of the central processor, so the overall system performance increases significantly and the increased cost of the scheme pays for itself. One of the first machines with channels was the second-generation IBM-704. A striking example of computers with channels are the machines of the IBM-360/370 family. The appearance of these computers revolutionized computing, and for many years they served as role models for computer designers. Although these machines are now a thing of the past, they left a rich legacy of interesting architectural solutions, software and algorithmic developments. Circuits with specialized input/output processors are still often found in computers of various types.
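The priority discipline of the RAM controller described above ("the slower the device, the higher its priority; the CPU comes last") can be sketched in a few lines of Python. The priority numbers and device names are invented for illustration.

```python
# Illustrative sketch of the memory-access priority discipline: the RAM
# controller grants the current memory cycle to the requester whose priority
# number is lowest. Slow (multiplex) channels win over fast (selector)
# channels, and the CPU always comes last.

PRIORITY = {
    "multiplex_channel": 0,   # slow devices: highest priority
    "selector_channel": 1,    # fast devices (e.g. magnetic disks)
    "cpu": 2,                 # the CPU has the lowest priority
}

def grant_memory_cycle(requesters):
    """Pick which requester may use main memory during this cycle."""
    return min(requesters, key=lambda device: PRIORITY[device])

print(grant_memory_cycle({"cpu", "selector_channel"}))                       # selector_channel
print(grant_memory_cycle({"cpu", "selector_channel", "multiplex_channel"}))  # multiplex_channel
```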

Computers with a bus organization. The transition to the fourth generation of computers was accompanied not only by a multiple increase in the density of elements in microcircuits, but also by a change in the general strategy for using computer technology. Cumbersome computers for collective use were replaced by personal computers, designed primarily for individual work by individual users. At the same time, the architecture continued to develop and improve in the direction of freeing the processor from managing input/output processes. As a result, the modern PC has acquired the structure shown in the diagram. The main feature of this scheme is the presence of a dedicated bus (backbone) for transmitting information between the functional units of the computer. It consists of three parts: the address bus, which determines where on the bus information is to be sent; the data bus, over which the information itself is transmitted; and the control bus, which defines the characteristics of the exchange and synchronizes it. All computer devices, from the processor to the input and output devices, are connected to the bus. An essential feature of PC architecture is the presence of specialized input/output processors called controllers. Their role is to support the information exchange processes of a given device and to adapt external devices from various manufacturers to the standard bus. To communicate with memory, the CPU transfers the addresses of the required cells and reads the corresponding data from them, and to coordinate the units a control bus is introduced. The data bus is used to exchange information between the units, the address bus transmits the addresses of the memory cells or input/output ports being accessed, and the control bus carries control signals. Together these buses are called the system bus, or backbone.

Functional diagram of a computer with a bus organization. Let us consider the operation of such a computer. When it is switched on, the initial data are transferred from read-only memory (ROM); the CPU is set to its operating state and connects all units to the buses. Programs that are permanently stored in ROM chips are referred to as firmware. Random access memory (RAM) reserves space for programs, instructions and data. During operation the processor: determines the addresses of the required cells; reads data or instructions from them; executes the instructions (performs calculations); transfers data to specific memory cells; specifies the address of the display port; and, using the controller, sends data to the display. In this scheme all devices are connected symmetrically to one channel, the common bus, which makes it possible to connect new devices. Thanks to the bus architecture, it is easy to make any changes to the computer configuration that a specific user requires. The described scheme also has a bottleneck: it requires high bus bandwidth. To overcome this difficulty, modern designs use multiple buses, each of which connects the processor to a specific device or group of devices.
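The role of the address, data and control buses can be illustrated with a small Python sketch of a single shared backbone: every device occupies a range of addresses, the address selects which device answers, and a flag standing in for the control bus says whether the cycle is a read or a write. The address ranges and device names are invented for illustration.

```python
# Sketch of a bus (backbone) organization: all devices hang on the same
# address/data/control lines, and the address decides which device responds.

class Bus:
    def __init__(self):
        self.devices = []   # list of (start_address, end_address, storage)

    def attach(self, start, size):
        storage = [0] * size
        self.devices.append((start, start + size, storage))
        return storage

    def transfer(self, address, data=None, write=False):
        """One bus cycle: the address bus selects a device, the control bus
        says read or write, the data bus carries the value."""
        for start, end, storage in self.devices:
            if start <= address < end:
                if write:
                    storage[address - start] = data
                    return None
                return storage[address - start]
        raise ValueError("no device responds at this address")

bus = Bus()
ram = bus.attach(0x0000, 1024)    # RAM occupies addresses 0x0000..0x03FF
video = bus.attach(0x8000, 256)   # a display controller occupies 0x8000..0x80FF

bus.transfer(0x0010, data=42, write=True)   # processor writes 42 into RAM
bus.transfer(0x8000, data=7, write=True)    # processor sends a byte to the display port
print(bus.transfer(0x0010))                 # reads back 42
```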

Architecture of modern computers. The operation of a modern computer is determined by the chipset, the set of control chips installed on the motherboard. Previously, chipsets consisting of many controllers were used; the first chipsets appeared in the mid-1980s. The transition to chipsets made it possible to reduce the cost of motherboards and increase the mutual compatibility of components, which simplified the task of designing motherboards. The common architecture of modern chipsets is built around two chips that form its basis, the so-called north bridge and south bridge. The north bridge chip handles the fastest PC subsystems: it contains the system bus controller, the memory controller, the graphics bus controller and the controller of the link to the south bridge. The south bridge handles the slower system components and peripheral devices; it usually includes a two-channel IDE (or SATA) controller, a USB controller and a built-in audio system (audio codec). The south bridge is responsible for the less fast devices and ensures data transfer to and from the hard drive, optical drive, printer and scanner. These devices transmit information over their cables to the south bridge, which forwards it to the north bridge. The north bridge sends the information to RAM, from where it can go to the processor or the video card for processing. The chipset is thus a kind of intermediary in the communication between the processor and the other devices of the computer system. The tasks of the chipset include managing the operation of the computer's components and ensuring data transfer between them. Each chipset serves only the processor architecture for which it was designed. Since 2005, chipsets from different manufacturers have been oriented toward multi-core microprocessors. The bridges were named by analogy with a geographical map, on which north is at the top and south is at the bottom.

Test questions. 1. Explain the concept of computer architecture. 2. Features of the von Neumann functional diagram. 3. Features of a functional diagram with a channel organization. 4. Features of a functional diagram with a bus organization. 5. Features of the circuitry of modern computers.

Section 1. General composition and structure of PCs and computing systems. Internal computer architecture: processor and memory. Peripherals. Purpose of computer devices.

Most computers require three elements working together in order to function properly: 1. Hardware, the internal and external physical components that make up the computer. 2. The operating system, a set of programs that control the computer hardware. 3. Application software (applications), programs loaded to perform specific tasks using the capabilities of the computer.

A modern personal computer consists of the following components. 1. The motherboard is a large printed circuit board to which all the electronics and circuits that make up the computer system are connected. This board has connectors into which the main components of the system, such as the CPU and RAM, are plugged. The motherboard provides data exchange between the various connectors and system components. In addition, the motherboard has slots for the network card, video card and sound card; many motherboards have these components built in. The difference lies in the upgrade method: when a motherboard with connectors is used, system components can easily be removed and replaced with more modern ones.

The selected motherboard must: support the type and speed of the chosen CPU; support the type and amount of RAM required to run the applications; have a sufficient number of connectors for all necessary interface cards; and have a sufficient number of interfaces of the required types (a simple compatibility check of this kind is sketched after the connector list below). The motherboard is thus the board with whose help the remaining components (parts) of the computer are combined and made to work together. Typical motherboard connectors: 1. PCI slots, used to connect various cards such as a modem or sound card. 2. The video card slot. 3. The CPU socket. 4. The processor power connector from the power supply. 5. A connector for a hard drive or optical (CD/DVD) drive with an IDE (ATA) interface. 6. Connectors for hard drives or optical (CD/DVD) drives with a SATA interface. 7. Slots for RAM. 8. The connector for the floppy disk drive. 9. The main power connector from the power supply to the motherboard (24-pin or 20-pin, depending on the board).
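As promised above, a hedged sketch of the motherboard checklist. The field names and the example data are invented; a real selection would also take into account form factor, chipset and BIOS support.

```python
# Illustrative motherboard compatibility check (all names and values invented).

def board_fits(board, build):
    problems = []
    if build["cpu_socket"] != board["cpu_socket"]:
        problems.append("CPU socket type is not supported")
    if build["ram_type"] != board["ram_type"]:
        problems.append("RAM type is not supported")
    if build["ram_gb"] > board["max_ram_gb"]:
        problems.append("not enough supported RAM")
    if build["expansion_cards"] > board["expansion_slots"]:
        problems.append("not enough expansion slots")
    return problems or ["the board satisfies the requirements"]

board = {"cpu_socket": "LGA1700", "ram_type": "DDR4", "max_ram_gb": 64, "expansion_slots": 3}
build = {"cpu_socket": "LGA1700", "ram_type": "DDR4", "ram_gb": 32, "expansion_cards": 2}
print(board_fits(board, build))
```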

Rear panel connectors: 1. PS/2 mouse input (green). 2. PS/2 keyboard input (purple). 3. Digital audio input. 4. Digital audio output. 5. Universal USB ports for connecting various devices. 6. Network cable input (local network, dedicated Internet line). 7. Outputs for connecting an audio system (speakers).

2. The processor. The processor performs all calculations and operations and gives commands to the other components. The processor frequency is measured in megahertz; the higher the frequency, the more operations per second it can perform. The processor also has its own small cache memory, in which it keeps the most frequently used data and instructions, which increases its speed. Processor cache is measured in megabytes; at present its capacity usually ranges from about 8 to 32 megabytes, and the larger the cache, the more expensive the processor. Modern processors have several cores, which is like having several processors in one; this makes the processor much more productive and increases the speed of its calculations. Most modern processors are implemented as a single semiconductor chip containing millions, and more recently even billions, of transistors. The microprocessor includes: the control unit (CU), which generates and supplies to all blocks of the machine, at the right moments, the control signals (control pulses) determined by the specifics of the operation being performed and by the results of previous operations; it also generates the addresses of the memory cells used by the current operation and passes these addresses to the corresponding blocks of the computer; the control unit receives a reference sequence of pulses from the clock generator; the arithmetic-logic unit (ALU), designed to perform all arithmetic and logical operations on numerical and symbolic information (in some PC models an additional mathematical coprocessor is attached to the ALU to speed up operations); the microprocessor memory (MPM), which serves for the short-term storage, recording and output of information used directly in calculations during the next machine cycles; the MPM is built on registers and is used to ensure the high speed of the machine, because main memory (RAM) does not always provide the speed of writing, searching and reading information necessary for the efficient operation of a high-speed microprocessor; registers are high-speed memory cells of various lengths (in contrast to main-memory cells, which have a standard length of 1 byte and lower performance); and the microprocessor interface system, which implements interfacing and communication with the other PC devices; it includes the internal microprocessor interface, buffer storage registers and control circuits for the input/output ports and the system bus.
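The effect of the processor cache mentioned above can be illustrated with a small sketch: recently used values are kept in a tiny fast store, so that the slow main memory is touched less often. The cache size, the replacement policy (least recently used) and the access pattern are invented for illustration.

```python
# Illustrative sketch of why a small cache speeds up memory access.

from collections import OrderedDict

class CachedMemory:
    def __init__(self, ram, cache_lines=4):
        self.ram = ram                  # slow main memory
        self.cache = OrderedDict()      # small, fast cache
        self.cache_lines = cache_lines
        self.hits = self.misses = 0

    def read(self, address):
        if address in self.cache:       # cache hit: no access to RAM needed
            self.hits += 1
            self.cache.move_to_end(address)
            return self.cache[address]
        self.misses += 1                # cache miss: go to slow RAM
        value = self.ram[address]
        self.cache[address] = value
        if len(self.cache) > self.cache_lines:
            self.cache.popitem(last=False)   # evict the least recently used line
        return value

mem = CachedMemory(ram=list(range(100)))
for address in [1, 2, 3, 1, 2, 3, 1, 2]:     # a small loop re-reads the same cells
    mem.read(address)
print(mem.hits, "hits,", mem.misses, "misses")   # 5 hits, 3 misses
```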

3. RAM in a computer plays the role of a temporary buffer for storing information: when you start an application, it is partially loaded into RAM. Therefore, the more memory you have, the more programs you can open and work in simultaneously, for example playing a computer game and listening to music at the same time. A large amount of RAM is also required by modern games. RAM has two main characteristics: its capacity and the frequency at which it operates.

4. The video card is designed to display images on the monitor; it is responsible for graphics processing. If a weak video card is installed, it cannot cope with graphics processing. Modern video cards have their own built-in processor (core), whose power, like that of the central processor, is measured in megahertz. Its task is to take the graphics-processing load off the central processor: the higher the frequency of the video card core, the faster it processes graphics and, consequently, the faster games run. The video card also has its own memory, video memory, in which it stores textures and processed parts of the image; video memory is likewise measured in megabytes or gigabytes.

5. Adapter cards expand the capabilities of a computer system. They are inserted into the motherboard connectors and become part of the system. Many motherboards have the functionality of adapter cards built in, eliminating the need for additional components. Built-in cards support basic functionality, but specialized adapter cards often improve system performance. The most common cards are: video cards; sound cards; network interface cards; modems; interface boards; controller boards.

6. The power supply supplies electricity to all components of the computer and allows it to work. A cable from the electrical network goes into it, and it then distributes the voltage throughout the computer.

The power of the power supply is measured in watts: the more powerful your computer, the more powerful the power supply it requires. Modern video cards are very demanding of power supplies and sometimes require a unit of up to a kilowatt. Power cables run from the power supply to the motherboard, hard drives, coolers and drives. High-quality power supplies are more resistant to voltage surges in the network, which protects both the unit itself and all computer components from failure.

7. The hard drive. The hard drive stores programs, games and documents. Like any storage device, it has a maximum capacity, measured in gigabytes: the larger the hard drive, the more information you can store on it. A hard drive is a mechanical device: it spins several platters on which information is written and read by a magnetic head. The hard drive also has its own temporary high-speed buffer, a cache, implemented as a small chip; with its help the hard drive reduces the number of physical accesses to the platters, thereby increasing its operating speed and service life.

8. Peripherals. A peripheral is a device that connects to a computer and expands its capabilities. These devices are optional and are not required to perform basic functions; they only provide additional functionality. Peripheral devices are connected from outside the computer, using special cables or wireless communication. They fall into one of four categories: input, output, storage or network devices. Examples of peripheral devices are: input devices - trackball, joystick, scanner, digital camera, encoder, barcode reader, microphone; output devices - printer, plotter, speakers, headphones; storage devices - an additional hard drive, external CD/DVD drives, flash drives; network devices - external modems, external network adapters.

9. Permanent memory. ROM (read-only memory) is used to store unchangeable (permanent) program and reference information. In the first personal computers, the BIOS code was written into a read-only ROM chip created at the factory. Later, rewritable chips began to be used to store the BIOS code.

An electrically erasable, reprogrammable ROM (flash) chip has, for example, the following main parameters: memory capacity 16 Mbit; access time 65 ns; supply voltage range 3.0-3.6 V; 0.25-micron process technology; the ability to erase any combination of sectors or the entire memory; a guaranteed number of erase cycles; data retention of 13 years at a temperature of 125 C.

BIOS location on the motherboard. In most cases the flash memory is installed in a socket on the motherboard, which allows the chip to be replaced if necessary, but in some cases it is soldered directly to the board. Flash memory chips for storing the BIOS have different capacities: older computers use chips with a capacity of 1-2 Mbit (128-256 KB), while modern systems use 4-8 Mbit or more (512 KB-1 MB or more). The BIOS uses configuration parameters that are stored in a special CMOS memory, which got its name from the chip manufacturing technology: complementary metal-oxide semiconductor. The CMOS memory is powered by a special battery on the motherboard, which is also used to power the real-time clock. The service life of such a battery is usually about 10 years; as a rule, during this time the computer (in particular the motherboard) becomes obsolete, so replacing the battery becomes a moot point. Some CMOS chip manufacturing technologies integrate the battery directly into the chip; in this case, when the battery is discharged, the whole chip must be replaced.

Procedure for starting a computer. Programs written into ROM chips are available to the computer immediately after it is turned on. The programs in ROM are divided into the machine startup program and the basic input/output system (BIOS). The role of the BIOS is twofold: on the one hand it is an integral element of the hardware, and on the other hand it is an important module of any operating system. These programs are executed every time the computer is switched on. Startup consists of several phases: checking the functionality of the machine, initializing the programmable chips and peripheral devices, checking for additional equipment, and loading the operating system. The check programs are short and execute quickly. The last operation, loading the operating system, is performed by the bootloader program; after the OS has been loaded from disk, control is transferred to it. The BIOS is the part of ROM that is actively used throughout the computer's operation: it controls devices (it contains their drivers for the display, keyboard and disk drives), handles interrupts, provides energy saving and automatic configuration. Interrupts are signals from the outside world that inform the processor that an event has occurred (a key press, floppy disk service, etc.). The BIOS uses software interrupts to call and execute special utility programs.
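The startup phases listed above can be summarized in a hedged pseudo-code sketch. All function names are invented stand-ins: real firmware performs these steps in machine code, not in Python.

```python
# Illustrative sketch of the startup procedure: POST, initialization,
# detection of additional equipment, loading the operating system.

def check_cpu():   return True
def check_ram():   return True
def check_video(): return True

def power_on_self_test():
    """Phase 1: check the functionality of the machine."""
    for check in (check_cpu, check_ram, check_video):
        if not check():
            raise SystemExit(f"POST failed: {check.__name__}")

def start_computer():
    power_on_self_test()
    print("phase 2: initializing programmable chips and peripheral devices")
    print("phase 3: checking for additional equipment")
    print("phase 4: bootloader loads the operating system from disk")
    print("control is transferred to the operating system")

start_computer()
```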

During startup, messages about the operation of the test programs appear on the screen, then a prompt from a shell program or the operating system appears, and further work proceeds under the control of the OS.

Computer diagnostics. 1. The computer does not turn on (it does not respond to pressing the power button), or the computer turns on but nothing is displayed on the monitor while the coolers in the system unit are running. Option one: when the machine is turned on, the speaker gives a single short beep, that is, it reports that everything is in order; in this case the most likely cause is that the video card has burned out. Option two: the speaker is silent (does not beep); from this we conclude that either the motherboard or the power supply is faulty, and the same applies when the computer does not react at all to pressing the power button. The speaker is a small loudspeaker in the system unit connected to the motherboard, which informs the user, when the computer starts, about the status of the components and the general operation of the machine. The main beep combinations are decoded as follows (see also the lookup sketch after the test questions): 1 short beep - everything works properly; no beeps - a problem with the power supply (perhaps it is not connected to the motherboard; there is also a small chance that the motherboard itself is faulty); a continuous beep - a problem with the power supply; 2 short beeps - minor errors; 1 long, repeating beep - a problem with RAM. 2. Every time you start the computer you have to press the F1 key, and until this is done the computer does not start booting; or the system time and date are reset every time you turn the computer on. The reason for this is a dead battery on the motherboard. In this case you need to replace the battery on the system board and then enter the BIOS setup and save the settings.

Test questions. 1. What is the simplest PC configuration? 2. What is included in the system unit? 3. What is a motherboard? 4. What is the purpose of the microprocessor? 5. List the types of memory. 6. What does the term "peripherals" mean?
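As referenced in the diagnostics subsection above, the speaker combinations listed there can be kept as a simple lookup. The mapping below reproduces only the combinations given in the text; real BIOSes differ from vendor to vendor, so treat it as an illustration, not a reference table.

```python
# Illustrative lookup of the speaker (beep) diagnostics described above.

BEEP_CODES = {
    "1 short": "everything works properly",
    "no signal": "problem with the power supply (or, less often, the motherboard)",
    "continuous": "problem with the power supply",
    "2 short": "minor errors",
    "1 long, repeating": "problem with RAM",
}

def diagnose(beeps):
    return BEEP_CODES.get(beeps, "unknown combination; consult the motherboard manual")

print(diagnose("1 long, repeating"))   # problem with RAM
```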


    On the role of computer technology in modern society (interview with G.I. Marchuk)

    A. Lepikhov

This interview about the role of computer technology in modern society was given by academician G. I. Marchuk to the journalist A. Lepikhov in the late 1980s. In the manner characteristic of Guriy Ivanovich, it clearly shows the role of computers in the modern world: in science, production, economics, the social sphere and so on. The interview has not lost its relevance today; if anything, the questions it raises about the use of computers and about the place and role of information technology specialists in society have become even more pressing. He speaks of a new reality in which it is necessary to teach schoolchildren differently, to rebuild the entire system of higher education, to change the nature of training and retraining of technicians and workers, and to teach enterprise management to use electronic equipment effectively.

- We live in a time when electronic computing technology is beginning to permeate literally all spheres of human activity, from big science to automatic children's games. And, as always happens when something fundamentally new actively enters our lives, the process of "computer expansion" needs to be made sense of. First of all, the question arises: what was the motivating reason for the development of computer technology?

- G.M.: The need to solve ever more complex problems in science, technology and economics, and the desire to express qualitative ideas in quantitative terms. This applies to all sciences: geography and geology, medicine and sociology... Not to mention the needs of engineers and designers, who began to feel the lack of computing facilities earlier than many others.

    As experts well know, the principles of electronic computing technology were formulated over a hundred years ago, and even earlier, the theoretical basis for constructing a computer appeared - Boolean algebra, named after the English mathematician George Boole, one of the founders of mathematical logic. However, these achievements were forgotten for many decades, because people got by with simple counting methods and basic technical devices for this purpose. In short, this is far from an isolated case when a discovery was ahead of its era and did not immediately receive proper recognition.

What we call electronic computing technology was born in the 1940s. The first computer, ENIAC (Electronic Numerical Integrator and Computer), was "involved" in the compilation of ballistic tables. Work in the field of nuclear physics gave a powerful impetus to the progress of computers, and space research confirmed their outstanding importance. Large allocations dramatically expanded the scope of application of electronic computers, and the applications brought obvious benefits.

    Industrially advanced countries stimulated a kind of “autocatalysis” of computers: society invested increasingly large sums in improving computer technology, its use brought additional profit, part of which went to the further development of the same computer technology.

Let us turn over some pages of the history of domestic computers. The first copyright certificate in the USSR for the invention of a programmed automatic computer was issued in 1948. Subsequently, on December 25, 1951, at the Institute of Electrical Engineering of the Academy of Sciences of the Ukrainian SSR, a small electronic calculating machine came into operation, the first in our country, developed under the leadership of academician S. A. Lebedev. The unit occupied an area of 50 square meters and contained over 6 thousand tubes, which consumed 25 kilowatts of electricity. MESM could perform arithmetic operations on five- to six-digit numbers at a speed of... 50 operations per second. But even this seemed fantastic at the time, because it was approximately 1.5 thousand times greater than a person's "counting abilities". (It would be more correct, though, to consider I. S. Brook's computer the first Soviet computer. Note by E. Proydakov.)

Another brainchild of Soviet scientists, which appeared in 1953, was the BESM-1 (high-speed electronic calculating machine). It could already count almost 200 times faster and was at that time one of the fastest machines in the world. BESM made it possible to solve a number of problems that specialists had not undertaken before because of the huge amount of calculation involved.

Among the Soviet scientists who contributed to the progress of electronic computing technology, one must name academician M. V. Keldysh, president of the Academy of Sciences from 1961 to 1975, and the founder of the Siberian Branch of the USSR Academy of Sciences, academician M. A. Lavrentyev.

The development of various branches of technology strengthened the base and capabilities of electronics, which naturally affected computers. Moving from vacuum tubes to semiconductors, and then to integrated circuits, computers gained speed and found more and more new areas of application.

    Computers based on simple integrated circuits can handle hundreds of thousands of operations in a second. Computers based on large integrated circuits are ten times faster than their predecessors. And now computers based on ultra-large integrated circuits are making themselves known. Their speed is tens and hundreds of millions of operations per second.

    To the uninitiated, the numbers are staggering. Meanwhile, this is far from the limit. The comprehensive program of scientific and technological progress of the CMEA member countries, as a priority task, provides for the creation of computers that will perform 10 billion operations per second.

    Of course, all current and projected advances in electronics are impossible without mastering the production of ultra-pure metals, special alloys and artificial crystals, without advances in laser technology, and in many areas of applied sciences. Another thing is clear: without the help of computers, the qualitative leap observed today in various spheres of human activity would be simply unthinkable.

    And one more thing. At some point, computers - through new designs embodying deep physical ideas - forced the development of new, efficient electronic elements and circuits. The interaction has gone so far that the computer itself, based on automatic design systems, is already creating variants of the components of the next electronic computers. This is especially clearly seen in the example of microelectronics, when a computer microprocessor fits on a crystal with an area of ​​less than one square centimeter. Here, the design and manufacture of microcomputers are essentially combined into one cycle.

    And all this happened in 35-40 years, before the eyes of one generation of researchers.

    - What you are talking about comes across as somewhat abstract.

    - G.M.: Then let's resort to comparisons. The thickness of a human hair is approximately 100 microns. So imagine that you fit a grid of 400 transistors, each formed of lines 1 micron thick, on a silicon crystal with a cross-section the size of a hair. Now compress these lines to half a micron. In the same area it is already possible to place almost 1.5 thousand semiconductor transistors. Let's repeat the compression operation. With a line thickness of a quarter micron, each semiconductor transistor would be the size of a large virus, and the cross-sectional area of a human hair would be enough for 4,500 such transistors.

    This is not at all an exercise in abstract actions, but a reality that designers of modern computers face. The first integrated circuits, or, as experts say, “chips,” with lines one micron thick, are entering the world market. They contain over a million transistors. Chips with half-micron elements - 4 million transistors can be placed here - are now being tested in laboratories and will be put into production within the next few years. Chips with quarter-micron elements (tens of millions of transistors) will likely come into use sometime toward the end of this century. And at the very end of the century, according to existing estimates, we may have so-called “gigabit integrated circuits” in our hands, that is, with a billion components each.

    Not long ago, microns were considered the limit for semiconductors on silicon chips. However, as we see, the barrier has been overcome by engineers who have already mastered the world of ultra-microminiaturization. Complex structures are created, sometimes approaching the size of a molecule, so tiny that they cannot be seen even in powerful optical microscopes.

    At the same time, chips with elements smaller than a micron are causing a revolution in their own manufacturing. First of all, production must be fully automated, because the mere presence of a person can make the technological process insufficiently clean. Anyone who has ever visited a semiconductor factory will agree that there are few places cleaner. Because the slightest speck of dust threatens to ruin a chip, workers wear white overalls and sterile masks, like surgeons. The air in the production premises is constantly filtered and contains a thousand times fewer dust particles per cubic centimeter than a hospital operating room.

    Yet for submicron chips, traditional semiconductor foundries are hopelessly dirty.

    The number of dust particles per cubic centimeter needs to be reduced another hundred times. This is possible if people are removed from production premises altogether. But sterility is not the only factor. The challenges of designing, testing, and printing integrated circuits are rapidly expanding beyond human capabilities. A person is simply not able to “pack” four million devices on a tiny silicon wafer. Only computer-controlled machines can do this.

    I think I won't be too far wrong if I say that somewhere in the mid-1990s, just one integrated circuit will be able to compete with today's computers. And it will probably cost extraordinarily cheap. Everything that is being done now, the most sophisticated existing methods of using chips, is only a small step towards what awaits us in 10-20 years.

    - One often hears from computer specialists: they say that every time the cost of computer technology decreased sharply, the face of the world changed.

    - G.M.: This statement, of course, is too categorical and ambitious. But it must be admitted that the rapid improvement of the computer element base is already prompting designers to seriously think about what was considered science fiction just a few years ago.

    First of all, huge opportunities for modern computers open up in factory workshops, where systems are being introduced that can skillfully "manage" technological processes of any complexity and provide a level of product quality control that a person simply cannot match.

    Or let's take cars, for example. How many of them are there in the world? Tens and tens of millions. Microprocessors here will help to operate the engine correctly, reduce exhaust emissions, reduce fuel consumption, and avoid accidental collisions on the roads.

    Superchips, or ultra-large-scale integrated circuits, will no doubt revolutionize television. Transmitting signals in digital form - a method that is cheap precisely in the presence of superchips - allows you to obtain an image that is significantly superior in quality to the current one. Perhaps the models of such TVs will only have two or three superchips. Undoubtedly, televisions with storage devices will appear. Favorite films, plays, performances by popular artists can be played back at any time by sending the appropriate command to your home computer. The cost of such video devices is still very high, but they will become available to everyone when four-megabit chips “come into use.”

    Remember the first electronic wristwatches. The very idea that a traditional mechanism, perfected over centuries, could be replaced by electronics amazed people. And now electronic watches are so common that they successfully compete in price with mechanical ones.

    - Your last example is just a reason to return from the future to the present. Hence the next question: what are today's electronic computers and what, in general terms, is their scope of application?

    - G.M.: Indeed, the current “spectrum” of computers is very wide - from supercomputers to microprocessors. Conventionally, there are three main lines of computers: large machines with speeds of millions of operations per second, minicomputers with speeds of hundreds of thousands of operations per second, and microcomputers with speeds of tens and sometimes hundreds of thousands of operations per second.

    Any computer is equipped with arithmetic and logical processors, RAM and long-term memory, control devices, and devices for the input and output of information. Long-term memory usually resides on magnetic disks, tapes, or special media. It is long-term memory that holds the programs needed for calculations and all the material that makes up the database.

    Supercomputers with a performance of hundreds of millions of operations per second have entered our lives. As a rule, they are needed for research purposes or for managing very complex scientific and technical complexes. On the basis of these machines, in particular, systems for collective use are developed. We are talking about application software systems organized into packages by application area. This could be a package of linear algebra problems, statistical processing of experimental results, a package for displaying information in the form of graphs, and so on. It is important that most packages are universal, that is, they do not depend on the nature of a specific task. In other words, if in the course of solving a problem there is a need to process statistical data or, say, display information on a graph, this no longer requires new programs - the universal packages suffice.
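
    As a present-day illustration of this universality (a small sketch in Python with the NumPy library, rather than the application packages of that period), the very same general-purpose routines serve a linear-algebra problem and the statistical processing of measurements, with no new program written for either:

import numpy as np

# A linear-algebra problem: solve A*x = b, whatever A and b happen to describe.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)
print("solution x =", x)

# Statistical processing of experimental results: same library, new data.
measurements = np.array([9.8, 10.1, 9.9, 10.3, 9.7])
print("mean =", measurements.mean(), "sample std =", measurements.std(ddof=1))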

    Many service packages are stored in the machine’s memory; work is greatly facilitated and the productivity of the computer user is increased, which means that through this it is possible to achieve an additional economic effect. And although it is not easy to estimate, it is, of course, proportional to the increase in productivity of those who work with the help of computers.

    The capabilities of computers have been significantly expanded by the creation of machines that serve subscribers in various modes of access (remote batch processing, time-sharing, "man-machine" dialogue, etc.), by improvements in peripheral equipment and terminals - terminal devices within a computer system intended for the input and output of information during human interaction with the computer (displays and teletypes are used in this capacity, for example) - and by improvements in data transmission lines. This made it possible to move from local computer centers, whose equipment is located in one place, to multi-machine complexes whose components are located at considerable distances from each other. The latter are called computer networks.

    Computer networks best serve users when at some points there is a shortage of computer time and at others an excess. In addition, a computer network provides access to huge databases of not only a universal but also a specialized nature, helps the user find in these databases "pieces" of already well-debugged programs and other valuable information, and dramatically speeds up the solution of his task.

    - What about minicomputers?

    - G.M.: They are used primarily to provide automated control - both production (ACS) and technological processes (APCS), in scientific research, education systems and many other areas.

    In the first case, the computer is responsible for analyzing the implementation of plans, calculating wages and material and technical resources, developing network schedules for production preparation, assessing jobs and many other functions. The presence of an automated control system is a guarantee that the manager at any time has comprehensive information about the activities of his enterprise and can reasonably take the necessary organizational and economic measures. In fact, modern production is a complex organism with a large number of direct and feedback connections. The duty of both the director and, of course, all levels of management is to find states of this organism that are stable with respect to small deviations and the optimal option that leads to the highest economic effect. Naturally, such an effect is associated with certain limitations characteristic of the actual production environment.

    Speaking about automated process control systems, it should be noted that their role is very great in production itself, because each system is intended for the comprehensive automation of a specific technological process. It is here that a mini-computer is indispensable, and with good reason we can assume that the high economic effect from the introduction of automated process control systems is achieved precisely as a result of the use of electronic computer technology.

    Production process control systems have been around, perhaps, since the creation of the conveyor belt. But traditional options for managing an assembly line or rigid production facility were limited. And only new computer technology, including microprocessor technology, made it possible to control, say, the progress of a technological operation on the basis of constantly incoming and processed information. This happens in much the same way as if dozens of controllers were vigilantly performing their duties and if any deviations from the technology norm were detected, they would be immediately eliminated. In fact, this is done by an automated control system. Information is continuously received from a set of sensors and analyzed on high-speed computers. Their memory contains numerous options for disruption of the production process and a list of what needs to be done to correct the situation. The computer, in accordance with the program, “finds” the required command and sends it to the actuators to make the necessary adjustments.
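
    The loop just described can be sketched schematically in Python (the sensor readings, norms and corrections below are invented for illustration): readings are compared against the stored norm, and when a deviation is found the prescribed correction is sent to the actuators.

# Schematic sketch of one automated process control cycle (hypothetical values).
NORMS = {"temperature": (1480.0, 1520.0), "pressure": (2.0, 2.4)}   # allowed ranges
CORRECTIONS = {"temperature": "adjust_burner", "pressure": "adjust_valve"}

def read_sensors():
    # Stand-in for the real data-acquisition equipment.
    return {"temperature": 1531.0, "pressure": 2.1}

def send_to_actuator(command, deviation):
    print(f"command: {command}, deviation: {deviation:+.1f}")

def control_cycle():
    readings = read_sensors()
    for quantity, value in readings.items():
        low, high = NORMS[quantity]
        if value < low or value > high:
            # Deviation detected: pick the stored correction and act on it.
            deviation = value - (high if value > high else low)
            send_to_actuator(CORRECTIONS[quantity], deviation)

control_cycle()   # in a real system this loop runs continuously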

    I will give just a few examples.

    The rolling mill must roll a sheet of a given thickness. Previously, certain tolerances had to be accepted. They were inevitable owing to the heterogeneity of the source material and uneven dynamic and static effects. As a result, several percent, and sometimes even ten percent, of valuable metal was wasted.

    Modern rolling mills are equipped with sensors connected to a computer. If some discrepancy with the established standard is detected, the computer gives a command to re-roll, and the sheet is brought to the required thickness.

    If the mill is continuous, working in one direction, then the computer "orders" the next stand to increase the rolling pressure and again checks the thickness of the steel sheet in order to make the next operational decision. With such rolling, tolerances are practically eliminated and all the metal goes into use.

    Saving material resources is an important task. But it is equally important to produce products that meet the highest technical requirements. For example, we melt cast iron. Only a very experienced specialist, as they say, senses the quality and readiness of the melt. Of course, samples are taken and express analysis is carried out, but sometimes the results come from the laboratory too late, and nothing can be corrected. And the end result is substandard cast iron. If we move to an automated process control system, when spectral analysis is continuously carried out, the concentrations of all components of the melt are recorded, and this data is processed on a computer, then the blast furnace production will become as controllable as a rolling mill. There will be huge savings due to additional volumes of high-quality cast iron. Although such systems are still undergoing pilot testing, it is already clear that their payback period is certainly less than a year.

    The creation of more and more automated process control systems is the main path for the development of an intensive economy.

    By the way, let’s talk about the importance of mini-computers. Today, a more general concept of combining automation of production management and automation of technological process control has already been formed. Here we come to a unified system based on the so-called integrated automated control systems. The opportunity to optimize organizational and purely technical measures that such a system provides promises brilliant prospects.

    - And now, please, in more detail - about microprocessors.

    - G.M.: This computing technology is built into components of machines, devices and elements. Each microprocessor manages its own node. But it can be connected to other machine components through other microprocessors. As a rule, their actions are coordinated by a single mini-computer. This structure is based on the logic of managing large systems, such as enterprises themselves. After all, they are built according to a hierarchical principle: first sections, then workshops, then entire productions and, finally, management.

    Microprocessors have already taken a firm place in the machine-tool industry - in computer numerical control (CNC) machines. This is a new and active area of application of microprocessor technology in production and, at the same time, the most radical step towards comprehensive automation: from controlling a single machine with a limited set of operations to unmanned robotic production complexes.

    I would like to emphasize the main thing: the great opportunities that the introduction of computers opens up to achieve a national economic effect. It is achieved through the optimal organization of production and its components, timely adjustments to the technological process in the event of random deviations, and reliable operation without the presence of a highly qualified worker.

    If we return to household appliances, even now we already feel the influence of electronics in general and microprocessors in particular on our everyday life. Washing machines with a programmable set of operations, a wide variety of microcalculators, video recorders and much more are on sale. The pace of intellectualization of household appliances is undoubtedly increasing. This means that the household will take up less labor, which will again benefit social production.

    - Today there is a lot of talk about the fact that conducting scientific research without electronic computer technology is practically impossible, perhaps except in the most abstract areas associated with purely theoretical developments. How exactly do computers help scientists, where is their use needed in the first place?

    - G.M.: First of all, of course, in mathematical modeling. Scientific research usually begins with hypotheses. On their foundation, ever more detailed models of the phenomena under study are built, and these are usually implemented on a computer. Possessing great speed and memory, the computer, on the basis of one model or another, solves the problem repeatedly for a wide variety of sets of input parameters. This makes it possible to describe quantitatively the possible solutions of a given problem and to select from them those that interest the researcher - and to do so in a fairly short time. Equipping laboratories with electronic computing technology is a reliable way to increase the pace of scientific research.
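
    A toy sketch of this pattern (Python, with a deliberately simple model standing in for a real one): the model is evaluated over many sets of input parameters, and only the variants that meet the researcher's criterion are kept.

import math

def model_range(speed, angle_deg, g=9.81):
    """Flight range of a projectile - a stand-in for a far richer model."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

candidates = []
for speed in (50, 100, 150):                 # m/s
    for angle in range(15, 80, 5):           # degrees
        r = model_range(speed, angle)
        if r > 800.0:                        # the researcher's selection criterion
            candidates.append((speed, angle, round(r, 1)))

for speed, angle, r in candidates:
    print(f"speed={speed} m/s, angle={angle} deg -> range={r} m")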

    Next. Outstanding achievements of recent years, such as the creation of artificial genes, the production of feed protein from methane, the emergence of large and ultra-large integrated circuits, could not become a reality without computer technology, which helped to conduct relevant experiments. The computers controlled all stages of the experiment and if it deviated from the given program, they immediately sent a correction command.

    Electronic computer technology is also indispensable when processing the results of experiments. If in the pre-computer era complex experiments lasted for days, or even weeks, then the processing of their results was delayed for months, or even years. The computer today gives the answer almost immediately after the end of the experiment. The time savings are truly enormous. It is safe to say that computers have increased the productivity of researchers by more than 10 times.

    The logic of modern scientific research is such that it requires bringing the computer closer to the scientist - be it a theorist or an experimenter. As for experimenters, a certain tendency has already emerged: they are quite satisfied with standard mini-computers, since the nature of the use of these machines is not much different from their use in automated process control systems.

    With theorists the situation is more complicated. For their work they need the whole computer - not necessarily a fast one, but with all its capabilities. The time-sharing mode on mainframe computers solves this problem, but only partially. After all, a scientist thinks while constantly turning to new information; sometimes he needs to intervene in the course of the calculations or change them. But it is impractical to tie up a mainframe computer for such purposes - its time and resources are far too expensive.

    There is a contradiction between the needs of the researcher and the capabilities of the computer. It was overcome when a new original direction was born in computer technology - individual, or, as they say now, personal computers. These are completely modern machines with their characteristic architecture, a set of appropriate equipment and programs. Work on a personal computer is carried out using 16- and 32-bit words. 64-bit arithmetic is also possible, of course, with some loss of calculation speed. A personal computer has input/output devices and, if necessary, communication lines with other computers. That is, if the “abilities” of a personal computer are not enough to solve the problem at hand, then through the communications system the finished program can be transferred to another machine that has more resources, in order to then receive an “answer”.

    - You talked about the participation of computers in the activities of scientists. But a scientific idea is usually embodied, so to speak, into a “real product” only through design and engineering developments. After all, as often happens: a scientific idea has long won universal recognition, but before its optimal or simply effective implementation in the national economy, years of painstaking work of design engineers pass. Is this distance getting shorter?

    - G.M.: A real opportunity to reduce the time “from idea to machine” arose after the advent of CAD systems - automatic design systems. I will not talk about the historical path that they went through, although it is interesting and instructive in itself, but I will only say about their basic principles.

    What is a modern design and engineering system? It consists of three interconnected stages. The first is the formation of technical specifications for the project: human-machine dialogue to draw up a schematic diagram. Naturally, the project must be based on the most modern scientific ideas, take into account the possibilities of implementation, and limitations on the required resources. These are, so to speak, “discussions” between a person and a computer, the memory of which contains all the necessary information - from theoretical models to all sorts of restrictions. The end result of the first stage is the “outline” of the project.

    Then comes the time for its detailed design study. At the second stage, application software packages focused on the problems of this project are widely used. This operation, if necessary, is combined with a system for finding the best solutions based on the researcher’s experience. As a result, a complete set of design documentation and its graphical display appears.

    And finally, at the third stage, a plan is drawn up for the technological preparation of production, so that the product can be manufactured serially.

    But it happens that the idea of the machine is good and the design work is quite solid, yet for one reason or another serial production proves impossible. Then the so-called iterative design process begins, taking into account the limitations dictated by production. Sometimes this affects the fundamental aspects of the project, and everything seems to be repeated again - from the detailed design stage, or even from a reworking of the technical specifications. And so on until the desired result is achieved.

    It is clear that the presence of a computer sharply reduces the time required to complete the three indicated stages. And the sooner a scientific idea is translated into a new machine or technology, the greater the economic effect the national economy will receive. But the benefits of using computers do not end there.

    The automatic design system for machine tools, machining centers or color televisions is the fruit of the intense efforts of scientists, designers, technologists and programmers. After all, first you need application software packages that are designed to speed up design work. Then the same packages can serve well in all design bureaus and enterprises where new equipment is born. Compared to the traditional method, when each team acted in its own way, the gain is colossal. Previously, years were spent on a project, now weeks and even days.

    It is true that application packages for CAD, brought to the appropriate standards, are quite labor-intensive and still turn out to be very expensive. But, once they arise, they are able to satisfy any designers and technologists, putting at their disposal huge amounts of programmed knowledge. Application software packages are becoming our national wealth. And it is not surprising that since 1983 they and other computer software have been considered commercial products in our country. This is an important step to stimulate the development of computer software using economic means.

    - Today, the volume of a wide variety of information - scientific, economic, technological, social - is literally growing like a snowball, and it is already difficult to navigate the information ocean without the help of a computer. How is this done practically?

    - G.M.: Electronic computers are widely involved in the field of information - from creating databases to organizing effective search systems.

    We started by organizing the most complex information flows, by combining a vast mass of information into special sections, subsections and paragraphs. All of them have consistent indexing, and the computer can move from large arrays of homogeneous information to smaller and smaller ones. As a result, continuously narrowing the search range, the machine achieves its goal - it finds what interests the user.
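
    This narrowing search can be pictured with a toy example in Python (the index and its contents are invented): the catalogue is filed under consistently indexed sections, subsections and paragraphs, and each step of the walk confines the search to a smaller array.

CATALOGUE = {
    "3": {                      # section
        "3.2": {                # subsection
            "3.2.7": "data on synthesized organic compounds of the given class",
        },
    },
}

def find(index):
    """Walk from section to subsection to paragraph using the index prefixes."""
    parts = index.split(".")
    node = CATALOGUE
    for depth in range(1, len(parts) + 1):
        key = ".".join(parts[:depth])
        node = node[key]        # each step narrows the search range
    return node

print(find("3.2.7"))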

    There are many hundreds and even thousands of databases of different types. Collecting them all together into a single computing system is simply unrealistic. In fact, let's take at least three databases - about synthesized organic compounds, about the immune status of the patient, and about the composition and characteristics of stars in the Galaxy. Of course, these data have some commonalities, but the subject information of such databases, the areas and methods of their use are completely different. On the one hand, they cannot be “torn off” from the teams of research institutes, clinics, observatories, libraries - without them they will soon lose their freshness, and therefore their value. On the other hand, and this is natural, it is necessary to make sure that any database is accessible to all users. In other words, they must be combined. Where is the way out of the current contradiction? It was found in the organization of a distributed knowledge system.

    Indeed, why try to combine the incompatible? It is much better to let each team of researchers, even a small one, provided it has a sufficiently capacious computer memory, create its own database with a standard structure. The "owners" of this database will constantly develop and update it - after all, it contains information that is vital to them. A user from any other institution, "entering" this database through communication channels, obtains the latest and most qualified information. That is, one team is able to supply the entire country with the relevant information. All such specialized sources of information together constitute a distributed knowledge system. If we now link them with one another, we arrive at a unified system of data banks for the country. This is the main path of development of modern information technology.

    Now, for example, the Institute of Organic Chemistry of the Siberian Branch of the USSR Academy of Sciences, upon a teletype request from any user, can answer whether a chemical compound with the specified parameters has been obtained before or not. The number of known chemical compounds, if I am not mistaken, grows by roughly twenty to thirty thousand a year. Is it necessary to explain how much such an "electronic reference" saves the time of an organic chemist and spares him from rediscovering already synthesized substances?

    Or the design work we just talked about. Any new machine or technical device must at least meet world standards. But this global level needs to be continuously “monitored”, and the latest information coming from different countries must be promptly entered into data banks. We are talking here about tens and hundreds of thousands of types of products.

    Society will become more and more informatized. First, fundamental constants, then technical data systems and, finally, semantic texts as the most complex type of information - these are the stages in the formation of a unified information network of our country. However, this is just the beginning of the journey. Ahead lies a huge and interesting work on the use of knowledge accumulated by man and systematized with the help of electronic computer technology.

    - It is well known that a computer is capable of solving the most complex problems of science and technology. On the researcher's instructions, in the course of finding an answer, it runs through numerous options and settles on the best one. But a computer usually operates according to a clearly formulated program. The very search for an optimal solution and the scheme for enumerating the options were given to it by a person. So do modern electronic machines have intelligence of their own?

    - G.M.: Already at the first stage of the development of computers, people began to teach them to "think" and to draw at least elementary, but quite logical, conclusions. True, the boundary between a fully programmed mode of computer operation and the computer's own "initiative" is very arbitrary, but there is, so to speak, a "programmed initiative" nonetheless.

    Creating ever more advanced programming languages, people strive to write down the conditions of a problem in a form close to natural language. For example, an engineer instructs the machine to calculate an airplane wing of such-and-such shape, surface quality and size, taking into account certain air flow rates. From the information received, the computer must compose the mathematical problem precisely, down to the smallest detail. Until very recently this was done by a software engineer. The systems that exist today for stating the initial conditions of a problem are such that a computer copes with this no worse. And most importantly, it does so in a matter of minutes or hours, as opposed to the weeks and months required by a specialist armed with knowledge and intelligence. Modern machines have simply "learned" to choose rational or even optimal intermediate operations. This means that they are capable of making decisions when various options for the software implementation of calculations are possible. It is here, at the level of the corresponding machine languages and translators - the means of converting a language into machine commands - that we first encountered the artificial intelligence of a computer.

    However, as soon as computers began to be used in design work and in building automated systems for managing databases or technological processes, researchers had the idea of introducing creative elements into the software. Say a designer begins to design a machine part on a display. He needs to know the dimensions of the part, as well as its input and output characteristics - after all, the part must fit the future machine. Monitoring compliance with these essential conditions rests with the computer. If they are violated during the design search, the computer immediately lets the person know about it. It acts as an experienced assistant or expert. This, again, is an element of artificial intelligence.

    For the part, a material of the required strength, with certain temperature parameters and so on, has to be selected from the database. On the basis of the "request," the computer finds the suitable sets of materials and offers them to the person. The designer, guided by his experience, gives the computer the task of calculating, for the known characteristics of the selected material, the strength, temperature and other fields of the part. If the calculation results satisfy him, the work is completed and the part is ready. If not, he selects another suitable material and everything is repeated. As we see, the designer and the computer work in dialogue, and everything that assistants once had to do with reference books and the corresponding calculation schemes is now done by the computer. It replaces man not only in performing mechanical work, but also in drawing logical conclusions.
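
    A schematic sketch of that dialogue in Python (the materials, their properties and the checks are invented for illustration): the computer offers the materials that satisfy the request, calculates the part for the chosen one, and the cycle repeats until the designer is satisfied.

MATERIALS = {
    "steel 45":  {"strength_mpa": 600, "max_temp_c": 400},
    "titanium":  {"strength_mpa": 900, "max_temp_c": 500},
    "aluminium": {"strength_mpa": 300, "max_temp_c": 200},
}

def matching_materials(min_strength, min_temp):
    """The 'request' to the database: materials meeting the stated limits."""
    return [name for name, p in MATERIALS.items()
            if p["strength_mpa"] >= min_strength and p["max_temp_c"] >= min_temp]

def part_is_sound(material, working_stress, working_temp):
    """A stand-in for the strength and temperature calculation of the part."""
    p = MATERIALS[material]
    return working_stress <= p["strength_mpa"] and working_temp <= p["max_temp_c"]

for material in matching_materials(min_strength=500, min_temp=350):
    if part_is_sound(material, working_stress=550, working_temp=380):
        print("the part can be made of", material)
        break   # the designer accepts the result and the work is finished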

    It is where logic and logical conclusions begin that artificial intelligence begins to manifest itself. Man gradually transfers more and more of his functions as a designer-researcher to a machine, reserving only the most fundamental ones, where creativity and unprogrammed knowledge are indispensable.

    Intelligence modeling occupies a special place in the development of modern science. I am not talking, for example, about the derivation of new mathematical theorems, although much has already been achieved here with the help of the algebra of logic, in particular by the Leningrad school of Professor N.A. Shanin, who achieved outstanding results in proving theorems in set theory. Let's take simpler things. We are all taught at school to solve geometric and trigonometric problems. But this can also be “taught” to a computer. So if a scientist later encounters any problem from Euclidean geometry during his research, it will be immediately solved by the machine.

    Next. In mathematics, and especially in computational mathematics, many universal and specialized algorithms for solving problems related to linear algebra, differential and integral equations have been developed today. It is also possible to build databases and search engines from them to select algorithms with the help of which the problem will be solved by a computer in the best possible way. And this is an element of artificial intelligence.

    Exact integration, differentiation and the expansion of functions into series are also becoming an area that people are already handing over to electronic computing technology.
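
    A present-day illustration of exactly such machine work (the Python library SymPy, which of course postdates this text): an exact antiderivative, an exact derivative and a series expansion obtained without any numerical approximation.

import sympy as sp

x = sp.symbols("x")
f = sp.sin(x) * sp.exp(x)

print(sp.integrate(f, x))        # exact antiderivative
print(sp.diff(f, x))             # exact derivative
print(sp.series(f, x, 0, 5))     # expansion about x = 0 up to x**4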

    Tools for the intellectualization of problem solving on a computer, together with the underlying models, will in the foreseeable future develop on the basis of a dialogue between man and machine. The prospect for electronic computers lies precisely in the cooperation between the higher intellect of a person, which cannot be fully formalized, and the ever-improving elements of computer artificial intelligence, with its uniquely fast searching of data arrays, retrieval of needed information and various kinds of optimization.

    In the meantime, a much more modest goal is on the agenda: to teach computers to understand us at the level of a simple, but natural language; give advice to a person who is not privy to the intricacies of algorithms for solving complex problems; find optimal solutions; reflect volumetric information in the form of graphs and holograms; answer us with synthesized speech.

    This is not a complete list, but it covers the main artificial intelligence problems that people set before computers. Solving them promises to increase the pace of scientific research and the speed and quality of design work, and to support information services and the management of production processes. If we add to this the active use of computers in medicine, banking, trade, transport and many other areas, a truly endless horizon of computer applications opens up before us. Today the limit of their applicability can be set only by our imagination.

    - What will an industrialized society look like with the mass introduction of electronic computing technology? Where will the changes brought about by computers be most noticeable?

    - G.M.: First of all, in social production. The content of labor itself will change and its productivity will increase tenfold.

    Modern mass production is based on the division of labor, on performing specialized operations that do not require special skills, and computers greatly increase the possibilities of its complete automation, eliminating repetitive, monotonous, and tedious operations for humans. So these kinds of jobs will disappear in industrial enterprises first. But not only them. Today, many factories already operate numerically controlled machines or even special machining centers. However, we must not forget that with their advent the nature of the duties of a qualified machine operator has changed. He now only observes automated equipment. The figure of the virtuoso turner is becoming a thing of the past. And vice versa, there is an increasing need for highly qualified specialists - engineers for the operation of microelectronic equipment, software experts.

    In the next century - and the next century is just around the corner - most industrial jobs will look very different. They will be occupied by robots that can “see”, “hear”, “touch”, respond to ultraviolet, infrared or radioactive radiation, self-program and reprogram. The first fully automated enterprises are already being created, where there is practically no live human labor. Automata that do not know rest 24 hours a day, with productivity immeasurably higher than that of a person, and moreover, “reproducing” themselves, are a close reality.

    And you need to be prepared for this reality. It is necessary to teach schoolchildren differently (the reform of secondary schools is already underway), rebuild the entire higher education system, change the nature of training and retraining of technicians and workers, teach enterprise managers to effectively use electronic equipment.

    Science, technology, production, scientific and technological progress in general require that the focus of attention in the training of specialists of all categories move from the simple assimilation of large volumes of information to its creative assimilation, the perception of continuously changing ideas, new trends in modern development.

    The very appearance of computers gives a powerful impetus to the creation of teaching methods in schools, technical schools and universities that would enhance the creative abilities of a person armed with computer technology.

    In a word, everyone needs to change their usual working methods and go back to school. Learn to live and work in a new, rapidly changing world, which is unthinkable without the widest use of modern computer technology.

    The article was prepared for placement in the Virtual Museum by Ponarin O.S., Fedorova A.P., Brest.
    From the book “Horizons of Scientific Research”, Marchuk G.I. Publishing house "Soviet Russia", Moscow, 1987
    February 17, 2017

    Relatively recently, the term “computer technology” has come into use. This designation initially did not imply all those aspects that are included in it today. And, unfortunately, most people for some reason believe that computers and computer technology are synonymous words. This is clearly a fallacy.

    Computer technology: the meaning of the word

    The meaning of this term can be interpreted in completely different ways, especially since different dictionaries offer different interpretations.

    However, approaching the issue with some degree of generalization, we can say that computer technology is a set of technical devices together with the mathematical tools, techniques and methods used to automate (or at least mechanize) the processing of information, computational processes, and the description of one phenomenon or another (physical, mechanical, etc.).

    What is this in a broad sense?

    Computer technology has been known to mankind for a long time. The most primitive devices, which appeared hundreds of years BC, include, for example, the Chinese abacus and the Roman abacus. In the second half of the current millennium, devices such as Napier's rods, Schickard's calculating machine and mechanical calculators appeared. Judge for yourself: today's analogues in the form of calculators can also safely be considered one of the varieties of computer technology.

    Nevertheless, the interpretation of this term acquired a more expanded meaning with the advent of the first computers. This happened in 1946, when the first computer was created in the USA, denoted by the abbreviation ENIAC (in the USSR, such a device was created in 1950 and was called MESM).

    Today, the interpretation has expanded even further. Thus, at the present stage of technology development, it can be defined that computer technology is:

    • computer systems and network management tools;
    • automated control systems and data (information) processing;
    • automated design, modeling and forecasting tools;
    • software development systems, etc.

    Computing Tools

    Now let's see what the tools of computer technology are. The basis of any process is information or, as it is now called, data. The concept of information, however, is rather subjective, since for one person a given process may carry a semantic load while for another it does not. Therefore, to unify data, a binary form of representation was developed - one that any machine can perceive and that is the most widely used for data processing.

    Among the tools themselves, one can single out technical devices (processors, memory, input/output devices) and software, without which all this "hardware" is completely useless. It is worth noting separately that a computing system has a number of characteristic features, such as integrity, organization, connectivity and interactivity. There are also so-called multiprocessor computing systems, which provide a level of reliability and performance unavailable to conventional single-processor systems. Only in the combination of hardware and software can we speak of the main means of computing. Naturally, the methods that provide a mathematical description of a particular process could be added here, but describing them would take quite a long time.

    The structure of modern computers

    Based on all these definitions, we can describe the operation of modern computers. As mentioned above, they combine hardware and software, and one cannot function without the other.

    Thus, a modern computer (a piece of computer technology) is a set of technical devices that ensure the functioning of a software environment for performing certain tasks - and, conversely, a set of programs that make the hardware work. The first formulation is the more accurate of the two, because ultimately the whole set exists specifically to process incoming information and output the result.

    The hardware (the computing equipment itself) includes several basic components without which no system can function: motherboards, processors, hard drives, RAM, monitors, keyboards, mice, peripherals (printers, scanners, etc.), disk drives and so on. On the software side, operating systems and drivers come first: operating systems run application programs, and drivers ensure the correct functioning of all hardware devices.

    A few words about classification

    Modern computing systems can be classified according to several criteria:

    • principle of operation (digital, analog, hybrid);
    • generations (stages of creation);
    • purpose (problem-oriented, basic, household, dedicated, specialized, universal);
    • capabilities and sizes (super-large, super-small, single- or multi-user);
    • conditions of use (home, office, industrial);
    • other characteristics (number of processors, architecture, performance, consumer properties).

    As is already clear, it is impossible to draw clear boundaries in defining classes. In principle, any division of modern systems into groups still looks purely conditional.

    The personal computer quickly entered our lives. Just a few years ago it was rare to see any kind of personal computer - they existed, but they were very expensive, and not even every company could have a computer in their office. Now every third home has a computer, which has already become deeply embedded in human life.

    Modern computers represent one of the most significant achievements of human thought, the influence of which on the development of scientific and technological progress can hardly be overestimated. The scope of computer applications is enormous and is constantly expanding.

    Even 30 years ago there were only about 2,000 different applications of microprocessor technology. These included production management (16%), transport and communications (17%), information and computing technology (12%), military equipment (9%), household appliances (3%), training (2%), aviation and space (15%), medicine (4%), scientific research, municipal and urban services, banking, metrology, and other areas.


    Computers in institutions. Computers have literally revolutionized the business world. The secretary of almost any institution processes texts when preparing reports and letters. The institutional apparatus uses a personal computer to display wide-format tables and graphic material on the display screen. Accountants use computers to manage an institution's finances and enter documentation.

    Computers in production. Computers are used in a wide range of industrial tasks. For example, a dispatcher at a large plant has at his disposal an automated control system that ensures the uninterrupted operation of the various units. Computers are also used to control temperature and pressure in various manufacturing processes. Factories also use computer-controlled robots - on car assembly lines, for example - for repetitive tasks such as tightening bolts or painting body parts.

    Computer - assistant designer. Airplane, bridge, or building design projects require a lot of time and effort. They represent one of the most labor-intensive types of work. Today, in the computer age, designers have the opportunity to devote their time entirely to the design process, since the machine “takes over” the calculations and preparation of drawings. Example: A car designer uses a computer to study how body shape affects the performance of a car. Using devices such as an electronic pen and tablet, the designer can quickly and easily make any changes to the project and immediately see the result on the display screen.


    Computer in a self-service store. Imagine it's 1979 and you work part-time as a cashier at a large department store. As customers place their selected purchases on the counter, you must read the price of each item and enter it into the cash register. Now let's return to the present day. You still work as a cashier in the same department store, but much has changed. When customers now place their purchases on the counter, you pass each item over an optical scanning device, which reads the universal product code on the package. From this code the computer finds the item's price stored in its memory and shows it on a small screen so that the buyer can see the cost of the purchase. Once all the selected items have passed over the optical scanner, the computer immediately displays the total cost of the purchases.
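
    The checkout logic itself is simple enough to sketch in Python (the product codes and prices below are invented): the scanner supplies a code, the computer looks the price up in its memory, shows it, and keeps the running total.

PRICES = {                      # hypothetical universal product codes and prices
    "4600123456789": ("milk", 0.89),
    "4600987654321": ("bread", 0.45),
}

def checkout(scanned_codes):
    total = 0.0
    for code in scanned_codes:
        name, price = PRICES[code]          # price found by code, not typed in by hand
        total += price
        print(f"{name}: {price:.2f}")       # shown on the small customer screen
    print(f"total: {total:.2f}")

checkout(["4600123456789", "4600987654321", "4600123456789"])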


    Computer in banking. Performing financial calculations using a home personal computer is just one of its possible applications in banking. Powerful computing systems allow you to perform a large number of operations, including processing checks, recording changes to each deposit, accepting and issuing deposits, processing loans, and transferring deposits from one account to another or from bank to bank. In addition, the largest banks have automatic devices located outside the bank. ATMs allow customers to avoid long lines at the bank and withdraw money from their account when the bank is closed. All that is required is to insert a plastic bank card into the automatic device. Once this is done, the necessary operations will be performed.

    Computer in medicine. How often do you get sick? You have probably had a cold, chickenpox, or a stomach ache. If in such cases you went to the doctor, most likely he performed the examination quickly and quite effectively. However, medicine is a very complex science. There are many diseases, each with its own characteristic symptoms. In addition, there are dozens of diseases with similar or even completely identical symptoms. In such cases it can be difficult for a doctor to make an accurate diagnosis, and here the computer comes to his aid. Currently, many doctors use a computer as an assistant in making a diagnosis, i.e. in clarifying what exactly is wrong with the patient. To do this, the patient is thoroughly examined and the results of the examination are entered into the computer. Within a few minutes the computer reports which of the tests performed gave an abnormal result, and it can also suggest a possible diagnosis.
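
    In the simplest form, such help amounts to comparing the test results with their normal ranges and matching the pattern of deviations against stored descriptions. A deliberately toy sketch in Python (the ranges and rules are invented, not medical advice) looks like this:

NORMAL_RANGES = {"temperature_c": (36.0, 37.0), "wbc_per_nl": (4.0, 9.0)}
RULES = {
    ("temperature_c", "wbc_per_nl"): "possible bacterial infection",
    ("temperature_c",): "possible viral infection",
}

def assess(results):
    # Collect the names of all tests whose values fall outside the normal range.
    abnormal = tuple(sorted(
        name for name, value in results.items()
        if not NORMAL_RANGES[name][0] <= value <= NORMAL_RANGES[name][1]
    ))
    for name in abnormal:
        print(f"abnormal result: {name} = {results[name]}")
    print("suggestion:", RULES.get(abnormal, "no matching rule; refer to the physician"))

assess({"temperature_c": 38.4, "wbc_per_nl": 12.5})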

    Computer in education. Today, many educational institutions cannot do without computers. Suffice it to say that with the help of computers three-year-old children learn to distinguish objects by their shape; six- and seven-year-old children learn to read and write; school graduates prepare for entrance exams to higher education institutions; and students explore what will happen if the temperature of a nuclear reactor exceeds the permissible limit. "Machine learning" here refers to the process of learning with the help of a computer, which acts as the "teacher". A microcomputer or a terminal connected to an electronic data transmission network can serve in this capacity. The assimilation of educational material is normally guided step by step by the teacher, but if the material is presented as a package of appropriate computer programs, the student can monitor its assimilation himself.

    Computers are on guard of the law. Here is news that will not please the criminal: the “long arms of the law” are now equipped with computer technology. The “intellectual” power and high speed of the computer, its ability to process huge amounts of information, are now being put to the service of law enforcement agencies to increase work efficiency. The ability of computers to store large amounts of information is used by law enforcement agencies to create a file of criminal activity. Electronic data banks with relevant information are easily accessible to state and regional investigative agencies throughout the country. Thus, the Federal Bureau of Investigation (FBI) maintains a national data bank, which is known as the National Crime Information Center. Computers are used by law enforcement agencies not only in computer information networks, but also in the process of investigative work. For example, in crime labs, computers help analyze substances found at crime scenes. The conclusions of a computer expert often prove decisive in the evidence in a pending case.

    Computer as a means of communication between people. If at least two people work on one computer, they soon want to use it to exchange information with each other. On large machines used simultaneously by dozens or even hundreds of people, special programs are provided for this purpose, allowing users to send messages to one another. Needless to say, as soon as it became possible to connect several machines into a network, users seized on this opportunity not only to use the resources of remote machines but also to expand their circle of contacts. Programs have been created for exchanging messages between users on different machines. The most universal means of computer communication is e-mail. It allows messages to be forwarded from almost any machine to any other, since most known machines, running on different systems, support it. E-mail is the most common Internet service; currently, approximately 20 million people have an e-mail address. Sending a letter by e-mail is much cheaper than sending a regular letter. In addition, a message sent by e-mail reaches the recipient in a few hours, while a regular letter can take several days, or even weeks.
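
    Today the sending itself takes only a few lines - a minimal sketch using Python's standard smtplib module (the mail server and both mailboxes are invented for illustration):

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "researcher@example.org"
msg["To"] = "colleague@example.net"
msg["Subject"] = "Results of yesterday's experiment"
msg.set_content("The processed data have been placed in the shared archive.")

with smtplib.SMTP("mail.example.org", 25) as smtp:   # hypothetical mail server
    smtp.send_message(msg)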

    The Internet is a global computer network covering the whole world. Today the Internet has about 15 million subscribers in more than 150 countries, and the size of the network increases by 7-10% every month. The Internet forms a kind of core connecting the various information networks belonging to institutions all over the world.