• Methodological basis and reference systems. OSI reference model

    In 1978, ISO (the International Organization for Standardization) released a set of specifications describing a model of open systems interconnection, i.e. systems open to communication with other systems. This was the first step towards international standardization of protocols: all systems could now use the same protocols and standards to exchange information.

    In 1984, ISO released a new version of its model, called the ISO Open Systems Interconnection Reference Model, which became the international standard. Its specifications are used by manufacturers when developing network products and are followed when building networks. The full name of the model is the ISO OSI (Open Systems Interconnection) Reference Model; for brevity we will call it the OSI model. The OSI model is not a network architecture, because it does not describe the services and protocols used at each layer; it only defines what each layer should do. It is also important to understand that the reference model is not something real that enables communication: by itself it does not make communication work and serves only for classification. What it classifies is what actually does the work, namely protocols. A protocol is a set of specifications defining the implementation of one or more OSI layers. ISO has also developed standards for each layer, although these standards are not part of the reference model itself; each has been published as a separate International Standard.

    The OSI model has seven layers. Each layer is associated with different network operations, equipment, and protocols. The choice of exactly seven layers was dictated by the functional structure of the model.

    The OSI model, without the physical medium, is shown in Fig.

    The network functions performed at each layer interact only with the functions of the neighboring layers, the one above and the one below. For example, the Session layer interacts only with the Presentation and Transport layers. All these functions are described in detail below.

    Each layer performs several operations in preparing data for delivery over the network to another computer. Layers are separated from each other by boundaries called interfaces. All requests from one layer to another pass through the interface, and each layer, in performing its functions, uses the services of the layer below it. The lowest layers, 1 and 2, define the physical medium for transmitting data bits via the network adapter card and cable. The topmost layers determine how applications access communication services.

    The task of each layer is to provide services to the layer above it while hiding the details of how those services are implemented. Each layer on the sending computer operates as if it were directly connected to the corresponding layer on the receiving computer. This virtual connection is shown in Fig. by dotted lines. In reality, communication occurs between adjacent layers of the same computer. Software at each layer implements specific network functions in accordance with a set of protocols.

    Before being sent onto the network, data is divided into packets, each transmitted between network devices as a single unit. A packet passes sequentially through all layers of software, from the Application layer to the Physical layer, and at each layer the formatting or addressing information needed for error-free transmission over the network is added to it.

    On the receiving side the packet also passes through all the layers, but in reverse order. The software at each layer analyzes the packet, removes the information that was added at the same layer on the sending side, and passes the packet up to the next layer. Once the packet reaches the Application layer, all service information has been removed and the data returns to its original form.

    Thus, only the Physical layer can actually send information to the corresponding layer of another computer. On both the sending and the receiving computer, information must pass through all layers, starting from the one at which it is sent and ending with the corresponding layer of the computer that receives it. For example, if the Network layer transmits information from computer A, the information travels down through the Data Link and Physical layers onto the network cable, then reaches computer B, where it travels up through the Physical and Data Link layers to the Network layer. In a client-server environment, an example of such information is the address and error-check result appended to the packet.
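    This down-and-up movement of a packet through the layers can be illustrated with a minimal Python sketch. It is not a real protocol stack; the bracketed "headers" simply stand in for the service information each layer adds on the sending side and removes, in reverse order, on the receiving side.

```python
# Illustrative sketch (not a real protocol stack): each layer wraps the
# payload in its own header on the way down and strips it on the way up.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link", "physical"]

def send(data: str) -> str:
    """Encapsulate: each layer in turn adds its header, top to bottom."""
    packet = data
    for layer in LAYERS:
        packet = f"[{layer}]{packet}"
    return packet

def receive(packet: str) -> str:
    """Decapsulate: strip headers in reverse order, bottom to top."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]"
        assert packet.startswith(header), f"missing {layer} header"
        packet = packet[len(header):]
    return packet

wire = send("hello")          # outermost header belongs to the Physical layer
assert receive(wire) == "hello"
```

    Note that the header added last (Physical) is the first one the receiver removes, which is exactly the "reverse order" described above.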

    Interaction between adjacent levels is carried out through the interface. The interface defines the services that the lower layer provides to the upper layer and how to access them.

    Let's look at each of the seven layers of the OSI model and the services they provide to adjacent layers.

    Application layer. Layer 7. It provides a window through which application processes access network services. Its services directly support user applications. The Application layer controls shared access to the network and data flow, and handles data recovery after communication failures.

    Presentation layer. Layer 6. The Presentation layer defines the format used to exchange data between networked computers. A typical example of a Presentation-layer service is encoding transmitted data in a standard way. The Presentation layer is responsible for protocol conversion, data translation and encryption, changing code tables, and expanding graphics commands. It also handles data compression to reduce the number of bits transferred.
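    Two of these Presentation-layer tasks, standard encoding and compression, can be sketched in a few lines of Python. The choice of UTF-8 and zlib here is purely illustrative; a real stack would use whatever transfer syntax the peers negotiated.

```python
import zlib

# Sketch of two presentation-layer tasks: converting text to a standard
# wire encoding (UTF-8 here) and compressing it to reduce transmitted bits.
message = "Привет, network!"          # sender's local representation

encoded = message.encode("utf-8")     # standard transfer encoding
compressed = zlib.compress(encoded)   # fewer bits on the wire

# The receiver reverses both steps to recover the original form.
restored = zlib.decompress(compressed).decode("utf-8")
assert restored == message
```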

    Session layer. Layer 5. The Session layer allows two applications on different computers to establish, use, and terminate a connection called a session. It can also provide an extended set of services useful for some applications. The Session layer manages the dialogue between communicating processes, establishing which party transmits, when, and for how long.

    Transport layer. Layer 4. The main function of the Transport layer is to accept data from the Session layer, break it into smaller pieces if necessary, pass it on to the Network layer, and ensure that these pieces arrive at their destination in the right order. All this must be done efficiently and in a way that isolates the higher layers from any changes in hardware technology. The Transport layer also handles the creation and teardown of network connections, manages message flow, checks for errors, and participates in resolving problems related to sending and receiving packets. Examples of Transport-layer protocols are TCP and SPX.
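    The segmentation and in-order reassembly described here can be sketched as follows. This toy model only tags each piece with a sequence number and sorts on receipt; real transport protocols such as TCP add acknowledgments, retransmission, and flow control on top of this idea.

```python
# Sketch of transport-layer behavior: split a message into numbered
# segments, let the network deliver them out of order, then reassemble
# them by sequence number on the receiving side.
def segment(data: bytes, size: int):
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(segments):
    return b"".join(chunk for _, chunk in sorted(segments))

msg = b"transport layer keeps pieces in order"
segs = segment(msg, 8)
segs.reverse()                      # simulate out-of-order delivery
assert reassemble(segs) == msg
```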

    Network layer. Layer 3. The Network layer controls the operation of the subnet. It is responsible for addressing messages and translating logical addresses and names into physical ones. The Network layer also resolves problems caused by differing addressing schemes and protocols when moving packets from one network to another, making it possible to combine dissimilar networks. Examples of Network-layer protocols are IP and IPX.

    Data Link layer. Layer 2. The main task of the Data Link layer is to turn the Physical layer's raw transmission capability into a communication link that, from the point of view of the Network layer above, is reliable and free of undetected errors. The Data Link layer does this by splitting the input data into frames ranging in size from several hundred to several thousand bytes. Each subsequent data frame is transmitted only after an acknowledgment frame sent back by the recipient has been received and processed. A frame is a logically organized structure into which data can be placed. A simple data frame is shown in Fig., where the sender identifier is the address of the sending computer and the recipient identifier is the address of the receiving computer. The control information is used for routing, indicating the packet type, and segmentation. The CRC (cyclic redundancy check) detects errors and ensures that the information was received correctly.
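    A minimal sketch of such a frame and its CRC check is shown below. The field layout is illustrative, not any particular link-layer format; the point is that the receiver recomputes the checksum over the addresses and data and compares it with the CRC carried in the frame.

```python
import zlib

# Sketch of a simple data-link frame: addresses, payload, and a CRC that
# lets the receiver detect corruption (the field layout is illustrative).
def make_frame(sender: str, receiver: str, data: bytes) -> dict:
    body = sender.encode() + receiver.encode() + data
    return {"sender": sender, "receiver": receiver,
            "data": data, "crc": zlib.crc32(body)}

def check_frame(frame: dict) -> bool:
    body = frame["sender"].encode() + frame["receiver"].encode() + frame["data"]
    return zlib.crc32(body) == frame["crc"]

frame = make_frame("A", "B", b"payload")
assert check_frame(frame)

frame["data"] = b"corrupt"          # simulate bit errors in transit
assert not check_frame(frame)       # the CRC no longer matches
```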

    Physical layer. Layer 1. The Physical layer transmits an unstructured, raw bit stream over a physical medium (for example, a network cable). At this layer, the electrical, optical, mechanical, and functional interfaces to the cable are implemented. The Physical layer also generates the signals that carry the data received from all higher layers. At this layer, the way the network cable attaches to the network adapter board and the way signals are transmitted over the cable are defined. The Physical layer is responsible for encoding data and synchronizing bits, ensuring that a transmitted one is perceived as a one and not as a zero. It sets the duration of each bit and determines how a bit is translated into the electrical or optical pulses transmitted over the network cable.

    The proposed BPM (Business Process Management) reference model is based on a chain of the following premises:

      Increasing the productivity of an enterprise as a complex system requires its rational construction, and process management is the most modern concept for such construction;

      BPM (as a discipline) offers a systematic approach to the implementation of process management;

      Each process-driven enterprise has its own BPM system: a portfolio of all its business processes, together with the methods and tools for managing the design, execution, and evolution of this portfolio;

      The flexibility of an enterprise BPM system is a major factor in its success;

      A specialized software platform (BPM suite) for implementing an enterprise BPM system is necessary, but not sufficient, since BPM occupies a special place in the enterprise architecture.

    Goal: increasing enterprise productivity

    To manage their productivity, most enterprises use the feedback principle (Fig. 1), which allows them to adapt to the external business ecosystem by performing a certain sequence of actions:

      Measuring the progress of production and business activities (usually such measurements are presented in the form of various metrics or indicators, for example, the percentage of returning customers);

      Isolating important events for the enterprise from the external business ecosystem (for example, laws or new market needs);

      Determining the enterprise’s business development strategy;

      Implementing the decisions made (by making changes to the enterprise's business system).
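    The feedback loop above can be reduced to a toy sketch: measure a metric, compare it with a target, and decide whether the business system needs changing. The metric (percentage of returning customers) comes from the example above; the threshold logic is a simplifying assumption, since real decisions involve strategy and many indicators at once.

```python
# Toy sketch of the feedback loop: measure, evaluate against a target,
# decide whether to change the business system.
def feedback_step(returning_customers: int, total_customers: int,
                  target_pct: float) -> str:
    pct = 100.0 * returning_customers / total_customers   # 1. measure
    if pct >= target_pct:                                 # 2-3. evaluate, decide
        return "keep current processes"
    return "change the business system"                   # 4. implement changes

assert feedback_step(42, 100, 40.0) == "keep current processes"
assert feedback_step(30, 100, 40.0) == "change the business system"
```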

    In accordance with the classic recommendation of W. Edwards Deming, author of numerous works in the field of quality management, including the famous book “Out of the Crisis,” all improvements should be carried out cyclically, continuously, and with verification at each cycle. The extent and frequency of these improvements depend on the specific situation, but it is recommended to keep such cycles fairly compact. Different improvements can affect different aspects of the enterprise. The question is how a company can achieve the best results in each specific case. There are two objective prerequisites for optimizing the activities of an enterprise as a whole:

      Providing management with adequate information and tools for decision making;

      Ensuring that the enterprise's business system is capable of implementing the necessary changes at the required pace.

    The most modern concept of organizing the work of an enterprise is process management, in which processes and services become explicit.

    Process management

    The business world has long understood (witness techniques such as TQM, BPR, Six Sigma, Lean, ISO 9000, etc.) that services and processes are the basis for the functioning of most enterprises. Many enterprises use process management, understood as a portfolio of business processes and the methods for managing them, to organize their production and economic activities.

    Process management, as a management concept, postulates that it is worthwhile to coordinate the activities of an enterprise's individual services in order to obtain a particular result, using clearly and formally defined business processes. Here, services are operationally independent functional units; an enterprise can have many elementary nanoservices that are organized into a megaservice (the enterprise itself).

    An explicit definition of coordination makes it possible to formalize the interdependencies between services. Such formalization in turn allows various methods (modeling, automated verification, version control, automated execution, etc.) to be used to improve business understanding (to make better decisions) and to increase the speed of business-system evolution (to implement changes faster).

    In addition to processes and services, enterprise business systems work with events, rules, data, performance indicators, roles, documents, etc.

    To implement process management, enterprises use three popular disciplines of continuous improvement of business processes: ISO 9000, Six Sigma and Lean production. They affect different areas of the enterprise's business system, but there is always a provision for collecting data about the work actually done and using some kind of business process model to make decisions (although sometimes this model is only in someone's head). At the same time, they offer different and complementary methods to determine what changes are needed to improve the functioning of an enterprise's business system.

    What you model is what you execute.

    Figure 2 shows a generalized model of a process-driven enterprise.

    What is the main difficulty in optimizing the activities of such an enterprise? Different parts of a business system use different descriptions of the same business process. Typically these descriptions exist separately: they were developed by different people, are updated at different rates, do not share information, and some of them are simply never made explicit. A unified description of an enterprise's business processes eliminates this shortcoming. Such a description must be defined explicitly and formally so that it can simultaneously serve as a model for simulation, an executable program, and documentation that is easily understood by all employees involved in the business process.

    This description is the basis of the BPM discipline, which makes it possible to model, automate, execute, control, measure, and optimize workflows that span software systems, employees, customers, and partners within and across enterprise boundaries. The BPM discipline treats all operations with business processes (modeling, execution, etc.) as a single whole (Fig. 3).

    At the moment, the BPM industry has not yet developed a proper system of standards for formats for the formal description of business processes. The three most popular formats, BPMN (Business Process Modeling Notation, a graphical representation of business process models), BPEL (Business Process Execution Language, a formalization of the execution of interactions between Web services), and XPDL (XML Process Description Language, www.wfmc.org, a specification for exchanging business process models between different applications), were developed by different groups and for different purposes and, unfortunately, do not adequately complement each other.

    The situation is aggravated by the fact that different vendors stand behind the different formats, and each is trying to “push” its own solution onto the market. As has been noted many times, the interests of the end user are not taken into account in such a struggle: today there is no sufficiently strong organization representing the interests of the BPM end user (similar to the HTML standards group, whose success is attributed to the adoption of the single Acid3 test by all Web browser developers for their products). The ideal situation in BPM would be a standard definition of execution semantics for a BPMN-like description of business processes. It is standard execution semantics that would guarantee the same interpretation of business processes by any software. In addition, such a description should allow the level of detail to be adapted to the needs of a specific consumer (for example, a user sees a rough diagram, an analyst a more detailed one, and so on).

    All this does not mean that BPEL or XPDL will become unnecessary - their use will be hidden, as is the case in the field of electronic document preparation. The same electronic document can simultaneously exist in XML, PDF, PostScript, etc., but only one main format (XML) is used to modify the document.

    BPM discipline in enterprise culture

    In addition to processes and services, enterprise business systems work with additional artifacts such as:

      events - phenomena occurring within or outside the boundaries of the enterprise to which some reaction of the business system is possible; for example, when an order is received from a client, an order-servicing business process must be started;

      objects (data and document objects) - formal informational descriptions of the real things and people that make up a business; this is the information at the input and output of a business process. For example, the order-servicing business process receives the order form itself and information about the client as input, and produces a report on the execution of the order as output;

      activities - small units of work that transform objects, for example, automatic activities such as checking a client's credit card, or human activities such as the signing of a document by management;

      rules - restrictions and conditions under which the enterprise operates, for example, the issuance of a loan of a certain amount must be approved by the general director of the bank;

      roles - concepts representing the skills or responsibilities required to perform certain actions, for example, only a senior manager may sign a particular document;

      audit trails - information about the execution of a specific business process, for example, who did what and with what result;

      key performance indicators (Key Performance Indicator, KPI) - a limited number of indicators that measure the degree to which goals have been achieved.

    Figure 4 illustrates the distribution of artifacts among the different parts of the enterprise business system. The expression “processes (as templates)” means abstract descriptions (models or plans) of processes; the expression “processes (as instances)” refers to the actual results of executing these templates. Typically a template is used to create many instances (like a blank form that is copied many times to be filled out by different people). The expression “services (as interfaces)” refers to the formal descriptions of services that are available to their consumers; the expression “services (as programs)” means the means of executing services, which are provided by service providers.
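    The template/instance distinction can be sketched in Python. The process name and steps below are invented for illustration; the point is that one template, like a blank form, spawns many independently progressing instances.

```python
from dataclasses import dataclass, field

# Sketch of "process as template" vs "process as instance": one template
# spawns many copies, each of which progresses independently.
@dataclass
class ProcessTemplate:
    name: str
    steps: list

@dataclass
class ProcessInstance:
    template: ProcessTemplate
    completed: list = field(default_factory=list)

    def advance(self):
        """Perform the next step defined by the template."""
        nxt = self.template.steps[len(self.completed)]
        self.completed.append(nxt)

order = ProcessTemplate("order handling", ["receive", "check credit", "ship"])
a, b = ProcessInstance(order), ProcessInstance(order)
a.advance(); a.advance()            # instances progress independently
b.advance()
assert a.completed == ["receive", "check credit"]
assert b.completed == ["receive"]
```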

    To work successfully with this whole complex set of interdependent artifacts, any process-driven enterprise has its own BPM system: the portfolio of all the enterprise's business processes, together with the methods and tools for managing the design, execution, and evolution of this portfolio. In other words, an enterprise BPM system is responsible for the synergistic functioning of the various parts of the enterprise's business system.

    A BPM system, as a rule, is not ideal (for example, some processes may exist only on paper, and some details “live” only in the minds of certain people), but it exists. For example, any implementation of ISO 9000 can be considered an example of a BPM system.

    Improving an enterprise's BPM system, in addition to purely technical aspects, must take into account socio-technical issues. An enterprise BPM system has many stakeholders, each of whom solves their own problems, perceives the BPM discipline in their own way and works with their own artifacts. To successfully develop an enterprise BPM system, it is necessary to pay special attention to the problems of all stakeholders and explain to them in advance how improving the enterprise BPM system will change their work for the better. It is critical to achieve a common understanding of all artifacts among all stakeholders.

    Specialized software for implementing BPM systems

    The growing popularity and great potential of BPM have led to the emergence of a new class of enterprise software - BPM suite, or BPMS, containing the following typical components (Fig. 5):

      Process modeling tool - a graphical program for manipulating artifacts such as events, rules, processes, activities, services, etc.;

      Process testing tool - a functional testing environment that allows you to “execute” a process under various scenarios;

      Process template repository - a database of business process templates with support for different versions of the same template;

      Process execution engine;

      Process instance repository - a database for running and already completed instances of business processes;

      Work list - an interface between the BPM suite and a user performing certain activities within one or more business processes;

      Dashboard - interface for operational control over the execution of business processes;

      Process analysis tool - an environment for studying trends in the execution of business processes;

      Process simulation tool - an environment for modeling the performance of business processes.
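    The heart of such a suite, the process execution engine, can be sketched in miniature. The template, activities, and audit trail below are invented for illustration: the engine simply walks the template's steps, dispatches each to an activity, and records who did what with what result, as an audit trail would.

```python
# Minimal sketch of a process execution engine: walk a template's steps,
# dispatch each to an activity, and record the results as an audit trail.
def run_process(template, activities, audit_trail):
    for step in template:
        result = activities[step]()          # execute the activity
        audit_trail.append((step, result))   # record step and outcome

template = ["receive order", "check stock", "ship"]
activities = {
    "receive order": lambda: "ok",
    "check stock":   lambda: "in stock",
    "ship":          lambda: "sent",
}
trail = []
run_process(template, activities, trail)
assert trail == [("receive order", "ok"),
                 ("check stock", "in stock"),
                 ("ship", "sent")]
```

    A real engine would add persistence (the process instance repository), human work lists, and error handling, but the control flow is the same.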

    The need for interaction between the BPM suite and enterprise software that supports other artifacts has given rise to a new class of enterprise software - the Business Process Platform (BPP). Typical BPP technologies (Fig. 6):

      Business Event Management (BEM) - analysis of business events in real time and launch of corresponding business processes (BEM is associated with Complex Event Processing (CEP) and Event Driven Architecture (EDA));

      Business Rules Management (BRM) - explicit and formal coding of business rules that can be modified by users;

      Master Data Management (MDM) - simplification of working with structured data by eliminating chaos when using the same data;

      Enterprise Content Management (ECM) - management of corporate information intended for humans (a generalization of the concept of a document);

      Configuration Management Data Base (CMDB) - a centralized description of the entire information and computing environment of the enterprise, used to link BPM to the information and computing resources of the enterprise;

      Role-Based Access Control (RBAC) - management of access to information for the purpose of effective separation of control and executive powers (separation of duty);

      Business Activity Monitoring (BAM) - operational control of the functioning of the enterprise;

      Business Intelligence (BI) - analysis of characteristics and trends of the enterprise;

      Service-Oriented Architecture (SOA) is an architectural style for constructing complex software systems as a set of universally accessible and interdependent services, which is used to implement, execute and manage services;

      Enterprise Service Bus (ESB) is a communication environment between services within an SOA.

    Thus, the BPM discipline is able to provide a single, formal and executable description of business processes that can be used in various BPM suite tools, with real data collected during the execution of business processes. However, high flexibility of an enterprise BPM system is not automatically guaranteed after purchasing a BPM suite or BPP - the ability of a specific BPM system to develop at the required pace must be designed, implemented and constantly monitored. Like human health, all this cannot be bought.

    BPM in enterprise architecture

    The need to involve almost all enterprise software in a single logic of improving the enterprise BPM system raises the question of the role and place of BPM in enterprise architecture (EA). EA is today an established practice of IT departments for streamlining the information and computing environment of an enterprise. EA is based on the following rules:

      The current situation of the enterprise computing environment is carefully documented as an as-is starting point;

      The desired situation is documented as a to-be end point;

      A long-term plan is built and implemented to move the enterprise's information and computing environment from one point to the other.

    All this would seem quite reasonable, but it is immediately noticeable that it differs from the small improvement approach that underlies process management. How to combine these two opposing approaches?

    The BPM discipline can solve the main problem of EA: giving an objective assessment of the production and economic capabilities (and not just the information and computing capabilities) of what will exist at the to-be point. Although EA describes the full range of an enterprise's artifacts (its genotype), it cannot reliably say how changes in this genotype affect the specific production and economic characteristics of the enterprise, that is, the enterprise's phenotype (the set of characteristics exhibited by an individual at a certain stage of development).

    For its part, the BPM discipline structures the interdependencies between artifacts in the form of explicit and executable models (a business process is an example of the interdependence between artifacts such as events, roles, rules, etc.). The presence of such executable models makes it possible to assess with some degree of reliability the production and economic characteristics of an enterprise when the genotype of the enterprise changes.

    Naturally, the more interdependencies between artifacts are modeled and the more reliable these models are, the more accurate such estimates are. Potentially, the symbiosis of the nomenclature of enterprise artifacts and the formally defined interdependencies between them provides an executable model of the enterprise at a specific point in time. If such executable models are built on common principles (for example, krislawrence.com), then it becomes possible to compare the effect of applying different enterprise development strategies and the emergence of more systematic and predictable technologies for converting some executable models into others.

    In a sense, the combination of EA+BPM can become a kind of navigator that provides guidance and practical help in business and IT development in the implementation of the general line of the enterprise.

    It's no secret that software vendors today define and develop BPM in different ways. However, a more promising path forward for BPM is end-customer BPM, and the BPM reference model is the first step in creating a common understanding of BPM among all stakeholders.

    The reference model proposed in the article is based on the author's practical experience in the design, development, and maintenance of various corporate solutions. In particular, this model was used to automate the annual production of more than 3,000 complex electronic products with an average product preparation time of several years. As a result, maintaining and developing this production system required several times fewer resources than the traditional approach would have.

    Alexander Samarin ([email protected]) - Corporate Architect for the IT Department of the Government of the Canton of Geneva (Switzerland).

    Process Frameworks for BPM

    An approach to implementing business process management technologies that simplifies the deployment of BPM systems involves clearly defining the business task and the corresponding business processes; implementing these processes within no more than three months in order to demonstrate the value of the approach; and then extending the implementation to core business tasks. The main difficulty along this path, however, is misunderstanding and lack of alignment between business and IT departments. Specialized reference models (Process Frameworks) can significantly simplify an implementation project and reduce its cost.

    A reference model is a package of analytical and software resources consisting of a description of and recommendations for organizing the high-level structure of a business process, a set of attributes and metrics for assessing the effectiveness of implementation, and software modules created for quickly building a prototype of a business process for subsequent adaptation to the specifics of a particular company.

    Reference models help in defining and formalizing requirements and in setting up business processes; they are based on industry standards and incorporate industry experience. For typical processes, reference models can help in selecting and modeling key work sequences, defining key performance indicators (KPIs) and parameters for evaluating performance in key areas, as well as in managing activities and solving problems, analyzing root causes, and handling exceptional cases.

    The structure of a typical reference model includes: recommendations and a description of the subject area; elements of composite user interfaces (screen forms and portlets logically connected in chains); service wrappers for quickly implementing access to business data; examples of typical business rules; key performance indicators and elements for their analysis; executable process models; data models and process attributes; adaptation to the legislative framework and business specifics of a particular country; and recommendations on the stages of process deployment and implementation. This set of resources makes it possible to adapt quickly to the implementation of the process approach within a specific business process management system, reducing the time spent on development-cycle iterations, test execution, and process analysis. At the same time, maximum correspondence between the technical implementation and the actual business problem is achieved.

    However, as AMR Research analysts note, “technologies and methods by themselves are not capable of providing any benefits; ‘more’ does not always mean ‘better.’ Some companies use many different solutions, yet their effectiveness only decreases as a result. Competence in the use of such technologies is what matters.” Reference models use industry standards as a basis, along with Software AG's experience in creating reference models to define customer requirements. In practice, such a model becomes the starting point from which clients can create the model they need.

    The Process Framework for, say, the order-processing business process includes: a basic process model with action diagrams for various users and roles; selected KPIs from the SCOR model (the Supply-Chain Operations Reference model) for the process as a whole and for its individual stages; rules to support different processing sequences, for example based on customer segment; targets for different customer segments, product types, and regions; and dashboards that help monitor special situations.

    The Process Framework allows you to focus on the need for, and the possibility of, adjusting KPIs for specific customer groups and configuring them to account for new products or entry into new regions or market segments. Such information allows managers responsible for supply chains, trading operations, logistics, and production to improve control over specific activities, while IT managers can quickly assess the actual performance of the IT systems that support order processing.

    Vladimir Alentsev ([email protected]) is a BPM and SOA consultant at the Software AG representative office in Russia and CIS (Moscow).

    The OSI reference model is the defining document for the development of open standards for interconnecting communication systems and networks of varying complexity that use different technologies. For this reason it is also commonly called the open systems architecture, or the Open Systems Interconnection (OSI) reference model.


    The developers of the reference model were guided by the following principles.

    · The number of protocol layers should not be too large so that network design and implementation are not overly complex, and at the same time it should not be too small so that the logical modules executed at each level are not overly complex.

    · Layers must be clearly distinguished by the functions performed and the logical modules executed at each of them.

    · Functions and protocols of one layer can be changed if it does not affect other layers.

    · The amount of information transferred through interfaces between layers should be minimal.

    · Further division of levels into sublevels is allowed if there is a need for local allocation of functions within one level. Division into sublevels is advisable when it is necessary to split a labor-intensive task into separate, less complex ones.

    The resulting reference model contains seven levels (Figure 4.24).

    The highest, seventh, layer of the OSI model is the application layer (Application), which manages terminals and application processes in end systems, the sources and consumers of information in an information network. This layer exposes services directly to user applications. To avoid incompatibilities between user programs, the application layer defines standard ways of presenting its services. This frees programmers from rewriting the same functions in every network application they create. Application layer services are not themselves applications; rather, the layer provides programmers with a set of open, standard Application Programming Interfaces (APIs) for performing network functions such as file transfer, remote login, and so on. The result is application modules that are smaller and require less memory.

    For users, the application layer is the most visible part of the OSI model, since it is responsible for launching programs, executing them, inputting and outputting data, and administering the network. The protocols for interaction between objects of the seventh layer are called application protocols.


    The presentation layer (Presentation) interprets and transforms data transmitted over the network into a form understandable to application processes. It presents data in agreed formats and syntax, translates and interprets programs written in different languages, and performs data encryption and compression. Thanks to this, the network imposes no restrictions on the use of different types of computers as end systems. In practice, many functions of this layer are folded into the application layer, so presentation layer protocols have not been well developed and are absent from many networks.

    The session layer (Session) performs the functions of managing a communication session (dialogue) oriented toward end-to-end message transmission, such as: establishing and terminating a session; controlling queues and the data transmission mode (simplex, half-duplex, full-duplex); synchronization; session activity management; and reporting exceptional situations.

    Figure 4. OSI reference model

    In logical connection sessions, requests to establish and terminate connections, as well as data transfer requests, are passed down to the transport layer. At the end of a session, the session layer closes the session gradually rather than abruptly, performing a handshake procedure (exchanging a service message announcing the end of the communication session); this prevents data loss when one party wants to end the dialogue but the other does not. Sessions are especially useful when there is a logical connection between a client and a server on a network. As a rule, a session is impossible without establishing a logical connection. There is, however, an exception: some networks support connectionless file transfer, and even then the session layer provides some useful dialogue-management functions. Session layer services are optional and valuable only for certain applications; for many applications they offer limited benefit. Often the functions of this layer are implemented at the transport layer, so session layer protocols see limited use.

    The transport layer (Transport) segments messages and manages end-to-end, error-free transport of data from source to consumer. The complexity of transport layer protocols is inversely proportional to the reliability of the services provided by the lower layers (network, data link and physical).

    The segmentation function consists of splitting long messages into transport layer data blocks: segments. A short message may fit into a single segment. When managing end-to-end data transport, the transport layer supports functions such as addressing, connection establishment and termination, data flow control, data prioritization, error detection and correction, failure recovery, and multiplexing. Transport layer protocols fall into two types: connection-oriented protocols and protocols that provide connectionless service to higher layers. With the growing number of applications that do not require guaranteed message delivery or cannot tolerate retransmission as an error-control method (real-time applications such as video streaming or IP telephony), connectionless transport protocols are gaining popularity.

    The addressing function at the transport layer, in contrast to addressing at the network and data link layers, consists of attaching an additional unique address that identifies the application process running in the end system. Most computers can run multiple processes at once, supporting several applications simultaneously. At the network layer, however, the machine as a whole is typically associated with a single address: the address of the destination computer's network port. When a packet (a network layer data block) reaches that port on the destination computer, the latter must know which running process the packet is intended for. It is exactly this information that the unique transport layer address provides.

    So the transport layer address is logical (it corresponds to the software port associated with a specific application). It is the only address that identifies a process rather than a machine (unlike addresses at the data link and network layers).
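To illustrate how a logical port address singles out a process, here is a minimal, purely hypothetical sketch in Python: the port numbers, handlers and the `demultiplex` helper are invented for the example and do not belong to any real protocol stack.

```python
# Hypothetical sketch: a transport layer demultiplexing incoming segments
# to application processes by destination port. All names are illustrative.

handlers = {
    80: lambda payload: f"web server got {payload!r}",
    25: lambda payload: f"mail server got {payload!r}",
}

def demultiplex(dest_port, payload):
    """Deliver a segment's payload to the process bound to dest_port."""
    handler = handlers.get(dest_port)
    if handler is None:
        return "no process bound to port; segment discarded"
    return handler(payload)

print(demultiplex(80, b"GET /"))   # reaches the process behind port 80
print(demultiplex(9999, b"data"))  # no listener on this port
```

The network layer address alone would deliver both payloads to the same machine; only the port distinguishes which of its processes receives the data.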

    The connection establishment and release function, performed at the request of the session layer between peer transport layer entities, is implemented through the three-way handshake procedure.

    This procedure minimizes the chance of accidentally establishing an erroneous connection by requiring two acknowledgments in response to a single connection request. A connection is established only if all three events (the request, the acknowledgment of the request, and the acknowledgment of that acknowledgment) occur within a specified time period; this indicates that both transport layer entities are ready for the communication session. If the procedure does not complete within the allotted time, for example because of delays or damaged service packets, it is started again.
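The three-event rule above can be sketched as a toy simulation; this is not a real TCP implementation, and the timeout value and `send` callback are assumptions made for the example.

```python
import time

# Illustrative sketch: a connection is established only if the request,
# the acknowledgment, and the acknowledgment of the acknowledgment all
# succeed within a time limit; otherwise the whole procedure restarts.

TIMEOUT = 1.0  # seconds allotted for the whole handshake (assumed value)

def three_way_handshake(send):
    """`send` simulates one leg of the exchange; returns True on success."""
    start = time.monotonic()
    for step in ("connection request", "ack of request", "ack of ack"):
        if not send(step) or time.monotonic() - start > TIMEOUT:
            return False  # restart the procedure from scratch
    return True  # both transport entities are ready for the session

reliable_link = lambda msg: True            # every service packet gets through
assert three_way_handshake(reliable_link)   # connection established

lossy_link = lambda msg: msg != "ack of ack"  # last acknowledgment is lost
assert not three_way_handshake(lossy_link)    # handshake must be retried
```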

    The release of the transport layer connection is also controlled by a three-way handshake, which ensures its correctness. The connection is terminated separately in the forward and reverse directions, which eliminates the possibility of loss of user data in the case when one of the parties has completed data transfer, and the other is still active.

    The flow control function consists of agreeing on transmission parameters during the three-way handshake. These parameters include: the maximum segment size for the established connection; the amount of free space in the receiver's buffer, where incoming segments will be placed; and the number of segments in a group after whose receipt the receiver must send the transmitter an acknowledgment. Acknowledgments serve not only as evidence of correctly received data but also indicate how many further segments can be accepted given the current load of the receive buffer.

    The data prioritization function is the exclusive prerogative of the transport layer. The network layer below has no notion of priority traffic and treats all packets (network layer data blocks) alike.

    Many transport layer protocols support two priorities: ordinary data and urgent. The priority assignment request comes from the session layer. The assigned priority identifier is placed in the transport layer overhead field appended to the segment.

    Separate buffer pools can be organized for each priority. In this case, the transportation algorithm provides for priority servicing of the urgent data buffer and only after its emptying - the regular data buffer.

    Another approach is to group urgent and regular data segments into one transmitted block and place a boundary indicator of their location in the service information field.

    The error detection and correction function is also performed by many data link layer protocols, but the transport layer does not simply duplicate it. The difference is that the data link layer detects and corrects bit errors that occur at the physical layer during bit transmission, whereas the transport layer eliminates errors arising from incorrect operation of the network layer (packet loss, late packet delivery, etc.). Moreover, in networks where the data link layer is not responsible for detecting and correcting bit errors, or where this layer is absent altogether, the transport layer takes on these functions.

    The transport layer's identification of erroneous segments is based on segment ordering. Each segment is assigned a sequence number, and its own timer is started at the moment of sending. The timer runs until an acknowledgment (positive or negative) of the segment's reception arrives from the receiving end. On a negative acknowledgment, the transmitter repeats the transmission of the segment.

    In some simpler implementations of transport layer protocols, a positive acknowledgment of the last segment of a message is taken as error-free receipt of all its segments. A negative acknowledgment means the transmitter must retransmit the segments starting from the one where the error occurred (this mechanism is called go-back-N). If a segment's timer expires, the error detection procedure is initiated.
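The go-back-N behavior can be shown with a toy sketch; this is not a real protocol implementation, and the segment names and the single-NAK scenario are assumptions made for the example.

```python
# Toy sketch of go-back-N: on a negative acknowledgment for segment k,
# the transmitter resends every segment starting from k.

def go_back_n(segments, first_bad):
    """Return the order in which segments hit the wire when the receiver
    NAKs segment number `first_bad` on the first pass."""
    sent = []
    # First pass: transmit until the erroneous segment is reported.
    for seq, seg in enumerate(segments):
        sent.append((seq, seg))
        if seq == first_bad:
            break
    # Retransmission pass: go back to `first_bad` and resend the rest.
    for seq in range(first_bad, len(segments)):
        sent.append((seq, segments[seq]))
    return sent

wire = go_back_n(["s0", "s1", "s2", "s3"], first_bad=1)
# s0 and s1 are sent, s1 is NAKed, so s1..s3 are all sent (again):
assert [seq for seq, _ in wire] == [0, 1, 1, 2, 3]
```

Note the cost of the mechanism: segments s2 and s3 would have been sent anyway, but after the NAK the already-transmitted s1 travels twice.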

    The failure recovery function provides the ability to recover lost data in the event of network malfunctions. Failures include: failure of a communication line (and, as a result, loss of the virtual connection); failure of network node equipment (and, as a result, packet loss in a connectionless environment); and, finally, failure of the computer to which the data is addressed. If the failure of individual network components is short-lived and a new virtual circuit can quickly be established or a route found around the faulty node, the transport layer, by analyzing segment sequence numbers, determines exactly which segments have already been received and which must be retransmitted. In case of long-term network damage, the transport layer can establish a transport connection over a backup network (if one is provided).

    If the transmitting or receiving computer fails, the operation of its transport layer is suspended, since it runs under the control of the operating system installed on the machine. Once the machine is restored to working order, its transport layer begins sending broadcast messages to all computers on the network to establish which of them had an active transport connection with the failed machine. In this way the restored computer manages to re-establish the interrupted connection, relying on information stored on the healthy machines.

    The multiplexing function allows several transport layer connections to be carried within one network connection. The transport layer address discussed earlier allows the transport layer to distinguish segments addressed to different application processes. The advantage of such multiplexing is that it reduces the cost of transporting data across the network; however, it makes sense only when the network operates in connection-oriented (virtual circuit) mode.

    In conclusion, let us dwell once more on the features of the transport layer in connectionless mode. As noted above, it is used when guaranteed end-to-end delivery is not required, primarily by processes exchanging data in real time (audio or video), for which delivery without delay matters far more than the reliability achieved through repeated transmission of segments. In addition, connectionless mode uses the network more efficiently, since it does not consume a fair share of throughput with service information. The doubt may arise: "Is the transport layer even necessary for real-time applications?" Here we should once again stress the importance of the transport layer addressing function, which supports several simultaneously running application processes on one machine; this is impossible without transport layer services.

    The network layer (Network) performs the main telecommunications function: providing communication between the end systems of the network. This communication can be implemented by providing an end-to-end channel switched together from individual sections along an optimally chosen route, by a logical virtual circuit, or by direct routing of a data block as it is delivered. The network layer thus frees the higher layers from knowing through which sections of the network, or through which networks, the transmission route passes. While the higher layers (application, presentation, session and transport) are usually present only in the end systems communicating across the network, the three lower layers (network, data link and physical) are also mandatory for all intermediate network devices located at transit points along the data transfer route.

    The main function of the network layer is routing. It consists of deciding through which intermediate points the route of data sent from one end system to another should pass, and how switching should be performed between the inputs and outputs of the network devices at the intermediate points corresponding to a particular route.

    The blocks of data the network layer operates on are called packets. A packet is formed by adding to the segment handed down from the transport layer a header that includes the network layer address. This address consists of two parts, identifying both the network of the end user and the user within that network.
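The two-part structure of a network layer address can be made concrete with IPv4 as an example, using Python's standard `ipaddress` module (the particular address and prefix length here are invented for illustration):

```python
import ipaddress

# An IPv4 address with a /24 prefix splits into a network part
# (which network to route toward) and a host part (which user in it).

iface = ipaddress.ip_interface("192.168.10.7/24")
network_part = iface.network  # identifies the destination network
host_part = int(iface.ip) - int(iface.network.network_address)

print(network_part)  # 192.168.10.0/24: used by routers for forwarding
print(host_part)     # 7: identifies the host inside that network
```

Routers along the path only consult the network part; the host part matters once the packet reaches the destination network.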

    Networks with different network addresses are connected to each other by routers (see the section "Physical structure of the network"). To transmit a packet from a sender on one network to a recipient on another, several transit hops between networks must be made, each time choosing the best route in terms of travel time or reliability. The network layer also solves the problems of interconnecting networks built on different technologies and of erecting protective barriers against unwanted traffic between networks.

    Two types of protocols operate at the network layer. The first are the network protocols proper, which move packets through the network; these are usually what is meant by routed network layer protocols. The second type are routing protocols, which handle the exchange of routing information; using them, routers collect information about the topology of internetwork connections. Network layer protocols are executed by operating system modules as well as by the software and hardware of routers.

    The network layer may also run protocols that map a destination network layer address to the data link layer address of the network where the end user is located.

    The data link layer (Data-link) is responsible for high-quality data transmission between two points connected by a physical channel, taking into account the characteristics of the transmission medium. The term "data transfer", as opposed to "information transfer", emphasizes precisely this aspect of the link layer's activity. If a connection is established between two end systems that are not directly connected, it will involve several independently operating physical data links. Their physical transmission media may differ (copper, optical fiber), as may the requirements for how data is represented in each channel (known as line coding). In this situation the data link layer takes on the function of adapting the data to the type of physical channel, providing the upper layers with a "transparent connection".

    The block of data at the link layer is called a frame. Network layer packets combined into a frame are delimited by flags (special bit sequences placed at the beginning and end of the block). In addition, a checksum is added to the frame and used to verify the integrity of the frame transmitted over the channel. If an uncorrectable error is detected, the receiver asks the transmitter to retransmit the frame. Data transmission theory and coding theory are well developed, which makes it possible to achieve high efficiency in link layer protocols. Note that bit error correction is not always mandatory at the link layer, so it is absent from some link layer protocols (Ethernet, Frame Relay). Sometimes, in wide area networks, it is difficult to isolate link layer functions in pure form, since the same protocol combines them with network layer functions (ATM, Frame Relay).
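The flag-plus-checksum idea can be sketched in a few lines of Python. This is a deliberately simplified layout, not any real protocol's frame format; the 0x7E flag byte merely echoes HDLC-style delimiters, and CRC-32 stands in for whatever checksum a given protocol actually uses.

```python
import zlib

# Simplified framing sketch: payload wrapped in delimiting flags plus a
# CRC-32 checksum the receiver uses to verify integrity.

FLAG = b"\x7e"  # delimiter byte, as in HDLC-style protocols (illustrative)

def make_frame(payload: bytes) -> bytes:
    crc = zlib.crc32(payload).to_bytes(4, "big")
    return FLAG + payload + crc + FLAG

def check_frame(frame: bytes) -> bool:
    body = frame[1:-1]                     # strip the flags
    payload, crc = body[:-4], body[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == crc

frame = make_frame(b"packet data")
assert check_frame(frame)                    # frame arrived intact

corrupted = frame[:3] + b"\x00" + frame[4:]  # a byte damaged in transit
assert not check_frame(corrupted)            # receiver would request a resend
```

(A real implementation would also escape flag bytes occurring inside the payload, which is omitted here for brevity.)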

    Important functions of the link layer also include: controlling access to the communication channel, frame synchronization, data flow control, addressing, and connection establishment and release.

    Channel access control is determined by the type of physical channel connecting the stations and by the number of stations connected to it. The channel type is determined by its operating mode (duplex, half-duplex) and its configuration (point-to-point: only two stations; multipoint: more than two stations). Access control matters in half-duplex operation on a multipoint channel, when stations must wait for the right moment to begin transmitting.

    Frame synchronization gives the receiver the ability to determine precisely the beginning and end of a received frame. Two transmission methods are defined: character-oriented (typically 8-bit characters) asynchronous transmission, in which each character is preceded by a start bit and terminated by a stop bit, and frame-oriented synchronous transmission, in which start-of-frame and end-of-frame flags serve as synchronization sequences.

    Data flow control gives the receiver the opportunity to inform the transmitter of its readiness or unreadiness to receive frames. This prevents the situation where the transmitter floods the receiver with frames it is unable to process.

    Addressing is required with a multipoint channel configuration (more than two stations) to identify the recipient. Link layer addresses are called hardware addresses. The address field contains the destination address and the source address.

    Connection establishment and release is the procedure for activating and deactivating a link layer connection, performed in software. The transmitting station initiates the connection by sending a special "start" command to the recipient; the receiving station replies with a connection confirmation, after which data transmission begins. The same procedure is performed after crashes and restarts of the link layer software. There is also a "stop" command that terminates its operation.

    The physical layer (Physical) is responsible for placing bits of information into the physical medium. The following media types can be used at this layer: twisted-pair cable, coaxial cable, fiber optic cable, local digital circuits, and the airwaves. The main characteristics of physical transmission media are parameters such as bandwidth, noise immunity, and characteristic impedance. The physical layer implements the physical interfaces between devices and the transmission medium, and between the devices among which the bits are transmitted.

    The main functions of the physical layer can be combined into the following groups.

    Mechanical. These are characteristics relating to the physical properties of the interface with the transmission medium, i.e. the connectors that attach a device to one or more conductors. The connector types and the purpose of each pin are usually standardized.

    Electrical. Determine the requirements for the representation of bits transmitted to the physical medium, for example, current or voltage levels of transmitted signals, pulse slopes, types of linear codes, signal transmission rates.

    Functional. These define the functions of the individual circuits of the physical interfaces of devices interacting through the transmission medium. The main schemes of interaction at the physical layer are simplex (one-way), half-duplex (alternating) and duplex (two-way, simultaneous) communication, the last sometimes called full-duplex. Two options for organizing communication can be implemented: point-to-point and point-to-multipoint. In the first, two devices share one link, which may in turn be simplex, half-duplex or full-duplex. The second assumes that data transmitted by one device is received by many devices; as a rule, such connections are simplex (cable television) or half-duplex (a local network based on the Ethernet standard), and in some cases full-duplex (networks based on SONET technology). Other physical layer topologies, such as bus, star and ring, can also be used, but they are all variations of the point-to-point and point-to-multipoint options: the bus topology is a typical point-to-multipoint arrangement, the star is a set of point-to-point connections, and the ring is a set of point-to-point connections closed into a circle.

    Procedural. These define the rules by which bit streams are exchanged across the physical medium, i.e. the operating schemes of serial and parallel interfaces. In the first case there is only one communication channel between the interacting devices, over which bits are transmitted one after another; this limits the transfer rate and makes the interface slow. In the second case several bits are transmitted simultaneously over several channels, which increases the transmission speed.

    One of the important functions of the physical layer is multiplexing: combining multiple narrowband (low-speed) channels into one broadband (high-speed) channel. By technological principle, a distinction is made between frequency-division multiplexing (FDM) and time-division multiplexing (TDM). FDM and TDM can be combined in such a way that a subchannel of a frequency-division system is itself divided into several channels by time-division multiplexing. This technique is used in digital cellular networks.
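The TDM half of this scheme is easy to sketch: each narrowband channel is given a recurring time slot in the shared channel. The channel names and contents below are invented for the example.

```python
# Sketch of time-division multiplexing: one unit from each input channel
# is placed into the shared channel per frame, round-robin.

def tdm_multiplex(channels):
    """Interleave the inputs slot by slot into one output stream."""
    stream = []
    for slot_values in zip(*channels):  # one TDM frame per iteration
        stream.extend(slot_values)
    return stream

voice = ["v1", "v2"]
data = ["d1", "d2"]
video = ["x1", "x2"]

# Each frame carries one slot from every channel, in a fixed order:
assert tdm_multiplex([voice, data, video]) == ["v1", "d1", "x1", "v2", "d2", "x2"]
```

The receiving end demultiplexes by position: the k-th slot of every frame always belongs to the k-th channel, so no per-slot addressing is needed.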


    The OSI network model (English: Open Systems Interconnection Basic Reference Model) is the network model of the OSI/ISO network protocol stack.

    Because the development of the OSI protocols dragged on, the main protocol stack in use today is TCP/IP, which was developed before the adoption of the OSI model and without any connection to it.

    OSI model

    Data type            Layer                    Functions
    Data                 7. Application           Access to network services
    Data                 6. Presentation          Data representation and encryption
    Data                 5. Session               Session management
    Segments/Datagrams   4. Transport             Direct communication between endpoints and reliability
    Packets              3. Network               Route determination and logical addressing
    Frames               2. Channel (data link)   Physical addressing
    Bits                 1. Physical              Working with transmission media, signals and binary data

    Layers of the OSI model

    In the literature, the description of the OSI model's layers usually begins with layer 7, the application layer, at which user applications access the network. The OSI model ends with layer 1, the physical layer, which defines the standards independent manufacturers must meet for data transmission media:

      type of transmission medium (copper cable, optical fiber, radio air, etc.),

      signal modulation type,

      signal levels of logical discrete states (zero and one).

    Any protocol of the OSI model must interact either with protocols at its own layer or with protocols one layer above or below it. Interactions with protocols of the same layer are called horizontal, and those with the layer above or below are called vertical. Any protocol of the OSI model can perform only the functions of its own layer and cannot perform functions of another layer (a constraint not observed in the protocols of alternative models).

    Each layer, with some degree of convention, corresponds to its own operand: a logically indivisible unit of data that can be operated on at that layer within the framework of the model and the protocols used. At the physical layer the smallest unit is the bit; at the link layer information is combined into frames; at the network layer, into packets (datagrams); and at the transport layer, into segments. Any piece of data logically combined for transmission (a frame, packet or datagram) is considered a message. Messages as such are the operands of the session, presentation and application layers.
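The layering of operands implies encapsulation: each layer wraps the unit it receives from above with its own header. Here is a minimal sketch of that idea in Python; the header fields and their values are purely illustrative, not any real protocol's format.

```python
# Sketch of encapsulation: user data becomes a segment (transport layer),
# then a packet (network layer), then a frame (data link layer).
# All field names and values are invented for illustration.

message = "user data"
segment = {"src_port": 5000, "dst_port": 80, "data": message}            # transport
packet = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "data": segment}   # network
frame = {"src_mac": "aa:aa", "dst_mac": "bb:bb", "data": packet}         # data link

# De-encapsulation on the receiving side peels the headers off in reverse:
assert frame["data"]["data"]["data"] == "user data"
```

Each layer reads only its own header and treats everything inside as opaque payload, which is what lets the layers evolve independently.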

    Basic network technologies include the physical and data link layers.

    Application layer

    The application layer (English: application layer) is the top layer of the model, ensuring the interaction of user applications with the network. It:

      • allows applications to use network services (remote access to files and databases, sending email);

      • is responsible for transmitting service information;

      • provides applications with error information;

      • generates requests to the presentation layer.

    Application layer protocols: RDP, HTTP (HyperText Transfer Protocol), SMTP (Simple Mail Transfer Protocol), SNMP (Simple Network Management Protocol), POP3 (Post Office Protocol Version 3), FTP (File Transfer Protocol), XMPP, OSCAR, Modbus, SIP, TELNET and others.

    Presentation layer

    The presentation layer (English: presentation layer) provides protocol conversion and data encryption/decryption. Application requests received from the application layer are converted at the presentation layer into a format for transmission over the network, and data received from the network is converted into a format the applications understand. This layer can compress/decompress or encode/decode data, and can also redirect requests to another network resource if they cannot be processed locally.

    The presentation layer is usually an intermediate protocol for transforming information from neighboring layers. This allows communication between applications on disparate computer systems in a manner transparent to the applications. The presentation layer provides code formatting and transformation. Code formatting is used to ensure that the application receives information to process that makes sense to it. If necessary, this layer can perform translation from one data format to another.

    The presentation layer not only deals with the formats and presentation of data, it also deals with the data structures that are used by programs. Thus, layer 6 provides organization of data as it is sent.

    To understand how this works, imagine two systems. One represents data with the Extended Binary Coded Decimal Interchange Code, EBCDIC (for example, an IBM mainframe); the other uses the American Standard Code for Information Interchange, ASCII (used by most other computer manufacturers). If these two systems need to exchange information, a presentation layer is needed to perform the conversion and translate between the two different formats.
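This translation is easy to demonstrate with Python's built-in codecs: `cp037` is one common EBCDIC variant (the choice of that particular code page here is just for illustration).

```python
# The same text has different byte representations under EBCDIC and ASCII;
# a presentation layer converts between them transparently.

text = "HELLO"
ebcdic_bytes = text.encode("cp037")  # how an EBCDIC system stores it
ascii_bytes = text.encode("ascii")   # how an ASCII system stores it

assert ebcdic_bytes != ascii_bytes   # the on-the-wire bytes differ

# The conversion recovers the identical text on both sides, so the
# applications themselves never notice the difference in encoding:
assert ebcdic_bytes.decode("cp037") == ascii_bytes.decode("ascii") == "HELLO"
```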

    Another function performed at the presentation layer is data encryption, which is used in cases where it is necessary to protect transmitted information from being received by unauthorized recipients. To accomplish this task, processes and code in the presentation layer must perform data transformation.

    Presentation layer standards also define how graphical images are represented. One example is the PICT format, used to transfer QuickDraw graphics between programs. Another is TIFF (Tagged Image File Format), typically used for high-resolution raster images. A further presentation layer standard that can be used for graphics is JPEG.

    There is another group of presentation layer standards that define the representation of audio and video: the Musical Instrument Digital Interface (MIDI) for the digital representation of music, and the MPEG standard developed by the Moving Picture Experts Group.

    Presentation layer protocols: AFP (Apple Filing Protocol), ICA (Independent Computing Architecture), LPP (Lightweight Presentation Protocol), NCP (NetWare Core Protocol), NDR (Network Data Representation), XDR (eXternal Data Representation), X.25 PAD (Packet Assembler/Disassembler Protocol).

    Session layer

    The session layer (English: session layer) of the model maintains a communication session, allowing applications to interact with each other for an extended time. The layer manages session creation and termination, information exchange, task synchronization, determination of the right to transfer data, and session maintenance during periods of application inactivity.

    Session layer protocols: ADSP, ASP, H.245, ISO-SP (OSI Session Layer Protocol (X.225, ISO 8327)), iSNS, L2F, L2TP, NetBIOS, PAP (Password Authentication Protocol), PPTP, RPC, RTCP, SMPP, SCP (Session Control Protocol), ZIP (Zone Information Protocol), SDP (Sockets Direct Protocol).

    Transport layer

    The transport layer (English: transport layer) of the model is designed to ensure reliable data transfer from sender to recipient. The level of reliability, however, can vary widely. There are many classes of transport layer protocols, from those providing only basic transport functions (for example, data transfer without acknowledgment) to those guaranteeing that multiple data packets are delivered to the destination in the proper sequence, multiplexing several data streams, providing flow control mechanisms and guaranteeing the validity of the received data. For example, UDP limits itself to checking data integrity within a single datagram and does not exclude the loss of an entire packet, the duplication of packets, or disruption of the order in which packets arrive; TCP provides reliable continuous data transmission, excluding loss, reordering or duplication, and can regroup data, breaking large portions into fragments or, conversely, merging fragments into one packet.
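The connectionless service UDP provides can be shown with Python's standard `socket` module; both endpoints run in one process over the loopback interface, and the message text is invented for the example.

```python
import socket

# Minimal UDP illustration: a datagram is sent with no connection setup
# and no delivery guarantee (TCP, by contrast, would require a
# connect/accept handshake before any data could flow).

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))  # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram with no handshake", addr)  # fire and forget

payload, _ = receiver.recvfrom(1024)  # a datagram arrives whole or not at all
print(payload)

sender.close()
receiver.close()
```

On the loopback interface this delivery is effectively certain; on a real network the same code could silently lose the datagram, which is exactly the trade-off described above.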

    Transport layer protocols: ATP, CUDP, DCCP, FCP, IL, NBF, NCP, RTP, SCTP, SPX, SST, TCP (Transmission Control Protocol), UDP (User Datagram Protocol).

    Network layer

    The network layer of the model determines the path of data transmission. It is responsible for translating logical addresses and names into physical ones, determining the shortest routes, switching and routing, and monitoring network problems and congestion.

    Network layer protocols route data from source to destination. Devices (routers) operating at this level are conventionally called third-level devices (based on the level number in the OSI model).

    Network layer protocols: IP/IPv4/IPv6 (Internet Protocol), IPX, X.25, CLNP ( network protocol without organizing connections), IPsec (Internet Protocol Security). Routing protocols - RIP, OSPF.

    Data Link Layer

    The data link layer is designed to ensure the interaction of networks at the physical level and to control errors that may occur. It packs the bits received from the physical layer into frames, checks them for integrity, corrects errors where necessary (forming a repeat request for a damaged frame), and passes them to the network layer. The data link layer can communicate with one or more physical layers, monitoring and managing this interaction.
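    The idea of framing with an integrity check can be sketched as wrapping a payload in a frame that carries a CRC-32 trailer, which the receiver verifies before passing the payload up. The frame format below (2-byte length, payload, 4-byte CRC) is invented for illustration and is not any real protocol's format:

```python
import struct
import zlib

def make_frame(payload: bytes) -> bytes:
    """Wrap a network-layer packet in a toy frame: length + payload + CRC-32."""
    crc = zlib.crc32(payload)
    return struct.pack("!H", len(payload)) + payload + struct.pack("!I", crc)

def check_frame(frame: bytes):
    """Verify frame integrity; return the payload, or None if damaged."""
    (length,) = struct.unpack("!H", frame[:2])
    payload = frame[2:2 + length]
    (crc,) = struct.unpack("!I", frame[2 + length:6 + length])
    if zlib.crc32(payload) != crc:
        return None   # damaged frame: the link layer would request retransmission
    return payload

frame = make_frame(b"network-layer packet")
assert check_frame(frame) == b"network-layer packet"

# Flip one payload bit: the CRC check now fails.
corrupted = frame[:3] + bytes([frame[3] ^ 0xFF]) + frame[4:]
assert check_frame(corrupted) is None
```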

    The IEEE 802 specification divides this layer into two sublayers: MAC (media access control), which regulates access to the shared physical medium, and LLC (logical link control), which provides service to the network layer.

    Switches, bridges and other devices operate at this level. These devices use layer 2 addressing (by layer number in the OSI model).

    Link layer protocols: ARCnet, ATM, Ethernet, Ethernet Automatic Protection Switching (EAPS), IEEE 802.2, IEEE 802.11 wireless LAN, LocalTalk, MPLS, Point-to-Point Protocol (PPP), Point-to-Point Protocol over Ethernet (PPPoE), StarLAN, Token Ring, Unidirectional Link Detection (UDLD), X.25.

    Physical layer

    The physical layer is the lowest level of the model; it determines the method of transferring data, presented in binary form, from one device (computer) to another. Physical-layer devices transmit electrical or optical signals into a cable or over a radio channel and, accordingly, receive them and convert them into data bits in accordance with digital signal encoding methods.

    Hubs, signal repeaters and media converters also operate at this level.

    Physical layer functions are implemented on all devices connected to the network. On the computer side, physical-layer functions are performed by the network adapter or serial port. The physical layer covers the physical, electrical, and mechanical interfaces between two systems. It defines such data transmission media as optical fiber, twisted pair, coaxial cable, satellite data channels, etc. Standard types of network interfaces related to the physical layer are V.35, RS-232, RS-485, RJ-11, RJ-45, and AUI and BNC connectors.

    Physical layer protocols: IEEE 802.15 (Bluetooth), IrDA, EIA RS-232, EIA-422, EIA-423, RS-449, RS-485, DSL, ISDN, SONET/SDH, 802.11 Wi-Fi, Etherloop, GSM Um radio interface, ITU and ITU-T recommendations, TransferJet, ARINC 818, G.hn/G.9960.
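    The physical layer's job of turning data bits into line-signal transitions can be sketched with Manchester encoding, used in classic Ethernet. Under one common convention (assumed here), a 0 bit is sent as a high-to-low transition and a 1 bit as low-to-high, i.e. two half-bit signal levels per data bit:

```python
def manchester_encode(bits):
    """Map each data bit to a pair of signal levels (one convention of two)."""
    signal = []
    for b in bits:
        signal += [1, 0] if b == 0 else [0, 1]   # 0: high->low, 1: low->high
    return signal

def manchester_decode(signal):
    """Recover data bits from pairs of signal levels."""
    bits = []
    for i in range(0, len(signal), 2):
        pair = (signal[i], signal[i + 1])
        bits.append(0 if pair == (1, 0) else 1)
    return bits

data = [1, 0, 1, 1, 0]
line = manchester_encode(data)        # twice as many signal elements as bits
assert manchester_decode(line) == data
```

    The guaranteed mid-bit transition is what lets the receiver recover the transmitter's clock from the signal itself, which is why such codes appear at the physical layer.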

    TCP/IP family

    The TCP/IP family has three transport protocols: TCP, which fully complies with the OSI model, providing acknowledgment of data receipt; UDP, which corresponds to the transport layer only in having a port, enabling datagram exchange between applications but not guaranteeing delivery; and SCTP, designed to overcome some of the shortcomings of TCP while adding some innovations. (There are about two hundred more protocols in the TCP/IP family, the best known of which is the service protocol ICMP, used for internal operational needs; the rest are also not transport protocols.)

    IPX/SPX Family

    In the IPX/SPX family, ports (called sockets) appear in the network-layer protocol IPX, allowing datagrams to be exchanged between applications (the operating system reserves some sockets for itself). The SPX protocol, in turn, complements IPX with all other transport-layer capabilities in full compliance with OSI.

    As a host address, IPX uses an identifier formed from a four-byte network number (assigned by routers) and the MAC address of the network adapter.
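    The composition of an IPX host identifier from a four-byte network number and a six-byte MAC address can be sketched in a few lines; the concrete values below are made up for illustration:

```python
def ipx_address(network: int, mac: bytes) -> str:
    """Format an IPX host identifier: 4-byte network number + 6-byte MAC."""
    assert 0 <= network < 2**32 and len(mac) == 6
    return f"{network:08X}:{mac.hex().upper()}"

# Hypothetical network number assigned by routers, plus an adapter's MAC.
addr = ipx_address(0x0000ABCD, bytes.fromhex("00A0C914C829"))
print(addr)   # → 0000ABCD:00A0C914C829
```

    Because the host part is simply the adapter's globally unique MAC address, IPX needs no per-host address assignment within a network, a design choice that simplified administration at the cost of tying the address to the hardware.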

    TCP/IP model (5 layers)

      The Application Layer (5) provides services that directly support user applications, for example file transfer software, database access, electronic mail, and logging into servers. This level controls all other levels. For example, if the user works with Excel spreadsheets and decides to save the working file in a directory on a network file server, the application layer ensures that the file is moved from the workstation to the network drive transparently to the user.

      Transport (4) layer (Transport Layer) ensures delivery of packets without errors and losses, as well as in the required sequence. Here, the transmitted data is divided into blocks, placed in packets, and the received data is restored from the packets. Packet delivery is possible both with the establishment of a connection (virtual channel) and without. The transport layer is the boundary layer and the bridge between the top three, which are highly application-specific, and the bottom three, which are highly network-specific.

      Network (3) layer (Network Layer) is responsible for addressing packets and translating logical names (logical addresses, such as IP addresses or IPX addresses) into physical network MAC addresses (and vice versa). At the same level, the problem of choosing a route (path) along which the packet is delivered to its destination is solved (if there are several routes in the network). At the network level, complex intermediate network devices such as routers operate.

      The Data Link (2) layer, or transmission-line control layer (Data Link Layer), is responsible for forming packets (frames) of the standard type for a given network (Ethernet, Token Ring, FDDI), including the initial and final control fields. Here access to the network is controlled, transmission errors are detected by calculating checksums, and erroneous packets are retransmitted. The data link layer is divided into two sublayers: the upper LLC and the lower MAC. Intermediate network devices such as switches operate at the data link layer.

      The Physical (1) layer (Physical Layer) is the lowest level of the model; it is responsible for encoding the transmitted information into the signal levels accepted in the transmission medium used, and for the reverse decoding. It also defines the requirements for connectors, electrical matching, grounding, interference protection, etc. At the physical layer, network devices such as transceivers, repeaters, and repeater hubs operate.

    The generalized structure of any software or information system can be represented, as noted above, by two interacting parts:

    • functional part, which includes application programs that implement the functions of the application area;
    • environment or system part, ensuring the execution of application programs.

    Two groups of standardization issues are closely related to such separation and interconnection:

    1. standards for interfaces between application programs and the IS environment, Application Program Interface (API);
    2. standards for interfaces between the IS itself and its external environment (External Environment Interface - EEI).

    These two groups of interfaces define the specifications of the external description of the IS environment - the architecture, from the point of view of the end user, the IS designer, and the application programmer developing the functional parts of the IS.

    Specifications of external interfaces of the IS environment and interaction interfaces between components of the environment itself are precise descriptions of all necessary functions, services and formats of a specific interface.

    The totality of such descriptions is the Open Systems Interconnection (OSI) reference model. This model has been in use for more than 30 years; it "grew" out of the SNA (System Network Architecture) network architecture proposed by IBM. The open systems interconnection model is used as the basis for the development of many ISO IT standards. The publication of this standard summed up many years of work by many well-known standardization organizations and telecommunications manufacturers.

    In 1984, the model received the status of international standard ISO 7498, and in 1993 an expanded and updated edition, ISO 7498-1-93, was published. The standard has the composite heading "Information and computing systems - Interconnection of open systems - Reference model." Its short name is "Reference Model of Open Systems Interconnection" (Open Systems Interconnection / Basic Reference Model - OSI/BRM).

    The model is based on dividing the computing environment into seven levels, the interaction between which is described by the corresponding standards and ensures the connection of the levels, regardless of the internal structure of the level in each specific implementation (Fig. 2.6).


    Fig. 2.6.

    The main advantage of this model is its detailed description of the connections in the environment from the point of view of technical devices and communication interactions. However, it does not address the portability of application software.

    The advantage of the "layered" organization of the interaction model is that it provides for independent development of level standards and modularity in the development of hardware and software of information and computing systems, and thereby contributes to technical progress in this area.

    The ISO 7498 standard identifies seven levels (layers) of information interaction, which are separated from each other by standard interfaces:

    1. application layer
    2. presentation layer
    3. session layer
    4. transport layer
    5. network layer
    6. data link layer
    7. physical layer.

    In accordance with this, the information interaction of two or more systems is a set of information interactions of level subsystems, and each layer of the local information system interacts, as a rule, with the corresponding layer of the remote system. Interaction is carried out using appropriate communication protocols and interfaces. In addition, by using encapsulation techniques, the same software modules can be used at different levels.

    A protocol is a set of algorithms (rules) for the interaction of objects at the same level of different systems.

    An interface is a set of rules in accordance with which interaction with an object of a given or another level is carried out. A standard interface may be called a service in some specifications.

    Encapsulation is the process of placing fragmented blocks of data from one level into blocks of data from another level.
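    Encapsulation as defined above can be sketched as each layer prepending its own header to the data block handed down from the layer above, with the receiving side stripping the headers in reverse order. The header contents below are invented placeholders, not real header formats:

```python
def encapsulate(payload: bytes) -> bytes:
    """Each layer wraps the block it receives from the layer above."""
    segment = b"TCPHDR|" + payload    # transport layer wraps the application data
    packet = b"IPHDR|" + segment      # network layer wraps the segment
    frame = b"ETHHDR|" + packet       # data link layer wraps the packet
    return frame

def decapsulate(frame: bytes) -> bytes:
    """The receiver strips headers in reverse order, layer by layer."""
    packet = frame.removeprefix(b"ETHHDR|")
    segment = packet.removeprefix(b"IPHDR|")
    return segment.removeprefix(b"TCPHDR|")

assert decapsulate(encapsulate(b"GET /index.html")) == b"GET /index.html"
```

    Each layer treats everything it receives from above, headers included, as opaque payload, which is what lets the layers evolve independently.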

    When dividing the environment into levels, the following general principles were observed:

    • do not create too many small partitions, as this complicates the description of the system of interactions;
    • form a level from easily localized functions; this, if necessary, allows you to quickly rebuild the level and significantly change its protocols to use new solutions in the field of architecture, software and hardware, programming languages, network structures, without changing the standard interaction and access interfaces;
    • place similar functions on the same level;
    • create separate levels to perform functions that clearly differ in the actions or technical solutions that implement them;
    • draw the boundary between levels in a place where the description of services is the smallest, and the number of interaction operations across the boundary (border crossing) is minimized;
    • draw the boundary between layers at a place where at some point a corresponding standard interface must exist.

    Each layer has a protocol specification, i.e. a set of rules that govern the interaction of peer processes at the same level, and a list of services that describe a standard interface to the layer above. Each layer uses the services of the layer below it and provides services to the layer above it. Let us give a brief description of each layer, noting that in some descriptions of the OSI model the numbering of layers may be in reverse order.

    Level 1 - the application layer (Application Layer). This level is associated with application processes. Its protocols are designed to provide user application programs with access to network resources. At this level, the interface with the communication part of applications is defined. An example of an application-layer protocol is Telnet, which provides user access to a host (a host computing device, one of the main elements of a multi-machine system, or any device connected to a network that uses TCP/IP protocols) in remote-terminal mode.

    The application layer performs the task of providing various forms of interaction between application processes located in various information network systems. To do this, it performs the following functions:

    • description of forms and methods of interaction of applied processes;
    • performing various types of work (task management, file transfer, system management, etc.);
    • identification of users (interaction partners) by their passwords, addresses, electronic signatures;
    • identification of functioning subscribers;
    • announcement of the possibility of access to new application processes;
    • determining the sufficiency of available resources;
    • sending connection requests to other application processes;
    • submitting requests to the presentation layer for the necessary methods of describing information;
    • selection of procedures for the planned dialogue of processes;
    • managing data exchanged between application processes;
    • synchronization of interaction between application processes;
    • determining the quality of service (delivery time of data blocks, acceptable error rate, etc.);
    • agreement to correct errors and determine the reliability of data;
    • coordination of restrictions imposed on syntax (character sets, data structure).

    The application layer is often divided into two sublayers. The upper sublayer includes network services; the lower one contains standard service elements that support the operation of those services.

    Level 2 - the presentation layer (Presentation Layer). At this level, information is converted into the form required by application processes. The presentation layer encodes the data produced by application processes and interprets the transmitted data. For example, algorithms are applied to convert the data presentation format for printing - ASCII or KOI-8. Or, if a display is used to visualize the data, the data is formed according to a given algorithm into a page that is displayed on the screen.
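    The conversion between representations such as ASCII and KOI-8 mentioned above amounts to re-encoding the same characters into a different byte syntax. Python's codec machinery illustrates the idea:

```python
# The application's view: abstract characters, independent of byte encoding.
text = "OSI model"
ascii_bytes = text.encode("ascii")
koi8_bytes = text.encode("koi8-r")    # KOI-8 keeps ASCII for Latin letters
assert ascii_bytes == koi8_bytes      # same byte syntax for this text

# Cyrillic text exposes the difference between representations.
cyr = "Сеть"                                          # "Network" in Russian
assert cyr.encode("koi8-r") != cyr.encode("utf-8")    # different byte syntax
assert cyr.encode("koi8-r").decode("koi8-r") == cyr   # round-trips losslessly
```

    The presentation layer's role is exactly this kind of syntax agreement: both sides exchange the same characters even though their local representations may differ.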

    The presentation layer performs the following main functions:

    • choosing a representation image from possible options;
    • changing the representation image to a given virtual image;
    • converting data syntax (codes, symbols) into standard;
    • defining the data format.

    Level 3 - the session layer (Session Layer). At this level, sessions between interacting application objects (application processes) are established, maintained and terminated. As an example of a session-layer protocol, consider RPC (Remote Procedure Call). As the name suggests, this protocol is designed to execute a procedure on a remote host and return its results. During this procedure, a session connection is established between applications. The purpose of this connection is to service requests that arise, for example, when a server application interacts with a client application.

    The session layer provides interaction with the transport layer, coordinates the reception and transmission of data from one communication session, contains functions for managing passwords, calculating fees for using network resources, etc. This level provides the following functions:

    • establishing and terminating connections between partners at the session level;
    • performing normal and urgent data exchange between application processes;
    • synchronization of session connections;
    • notification of application processes about exceptional situations;
    • establishing marks in the application process that allow, after a failure or error, to restore its execution from the nearest mark;
    • interrupting the application process when necessary and resuming it correctly;
    • termination of the session without data loss;
    • transmission of special messages about the progress of the session.

    Level 4 - the transport layer (Transport Layer). The transport layer is designed to control the flow of messages and signals. Flow control is an important function of transport protocols, since this mechanism allows reliable data transmission over networks with heterogeneous structures; the route description includes all components of the communication system that ensure data transmission along the entire path from the sender's devices to the recipient's devices. Flow control requires the transmitter to wait for confirmation of receipt of a specified number of segments by the receiver. The number of segments that a transmitter can send without receiving confirmation from the receiver is called a window.
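    The window mechanism just described can be sketched as a toy simulation: the sender may have at most `window` unacknowledged segments outstanding, so once the window is full it must wait for an acknowledgment before sending the next segment. This is a deliberately simplified model (one ACK frees one slot; no losses or retransmissions):

```python
def simulate_window(segments, window):
    """Trace send/ack events for a sender limited to `window` unacked segments."""
    in_flight, log = [], []
    for seg in segments:
        if len(in_flight) == window:        # window full: must wait for an ACK
            acked = in_flight.pop(0)
            log.append(f"ack {acked}")
        in_flight.append(seg)               # occupy a window slot
        log.append(f"send {seg}")
    for seg in in_flight:                   # drain the remaining ACKs
        log.append(f"ack {seg}")
    return log

print(simulate_window([1, 2, 3, 4], window=2))
# → ['send 1', 'send 2', 'ack 1', 'send 3', 'ack 2', 'send 4', 'ack 3', 'ack 4']
```

    A window of 1 degenerates to stop-and-wait; a larger window lets the sender keep the channel busy while acknowledgments are in transit, which is the point of the mechanism.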

    There are two types of transport layer protocols - segmentation protocols and datagram protocols. Segmentation protocols of the transport layer break the original message into transport-layer data blocks - segments. The main function of such protocols is to ensure delivery of these segments to the destination and reassembly of the message. Datagram protocols do not segment the message; they send it in one packet along with the address information. A packet of data, called a datagram, is routed over address-switching networks or transmitted over a local network to an application program or user.

    The transport layer can also interconnect the network layers of different, incompatible networks through special gateways. The layer under consideration determines the addressing of subscriber systems and administrative systems. The main task of the transport layer is to use the virtual channels laid between interacting subscriber systems and administrative systems for transmitting data blocks in packets. The main functions performed by the transport layer are:

    • managing the transfer of data blocks and ensuring their integrity;
    • detection of errors, their partial elimination, reporting of uncorrected errors;
    • restoration of transmission after failures and malfunctions;
    • consolidation or disaggregation of data blocks;
    • providing priorities when transferring blocks;
    • transmission of confirmations about transmitted data blocks;
    • elimination of blocks in case of deadlock situations in the network.

    In addition, the transport layer can recover blocks of data lost in lower layers.

    Level 5 - the network layer (Network Layer). The main task of network-layer protocols is to determine the path that will be used to deliver data packets on behalf of the upper-layer protocols (routing). For a packet to be delivered to any given host, that host must be assigned a network address known to the transmitter. Groups of hosts, united on a territorial basis, form networks. To simplify the routing task, a host's full address is composed of two parts: the network address and the host address. Thus, the routing task is divided in two - finding the network, and finding the host within that network. The following functions can be performed at the network layer:

    • creating network connections and identifying their ports;
    • detection and correction of errors that occur during transmission through a communication network;
    • packet flow control;
    • organization (ordering) of sequences of packets;
    • routing and switching;
    • packet segmentation and merging;
    • return to original state;
    • selection of service types.
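    The split of an address into a network part and a host part, described above, is directly visible in IP addressing. Python's `ipaddress` module shows how the routing decision reduces to matching the network prefix (the addresses below are made-up examples):

```python
import ipaddress

# A host configured as 192.168.10.37 on a /24 network.
iface = ipaddress.ip_interface("192.168.10.37/24")
assert str(iface.network) == "192.168.10.0/24"      # the network part

# The host part is the offset within that network.
host_part = int(iface.ip) - int(iface.network.network_address)
assert host_part == 37

# Routing step: is the destination on our network or behind a router?
local_dest = ipaddress.ip_address("192.168.10.200")
assert local_dest in iface.network        # same network: deliver directly
remote_dest = ipaddress.ip_address("10.0.0.5")
assert remote_dest not in iface.network   # different network: forward to a router
```

    Because routers only need to match the network part, their tables stay small: one entry per network, not one per host.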

    Level 6 - the data link layer (Data Link Layer). The purpose of link-layer protocols is to ensure data transmission over the physical medium. A start signal for data transmission is generated in the channel, the transmission is initiated and carried out, the correctness of the process is checked, the channel is shut down in case of failure and restored after the fault is eliminated, an end-of-transmission signal is generated, and the channel is switched to standby mode.

    Thus, the data link layer can perform the following functions:

    • organization (establishment, management, termination) of channel connections and identification of their ports;
    • transmission of data blocks;
    • error detection and correction;
    • data flow management;
    • ensuring transparency of logical channels (transmission of data encoded in any way over them).

    At the data link layer, data is transmitted in blocks called frames. The type of transmission medium used and its topology largely determine the type of link-layer protocol frame that must be used. With the "common bus" and "one-to-many" (Point-to-Multipoint) topologies, the link-layer protocol specifies the physical addresses with which data will be exchanged in the transmission medium and the procedure for accessing this medium. Examples of such protocols are Ethernet (where appropriate) and HDLC. Link-layer protocols designed to operate in a one-to-one (Point-to-Point) environment do not define physical addresses and have a simplified access procedure. An example of this type of protocol is PPP.
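    The physical addresses carried in such frames are visible in the Ethernet II header layout: a 6-byte destination MAC, a 6-byte source MAC, and a 2-byte EtherType identifying the network-layer protocol. The parsing sketch below uses made-up byte values for illustration:

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Unpack the 14-byte Ethernet II header: dst MAC, src MAC, EtherType."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return dst.hex(":"), src.hex(":"), hex(ethertype)

# A broadcast frame from a hypothetical adapter, carrying IPv4 (EtherType 0x0800).
frame = bytes.fromhex("ffffffffffff" "00a0c914c829" "0800") + b"payload"
print(parse_ethernet_header(frame))
# → ('ff:ff:ff:ff:ff:ff', '00:a0:c9:14:c8:29', '0x800')
```

    A switch reads only this header: it learns source MACs and forwards on destination MACs, never inspecting the network-layer payload, which is exactly what makes it a layer-2 device.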

    Level 7 - the physical layer (Physical Layer). Physical-layer protocols provide the link layer and the layers above it with direct access to the data transmission medium. Data is transmitted by protocols at this level as sequences of bits (for serial protocols) or groups of bits (for parallel protocols). At this level, the set of signals that the systems exchange, the parameters of these signals (timing and electrical) and the sequence of signal generation during the data transfer procedure are determined.

    The physical layer performs the following functions:

    • establishes and disconnects physical connections;
    • transmits a sequence of signals;
    • “listens” to channels when necessary;
    • performs channel identification;
    • notifies about the occurrence of malfunctions and failures.

    In addition, at this level, requirements are formulated for the electrical, physical and mechanical characteristics of the transmission medium, transmission and connecting devices.

    Network-dependent and network-independent levels. The functions listed above at all levels can be assigned to one of two groups: functions oriented to working with applications regardless of the network implementation, or functions that depend on the specific technical implementation of the network.

    The three upper levels - application, presentation and session - are application-oriented and practically do not depend on the technical features of network construction. The protocols at these layers are not affected by changes in network topology, replacement of equipment, or transition to another network technology.


    Fig. 2.9.

    Standardization of interfaces ensures complete transparency of interaction, regardless of how the levels are arranged in specific implementations (services) of the model.