
    11. Architecture of distributed systems

    Goals
    The purpose of this chapter is to study the architecture of distributed software systems. After reading this chapter you should:


    • know the main advantages and disadvantages of distributed systems;

    • have an understanding of the various approaches used in the development of client/server architectures;

    • understand the differences between client/server architecture and distributed object architecture;

    • know the concept of an object request broker and the principles implemented in the CORBA standards.

    Today, almost all large software systems are distributed. A distributed system is a system in which information processing is not concentrated on one computer but is distributed among several. The design of distributed systems has much in common with the design of any other software, yet a number of specific features must be taken into account. Some of these were already mentioned in the introduction to Chapter 10 when discussing client/server architecture; they are discussed in more detail here.

    Since distributed systems are now widespread, software developers must be familiar with the issues involved in designing them. Until recently, most large systems were centralized, running on one main computer (a mainframe) with terminals connected to it. The terminals took practically no part in information processing: all computation was performed on the main machine, and the developers of such systems did not have to think about the problems of distributed computing.

    All modern software systems can be divided into three large classes.
    1. Application software systems designed to run on only one personal computer or workstation. These include word processors, spreadsheets, graphics systems, etc.

    2. Embedded systems designed to run on a single processor or on an integrated group of processors. These include control systems for household appliances, instruments, and the like.

    3. Distributed systems in which software runs on a loosely integrated group of parallel processors connected through a network. These include bank ATM systems, publishing systems, groupware systems, etc.
    Currently there are fairly clear boundaries between these classes of software systems, but they will become increasingly blurred in the future. Over time, when high-speed wireless networks become widely available, it will be possible to dynamically integrate devices with embedded software systems, for example electronic organizers, with more general systems.

    The book identifies six main characteristics of distributed systems.
    1. Sharing resources. Distributed systems allow the sharing of hardware and software resources, such as hard drives, printers, files, compilers, etc., connected through a network. Obviously, resource sharing is also possible in multi-user systems, but in this case the central computer must be responsible for provisioning and managing resources.

    2. Openness. This is an opportunity to expand the system by adding new resources. Distributed systems are open systems, to which hardware and software from different manufacturers are connected.

    3. Parallelism. In distributed systems, multiple processes can run simultaneously on different computers on the network. These processes may (but do not have to) interact with each other during their execution.

    4. Scalability. In principle, all distributed systems are scalable: in order for the system to meet new requirements, it can be expanded by adding new computing resources. But in practice, expansion may be limited by the network connecting the individual computers of the system: if many new machines are connected, the network bandwidth may prove insufficient.

    5. Fault tolerance. The presence of multiple computers and the possibility of duplicating information mean that distributed systems are resistant to certain hardware and software failures (see Chapter 18). Most distributed systems can maintain at least partial functionality when an error occurs; a complete system failure occurs only in the event of network failure.

    6. Transparency. This property means that users are given completely transparent access to resources, while information about the distribution of resources in the system is hidden from them. However, in many cases specific knowledge about the organization of the system helps the user make better use of resources.
    Of course, distributed systems have a number of disadvantages.

    Complexity. Distributed systems are more complex than centralized ones. It is much more difficult to understand and evaluate the properties of distributed systems as a whole, and to test these systems. For example, system performance here depends not on the speed of a single processor but on the network bandwidth and the speeds of several processors; moving resources from one part of the system to another can radically affect performance.

    Security. Typically the system can be accessed from several different machines, and communications on the network may be viewed or intercepted. It is therefore much more difficult to maintain security in a distributed system.

    Manageability. The system may consist of different types of computers on which different versions of operating systems may be installed. Errors on one machine can spread to other machines with unpredictable consequences, so much more effort is required to manage and maintain the system.

    Unpredictability. As all Web users know, the response of distributed systems to certain events is unpredictable and depends on the overall system load, its organization, and the network load. Since all of these parameters are subject to constant change, the time required to complete a user request may vary significantly from one moment to the next.
    In discussing the advantages and disadvantages of distributed systems, the book identifies a number of critical design issues for such systems (Table 11.1). This chapter focuses on distributed software architecture because I believe it is currently the most significant issue in developing distributed software products. If you are interested in other topics, refer to specialized books on distributed systems.
    Table 11.1. Distributed system design issues

    Resource identification. Resources in a distributed system are located on different computers, so the resource naming scheme should be designed so that users can easily find and reference the resources they need. An example is the Uniform Resource Locator (URL) system, which defines the addresses of Web pages. Without an easily understood and universal identification scheme, most resources will be inaccessible to the system's users.

    Communications. The universal availability of the Internet and the efficient implementation of the TCP/IP protocols make the Internet the most effective way to organize communication between computers for most distributed systems. However, where special requirements are imposed on performance, reliability, etc., alternative methods of communication can be used.

    Quality of system service. The quality of service offered by a system reflects its performance, availability, and reliability. Quality of service is influenced by a number of factors: the distribution of system processes, the allocation of resources, the system and network hardware, and the system's capacity for adaptation.

    Software architecture. The software architecture describes the distribution of system functions across system components, as well as the distribution of these components across processors. If high-quality system service is to be maintained, choosing the right architecture is critical.

    The task of distributed system designers is to design the software and hardware to provide all the necessary characteristics of a distributed system. This requires knowing the advantages and disadvantages of various distributed system architectures. Two related types of distributed system architecture can be distinguished.
    1. Client/server architecture. In this model, the system can be represented as a set of services provided by servers to clients. In such systems, servers and clients differ significantly from each other.

    2. Distributed object architecture. In this case there is no distinction between servers and clients, and the system can be represented as a set of interacting objects whose location does not much matter; there is no difference between service providers and their users.
    In a distributed system, different system components can be implemented in different programming languages and run on different types of processors. Data models, information representations, and communication protocols are not necessarily all the same in a distributed system. Distributed systems therefore require software that can manage these disparate parts and guarantee their interaction and data exchange. Middleware belongs to this class of software: it sits, as it were, in the middle, between the distributed components of the system.

    This chapter describes the different types of middleware that can support distributed computing. As a rule, such software is built from ready-made components and does not require special modifications by developers. Examples of middleware include database interaction managers, transaction managers, data converters, communication monitors, etc. The rest of the chapter describes the architecture of distributed systems, including object request brokers, an important class of middleware.

    Distributed systems are typically developed using an object-oriented approach. These systems are created from loosely integrated parts, each of which can interact directly both with users and with other parts of the system. Wherever possible, these parts should react to independent events. Software objects built on such principles are natural components of distributed systems. If you are not yet familiar with the concept of objects, read Chapter 12 first and then return to this chapter.

    11.1. Multiprocessor architecture

    The simplest distributed system is a multiprocessor system. It consists of a number of processes that may (but need not) run on different processors. This model is often used in large real-time systems. As you will learn in Chapter 13, these systems collect information, make decisions based on it, and send signals to actuators that change the system's environment. In principle, all processes related to information gathering, decision making, and actuator control could be executed on a single processor under the control of a scheduler. Using multiple processors improves system performance and resilience. The distribution of processes between processors may be predetermined (common in critical systems) or under the control of a dispatcher.

    Fig. 11.1 shows an example of this type of system: a simplified model of a traffic control system. A group of distributed sensors collects information about traffic flow. The collected data is processed locally before being sent to the control room. Based on the information received, operators make decisions and control the traffic lights. In this example there are separate logical processes for managing the sensors, the control room, and the traffic lights. These can be individual processes or groups of processes; in our example they run on different processors.
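    As a rough illustration of this structure, the following Java sketch models the three logical processes exchanging data through queues. The names and the threshold are hypothetical, and threads stand in for what would, in the real system, be processes on separate processors:

        // Minimal sketch: sensor, control room and traffic light processes
        // communicating through queues. All names and values are illustrative.
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class TrafficControlSketch {
            public static void main(String[] args) {
                BlockingQueue<Integer> flowData = new LinkedBlockingQueue<>();
                BlockingQueue<String> commands = new LinkedBlockingQueue<>();

                Thread sensors = new Thread(() -> {
                    try {
                        while (true) {
                            flowData.put(readSensor());  // locally processed reading
                        }
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });

                Thread controlRoom = new Thread(() -> {
                    try {
                        while (true) {
                            int flow = flowData.take();  // data from the sensors
                            commands.put(flow > 50 ? "EXTEND_GREEN" : "NORMAL_CYCLE");
                        }
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });

                Thread lights = new Thread(() -> {
                    try {
                        while (true) {
                            System.out.println("lights: " + commands.take()); // actuate
                        }
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });

                sensors.start(); controlRoom.start(); lights.start();
            }

            private static int readSensor() { return (int) (Math.random() * 100); } // stub
        }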

    Fig. 11.1. Multiprocessor traffic control system
    Software systems in which multiple processes run simultaneously are not necessarily distributed. If the system has more than one processor, distributing the processes among them is not difficult. However, the design of multiprocessor software systems need not start from the assumption of distribution: it essentially follows the same approach as the design of the real-time systems discussed in Chapter 13.

    11.2. Client/server architecture

    Chapter 10 already introduced the client/server concept. In a client/server architecture, a software application is modeled as a set of services provided by servers and a set of clients that use those services. Clients must be aware of the available servers, although they may not be aware of the existence of other clients. As can be seen from Fig. 11.2, which shows a diagram of a distributed client/server architecture, clients and servers are distinct processes.


    Fig. 11.2. Client/server system
    In such a system there is not necessarily a one-to-one relationship between processes and processors. Fig. 11.3 shows the physical architecture of a system consisting of six client machines and two servers, which run the client and server processes shown in Fig. 11.2. In general, when I speak of clients and servers, I mean the logical processes rather than the physical machines on which they execute.

    The client/server system architecture should reflect the logical structure of the software application being developed. Fig. 11.4 offers another view of a software application, structured into three layers. The presentation layer presents information to users and handles interaction with them. The application execution layer implements the application logic. The data management layer performs all database operations. In centralized systems these layers need not be clearly separated; in distributed systems, however, a clear separation is necessary so that each layer can be placed on a different computer.
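    As a minimal sketch of this separation (hypothetical Java interfaces for a banking-style application; the real layers would of course be far richer), each layer can be defined behind its own interface so that it can later be placed on a different computer behind a remote-invocation boundary:

        // Hypothetical sketch: the three logical layers as separate interfaces.
        import java.util.List;

        interface DataManagement {            // data management layer
            List<String> findTransactions(String accountId);
        }

        interface ApplicationLogic {          // application execution layer
            double computeBalance(String accountId);
        }

        interface Presentation {              // presentation layer
            void showBalance(String accountId, double balance);
        }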

    The simplest client/server architecture is a two-tier one, in which an application consists of a server (or many identical servers) and a group of clients. There are two types of such architecture (Fig. 11.5).
    1. Thin client model. In this model, all application processing and data management are performed on the server; only the presentation layer software runs on the client machine.

    2. Thick client model. In this model, the server only manages data, while the client machine runs the application logic and handles interaction with the system user.


    Fig. 11.3. Computers on a client/server network

    Fig. 11.4. Software application layers
    A two-tier thin client architecture is the simplest way to convert existing centralized systems (see Chapter 26) to a client/server architecture. The user interface in these systems is "moved" to the personal computer, while the software application itself performs the functions of a server, i.e., runs all application processing and manages the data. The thin client model can also be implemented where the clients are simple network devices rather than personal computers or workstations; the network device runs an Internet browser and the user interface implemented within the system.


    Fig. 11.5. Thin and thick client models
    The main disadvantage of the thin client model is the high load it places on the server and the network. All computation is performed on the server, which can generate significant network traffic between client and server. Modern client computers have ample computing power, but in the thin client model it goes practically unused.

    In contrast, the thick client model uses the processing power of local machines: both the application execution layer and the presentation layer are placed on the client machine. The server here is essentially a transaction server that manages all database transactions. An example of this type of architecture is ATM systems, in which the ATM is the client and the server is the central computer that maintains the customer accounts database.

    Fig. 11.6 shows an ATM network system. Note that the ATMs are not connected to the accounts database directly but through a teleprocessing monitor. This monitor is an intermediate link that interacts with remote clients and organizes client requests into a sequence of transactions against the database. Using serialized transactions allows the system to recover from failures without losing data.
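    A minimal Java sketch of the idea, assuming a hypothetical Database handle with begin/commit/rollback operations: requests are serialized through a single queue, and each request runs as one transaction, so a failure never leaves a partial update behind.

        // Sketch of a teleprocessing monitor: client requests are serialized
        // and each executes as a single database transaction (hypothetical API).
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class TeleprocessingMonitor {
            interface Database {              // stands in for the real middleware
                void beginTransaction();
                void commit();
                void rollback();
            }

            private final ExecutorService serializer = Executors.newSingleThreadExecutor();
            private final Database db;

            public TeleprocessingMonitor(Database db) { this.db = db; }

            public void submit(Runnable request) {
                serializer.submit(() -> {
                    db.beginTransaction();
                    try {
                        request.run();        // e.g. withdraw cash, query balance
                        db.commit();
                    } catch (RuntimeException e) {
                        db.rollback();        // recover without losing data
                    }
                });
            }
        }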


    Fig. 11.6. Client/server system for an ATM network
    Although the thick client model distributes application processing more effectively than the thin client model, it is more difficult to manage: the application's functions are spread across many different machines, and replacing the application means reinstalling it on every client computer, which is expensive when the system has hundreds of clients.

    The advent of Java and downloadable applets allowed the development of client/server models intermediate between the thin and thick client models. Some of the programs that make up the application can be downloaded to the client machine as Java applets, relieving the server. The user interface is built in a Web browser that runs the Java applets. However, Web browsers from different manufacturers, and even different versions of a browser from the same manufacturer, do not always behave identically, and earlier versions of browsers on older machines may not always run Java applets. You should therefore use this approach only if you are sure that all users of the system have Java-compatible browsers installed.

    In a two-tier client/server model, the fundamental problem is the placement of three logical layers (presentation, application execution, and data management) on two computer systems. The model therefore often runs into either scalability and performance problems, if the thin client model is chosen, or system management problems, if the thick client model is used. To avoid these problems, an alternative approach can be used: the three-tier client/server architecture (Fig. 11.7), in which the presentation, application execution, and data management layers run as separate processes.


    Fig. 11.7. Three-tier client/server architecture
    A software architecture built on the three-tier client/server model does not require three separate computer systems: on a single server computer, both application execution and data management can run as separate logical servers. At the same time, if demands on the system increase, it is relatively easy to separate application execution from data management and run them on different processors.

    A banking system with Internet services can be implemented using a three-tier client/server architecture. The accounts database (usually located on a mainframe) provides data management services; a Web server supports the application services, such as money transfer, report generation, bill payment, etc.; and the user's computer with an Internet browser is the client. As shown in Fig. 11.8, this system is scalable, because it is relatively easy to add new Web servers as the number of clients grows.

    The use of a three-tier architecture in this example allows data transfer between the Web server and the database server to be optimized. Communication between these systems need not be based on Internet standards; faster, lower-level communication protocols can be used. Typically, the information from the database is handled by efficient middleware that supports queries in the structured query language SQL.
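    As an illustration, the application tier might query the data tier through JDBC, Java's standard SQL middleware API; the connection URL, table, and column names below are hypothetical:

        // Sketch: the application-execution tier querying the accounts database.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class AccountQueries {
            public static double balance(String jdbcUrl, String user, String password,
                                          long accountId) throws SQLException {
                String sql = "SELECT balance FROM accounts WHERE account_id = ?";
                try (Connection con = DriverManager.getConnection(jdbcUrl, user, password);
                     PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setLong(1, accountId);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() ? rs.getDouble("balance") : 0.0;
                    }
                }
            }
        }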

    In some cases, a three-tier client/server model can be converted to a multi-tier model by adding additional servers to the system. Multi-tier systems can also be used where applications need to access information located in different databases. In this case, the federation server is located between the server running the application and the database servers. The federation server collects distributed data and presents it to the application as if it were in the same database.


    Fig. 11.8. Distributed architecture of a banking system using Internet services
    Developers of client/server systems must consider a number of factors when choosing the most appropriate architecture. Table 11.2 lists typical applications of the different types of client/server architecture.
    Table 11.2. Applications of different types of client/server architecture

    Two-tier thin client architecture:

    • Legacy systems in which it is not practical to separate application execution and data management.

    • Compute-intensive applications, such as compilers, with little data management.

    • Applications that process large amounts of data (queries) but involve little computation in the application itself.

    Two-tier thick client architecture:

    • Applications where the user requires intensive data processing (for example, data visualization or large amounts of computation).

    • Applications with a relatively constant set of user-side functionality, running in a well-managed environment.

    11.3. Distributed Object Architecture

    In the client/server model of a distributed system there are differences between clients and servers. A client requests services only from servers, not from other clients; servers can function as clients and request services from other servers, but not from clients; and clients must be aware of the services provided by specific servers and of how those servers communicate. This model works well for many types of applications, but it constrains system developers, who are forced to decide where each service should be provided; they must also plan for scalability and provide a means of distributing clients across the available servers.

    A more general approach, used in distributed system design, is to blur the distinction between client and server and design the system architecture as a distributed object architecture. In this architecture (Fig. 11.9), the main components of the system are objects providing a set of services through their interfaces. Other objects call these services, making no distinction between the client (the user of a service) and the server (its provider).


    Fig. 11.9. Distributed object architecture
    Objects can be located on different computers on the network and interact through middleware. By analogy with a hardware system bus, which allows various devices to be connected and to communicate with one another, middleware can be thought of as a software bus: it provides a set of services that allows objects to interact with each other and to be added to or removed from the system. This middleware is called an object request broker; its task is to provide an interface between objects. Object request brokers are discussed in Section 11.4.

    The following are the main benefits of the distributed object architecture model.
    System developers can delay decisions about where and how services will be provided. Objects that provide services can run on any node of the network. Consequently, the distinction between thick and thin client models becomes irrelevant, since there is no need to plan the placement of objects in advance.

    The system architecture is quite open, which allows new resources to be added as needed. The next section notes that software bus standards are constantly evolving, allowing objects written in different programming languages to interact and provide services to each other.

    Flexibility and scalability. To cope with system load, instances of the system can be created in which the same services are provided by different objects or by different instances (copies) of objects. As the load grows, new objects can be added without interrupting the operation of the others.

    It is possible to dynamically reconfigure the system by migrating objects across the network in response to requests. Objects providing services can migrate to the same processor as the objects requesting them, thereby improving system performance.
    Distributed object architecture can be used in two ways during system design.
    1. As a logical model that allows developers to structure and plan the system. In this case, the functionality of the application is described only in terms of services and combinations of services. Methods are then developed to provide those services using a number of distributed objects. At this level, as a rule, coarse-grained objects are designed that provide services reflecting the specifics of a particular application domain. For example, a retail accounting program could include objects that keep track of inventory status, track interactions with customers, classify goods, etc.

    2. As a flexible approach to implementing client/server systems. In this case, the logical model of the system is a client/server model, but clients and servers are implemented as distributed objects interacting via a software bus. With this approach it is easy to replace the system, for example a two-tier system with a multi-tier one. Here neither the server nor the client need be implemented as a single object: each may consist of many small objects, each providing a specific service.
    An example of a system suited to a distributed object architecture is a system for processing data stored in different databases (Fig. 11.10). In this example, each database can be represented as an object with an interface providing read-only access to the data. Each integrator object deals with certain types of data dependencies, collecting information from the databases in an attempt to trace those dependencies.

    Visualizer objects interact with integrator objects to present data graphically or to generate reports on the analyzed data. Methods for presenting graphical information are discussed in Chapter 15.


    Fig. 11.10. Architecture of a distributed data processing system
    For this type of application, a distributed object architecture is more suitable than a client/server architecture for three reasons.
    1. In these systems (unlike, for example, an ATM system) there is no one service provider on which all data management services would be concentrated.

    2. You can increase the number of available databases without interrupting the system because each database is simply an object. These objects support a simplified interface that controls access to data. Available databases can be placed on different machines.

    3. By adding new integrator objects, you can track new types of dependencies between data.
    The main disadvantage of distributed object architectures is that they are harder to design than client/server systems. Client/server systems offer a more natural approach to building distributed systems: they reflect the relationships between people, in which some use the services of others who specialize in providing them. Designing a system as a distributed object architecture is much harder, because the software industry has not yet accumulated sufficient experience in designing and developing coarse-grained objects.

    11.4. CORBA

    As noted in the previous section, implementing a distributed object architecture requires middleware (object request brokers) to organize interaction between distributed objects. Problems can arise here: objects in the system may be implemented in different programming languages, may run on different platforms, and their names need not be known to all other objects in the system. Middleware must therefore do a great deal of work to keep the objects communicating.

    At present there are two main middleware standards supporting distributed object computing.
    1. CORBA (Common Object Request Broker Architecture). This is a set of middleware standards developed by the OMG (Object Management Group), a consortium of software and hardware companies including Sun, Hewlett-Packard, and IBM. The CORBA standards define a common, machine-independent approach to distributed object computing. Many implementations of the standard have been developed by various vendors, and CORBA is supported on Unix and on Microsoft operating systems.

    2. DCOM (Distributed Component Object Model). DCOM is a standard developed and implemented by Microsoft and integrated into its operating systems. This model of distributed computing is less versatile than CORBA and offers more limited networking capabilities; currently the use of DCOM is confined to Microsoft operating systems.
    Here I have chosen to focus on CORBA technology, since it is more universal. In addition, I believe it likely that CORBA, DCOM, and other technologies, such as RMI (Remote Method Invocation, a technology for building distributed applications in Java), will gradually converge, and that this convergence will be based on the CORBA standards. There is therefore no need for yet another standard; competing standards would only hinder further development.

    The CORBA standards are defined by the OMG group, which unites more than 500 companies supporting object-oriented development. The OMG's role is to create standards for object-oriented development, not to provide specific implementations of those standards. These standards are freely available on the OMG Web site. The group not only focuses on CORBA standards, but also defines a wide range of other standards, including the UML modeling language.

    A representation of distributed applications within CORBA is shown in Fig. 11.11. This is a simplified diagram of the object management architecture taken from the article. A distributed application is expected to consist of the following components:
    1. Application objects that are created and developed for a given software product.

    2. Standard objects that are defined by the OMG for specific tasks. At the time of writing, many experts were developing object standards in the fields of finance, insurance, e-commerce, healthcare, and many others.

    3. Core CORBA services that support basic distributed computing services, such as directories, security management, etc.

    4. Horizontal CORBA facilities, such as user interfaces, system management tools, etc. Horizontal refers to features that are common to many applications.


    Fig. 11.11. Structure of a distributed application based on the CORBA standards
    CORBA standards describe four main elements.
    1. An object model in which a CORBA object encapsulates state behind a well-defined interface described in IDL (Interface Definition Language).

    2. An object request broker (ORB), which manages requests for object services. The ORB locates the object providing a service, prepares it to receive the request, passes the request to it, and returns the results to the requesting object.

    3. A set of object services that are core services and are needed in many distributed applications. Examples include directory services, transaction services, and temporary object support services.

    4. A set of common components built on top of the core services. These may be vertical, reflecting the specifics of a particular domain, or horizontal, i.e., universal components used in many software applications. These components are discussed in Chapter 14.
    In the CORBA model, an object encapsulates attributes and services like any ordinary object. In addition, however, a CORBA object must define interfaces describing its global attributes and operations. CORBA object interfaces are defined in IDL, a standard, universal interface description language. If one object requests services provided by other objects, it accesses them through an IDL interface. Each CORBA object has a unique identifier called an IOR (Interoperable Object Reference); the IOR is used whenever one object makes a request to a service provided by another.
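    A minimal client-side sketch, using the legacy Java IDL mapping (the org.omg.* packages bundled with Java SE up to version 10), shows how a stringified IOR is turned back into a usable object reference. Account and AccountHelper stand for a hypothetical IDL-defined interface and its idlj-generated helper class:

        // Sketch: obtaining an object reference from a stringified IOR.
        import org.omg.CORBA.ORB;

        public class IorClientSketch {
            public static void main(String[] args) {
                ORB orb = ORB.init(args, null);
                String ior = args[0];                         // stringified IOR
                org.omg.CORBA.Object obj = orb.string_to_object(ior);
                Account account = AccountHelper.narrow(obj);  // hypothetical generated types
                account.deposit(100.0f);                      // invoked through the ORB
            }
        }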

    The object request broker knows the objects requesting the services and their interfaces. It organizes interaction between objects. Communicating objects do not need to know anything about the placement of other objects or their implementation. Because the IDL interface decouples objects from the broker, the implementation of objects can be changed without affecting other system components.

    Fig. 11.12 shows how objects o1 and o2 interact through an object request broker. The calling object (o1) is associated with an IDL stub that defines the interface of the object providing the service; the author of o1 embeds calls to this stub in the object's implementation wherever a service is required. IDL is an extension of C++, so accessing the stub is easy if you program in C++, C, or Java. Translating an object's interface description into IDL is also possible for other languages, such as Ada or COBOL, but in these cases appropriate tool support is needed.

    Fig. 11.12. Object interaction via an object request broker
    The object providing the service is associated with an IDL skeleton, which links the interface to the implementation of the services. In other words, when a service is called through the interface, the IDL skeleton routes the call to the service, regardless of the language in which it was implemented. After a method or procedure completes, the skeleton translates the results back so that they are available to the caller. If an object both provides services to other objects and uses services provided elsewhere, it needs both an IDL skeleton and IDL stubs; a stub is required for every object whose services it uses.
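    The server side of the same hedged sketch: the servant extends the idlj-generated skeleton (here the hypothetical AccountPOA), is activated under the root POA, and its stringified IOR is printed for clients to use:

        // Sketch: a CORBA servant activated under the root POA (legacy Java IDL mapping).
        import org.omg.CORBA.ORB;
        import org.omg.PortableServer.POA;
        import org.omg.PortableServer.POAHelper;

        class AccountImpl extends AccountPOA {      // hypothetical generated skeleton
            private float balance;
            public float balance() { return balance; }
            public void deposit(float amount) { balance += amount; }
        }

        public class IorServerSketch {
            public static void main(String[] args) throws Exception {
                ORB orb = ORB.init(args, null);
                POA rootPoa = POAHelper.narrow(orb.resolve_initial_references("RootPOA"));
                rootPoa.the_POAManager().activate();
                org.omg.CORBA.Object ref = rootPoa.servant_to_reference(new AccountImpl());
                System.out.println(orb.object_to_string(ref)); // stringified IOR
                orb.run();                                     // wait for requests
            }
        }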

    An object request broker is usually implemented not as a separate process but as a framework (see Chapter 14) linked with the object implementation. In a distributed system, therefore, each computer running objects must have its own object request broker, which handles all local calls to objects. If a request is made to a service provided by a remote object, the brokers must communicate with each other.

    This situation is illustrated in Fig. 11.13. In this example, if object o1 or o2 sends a request to a service provided by object o3 or o4, the brokers associated with those objects must interact. The CORBA standards support broker-to-broker interaction, giving brokers access to each other's IDL interface definitions, and the OMG has defined the General Inter-ORB Protocol (GIOP) standard. This protocol defines the standard messages that brokers can exchange when calling a remote object and transferring information. Combined with the low-level Internet protocol TCP/IP, it allows brokers to communicate over the Internet.

    The first versions of CORBA were developed back in the 1980s. Early versions were concerned simply with supporting distributed objects, but over time the standards have grown more extensive. In addition to mechanisms for distributed object communication, the CORBA standards now define a number of standard services that can be used to support object-oriented applications.


    Fig. 11.13. Interaction between object request brokers
    CORBA services are tools that are needed in many distributed systems. These standards define approximately 15 common services. Here are some of them.
    1. A naming service that allows objects to find and refer to other objects on the network. This is a directory service that assigns names to objects; through it, objects can obtain the IORs of other objects. (A minimal lookup sketch follows this list.)

    2. An event (registration) service that allows objects to be notified when certain events occur. An object can register its interest in a particular event, and when that event occurs, the service automatically notifies it.

    3. A transaction service, which supports atomic transactions and rollback in case of errors or failures. This service is a fault-tolerance facility (see Chapter 18) that provides recovery from errors during an update operation: if the actions updating an object result in errors or a system failure, the object can always be returned to the state it was in before the update began.
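    The naming-service lookup from item 1 might look as follows in the same legacy Java IDL mapping; the registered name "BankAccounts" and the Account/AccountHelper types remain hypothetical:

        // Sketch: resolving a name to an object reference via the CORBA naming service.
        import org.omg.CORBA.ORB;
        import org.omg.CosNaming.NamingContextExt;
        import org.omg.CosNaming.NamingContextExtHelper;

        public class NamingLookupSketch {
            public static void main(String[] args) throws Exception {
                ORB orb = ORB.init(args, null);
                NamingContextExt naming = NamingContextExtHelper.narrow(
                        orb.resolve_initial_references("NameService"));
                org.omg.CORBA.Object obj = naming.resolve_str("BankAccounts");
                Account account = AccountHelper.narrow(obj);   // hypothetical types
                System.out.println(account.balance());
            }
        }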
    CORBA standards are expected to provide interface definitions for a wide range of components that can be used to build distributed applications. These components can be vertical or horizontal. Vertical components are developed specifically for specific applications. As already noted, many specialists from various fields of activity are engaged in developing definitions of these components. Horizontal components are generic, such as user interface components.

    At the time of writing this book, component specifications have been developed but not yet agreed upon. In my opinion, this is probably where the CORBA standards are weakest, and it may take several years to get to the point where both specifications and component implementations are in place.
    KEY CONCEPTS
    All large systems are, to one degree or another, distributed systems, in which software components execute on a group of processors integrated by a network.

    Distributed systems have the following characteristics: resource sharing, openness, parallelism, scalability, fault tolerance, and transparency.

    Client/server systems are distributed systems modeled as a set of services provided by servers to client processes.

    In a client/server system, the user interface is on the client side, and data management is always maintained on the shared server. Application functions can be implemented on the client computer or on the server.

    Distributed object architecture makes no distinction between clients and servers. Objects provide core services that other objects can call. The same approach can be used in implementing client/server systems.

    Distributed object systems must have middleware designed to handle interactions between objects and the addition or removal of objects from the system. Conceptually, middleware can be thought of as a software bus to which objects are connected.

    The CORBA standards are a set of standards for middleware that supports distributed object architecture. These include definitions of the object model, object request broker, and shared services. There are currently several implementations of the CORBA standards.
    Exercises
    11.1. Explain why distributed systems are always more scalable than centralized ones. What is the likely limit to the scalability of software systems?

    11.2. What is the main difference between the thick and thin client models in client/server system development? Explain why using Java as the implementation language smooths out the differences between these models.

    11.3. Based on the application model shown in Fig. 11.4, consider the potential problems that may arise when converting a 1980s mainframe healthcare system to a client/server architecture.

    11.4. Distributed systems based on a client/server model have been developed since the 1980s, but only recently have such systems based on distributed objects been implemented. Give three reasons why this happened.

    11.5. Explain why using distributed objects in conjunction with an object request broker simplifies the implementation of scalable client/server systems. Illustrate your answer with an example.

    11.6. How is IDL used to support communication between objects implemented in different programming languages? Explain why this approach might cause performance problems if there are radical differences between the languages used to implement the objects.

    11.7. What basic facilities should an object request broker provide?

    11.8. It can be argued that the development of CORBA standards for horizontal and vertical components restricts competition: once standard components have been created and adopted, smaller companies are prevented from developing better ones. Discuss the role of standardization in supporting or limiting competition in the software market.

    (Site material http://se.math.spbu.ru)

    Introduction.

    Today, almost all large software systems are distributed. A distributed system is a system in which information processing is not concentrated on one computer but is distributed among several. The design of distributed systems, although it has much in common with software design in general, must take a number of specific features into account.

    There are six main characteristics of distributed systems.

    1. Resource sharing. Distributed systems allow the sharing of both hardware (hard drives, printers) and software (files, compilers) resources.
    2. Openness. This is the ability to expand the system by adding new resources.
    3. Parallelism. In distributed systems, multiple processes can run simultaneously on different computers on the network. These processes can interact while they are running.
    4. Scalability. Scalability is understood as the ability to expand the system by adding new computing resources.
    5. Fault tolerance. The presence of several computers allows information to be duplicated, making the system resistant to some hardware and software errors. Distributed systems can maintain partial functionality when an error occurs; a complete failure occurs only in the event of network errors.
    6. Transparency. Users are given full access to the resources of the system, while information about the distribution of resources throughout the system is hidden from them.

    Distributed systems also have a number of disadvantages.

    1. Complexity. It is much more difficult to understand and evaluate the properties of distributed systems as a whole, and they are harder to design, test, and maintain. System performance depends on the speed of the network rather than of individual processors, and redistributing resources can significantly change the system's speed.
    2. Security. Typically a system can be accessed from several different machines, and messages on the network can be viewed and intercepted, so it is much harder to maintain security in a distributed system.
    3. Manageability. The system may consist of different types of computers running different versions of operating systems, and errors on one machine can spread unpredictably to others.
    4. Unpredictability. The response of a distributed system to events depends on the overall system load, its organization, and the network load. Since these parameters change constantly, the response time to a request may vary significantly from one moment to the next.

    From these shortcomings, it can be seen that when designing distributed systems, a number of problems arise that developers need to take into account.

    1. Resource identification. Resources in distributed systems are located on different computers, so the resource naming scheme should be designed so that users can easily find and reference the resources they need. An example is the URL (Uniform Resource Locator) system, which defines the addresses of Web pages.
    2. Communication. The universal availability of the Internet and the efficient implementation of the TCP/IP protocols make the Internet the most effective way to organize communication between computers for most distributed systems. However, in some cases where special performance or reliability is required, specialized tools may be used.
    3. Quality of system service. This parameter reflects the performance, availability, and reliability of the system. Quality of service is influenced by a number of factors: the distribution of processes and resources, the hardware, and the system's capacity for adaptation.
    4. Software architecture. The architecture describes the distribution of system functions across components, as well as the distribution of these components across processors. Choosing the right architecture is critical if high-quality system service is to be maintained.

    The task of distributed system designers is to design software and hardware to provide all the necessary characteristics of a distributed system. And this requires knowing the advantages and disadvantages of various distributed system architectures. There are three types of distributed system architectures.

    1. Client/server architecture. In this model, the system can be represented as a set of services provided by servers to clients. In such systems, servers and clients differ significantly from each other.
    2. Three-tier architecture. In this model, the server provides services to clients not directly but through a business logic server.

    The first two models have already been mentioned more than once; let’s take a closer look at the third.

    3. Distributed object architecture. In this case there is no difference between servers and clients, and the system can be represented as a set of interacting objects whose location does not much matter. There is no difference between service providers and their users.

    This architecture is widely used nowadays and is also called a Web services architecture. A Web service is an application accessible via the Internet that provides services whose form does not depend on the provider (since the universal data format XML is used) or on the operating platform. At the present time there are three technologies supporting the concept of distributed object systems: EJB, CORBA, and DCOM.

    First, a few words about XML. XML is a universal data format used to provide Web services. Web services are based on open standards and protocols: SOAP, UDDI, and WSDL; a minimal service sketch follows the list below.

    1. SOAP (Simple Object Access Protocol), developed by the W3C consortium, defines the format of requests to Web services. Messages between a Web service and its user are packaged in so-called SOAP envelopes (sometimes also called XML envelopes). A message can contain either a request to perform an action or a response, the result of performing that action.
    2. WSDL (Web Service Description Language). A Web service's interface is described in WSDL documents (WSDL documents are themselves XML). Before deploying a service, the developer writes its description in WSDL, specifying the Web service's address, the supported protocols, the list of allowed operations, and the request and response formats.
    3. UDDI (Universal Description, Discovery and Integration), a protocol for discovering Web services on the Internet (http://www.uddi.org/). It is a business registry in which Web service providers register their services and developers find the services they need to include in their applications.
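    A minimal sketch of publishing a Web service with JAX-WS (the javax.jws and javax.xml.ws packages bundled with Java SE 6 through 10); the service name, operation, and URL are hypothetical. The runtime generates the WSDL description and packs requests and responses into SOAP envelopes automatically:

        // Sketch: a hypothetical money-transfer Web service published with JAX-WS.
        import javax.jws.WebService;
        import javax.xml.ws.Endpoint;

        @WebService
        public class TransferService {
            public String transfer(String fromAccount, String toAccount, double amount) {
                // ... business logic would go here ...
                return "OK";
            }

            public static void main(String[] args) {
                // The WSDL becomes available at http://localhost:8080/transfer?wsdl
                Endpoint.publish("http://localhost:8080/transfer", new TransferService());
            }
        }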

    From the above it may seem that Web services are the only reasonable solution, and that the only question is the choice of development tools. However, this is not so. An alternative to Web services exists: the Semantic Web, whose necessity WWW creator Tim Berners-Lee spoke about five years ago.

    If the task of Web services is to facilitate communication between applications, the Semantic Web is designed to solve a much more complex problem: using metadata mechanisms to increase the value of the information that can be found on the Web. This can be done by abandoning the document-oriented approach in favor of an object-oriented one.

    References

    1. Sommerville I. Software Engineering.
    2. Dranica A. Java vs. .NET. "Computerra", #516.
    3. Internet resources.


    Distributed fault-tolerant control systems based on reconfigurable multi-pipeline computing environments (L-Net)

    One of the current problems in the field of control systems is the development of software for distributed fault-tolerant control systems. The solutions that exist in this area today are proprietary and, as a result, expensive and not always effective.

    These solutions do not provide for the efficient use of redundant resources, both hardware and software, which negatively affects both the fault tolerance and the scalability of such solutions. If the network topology is disrupted, there is no possibility of dynamically reconfiguring either the information processing processes or the transmission of data flows (both control and informational). The use of specific microcontrollers and of DCS/SCADA complicates the development and support of such systems and the expansion of their functionality.

    Architecture of a distributed control system

    The generalized typical architecture of a distributed control system (DCS) includes three hierarchically related levels: operator level, control level and input/output level (see Fig. 1).

    The main task of the operator level is to provide a human-machine interface (HMI) to configure and control the functioning of the entire system. The control level is responsible for receiving and processing data from sensors, transmitting data to the operator level and developing control actions on actuators. The input-output level represents sensors and actuators directly connected to the control object.

    The task of the software, within the framework of the generalized DCS architecture, is to ensure the functioning of the operator level and its connection with the system control level. Consequently, the fundamental level when designing software and solving issues of its interaction with hardware is the operator level. The software must make the most efficient use of the system's available hardware resources while being as independent as possible from the internal hardware architecture.

    Hardware provides computing resources, memory, and communication media between nodes in the system. When designing the general architecture of a system, specific I/O level nodes that will be connected to it in a specific implementation are not considered; therefore, the general architecture considers the operator level and the control level. The hardware must be widespread, meet modern standards, and have all the properties and capabilities necessary to implement the architecture.

    Requirements for DCS

    The requirements for the DCS relate not only to the system as a whole but also to its hardware and software components separately, since the specific approaches to meeting them may differ fundamentally for each component. The DCS must first and foremost be fault-tolerant. The simplest method of increasing fault tolerance is redundancy (duplication) of functional units or of combinations of them. The second important property is scalability, which rests on the implementation of special algorithms in the software and on the hardware's ability to replace nodes or their components and to add new ones. At the same time, the system must remain simple to operate, to extend with new units or modules, and to modify architecturally.

    Overview of DCS architectures

    To conduct a review of DCS architectures, we selected Siemens SIMATIC PCS 7 DCS as one of the most popular on the market and RTS S3 as a DCS implemented on the basis of the QNX RTOS.

    Siemens SIMATIC PCS 7

    The system architecture has all the properties of the generalized DCS architecture. Operator stations are computers based on the x86 processor architecture running Windows and the Siemens WinCC package, which provides the HMI. There are servers with databases. Operator stations, engineering stations, and servers are connected by an Ethernet-based local network. The operator level is connected to the control level by a redundant Industrial Ethernet network. At the control level there are programmable logic controllers (PLCs) with the possibility of redundancy through duplication of functionality. It is possible to connect to external systems and networks and to organize remote access to the system.

    RTS S3

    This architecture likewise consists of the levels of the generalized DCS structure. Operator stations are based on the same hardware platform as in the SIMATIC DCS but can run either Windows or Linux; engineering stations are combined with operator stations, and the system provides a unified application development environment. An Ethernet network connects the nodes within the operator level, and the operator level to the control level, using the TCP/IP protocol stack. At the control level there are industrial computers running the QNX OS with their own database and the possibility of redundancy through duplication of a node's functionality.

    Disadvantages of the described systems

    The systems described above use different hardware and software platforms for the operator level and the control level. Within the operator level, only one processor architecture can be used, and a dedicated engineering station is required to configure and develop the control level. As a way of increasing fault tolerance, these DCSs offer only hardware redundancy with duplication of the redundant node's functionality, which is an irrational use of the redundant hardware.

    Characteristics and functional features of the L-Net system

    When developing the L-Net system, the task was to create a control system that would have the following characteristics:

    • Dynamic reconfiguration with full restoration of functionality with minimal losses in the event of a host failure or network topology disruption.
    • Effective distribution of tasks among the available healthy network nodes.
    • Duplication of communication channels between nodes with dynamic reconfiguration of data flows.
    • Ease of operation and scaling of the system.
    • Portability and operability of the system on any hardware platform intended for building control systems and embedded systems.

    To build a system with the characteristics described above, an operating system is needed that is designed primarily for creating control systems and embedded systems. An analysis of existing operating systems showed that the most suitable is QNX 6 (Neutrino), which has very efficient resource distribution and extensive networking capabilities provided by the Qnet network protocol. Qnet solves the problem of reliability and dynamic load balancing of communication channels, but it does not solve the problem of fault tolerance of the system as a whole. As a result, an innovative control system was developed based on a distributed, reconfigurable, multi-pipeline computing environment. The developed system has a peer-to-peer architecture comprising three logical blocks: an input/output block, a general-purpose switch block, and a reconfigurable computing environment (RCE) block (see Fig. 2).

    The main advantages of this architecture are:

    • Peer-to-peer organization
    • Decentralization
    • Scalability
    • Spatial distribution

    Functional features of this architecture:

    • Pipeline processing
    • Hardware redundancy
    • Load distribution
    • On-the-fly reconfiguration

    At the first level of the architecture is the input/output (I/O) block, which includes I/O nodes, an I/O node switch, an I/O interface, sensors, and actuators. The block is responsible for the basic mechanisms of generating control actions based on data from local sensors and data received from other levels of the control system. Assigned tasks are distributed among the healthy I/O nodes based on their current relative performance, or manually by an operator. Sensors and actuators are connected via a bus to all I/O nodes in the block, which allows any node to poll any sensor or act on any actuator. The I/O node switch provides communication among all I/O nodes, and between them and the other levels of the system architecture, for receiving control and information data. Given appropriate hardware capabilities, nodes communicate directly with each other and with nodes and switches at other levels of the system, which reduces network response time. Direct communication between nodes, together with some spare capacity of the nodes in the current operating mode of the I/O block, allows the pipeline computations needed for the block's operation to be organized within the block itself, without resorting to the external computing power of the control system (the RCE block); this makes it possible to use effectively the free resources of the nodes held in reserve for redundancy of the I/O block in case of failure.

    The general-purpose switch block, located at the second level of the architecture, organizes the communication lines between the I/O blocks, the RCE and external systems. Each switch can interconnect various links and switches across the entire control system. The number of communication lines is determined by the hardware capabilities of the nodes and switches included in the blocks. Since the Qnet network distributes data flows dynamically, scaling this block amounts to simply connecting new devices and requires no configuration; if one of the switches fails, data transfer between nodes is not interrupted as long as another switch provides a similar connection between them, or they are connected directly. In this case, care must be taken that the network bandwidth is sufficient to back up a failed switch.
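
    The switch-level failover described here can be pictured as a choice among redundant routes. The sketch below uses a hypothetical topology and health table; in the real system the Qnet protocol reroutes data flows transparently, so the code only conveys the idea.

    # Illustrative sketch: pick the first route whose switch is still alive.

    routes = [('io1', 'sw1', 'rce1'),        # primary route through switch sw1
              ('io1', 'sw2', 'rce1')]        # backup route through switch sw2
    switch_ok = {'sw1': False, 'sw2': True}  # sw1 has failed

    def live_route(routes, switch_ok):
        for src, sw, dst in routes:
            if switch_ok[sw]:
                return (src, sw, dst)
        return None                          # no route left: network partitioned

    print(live_route(routes, switch_ok))     # ('io1', 'sw2', 'rce1')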

    The reconfigurable computing environment (RCE) block, located at the third level of the architecture, provides the control system with high computing power for solving complex problems of information processing, decision making, recognition, etc. The block is responsible for initializing the entire control system: checking the operability of switches and nodes, checking network integrity, constructing the network graphs of the entire system, and setting the starting parameters of the I/O blocks. The nodes of this block archive both their own data and the data of the I/O blocks. Each node of this block can play the role of an operator machine intended for monitoring the operation of the system and adjusting the programs of both this node and all other nodes of the system, performing reconfiguration on request.
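
    The network-integrity check performed during initialization can be thought of as a reachability test over the node/switch graph. A minimal sketch, under an assumed topology, might look like this:

    # Illustrative sketch: build the node/switch graph from its links and
    # verify that every device is reachable from the checking RCE node.
    from collections import deque

    links = [('rce1', 'sw1'), ('rce1', 'sw2'),
             ('sw1', 'io1'), ('sw2', 'io1'), ('sw1', 'io2')]

    def reachable(links, start):
        graph = {}
        for a, b in links:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in graph[queue.popleft()] - seen:
                seen.add(nxt)
                queue.append(nxt)
        return seen

    devices = {d for link in links for d in link}
    missing = devices - reachable(links, 'rce1')
    print('network integrity OK' if not missing else f'unreachable: {missing}')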

    Load distribution

    One of the main tasks of the L-Net system is to distribute the computing load among the network nodes. This problem is solved by constructing computational pipelines. To build a computational pipeline, a task graph is first constructed: a scheme of data-flow exchange from source to recipient, where sensors act as sources and actuators as receivers. The computational pipeline itself is a mapping of the task graph (see Fig. 3) onto the graph of the computer network (see Fig. 4), taking into account the task's requirements for the computing resources of the system and the system's current state.

    The mapping is performed by a service that provides the recipient with comprehensive information about the current hardware, its state and the available data sources, and that works with the network and task graphs. As a result, performance increases thanks to the pipelining of computations and the rational use of all the computing resources available to the system.
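
    To make the mapping concrete, the toy sketch below places pipeline stages on nodes according to their free computing capacity. It deliberately ignores link topology; the actual service described above works with both the task graph and the network graph as well as the current system state, and all names and numbers here are hypothetical.

    # Illustrative sketch: tightest-fit placement of task-graph vertices
    # (pipeline stages) onto nodes with enough free capacity.

    task_graph = [('sensor-read', 1), ('filter', 2),
                  ('control-law', 3), ('actuate', 1)]   # (stage, capacity needed)
    free_capacity = {'io1': 2, 'io2': 2, 'rce1': 4}

    def map_pipeline(stages, capacity):
        placement, free = {}, dict(capacity)
        for stage, need in stages:
            candidates = [n for n, c in free.items() if c >= need]
            if not candidates:
                raise RuntimeError(f'no node can host stage {stage!r}')
            node = min(candidates, key=lambda n: free[n] - need)  # tightest fit
            placement[stage] = node
            free[node] -= need
        return placement

    print(map_pipeline(task_graph, free_capacity))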

    Fault tolerance

    The main threat to the functioning of such a system is the complete disruption of a computing pipeline when a node of the pipeline fails or data transfer between nodes is interrupted. The basic facilities of the Qnet protocol restore connections between nodes when they are partially disrupted, using the backup lines provided by the architecture. The L-Net system restores functionality after the complete failure of a computing node by dynamically reconfiguring the computing pipeline, i.e. by substituting healthy resources for the failed ones. The system provides three recovery (reconfiguration) scenarios, which differ in the response time to a failure, the recovery time and the hardware resources used: on failure, with passive standby, and with active standby; a sketch comparing the three follows the list below.

    • Reconfiguration on failure: after a failure is detected, available hardware is located and included in the task graph.
    • Reconfiguration with passive standby: redundant hardware is designated in advance, a process implementing a vertex of the task graph is started on the spare node and connections are established, but the process does not handle data until the primary node fails.
    • Reconfiguration with active standby: a vertex of the task graph is implemented on several nodes, which process the data in parallel and transmit the result.
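
    A minimal sketch comparing the three scenarios by the steps taken after the host of a pipeline vertex fails; the step lists are hypothetical summaries of the descriptions above, not code from the L-Net implementation.

    # Illustrative sketch: what each reconfiguration scenario must do after
    # a failure. The earlier resources are committed, the faster the recovery.

    def recovery_steps(mode):
        if mode == 'on-failure':
            # Nothing is reserved: hardware is found and wired in after the fact.
            return ['detect failure', 'search for available hardware',
                    'start vertex process', 'establish connections']
        if mode == 'passive-standby':
            # A spare process is already running and connected, but idle.
            return ['detect failure', 'switch data flow to standby process']
        if mode == 'active-standby':
            # Replicas already process the data in parallel; nothing to switch.
            return ['keep consuming replica results']
        raise ValueError(f'unknown mode: {mode}')

    for mode in ('on-failure', 'passive-standby', 'active-standby'):
        print(f'{mode:>15}: ' + ' -> '.join(recovery_steps(mode)))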

    As a result, the system offers flexible readiness for failures at both the software and hardware levels, and the ability to change the configuration of nodes without stopping operation or losing performance, while remaining independent of the implementation of the network, the computing pipeline and the nodes.

    Conclusion

    The developed L-Net system, unlike existing analogues, allows the use of a wide range of hardware for the DCS nodes while keeping them fully software-compatible. Since the nodes run the same operating system (QNX Neutrino), they can be built on various processor architectures (x86, ARM, MIPS, etc.) with diverse sets of interfaces and peripheral devices. Nodes can be implemented as desktop or industrial PCs, wearable PCs or single-board computers. All components of the software suite of the developed DCS can be launched on any of its nodes running the QNX OS, while nodes running other operating systems can still be used. This approach allows each node to be used for solving problems at both the operator level and the control level. Consequently, there is a flexible system of interaction between peer nodes, without the rigid hierarchy of levels inherent in the generalized DCS architecture and in the systems that use it as a base. A peer-to-peer network simplifies deployment, operation, scaling and debugging of the system.

    To realize the computing potential of the redundant hardware, dynamic configuration and reconfiguration algorithms are proposed, based on the Qnet network protocol and the L-Net network software. The dynamic configuration algorithm distributes the computing load across all nodes by pipelining and parallelizing tasks and by dynamically balancing the load on the data transmission channels between nodes. The reconfiguration algorithm provides three scenarios for restoring operability after a failure, depending on the available hardware and on the priorities and tasks assigned to the system: on failure, with passive standby (allocating resources) and with active standby (using resources). The dynamic configuration and reconfiguration algorithms improve performance and reliability by exploiting the hardware reserves present in the system.

    An important advantage of the system is the maximum transparency of both the hardware and software technologies used in it, which considerably simplifies technical support of the system and the development of new modules for it.

    Conclusion

    The developed architectural solutions make it possible to improve such characteristics of distributed control systems as reliability, performance, cost, scalability and simplicity, owing to the possibility of using a wide range of hardware, the dynamic configuration algorithms and the rational use of system resources.


    Keywords: distributed control system, information support of control systems, distributed reconfigurable systems.

    Architecture of a distributed control system based on reconfigurable multi-pipeline computing environment L-Net

    Sergey Yu. Potomskiy, Assistant Professor at the National Research University Higher School of Economics.

    Nikita A. Poloyko, fifth-year student at the National Research University Higher School of Economics; study assistant and programmer. Field of study: "Control and Informatics in Technical Systems".

    Abstract. The article is devoted to a distributed control system based on a reconfigurable multi-pipeline computing environment. The architecture of the system is given, together with its basic characteristics and functional properties. A rationale for the choice of the operating system is presented, and the basic advantages of the system over existing similar developments are shown.

    Keywords: distributed control system, control systems software support, distributed reconfigurable systems.


    Heterogeneous multicomputer systems

    The largest number of distributed systems in existence today are built according to the heterogeneous multicomputer scheme. This means that the computers making up such a system can be extremely diverse, for example in processor type, memory size and I/O performance. In practice, the role of some of these computers can be played by high-performance parallel systems, for example multiprocessor or homogeneous multicomputer systems.

    The network connecting them can also be highly heterogeneous.

    An example of heterogeneity is the creation of large multicomputer systems out of existing networks and channels. For instance, it is not unusual for a university to run a distributed system consisting of the local networks of various departments interconnected by high-speed channels. In global systems, the different stations may in turn be connected by public networks, such as the network services offered by commercial telecom operators, e.g. SMDS or Frame Relay.

    Unlike the systems discussed in the previous paragraphs, many large-scale heterogeneous multicomputer systems require a global approach. This means that an application cannot assume that a certain level of performance or certain services will be available to it at all times.

    Moving on to the scaling issues inherent in heterogeneous systems, and taking into account the need for a global approach in most of them, we note that creating applications for heterogeneous multicomputer systems requires specialized software. Distributed systems cope with this problem: so that application developers do not have to worry about the hardware being used, they provide a software wrapper that shields applications from what is happening in the hardware (that is, they provide transparency).

    The earliest and most fundamental distributed architecture is the client-server architecture, in which one party (the client) initiates communication by sending a request to the other party (the server). The server processes the request and, if necessary, sends a response to the client (Fig. 2.7).

    Fig. 2.7. Client-server interaction model

    Interaction within the client-server model can be either synchronous, when the client waits for the server to finish processing its request, or asynchronous, when the client sends a request to the server and continues execution without waiting for the reply. The client-server model can serve as a basis for describing a variety of interactions; here we consider the interaction of the components of the software that forms a distributed system.
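
    The difference between the two modes can be sketched in a few lines of code. Here the server is modelled as a local function and a thread stands in for the transport; a real system would use sockets or middleware, but the client-side control flow is the point of the example.

    # Illustrative sketch: synchronous vs. asynchronous client-server calls.
    import threading

    def server(request):
        # Stands in for a remote server processing a request.
        return f'processed: {request}'

    # Synchronous interaction: the client blocks until the reply arrives.
    print('synchronous reply:', server('balance-query'))

    # Asynchronous interaction: the request is sent, the client keeps working,
    # and a callback consumes the reply whenever it arrives.
    def on_reply(reply):
        print('asynchronous reply:', reply)

    request = threading.Thread(target=lambda: on_reply(server('balance-query')))
    request.start()
    print('client continues without waiting for the reply')
    request.join()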



    Fig. 2.8. Logical application layers

    Consider a typical application which, in accordance with modern concepts, can be divided into the following logical layers (Fig. 2.8): the user interface (UI), the application logic (AL) and the data access (DA) layer that works with the database (DB). The user interacts with the system through the user interface, the database stores the data describing the application domain, and the application logic layer implements all the algorithms relating to that domain.
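
    The division into layers can be made concrete with a small sketch in which each layer is a plain function; all names are hypothetical, and in a monolithic application all three layers run in a single process.

    # Illustrative sketch: the three logical layers of a typical application.

    DB = {'alice': 100}                     # the database

    def get_balance(user):                  # data access (DA) layer
        return DB[user]

    def balance_report(user):               # application logic (AL) layer
        return f'{user} has {get_balance(user)} units'

    def show_balance(user):                 # user interface (UI) layer
        print(balance_report(user))

    show_balance('alice')

    Distributing the application then amounts to deciding on which computer each of these functions runs.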

    Since in practice different users of a system are usually interested in access to the same data, the simplest way to distribute the functions of such a system across several computers is to divide the logical layers of the application between a single server part, responsible for data access, and client parts located on several computers and implementing the user interface. The application logic can be assigned to the server, to the clients, or divided between them (Fig. 2.9).

    Fig. 2.9. Two-tier architecture

    The architecture of applications built on this principle is called client-server, or two-tier. In practice such systems are often not classified as distributed, but formally they can be considered the simplest representatives of distributed systems.

    A development of the client-server architecture is the three-tier architecture, in which the user interface, the application logic and the data access are separated into independent system components that can run on independent computers (Fig. 2.10).

    Fig. 2.10. Three-tier architecture

    In such systems the user's request is processed sequentially by the client part of the system, the application logic server and the database server. However, a distributed system is usually understood to mean a system with a more complex architecture than a three-tier one.
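
    As a toy illustration of this sequential processing, the sketch below models each tier as a function; the send() helper is hypothetical and stands in for a network hop between the machines hosting the tiers.

    # Illustrative sketch: a request passing through the three tiers in turn.

    def send(tier, payload):
        return tier(payload)               # stands in for a network call

    def db_server(query):                  # third tier: database server
        return {'price:book': 12}.get(query)

    def app_server(request):               # second tier: application logic
        price = send(db_server, 'price:' + request['item'])
        return {'item': request['item'], 'total': price * request['qty']}

    def client(item, qty):                 # first tier: client part (UI)
        reply = send(app_server, {'item': item, 'qty': qty})
        print(f"{qty} x {reply['item']} = {reply['total']}")

    client('book', 2)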

    Fig. 2.11. Distributed retail system

    In the context of enterprise automation, distributed systems usually mean systems whose application logic is distributed among several system components, each of which can be executed on a separate computer. For example, the application logic of a retail sales system may have to issue requests to the application logic of third parties, such as suppliers of goods, electronic payment systems, or banks that provide consumer loans (Fig. 2.11).

    Another example of a distributed system is a network for direct data exchange between clients (a peer-to-peer network). Whereas the previous example had a "tree" architecture, peer-to-peer networks are organized in a more complex way (Fig. 2.12). Such systems are currently probably among the largest distributed systems in existence, connecting millions of computers.

    Fig. 2.12. System of direct data exchange between clients

    According to the well-known computer science expert A. Tanenbaum, there is no generally accepted and at the same time strict definition of a distributed system. Some wits claim that a distributed system is one in which the failure of a computer whose existence the users were not even aware of brings all their work to a halt. Unfortunately, a significant share of distributed computing systems satisfies this definition, but formally it applies only to systems with a single point of failure.

    Often, when defining a distributed system, paramount importance is given to the division of its functions between several computers. With this approach, any computing system in which data processing is divided between two or more computers is distributed. Following A. Tanenbaum, a distributed system can be defined somewhat more narrowly as a set of independent computers connected by communication channels that, from the point of view of a user of some software, look like a single whole.

    This approach to defining a distributed system has its drawbacks. For example, all the software used in such a distributed system could run on one single computer, yet by the above definition the system would then cease to be distributed. Therefore, the concept of a distributed system should probably be based on an analysis of the software forming the system.
