What is the difference between a switch and a router? Do-it-yourself local network: general rules for building a home network and its main components

    03/18/1997 Dmitry Ganzha


    Switches occupy a central place in modern local area networks.

    Switching is one of the most popular modern technologies. Switches are displacing bridges and routers to the periphery of local networks, leaving them the role of providing communication with the wide area network. This popularity is due primarily to the fact that switches, through microsegmentation, increase network performance compared with shared-media networks of the same nominal bandwidth. Besides dividing the network into small segments, switches make it possible to group connected devices into logical networks and easily regroup them when necessary; in other words, they allow you to create virtual networks.

    What is a switch? According to the IDC definition, "a switch is a device designed in the form of a hub and acting as a high-speed multiport bridge; the built-in switching mechanism allows segmentation of the local network and allocation of bandwidth to end stations in the network" (see M. Kulgin's article "Build a network, plant a tree..." in the February issue of LAN). However, this definition applies primarily to frame switches.

    TYPES OF SWITCHING

    Switching usually refers to one of four technologies: configuration switching, frame switching, cell switching, and frame-to-cell conversion.

    Configuration switching, also known as port switching, assigns a specific port of a smart hub module to one of the internal Ethernet (or Token Ring) segments. The assignment is made remotely, under software control, when users and resources are connected to or moved around the network. Unlike the other switching technologies, this method does not improve the performance of a shared LAN.

    Frame switching, or LAN switching, uses standard Ethernet (or Token Ring) frame formats. Each frame is processed by the nearest switch and transmitted further across the network directly to the recipient. As a result, the network turns into a set of parallel high-speed direct channels. We will look at how frame switching is carried out inside a switch below using the example of a switching hub.

    Cell switching is used in ATM. The use of small fixed-length cells makes it possible to build low-cost, high-speed switching fabrics in hardware. Both frame switches and cell switches can support multiple independent workgroups regardless of their physical connections (see the section "Building virtual networks").

    The conversion between frames and cells allows, for example, a station with an Ethernet card to communicate directly with devices on an ATM network. This technology is used to emulate a local network.

    In this article we will be primarily interested in frame switching.

    SWITCHING HUBS

    The first switching hub, called EtherSwitch, was introduced by Kalpana. This hub made it possible to reduce network contention by reducing the number of nodes in a logical segment through microsegmentation. Essentially, the number of stations in one segment was reduced to two: the station initiating a request and the station responding to it. No other station sees the information transmitted between them. Packets are transmitted as if through a bridge, but without the delay inherent in a bridge.

    In switched Ethernet networks, each member of a group of several users can simultaneously be guaranteed a throughput of 10 Mbit/s. The best way to understand how such a hub works is by analogy with an old-fashioned telephone switchboard, in which the participants in a conversation were connected by a piece of cable. When a subscriber dialed the operator's "07" and asked to be connected to such and such a number, the operator first checked whether the line was free; if so, he connected the participants directly with a patch cord. No one else (except the intelligence services, of course) could hear their conversation. After the call ended, the operator disconnected the cord from both ports and waited for the next call.

    Switching hubs operate in a similar way (see Figure 1): they forward packets from an input port to an output port through the switch fabric. When a packet arrives at an input port, the switch reads its MAC address (i.e., its layer 2 address) and immediately forwards the packet to the port associated with that address. If that port is busy, the packet is queued. Essentially, a queue is a buffer on an input port where packets wait for the desired port to become free. Buffering methods, however, differ from switch to switch.

    Figure 1.
    Switching hubs function similarly to older telephone switches: they connect an input port directly to an output port through a switch fabric.

    PACKET PROCESSING METHODS

    In cut-through switching (also called on-the-fly or bufferless switching), the switch reads only the address of the incoming packet. The packet is forwarded regardless of whether it contains errors, which significantly reduces processing time, since only the first few bytes are read. It is therefore up to the receiving side to identify defective packets and request their retransmission. Modern cable systems, however, are reliable enough that on many networks the need for retransmission is minimal. Still, no one is immune to errors caused by a damaged cable, a faulty network card, or interference from an external electromagnetic source.

    In store-and-forward switching (switching with intermediate buffering), the switch does not forward a packet until it has read it completely, or at least read all the information it needs. It not only determines the recipient's address but also verifies the checksum, so it can discard defective packets. This makes it possible to isolate a segment that produces errors. Store-and-forward switching thus emphasizes reliability rather than speed.

    Besides these two, some switches use a hybrid method. Under normal conditions they perform cut-through switching but monitor the number of errors by verifying checksums. If the error count reaches a specified threshold, they switch to store-and-forward mode; when the error count falls back to an acceptable level, they return to cut-through mode. This type of switching is called threshold, or adaptive, switching.
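The adaptive mechanism just described can be sketched as a small state machine. The thresholds, window size, and class name below are illustrative assumptions, not figures taken from any real switch.

```python
class AdaptiveSwitchPort:
    """Sketch of threshold (adaptive) switching: cut-through by default,
    falling back to store-and-forward when errors cross a threshold."""

    def __init__(self, enter_threshold=10, exit_threshold=2, window=100):
        self.enter_threshold = enter_threshold  # errors per window to leave cut-through
        self.exit_threshold = exit_threshold    # errors per window to return to it
        self.window = window                    # frames per evaluation window
        self.mode = "cut-through"
        self.errors = 0
        self.frames = 0

    def on_frame(self, crc_ok):
        """Record one frame's CRC result; re-evaluate the mode each window."""
        self.frames += 1
        if not crc_ok:
            self.errors += 1
        if self.frames >= self.window:
            if self.mode == "cut-through" and self.errors >= self.enter_threshold:
                self.mode = "store-and-forward"
            elif self.mode == "store-and-forward" and self.errors <= self.exit_threshold:
                self.mode = "cut-through"
            self.frames = self.errors = 0
        return self.mode
```

Feeding such a port a burst of bad frames flips it into store-and-forward mode; a quiet period flips it back.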

    RISC AND ASIC

    Store-and-forward switches are often built around standard RISC processors. One advantage of this approach is that it is relatively inexpensive compared to ASIC-based switches; a general-purpose processor, however, is not ideally suited to such a specialized task. Switching in these devices is performed in software, so their functionality can be changed by upgrading the installed software. Their disadvantage is that they are slower than ASIC-based switches.

    Switches built on ASICs are designed to perform specialized tasks: all their functionality is "hardwired" into the hardware. This approach also has a drawback: when an upgrade is needed, the manufacturer has to rework the circuit. ASICs typically provide cut-through switching. The switch fabric ASIC creates a dedicated physical path between an input port and an output port.

    ARCHITECTURE OF HIGH-CLASS SWITCHES

    High-end switches are typically modular in design and can perform both packet and cell switching. The modules of such a switch provide switching between networks of different types, including Ethernet, Fast Ethernet, Token Ring, FDDI, and ATM, while the main switching mechanism is an ATM switching fabric. We will look at the architecture of such devices using the Bay Networks Centillion 100 as an example.

    Switching is accomplished using the following three hardware components (see Figure 2):

  • ATM backplane for ultra-high-speed cell transfer between modules;
  • a CellManager special-purpose integrated circuit on each module to control cell transfer across the backplane;
  • a special-purpose SAR integrated circuit on each module to convert frames to cells and vice versa.

    Figure 2.
    Cell switching is increasingly being used in high-end switches due to its high speed and ease of migration to ATM.

    Each switch module has I/O ports, buffer memory, and a CellManager ASIC. In addition, each LAN module also has a RISC processor to perform frame switching between local ports and a packet assembler/disassembler to convert frames and cells into each other. All modules can independently switch between their ports, so that only traffic destined for other modules is sent through the backplane.

    Each module maintains its own address table, and the main control processor combines them into one common table, so that an individual module can see the network as a whole. If, for example, an Ethernet module receives a packet, it determines whom the packet is addressed to. If the address is in the local address table, the RISC processor switches the packet between local ports. If the destination is on another module, the assembler/disassembler converts the packet into cells. The CellManager specifies a destination mask identifying the module(s) and port(s) for which the cells' payload is destined. Any module whose bit is set in the destination mask copies the cell to local memory and transmits the data to the corresponding output port according to the port mask bits.
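The destination-mask mechanism described above is essentially a bitmask with one bit per module. The following sketch illustrates the idea only; the function names and mask width are my own assumptions, not Centillion internals.

```python
def make_destination_mask(module_ids):
    """Build a bitmask with one bit set per destination module."""
    mask = 0
    for m in module_ids:
        mask |= 1 << m
    return mask

def module_accepts(cell_mask, module_id):
    """A module copies the cell to local memory iff its bit is set."""
    return bool(cell_mask & (1 << module_id))
```

A multicast cell addressed to modules 1 and 3 would carry the mask `0b1010`; modules 1 and 3 copy it, module 2 ignores it.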

    BUILDING VIRTUAL NETWORKS

    Besides increasing performance, switches allow you to create virtual networks. One method of creating a virtual network is to form a broadcast domain by logically grouping ports within the physical infrastructure of a communication device (either a smart hub, with configuration switching, or a switch, with frame switching). For example, the odd-numbered ports of an eight-port device are assigned to one virtual network and the even-numbered ports to another. As a result, a station in one virtual network is isolated from stations in the other. The disadvantage of this method is that all stations connected to the same port must belong to the same virtual network.

    Another method of creating a virtual network is based on the MAC addresses of the connected devices. With this method, any employee can connect, for example, his laptop computer to any switch port, and the switch will automatically determine, from the MAC address, which virtual network the user belongs to. This method also allows users connected to the same switch port to belong to different virtual networks. For more about virtual networks, see A. Avduevsky's article "Such Real Virtual Networks" in the March issue of LAN of this year.

    LAYER 3 SWITCHING

    For all their advantages, switches have one significant drawback: they cannot protect the network from avalanches of broadcast packets, which create unproductive load and increase response time. Routers can monitor and filter unnecessary broadcast traffic, but they are orders of magnitude slower. According to Case Technologies documentation, the typical performance of a router is 10,000 packets per second, which does not compare with the 600,000 packets per second of a switch.

    As a result, many manufacturers have begun building routing capabilities into switches. To keep the switch from slowing down significantly, various methods are used: for example, both layer 2 and layer 3 switching are implemented directly in hardware (in ASICs). Different manufacturers call this technology by different names, but the goal is the same: the routing switch must perform layer 3 functions at the same speed as layer 2 functions. The per-port price of such a device is also important: it should be as low as that of a switch (see the article by Nick Lippis in the next issue of LAN).

    CONCLUSION

    Switches are both structurally and functionally very diverse; it is impossible to cover all their aspects in one short article. In the next article we will take a closer look at ATM switches.

    Dmitry Ganzha is the executive editor of LAN. He can be contacted at: [email protected].



    As a rule, in public networks it is impossible to give each pair of subscribers a dedicated physical communication line that they could "own" exclusively and use at any time. Therefore, a network always uses some method of switching subscribers, which shares the available physical channels among several communication sessions and among the network's subscribers.

    Switching in local data networks

    Ethernet segment switching technology was introduced by Kalpana in 1990 in response to the growing need for increased bandwidth between high-performance servers and workstation segments; the structure and operation of Kalpana's EtherSwitch are examined in detail below.

    Switching in city telephone networks

    An urban telephone network is a combination of line and exchange facilities. A network with a single central office is called non-zoned; its line facilities consist only of subscriber lines, and its typical capacity is 8-10 thousand subscribers. For larger capacities, because of the sharp growth in line length, it is advisable to move to a zoned network structure. In this case the city territory is divided into districts, and in each district a district automatic telephone exchange (RATS) is built, to which the subscribers of that district are connected.

    Subscribers of one district are connected through one RATS; subscribers of different districts are connected through two. The RATS are interconnected by trunk lines, in the general case on an "each to each" basis, so for N exchanges the total number of trunk groups is N(N-1)/2. As network capacity grows, the number of trunk lines connecting the RATS to each other on the "each to each" principle rises sharply, leading to excessive cable consumption and communication costs. Therefore, in networks with more than 80 thousand subscribers an additional switching node is used: communication between exchanges of different areas passes through incoming message nodes (UVS), while communication within a nodal area (UR) is carried out on the "each to each" principle or through the area's own UVS.
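The "each to each" trunking scheme above is a full mesh: connecting N exchanges pairwise requires N(N-1)/2 trunk groups, which is why the count grows so sharply with network size. A quick check:

```python
def full_mesh_trunk_groups(n_exchanges):
    """Number of pairwise trunk groups in an "each to each" (full mesh) layout."""
    return n_exchanges * (n_exchanges - 1) // 2

# 2 exchanges need 1 trunk group; 8 exchanges already need 28.
```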


    The block diagram of the EtherSwitch switch proposed by Kalpana is shown in Fig. 12.6.

    Fig. 12.6. Example of switch structure

    Each of the 8 10Base-T ports is serviced by one Ethernet Packet Processor (EPP). In addition, the switch has a system module that coordinates the operation of all EPP processors. The system module maintains the general address table of the switch and provides management of the switch via the SNMP protocol. To transfer frames between ports, a switch fabric is used, similar to those found in telephone switches or multiprocessor computers, connecting multiple processors to multiple memory modules.

    The switching matrix operates on the principle of circuit switching. For 8 ports, the matrix can provide 8 simultaneous internal channels when the ports operate in half-duplex mode and 16 in full-duplex mode, when the transmitter and receiver of each port operate independently of each other.

    When a frame arrives at any port, the EPP processor buffers the first few bytes of the frame to read the destination address. After receiving the destination address, the processor immediately decides to transmit the packet, without waiting for the remaining bytes of the frame to arrive. To do this, it looks through its own address table cache, and if it does not find the required address there, it turns to the system module, which operates in multitasking mode, servicing requests from all EPP processors in parallel. The system module scans the general address table and returns the found line to the processor, which it buffers in its cache for later use.

    After finding the destination address, the EPP processor knows what to do next with the incoming frame (while looking through the address table, the processor continued to buffer the frame bytes arriving at the port). If a frame needs to be filtered, the processor simply stops writing frame bytes to the buffer, clears the buffer, and waits for a new frame to arrive.

    If the frame needs to be transmitted to another port, then the processor accesses the switching matrix and tries to establish a path in it that connects its port with the port through which the route to the destination address goes. The switching fabric can only do this if the port of the destination address is free at that moment, that is, not connected to another port.

    If the port is busy, then, as in any circuit-switched device, the matrix refuses the connection. In this case, the frame is completely buffered by the input port processor, after which the processor waits for the output port to become free and for the switching matrix to form the required path.

    Once the desired path is established, the buffered bytes of the frame are sent to it and received by the output port processor. As soon as the output port processor accesses the Ethernet segment connected to it using the CSMA/CD algorithm, the frame bytes immediately begin to be transmitted to the network. The input port processor permanently stores several bytes of the received frame in its buffer, which allows it to independently and asynchronously receive and transmit frame bytes (Figure 4.24).

    When the output port was free at the time of frame reception, the delay between the reception of the first byte of the frame by the switch and the appearance of the same byte at the output of the destination address port was only 40 μs for the Kalpana switch, which was much less than the delay of the frame when it was transmitted by a bridge.

    Fig. 12.7. Frame transmission through the switch fabric

    The described method of transmitting a frame without completely buffering it is called "on-the-fly" or "cut-through" switching. This method is, in fact, pipelined processing of a frame, in which several stages of its transmission partially overlap in time (Fig. 12.8):

    1. Reception of the first bytes of the frame by the input port processor, including the destination address bytes.

    2. Lookup of the destination address in the switch's address table (in the processor's cache or in the system module's general table).

    3. Switching of the matrix.

    4. Reception of the remaining bytes of the frame by the input port processor.

    5. Reception of the frame bytes (including the first ones) by the output port processor through the switching matrix.

    6. Acquisition of access to the medium by the output port processor.

    7. Transmission of the frame bytes by the output port processor to the network.

    Stages 2 and 3 cannot be combined in time, since without knowing the output port number, the matrix switching operation does not make sense.
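The benefit of overlapping these stages can be illustrated with rough figures. The sketch below compares the first-byte latency of cut-through forwarding with full buffering; the header size and lookup time are made-up assumptions, though the result is consistent in order of magnitude with the 40 μs figure quoted above for the Kalpana switch.

```python
def frame_time_us(frame_bytes, rate_mbps=10):
    """Serialization time of a frame at the given line rate, in microseconds."""
    return frame_bytes * 8 / rate_mbps

def cut_through_latency_us(header_bytes=14, lookup_us=10):
    # Forwarding starts once the header carrying the destination address is in.
    return frame_time_us(header_bytes) + lookup_us

def store_and_forward_latency_us(frame_bytes, lookup_us=10):
    # The whole frame is buffered (and checked) before forwarding starts.
    return frame_time_us(frame_bytes) + lookup_us
```

For a 1518-byte frame on 10 Mbit/s Ethernet these assumptions give roughly 21 μs for cut-through against roughly 1224 μs for full buffering, which is the saving Fig. 12.8 depicts.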

    Fig. 12.8. Time savings from pipelined frame processing: a - pipelined processing; b - normal processing with full buffering

    In comparison with the full-buffering mode, also shown in Fig. 12.8, the savings from pipelining are noticeable.

    However, the main reason for improved network performance when using a switch is the parallel processing of multiple frames.

    Since the main advantage of a switch, thanks to which it has won a very strong position in local networks, is its high performance, switch developers try to produce so-called non-blocking switch models.

    A non-blocking switch is a switch that can transmit frames through its ports at the rate at which they arrive. Naturally, even a non-blocking switch cannot resolve, over a long period, situations in which frames are blocked because of the limited speed of an output port.

    Usually a steady non-blocking mode of operation is meant, in which the switch transmits frames at their arrival rate over an arbitrarily long period of time. To ensure such a mode, the frame flows must be distributed across the output ports so that the ports can cope with the load and the switch can always, on average, pass as many frames to its outputs as arrive at its inputs. If the input frame flow (summed over all ports) on average exceeds the output frame flow (also summed over all ports), frames will accumulate in the switch's buffer memory and, once its capacity is exhausted, will simply be discarded. To ensure non-blocking operation, a simple condition is sufficient:

        C_k = (Σ C_pi) / 2,

    where C_k is the performance of the switch and C_pi is the maximum performance of the protocol supported by the i-th port of the switch.

    The total performance of the ports counts each passing frame twice, as an incoming frame and as an outgoing frame, and since in steady mode the input traffic equals the output traffic, the minimum switch performance sufficient to support non-blocking mode is half the total port performance. If a port operates in half-duplex mode, for example 10 Mbit/s Ethernet, its performance C_pi is 10 Mbit/s; in full-duplex mode it is 20 Mbit/s.
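The condition above is easy to check numerically; this sketch simply sums the maximum port rates and halves them.

```python
def min_nonblocking_performance(port_rates_mbps):
    """Minimum sufficient switch performance (Mbit/s) for non-blocking mode:
    half the sum of the maximum rates of all ports."""
    return sum(port_rates_mbps) / 2

# 8 half-duplex 10 Mbit/s Ethernet ports -> 40 Mbit/s;
# the same 8 ports in full duplex count as 20 Mbit/s each -> 80 Mbit/s.
```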

    The widespread use of switches was undoubtedly facilitated by the fact that the introduction of switching technology did not require replacement of equipment installed in networks - network adapters, hubs, cable systems. The switch ports operated in normal half-duplex mode, so it was possible to transparently connect both an end node and a hub organizing an entire logical segment to them.

    Since switches and bridges are transparent to network layer protocols, their appearance on the network did not have any impact on the network routers, if any were present.

    The convenience of a switch also lies in the fact that it is a self-learning device: unless the administrator loads it with additional functions, it does not need to be configured. It is enough to connect the cable connectors to the switch ports correctly, and the switch will then work on its own, effectively performing its assigned task of increasing network performance.

    An unmanaged switch is suitable for building a home or small-office network. It differs from the others in being a "boxed" solution: after purchase it is enough to set up the connection to the provider's server, and you can share the Internet connection.

    When working with such a switch, keep in mind that short delays are possible when using instant messengers and voice communication (Skype, VoIP), and that the channel bandwidth cannot be apportioned: when a torrent client is started on one of the computers on the network, it consumes almost the entire channel, and the other computers share what remains.

    A managed switch is the best solution for building a network in offices and computer clubs.

    Configuring such a switch takes some effort: the large number of settings can make your head spin, but with the right approach they yield excellent results. The main feature is apportioning the channel bandwidth and configuring the throughput of each port. As an example, take a 50 Mbit/s Internet channel, five computers on the network, an IP-TV set-top box, and a PBX. Several arrangements are possible, but I will consider only one.

    Beyond that, only your imagination and out-of-the-box thinking. In total we have a relatively large channel. Why relatively? You will see further on if you look carefully into the details. I forgot to clarify: I am putting together a network for a small office. IP-TV serves the TV set in the waiting room; the computers are used for e-mail, document transfer, and web browsing; the PBX connects landline phones to the main line for receiving calls from Skype, QIP, cell phones, and so on.
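A bandwidth plan like this can be sanity-checked with a few lines of code. The per-port limits below are my own illustrative numbers for the office described, not settings from any particular switch.

```python
def apportion(channel_mbps, limits):
    """Check that per-port rate limits fit within the upstream channel.
    Returns (fits, total_mbps)."""
    total = sum(limits.values())
    return total <= channel_mbps, total

limits = {
    "pc1": 8, "pc2": 8, "pc3": 8, "pc4": 8, "pc5": 8,  # office computers
    "iptv": 6,   # enough for one TV stream to the waiting room
    "pbx": 2,    # voice traffic is small but latency-sensitive
}
```

With a 50 Mbit/s channel these limits sum to 48 Mbit/s, so the plan fits with a small reserve.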

    A managed switch is a modification of a regular, unmanaged switch.

    In addition to the ASIC chip, it contains a microprocessor capable of performing additional operations on frames, such as filtering, modification, and prioritization, as well as actions not related to frame forwarding at all, such as providing a user interface.

    In practical terms, the differences between managed and unmanaged switches lie, first of all, in the list of supported standards: while a regular, unmanaged switch supports only the Ethernet standard (IEEE 802.3) in its various varieties, managed switches support a much wider list: 802.1Q, 802.1X, 802.1AE, 802.3ad (802.1AX), and so on, all of which require configuration and management.

    There is another type - SMART switches.

    Smart switches appeared as a marketing move: these devices support significantly fewer functions than their older brothers but are nevertheless manageable.

    In order not to confuse or mislead consumers, the first models were produced with the designation intelligent or web-managed.

    These devices offered the basic functionality of managed switches at a significantly lower price: VLAN organization, administrative enabling and disabling of ports, MAC address filtering, and rate limiting. Traditionally the only management method was the web interface, so the name web-managed became firmly attached to smart switches.

    The switch stores a switching table in associative memory, which indicates the correspondence of the host MAC address to the switch port. When the switch is turned on, this table is empty, and it begins to operate in learning mode. In this mode, data arriving on any port is transmitted to all other ports of the switch. In this case, the switch analyzes the frames and, having determined the MAC address of the sending host, enters it into the table.

    Subsequently, if one of the switch ports receives a frame intended for a host whose MAC address is already in the table, then this frame will be transmitted only through the port specified in the table. If the destination host's MAC address is not bound to any port on the switch, then the frame will be sent to all ports.

    Over time, the switch builds a complete table for all its ports, and as a result, the traffic is localized.
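The learning and forwarding behaviour described in the last three paragraphs can be sketched as follows; this is a minimal model (no ageing of entries, no VLANs), with names of my own choosing.

```python
class LearningSwitch:
    """Minimal model of a self-learning switch's MAC address table."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.table = {}                      # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Learn the source address, then return the output port list."""
        self.table[src_mac] = in_port        # learning mode: bind source to port
        out = self.table.get(dst_mac)
        if out is None:                      # unknown destination: flood
            return [p for p in range(self.num_ports) if p != in_port]
        if out == in_port:                   # destination on same port: filter
            return []
        return [out]                         # known destination: forward
```

The first frame to an unknown address floods every other port; once both hosts have spoken, traffic between them is localized to a single port, exactly as the text describes.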

    It is worth noting the low latency (delay) and high forwarding speed on each interface port.

    Switching methods in a switch.

    There are three switching methods. Each represents a trade-off between the time the switch takes to make a forwarding decision (latency) and transmission reliability.

    With intermediate storage (Store and Forward).

    “Cut-through”.

    “Fragment-free” or hybrid.

    With intermediate storage (Store and Forward). The switch reads the entire incoming frame into its buffer, checks it for errors, selects the switching port, and then sends the verified frame to it.

    “Cut-through”. The switch reads only the destination address in the frame and then performs the switching. This mode reduces transmission delay but provides no error detection.

    “Fragment-free”, or hybrid. This mode is a modification of the cut-through mode: transmission is carried out after collision fragments are filtered out (the first 64 bytes of the frame are received as in store-and-forward technology, the rest as in cut-through). The "switch decision" latency adds to the time it takes a frame to enter and leave the switch port, and together they determine the overall switch latency.
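The three methods differ mainly in how many bytes of a frame the switch waits for before it starts forwarding. The sketch below is my summary of the descriptions above; the 14-byte figure is the Ethernet header length, and 64 bytes is the collision window.

```python
def bytes_before_forwarding(method, frame_len):
    """How many bytes of a frame each method receives before forwarding."""
    if method == "store-and-forward":
        return frame_len            # full frame buffered and error-checked
    if method == "cut-through":
        return 14                   # the destination address sits in the header
    if method == "fragment-free":
        return min(64, frame_len)   # past the collision window
    raise ValueError(method)
```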

    Switch performance characteristics.

    The main characteristics of a switch that measure its performance are:

  • - filtering speed;
  • - forwarding speed;
  • - throughput;
  • - frame transmission delay.

    Additionally, there are several switch characteristics that have the greatest impact on these performance specifications. These include:

    • - size of frame buffer(s);
    • - internal bus performance;
    • - performance of the processor or processors;
    • - size of the internal address table.

    Frame filtering and forwarding speed are two key performance characteristics of a switch. These characteristics are integral indicators; they do not depend on how the switch is technically implemented.

    The filtering speed determines the rate at which the switch performs the following frame-processing steps:

    • - receiving the frame into its buffer;
    • - discarding the frame, because its destination port coincides with its source port.

    The forwarding rate determines the speed at which the switch performs the following frame processing steps:

    • - receiving the frame into its buffer;
    • - looking up the address table to find the port for the frame's destination address;
    • - transmitting the frame to the network through the destination port found in the address table.

    Both filtering speed and forwarding speed are usually measured in frames per second.

    If the switch's specifications do not state the protocol and frame size for which the filtering and forwarding speeds are quoted, then by default these figures are assumed to apply to the Ethernet protocol and 64-byte frames (without preamble) with a 46-byte data field.

    Minimum-length frames are used as the main indicator of switch speed because such frames always create the most difficult operating mode for the switch compared to frames of other sizes carrying the same volume of user data.

    Therefore, when testing a switch, the minimum frame length mode is used as the most difficult test, which should verify the ability of the switch to operate under the worst combination of traffic parameters for it.

    In addition, for packets of minimum length the filtering and forwarding speeds reach their maximum values, which is of no small importance when advertising a switch.

    The throughput of a switch is measured by the amount of user data transmitted per unit of time through its ports.

    Since the switch operates at the data link layer, its user data is the data carried in the data field of data-link-layer protocol frames - Ethernet, Token Ring, FDDI, etc.

    The maximum throughput of a switch is always achieved on frames of maximum length, since in that case the share of overhead spent on frame service information is much lower than for minimum-length frames, and the time the switch spends on frame-processing operations per byte of user data is significantly less.

    The dependence of a switch's throughput on the size of the transmitted frames is well illustrated by the Ethernet protocol: transmitting frames of minimum length achieves a rate of 14,880 frames per second and a throughput of 5.48 Mb/s, while transmitting frames of maximum length achieves 812 frames per second and a throughput of 9.74 Mb/s.

    Thus throughput drops by almost half when switching to minimum-length frames, and this does not even account for the time the switch loses processing each frame.
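These figures can be reproduced with a short calculation: each Ethernet frame on the wire is accompanied by an 8-byte preamble and a 12-byte interframe gap (the function names below are illustrative):

```python
# Reproducing the Ethernet figures quoted above for a 10 Mb/s link.

ETHERNET_BPS = 10_000_000  # classic 10 Mb/s Ethernet
OVERHEAD = 8 + 12          # preamble + interframe gap, in bytes

def frames_per_second(frame_len):
    """Maximum frame rate for a given frame length in bytes."""
    return ETHERNET_BPS // ((frame_len + OVERHEAD) * 8)

def user_throughput_bps(frame_len, payload_len):
    """Useful data carried per second at the maximum frame rate."""
    return frames_per_second(frame_len) * payload_len * 8

print(frames_per_second(64))                  # 14880 frames/s
print(user_throughput_bps(64, 46) / 1e6)      # ~5.48 Mb/s
print(frames_per_second(1518))                # 812 frames/s
print(user_throughput_bps(1518, 1500) / 1e6)  # ~9.74 Mb/s
```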

    Frame transmission latency is measured as the time elapsed from the moment the first byte of the frame arrives at the input port of the switch until the moment this byte appears at the output port of the switch.

    Latency is made up of the time spent buffering the frame's bytes plus the time spent processing the frame - looking up the address table, making the filtering or forwarding decision, and gaining access to the medium of the output port. The amount of delay introduced by the switch depends on its operating mode: with on-the-fly (cut-through) switching, delays are usually small, from 10 µs to 40 µs, while with full frame buffering they range from 50 µs to 200 µs (for frames of minimum length).

    A switch is a multiport device, so it is customary to give all of the above characteristics (except frame transmission delay) in two versions:

    • - the first option is the total performance of the switch with simultaneous transmission of traffic on all its ports;
    • - the second option is the performance given per port.
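A quick calculation shows where the lower bound of the store-and-forward latency range comes from: before any processing begins, a minimum-length frame must first be received in full (illustrative Python, assuming a 10 Mb/s port):

```python
# Buffering component of store-and-forward latency for a minimum frame.

FRAME_BITS = 64 * 8    # minimum Ethernet frame: 512 bits
LINK_BPS = 10_000_000  # 10 Mb/s port

buffering_us = FRAME_BITS / LINK_BPS * 1e6
print(buffering_us)  # 51.2 µs, at the bottom of the 50-200 µs range
```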

    When several ports transmit traffic simultaneously, there is a huge number of possible traffic patterns, differing in the size of the frames in the flow, in the distribution of average frame-flow intensity between destination ports, in the coefficients of variation of those flows, and so on. Therefore, when comparing switches by performance, it is necessary to take into account the traffic pattern for which the published performance data were obtained. Some laboratories that routinely test communications equipment have developed detailed descriptions of switch-testing conditions and use them in their practice, but these tests have not yet become an industry standard.

    Ideally, a switch installed on a network transmits frames between the nodes connected to its ports at the rate at which those nodes generate them, without introducing additional delays and without losing a single frame.

    In real practice a switch always introduces some delay when transmitting frames and may also lose some frames, that is, fail to deliver them to their recipients. Because different switch models differ in internal organization, it is difficult to predict how a particular switch will transmit frames under any particular traffic pattern. The best criterion is still practice: place the switch in a real network and measure the delays it introduces and the number of frames it loses. The overall performance of a switch is determined by the sufficiently high performance of each of its individual elements - the port processor, the switching matrix, the common bus connecting the modules, and so on.

    Regardless of the internal organization of the switch and the way its operations are pipelined, it is possible to state fairly simple performance requirements for its elements that are necessary to support a given traffic matrix. Since switch manufacturers try to make their devices as fast as possible, the overall internal performance of a switch often exceeds, by some margin, the average intensity of any traffic that can be directed to its ports in accordance with their protocols.

    Such switches are called non-blocking: any traffic is transmitted without reducing its intensity. Besides the throughput of individual switch elements, such as port processors or the common bus, switch performance is affected by parameters such as the size of the address table and the size of the common buffer or of the individual port buffers.

    The address table size determines the maximum number of MAC addresses that the switch can handle simultaneously.

    Since switches most often use a dedicated processing unit for each port, with its own memory holding an instance of the address table, the size of the address table for switches is usually quoted per port.

    Instances of the address table in different processor modules do not necessarily contain the same address information - there will most likely be few duplicate addresses, unless the traffic on each port is distributed completely evenly among the other ports. Each port stores only the sets of addresses it has used recently. The maximum number of MAC addresses a port processor can remember depends on the switch's application. Workgroup switches typically support only a few addresses per port because they are designed to form microsegments. Department switches must support several hundred addresses, and network backbone switches up to several thousand, typically 4000 - 8000 addresses.

    Insufficient address table capacity can slow the switch down and clog the network with excess traffic. If a port processor's address table is completely full and it encounters a new source address in an incoming packet, it must evict some old address from the table and put the new one in its place. The eviction itself takes some processor time, but the main performance loss occurs when a frame arrives addressed to the destination that was just removed from the table.
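The eviction behavior can be modeled as a fixed-capacity table that drops its least recently used entry when full. This is a hypothetical sketch; `PortAddressTable` is an illustrative name, and real port processors implement this in hardware:

```python
# Fixed-capacity address table with least-recently-used eviction.

from collections import OrderedDict

class PortAddressTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = OrderedDict()  # MAC -> port, oldest entry first

    def learn(self, mac, port):
        if mac in self.table:
            self.table.move_to_end(mac)     # refresh the entry's recency
        elif len(self.table) >= self.capacity:
            self.table.popitem(last=False)  # evict the oldest entry
        self.table[mac] = port

tbl = PortAddressTable(capacity=2)
tbl.learn("AA", 1)
tbl.learn("BB", 2)
tbl.learn("CC", 3)      # table full: "AA" is evicted
print(list(tbl.table))  # ['BB', 'CC']
```

A frame addressed to the evicted "AA" would now have to be flooded to all ports, which is exactly the performance loss described above.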

    Since the frame's destination address is unknown, the switch must forward the frame to all other ports. This creates unnecessary work for many port processors, and copies of the frame end up on network segments where they are not needed at all. Some switch manufacturers solve this problem by changing the algorithm for handling frames with an unknown destination address: one of the switch ports is configured as a trunk port, to which all frames with an unknown address are sent by default.

    The switch's internal buffer memory is needed to temporarily store data frames in cases where they cannot be immediately transmitted to the output port. The buffer is designed to smooth out short-term traffic bursts.

    After all, even if the traffic is well balanced and the performance of the port processors and other processing elements is sufficient for average traffic levels, this does not guarantee that the switch will cope with very large peak loads. For example, traffic can arrive at all switch inputs simultaneously within a few tens of milliseconds, preventing the switch from transmitting the received frames to the output ports. The only means of preventing frame loss when the average traffic intensity is briefly exceeded many times over (and for local networks, traffic burst coefficients in the range of 50-100 are common) is a large buffer. As with address tables, each port processor module typically has its own buffer memory for storing frames. The larger this memory, the less likely frames will be lost during overloads, although if average traffic levels are unbalanced, the buffer will sooner or later overflow.
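A back-of-the-envelope calculation illustrates the buffer sizes involved (the numbers are illustrative, not taken from the article): if several ports burst toward a single output, the switch must absorb the excess for the duration of the burst:

```python
# Back-of-the-envelope buffer sizing for a short burst; the numbers are
# illustrative, not taken from the article.

IN_RATE = 3 * 10_000_000  # three 10 Mb/s ports bursting toward one output
OUT_RATE = 10_000_000     # a single 10 Mb/s output port
BURST_S = 0.030           # a 30 ms burst, "a few tens of milliseconds"

excess_bits = (IN_RATE - OUT_RATE) * BURST_S
print(excess_bits / 8 / 1024)  # ~73 KB of buffer needed to avoid frame loss
```

The result is consistent with the tens-to-hundreds-of-kilobytes-per-port buffers mentioned below.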

    Typically, switches designed to operate in critical parts of the network have a buffer memory of several tens or hundreds of kilobytes per port.

    It is good when this buffer memory can be redistributed between several ports, since simultaneous overloads on several ports are unlikely. An additional means of protection can be a buffer common to all ports in the switch management module. Such a buffer usually has a capacity of several megabytes.

    It would seem, what could be simpler than combining computers into a network? But not everything is so simple: for a network to work, quite a lot of equipment must function, and that equipment is very diverse. This article considers devices of the second, data link, layer. So what is a switch? Why is it needed, and how does it work?

    Why is it needed? A network switch is a device that is used to connect multiple nodes on a computer network. It works at the data link layer. Switch technology was developed using the bridge principle. A special feature of this device is that it sends data exclusively to the recipient. This has a positive effect on network performance and security, because in this case data cannot fall into the wrong hands.

    How much does a switch cost? The cheapest models are priced around 800 rubles, the most expensive around 24,000.

    Operating principle

    This device has so-called associative memory in which the switching table is stored. The table maps each host to a specific port. When a network switch is first turned on, the table is empty and the device operates in learning mode: any data handed to it is forwarded to all of its ports. As it does so, the received information is analyzed and the sender's address is entered into the table. When data arrives for an already identified host, it is sent through the previously recorded port. Over time the network switch builds a table containing all active addresses. Note also that the device has low latency and a high data-forwarding speed on each port.

    Switching modes

    You already know what a switch is. But do all switches work on the same principle, or are there several approaches to their implementation? Such a complex mechanism can indeed have several modes of operation - three in total. Each is a combination of two parameters: data transmission reliability and latency.

    1. With intermediate storage. The device reads all the information in the packet, checks it for errors, selects a switching port, and only then sends the data.
    2. Cut-through. The switch reads only the address to which the data must be sent and then switches it immediately. This is a very fast transmission mode, but its significant drawback is that a packet containing errors may be forwarded.
    3. Hybrid. In this mode only the first 64 bytes of the packet are checked for errors. If none are found, the data is sent.

    Asymmetrical and symmetrical switching

    You already know what a switch is and what functions it performs. Now let's talk about data transfer. Switching symmetry characterizes the device in terms of the bandwidth of each of its ports. A symmetric switch provides equal bandwidth on all ports, for example when every port transmits at 100 Mb/s or every port at 10 Mb/s.

    An asymmetric switch can provide a connection between ports with different bandwidths, easily handling data flows at 10, 100, and 1000 Mb/s. Asymmetric switching is useful where there are large network data flows organized on the client-server principle. To direct data from a port carrying a significantly larger volume of information to a smaller one, a memory buffer is used; it is needed to avoid the risk of overflow and, accordingly, data loss. Asymmetric switches are also needed to support vertical cross-connections and channels between individual backbone segments.

    Conclusion

    Development does not stand still, and already at the time of writing, switches are considered outdated devices. Purely technically they can still be used, but now that there are routers that have incorporated their functionality and can additionally provide wireless data transmission, switches look rather pale by comparison.