• Development of an integrated access network based on Ethernet and Wi-Fi technologies. Ethernet frame formats. Building networks with Ethernet 1000BASE-T technology

    The development of multimedia technologies has led to the need to increase the capacity of communication lines. Gigabit Ethernet technology was therefore developed, providing data transmission at a speed of 1 Gbit/s. As in Fast Ethernet, continuity with Ethernet technology was maintained: the frame formats remained virtually unchanged, and the CSMA/CD access method was preserved for half-duplex mode. At the logical level, 8B/10B coding is used. Since the transmission speed increased tenfold compared to Fast Ethernet, it was necessary either to reduce the network diameter to 20-25 m or to increase the minimum frame length. Gigabit Ethernet took the second path, increasing the minimum frame length to 512 bytes instead of the 64 bytes of Ethernet and Fast Ethernet; the network diameter remains 200 m, just as in Fast Ethernet. The frame can be lengthened in two ways. The first method fills the data field of a short frame with symbols of prohibited code combinations (carrier extension), which creates unproductive network load. The second method allows several short frames to be transmitted in a row, with a total length of up to 8192 bytes (frame bursting).
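
    The 20-25 m figure can be checked with back-of-the-envelope arithmetic; the sketch below assumes a signal propagation speed of about 2·10^8 m/s and ignores repeater and transceiver delays, so the real standardized diameters come out smaller than these ideal figures.

        # Why 64 bytes is too short at 1 Gbit/s, and what 512 bytes buys.
        RATE = 1e9    # Gigabit Ethernet, bits per second
        PROP = 2e8    # assumed propagation speed, metres per second

        def ideal_diameter(min_frame_bytes: int) -> float:
            slot = min_frame_bytes * 8 / RATE   # seconds the sender holds the medium
            return slot * PROP / 2              # the round trip must fit in the slot

        print(ideal_diameter(64))    # ~51 m ideal; with real delays, 20-25 m
        print(ideal_diameter(512))   # ~410 m ideal; the standard allows 200 m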

    Modern Gigabit Ethernet networks, as a rule, are built on switches and operate in full-duplex mode. In this case, we speak not of the diameter of the network but of the length of a segment, which is determined by the technical means of the physical layer, primarily by the physical transmission medium. Gigabit Ethernet provides for the use of:

      single-mode fiber optic cable (802.3z);

      multimode fiber optic cable (802.3z);

      balanced UTP Category 5 cable (802.3ab);

      coaxial cable.

    When data is transmitted over fiber optic cable, the emitters are either LEDs operating at a wavelength of 830 nm or lasers operating at 1300 nm. Accordingly, the 802.3z standard defines two specifications: 1000BASE-SX and 1000BASE-LX. The maximum length of a 1000BASE-SX segment on 62.5/125 multimode cable is 220 m, and on 50/125 cable no more than 500 m. The maximum length of a segment of the single-mode 1000BASE-LX specification is 5000 m. A segment on coaxial cable does not exceed 25 m.

    The 802.3ab standard was developed to use existing balanced Category 5 UTP cabling. Since Gigabit Ethernet must carry data at 1000 Mbit/s while Category 5 twisted pair has a bandwidth of 100 MHz, it was decided to transmit data in parallel over four twisted pairs, using UTP Category 5 or 5e with a bandwidth of 125 MHz. Each twisted pair must then carry 250 Mbit/s, which is twice the capability of UTP Category 5e. To eliminate this contradiction, the 4D-PAM5 code with five potential levels (-2, -1, 0, +1, +2) is used: each pair of wires simultaneously transmits and receives data at 125 Mbaud (250 Mbit/s) in each direction. When the transmitted and received signals collide on a pair, complex five-level waveforms are formed. The input and output streams are separated by hybrid decoupling circuits H (Fig. 5.4), implemented with signal processors: to isolate the received signal, the receiver subtracts its own transmitted signal from the total (transmitted plus received) signal.
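
    The subtraction step can be illustrated numerically. Below is a toy sketch of the idea (the symbol values are illustrative, not the real 4D-PAM5 line coding): each end sees the superposition of both streams on the pair and recovers the remote symbols by subtracting its own known transmitted symbols.

        # Toy model of hybrid echo cancellation on one pair: the wire carries
        # the sum of both transmitters; each end subtracts what it sent itself.
        local_tx = [+1, -2, 0, +2, -1]    # symbols this end drives onto the pair
        remote_tx = [0, +1, -1, -2, +2]   # symbols the far end drives

        line = [a + b for a, b in zip(local_tx, remote_tx)]  # superposition on the wire
        recovered = [total - own for total, own in zip(line, local_tx)]
        assert recovered == remote_tx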

    Thus, Gigabit Ethernet technology provides high-speed data exchange and is used mainly for data transfer between subnets, as well as for the exchange of multimedia information.

    Fig. 5.4. Data transmission over four UTP Category 5 pairs

    The IEEE 802.3 standard recommends Gigabit Ethernet over fiber as a backbone technology. Slot times, frame format and transmission rules are common to all 1000 Mbit/s versions. The physical layer is defined by two signal coding schemes (Fig. 5.5). The 8B/10B scheme is used for optical fiber and shielded copper cable; for balanced UTP cable, pulse amplitude modulation (the PAM5 code) is used. 1000BASE-X technology uses 8B/10B logical coding and NRZ line coding.

    Fig. 5.5. Gigabit Ethernet technology specifications

    NRZ signals are transmitted over fiber using either short-wavelength or long-wavelength light sources. LEDs with a wavelength of 850 nm are used for transmission over multimode optical fiber (1000BASE-SX); this less expensive option serves short-distance transmission. Long-wavelength laser sources (1310 nm) work with single-mode or multimode optical fiber (1000BASE-LX). Laser sources with single-mode fiber are capable of transmitting information over distances of up to 5000 m.

    In point-to-point connections, separate fibers are used for transmission (Tx) and reception (Rx), so a full-duplex connection is realized. Gigabit Ethernet technology allows only a single repeater between two stations. The parameters of the 1000BASE technologies are given below (Table 5.2).

    Table 5.2

    Comparative characteristics of Gigabit Ethernet specifications

    Gigabit Ethernet networks are built on switches, where the length of full-duplex connections is limited only by the medium, not by the round-trip time. In this case the topology is, as a rule, a "star" or an "extended star", and problems are determined by the logical topology and data flows.

    The 1000BASE-T standard uses virtually the same UTP cable as the 100BASE-T and 10BASE-T standards. 1000BASE-T UTP cable is the same as 10BASE-T and 100BASE-TX cable, except that Category 5e cable is recommended. With a cable length of 100 m, 1000BASE-T equipment operates at the limit of its capabilities.

    The frame fields Preamble (7 bytes) and Start Frame Delimiter (SFD) (1 byte) in Ethernet are used for synchronization between transmitting and receiving devices. These first eight bytes of the frame are used to attract the attention of the receiving nodes. Essentially the first few bytes tell receivers to prepare to receive a new frame.

    Destination MAC Address field

    The Destination MAC Address field (6 bytes) is an identifier for the intended recipient. As you may recall, this address is used by Layer 2 to help devices determine whether a given frame is addressed to them. The address in the frame is compared with the MAC address of the device. If the addresses match, the device accepts the frame.

    Source MAC Address field

    The Source MAC Address field (6 bytes) identifies the sending NIC or interface of the frame. Switches also use this address, adding it to their mapping tables. The role of switches will be discussed later in this section.

    Length/Type field

    For any IEEE 802.3 standard earlier than 1997, the Length field specifies the exact length of the frame's data field; this length is later used as part of the FCS check to ensure that the message was received correctly. If the purpose of the field is instead to specify the type, as in Ethernet II, the Type field describes which protocol is implemented.

    These two uses of the field were officially combined in 1997 in the IEEE 802.3x standard, because both were common. The Ethernet II Type field is included in the current 802.3 frame definition. When a node receives a frame, it must examine the Length/Type field to determine which higher-layer protocol is present. If the value of the two octets is greater than or equal to hexadecimal 0x0600 (decimal 1536), the contents of the Data field are decoded according to the indicated protocol type. If the value is less than or equal to hexadecimal 0x05DC (decimal 1500), the field is a Length field, indicating use of the IEEE 802.3 frame format. This is how Ethernet II and 802.3 frames are differentiated.
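
    The decision rule can be written down directly; a minimal sketch of the threshold test described above:

        # Minimal sketch of the Length/Type discrimination rule.
        def classify_length_type(value: int):
            """`value` is the two-octet Length/Type field as an integer."""
            if value >= 0x0600:              # 1536 and above: an EtherType (Ethernet II)
                return ("type", value)
            if value <= 0x05DC:              # 1500 and below: an 802.3 length field
                return ("length", value)
            return ("undefined", value)      # 1501..1535 are not assigned

        print(classify_length_type(0x0800))  # ('type', 2048) -> IPv4
        print(classify_length_type(46))      # ('length', 46)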

    Data and Padding fields

    The Data and Padding fields (46 to 1500 bytes) contain encapsulated data from the higher layer, which is a typical Layer 3 PDU, typically an IPv4 packet. All frames must be at least 64 bytes long. If a smaller packet is encapsulated, Padding is used to increase the frame size to this minimum size.
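
    The padding rule is easy to state in code; a minimal sketch, assuming zero bytes are used as filler:

        # Pad the Data field up to 46 bytes so the whole frame (two 6-byte
        # addresses, 2-byte Length/Type, data, 4-byte FCS) reaches 64 bytes.
        MIN_DATA = 46

        def pad_data(data: bytes) -> bytes:
            return data + b"\x00" * max(0, MIN_DATA - len(data))

        assert len(pad_data(b"\x01" * 10)) == 46    # short payload gets padded
        assert len(pad_data(b"\x01" * 100)) == 100  # long payload is untouched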

    IEEE maintains a list of general purpose Ethernet II types.

    History

    Ethernet technology was developed together with many of Xerox PARC's early projects. It is generally accepted that Ethernet was invented on May 22, 1973, when Robert Metcalfe wrote a memo for the head of PARC on the potential of Ethernet technology, although Metcalfe obtained the legal rights to the technology only a few years later. In 1976, he and his assistant David Boggs published the paper: R. M. Metcalfe and D. R. Boggs, "Ethernet: Distributed Packet Switching for Local Computer Networks," Communications of the ACM, 19(5):395-404, July 1976.

    Metcalfe left Xerox in 1979 and founded 3Com to market computers and local area networks (LANs). He managed to convince DEC, Intel and Xerox to work together and develop the Ethernet standard (DIX), first published on September 30, 1980. It began to compete with two major proprietary technologies, Token Ring and Arcnet, which were soon buried under the rolling waves of Ethernet products. In the process, 3Com became the dominant company in the industry.

    Technology

    The standard of the first versions (Ethernet v1.0 and Ethernet v2.0) indicated that coaxial cable was used as the transmission medium; later it became possible to use twisted pair and optical cable.

    Popular varieties of Ethernet are designated as 10Base2, 100BaseTX, and so on. The first element indicates the transmission speed in Mbit/s. The second element:

    • Base - direct (unmodulated) transmission,
    • Broad - use of broadband cable with frequency division multiplexing.

    Third element: rounded cable length in hundreds of meters (10Base2 - 185 m, 10Base5 - 500 m) or transmission medium (T, TX, T2, T4 - twisted pairs, FX, FL, FB, SX and LX - optical fiber, CX - twinaxial cable for Gigabit Ethernet).

    The reasons for switching to twisted pair were:

    • possibility of working in duplex mode;
    • low cost of twisted pair cable;
    • higher reliability of networks in case of cable failure;
    • greater noise immunity when using a differential signal;
    • the ability to power low-power nodes via cable, for example IP phones (Power over Ethernet, POE standard);
    • lack of galvanic connection (current flow) between network nodes. In Russian conditions, where computers as a rule were not grounded, the use of coaxial cable was often accompanied by breakdown of network cards, and sometimes even complete "burnout" of the system unit.

    The reason for switching to optical cable was the need to increase the length of the segment without repeaters.

    The access control method (for a coaxial-cable network) is carrier sense multiple access with collision detection (CSMA/CD); the data transfer rate is 10 Mbit/s, the packet size is from 72 to 1526 bytes, and the data coding methods are described by the standard. The operating mode is half-duplex: a node cannot transmit and receive information simultaneously. The number of nodes in one shared network segment is limited to 1024 workstations (physical layer specifications may set more stringent restrictions; for example, no more than 30 workstations can be connected to a thin coaxial segment, and no more than 100 to a thick one). However, a network built on a single shared segment becomes ineffective long before the node limit is reached, mainly because of the half-duplex mode of operation.

    Most Ethernet cards and other devices support multiple data rates, using autonegotiation of speed and duplex to achieve the best connection between two devices. If autonegotiation fails, the speed is matched to the partner and half-duplex mode is activated. For example, a 10/100 Ethernet port in a device means that it can work using 10BASE-T and 100BASE-TX technologies, and a 10/100/1000 Ethernet port supports the 10BASE-T, 100BASE-TX and 1000BASE-T standards.

    Early Ethernet Modifications

    • Xerox Ethernet - the original technology, with a speed of 3 Mbit/s; it existed in two versions, Version 1 and Version 2, and the frame format of the latter is still widely used.
    • 10BROAD36 - never became widespread. One of the first standards allowing operation over long distances, it used broadband modulation technology similar to that used in cable modems, with coaxial cable as the transmission medium.
    • 1BASE5 - also known as StarLAN, the first modification of Ethernet technology to use twisted pair cable. It operated at a speed of 1 Mbit/s but found no commercial use.

    10 Mbit/s Ethernet

    • 10BASE5, IEEE 802.3 (also called "thick Ethernet") - the original development of the technology, with a data transfer rate of 10 Mbit/s. Following the early IEEE standard, it uses 50 Ohm coaxial cable (RG-8), with a maximum segment length of 500 meters.
    • 10BASE2, IEEE 802.3a (called "thin Ethernet") - uses RG-58 cable, with a maximum segment length of 185 meters; computers are connected one after another. A T-connector is needed to attach the cable to the network card, and the cable ends must carry BNC connectors. Terminators are required at each end. For many years this standard was the main one for Ethernet technology.
    • StarLAN 10 - the first development using twisted pair to transmit data at a speed of 10 Mbit/s. It later evolved into the 10BASE-T standard.

    Despite the fact that it is theoretically possible to connect more than two devices operating in simplex mode to one twisted pair cable (segment), such a scheme is never used for Ethernet, unlike working with coaxial cable. Therefore, all twisted pair networks use a star topology, while coaxial networks use a bus topology. Terminators for working over twisted pair cables are built into each device, and there is no need to use additional external terminators in the line.

    • 10BASE-T, IEEE 802.3i - 4 wires of a twisted pair cable (two twisted pairs) of category-3 or category-5 are used for data transmission. The maximum segment length is 100 meters.
    • FOIRL- (acronym for Fiber-optic inter-repeater link). The basic standard for Ethernet technology, using optical cable for data transmission. The maximum data transmission distance without a repeater is 1 km.
    • 10BASE-F, IEEE 802.3j - the umbrella term for a family of 10 Mbit/s Ethernet standards using fiber optic cable over distances of up to 2 kilometers: 10BASE-FL, 10BASE-FB and 10BASE-FP. Of these, only 10BASE-FL has become widespread.
    • 10BASE-FL(Fiber Link) - An improved version of the FOIRL standard. The improvement concerned an increase in the length of the segment to 2 km.
    • 10BASE-FB(Fiber Backbone) - Currently an unused standard, it was intended to combine repeaters into a backbone.
    • 10BASE-FP(Fiber Passive) - The “passive star” topology, in which repeaters are not needed, has never been used.

    Fast Ethernet (100 Mbit/s)

    • 100BASE-T- a general term for standards that use twisted pair cables as a data transmission medium. Segment length up to 100 meters. Includes 100BASE-TX, 100BASE-T4 and 100BASE-T2 standards.
    • 100BASE-TX, IEEE 802.3u - development of the 10BASE-T standard for use in star networks. Category 5 twisted pair cable is used, actually only two unshielded pairs of conductors are used, duplex data transmission is supported, distance up to 100 m.
    • 100BASE-T4- a standard that uses twisted pair cable of category 3. All four pairs of conductors are used, data transmission occurs in half duplex. Practically not used.
    • 100BASE-T2- a standard that uses twisted pair cable of category 3. Only two pairs of conductors are used. Full duplex is supported, with signals propagating in opposite directions on each pair. Transmission speed in one direction is 50 Mbit/s. Practically not used.
    • 100BASE-SX- a standard using multimode optical fiber. The maximum segment length is 400 meters in half duplex (for guaranteed collision detection) or 2 kilometers in full duplex.
    • 100BASE-FX- a standard using single-mode optical fiber. The maximum length is limited only by the amount of attenuation in the fiber optic cable and the power of the transmitters.
    • 100BASE-FX WDM - a standard using single-mode optical fiber. The maximum length is limited only by the attenuation in the fiber optic cable and the power of the transmitters. Interfaces come in two types that differ in the transmitter wavelength and are marked either with numbers (the wavelength) or with a Latin letter, A (1310 nm) or B (1550 nm). Only paired interfaces can work together: a transmitter at 1310 nm on one side and at 1550 nm on the other.

    Fast Ethernet

    Fast Ethernet (IEEE 802.3u, 100BASE-X) is a set of standards for data transmission in computer networks at speeds of up to 100 Mbit/s, in contrast to regular Ethernet (10 Mbit/s).

    Gigabit Ethernet (1 Gbit/s)

    • 1000BASE-T, IEEE 802.3ab - a standard using Category 5e twisted pair cable. All four pairs are involved in data transmission, at a rate of 250 Mbit/s per pair. The PAM5 coding method is used; the fundamental frequency is 62.5 MHz.
    • 1000BASE-TX - created by the Telecommunications Industry Association (TIA) and published in March 2001 as "A Full Duplex Ethernet Specification for 1000 Mbit/s (1000BASE-TX) Operating Over Category 6 Balanced Twisted-Pair Cabling (ANSI/TIA/EIA-854-2001)". The standard uses separate pairs for reception and transmission (two pairs in each direction, with data transmitted over each pair at 500 Mbit/s), which significantly simplifies the design of transceiver devices. A consequence, however, is that stable operation requires a high-quality cabling system, so 1000BASE-TX can only use Category 6 cable. Another significant difference of 1000BASE-TX is the absence of digital circuitry for compensating interference and return noise, as a result of which the complexity, power consumption and price of its processors are lower than those of 1000BASE-T processors. Almost no products were created on the basis of this standard, even though 1000BASE-TX uses a simpler protocol than 1000BASE-T and could therefore use simpler electronics.
    • 1000BASE-X is a general term for standards with pluggable GBIC or SFP transceivers.
    • 1000BASE-SX, IEEE 802.3z - a standard using multimode optical fiber. The signal transmission range without a repeater is up to 550 meters.
    • 1000BASE-LX, IEEE 802.3z - a standard using single-mode optical fiber. The signal range without a repeater is up to 5 kilometers.
    • 1000BASE-CX - a standard for short distances (up to 25 meters), using twinaxial cable with a characteristic impedance of 150 Ohms. It has been superseded by the 1000BASE-T standard and is no longer used.
    • 1000BASE-LH (Long Haul) - a standard using single-mode optical fiber. The signal range without a repeater is up to 100 kilometers.

    10 Gigabit Ethernet

    The new 10 Gigabit Ethernet standard includes seven physical media standards for LAN, MAN and WAN. It is currently covered by the IEEE 802.3ae amendment and should be included in the next revision of the IEEE 802.3 standard.

    • 10GBASE-CX4- 10 Gigabit Ethernet technology for short distances (up to 15 meters), using CX4 copper cable and InfiniBand connectors.
    • 10GBASE-SR- 10 Gigabit Ethernet technology for short distances (up to 26 or 82 meters, depending on the cable type), using multimode fiber. It also supports distances of up to 300 meters using new multimode fiber (2000 MHz/km).
    • 10GBASE-LX4- uses wavelength multiplexing to support distances of 240 to 300 meters over multimode fiber. Also supports distances up to 10 kilometers using single-mode fiber.
    • 10GBASE-LR and 10GBASE-ER - these standards support distances of up to 10 and 40 kilometers, respectively.
    • 10GBASE-SW, 10GBASE-LW and 10GBASE-EW - these standards use a physical interface compatible in speed and data format with the OC-192 / STM-64 SONET/SDH interface. They are similar to the 10GBASE-SR, 10GBASE-LR and 10GBASE-ER standards, respectively, as they use the same cable types and transmission distances.
    • 10GBASE-T, IEEE 802.3an-2006 - adopted in June 2006 after 4 years of development. It uses twisted pair cable over distances of up to 100 meters.

    The 10 Gigabit Ethernet standard is still too young, so it will take time to see which of the above transmission-medium standards will actually be in demand in the market. 10 Gbit/s is not the limit: development of 100 Gigabit Ethernet and beyond is already underway.

    ETHERNET TECHNOLOGY

    Ethernet is the most widespread local network standard today.

    When people say Ethernet, they usually mean any of the variants of this technology. In a narrower sense, Ethernet is a network standard based on the experimental Ethernet Network, which Xerox developed and implemented in 1975. The access method was tested even earlier: in the second half of the 1960s, the University of Hawaii's radio network used various schemes of random access to a shared radio medium, collectively called Aloha. In 1980, DEC, Intel and Xerox jointly developed and published the Ethernet version II standard for coaxial cable networks, which became the final version of the proprietary standard. The proprietary version is therefore called the Ethernet DIX or Ethernet II standard.

    Based on the Ethernet DIX standard, the IEEE 802.3 standard was developed; it largely coincides with its predecessor, but there are some differences. While IEEE 802.3 distinguishes the MAC and LLC layers, the original Ethernet combines both into a single data link layer. Ethernet DIX defines the Ethernet Configuration Test Protocol, which is absent from IEEE 802.3. The frame format also differs somewhat, although the minimum and maximum frame sizes in the two standards are the same. Often, to distinguish the Ethernet defined by the IEEE standard from the proprietary Ethernet DIX, the former is called 802.3 technology, while the name Ethernet without further qualification is kept for the proprietary version.

    Depending on the type of physical medium, the IEEE 802.3 standard has various modifications - 10Base-5, 10Base-2, 10Base-T, 10Base-FL, 10Base-FB.

    In 1995, the Fast Ethernet standard was adopted, which in many ways is not an independent standard, as evidenced by the fact that its description is simply an additional section to the main 802.3 standard - section 802.3u. Similarly, the Gigabit Ethernet standard adopted in 1998 is described in section 802.3z of the main document.

    Manchester code is used to transmit binary information over the cable for all variants of the physical layer of Ethernet technology that provide a throughput of 10 Mbit/s.

    All types of Ethernet standards (including Fast Ethernet and Gigabit Ethernet) use the same method of separating the data transmission medium - the CSMA/CD method.

    Addressing in Ethernet networks

    To identify the recipient of information in Ethernet technologies, 6-byte MAC addresses are used.

    The MAC address format makes it possible to use specific multicast addressing modes in an Ethernet network and, at the same time, eliminate the possibility of two stations appearing within the same local network that would have the same address.

    The physical address of an Ethernet network consists of two parts:

    • Vendor code
    • Individual device identifier

    A special organization within the IEEE is responsible for distributing the permitted values of this field in response to requests from network equipment manufacturers. Various forms can be used to write a MAC address. The most common is hexadecimal, in which the bytes are separated from each other by "-" characters:

    E0-14-00-00-00

    Ethernet and IEEE 802.3 networks use three main modes for generating the destination address:

    • Unicast – individual address;
    • Multicast – group address;
    • Broadcast – broadcast address.

    The first addressing mode (Unicast) is used when the source station addresses the transmitted packet to only one data recipient.

    A sign that the Multicast addressing mode is being used is a 1 in the least significant bit of the high byte of the equipment manufacturer identifier, for example:

    01-00-0C-CC-CC-CC

    A frame whose DA field content belongs to the Multicast type will be received and processed by all stations that have the corresponding Vendor Code field value - in this case, these are Cisco network devices. The given Multicast address is used by network devices of this company for interaction in accordance with the rules of Cisco Discovery Protocol (CDP).

    An Ethernet and IEEE 802.3 network station can also use Broadcast addressing mode. The address of the destination station of the Broadcast type is encoded with a special value:

    FF-FF-FF-FF-FF-FF

    When using this address, the transmitted packet will be accepted by all stations located on this network.
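
    The three modes can be told apart programmatically from the address bits alone; a minimal sketch (the unicast address below is a made-up example):

        # Classify a destination MAC address written in the traditional form.
        # The I/G (individual/group) bit is the least significant bit of the
        # first byte as transmitted on the wire.
        def classify_mac(mac: str) -> str:
            octets = bytes(int(part, 16) for part in mac.split("-"))
            if octets == b"\xff" * 6:
                return "broadcast"
            return "multicast" if octets[0] & 0x01 else "unicast"

        print(classify_mac("FF-FF-FF-FF-FF-FF"))  # broadcast
        print(classify_mac("01-00-0C-CC-CC-CC"))  # multicast (the CDP address above)
        print(classify_mac("00-E0-14-12-34-56"))  # unicast (illustrative address)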

    The CSMA/CD access method

    Ethernet networks use a media access method called carrier sense multiple access with collision detection (CSMA/CD).

    The CSMA/CD protocol defines the nature of interaction between workstations in a network with a single data transmission medium common to all devices. All stations have equal conditions for data transmission, and there is no predefined sequence in which stations may access the medium to transmit; it is in this sense that access to the medium is random. Implementing random access algorithms is a much simpler task than implementing deterministic access algorithms, since the latter require either a special protocol controlling the operation of all network devices (for example, the token-passing protocol of Token Ring and FDDI networks) or a special dedicated device, a master hub, that grants all other stations the right to transmit in a certain sequence (Arcnet and 100VG-AnyLAN networks).

    However, a network with random access has one major drawback: unstable operation under heavy load, when a long time may pass before a given station is able to transmit data. The cause is collisions between stations that begin transmitting simultaneously or almost simultaneously. When a collision occurs, the transmitted data does not reach the recipients, and the transmitting stations must repeat the transmission, since the encoding methods used in Ethernet do not allow the signal of each station to be separated from the aggregate signal. (Note that this fact is reflected in the "Base(band)" component in the names of all physical protocols of Ethernet technology (for example, 10Base-2, 10Base-T, and so on): a baseband network is one in which messages are sent digitally over a single channel, without frequency division multiplexing.)

    Collision is a normal situation in Ethernet networks. For a collision to occur, it is not necessary that several stations start transmitting absolutely simultaneously; such a situation is unlikely. It is much more likely that a collision occurs due to the fact that one node starts transmitting earlier than the other, but the signals of the first simply do not have time to reach the second node by the time the second node decides to start transmitting its frame. That is, collisions are a consequence of the distributed nature of the network.

    The set of all network stations in which simultaneous transmission by any pair leads to a collision is called a collision domain.

    Due to collisions, unpredictable delays may occur in the propagation of frames through the network, especially under heavy load (many stations, more than 20-25, trying to transmit simultaneously within the collision domain) and when the diameter of the collision domain is large (more than 2 km). When building networks, it is therefore advisable to avoid such extreme operating modes.

    The problem of constructing a protocol capable of resolving collisions in the most rational way and of optimizing network operation under heavy load was one of the key problems at the stage of forming the standard. Initially, three main approaches were considered as candidates for implementing the random access algorithm: non-persistent, 1-persistent and p-persistent (Fig. 11.2).

    Figure 11.2. Algorithms for multiple random access (CSMA) and time delay in a conflict situation (collision back off)

    Nonpersistent algorithm. In this algorithm, a station wishing to transmit is guided by the following rules.

    1. Listen to the medium; if the medium is free (that is, there is no other transmission and no collision signal), transmit; otherwise, the medium is busy, so proceed to step 2.

    2. If the medium is busy, wait a random time (drawn from a certain probability distribution) and return to step 1.

    Using a random wait when the medium is busy reduces the likelihood of collisions. Indeed, suppose that two stations are about to transmit almost simultaneously while a third is already transmitting. If the first two had no random waiting time before starting transmission (in case the medium was busy), but simply listened to the medium and waited for it to become free, then after the third station stopped transmitting, the first two would begin transmitting simultaneously, inevitably leading to a collision. Random waiting thus eliminates such collisions. The inconvenience of the method, however, is inefficient use of channel bandwidth: it may happen that by the time the medium becomes free, a station wishing to transmit keeps waiting out its random delay before listening to the medium again, because it had previously found the medium occupied. The channel then sits idle for some time even though only one station is waiting to transmit.

    1-persistent algorithm. To reduce the time when the medium is idle, the 1-persistent algorithm can be used. Under this algorithm, a station wishing to transmit follows these rules.

    1. Listen to the medium; if the medium is not busy, transmit; otherwise go to step 2.

    2. If the medium is busy, continue listening until the medium is freed, and as soon as it is freed, immediately begin transmitting.

    Comparing the non-persistent and 1-persistent algorithms, we can say that in the 1-persistent algorithm a station wishing to transmit behaves more "selfishly". Therefore, if two or more stations are waiting to transmit (waiting for the medium to become free), a collision is practically guaranteed. After the collision, the stations must decide what to do next.

    P-persistent algorithm. The rules of this algorithm are as follows:

    1. If the medium is free, the station with probability p immediately begins transmitting, or with probability (1 - p) waits for a fixed time interval T. The interval T is usually taken equal to the maximum signal propagation time from end to end.

    2. If the medium is busy, the station continues listening until the medium is free, then proceeds to step 1;

    3. If the transmission is delayed by one interval T, the station returns to step 1.

    Here the question arises of choosing the most effective value of the parameter p. The main problem is how to avoid instability under high load. Consider a situation in which n stations intend to transmit frames while a transmission is already in progress. At the end of the transmission, the expected number of stations attempting to transmit equals the product of the number of waiting stations and the probability of transmission, that is, n·p. If n·p > 1, then on average several stations will try to transmit at once, causing a collision. Moreover, as soon as the collision is detected, all stations return to step 1, causing a repeat collision. In the worst case, new stations wishing to transmit may be added to the original n, aggravating the situation further and ultimately leading to continuous collisions and zero throughput. To avoid this catastrophe, the product n·p must be less than one. If the network operates under conditions where many stations simultaneously want to transmit, it is necessary to reduce p. On the other hand, when p becomes too small, even a single station may wait on average (1 - p)/p intervals T before transmitting; with p = 0.1, the average idle time preceding a transmission is 9T.
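
    The n·p argument is easy to check numerically; a small sketch computing the probability that exactly one of n waiting stations transmits in the first slot:

        # Probability that exactly one of n stations transmits with probability p:
        # n * p * (1 - p)^(n - 1). As n*p grows past 1, this collapses and a
        # first-slot collision becomes almost certain.
        def p_exactly_one(n: int, p: float) -> float:
            return n * p * (1 - p) ** (n - 1)

        for n in (5, 10, 50):
            print(n, n * 0.1, round(p_exactly_one(n, 0.1), 3))
        # n = 5 -> 0.328;  n = 10 -> 0.387;  n = 50 (n*p = 5) -> 0.029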

    The CSMA/CD multiple random access protocol embodied the ideas of the algorithms listed above and added an important element: collision resolution. Since a collision destroys all frames being transmitted at the moment it forms, there is no point in stations continuing to transmit their frames once they have detected the collision; otherwise, considerable time would be lost transmitting long frames. Therefore, to detect a collision promptly, a station listens to the medium throughout its own transmission. Here are the basic rules of the CSMA/CD algorithm for a transmitting station (Fig. 11.3):

    1. A station about to transmit listens to the medium and transmits if the medium is free; otherwise (if the medium is busy), it proceeds to step 2. When transmitting several frames in a row, the station maintains a certain pause between frames, the interframe gap, and after each such pause, before sending the next frame, listens to the medium again (returning to the beginning of step 1);

    2. If the medium is busy, the station continues to listen on the medium until the medium becomes free, and then immediately begins transmitting;

    3. Every transmitting station listens to the medium, and if a collision is detected, it does not stop transmitting immediately but first transmits a short special collision signal, the jam signal, informing the other stations of the collision, and only then stops transmitting;

    4. After transmitting the jam signal, the station falls silent and waits an arbitrary time in accordance with the binary exponential backoff rule, then returns to step 1.

    To be able to transmit a frame, the station must ensure that the shared medium is clear. This is achieved by listening to the fundamental harmonic of the signal, which is also called the carrier-sense (CS). A sign of an unoccupied medium is the absence of a carrier frequency on it, which with the Manchester coding method is 5-10 MHz, depending on the sequence of ones and zeros transmitted at the moment.

    After the end of frame transmission, all network nodes are required to withstand a technological pause (Inter Packet Gap) of 9.6 μs (96 bt). This pause, also called the interframe interval, is needed to bring the network adapters to their original state, as well as to prevent one station from taking over the environment exclusively.

    Figure 11.3. Block diagram of the CSMA/CD algorithm (MAC layer): when a station transmits a frame

    Jam signal (from jamming). Transmission of the jam signal guarantees that no more than one frame will be lost, since all nodes that were transmitting frames before the collision, on receiving the jam signal, interrupt their transmissions and fall silent in anticipation of a new attempt. The jam signal must be long enough to reach the most remote stations in the collision domain, taking into account the additional safety-margin (SF) delay on possible repeaters. The content of the jam signal is not critical, except that it must not match the CRC field value of the partially transmitted frame (802.3), and its first 62 bits must alternate '1' and '0', starting with '1'.

    Figure 11.4. Random access method CSMA/CD

    Figure 11.5 illustrates the collision detection process as applied to a bus topology based on thin or thick coaxial cable (the 10Base2 and 10Base5 standards, respectively).

    At time t1, node A (DTE A) begins transmitting, naturally listening to its own transmitted signal. At time t2, when the frame has almost reached node B (DTE B), that node, not knowing that a transmission is already in progress, begins transmitting itself. At time t3, node B detects the collision (the constant component of the electrical signal in the monitored line increases), transmits a jam signal and stops transmitting. At time t4, the collision signal reaches node A, after which A also transmits a jam signal and stops transmitting.

    Figure 11.5. Collision detection when using CSMA/CD scheme

    According to the IEEE 802.3 standard, a node cannot transmit very short frames or, in other words, conduct very short transmissions. Even if the data field is not completely filled, a special additional field extends the frame to the minimum length of 64 bytes, not counting the preamble. The channel (slot) time ST is the minimum time during which a node must transmit while occupying the channel; it corresponds to the transmission of a frame of the minimum size accepted by the standard. The slot time is related to the maximum allowable distance between network nodes, the diameter of the collision domain. Suppose the example above implements the worst case, in which stations A and B are separated by the maximum distance, and denote the signal propagation time from A to B by τ. Node A starts transmitting at time zero. Node B starts transmitting just before the frame arrives, at time τ - ε, and detects the collision an instant after the start of its own transmission. The jam signal from B then travels back, so node A detects the collision at a time of approximately 2τ. For the frame emitted by A not to be lost, node A must not have finished transmitting by this moment: having detected the collision, node A then knows that its frame did not arrive and will try to retransmit it; otherwise the frame is lost. The maximum time after the start of transmission within which node A can still detect a collision is thus 2τ; this time is called the double turnaround time (Path Delay Value, PDV). More generally, PDV defines the total delay arising both from the finite length of the segments and from the processing of frames at the physical layer of intermediate repeaters and end nodes. For what follows, it is also convenient to use another unit of time: the bit time, bt. A time of 1 bt corresponds to the time required to transmit one bit, i.e. 0.1 µs at 10 Mbit/s.

    Clear recognition of collisions by all network stations is a necessary condition for the correct operation of the Ethernet network. If any transmitting station does not recognize the collision and decides that it transmitted the data frame correctly, then this data frame will be lost. Due to the overlap of signals during a collision, the frame information will be distorted, and it will be rejected by the receiving station (possibly due to a checksum mismatch). Most likely, the corrupted information will be retransmitted by some upper-layer protocol, such as a connection-oriented transport or application protocol. But the retransmission of the message by upper-level protocols will occur after a much longer time interval (sometimes even after several seconds) compared to the microsecond intervals that the Ethernet protocol operates. Therefore, if collisions are not reliably recognized by Ethernet network nodes, this will lead to a noticeable decrease in the useful throughput of this network.

    For reliable collision detection, the following relationship must be satisfied:

    Tmin >= PDV,

    where Tmin is the transmission time of a frame of minimum length, and PDV is the time during which the collision signal manages to propagate to the farthest node of the network. Since in the worst case the signal must travel twice between the most distant stations of the network (an undistorted signal passes in one direction, and a signal already distorted by the collision propagates back), this time is called the double turnaround time (Path Delay Value, PDV).

    If this condition is met, the transmitting station must have time to detect the collision caused by its transmitted frame even before it finishes transmitting this frame.

    Obviously, the fulfillment of this condition depends, on the one hand, on the length of the minimum frame and network capacity, and on the other hand, on the length of the network cable system and the speed of signal propagation in the cable (this speed is slightly different for different types of cable).

    All parameters of the Ethernet protocol are selected in such a way that during normal operation of network nodes, collisions are always clearly recognized. When choosing parameters, of course, the above relationship was taken into account, connecting the minimum frame length and the maximum distance between stations in a network segment.

    The Ethernet standard assumes that the minimum length of a frame data field is 46 bytes (which, together with service fields, gives a minimum frame length of 64 bytes, and together with the preamble - 72 bytes or 576 bits).
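
    Plugging in the classic 10 Mbit/s numbers shows how tight this bound is; a small check, using the commonly quoted 575-bit-time PDV budget for a maximum-diameter network:

        # Tmin >= PDV at 10 Mbit/s: a minimum frame of 576 bits (64 bytes plus
        # the 8-byte preamble) keeps the transmitter on the medium for 57.6 us,
        # just above a worst-case PDV budget of 575 bit times (57.5 us).
        RATE = 10e6                     # bits per second
        t_min = 576 / RATE              # 5.76e-05 s = 57.6 us
        pdv_budget = 575 * 0.1e-6       # 575 bt at 0.1 us per bit
        print(t_min * 1e6, "us", t_min >= pdv_budget)   # 57.6 us True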

    When transmitting large frames, for example 1500 bytes, a collision, if it occurs at all, is detected almost at the very beginning of the transmission, no later than within the first 64 transmitted bytes (if a collision has not occurred by then, it will not occur later, since all stations are listening to the line and, "hearing" the transmission, remain silent). Since the jam signal is much shorter than a full frame, with the CSMA/CD algorithm the amount of channel capacity wasted is reduced to the time required to detect the collision. Early collision detection therefore gives more efficient channel utilization; late collision detection, characteristic of longer networks whose collision domain is several kilometers in diameter, reduces network efficiency. Based on a simplified theoretical model of the behavior of a loaded network (assuming a large number of simultaneously transmitting stations and a fixed minimum length of transmitted frames for all stations), the network performance U can be expressed in terms of the PDV/ST ratio:

    U = e^(-PDV/ST),

    where e is the base of the natural logarithm. Network performance is affected by the size of the transmitted frames and the diameter of the network. The performance in the worst case (when PDV = ST) is about 37%, and in the best case (when PDV is much less than ST) it tends to 1. Although the formula is derived in the limit of a large number of stations trying to transmit simultaneously, it does not take into account the peculiarities of the truncated binary exponential backoff algorithm discussed below, and it is not valid for a network heavily overloaded with collisions, for example when there are more than 15 stations wishing to transmit.
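
    A quick evaluation of the formula at several PDV/ST ratios reproduces the limits just quoted:

        # U = e^(-PDV/ST) at the limits discussed above.
        from math import exp

        for ratio in (1.0, 0.5, 0.1, 0.01):
            print(ratio, round(exp(-ratio), 2))
        # 1.0 -> 0.37 (worst case, PDV = ST); as the ratio -> 0, U -> 1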

    Truncated binary exponential backoff. The CSMA/CD algorithm adopted in the IEEE 802.3 standard is closest to the 1-persistent algorithm but differs in an additional element: truncated binary exponential backoff. When a collision occurs, the station counts how many collisions in a row have occurred while sending the packet. Since repeated collisions indicate high congestion of the medium, the MAC node tries to increase the delay between repeated attempts to transmit the frame. The corresponding procedure for increasing the time intervals obeys the truncated binary exponential backoff rule.

    A random pause is selected using the following algorithm:

    Pause = L × (backoff interval),

    where the backoff interval equals 512 bit times (51.2 µs at 10 Mbit/s);

    L is an integer selected with equal probability from the range [0, 2^N - 1], where N is the number of the retransmission attempt for this frame: 1, 2, ..., 10.

    After the 10th attempt, the interval from which the pause is selected does not increase. Thus, a random pause can take values from 0 to 52.4 ms.

    If 16 consecutive attempts to transmit a frame cause a collision, then the transmitter must stop trying and discard the frame.
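
    A minimal sketch of this backoff procedure, with the pause expressed in bit times:

        # Truncated binary exponential backoff as described above.
        import random

        SLOT_BT = 512   # backoff quantum: 512 bit times = 51.2 us at 10 Mbit/s

        def backoff_pause_bt(attempt: int) -> int:
            """Random pause, in bit times, before retransmission attempt `attempt`."""
            if attempt > 16:
                raise RuntimeError("16 collisions in a row: the frame is discarded")
            n = min(attempt, 10)                             # range stops growing at 10
            return random.randint(0, 2 ** n - 1) * SLOT_BT   # up to ~52.4 ms

        print(backoff_pause_bt(1), backoff_pause_bt(12))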

    The CSMA/CD algorithm with truncated binary exponential backoff is recognized as the best among many random access algorithms and ensures efficient network operation under both low and medium loads. Under heavy load, two disadvantages should be noted. First, with a large number of collisions, a station that is about to send a frame for the first time (and has not yet encountered collisions) has an advantage over a station that has already tried unsuccessfully several times, since the latter must wait a significant amount of time before further attempts in accordance with the binary exponential backoff rule. Irregular frame delivery may thus occur, which is undesirable for time-sensitive applications. Second, under heavy load the efficiency of the network as a whole decreases. Estimates show that with 25 stations transmitting simultaneously, the total throughput is roughly halved. The number of stations in the collision domain may nevertheless be larger, since not all of them access the medium at the same time.

    Frame reception (Fig. 11.6)

    Figure 11.6. Block diagram of the CSMA/CD algorithm (MAC level): when a frame is received by a station

    The receiving station or other network device, such as a hub or switch, first synchronizes using the preamble and then converts the Manchester code into binary form (at the physical layer). Next, the binary stream is processed.

    At the MAC layer, the remaining preamble bits are discarded, and the station reads the destination address and compares it with its own. If the addresses match, the frame fields, excluding the preamble, SFD and FCS, are placed in a buffer; a checksum is calculated and compared with the frame check sequence (FCS) field using the CRC-32 method. If they are equal, the contents of the buffer are passed to the higher-layer protocol; otherwise, the frame is discarded. A collision during frame reception is detected either by a change in the electrical potential, if a coaxial segment is used, or by the fact of receiving a defective frame with an incorrect checksum, if twisted pair or optical fiber is used. In both cases the received information is discarded.
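
    The FCS comparison can be sketched in a few lines; a minimal example (the byte order of the stored FCS is an assumption of the sketch):

        # CRC-32 over the frame body (destination address through data),
        # compared with the trailing 4-byte FCS. zlib's CRC-32 uses the same
        # polynomial as Ethernet.
        import struct
        import zlib

        def fcs_ok(frame: bytes) -> bool:
            body, fcs = frame[:-4], frame[-4:]
            return zlib.crc32(body) == struct.unpack("<I", fcs)[0]

        body = bytes(60)                                     # dummy 60-byte frame body
        frame = body + struct.pack("<I", zlib.crc32(body))   # append a valid FCS
        assert fcs_ok(frame)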

    From the description of the access method it is clear that it is probabilistic in nature, and the probability of successfully obtaining the shared medium depends on the network load, that is, on the intensity of the stations' need to transmit frames. When this method was developed in the late 1970s, it was assumed that a data transfer rate of 10 Mbit/s was very high compared to the computers' needs for mutual data exchange, so the network load would always be light. This assumption sometimes remains true to this day, but real-time multimedia applications have appeared that heavily load Ethernet segments, so collisions occur much more often. When the intensity of collisions is significant, the useful throughput of the Ethernet network drops sharply, since the network is almost constantly busy with repeated attempts to transmit frames. To reduce the intensity of collisions, one must either reduce the traffic, for example by reducing the number of nodes in a segment or replacing applications, or increase the speed of the protocol, for example by switching to Fast Ethernet.

    It should be noted that the CSMA/CD access method does not at all guarantee that a station will ever be able to access the medium. Of course, when the network load is light, the probability of such an event is small, but when the network utilization factor approaches 1, such an event becomes very likely. This disadvantage of the random access method is the price to pay for its extreme simplicity, which has made Ethernet the most inexpensive technology. Other access methods - token access of Token Ring and FDDI networks, Demand Priority method of 100VG-AnyLAN networks - are free from this drawback.

    As a result of taking into account all factors, the relationship between the minimum frame length and the maximum possible distance between network stations was carefully selected, which ensures reliable collision recognition. This distance is also called the maximum network diameter.

    As the frame transmission rate increases, which occurs in new standards based on the same CSMA/CD access method, such as Fast Ethernet, the maximum distance between network stations decreases in proportion to the increase in transmission rate. In the Fast Ethernet standard it is about 210 m, and in the Gigabit Ethernet standard it would be limited to 25 meters if the developers of the standard had not taken some measures to increase the minimum packet size.

    Table 11.1 shows the values of the main parameters of the 802.3 frame transmission procedure that do not depend on the implementation of the physical medium. It is important to note that each Ethernet physical-medium option adds to these its own, often more stringent, limitations, which must also be met and which will be discussed below.

    Table 11.1. Ethernet MAC layer parameters

    Parameter                                      Value
    Bit rate                                       10 Mbit/s
    Slot time                                      512 bt
    Interframe gap (IPG)                           9.6 µs
    Maximum number of transmission attempts        16
    Maximum number of backoff range increases      10
    Jam sequence length                            32 bits
    Maximum frame length (without preamble)        1518 bytes
    Minimum frame length (without preamble)        64 bytes (512 bits)
    Preamble length                                64 bits
    Minimum random pause after a collision         0 bt
    Maximum random pause after a collision         524000 bt
    Maximum distance between network stations      2500 m
    Maximum number of stations in the network      1024

    Ethernet Frame Formats

    The Ethernet technology standard, described in IEEE 802.3, defines a single MAC-layer frame format. Since the MAC-layer frame must carry an LLC-layer frame, described in the IEEE 802.2 document, according to IEEE standards only a single version of the data link layer frame can be used in an Ethernet network, whose header is a combination of the MAC and LLC sublayer headers.

    However, in practice, frames of 4 different formats (types) are used in Ethernet networks at the data link level. This is due to the long history of the development of Ethernet technology, dating back to the period before the adoption of IEEE 802 standards, when the LLC sublayer was not separated from the general protocol and, accordingly, the LLC header was not used.

    In 1980, a consortium of three firms, Digital, Intel and Xerox, submitted to the 802.3 committee their proprietary version of the Ethernet standard (which, of course, described a specific frame format) as a draft international standard, but the 802.3 committee adopted a standard that differed in some details from the DIX proposal. The differences also concerned the frame format, which gave rise to the existence of two different frame types in Ethernet networks.

    Another frame format emerged as a result of Novell's efforts to speed up its Ethernet protocol stack.

    Finally, the fourth frame format was the result of the 802.2 committee's efforts to bring the previous frame formats to a common standard.

    Differences in frame formats can lead to incompatibility in the operation of hardware and network software designed to work with only one Ethernet frame standard. However, today almost all network adapters, network adapter drivers, bridges/switches and routers can work with all Ethernet technology frame formats used in practice, and frame type recognition is performed automatically.

    Below is a description of all four types of Ethernet frames (here, a frame refers to the entire set of fields that relate to the data link layer, that is, the fields of the MAC and LLC layers). The same frame type can have different names, so below for each frame type are several of the most common names:

    • 802.3/LLC frame (802.3/802.2 frame or Novell 802.2 frame);
    • Raw 802.3 frame (or Novell 802.3 frame);
    • Ethernet DIX frame (or Ethernet II frame);
    • Ethernet SNAP frame.

    The formats of all these four types of Ethernet frames are shown in Fig. 11.7.
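
    In practice, the four formats are told apart by simple rules on the Length/Type field and the first payload bytes; a minimal sketch of such automatic recognition (the frame is assumed to start at the destination address, with the preamble and SFD already stripped):

        # Classic discrimination rules for the four Ethernet frame formats.
        def frame_kind(frame: bytes) -> str:
            lt = int.from_bytes(frame[12:14], "big")   # Length/Type field
            if lt >= 0x0600:
                return "Ethernet DIX (Ethernet II)"    # the field holds a protocol type
            payload = frame[14:]
            if payload[:2] == b"\xff\xff":             # raw IPX "checksum" marker
                return "Raw 802.3 (Novell)"
            if payload[:2] == b"\xaa\xaa":             # LLC header with DSAP = SSAP = 0xAA
                return "Ethernet SNAP"
            return "802.3/LLC"

        print(frame_kind(bytes(12) + b"\x08\x00" + bytes(46)))   # Ethernet DIX (IPv4)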

    802.3/LLC frame

    The 802.3/LLC frame header is the result of concatenating the frame header fields defined in the IEEE 802.3 and 802.2 standards.

    The 802.3 standard defines eight header fields (Figure 11.7; the preamble field and initial frame delimiter are not shown in the figure).

    • The Preamble field consists of seven synchronization bytes, 10101010. With Manchester coding, this combination is represented in the physical medium by a periodic wave signal with a frequency of 5 MHz.
    • The start-of-frame delimiter (SFD) consists of a single byte, 10101011. The appearance of this bit pattern indicates that the next byte is the first byte of the frame header.
    • The Destination Address (DA) can be 2 or 6 bytes long; in practice, 6-byte addresses are always used. The first bit of the high byte of the destination address indicates whether the address is individual or group: if it is 0, the address is individual (unicast), and if 1, it is a group address (multicast). If the address consists of all ones, that is, has the hexadecimal representation 0xFFFFFFFFFFFF, it is intended for all nodes of the network and is called a broadcast address.

    In IEEE Ethernet standards, the least significant bit of a byte is depicted in the leftmost position of the field, and the most significant bit in the rightmost. This non-standard way of displaying bit order in a byte corresponds to the order in which an Ethernet transmitter places bits on the communication line. Standards of other organizations, such as the IETF RFCs, ITU-T and ISO, use the traditional byte representation, in which the least significant bit is the rightmost bit of the byte and the most significant is the leftmost; the byte order remains traditional. Therefore, when reading standards published by those organizations, as well as data displayed on screen by an operating system or protocol analyzer, the value of each byte of an Ethernet frame must be mirrored to get a correct understanding of the meaning of the bits of that byte according to IEEE documents. For example, a multicast address written in IEEE notation as 1000 0000 0000 0000 0111 1010 1111 0000 0000 0000 0000 0000, or in hexadecimal as 80-00-7A-F0-00-00, will most likely be displayed by a protocol analyzer in the traditional form as 01-00-5E-0F-00-00.
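
    The per-byte mirroring is mechanical; a minimal sketch converting the example above between the two notations:

        # Reverse the bit order within each byte of a MAC address.
        def mirror(mac: str) -> str:
            def rev(byte: int) -> int:
                return int(f"{byte:08b}"[::-1], 2)   # reverse the 8 bits of one byte
            return "-".join(f"{rev(int(part, 16)):02X}" for part in mac.split("-"))

        print(mirror("80-00-7A-F0-00-00"))   # -> 01-00-5E-0F-00-00 (and back again)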

    • Source Address (SA) - This is a 2- or 6-byte field containing the address of the sending node of the frame. The first bit of the address is always 0.
    • Length (L) - A 2-byte field that specifies the length of the data field in the frame.
    • Data field can contain from 0 to 1500 bytes. But if the field length is less than 46 bytes, then the next field, the padding field, is used to pad the frame to the minimum allowed value of 46 bytes.
    • Padding field consists of such a number of padding bytes that the data field has a minimum length of 46 bytes. This ensures that the collision detection mechanism works correctly. If the data field is long enough, the padding field does not appear in the frame.
    • The Frame Check Sequence (FCS) field consists of 4 bytes containing a checksum. This value is calculated using the CRC-32 algorithm. After receiving a frame, the workstation performs its own checksum calculation for that frame, compares the result with the value of the checksum field, and thus determines whether the received frame is corrupted.

    The 802.3 frame is a MAC sublayer frame, so in accordance with the 802.2 standard its data field encapsulates an LLC sublayer frame with the start-of-frame and end-of-frame flags removed. The LLC frame format was described above. Since the LLC header is 3 bytes long (in LLC1 mode) or 4 bytes long (in LLC2 mode), the maximum size of the data field is reduced to 1497 or 1496 bytes.

    Figure 11.7. Ethernet frame formats


    In a network, several computers must share access to the transmission medium. However, if two computers try to transmit data at the same time, a collision will occur and the data will be lost.

    All network computers must use the same access method; otherwise the network will malfunction, since computers whose method dominated the medium would prevent the others from transmitting. Access methods serve to prevent several computers from accessing the cable simultaneously, ordering the transmission and reception of data over the network and ensuring that only one computer transmits at a time.

    With carrier sense multiple access with collision detection (abbreviated CSMA/CD), all computers on the network - both clients and servers - "listen" to the cable, trying to detect transmitted data (that is, traffic).

    1) The computer “understands” that the cable is free (that is, there is no traffic).

    2) The computer can start transmitting data.

    3) Until the cable becomes free (during data transfer), none of the network computers can transmit.

    When more than one network device attempts to simultaneously access the transmission medium, a collision occurs. Computers register the occurrence of a collision, release the transmission line for a certain randomly specified (within the boundaries defined by the standard) time interval, after which the transmission attempt is repeated. The computer that first seizes the transmission line begins transmitting data.
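
    The random waiting interval mentioned above is chosen by the truncated binary exponential backoff algorithm. A minimal sketch under the standard 802.3 constants (slot time of 512 bit times, random range frozen after 10 collisions, transmission abandoned after 16 attempts):

```python
import random

SLOT_TIME_BITS = 512      # slot time, in bit times
MAX_ATTEMPTS = 16         # the frame is discarded after 16 failed attempts

def backoff_delay_bits(collision_count: int) -> int:
    # Pick a random delay of 0 .. 2^k - 1 slot times, with k capped at 10.
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("too many collisions, frame discarded")
    k = min(collision_count, 10)
    return random.randint(0, 2**k - 1) * SLOT_TIME_BITS
```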

    CSMA/CD is known as a contention method, because networked computers "compete" with each other for the right to transmit data.

    The need to detect collisions is what limits the reach of CSMA/CD itself. Because of the finite speed of signal propagation in wires, at distances greater than 2500 m (1.5 miles) the collision detection mechanism is no longer effective: some computers do not have time to sense that the cable is busy and begin transmitting data, which leads to collisions and the destruction of data packets.

    Examples of CSMA/CD protocols are DEC's Ethernet version 2 and IEEE 802.3.

    Ethernet Physical Medium Specification

    For Ethernet technology, various physical layer options have been developed, differing not only in cable type and the electrical parameters of the pulses, as in 10-Mbit/s Ethernet, but also in the signal encoding method and the number of conductors used in the cable. Because of this, the Ethernet physical layer has a more complex structure than that of classic Ethernet.

    Ethernet technology specifications today include the following data transmission media.

    • 10Base-2 - coaxial cable with a diameter of 0.25 inch, called "thin" coax. Characteristic impedance 50 Ohm. Maximum segment length 185 meters (without repeaters).
    • 10Base-5 - coaxial cable with a diameter of 0.5 inch, called "thick" coax. Characteristic impedance 50 Ohm. Maximum segment length without a repeater 500 meters.
    • 10Base-T - cable based on unshielded twisted pair (UTP). Forms a star topology around hubs. The distance between a hub and an end node is no more than 100 meters.
    • 10Base-F - fiber-optic cable. The topology is similar to that of the 10Base-T standard. There are several variants of this specification: FOIRL (distance up to 1000 m) and 10Base-FL (distance up to 2000 m).

    Ethernet frame formats

    Frames are everything in an Ethernet network. They serve as containers for all higher-layer packets, so the sender and receiver must use the same type of Ethernet frame to understand each other. Although the Ethernet technology standard, defined in the IEEE 802.3 document, describes a single MAC-layer frame format, in practice frames of four different formats are encountered, and they do not differ greatly from one another. Moreover, there are only two basic frame formats (called "raw" formats in English terminology) - Ethernet_II and Ethernet_802.3 - which differ in the purpose of just one field.

    • The Ethernet DIX (Ethernet II) frame appeared as a result of the work of a consortium of three firms - Digital, Intel, and Xerox - which in 1980 submitted its proprietary version of the Ethernet standard to the 802.3 committee as a draft international standard.
    • The 802.3/LLC frame (also called 802.3/802.2 or Novell 802.2) appeared when the 802.3 committee adopted a standard differing in some details from Ethernet DIX.
    • The Raw 802.3 frame, or Novell 802.3, emerged as a result of Novell's efforts to speed up its Ethernet protocol stack.

    Each frame begins with a 7-byte preamble (Preamble) filled with the pattern 0b10101010, used to synchronize the source and destination. After the preamble comes the start-of-frame delimiter (SFD) byte, containing the sequence 0b10101011 and marking the beginning of the frame proper. Next come the Destination Address (DA) and Source Address (SA) fields. Ethernet uses 48-bit IEEE MAC-layer addresses.

    The following field has different meanings and different lengths depending on the frame type.

    At the end of the frame there is a 32-bit checksum field (Frame Check Sequence, FCS), calculated using the CRC-32 algorithm. The Ethernet frame size ranges from 64 to 1518 bytes (excluding the preamble but including the checksum field).

    Ethernet DIX frame type

    An Ethernet DIX frame, also called an Ethernet II frame, resembles a Raw 802.3 frame in that it does not use LLC sublayer headers either, but it differs in having a protocol type field (Type field) in place of the length field. This field serves the same purpose as the DSAP and SSAP fields of an LLC frame: it indicates the upper-layer protocol that placed its packet in the data field of this frame. Protocol type codes use values greater than the maximum data field length of 1500 (type codes start at 0x0600, that is, 1536), so Ethernet II and 802.3 frames are easy to distinguish.

    Frame type Raw 802.3.

    Following the source address, the frame contains a 16-bit length field (L) that specifies the number of bytes following this field (not counting the checksum field). A frame of this type always carries an IPX protocol packet. The first two bytes of the IPX header hold the checksum of the IPX datagram; by default this field is not used and has the value 0xFFFF, which is what makes Raw 802.3 frames recognizable.

    Frame type 802.3/LLC

    Following the source address field is a 16-bit length field specifying the number of bytes following this field (ignoring the checksum field). This is followed by the LLC header. The 802.3/LLC frame header is the result of combining the frame header fields defined in the 802.3 and 802.2 standards.

    The 802.3 standard defines eight header fields:

    The Preamble field consists of seven bytes of synchronizing data. Each byte contains the same bit sequence, 10101010. With Manchester coding this combination appears in the physical medium as a periodic wave signal. The preamble gives the transceiver circuits time to come into stable synchronization with the received clock signals.

    The start-of-frame delimiter consists of one byte with the bit pattern 10101011. The appearance of this combination indicates that a frame is about to be received.

    Recipient's address - a field of 2 or 6 bytes containing the destination MAC address. The first bit of the recipient's address indicates whether the address is individual or group: if 0, the address points to a specific station; if 1, it is the group address of several (possibly all) stations of the network. With broadcast addressing, all bits of the address field are set to 1. Six-byte addresses are common practice.

    Sender's address - a 2- or 6-byte field containing the address of the sending station. The first bit always has the value 0.

    Length - a two-byte field that defines the length of the data field in the frame.

    Data field - can contain from 0 to 1500 bytes. If the field is shorter than 46 bytes, the next field, the padding field, is used to pad the frame to the minimum allowed length.

    Padding field - consists of as many padding bytes as are needed to give the data field the minimum length of 46 bytes. This ensures correct operation of the collision detection mechanism. If the data field is long enough, the padding field does not appear in the frame.

    Checksum field - 4 bytes containing a value calculated by a specific algorithm (the CRC-32 polynomial). After receiving a frame, the station computes the frame's checksum itself, compares the result with the value of the checksum field, and thus determines whether the received frame is corrupted.

    The 802.3 frame is a MAC sublayer frame; in accordance with the 802.2 standard, its data field carries an LLC sublayer frame with the start-of-frame and end-of-frame flags removed.

    The resulting 802.3/LLC frame is shown in Fig. 11.7. Since the LLC frame has a 3-byte header, the maximum size of the data field is reduced to 1497 bytes.

    Ethernet SNAP frame type

    The Ethernet SNAP frame (SNAP - SubNetwork Access Protocol) is an extension of the 802.3/LLC frame obtained by introducing an additional SNAP protocol header, which consists of a 3-byte Organization Identifier (OUI) field and a 2-byte Type (Ethertype) field. The Type field identifies the upper-layer protocol, and the OUI field identifies the organization that controls the assignment of protocol type codes. Protocol codes for IEEE 802 standards are controlled by the IEEE, whose OUI code is 0x000000; for this OUI, the SNAP Type field carries the same values as the Ethernet DIX Type field.
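
    The discrimination rules scattered through the sections above can be gathered into one sketch (a hypothetical helper; it assumes the frame starts at the destination address, i.e. the preamble and SFD are stripped):

```python
def classify_frame(frame: bytes) -> str:
    # Bytes 0-11 are DA and SA; bytes 12-13 are Type or Length.
    type_or_len = int.from_bytes(frame[12:14], "big")
    if type_or_len > 1500:                  # type codes start at 0x0600
        return "Ethernet II (DIX)"
    if frame[14:16] == b"\xFF\xFF":         # unused IPX checksum placeholder
        return "Raw 802.3 (Novell)"
    if frame[14:16] == b"\xAA\xAA":         # DSAP = SSAP = 0xAA marks SNAP
        return "Ethernet SNAP"
    return "802.3/LLC"
```

    The order of the tests matters: only when the two bytes after the addresses hold a length (1500 or less) is it meaningful to look further into the LLC header.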

    Summary table on the use of different types of frames by higher-level protocols:

    Frame type            Network protocols
    Ethernet II           IPX, IP, AppleTalk Phase I
    Ethernet Raw 802.3    IPX
    Ethernet 802.3/LLC    IPX
    Ethernet SNAP         IPX, IP, AppleTalk Phase II

    Fast Ethernet

    Difference between Fast Ethernet and Ethernet technology

    All the differences between Ethernet and Fast Ethernet technologies are concentrated at the physical layer. The goal of Fast Ethernet technology was to achieve a significantly (by an order of magnitude) higher speed than 10Base-T Ethernet (IEEE 802.3), while keeping the same access method and frame format. The MAC and LLC layers in Fast Ethernet remain exactly the same.

    The organization of the physical layer of Fast Ethernet technology is more complex, since it uses three types of cabling systems:

    • multimode fiber-optic cable (two fibers);
    • Category 5 twisted pair (two pairs);
    • Category 3 twisted pair (four pairs).

    Fast Ethernet does not use coaxial cable. Fast Ethernet networks on a shared medium, like 10Base-T/10Base-F networks, have a hierarchical tree structure built on hubs. The main difference in the configuration of Fast Ethernet networks is the reduction of the network diameter to about 200 meters, which is explained by the tenfold reduction of the minimum-length frame transmission time compared with 10-Mbit/s Ethernet.
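
    A quick back-of-envelope check of this tenfold reduction (a sketch, not part of any standard):

```python
MIN_FRAME_BITS = 512                              # 64-byte minimum frame
for rate_mbps, name in [(10, "Ethernet"), (100, "Fast Ethernet")]:
    slot_us = MIN_FRAME_BITS / rate_mbps          # transmission time, microseconds
    print(f"{name}: minimum frame lasts {slot_us} us")
# Ethernet: 51.2 us; Fast Ethernet: 5.12 us. The allowed round-trip delay,
# and with it the collision domain diameter, shrinks by the same factor of 10.
```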

    But when using switches, the Fast Ethernet protocol can operate in full-duplex mode, in which there is no limit on the total length of the network, but only on individual physical segments.

    Fast Ethernet physical medium specifications

    • 100BASE-T - a general term for the three 100 Mbit/s Ethernet standards that use twisted pair cable as the transmission medium. The network diameter (collision domain) is up to 200-250 meters, with segments up to 100 meters long. Includes 100BASE-TX, 100BASE-T4 and 100BASE-T2.
    • 100BASE-TX, IEEE 802.3u - a development of 10BASE-T technology. It uses a star topology and Category 5 twisted pair cable, of which 2 pairs of conductors are actually used; the maximum data transfer rate is 100 Mbit/s.
    • 100BASE-T4 - 100 Mbit/s Ethernet over Category 3 cable; all 4 pairs are used, and transmission is half-duplex. Practically no longer used.
    • 100BASE-T2 - not used in practice. 100 Mbit/s Ethernet over Category 3 cable; only 2 pairs are used. Full-duplex mode is supported, with signals propagating in opposite directions over each pair; the rate in each direction is 50 Mbit/s.
    • 100BASE-FX - 100 Mbit/s Ethernet over fiber-optic cable. The maximum segment length is 400 meters in half-duplex mode (to guarantee collision detection), or 2 kilometers in full-duplex mode over multimode fiber and up to 32 kilometers over single-mode fiber.

    Gigabit Ethernet

    • 1000BASE-T, IEEE 802.3ab - the 1 Gbit/s Ethernet standard over twisted pair. Category 5e or Category 6 cable is used, and all 4 pairs take part in data transmission; the data rate over one pair is 250 Mbit/s (see the arithmetic sketch after this list).
    • 1000BASE-TX - a 1 Gbit/s Ethernet standard that works only over Category 6 twisted pair. Practically not used.
    • 1000Base-X- a general term for Gigabit Ethernet technology, which uses fiber optic cable as a data transmission medium, includes 1000BASE-SX, 1000BASE-LX and 1000BASE-CX.
    • 1000BASE-SX, IEEE 802.3z- 1 Gbit/s Ethernet technology, uses multimode fiber, signal transmission range without a repeater is up to 550 meters.
    • 1000BASE-LX, IEEE 802.3z - 1 Gbit/s Ethernet technology using a long-wavelength laser; over multimode fiber the signal transmission range without a repeater is up to 550 meters, and over single-mode fiber, for which the standard is optimized, up to 10 kilometers.
    • 1000BASE-CX- Gigabit Ethernet technology for short distances (up to 25 meters), a special copper cable (Shielded Twisted Pair (STP)) with a characteristic impedance of 150 Ohms is used. Replaced by the 1000BASE-T standard, and now not used.
    • 1000BASE-LH (Long Haul)- 1 Gbit/s Ethernet technology, uses single-mode optical cable, signal transmission range without a repeater is up to 100 kilometers.
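
    The arithmetic behind the 1000BASE-T figures in the list above, as a sketch (the PAM-5 coding redundancy is glossed over: the five signal levels carry 2 data bits per symbol plus error-correcting redundancy):

```python
pairs = 4                     # all four pairs of the Cat 5e cable
symbol_rate_mbaud = 125       # matches the 125 MHz cable bandwidth
bits_per_symbol = 2           # PAM-5 carries 2 data bits per symbol
throughput = pairs * symbol_rate_mbaud * bits_per_symbol
print(throughput, "Mbit/s")   # -> 1000, i.e. 250 Mbit/s per pair
```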

    Gigabit Ethernet problems

    • Ensuring an acceptable network diameter for operation on a shared medium. Because of the CSMA/CD limits on cable length, a shared-medium version of Gigabit Ethernet would allow a segment length of only 25 meters; this problem had to be solved.
    • Achieving a bit rate of 1000 Mbit/s over optical cable. Fibre Channel technology, whose physical layer was taken as the basis for the fiber-optic version of Gigabit Ethernet, provides a data rate of only 800 Mbit/s.
    • Using Category 5 twisted pair as the cabling.

    To solve these problems, it was necessary to make changes not only to the physical layer, but also to the MAC layer.

    Means of ensuring a network diameter of 200 m on a shared medium

    To expand the maximum diameter of a Gigabit Ethernet network in half-duplex mode to 200 m, the technology's developers took fairly natural measures, based on the known relationship between the transmission time of a minimum-length frame and the double turnaround time.

    The minimum frame size was increased (excluding the preamble) from 64 to 512 bytes, or 4096 bt. Accordingly, the double turnaround time could also be increased, to 4095 bt, which makes a network diameter of about 200 m possible with a single repeater. With a doubled (round-trip) signal delay of 10 bt/m, each 100-m fiber-optic cable contributes 1000 bt to the double turnaround time; if the repeater and the network adapters contribute the same delays as in Fast Ethernet technology (the data for which were given in the previous section), then a repeater delay of 1000 bt and a pair of network adapters at 1000 bt give a total double turnaround time of 4000 bt, which satisfies the collision detection condition. To lengthen a frame to the value required by the new technology, the network adapter appends to the data field a so-called extension of up to 448 bytes - a field filled with prohibited 8B/10B code symbols that cannot be mistaken for data codes.
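
    A sketch of the extension-length calculation implied above (the constant and function names are made up for illustration):

```python
SLOT_BYTES = 512              # minimum carrier event in Gigabit Ethernet

def extension_length(frame_len: int) -> int:
    # Number of extension symbols a short frame must carry.
    return max(0, SLOT_BYTES - frame_len)

print(extension_length(64))   # -> 448: a minimum 64-byte frame needs
                              #    448 bytes of extension symbols
```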

    To reduce the overhead of using very long frames for short acknowledgements, the developers of the standard allowed end nodes to transmit several frames in a row without releasing the medium to other stations. This mode is called Burst Mode - exclusive burst mode. A station may transmit several frames in a row with a total length of no more than 65,536 bits, or 8192 bytes. If a station needs to transmit several small frames, it may skip padding them to 512 bytes (only the first frame of the burst must be extended) and transmit them back to back until the 8192-byte limit is exhausted; this limit counts all bytes of a frame, including the preamble, header, data and checksum. The 8192-byte limit is called the BurstLength. If the station begins transmitting a frame and reaches the BurstLength limit in the middle of the frame, it is allowed to finish transmitting that frame.
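
    A sketch of how the BurstLength limit works (the names are illustrative; frame sizes are taken to include preamble, header, data and checksum, as the text specifies):

```python
BURST_LIMIT = 8192            # BurstLength, in bytes

def frames_sent_in_burst(frame_sizes: list[int]) -> int:
    # A frame may start only while the limit is not yet exhausted,
    # but a frame that has started is always finished.
    sent, total = 0, 0
    for size in frame_sizes:
        if total >= BURST_LIMIT:
            break
        total += size
        sent += 1
    return sent

print(frames_sent_in_burst([128] * 100))   # -> 64 frames of 128 bytes
```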

    Increasing the "combined" frame to 8192 bytes somewhat delays the access of other stations to the shared medium, but at a speed of 1000 Mbit/s this delay is not very significant.
