• Maximum Fast Ethernet speed. Fast Ethernet technology, its features, physical layer, construction rules. DSAP and SSAP field values

    The most widespread among standard networks is the Ethernet network. It appeared in 1972, and in 1985 it became an international standard, adopted by the largest international standards organizations: IEEE Committee 802 (Institute of Electrical and Electronics Engineers) and ECMA (European Computer Manufacturers Association).

    The standard is called IEEE 802.3 (read in English as “eight oh two dot three”). It defines multiple access to a bus-type monochannel with collision detection and transmission control, that is, the already mentioned CSMA/CD access method.

    Main characteristics of the original IEEE 802.3 standard:

    · topology – bus;

    · transmission medium – coaxial cable;

    · transmission speed – 10 Mbit/s;

    · maximum network length – 5 km;

    · maximum number of subscribers – up to 1024;

    · network segment length – up to 500 m;

    · number of subscribers on one segment – up to 100;

    · access method – CSMA/CD;

    · baseband transmission, that is, without modulation (monochannel).

    Strictly speaking, there are minor differences between the IEEE 802.3 and Ethernet standards, but they are usually ignored.

    The Ethernet network is now the most popular in the world (more than 90% of the market), and presumably it will remain so in the coming years. This was greatly facilitated by the fact that from the very beginning the characteristics, parameters, and protocols of the network were open, as a result of which a huge number of manufacturers around the world began to produce Ethernet equipment that was fully compatible with each other.

    The classic Ethernet network used 50-ohm coaxial cable of two types (thick and thin). However, recently (since the early 90s), the most widely used version of Ethernet is that using twisted pairs as a transmission medium. A standard has also been defined for use in fiber optic cable networks. To accommodate these changes, appropriate additions were made to the original IEEE 802.3 standard. In 1995, an additional standard appeared for a faster version of Ethernet operating at a speed of 100 Mbit/s (the so-called Fast Ethernet, IEEE 802.3u standard), using twisted pair or fiber optic cable as the transmission medium. In 1997, a version with a speed of 1000 Mbit/s (Gigabit Ethernet, IEEE 802.3z standard) also appeared.



    In addition to the standard bus topology, passive star and passive tree topologies are increasingly being used. This involves the use of repeaters and repeater hubs that connect different parts (segments) of the network. As a result, a tree-like structure can be formed on segments of different types (Fig. 7.1).

    The segment (part of the network) can be a classic bus or a single subscriber. Coaxial cable is used for bus segments, while twisted pair and fiber optic cable are used for passive star spokes (for connecting single computers to a hub). The main requirement for the resulting topology is that it should not contain closed paths (loops). In fact, all subscribers end up connected to a physical bus, since the signal from each of them propagates in all directions at once and does not return back (as it would in a ring).

    The maximum cable length of the network as a whole (maximum signal path) can theoretically reach 6.5 kilometers, but practically does not exceed 3.5 kilometers.

    Fig. 7.1. Classic Ethernet network topology.

    A Fast Ethernet network does not have a physical bus topology; only a passive star or passive tree is used. In addition, Fast Ethernet has much more stringent requirements for the maximum network length. With a 10-fold increase in transmission speed and the same packet format, transmitting a minimum-length packet takes ten times less time. Accordingly, the permissible round-trip (double) signal propagation time through the network is reduced by a factor of 10 (5.12 μs versus 51.2 μs in Ethernet).
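
    To make the arithmetic explicit, the figures above can be checked with a trivial calculation (a Python sketch, assuming the standard 512-bit minimum Ethernet frame):

    # Back-of-the-envelope check of the timing budget quoted above,
    # assuming the standard 512-bit (64-byte) minimum Ethernet frame.
    MIN_FRAME_BITS = 512

    def min_frame_time_us(bit_rate_bps: float) -> float:
        """Time to transmit a minimum-length frame, in microseconds."""
        return MIN_FRAME_BITS / bit_rate_bps * 1e6

    print(min_frame_time_us(10e6))    # 51.2 us  - classic Ethernet
    print(min_frame_time_us(100e6))   # 5.12 us  - Fast Ethernet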

    The standard Manchester code is used to transmit information on an Ethernet network.
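
    The essence of the Manchester code - a mandatory transition in the middle of every bit interval, so that the receiver recovers the clock together with the data - can be illustrated with a minimal encoder sketch; the polarity convention chosen below is only one of the two used in practice:

    # Minimal Manchester encoder sketch: each bit becomes two half-bit levels
    # with a guaranteed transition in the middle of the bit interval.
    # The polarity convention (1 -> low-to-high) is one common variant.
    def manchester_encode(bits):
        halves = []
        for b in bits:
            halves.extend((0, 1) if b else (1, 0))
        return halves

    print(manchester_encode([1, 0, 1, 1]))  # [0, 1, 1, 0, 0, 1, 0, 1]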

    Access to the Ethernet network is carried out using the random CSMA/CD method, ensuring equality of subscribers. The network uses packets of variable length.

    For an Ethernet network operating at a speed of 10 Mbit/s, the standard defines four main types of network segments, focused on different information transmission media:

    · 10BASE5 (thick coaxial cable);

    · 10BASE2 (thin coaxial cable);

    · 10BASE-T (twisted pair);

    · 10BASE-FL (fiber optic cable).

    The name of the segment includes three elements: the number “10” means a transmission speed of 10 Mbit/s; the word BASE means transmission in the base frequency band (that is, without modulating a high-frequency carrier); and the last element is either the permissible segment length (“5” – 500 meters, “2” – 200 meters, more precisely 185 meters) or the type of communication line (“T” – twisted pair, from the English “twisted-pair”; “F” – fiber optic cable, from the English “fiber optic”).

    Similarly, for an Ethernet network operating at a speed of 100 Mbit/s (Fast Ethernet), the standard defines three types of segments, differing in the types of transmission media:

    · 100BASE-T4 (quad twisted pair);

    · 100BASE-TX (dual twisted pair);

    · 100BASE-FX (fiber optic cable).

    Here, the number “100” means a transmission speed of 100 Mbit/s, the letter “T” means twisted pair, and the letter “F” means fiber optic cable. The types 100BASE-TX and 100BASE-FX are sometimes combined under the name 100BASE-X, and 100BASE-T4 and 100BASE-TX are called 100BASE-T.
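
    For quick reference, these designations can be collected into a small lookup table; the sketch below is purely illustrative and covers only the variants named in this text:

    # Illustrative lookup of the segment designations discussed above.
    SEGMENT_TYPES = {
        "10BASE5":    ("10 Mbit/s",  "thick coaxial cable, 500 m segment"),
        "10BASE2":    ("10 Mbit/s",  "thin coaxial cable, 185 m segment"),
        "10BASE-T":   ("10 Mbit/s",  "twisted pair"),
        "10BASE-FL":  ("10 Mbit/s",  "fiber optic cable"),
        "100BASE-T4": ("100 Mbit/s", "four twisted pairs"),
        "100BASE-TX": ("100 Mbit/s", "two twisted pairs"),
        "100BASE-FX": ("100 Mbit/s", "fiber optic cable"),
    }

    def describe(designation: str) -> str:
        speed, medium = SEGMENT_TYPES[designation.upper()]
        return f"{designation}: {speed}, {medium}"

    print(describe("10BASE2"))  # 10BASE2: 10 Mbit/s, thin coaxial cable, 185 m segment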


    Token-Ring Network

    The Token-Ring network was proposed by IBM in 1985 (the first version appeared in 1980). It was intended to network all types of computers produced by IBM. The very fact that it is supported by IBM, the largest manufacturer of computer equipment, suggests that it needs to be given special attention. But equally important is that Token-Ring is currently an international standard, IEEE 802.5 (although there are minor differences between Token-Ring and IEEE 802.5). This puts this network on the same level of status as Ethernet.

    Token-Ring was developed as a reliable alternative to Ethernet. And although Ethernet is now replacing all other networks, Token-Ring cannot be considered hopelessly outdated. More than 10 million computers around the world are connected by this network.

    The Token-Ring network has a ring topology, although outwardly it looks more like a star. This is due to the fact that individual subscribers (computers) connect to the network not directly, but through special hubs or multiple access devices (MSAU or MAU - Multistation Access Unit). Physically, the network forms a star-ring topology (Fig. 7.3). In reality, the subscribers are still united in a ring, that is, each of them transmits information to one neighboring subscriber and receives information from another.

    Fig. 7.3. Star-ring topology of the Token-Ring network.

    The transmission medium in the IBM Token-Ring network was initially twisted pair, both unshielded (UTP) and shielded (STP), but then equipment options appeared for coaxial cable, as well as for fiber optic cable in the FDDI standard.

    Main technical characteristics of the classic version of the Token-Ring network:

    · maximum number of IBM 8228 MAU type hubs – 12;

    · maximum number of subscribers in the network – 96;

    · maximum cable length between the subscriber and the hub is 45 meters;

    · maximum cable length between hubs is 45 meters;

    · the maximum length of the cable connecting all hubs is 120 meters;

    · data transfer speed – 4 Mbit/s and 16 Mbit/s.

    All characteristics given refer to the case of using unshielded twisted pair cable. If a different transmission medium is used, network performance may vary. For example, when using shielded twisted pair (STP), the number of subscribers can be increased to 260 (instead of 96), the cable length can be increased to 100 meters (instead of 45), the number of hubs can be increased to 33, and the total length of the ring connecting the hubs can be up to 200 meters. Fiber optic cable allows you to increase the cable length up to two kilometers.

    To transmit information in Token-Ring, a biphase code is used (more precisely, a variant of it with a mandatory transition in the center of the bit interval). As with any star topology, no additional electrical termination or external grounding is required; impedance matching is performed by the hardware of the network adapters and hubs.

    To connect cables, the Token-Ring uses RJ-45 connectors (for unshielded twisted pair), as well as MIC and DB9P. The wires in the cable connect the connector contacts of the same name (that is, so-called “straight” cables are used).

    The Token-Ring network in its classic version is inferior to the Ethernet network both in terms of permissible size and the maximum number of subscribers. Regarding the transfer speed, there are currently versions of Token-Ring at 100 Mbit/s (High Speed Token-Ring, HSTR) and at 1000 Mbit/s (Gigabit Token-Ring). Companies supporting Token-Ring (including IBM, Olicom, Madge) do not intend to abandon their network, considering it a worthy competitor to Ethernet.

    Compared to Ethernet equipment, Token-Ring equipment is noticeably more expensive, since it uses a more complex method of managing the exchange, so the Token-Ring network has not become so widespread.

    However, unlike Ethernet, the Token-Ring network can handle high load levels (more than 30-40%) much better and provides guaranteed access time. This is necessary, for example, in industrial networks, where a delay in the response to an external event can lead to serious accidents.

    The Token-Ring network uses the classic token access method, that is, a token constantly circulates around the ring, to which subscribers can attach their data packets (see Fig. 4.15). This implies such an important advantage of this network as the absence of conflicts, but there are also disadvantages, in particular the need to control the integrity of the token and the dependence of the functioning of the network on each subscriber (in the event of a malfunction, the subscriber must be excluded from the ring).

    The maximum time for transmitting a packet in Token-Ring is 10 ms. With the maximum number of subscribers of 260, a full cycle of the ring will take 260 x 10 ms = 2.6 s. During this time, all 260 subscribers will be able to transmit their packets (if, of course, they have something to transmit). During this same time, the free token will definitely reach each subscriber. This same interval is the upper limit of the Token-Ring access time.
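
    The same worst-case estimate, written out as a trivial calculation:

    # Worst-case token rotation from the figures above: 260 subscribers,
    # each holding the token for at most 10 ms.
    MAX_HOLD_TIME_S = 0.010
    MAX_STATIONS = 260

    worst_case_access_s = MAX_STATIONS * MAX_HOLD_TIME_S
    print(worst_case_access_s)  # 2.6 s - upper bound on Token-Ring access time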


    Arcnet network

    The Arcnet network (or ARCnet, from the English Attached Resource Computer Net - a computer network of attached resources) is one of the oldest networks. It was developed by Datapoint Corporation back in 1977. There are no international standards for this network, although it is considered the ancestor of the token access method. Despite the lack of standards, the Arcnet network was popular until recently (in the 1980s and early 1990s), even seriously competing with Ethernet. A large number of companies produced equipment for this type of network. But now production of Arcnet equipment has practically ceased.

    Among the main advantages of the Arcnet network compared to Ethernet are its bounded access time, high reliability of communication, ease of diagnostics, and the relatively low cost of adapters. The most significant disadvantages of the network are its low information transmission speed (2.5 Mbit/s), as well as its addressing system and packet format.

    To transmit information on the Arcnet network, a rather rare code is used, in which a logical one corresponds to two pulses during a bit interval, and a logical zero corresponds to one pulse. Obviously, this is a self-clocking code that requires even greater cable bandwidth than Manchester code.

    The transmission medium in the network is a coaxial cable with a characteristic impedance of 93 Ohms, for example, brand RG-62A/U. Options with twisted pair (shielded and unshielded) are not widely used. Fiber optic cable options were also proposed, but they also did not save Arcnet.

    As a topology, the Arcnet network uses a classic bus (Arcnet-BUS), as well as a passive star (Arcnet-STAR). The star uses concentrators (hubs). It is possible to combine bus and star segments into a tree topology using hubs (as in Ethernet). The main limitation is that there should be no closed paths (loops) in the topology. Another limitation: the number of segments connected in a daisy chain using hubs should not exceed three.

    Thus, the topology of the Arcnet network is as follows (Fig. 7.15).

    Fig. 7.15. Arcnet network topology (B – adapters for bus operation, S – adapters for star operation).

    The main technical characteristics of the Arcnet network are as follows.

    · Transmission medium – coaxial cable, twisted pair.

    · The maximum network length is 6 kilometers.

    · The maximum cable length from the subscriber to the passive hub is 30 meters.

    · The maximum cable length from the subscriber to the active hub is 600 meters.

    · The maximum cable length between active and passive hubs is 30 meters.

    · The maximum cable length between active hubs is 600 meters.

    · The maximum number of subscribers in the network is 255.

    · The maximum number of subscribers on the bus segment is 8.

    · The minimum distance between subscribers in the bus is 1 meter.

    · The maximum length of the bus segment is 300 meters.

    · Data transfer speed – 2.5 Mbit/s.

    When creating complex topologies, it is necessary to ensure that the delay in signal propagation in the network between subscribers does not exceed 30 μs. The maximum signal attenuation in the cable at a frequency of 5 MHz should not exceed 11 dB.

    The Arcnet network uses a token access method (transfer of rights method), but it is somewhat different from that of the Token-Ring network. This method is closest to the one provided in the IEEE 802.4 standard.

    Just like with Token-Ring, conflicts are completely eliminated in Arcnet. Like any token-passing network, Arcnet handles heavy load well and guarantees a bounded access time (unlike Ethernet). The total time for the token to pass all subscribers is 840 ms. Accordingly, the same interval determines the upper limit of the network access time.

    The token is generated by a special subscriber – the network controller. This is the subscriber with the minimum (zero) address.


    FDDI network

    The FDDI network (from the English Fiber Distributed Data Interface, fiber-optic distributed data interface) is one of the latest developments in local network standards. The FDDI standard was proposed by the American National Standards Institute ANSI (ANSI specification X3T9.5). The ISO 9314 standard was then adopted, conforming to ANSI specifications. The level of network standardization is quite high.

    Unlike other standard local networks, the FDDI standard was initially focused on high transmission speeds (100 Mbit/s) and the use of the most promising fiber optic cable. Therefore, in this case, the developers were not constrained by the old standards, which focused on low speeds and electrical cables.

    The choice of optical fiber as a transmission medium determined such advantages of the new network as high noise immunity, maximum confidentiality of information transmission and excellent galvanic isolation of subscribers. High transmission speeds, which are much easier to achieve in the case of fiber optic cables, make it possible to solve many tasks that are not possible with lower-speed networks, for example, transmitting images in real time. In addition, fiber optic cable easily solves the problem of transmitting data over a distance of several kilometers without relaying, which makes it possible to build large networks that even cover entire cities and have all the advantages of local networks (in particular, a low error rate). All this determined the popularity of the FDDI network, although it is not yet as widespread as Ethernet and Token-Ring.

    The FDDI standard was based on the token access method provided for by the international standard IEEE 802.5 (Token-Ring). Minor differences from this standard are determined by the need to ensure high-speed information transfer over long distances. The FDDI network topology is a ring, the topology most suitable for fiber optic cable. The network uses two counter-directional fiber optic rings, one of which is usually held in reserve; however, this solution also allows full-duplex information transmission (simultaneously in two directions) at a doubled effective speed of 200 Mbit/s (each of the two channels operating at 100 Mbit/s). A star-ring topology with hubs included in the ring (as in Token-Ring) is also used.

    Main technical characteristics of the FDDI network.

    · The maximum number of network subscribers is 1000.

    · The maximum length of the network ring is 20 kilometers.

    · The maximum distance between network subscribers is 2 kilometers.

    · Transmission medium – multimode fiber optic cable (possibly using electrical twisted pair).

    · Access method – token.

    · Information transfer speed – 100 Mbit/s (200 Mbit/s for duplex transmission mode).

    The FDDI standard has significant advantages over all the networks discussed earlier. For example, a Fast Ethernet network with the same 100 Mbit/s bandwidth cannot match FDDI in terms of permissible network size. In addition, the FDDI token access method, unlike CSMA/CD, provides guaranteed access time and the absence of conflicts at any load level.

    The limitation on the total network length of 20 km is not due to signal attenuation in the cable, but to the need to limit the time it takes for a signal to completely travel around the ring to ensure maximum permissible access time. But the maximum distance between subscribers (2 km with a multimode cable) is determined precisely by the attenuation of the signals in the cable (it should not exceed 11 dB). It is also possible to use single-mode cable, in which case the distance between subscribers can reach 45 kilometers, and the total ring length can be 200 kilometers.

    There is also an implementation of FDDI on an electrical cable (CDDI - Copper Distributed Data Interface or TPDDI - Twisted Pair Distributed Data Interface). This uses a Category 5 cable with RJ-45 connectors. The maximum distance between subscribers in this case should be no more than 100 meters. The cost of network equipment on an electric cable is several times less. But this version of the network no longer has such obvious advantages over competitors as the original fiber-optic FDDI. Electrical versions of FDDI are much less standardized than fiber optic versions, so compatibility between equipment from different manufacturers is not guaranteed.

    To transmit data in FDDI, a 4B/5B code specially developed for this standard is used.
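
    For illustration, the commonly published 4B/5B data code groups can be put into a small encoder sketch (the control symbols defined by the standard, such as idle and delimiters, are omitted):

    # 4B/5B data code groups (FDDI / 100BASE-X): each 4-bit nibble is replaced
    # by a 5-bit group chosen so that the bit stream never contains long runs
    # without transitions.
    FOUR_B_FIVE_B = {
        0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
        0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
        0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
        0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
    }

    def encode_4b5b(data: bytes) -> str:
        groups = []
        for byte in data:
            groups.append(FOUR_B_FIVE_B[byte >> 4])    # high nibble first
            groups.append(FOUR_B_FIVE_B[byte & 0x0F])
        return "".join(groups)

    print(encode_4b5b(b"\x5A"))  # 0101110110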

    To achieve high network flexibility, the FDDI standard provides for the inclusion of two types of subscribers in the ring:

    · Class A subscribers (stations) (dual-attachment subscribers, DAS – Dual-Attachment Stations) are connected to both (internal and external) network rings. At the same time, the possibility of exchange at speeds of up to 200 Mbit/s or network cable redundancy is realized (if the main cable is damaged, a backup one is used). Equipment of this class is used in the most critical parts of the network in terms of performance.

    · Class B subscribers (stations) (single connection subscribers, SAS – Single-Attachment Stations) are connected to only one (external) network ring. They are simpler and cheaper than Class A adapters, but do not have their capabilities. They can only be connected to the network through a hub or bypass switch, which turns them off in the event of an emergency.

    In addition to the subscribers themselves (computers, terminals, etc.), the network uses Wiring Concentrators, the inclusion of which allows you to collect all connection points in one place in order to monitor the operation of the network, diagnose faults and simplify reconfiguration. When using different types of cables (for example, fiber optic cable and twisted pair), the hub also performs the function of converting electrical signals into optical signals and vice versa. Concentrators also come in dual connection (DAC - Dual-Attachment Concentrator) and single connection (SAC - Single-Attachment Concentrator).

    An example of an FDDI network configuration is shown in Fig. 8.1. The principle of combining network devices is illustrated in Fig. 8.2.

    Fig. 8.1. Example of FDDI network configuration.

    Unlike the access method proposed by the IEEE 802.5 standard, FDDI uses so-called multiple token passing. Whereas in the Token-Ring network a new (free) token is transmitted by a subscriber only after its packet has returned to it, in FDDI the new token is transmitted by the subscriber immediately after the end of its packet transmission (similar to the early token release, ETR, method in Token-Ring networks).

    In conclusion, it should be noted that despite the obvious advantages of FDDI, this network has not become widespread, which is mainly due to the high cost of its equipment (on the order of several hundred and even thousands of dollars). The main area of application of FDDI now is basic, core (Backbone) networks that combine several networks. FDDI is also used to connect powerful workstations or servers that require high-speed communication. It is expected that Fast Ethernet can supplant FDDI, but the advantages of fiber optic cable, token management and the record-breaking network size currently put FDDI ahead of the competition. And in cases where the cost of the equipment is critical, a twisted-pair version of FDDI (TPDDI) can be used in non-critical areas. In addition, the cost of FDDI equipment can greatly decrease as its production volume increases.


    100VG-AnyLAN network

    The 100VG-AnyLAN network is one of the latest developments in high-speed local area networks that has recently appeared on the market. It complies with the international standard IEEE 802.12, so its level of standardization is quite high.

    Its main advantages are high exchange speed, relatively low cost of equipment (about twice as expensive as the equipment of the most popular Ethernet 10BASE-T network), a centralized method of managing exchange without conflicts, as well as compatibility at the level of packet formats with Ethernet and Token-Ring networks.

    In the name of the 100VG-AnyLAN network, the number 100 corresponds to a speed of 100 Mbps, the letters VG indicate low-cost unshielded twisted pair Category 3 (Voice Grade), and AnyLAN (any network) indicates that the network is compatible with the two most common networks.

    Main technical characteristics of the 100VG-AnyLAN network:

    · Transfer speed – 100 Mbit/s.

    · Topology – star with expandability (tree). The number of cascading levels of concentrators (hubs) is up to 5.

    · Access method – centralized, conflict-free (Demand Priority – with a priority request).

    · Transmission media are quad unshielded twisted pair (UTP Category 3, 4 or 5 cable), dual twisted pair (UTP Category 5 cable), dual shielded twisted pair (STP), and fiber optic cable. Quad twisted pair cable is currently the most common option.

    · The maximum cable length between the hub and the subscriber and between hubs is 100 meters (for UTP cable category 3), 200 meters (for UTP cable category 5 and shielded cable), 2 kilometers (for fiber optic cable). The maximum possible network size is 2 kilometers (determined by acceptable delays).

    · The maximum number of subscribers is 1024, recommended – up to 250.

    Thus, the parameters of the 100VG-AnyLAN network are quite close to the parameters of the Fast Ethernet network. However, the main advantage of Fast Ethernet is its full compatibility with the most common Ethernet network (in the case of 100VG-AnyLAN, this requires a bridge). At the same time, the centralized control of 100VG-AnyLAN, which eliminates conflicts and guarantees maximum access time (which is not provided in the Ethernet network), also cannot be discounted.

    An example of the 100VG-AnyLAN network structure is shown in Fig. 8.8.

    The 100VG-AnyLAN network consists of a central (main, root) Level 1 hub, to which both individual subscribers and Level 2 hubs can be connected, to which subscribers and Level 3 hubs, in turn, can be connected, etc. In this case, the network can have no more than five such levels (in the original version there were no more than three). The maximum network size can be 1000 meters for unshielded twisted pair cable.

    Fig. 8.8. Network structure 100VG-AnyLAN.

    Unlike the non-intelligent hubs of other networks (for example, Ethernet, Token-Ring, FDDI), 100VG-AnyLAN network hubs are intelligent controllers that control access to the network. To do this, they continuously monitor the requests arriving on all ports. Hubs receive incoming packets and send them only to those subscribers to whom they are addressed. However, they do not perform any processing of the information, so the result is not quite an active star, though not a passive one either; the hubs cannot be considered full-fledged subscribers.

    Each of the hubs can be configured to work with Ethernet or Token-Ring packet formats. In this case, the hubs of the entire network must work with packets of only one format. Bridges are required to communicate with Ethernet and Token-Ring networks, but the bridges are quite simple.

    Hubs have one upper-level port (for connecting it to a higher-level hub) and several lower-level ports (for connecting subscribers). The subscriber can be a computer (workstation), server, bridge, router, switch. Another hub can also be connected to the lower level port.

    Each hub port can be set to one of two possible operating modes:

    · Normal mode involves forwarding to the subscriber connected to the port only packets addressed to him personally.

    · Monitor mode involves forwarding to the subscriber connected to the port all packets arriving at the hub. This mode allows one of the subscribers to control the operation of the entire network as a whole (perform a monitoring function).

    The 100VG-AnyLAN network access method is typical for star networks.

    When quad twisted pair cable is used, each of the four twisted pairs carries data at 30 Mbit/s. The total transmission speed is thus 120 Mbit/s. However, because of the 5B/6B code used, useful information is transmitted at only 100 Mbit/s. Thus, the cable bandwidth must be at least 15 MHz. Category 3 twisted pair cable (16 MHz bandwidth) satisfies this requirement.
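
    The arithmetic behind these figures, written out explicitly (the bandwidth estimate assumes simple two-level, NRZ-like signalling on each pair):

    # Four pairs at 30 Mbit/s each give 120 Mbit/s of raw rate on the cable;
    # the 5B/6B code leaves 5/6 of that, i.e. 100 Mbit/s, as useful data.
    pairs = 4
    raw_per_pair_mbps = 30

    raw_total_mbps = pairs * raw_per_pair_mbps      # 120
    payload_mbps = raw_total_mbps * 5 / 6           # 100.0
    # Assuming two-level (NRZ-like) signalling, each pair needs roughly half
    # its bit rate in cable bandwidth: about 15 MHz, within the Cat 3 limit.
    per_pair_bandwidth_mhz = raw_per_pair_mbps / 2
    print(raw_total_mbps, payload_mbps, per_pair_bandwidth_mhz)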

    Thus, the 100VG-AnyLAN network provides an affordable solution for increasing transmission speeds up to 100 Mbps. However, it is not fully compatible with any of the standard networks, so its future fate is problematic. In addition, unlike the FDDI network, it does not have any record parameters. Most likely, 100VG-AnyLAN, despite the support of reputable companies and a high level of standardization, will remain just an example of interesting technical solutions.

    Compared with the most common 100 Mbit/s network, Fast Ethernet, 100VG-AnyLAN allows twice the Category 5 UTP cable length (up to 200 meters) and offers a contention-free method of exchange control.

    Fast Ethernet – the IEEE 802.3u specification, officially adopted on October 26, 1995 – defines a link-layer protocol standard for networks operating over both copper and fiber-optic cable at 100 Mbit/s. The new specification is a successor to the IEEE 802.3 Ethernet standard and uses the same frame format, the same CSMA/CD media access mechanism, and a star topology. The evolution affected several physical layer configuration elements that increase capacity, including cable types, segment lengths, and the number of hubs.

    Physical layer

    The Fast Ethernet standard defines three types of 100 Mbit/s Ethernet signaling media (a brief summary sketch follows the list below).

    · 100Base-TX - two twisted pairs of wires. Transmission is carried out in accordance with the standard for data transmission over a twisted-pair physical medium developed by ANSI (American National Standards Institute). The twisted-pair data cable can be shielded or unshielded. Uses the 4B/5B data encoding algorithm and the MLT-3 physical encoding method.

    · 100Base-FX - two strands of fiber optic cable. Transmission is also carried out in accordance with the fiber optic communications standard developed by ANSI. Uses the 4B/5B data encoding algorithm and the NRZI physical encoding method.

    · 100Base-T4 is a specific specification developed by the IEEE 802.3u committee. According to this specification, data transmission is carried out over four twisted pairs of telephone cable, referred to as UTP Category 3 cable. It uses the 8B6T data encoding algorithm.
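
    A brief summary of the three variants, restating the media and line codes listed above:

    # Quick reference for the Fast Ethernet physical layer variants above.
    FAST_ETHERNET_PHY = {
        "100Base-TX": ("two twisted pairs",          "4B/5B + MLT-3"),
        "100Base-FX": ("two optical fibers",         "4B/5B + NRZI"),
        "100Base-T4": ("four twisted pairs (Cat 3)", "8B6T"),
    }

    for name, (medium, coding) in FAST_ETHERNET_PHY.items():
        print(f"{name}: {medium}, {coding}")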

    Multimode cable

    This type of fiber optic cable uses a fiber with a core diameter of 50 or 62.5 micrometers and an outer cladding thickness of 125 micrometers. This cable is called a multimode optical cable with 50/125 (62.5/125) micrometer fibers. To transmit a light signal via a multimode cable, an LED transceiver with a wavelength of 850 (820) nanometers is used. If a multimode cable connects two full-duplex switch ports, it can be up to 2000 meters long.

    Singlemode cable

    A single-mode fiber optic cable has a smaller core diameter (about 10 micrometers) than a multimode cable, and a laser transceiver is used for transmission over it; together, this ensures efficient transmission over long distances. The wavelength of the transmitted light signal is 1300 nanometers, known as the zero dispersion wavelength. In a single-mode cable, dispersion and signal loss are very small, which allows light signals to be transmitted over much longer distances than with multimode fiber.


    38. Gigabit Ethernet technology, general characteristics, physical medium specifications, basic concepts.
    3.7.1. General characteristics of the standard

    Quite soon after Fast Ethernet products appeared on the market, network integrators and administrators felt certain limitations when building corporate networks. In many cases, servers connected via 100-Mbit channels overloaded network backbones that also operated at 100 Mbit/s - FDDI and Fast Ethernet backbones. There was a need for the next level of the speed hierarchy. In 1995, a higher level of speed could be provided only by ATM switches, but in the absence at that time of convenient means of migrating this technology to local networks (although the LAN Emulation, LANE, specification was adopted in early 1995, its practical implementation still lay ahead), almost no one dared to deploy ATM in a local network. In addition, ATM technology was very expensive.

    Therefore, the next logical step taken by the IEEE was that, five months after the final adoption of the Fast Ethernet standard in June 1995, the IEEE High Speed Technology Research Group was instructed to consider the possibility of developing an Ethernet standard with an even higher bit rate.

    In the summer of 1996, the creation of the 802.3z group was announced to develop a protocol as similar as possible to Ethernet but with a bit rate of 1000 Mbit/s. As with Fast Ethernet, the announcement was received with great enthusiasm by Ethernet proponents.



    The main reason for the enthusiasm was the prospect of the same smooth transfer of network backbones to Gigabit Ethernet, just as overloaded Ethernet segments located at the lower levels of the network hierarchy had been transferred to Fast Ethernet. In addition, experience in transmitting data at gigabit speeds already existed, both in wide area networks (SDH technology) and in local networks - Fiber Channel technology, which is used mainly to connect high-speed peripherals to large computers and transmits data over fiber-optic cable at speeds close to a gigabit using the redundant 8B/10B code.

    The first version of the standard was reviewed in January 1997, and the 802.3z standard was finally adopted on June 29, 1998 at a meeting of the IEEE 802.3 committee. Work on implementing Gigabit Ethernet on Category 5 twisted pair cables was transferred to a special 802.3ab committee, which has already considered several draft options for this standard, and since July 1998 the project has become fairly stable. Final adoption of the 802.3ab standard is expected in September 1999.

    Without waiting for the standard to be adopted, some companies released the first Gigabit Ethernet equipment on fiber optic cable by the summer of 1997.

    The main idea of the developers of the Gigabit Ethernet standard is to preserve as much as possible the ideas of classical Ethernet technology while achieving a bit speed of 1000 Mbit/s.

    Since it is natural to expect some technical innovations in any new technology, in line with the general trend of network technology development, it is important to note that Gigabit Ethernet, like its lower-speed counterparts, will not support at the protocol level:

    • quality of service;
    • redundant connections;
    • testing the performance of nodes and equipment (in the latter case, with the exception of testing port-to-port communication, as is done for Ethernet 10Base-T and 10Base-F and Fast Ethernet).

    All three of these properties are considered very promising and useful in modern networks, and especially in networks of the near future. Why do the authors of Gigabit Ethernet abandon them?

    The main idea of the developers of Gigabit Ethernet technology is that there are and will be many networks in which the high speed of the backbone and the ability to assign priorities to packets in switches will be quite sufficient to ensure the quality of transport service to all network clients. Only in those rare cases when the backbone is heavily congested and the quality-of-service requirements are very stringent is it necessary to use ATM technology, which, due to its high technical complexity, really does guarantee quality of service for all major types of traffic.


    39. Structural cabling system used in network technologies.
    A structured cabling system (SCS) is a set of switching elements (cables, connectors, patch panels, and cabinets), together with a technique for using them jointly, which allows you to create regular, easily expandable connection structures in computer networks.

    A structured cabling system is a kind of “constructor” with the help of which the network designer builds the configuration he needs from standard cables connected by standard connectors and switched on standard cross-connect panels. If necessary, the connection configuration can be easily changed - add a computer, segment, switch, remove unnecessary equipment, and also change connections between computers and hubs.

    When building a structured cabling system, it is assumed that each workplace in the enterprise must be equipped with sockets for connecting a telephone and computer, even if this is not currently required. That is, a good structured cabling system is built redundant. This can save money in the future, since changes in connecting new devices can be made by reconnecting existing cables.

    A typical hierarchical structure of a structured cabling system includes:

    • horizontal subsystems (within a floor);
    • vertical subsystems (inside the building);
    • campus subsystem (within one territory with several buildings).

    The horizontal subsystem connects the floor cross-connect cabinet to the users' sockets; subsystems of this type correspond to the floors of the building. The vertical subsystem connects the cross-connect cabinets of the individual floors with the central equipment room of the building. The next step in the hierarchy is the campus subsystem, which connects several buildings to the main equipment room of the entire campus; this part of the cabling system is usually called the backbone.

    Using a structured cabling system instead of haphazardly routed cables provides many benefits to a business.

    · Versatility. A structured cabling system, with thoughtful organization, can become a single environment for transmitting computer data on a local computer network, organizing a local telephone network, transmitting video information, and even transmitting signals from fire safety sensors or security systems. This allows you to automate many processes of control, monitoring and management of economic services and life support systems of the enterprise.

    · Increased service life. A well-designed structured cabling system may remain usable for 10-15 years before becoming obsolete.

    · Reduced cost of adding new users and relocating them. It is known that the cost of a cable system is significant and is determined mainly not by the cost of the cable, but by the cost of laying it. Therefore, it is more profitable to lay the cable once, perhaps with a larger margin in length, than to lay it several times, extending the cable each time. With this approach, all work on adding or moving a user is reduced to connecting the computer to an existing outlet.

    · Possibility of easy network expansion. The structured cabling system is modular and therefore easy to expand. For example, you can add a new subnet to a backbone without having any impact on existing subnets. You can change the cable type on a specific subnet independently of the rest of the network. Structured cabling is the basis for dividing the network into easily manageable logical segments, since it itself is already divided into physical segments.

    · Providing more efficient service. A structured cabling system makes maintenance and troubleshooting easier than a bus cabling system. With a bus-based cable system, the failure of one of the devices or connecting elements leads to a difficult-to-localize failure of the entire network. In structured cabling systems, the failure of one segment does not affect the others, since the combination of segments is carried out using hubs. Concentrators diagnose and localize the faulty area.

    · Reliability. A structured cabling system has increased reliability because the manufacturer of such a system guarantees not only the quality of its individual components, but also their compatibility.


    40. Hubs and network adapters, principles, use, basic concepts.
    Hubs, together with network adapters and the cable system, represent the minimum set of equipment with which a local network can be created. Such a network will represent a single shared medium.

    The network interface card (NIC), together with its driver, implements the second, data link layer of the open systems model in the end node of the network - the computer. More precisely, in a network operating system the adapter-driver pair performs only the functions of the physical and MAC layers, while the LLC layer is usually implemented by an operating system module common to all drivers and network adapters. Actually, this is how it should be in accordance with the IEEE 802 protocol stack model. For example, in Windows NT the LLC level is implemented in the NDIS module, common to all network adapter drivers, regardless of which technology the driver supports.

    The network adapter together with the driver performs two operations: frame transmission and reception.

    In adapters for client computers, a significant part of the work is shifted to the driver, making the adapter simpler and cheaper. The disadvantage of this approach is the high degree of load on the computer's central processor with routine work on transferring frames from the computer's RAM to the network. The central processor is forced to do this work instead of performing the user's application tasks.

    The network adapter must be configured before installation in the computer. When configuring an adapter, you typically specify the IRQ number used by the adapter, the DMA channel number (if the adapter supports DMA mode), and the base address of the I/O ports.

    Almost all modern local network technologies define a device that goes by several equivalent names - concentrator, hub, repeater. Depending on the area of application of this device, the set of its functions and its design change significantly; only the main function remains unchanged - the repetition of frames either on all ports (as defined in the Ethernet standard) or only on certain ports, according to the algorithm defined by the relevant standard.

    A hub usually has several ports, to which the end nodes of the network - computers - are connected using separate physical cable segments. The hub combines individual physical segments of the network into a single shared medium, access to which is carried out in accordance with one of the local network protocols considered - Ethernet, Token Ring, etc. Since the logic of access to a shared medium depends significantly on the technology, separate hubs are produced for each type of technology - Ethernet, Token Ring, FDDI, 100VG-AnyLAN. For a specific protocol, a highly specialized name for this device is sometimes used that more accurately reflects its functions or is used by tradition; for example, the name MSAU is typical for Token Ring concentrators.

    Each hub performs some basic function, defined in the corresponding protocol of the technology it supports. Although this feature is defined in some detail in the technology standard, when it is implemented, hubs from different manufacturers may differ in such details as the number of ports, support for multiple cable types, etc.

    In addition to the main function, the hub can perform a number of additional functions, which are either not defined in the standard at all or are optional. For example, a Token Ring hub can perform the function of disabling incorrectly operating ports and switching to a backup ring, although such capabilities are not described in the standard. The hub turned out to be a convenient device for performing additional functions that facilitate network control and operation.


    41. Use of bridges and switches, principles, features, examples, limitations
    Structuring with bridges and switches

    The network can be divided into logical segments using two types of devices - bridges and/or switches (switching hubs).

    A bridge and a switch are functional twins: both devices forward frames based on the same algorithms. Bridges and switches use one of two types of algorithm: the transparent bridge algorithm described in the IEEE 802.1D standard, or IBM's source routing bridge algorithm for Token Ring networks. These standards were developed long before the first switch appeared, so they use the term “bridge.” When the first industrial model of an Ethernet switch appeared, it performed the same IEEE 802.1D frame forwarding algorithm that had been refined over ten years by bridges in local and wide area networks.

    The main difference between a switch and a bridge is that a bridge processes frames sequentially, while a switch processes frames in parallel. This circumstance is due to the fact that bridges appeared in the days when the network was divided into a small number of segments and inter-segment traffic was small (it obeyed the 80/20 rule).

    Today, bridges still work in networks, but only on fairly slow wide-area links between two remote local networks. Such bridges are called remote bridges, and their operating algorithm is no different from that of the 802.1D or Source Routing standards.

    Transparent bridges can, in addition to transmitting frames within the same technology, translate local network protocols, for example Ethernet to Token Ring, FDDI to Ethernet, etc. This property of transparent bridges is described in the IEEE 802.1H standard.

    In what follows, we will call a device that forwards frames using a bridge algorithm and operates in a local network by the modern term “switch.” When describing the 802.1D and Source Routing algorithms themselves in the next section, we will traditionally call the device a bridge, as it is actually called in these standards.


    42. Switches for local networks, protocols, operating modes, examples.
    Consider the structure of a typical switch in which each of the eight 10Base-T ports is served by its own Ethernet Packet Processor (EPP). In addition, the switch has a system module that coordinates the operation of all EPP processors. The system module maintains the switch's common address table and provides management of the switch via the SNMP protocol. To transfer frames between ports, a switching matrix is used, similar to those found in telephone exchanges or multiprocessor computers, connecting multiple processors to multiple memory modules.

    The switching matrix operates on the principle of circuit switching. For 8 ports, the matrix can provide 8 simultaneous internal channels when the ports operate in half-duplex mode and 16 in full-duplex mode, when the transmitter and receiver of each port operate independently of each other.

    When a frame arrives at any port, the EPP processor buffers the first few bytes of the frame to read the destination address. After receiving the destination address, the processor immediately decides to transmit the packet, without waiting for the remaining bytes of the frame to arrive.

    If the frame needs to be transmitted to another port, the processor accesses the switching matrix and tries to establish a path in it connecting its port with the port through which the route to the destination address lies. The switching matrix can do this only if the destination port is free at that moment, that is, not connected to another port. If the port is busy, then, as in any circuit-switched device, the matrix refuses the connection. In this case, the frame is fully buffered by the input port processor, after which the processor waits for the output port to become free and for the switching matrix to form the desired path. Once the desired path is established, the buffered bytes of the frame are sent into it and received by the output port processor. As soon as the output port processor gains access to the Ethernet segment attached to it using the CSMA/CD algorithm, the bytes of the frame immediately begin to be transmitted into the network. The described method of transmitting a frame without fully buffering it is called “on-the-fly” or “cut-through” switching.

    The main reason for the increased network performance when using a switch is the parallel processing of several frames. This effect is illustrated in Fig. 4.26. The figure shows an ideal situation in terms of performance: four of the eight ports transmit data at the maximum speed of 10 Mbit/s for the Ethernet protocol, and they transmit this data to the remaining four ports of the switch without conflicts - the data flows between network nodes are distributed so that each frame-receiving port has its own output port. If the switch manages to process the input traffic even at the maximum rate of frame arrival on the input ports, then the overall performance of the switch in this example is 4 x 10 = 40 Mbit/s, and, generalizing the example to N ports, (N/2) x 10 Mbit/s. A switch is therefore said to provide each station or segment connected to its ports with dedicated protocol bandwidth.

    Naturally, the situation in the network is not always as shown in Fig. 4.26. If two stations, for example those connected to ports 3 and 4, simultaneously need to write data to the same server connected to port 8, the switch cannot allocate a 10 Mbit/s data stream to each station, since port 8 cannot transmit data at 20 Mbit/s. The stations' frames will wait in the internal queues of input ports 3 and 4 until port 8 is free to transmit the next frame. Obviously, a good solution for such a distribution of data flows would be to connect the server to a higher-speed port, for example Fast Ethernet. Since the main advantage of a switch, thanks to which it has gained a very strong position in local networks, is its high performance, switch developers try to produce so-called non-blocking switch models.
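
    The ideal-case throughput estimate from this example can be written as a trivial helper:

    # Aggregate throughput in the ideal case described above: N/2 disjoint
    # port pairs, each exchanging frames at the full Ethernet rate.
    def ideal_switch_throughput_mbps(n_ports: int, port_rate_mbps: int = 10) -> int:
        return (n_ports // 2) * port_rate_mbps

    print(ideal_switch_throughput_mbps(8))   # 40 Mbit/s for the 8-port example
    print(ideal_switch_throughput_mbps(24))  # 120 Mbit/s for a 24-port switch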


    43. Algorithm for the operation of a transparent bridge.
    Transparent bridges are invisible to the network adapters of the end nodes, since they independently build a special address table, based on which they can decide whether the incoming frame needs to be transmitted to some other segment or not. Network adapters using transparent bridges work exactly the same as when they are not, that is, they do not take any additional action to ensure that the frame passes through the bridge. The transparent bridging algorithm is independent of the LAN technology in which the bridge is installed, so Ethernet transparent bridges operate exactly the same as FDDI transparent bridges.

    A transparent bridge builds its address table by passively observing the traffic flowing on the segments connected to its ports. In this case, the bridge takes into account the addresses of the sources of data frames arriving at the bridge ports. Based on the address of the frame source, the bridge concludes that this node belongs to one or another network segment.

    Let's consider the process of automatically creating a bridge address table and using it using the example of a simple network shown in Fig. 4.18.

    Fig. 4.18. Operating principle of a transparent bridge

    A bridge connects two logical segments. Segment 1 consists of computers connected using one piece of coaxial cable to port 1 of the bridge, and segment 2 consists of computers connected using another piece of coaxial cable to port 2 of the bridge.

    Each bridge port operates as an end node of its segment, with one exception - a bridge port does not have its own MAC address. The bridge port operates in so-called promiscuous packet capture mode, in which all packets arriving on the port are stored in buffer memory. Using this mode, the bridge monitors all traffic transmitted on the segments attached to it and uses the packets passing through it to learn the composition of the network. Since all packets are written to the buffer, the bridge does not need a port address.

    In its initial state, the bridge knows nothing about which computers with which MAC addresses are connected to each of its ports. Therefore, in this case, the bridge simply forwards any captured and buffered frame to all its ports except the one from which the frame was received. In our example, the bridge only has two ports, so it transmits frames from port 1 to port 2, and vice versa. When the bridge is about to transmit a frame from segment to segment, for example from segment 1 to segment 2, it again tries to access segment 2 as an end node according to the rules of the access algorithm, in this example the rules of the CSMA/CD algorithm.

    Simultaneously with the transmission of the frame to all ports, the bridge examines the source address of the frame and makes a new entry about its ownership in its address table, which is also called the filtering or routing table.

    Once the bridge has gone through the learning phase, it can operate more efficiently. When receiving a frame directed, for example, from computer 1 to computer 3, the bridge looks in the address table for an entry matching destination address 3. Since such an entry exists, the bridge performs the second stage of table analysis - it checks whether the computers with the source address (in our case, address 1) and the destination address (address 3) are in the same segment. Since in our example they are in different segments, the bridge performs the forwarding operation - it transmits the frame to the other port, having first gained access to the other segment.

    If the destination address is unknown, then the bridge transmits the frame to all its ports except the frame source port, as at the initial stage of the learning process.
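
    The learning and forwarding logic just described can be condensed into a short sketch (a toy model: MAC addresses and port numbers are arbitrary placeholders, and aging of table entries, frame contents and timing are ignored):

    # Minimal transparent bridge sketch: learn source addresses, then filter
    # or forward by destination address, flooding when the destination is unknown.
    class TransparentBridge:
        def __init__(self, ports):
            self.ports = list(ports)
            self.table = {}                      # MAC address -> port

        def receive(self, in_port, src_mac, dst_mac):
            self.table[src_mac] = in_port        # learning
            out_port = self.table.get(dst_mac)
            if out_port is None:                 # unknown destination: flood
                return [p for p in self.ports if p != in_port]
            if out_port == in_port:              # same segment: filter (drop)
                return []
            return [out_port]                    # known destination: forward

    bridge = TransparentBridge(ports=[1, 2])
    print(bridge.receive(1, "MAC-1", "MAC-3"))   # [2] - flooded, destination unknown
    print(bridge.receive(2, "MAC-3", "MAC-1"))   # [1] - forwarded to the learned port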


    44. Bridges with source routing.
    Source-routing bridges are used to connect Token Ring and FDDI rings, although transparent bridges can also be used for the same purpose. Source Routing (SR) is based on the fact that the sending station places in the frame sent to another ring all the address information about the intermediate bridges and rings that the frame must pass through before reaching the ring to which the destination station is connected.

    Let's look at the principles of operation of Source Routing bridges (hereinafter referred to as SR bridges) using the example of the network shown in Fig. 4.21. The network consists of three rings connected by three bridges. To specify a route, rings and bridges have identifiers. SR bridges do not build an address table; when forwarding frames, they use the information contained in the corresponding fields of the data frame.

    Fig. 4.21. Source Routing bridges

    When receiving each packet, the SR bridge only needs to look at the Routing Information Field (RIF) in the Token Ring or FDDI frame to see whether it contains its identifier. If it is present there and is accompanied by the identifier of a ring connected to this bridge, the bridge copies the incoming frame onto the specified ring. Otherwise, the frame is not copied to another ring. In either case, the original copy of the frame is returned along the original ring to the sending station, and if it has been transmitted to another ring, then the A (address recognized) and C (frame copied) bits of the frame status field are set to 1 to inform the sending station that the frame was received by the destination station (in this case, transmitted by a bridge to another ring).

    Since routing information in a frame is not always needed, but only for frame transmission between stations connected to different rings, the presence of the RIF field in the frame is indicated by setting the individual/group address (I/G) bit to 1 (in this case, the bit is not used for its direct purpose, since a source address is always individual).

    The RIF field has a control subfield consisting of three parts.

    • The frame type defines the RIF field type. Different types of RIF fields are used for route discovery and for sending a frame along a known route.
    • The maximum frame length field is used by a bridge to link rings that have different MTU values. Using this field, the bridge notifies the station of the maximum possible frame length (that is, the minimum MTU value along the entire composite route).
    • The RIF field length is necessary because the number of route descriptors specifying the identifiers of the rings and bridges being crossed is not known in advance.

    To operate the source routing algorithm, two additional frame types are used - a single-route broadcast frame (SRBF) and an all-routes broadcast frame (ARBF).

    All SR bridges must be manually configured by the administrator to forward ARBF frames on all ports except the frame's source port, and for SRBF frames, some bridge ports must be blocked to prevent loops in the network.
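
    The copy decision an SR bridge makes from the routing information can be sketched roughly as follows, under the simplifying assumption that the route is already available as a list of (ring, bridge) identifier pairs; the actual RIF bit layout and the end-of-route marker are treated only schematically here:

    # Rough sketch of the SR-bridge copy decision: a bridge copies the frame
    # onto the next ring named in the route only if its own identifier appears
    # between two rings it is attached to.
    def ring_to_copy_to(route, bridge_id, attached_rings):
        """route is a list of (ring_id, bridge_id) pairs; returns the ring to
        copy the frame onto, or None if the frame should be ignored."""
        for i, (ring_id, br_id) in enumerate(route[:-1]):
            next_ring = route[i + 1][0]
            if (br_id == bridge_id and ring_id in attached_rings
                    and next_ring in attached_rings):
                return next_ring
        return None

    # Route: ring 1 -> bridge 1 -> ring 2 -> bridge 2 -> ring 3 (0 ends the route)
    route = [(1, 1), (2, 2), (3, 0)]
    print(ring_to_copy_to(route, bridge_id=1, attached_rings={1, 2}))  # 2
    print(ring_to_copy_to(route, bridge_id=3, attached_rings={1, 3}))  # None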

    Advantages and Disadvantages of Source Routing Bridges

    45. Switches: technical implementation, functions, characteristics affecting their operation.
    Features of the technical implementation of switches. Many first-generation switches were similar to routers, that is, they were based on a general-purpose central processor connected to interface ports via an internal high-speed bus. The main disadvantage of such switches was their low speed. The universal processor could not cope with the large volume of specialized operations for sending frames between interface modules. In addition to processor chips, for successful non-blocking operation, the switch also needs to have a high-speed node for transmitting frames between port processor chips. Currently, switches use one of three schemes as a base, on which such an exchange node is built:

    • switching matrix;
    • shared multi-input memory;
    • common bus.

    Ethernet, for all its success, was never elegant. Ethernet network cards have only a rudimentary notion of intelligence: they actually send the packet first, and only then look to see whether anyone else was transmitting data at the same time. Someone has compared Ethernet to a society in which people can communicate with each other only by all shouting at once.

    Like its predecessor, Fast Ethernet uses the CSMA/CD method (Carrier Sense Multiple Access with Collision Detection). Behind this long and obscure acronym hides a very simple technology. When an Ethernet card needs to send a message, it first waits for silence, then sends the packet while simultaneously listening to check whether anyone else sent a message at the same time. If that happened, neither packet reaches its destination. If there was no collision but the card needs to keep transmitting, it still waits a few microseconds before attempting to send the next portion. This is done so that other cards can also work and no one can seize the channel exclusively. In the event of a collision, both devices fall silent for a short, randomly generated period of time and then make a new attempt to transfer the data.
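
    The behaviour just described can be condensed into a small illustrative sketch; the helper functions and the timing values are assumptions made for the example, not part of any real driver API:

    import random
    import time

    # Trivial stand-ins so the sketch runs; a real adapter senses the physical medium.
    def channel_is_busy():      return False
    def collision_detected():   return random.random() < 0.1   # pretend 10% of sends collide
    def send_bits(frame):       pass
    def wait_microseconds(us):  time.sleep(us / 1_000_000)

    def csma_cd_transmit(frame, max_attempts=16, gap_us=9.6):
        """Illustrative CSMA/CD transmit loop following the description above."""
        for attempt in range(max_attempts):
            while channel_is_busy():            # 1. wait for silence on the medium
                pass
            send_bits(frame)                    # 2. transmit and listen at the same time
            if not collision_detected():
                wait_microseconds(gap_us)       # 3. short pause so other cards can work too
                return True
            # 4. collision: fall silent for a short, randomly chosen time and retry
            wait_microseconds(random.uniform(0, 50))
        return False

    print(csma_cd_transmit(b"hello"))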

    Because of collisions, neither Ethernet nor Fast Ethernet can ever reach its maximum performance of 10 or 100 Mbit/s. As soon as network traffic starts to grow, the intervals between individual packets shrink and the number of collisions increases. Real Ethernet performance cannot exceed 70% of its potential throughput, and it may be even lower if the line is seriously overloaded.

    Ethernet uses a maximum frame size of 1518 bytes, which was perfectly adequate when the technology was first created. Today this is considered a disadvantage when Ethernet is used to link servers, since servers and communication lines tend to exchange a large number of small packets, which overloads the network. In addition, Fast Ethernet imposes a limit on the distance between connected devices - no more than 100 meters - and this forces extra caution when designing such networks.

    Ethernet was originally designed around a bus topology, in which all devices were connected to a common cable, thin or thick. The use of twisted pair changed the protocol only partially. With coaxial cable, a collision was detected by all stations at once. With twisted pair, a "jam" signal is used: as soon as a station detects a collision, it sends a signal to the hub, which in turn sends the "jam" out to all devices connected to it.

    To reduce congestion, Ethernet networks are divided into segments joined by bridges and routers. This makes it possible to pass only the necessary traffic between segments. A message sent between two stations in the same segment will not be transferred to another segment and cannot cause an overload there.

    Today, switched Ethernet is used to build the central backbone that unites the servers. Ethernet switches can be regarded as high-speed multiport bridges that are able to determine on their own which of their ports a packet is addressed to. The switch examines packet headers and in this way builds a table that determines where a subscriber with a given physical address is located. This makes it possible to limit the area over which a packet propagates and to reduce the likelihood of overflow by sending it only to the needed port. Only broadcast packets are sent out through all ports.
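
    A minimal sketch of the address-learning behaviour described above (the frame representation and port numbering are illustrative assumptions; real switches also age out table entries, which is omitted here):

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    class LearningSwitch:
        def __init__(self, port_count):
            self.port_count = port_count
            self.mac_table = {}                      # physical (MAC) address -> port number

        def handle_frame(self, src, dst, in_port):
            """Return the list of ports the frame should be sent out of."""
            self.mac_table[src] = in_port            # learn where the sender lives
            if dst == BROADCAST or dst not in self.mac_table:
                # broadcast (or not-yet-learned) destinations go to all ports except the source
                return [p for p in range(self.port_count) if p != in_port]
            return [self.mac_table[dst]]             # known subscriber: only the needed port

    sw = LearningSwitch(4)
    print(sw.handle_frame("aa", "bb", in_port=0))    # "bb" unknown yet: flooded to ports 1-3
    print(sw.handle_frame("bb", "aa", in_port=2))    # "aa" already learned: sent only to port 0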

    100BaseT: the big brother of 10BaseT

    The idea of Fast Ethernet technology was born in 1992. In August of the following year a group of manufacturers merged into the Fast Ethernet Alliance (FEA). The FEA's goal was to obtain formal approval of Fast Ethernet as quickly as possible from committee 802.3 of the Institute of Electrical and Electronics Engineers (IEEE), since it is this committee that deals with Ethernet standards. Luck was on the side of the new technology and the alliance supporting it: in June 1995 all formal procedures were completed, and Fast Ethernet technology received the designation 802.3u.

    With the IEEE's light hand, Fast Ethernet is called 100BaseT. The explanation is simple: 100BaseT is an extension of the 10BaseT standard with throughput raised from 10 Mbit/s to 100 Mbit/s. The 100BaseT standard includes the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) protocol for multiple access with carrier sensing and collision detection, which is also used in 10BaseT. In addition, Fast Ethernet can operate over cables of several types, including twisted pair. Both of these properties of the new standard are very important for potential buyers, and it is thanks to them that 100BaseT turns out to be a successful way to migrate networks based on 10BaseT.

    The main selling point of 100BaseT is that Fast Ethernet is based on inherited technology. Since Fast Ethernet uses the same message transfer protocol as older versions of Ethernet, and the cable systems of these standards are compatible, migrating to 100BaseT from 10BaseT requires a smaller capital investment than installing other types of high-speed networks. Moreover, since 100BaseT is a continuation of the old Ethernet standard, all the tools and procedures for analyzing network operation, as well as all the software running on old Ethernet networks, should remain functional under this standard. Therefore, the 100BaseT environment will be familiar to network administrators with Ethernet experience, which means that staff training will take less time and cost considerably less.

    PRESERVING THE PROTOCOL

    Perhaps the greatest practical benefit of the new technology comes from the decision to leave the message transfer protocol unchanged. The message transfer protocol, in our case CSMA/CD, defines the way data is transmitted over the network from one node to another through the cable system. In the ISO/OSI model the CSMA/CD protocol is part of the media access control (MAC) layer. This layer defines the format in which information is transmitted over the network and the way a network device gains access to the network (or control of the network) for data transfer.

    The name CSMA/CD can be broken into two parts: Carrier Sense Multiple Access and Collision Detection. The first part of the name explains how a node with a network adapter determines the moment at which it should send a message. In accordance with the CSMA protocol, the node first "listens" to the network to determine whether some other message is being transmitted at that moment. If a carrier tone is heard, the network is currently occupied by another message, and the node enters a waiting state and remains in it until the network is free. When silence falls on the network, the node begins its transmission. In fact, the data is sent to all nodes of the network or segment, but is accepted only by the node to which it is addressed.

    Collision Detection, the second part of the name, serves to resolve situations in which two or more nodes try to transmit messages simultaneously. According to the CSMA protocol, every node that is ready to transmit must first listen to the network to determine whether it is free. However, if two nodes listen at the same moment, both will decide that the network is free and will begin transmitting their packets simultaneously. In this situation the transmitted data overlap (network engineers call this a collision), and neither of the messages reaches its destination. Collision Detection requires that the node keep listening to the network after transmitting a packet as well. If a collision is detected, the node repeats the transmission after a randomly chosen period of time and checks again whether a collision has occurred.
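
    In classical Ethernet the "randomly chosen period of time" is picked by truncated binary exponential backoff; a rough sketch is given below (the 51.2 microsecond slot time corresponds to 512 bit times at 10 Mbit/s, and the whole fragment is an illustration rather than a driver implementation):

    import random

    SLOT_TIME_US = 51.2        # 512 bit times at 10 Mbit/s; ten times shorter for Fast Ethernet

    def backoff_delay_us(collision_count):
        """After the n-th successive collision the node waits a random number of
        slot times in the range [0, 2**min(n, 10) - 1] before retrying;
        after 16 unsuccessful attempts the frame is discarded."""
        k = min(collision_count, 10)
        return random.randint(0, 2 ** k - 1) * SLOT_TIME_US

    print(backoff_delay_us(1))   # 0 or 51.2
    print(backoff_delay_us(3))   # one of 0, 51.2, ..., 358.4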

    THREE TYPES OF FAST ETHERNET

    Along with keeping the CSMA/CD protocol, another important decision was to design 100BaseT so that it can use cables of different types - both those used in older versions of Ethernet and newer ones. The standard defines three modifications to support different cable types: 100BaseTX, 100BaseT4 and 100BaseFX. The 100BaseTX and 100BaseT4 modifications are designed for twisted pair, while 100BaseFX was designed for optical cable.

    The 100BaseTX standard requires the use of two pairs of UTP or STP. One pair serves for transmission, the other for reception. These requirements are met by two main cabling standards: EIA/TIA-568 Category 5 UTP and IBM Type 1 STP. Attractive features of 100BaseTX are support for full-duplex mode when working with network servers, as well as the use of only two of the four pairs of an eight-core cable; the other two pairs remain free and can later be used to expand the network's capabilities.

    However, if you are going to work with 100BaseTX using Category 5 wiring, you should be aware of its shortcomings. This cable is more expensive than other eight-core cables (for example, Category 3). In addition, working with it requires punchdown blocks, connectors and patch panels that meet Category 5 requirements. It should be added that to support full-duplex mode, full-duplex switches must be installed.

    The 100BaseT4 standard has more lenient requirements for the cable used. The reason is that 100BaseT4 uses all four pairs of an eight-core cable: one for transmission, another for reception, and the remaining two work both for transmission and for reception. Thus, in 100BaseT4 both reception and transmission of data can be carried out over three pairs. By splitting the 100 Mbit/s across three pairs, 100BaseT4 reduces the signal frequency, so a lower-quality cable is sufficient to carry it. 100BaseT4 networks can be implemented with UTP Category 3 and 4 cable, as well as UTP Category 5 and STP Type 1.
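
    A back-of-the-envelope illustration of the frequency argument above (the real 100BaseT4 line coding is more involved and is not covered by the text, so this is only the rough arithmetic):

    total_rate_mbit_s = 100
    data_pairs = 3                                   # the payload is spread over three pairs
    per_pair_rate = total_rate_mbit_s / data_pairs
    print(round(per_pair_rate, 1))                   # ~33.3 Mbit/s per pair -> lower signal frequency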

    The advantage of 100BaseT4 is its less rigid wiring requirements. Category 3 and 4 cables are more widespread and considerably cheaper than Category 5 cables, which should not be forgotten before installation work begins. The disadvantages are that 100BaseT4 needs all four pairs and that full-duplex mode is not supported by this protocol.

    Fast Ethernet also includes a standard for operation over multimode optical fiber with a 62.5-micron core and 125-micron cladding. The 100BaseFX standard is oriented mainly at backbones - for connecting Fast Ethernet repeaters within one building. The traditional advantages of optical cable are also inherent in the 100BaseFX standard: immunity to electromagnetic noise, improved data protection and greater distances between network devices.

    A RUNNER FOR SHORT DISTANCES

    Although Fast Ethernet is a continuation of the Ethernet standard, the transition from a 10BaseT network to 100BaseT cannot be regarded as a mechanical replacement of equipment: changes to the network topology may be required.

    The theoretical limit on the diameter of a Fast Ethernet segment is 250 meters; that is only 10 percent of the theoretical size limit of an Ethernet network (2500 meters). This limitation stems from the nature of the CSMA/CD protocol and the 100 Mbit/s transfer speed.

    As already noted, when transmitting data a workstation must listen to the network for a period of time long enough to make sure that the data has reached the destination station. On an Ethernet network with 10 Mbit/s bandwidth (for example 10Base5), the period of time a workstation needs to listen to the network for collisions is determined by the distance that a 512-bit frame (the frame size specified in the Ethernet standard) will cover while the workstation processes this frame. For an Ethernet network with 10 Mbit/s bandwidth this distance is 2500 meters.

    On the other hand, the same 512-bit frame (the 802.3u standard specifies a frame of the same size as 802.3, that is, 512 bits) transmitted by a workstation on a Fast Ethernet network will travel only 250 m before the workstation finishes processing it. If the receiving station were more than 250 m away from the transmitting station, the frame could collide with another frame somewhere farther along the line, and the transmitting station, having completed its transmission, would no longer detect this collision. That is why the maximum diameter of a 100BaseT network is 250 meters.
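
    The 2500 m versus 250 m figures follow directly from the duration of the 512-bit frame: at 100 Mbit/s it lasts ten times less than at 10 Mbit/s, so the permissible diameter shrinks by the same factor. A rough illustration (not an exact standard-conformant cabling budget):

    MIN_FRAME_BITS = 512

    def frame_duration_us(rate_mbit_s):
        return MIN_FRAME_BITS / rate_mbit_s          # 51.2 us at 10 Mbit/s, 5.12 us at 100 Mbit/s

    ethernet_diameter_m = 2500
    fast_ethernet_diameter_m = ethernet_diameter_m * frame_duration_us(100) / frame_duration_us(10)
    print(fast_ethernet_diameter_m)                  # 250.0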

    To make use of the permissible distance, two repeaters will be needed to connect all the nodes. According to the standard, the maximum distance between a node and a repeater is 100 meters; in Fast Ethernet, as in 10BaseT, the distance between a hub and a workstation must not exceed 100 meters. Since connecting devices (repeaters) introduce additional delays, the real working distance between nodes may turn out to be even smaller. Therefore, it seems reasonable to take all distances with a certain margin.

    To work over longer distances you will have to purchase optical cable. For example, 100BaseFX equipment in half-duplex mode allows a switch to be connected to another switch or a terminal station located up to 450 meters away. By installing full-duplex 100BaseFX, you can connect two network devices at a distance of up to two kilometers.

    HOW TO INSTALL 100BASET

    In addition to the cables we have already discussed, installing Fast Ethernet will require network adapters for workstations and servers, 100BaseT hubs and, possibly, some 100BaseT switches.

    The adapters needed to build a 100BaseT network are called 10/100 Mbit/s Ethernet adapters. These adapters are able (this is a requirement of the 100BaseT standard) to distinguish 10 Mbit/s from 100 Mbit/s on their own. To serve a group of servers and workstations moved to 100BaseT, you will also need a 100BaseT hub.

    When a server or personal computer with a 10/100 adapter is switched on, the adapter issues a signal announcing that it can provide a bandwidth of 100 Mbit/s. If the receiving station (most likely a hub) is also designed to work with 100BaseT, it responds with a signal, upon which both the hub and the PC or server automatically switch to 100BaseT mode. If the hub works only with 10BaseT, it gives no response signal, and the PC or server automatically switches to 10BaseT mode.
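
    The speed selection just described reduces to a simple decision; the sketch below only illustrates that logic and is not an implementation of the actual auto-negotiation signalling:

    def select_link_speed(adapter_offers_100: bool, partner_answers_100: bool) -> int:
        """A 10/100 adapter announces 100 Mbit/s; if the device at the other end
        answers, both sides switch to 100BaseT, otherwise they fall back to 10BaseT."""
        return 100 if adapter_offers_100 and partner_answers_100 else 10

    print(select_link_speed(True, True))    # 100: both the hub and the PC switch to 100BaseT
    print(select_link_speed(True, False))   # 10: the hub is 10BaseT-only, the PC falls back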

    For small-scale 100BaseT configurations it is possible to use a 10/100 bridge or switch that connects the part of the network working with 100BaseT to an existing 10BaseT network.

    DECEPTIVE SPEED

    Summing up what has been said, we note that, in our view, Fast Ethernet is best suited for dealing with high peak loads. For example, if some of the users work with CAD or image-processing programs and need more bandwidth, then Fast Ethernet may be a good way out of the situation. However, if the problems are caused by an excessive number of users on the network, then 100BaseT begins to slow down the exchange of information at roughly 50 percent network load - in other words, at the same level as 10BaseT. After all, in the end it is nothing more than an extension of it.

    Today it is almost impossible to find a laptop or motherboard on sale without an integrated network card, or even two. All of them have the same connector - RJ45 (more precisely, 8P8C), but the speed of the controller can differ by an order of magnitude. In cheap models it is 100 megabits per second (Fast Ethernet), in more expensive ones it is 1000 (Gigabit Ethernet).

    If your computer does not have a built-in LAN controller, then it is most likely already an “old-timer” based on a processor such as the Intel Pentium 4 or AMD Athlon XP, or their “ancestors”. Such “dinosaurs” can be “introduced” to a wired network only by installing a discrete network card with a PCI connector, since the PCI Express bus did not yet exist when they were born. However, network cards supporting the current Gigabit Ethernet standard are also produced for the PCI bus (33 MHz), although its throughput may not be enough to fully unleash the speed potential of a gigabit controller.

    But even if you have a 100-megabit integrated network card, those who are going to “upgrade” to 1000 megabits will have to purchase a discrete adapter. The best option would be to purchase a PCI Express controller, which will ensure maximum network speed, if, of course, the corresponding connector is present in the computer. True, many will prefer a PCI card, since they are much cheaper (the cost starts from literally 200 rubles).

    What advantages will the transition from Fast Ethernet to Gigabit Ethernet bring in practice? How different is the actual data transfer speed of PCI versions of network cards and PCI Express? Is the speed of a regular hard drive enough to fully load a gigabit channel? You will find answers to these questions in this material.

    Test participants

    The three cheapest discrete network cards (PCI - Fast Ethernet, PCI - Gigabit Ethernet, PCI Express - Gigabit Ethernet) were selected for testing, since they are in greatest demand.

    The 100-megabit network PCI card is represented by the Acorp L-100S model (price starts from 110 rubles), which uses the Realtek RTL8139D chipset, the most popular for cheap cards.

    The 1000-megabit network PCI card is represented by the Acorp L-1000S model (price starts from 210 rubles), which is based on the Realtek RTL8169SC chip. This is the only card with a heatsink on the chipset - the rest of the test participants do not require additional cooling.

    The 1000-megabit network PCI Express card is represented by the TP-LINK TG-3468 model (price starts from 340 rubles). And it was no exception - it is based on the RTL8168B chipset, which is also produced by Realtek.

    Appearance of the network card

    Chipsets from these families (RTL8139, RTL816X) can be seen not only on discrete network cards, but also integrated on many motherboards.

    The characteristics of all three controllers are shown in the following table:


    The bandwidth of the PCI bus (1066 Mbit/s) should theoretically be enough to “boost” gigabit network cards to full speed, but in practice it may still not be enough. The fact is that this “channel” is shared by all PCI devices; in addition, it transmits service information on servicing the bus itself. Let's see if this assumption is confirmed by real speed measurements.

    Another nuance: the vast majority of modern hard drives have an average read speed of no more than 100 megabytes per second, and often even less. Accordingly, they will not be able to fully load the gigabit channel of the network card, the speed of which is 125 megabytes per second (1000: 8 = 125). There are two ways to get around this limitation. The first is to combine a pair of such hard drives into a RAID array (RAID 0, striping), and the speed can almost double. The second is to use SSD drives, the speed parameters of which are significantly higher than those of hard drives.
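
    The bottleneck argument above is simple arithmetic (the 100 MB/s figure is the article's own rough estimate for a typical hard drive):

    gigabit_link_mb_s = 1000 / 8            # 125 MB/s: what a fully loaded gigabit link carries
    typical_hdd_mb_s = 100                  # rough average read speed of a desktop hard drive
    raid0_pair_mb_s = 2 * typical_hdd_mb_s  # RAID 0 striping of two such drives roughly doubles it

    print(typical_hdd_mb_s < gigabit_link_mb_s)    # True: a single HDD cannot saturate the link
    print(raid0_pair_mb_s >= gigabit_link_mb_s)    # True: a RAID 0 pair (or an SSD) can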

    Testing

    A computer with the following configuration was used as a server:

    • processor: AMD Phenom II X4 955 3200 MHz (quad-core);
    • motherboard: ASRock A770DE AM2+ (AMD 770 chipset + AMD SB700);
    • RAM: Hynix DDR2 4 x 2048 MB PC2 8500 1066 MHz (dual-channel mode);
    • video card: AMD Radeon HD 4890 1024 MB DDR5 PCI Express 2.0;
    • network card: Realtek RTL8111DL 1000 Mbps (integrated on the motherboard);
    • operating system: Microsoft Windows 7 Home Premium SP1 (64-bit version).

    A computer with the following configuration was used as a client into which the tested network cards were installed:

    • processor: AMD Athlon 7850 2800 MHz (dual core);
    • motherboard: MSI K9A2GM V2 (MS-7302, AMD RS780 + AMD SB700 chipset);
    • RAM: Hynix DDR2 2 x 2048 MB PC2 8500 1066 MHz (dual-channel mode);
    • video card: AMD Radeon HD 3100 256 MB (integrated into the chipset);
    • hard drive: Seagate 7200.10 160 GB SATA2;
    • operating system: Microsoft Windows XP Home SP3 (32-bit version).

    Testing was carried out in two modes: reading and writing via a network connection from hard drives (this should show that they can be a bottleneck), as well as from RAM disks in the RAM of computers simulating fast SSD drives. The network cards were connected directly using a three-meter patch cord (eight-core twisted pair cable, category 5e).

    Data transfer rate (hard drive - hard drive, Mbit/s)

    The actual data transfer speed through the 100-megabit Acorp L-100S network card fell just short of the theoretical maximum. But both gigabit cards, although they outperformed the first by about six times, were unable to show the maximum possible speed. It is clearly seen that the speed is limited by the performance of Seagate 7200.10 hard drives, which, when directly tested on a computer, averages 79 megabytes per second (632 Mbit/s).

    In this case, there is no fundamental difference in speed between network cards for the PCI bus (Acorp L-1000S) and PCI Express (TP-LINK), the slight advantage of the latter can be explained by the measurement error. Both controllers were operating at about sixty percent of their capacity.

    Data transfer rate (RAM disk - RAM disk, Mbit/s)

    Acorp L-100S expectedly showed the same low speed when copying data from high-speed RAM disks. This is understandable - the Fast Ethernet standard has not corresponded to modern realities for a long time. Compared to the “hard drive-to-hard drive” testing mode, the Acorp L-1000S gigabit PCI card significantly increased performance - the advantage was approximately 36 percent. The TP-LINK TG-3468 network card showed an even more impressive lead - the increase was about 55 percent.

    This is where the higher bandwidth of the PCI Express bus showed itself - it outperformed the Acorp L-1000S by 14 percent, which can no longer be attributed to measurement error. The winner fell slightly short of the theoretical maximum, but the speed of 916 megabits per second (114.5 MB/s) still looks impressive - it means that you will have to wait almost an order of magnitude less for copying to complete (compared to Fast Ethernet). For example, copying a 25 GB file (a typical good-quality HD rip) from computer to computer will take less than four minutes, whereas with a previous-generation adapter it would take more than half an hour.
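
    The "four minutes versus half an hour" estimate is easy to check (taking 25 GB as 25 x 1024 MB and ignoring protocol overhead):

    file_mb = 25 * 1024                     # the 25 GB file from the example, in megabytes
    gigabit_measured_mb_s = 114.5           # measured speed of the PCI Express card
    fast_ethernet_limit_mb_s = 100 / 8      # 12.5 MB/s: the Fast Ethernet ceiling

    print(file_mb / gigabit_measured_mb_s / 60)     # ~3.7 minutes over the gigabit link
    print(file_mb / fast_ethernet_limit_mb_s / 60)  # ~34 minutes, i.e. more than half an hour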

    Testing has shown that Gigabit Ethernet network cards have a huge advantage (up to tenfold) over Fast Ethernet controllers. If your computers only have hard drives that are not combined into a striping array (RAID 0), then there will be no fundamental difference in speed between PCI and PCI Express cards. Otherwise, as well as when using high-performance SSD drives, preference should be given to cards with a PCI Express interface, which will provide the highest possible data transfer speed.

    Naturally, it should be taken into account that other devices in the network “path” (switch, router...) must support the Gigabit Ethernet standard, and the category of twisted pair (patch cord) must be at least 5e. Otherwise, the real speed will remain at 100 megabits per second. By the way, backward compatibility with the Fast Ethernet standard remains: you can connect, for example, a laptop with a 100-megabit network card to a gigabit network; this will not affect the speed of other computers on the network.

    Much of what is said below applies not only to Ethernet equipment, but also to the equipment of other, less popular networks.

    Ethernet and Fast Ethernet Adapters

    Adapter Specifications

    Ethernet and Fast Ethernet network adapters (NIC, Network Interface Card) can interface with a computer through one of the standard interfaces:

    • ISA (Industry Standard Architecture) bus;
    • PCI bus (Peripheral Component Interconnect);
    • PC Card bus (aka PCMCIA).

    Adapters designed for the ISA system bus were until recently the main type of adapters. The number of companies producing such adapters was large, which is why devices of this type were the cheapest. ISA adapters come in 8-bit and 16-bit versions: 8-bit adapters are cheaper, while 16-bit ones are faster. However, information exchange over the ISA bus cannot be very fast (at most 16 MB/s, in practice no more than 8 MB/s, and for 8-bit adapters up to 2 MB/s). Therefore, Fast Ethernet adapters, which require high transfer rates for effective operation, are practically not produced for this system bus. The ISA bus is becoming a thing of the past.

    The PCI bus has now practically replaced the ISA bus and is becoming the main expansion bus for computers. It provides 32-bit and 64-bit data exchange and has high throughput (theoretically up to 264 MB/s), which fully satisfies the requirements not only of Fast Ethernet but also of the faster Gigabit Ethernet. It is also important that the PCI bus is used not only in IBM PC computers but also in PowerMac computers. In addition, it supports Plug-and-Play automatic hardware configuration. Apparently, in the near future the majority of network adapters will be oriented towards the PCI bus. The disadvantage of PCI compared to the ISA bus is that the number of its expansion slots in a computer is usually small (typically 3 slots). But it is precisely network adapters that are connected to PCI first of all.

    The PC Card bus (formerly called PCMCIA) is currently used only in Notebook-class portable computers. In these computers the internal PCI bus is usually not brought out. The PC Card interface allows miniature expansion cards to be connected to a computer easily, and the exchange speed with these cards is quite high. However, more and more laptop computers come with built-in network adapters, since network connectivity is becoming an integral part of the standard feature set. These built-in adapters are again connected to the computer's internal PCI bus.

    When choosing a network adapter oriented to a particular bus, you must first of all make sure that the computer being connected to the network has free expansion slots for that bus. You should also evaluate how complex the purchased adapter is to install and the prospects for continued production of boards of this type. The latter may matter if the adapter fails.

    Finally, one still occasionally encounters network adapters that connect to a computer through the parallel (printer) LPT port. The main advantage of this approach is that you do not need to open the computer case to connect the adapter. In addition, such adapters do not occupy computer system resources such as interrupt and DMA channels or memory and I/O addresses. However, the speed of information exchange between them and the computer is much lower than when the system bus is used. Moreover, they require more processor time to communicate with the network, thereby slowing down the computer.

    Recently, more and more computers have the network adapter built into the system board. The advantages of this approach are obvious: the user does not have to buy a network adapter and install it in the computer; it is enough to connect the network cable to the computer's external connector. The disadvantage is that the user cannot select an adapter with better characteristics.

    Other important characteristics of network adapters include:

    • the adapter configuration method;
    • the size of the buffer memory installed on the board and the modes of exchange with it;
    • the possibility of installing a read-only memory chip for remote boot (BootROM) on the board;
    • the ability to connect the adapter to different types of transmission media (twisted pair, thin and thick coaxial cable, fiber optic cable);
    • the network transmission speed used by the adapter and the possibility of switching it;
    • whether the adapter can use full-duplex exchange mode;
    • compatibility of the adapter (more precisely, of the adapter driver) with the network software used.

    User configuration of the adapter was used mainly for adapters designed for the ISA bus. Configuration involves assigning the computer's system resources to be used (I/O addresses, interrupt and direct memory access channels, buffer memory and remote boot memory addresses). Configuration can be carried out by setting switches (jumpers) to the desired position or by using the DOS configuration program supplied with the adapter (Jumperless, Software configuration). When such a program is run, the user is prompted to set the hardware configuration through a simple menu of adapter parameters. The same program allows a self-test of the adapter to be performed. The selected parameters are stored in the adapter's non-volatile memory. In any case, when choosing parameters it is necessary to avoid conflicts with the computer's system devices and with other expansion cards.

    The adapter can also be configured automatically in Plug-and-Play mode when the computer is turned on. Modern adapters usually support this particular mode, so they can be easily installed by the user.

    In the simplest adapters, exchange with the adapter's internal buffer memory (Adapter RAM) is carried out through the address space of input/output devices; in this case, no additional memory address configuration is required. For buffer memory operating in shared memory mode, a base address must be specified; it is assigned to the computer's upper memory area (