QoS settings. The QoS myth

    There is hardly a person who has not at least once read a FAQ on Windows XP. And if so, then everyone knows about the supposedly harmful Quality of Service service, QoS for short. It is strongly recommended to disable it when configuring the system, because it allegedly limits network bandwidth by 20% by default, and the problem supposedly exists in Windows 2000 as well.

    These are the lines:

    Q: How can I completely disable the QoS (Quality of Service) service? How do I configure it? Is it true that it limits network speed?
    A: Indeed, by default Quality of Service reserves 20% of the channel capacity for its needs (any channel, even a 14400 modem, even gigabit Ethernet). Moreover, even if you remove the QoS Packet Scheduler service from the connection Properties, this channel is not released. You can free the channel, or simply configure QoS, here. Launch the Group Policy applet (gpedit.msc). In Group Policy, find Local Computer Policy and click Administrative Templates. Select Network - QoS Packet Scheduler. Enable Limit reservable bandwidth. Now lower the bandwidth limit from the default 20% to 0%, or simply turn it off. If desired, you can also configure other QoS parameters here. To activate the changes, all you have to do is reboot.

    20% is, of course, a lot. Truly, "Microsoft must die." Statements of this kind wander from FAQ to FAQ, from forum to forum, from one publication to another, and are used in all kinds of "tweakers", programs for "tuning" Windows XP (by the way, open Group Policy and Local Security Policy: not a single "tweaker" can compare with them in the wealth of configuration options). Unsubstantiated statements of this kind need to be debunked carefully, which is what we will do now, using a systematic approach. That is, we will thoroughly study the question, relying on official primary sources.

    What is a network with quality of service?

    Let us accept the following simplified model of a network system. Applications run on hosts and exchange data with each other. Applications hand data to the operating system for transmission over the network. Once the data has been handed to the operating system, it becomes network traffic.

    Network QoS relies on the ability of the network to process this traffic so that the requests of certain applications are guaranteed to be met. This requires fundamental mechanisms for handling network traffic that can identify the traffic entitled to special treatment, plus the means to control those mechanisms.

    QoS functionality is meant to satisfy two parties on the network: network applications and network administrators, whose interests often conflict. The network administrator limits the resources used by a particular application, while the application tries to grab as many network resources as possible. Their interests can be reconciled, given that the administrator plays the dominant role with respect to all applications and users.

    Basic QoS parameters

    Different applications have different requirements for how their network traffic is handled. Applications are more or less tolerant of delays and traffic loss. These requirements are expressed in the following QoS-related parameters:

    • Bandwidth - the speed at which traffic generated by an application must be transmitted over the network;
    • Latency - the delay that an application can tolerate in delivering a data packet;
    • Jitter - variation in delay;
    • Loss - percentage of lost data.

    If infinite network resources were available, then all application traffic could be transmitted at the required speed, with zero latency, zero latency variation, and zero loss. However, network resources are not unlimited.

    The QoS mechanism controls the allocation of network resources to application traffic to meet transmission requirements.

    Fundamental QoS resources and traffic processing mechanisms

    The networks that connect hosts are built from a variety of network devices, including host network adapters, routers, switches and hubs, each of which has network interfaces. Each interface can receive and transmit traffic at a finite rate. If the rate at which traffic is sent to an interface exceeds the rate at which the interface can forward it onward, congestion occurs.

    Network devices can cope with congestion by queuing traffic in the device's memory (a buffer) until the congestion passes. In other cases, the equipment may drop traffic to relieve congestion. As a result, applications experience variations in latency (as traffic sits in queues on interfaces) or traffic loss.

    The capacity of network interfaces to forward traffic, and the memory available to store traffic in network devices until it can be sent onward, constitute the fundamental resources required to provide QoS to application traffic flows.

    Distribution of QoS resources across network devices

    Devices that support QoS use network resources intelligently when transmitting traffic. That is, traffic from more latency-tolerant applications is queued (stored in a buffer in memory), while traffic from latency-critical applications is forwarded on.

    To perform this task, the network device must identify traffic by classifying packets, and also have queues and mechanisms for servicing them.

    Traffic processing mechanism

    The traffic processing mechanism includes:

    • 802.1p;
    • Differentiated services per-hop-behaviors (diffserv PHB);
    • Integrated Services (intserv);
    • ATM, etc.

    Most local networks are based on IEEE 802 technologies, including Ethernet, Token Ring, etc. 802.1p is the traffic handling mechanism for supporting QoS in such networks.

    802.1p defines a field in the Layer 2 (in the OSI network model) header of 802 frames that can carry one of eight priority values. As a rule, hosts or routers, when sending traffic into a local network, mark each packet with a certain priority value. Network devices such as switches, bridges and hubs are expected to handle the packets accordingly using queuing mechanisms. The scope of 802.1p is limited to the local area network (LAN): once the packet crosses the LAN (through OSI Layer 3), the 802.1p priority is removed.
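    To make this concrete, here is a minimal sketch of working with 802.1p on a Cisco Catalyst switch (the interface numbers are hypothetical, and the mls qos command set is assumed to be supported by the platform):

    ! enable QoS processing globally
    mls qos
    ! trust the 802.1p (CoS) value of tagged frames arriving over a trunk
    interface FastEthernet0/24
    switchport mode trunk
    mls qos trust cos
    ! untagged frames from an access port carry no CoS, so assign a default of 3
    interface FastEthernet0/1
    mls qos cos 3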

    Diffserv is a layer 3 mechanism. It defines a field in the layer 3 header of IP packets called diffserv codepoint (DSCP).

    Intserv is a whole family of services defining a guaranteed service and a controlled-load service. The guaranteed service promises to carry a certain amount of traffic with measurable and bounded latency. The controlled-load service agrees to carry a certain amount of traffic as if across a lightly loaded network. These are quantifiable services in the sense that they are defined to provide measurable QoS to a fixed amount of traffic.

    Because ATM technology fragments packets into relatively small cells, it can offer very low latency. If a packet needs to be sent urgently, the ATM interface can always be freed up for transmission for as long as it takes to send one cell.

    QoS has many more complex mechanisms that make this technology work. Let us note just one important point: for QoS to work, support for this technology and appropriate configuration are required along the entire path from the start point to the end point.

    12/01/2016 | Vladimir Khazov

    Not all Internet traffic is created equal. Online video without a stuttering picture, or a Skype call without a breaking voice, is more important than downloading a large file with a torrent client. The Quality of Service (QoS) feature of a router, shaper, or deep packet inspection (DPI) system lets you decide which traffic is more important and give it the bulk of the bandwidth.

    And while at home each user can configure QoS on their own router, a telecom operator with modern network equipment manages bandwidth for all of its subscribers and consistently provides high quality to each of them.

    What is quality of service (QoS)

    QoS is an excellent, but rarely used, tool that can prioritize different types of traffic and, with the help of DPI systems, even specific applications, dividing the bandwidth between them in different proportions. Correctly configured QoS rules will ensure smooth playback of online video while a large file downloads, or fast web browsing while the children play online games.

    An Internet connection can be compared to a hospital, where the bandwidth is the number of doctors to treat patients, the patients are the applications, and the nurse is the router that distributes them.

    In a typical network, an indifferent nurse distributes patients evenly among available doctors, regardless of the complexity of the disease, be it a person with a bruised arm or a car accident victim with a concussion and broken bones. Each of them will receive help, but they will have to wait the same amount of time until a doctor becomes available. If all patients are treated with the same priority, sooner or later this will lead to disastrous consequences for the hospital and casualties.

    The same thing happens on a home network or provider network. Communication bandwidth is allocated evenly within the tariff plan, without taking into account the importance of each application. For example, if you are on a Skype call and your kids are watching a Netflix movie, the quality of the call will deteriorate dramatically. The Internet provider, in turn, is limited by the speed of the channel to the upstream telecom operator, and its bandwidth may not be enough to ensure the quality of the connection if all users simultaneously start downloading files through a torrent client at maximum speed.

    The router divides the bandwidth equally among everyone, without giving priority to any type of traffic.

    Returning to our comparison with a hospital, the quality of care is a competent nurse who distributes patients between doctors in the most efficient way: several specialists will take care of those injured in an accident, and a person with a bruise will wait for one doctor when he is free.

    In a network with a quality of service function, priority will be given to the application or service that you independently determine (online video, IPTV, online games, etc.), it will receive greater speed and minimal delays.

    How to enable QoS

    There are hundreds of different routers: home and office models as well as complex carrier-grade devices. Not every one of them has a QoS function, and where it exists, its implementation and the range of possible settings vary. Some can only set priorities between devices, some can single out certain types of traffic (for example, video or voice), while DPI systems can recognize applications that do not use previously known headers and data structures for data exchange, and rewrite the priority field of packets passing through them for later application of QoS rules.

    It's impossible to go into the specifics of setting up each device, but we can outline the basic steps to start using QoS for a better internet experience.

    First Step: Define Your Goal

    Before you begin configuring any device, you must clearly define your QoS configuration goals. If you decide to set up a home router, then this may be the priority of your work computer over other devices with Internet access to ensure comfortable work, or the priority of online games over streaming video to ensure minimal delays and lags during the game.

    In a home network, the rules should be selective and extremely simple. If you apply dozens of different priorities, you can get a negative result when none of the applications will work properly.

    The telecom operator uses QoS to achieve more global goals:

    • traffic differentiation;
    • ensuring a uniform flow of traffic;
    • guarantee of quality and speed of Internet access for each subscriber;
    • preventing network congestion;
    • reduction of Uplink costs.

    But the principles for achieving them are similar to a home network: determining priority types of traffic and applications, setting up rules depending on priority and duration.

    Second step: determine the Internet speed

    For a telecom operator, Internet speed is the speed of access to a higher-level provider (Uplink) or to several providers. This value is fixed and is distributed among all subscribers according to their tariff plans. The problem of its optimization and competent distribution must be solved by QoS rules to ensure customer satisfaction from the service received.

    The speed of a home Internet connection often does not match what the provider advertises, so determining the real figure is an important task before setting up QoS. There are outgoing and incoming speeds, and you need to measure both.

    To get the real picture, close all applications on the computer that load the network and connect it to the router with a copper cable. Wi-Fi, especially if it does not use the modern Wireless N or Wireless AC protocols, can be the bandwidth bottleneck: measurements may show 40 Mbit/s instead of the available 75 Mbit/s precisely because of the speed limitations of wireless data transmission.

    Go to the website www.speedtest.net and click the "Start Test" button. The result must be converted from Mbit/s to Kbit/s, since QoS settings are most often specified in those units. This is done by multiplying the resulting values by 1000.

    In this example, we measured an incoming speed of 42,900 Kbit/s and an outgoing speed of 3,980 Kbit/s. These are the values that can be distributed among users and applications on the network.

    Third step: enable QoS on the router

    It is impossible to describe the procedure for enabling QoS on all routers, since each manufacturer provides the user with its own management interface, and carrier-class network devices, such as Cisco, Juniper, Huawei, are configured from the command line.

    In most cases, you need to go to the device management page (type its address into the browser; most often it is 192.168.1.1), enter the administrator login and password specified in the user manual, and go to the network settings (NAT) section, QoS tab. Select Enable next to Start QoS, set the port the rules apply to (WAN, the port facing the provider), and specify the incoming and outgoing speeds (downlink and uplink) at 85-90% of the values measured in the second step.

    Reduced speeds are specified to give the QoS handler some room to maneuver so that it can work effectively. The QoS feature is now enabled and you need to configure the prioritization rules.

    How to prioritize traffic

    After the QoS function is enabled, it is necessary to define the rules by which it will work with traffic.

    Carriers configure rules based on data from DPI analytics tools that show bandwidth bottlenecks and time-of-day trends. Some home devices have ready-made presets that the user can apply for prioritization.

    If the router allows manual priority settings, you need to set their ranges as a percentage of the total bandwidth:

    • Maximum: 60–100%
    • Premium: 25–100%
    • Express: 10–100%
    • Standard: 5–100%
    • Bulk: 1–100%

    These parameters determine the bandwidth available to a specific device or application. For example, by setting an application to Maximum, you let it use 60% of the bandwidth when the network is busy and 100% when the network is free. If you set it to Bulk, then on an idle network the application can use any available speed, but under load it will only receive 1%.

    We would like to remind you that prioritization must be approached with a clear understanding of what you want to limit.

    Prioritization options

    1. Service or application priority

    Allows any device on the network to prioritize the bandwidth of a specific application or service over others: for example, when the Skype application must always have dedicated bandwidth so that video and audio calls have no delays, distortion or artifacts.

    2. Interface priority

    The interface in this case is the way your devices connect to the network. You can give higher priority to devices that connect by wire or over the wireless network, or conversely reduce the priority of guest devices.

    3. Priority of devices by IP address

    You can assign higher priority to a specific device on your network by its IP address (static or reserved dynamic), thereby giving it higher access speed than the others.

    4. Prioritize devices by MAC address

    If you use dynamic addressing, you can still assign high priority to one of the devices on the network by its MAC address, which is unique and can be obtained either from the software or from a label on the case.

    Test and evaluation

    The most important rules in setting up QoS: add rules one at a time and do not rush. Start with the most global ones, then configure individual applications and services. Once you have achieved the desired result and QoS meets all your requirements, save the configuration as screenshots or a backup file in case you need to reset the router and restore the settings.

    You can verify that the rules work by launching services with high and low priority and comparing their speeds, or by running a speed test on network devices with different priorities and seeing which one shows the better result.

    Setting up QoS is a more complex process than basic router setup, and for the telecom operator there are also additional capital costs for purchasing a DPI platform, but the result will allow you to achieve better access to the Internet, as well as save money on the purchase of a high-speed communication channel.


    Currently, along with the systematic increase in data transfer rates in telecommunications, the share of interactive traffic, which is extremely sensitive to the parameters of the transportation environment, is increasing. Therefore, the task of ensuring quality of service (QoS) is becoming increasingly important.

    It is best to start considering an issue of such complexity with simple and understandable examples of equipment configuration, for example, from Cisco. The material presented here certainly cannot compete with www.cisco.com. Our task is the initial classification of a huge amount of information in a compact form in order to facilitate understanding and further study.

    1. Definitions and terms.

    There are so many definitions of the term QoS that we will choose the only correct one, from Cisco, naturally: "QoS refers to the ability of a network to provide better service to selected network traffic over various underlying technologies...". Which can be loosely translated as: "QoS is the ability of a network to provide the required service to given traffic within certain technological frameworks."

    The required service is described by many parameters; we will note the most important among them.

    Bandwidth (BW) - bandwidth; describes the nominal capacity of the transmission medium, defines the width of the channel. Measured in bit/s (bps), kbit/s (kbps), Mbit/s (Mbps).

    Delay - delay during packet transmission.

    Jitter - fluctuation (variation) of the delay during packet transmission.

    Packet Loss - packet loss. Defines the number of packets dropped by the network during transmission.

    Most often, to describe the capacity of a channel, an analogy is used with a water pipe. Within its framework, Bandwidth is the width of the pipe, and Delay is the length.

    The time to transmit a packet through a channel: Transmit time [s] = packet size [bits] / BW [bit/s].

    For example, let's find the transmission time of a 64-byte packet over a 64 kbit/s channel:

    Packet size = 64 * 8 = 512 bits
    Transmit time = 512 / 64000 = 0.008 s

    2. QoS service models.

    2.1. Best Effort Service.

    Non-guaranteed delivery. A complete absence of QoS mechanisms. All available network resources are used without any separation of traffic classes or regulation. It is often said that the best mechanism for ensuring QoS is increasing throughput. That is correct in principle, but some types of traffic (for example, voice) are very sensitive to packet delays and to variations in delivery rate. The Best Effort Service model, even with large capacity reserves, allows congestion during sudden traffic bursts. Therefore other approaches to providing QoS were developed.

    2.2. Integrated Service (IntServ).

    Integrated Service (IntServ, RFC 1633) is the integrated service model. It can provide end-to-end quality of service, guaranteeing the required throughput. IntServ uses the RSVP signaling protocol for its purposes, allowing applications to express end-to-end resource requirements, and contains mechanisms to enforce those requirements. IntServ can be briefly described as resource reservation.

    2.3. Differentiated Service (DiffServ).

    Differentiated Service (DiffServ, RFC 2474/2475) is the differentiated service model. It defines QoS provision based on well-defined components that are combined to provide the required services. The DiffServ architecture assumes the presence of classifiers and traffic conditioners at the network edge, and support for resource-allocation functions in the network core, in order to implement the required Per-Hop Behavior (PHB) policy. It divides traffic into classes, introducing several levels of QoS. DiffServ consists of the following functional blocks: edge traffic conditioners (packet classification, marking, rate control) and implementers of the PHB policy (resource allocation, packet drop policy). DiffServ can be briefly described as traffic prioritization.

    3. Basic QoS functions.

    The basic QoS functions provide the required service parameters and are defined with respect to traffic as: classification, marking, congestion management, congestion avoidance, and rate limiting. Functionally, classification and marking are most often performed at the input ports of the equipment, while congestion management and congestion avoidance are performed at the output ports.

    3.1. Classification and Marking.

    Packet Classification is a mechanism for assigning a packet to a specific traffic class.

    Another equally important task when processing packets is Packet Marking - assigning an appropriate priority (label).

    Depending on the level of consideration (meaning OSI), these tasks are solved differently.

    3.1.1. Layer 2 Classification and Marking.

    Ethernet switches (Layer 2) use data link layer protocols. The pure Ethernet protocol does not support a priority field. Therefore, on Ethernet access ports only internal (relative to the switch) classification by ingress port number is possible, and there is no marking.

    A more flexible solution is the IEEE 802.1p standard, developed together with 802.1Q. The hierarchy of relations here is as follows: 802.1D describes bridging technology and is the basis for 802.1Q and 802.1p; 802.1Q describes virtual LAN (VLAN) technology; 802.1p provides quality of service. In general, enabling 802.1Q support (a trunk with VLANs) automatically makes 802.1p available. According to the standard, 3 bits in the Layer 2 header are used, called Class of Service (CoS). Thus CoS can take values from 0 to 7.

    3.1.2. Layer 3 Classification and Marking.

    Routing equipment (Layer 3) operates on IP packets, whose header provides a dedicated field for marking: the IP Type of Service (ToS) byte. The ToS can be filled with an IP Precedence or DSCP classifier, depending on the task. IP Precedence (IPP) is 3 bits long (values 0-7). DSCP belongs to the DiffServ model and consists of 6 bits (values 0-63).

    Besides the numeric form, DSCP values can be expressed with special keywords: BE (Best Effort), AF (Assured Forwarding) and EF (Expedited Forwarding). In addition to these three classes, there are class selector codes, which are backward compatible with IPP. For example, a DSCP value of 26 can be written as AF31, which is completely equivalent.
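    The AF keywords map to numeric DSCP values by a simple rule from RFC 2597: for AFxy, DSCP = 8x + 2y, where x is the class (1-4) and y is the drop precedence (1-3). Hence AF31 = 8*3 + 2*1 = 26, the equivalence mentioned above.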

    MPLS contains a QoS indicator inside the label in the corresponding MPLS EXP bits (3 bits).

    You can mark IP packets with a QoS value in different ways: PBR, CAR, BGP.

    Example 1: PBR marking

    Policy Based Routing (PBR) can be used for marking purposes; it is done in the corresponding route map (a route-map can contain a set ip precedence clause):

    !
    interface FastEthernet0/0
    ip policy route-map MARK
    speed 100
    full-duplex
    no cdp enable
    !
    ! ACL 1, referenced by the route map below, is assumed here to select the
    ! traffic to be marked (its definition was not shown in the original example)
    access-list 1 permit 192.168.0.0 0.0.0.255
    !
    route-map MARK permit 10
    match ip address 1
    set ip precedence priority
    !

    You can see the result at the output of the interface (for example, using the tcpdump program on unix):

    # tcpdump -vv -n -i em0
    ... IP (tos 0x20 ...)

    Here tos 0x20 is 00100000 in binary: the top three bits (001) are IP Precedence 1, i.e. exactly the "priority" value set by the route map.

    Example 2: CAR marking.

    The Committed Access Rate (CAR) mechanism is designed for rate limiting, but it can additionally mark packets (the set-prec-transmit parameter of rate-limit):

    !
    interface FastEthernet0/0
    ip address 192.168.0.2 255.255.255.252
    rate-limit input access-group 1 1000000 10000 10000 conform-action set-prec-transmit 3 exceed-action set-prec-transmit 3
    no cdp enable
    !
    access-list 1 permit 192.168.0.0 0.0.0.255
    !

    #sh interface FastEthernet0/0 rate-limit

    3.2. Congestion Management. Queuing mechanism.

    3.2.1. Congestions.

    Congestion occurs when the output buffers of the device transmitting traffic overflow. The main causes of congestion are traffic aggregation (when incoming traffic arrives faster than outgoing traffic can leave) and speed mismatches between interfaces.

    Bandwidth management under congestion (in bottlenecks) is carried out with a queuing mechanism. Packets are placed into queues, which are serviced in an order defined by a specific algorithm. In essence, congestion management is determining the order in which packets leave the interface (the queues) based on priorities. If there is no congestion, the queues do not come into play (and are not needed). Let us list the queue servicing methods.

    3.2.2. Layer 2 Queuing.

    The physical structure of a classic switch can be simplified as follows: a packet arrives at an input port, is handled by the switching fabric, which decides where to forward it, and lands in the hardware queues of the output port. Hardware queues are fast memory holding packets before they are sent out of the output port. Then, according to a certain processing mechanism, packets are removed from the queues and leave the switch. Initially the queues are equal; it is the queue servicing mechanism (Scheduling) that implements prioritization. Typically, each switch port has a limited number of queues: 2, 4, 8, and so on.

    In general terms, setting up prioritization is as follows:

    1. Initially, the queues are equal. Therefore, it is first necessary to configure them, that is, determine the order (or proportionality of the volume) of their processing. The most common way to do this is to bind 802.1P priorities to queues.

    2. It is necessary to configure the queue handler (Scheduler). The most commonly used are Weighted Round Robin WRR or Strict Priority Queuing.

    3. Assign priority to incoming packets: by input port, by CoS, or, if the switch has additional capabilities (a Layer 3 switch), by IP header fields.

    It all works like this:

    1. A packet arrives at the switch. If it is a regular Ethernet packet (a client access port), it carries no priority label, and the switch can assign one if necessary, for example by the ingress port number. If the ingress port is a trunk (802.1Q or ISL), the packet may already carry a priority label, which the switch can accept or replace with the required one. Either way, at this stage the packet has reached the switch and carries the required CoS marking.

    2. After processing by the switching process, the packet in accordance with the CoS priority label is sent by the classifier (Classify) to the appropriate queue of the output port. For example, critical traffic goes into a high-priority queue, and less important traffic goes into a low-priority queue.

    3. The processing mechanism (Scheduling) retrieves packets from queues according to their priorities. More packets will be sent to the output port from a high-priority queue per unit of time than from a low-priority queue.


    3.2.3. Layer 3 Queuing.

    Routing devices operate on packets at the third OSI layer (Layer 3). Most often, queue support is implemented in software. In most cases this means no hardware restrictions on the number of queues and more flexible configuration of the processing mechanisms. The general Layer 3 QoS paradigm includes marking and classification of packets at the input (Marking & Classification) and distribution into queues and their servicing (Scheduling) according to certain algorithms.

    And let us emphasize once again that prioritization (queues) is needed mainly in bottlenecks, congested places where the channel capacity is not enough to transmit all incoming packets and their handling must somehow be differentiated. Prioritization is also needed to prevent bursts of network activity from affecting latency-sensitive traffic.

    Let's classify Layer 3 QoS according to queue processing methods.

    3.2.3.1. FIFO.

    An elementary queue with sequential passage of packets, working on the First In, First Out (FIFO) principle (first come, first served; as the Russian saying goes, whoever gets up first gets the slippers). There is essentially no prioritization here. Enabled by default on interfaces with speeds greater than 2 Mbit/s.

    3.2.3.2. PQ. Priority queues.

    Priority Queuing (PQ) provides unconditional priority of some packets over others. There are 4 queues in total: high, medium, normal and low. Processing is carried out sequentially (from high to low): it starts with the high-priority queue and does not move to lower-priority queues until it is completely empty. Thus it is possible for high-priority queues to monopolize the channel. Traffic whose priority is not explicitly specified ends up in the default queue.

    Command parameters.
    distribution of protocols by queues:
    priority-list LIST_NUMBER protocol PROTOCOL (high|medium|normal|low) list ACCESS_LIST_NUMBER
    default queue definition:
    priority-list LIST_NUMBER default (high|medium|normal|low)
    determining queue sizes (in packets):
    priority-list LIST_NUMBER queue-limit HIGH_QUEUE_SIZE MEDIUM_QUEUE_SIZE NORMAL_QUEUE_SIZE LOW_QUEUE_SIZE

    designations:
    LIST_NUMBER – PQ handler (priority list) number
    PROTOCOL - protocol
    ACCESS_LIST_NUMBER – access list number
    HIGH_QUEUE_SIZE – queue size HIGH
    MEDIUM_QUEUE_SIZE - MEDIUM queue size
    NORMAL_QUEUE_SIZE - NORMAL queue size
    LOW_QUEUE_SIZE - LOW queue size

    Setting algorithm.

    1. Define 4 queues
    access-list 110 permit ip any any precedence network
    access-list 120 permit ip any any precedence critical
    access-list 130 permit ip any any precedence internet
    access-list 140 permit ip any any precedence routine

    priority-list 1 protocol ip high list 110
    priority-list 1 protocol ip medium list 120
    priority-list 1 protocol ip normal list 130
    priority-list 1 protocol ip low list 140
    priority-list 1 default low


    priority-list 1 queue-limit 30 60 90 120

    2. Link to the interface

    !
    interface FastEthernet0/0
    ip address 192.168.0.2 255.255.255.0
    speed 100
    full-duplex
    priority-group 1
    no cdp enable
    !

    3. View the result
    #sh queuing priority

    Current priority queue configuration:

    List   Queue   Args
    1      low     default
    1      high    protocol ip     list 110
    1      medium  protocol ip     list 120
    1      normal  protocol ip     list 130
    1      low     protocol ip     list 140

    #sh interfaces fastEthernet 0/0

    Queuing strategy: priority-list 1


    #sh queuing interface fastEthernet 0/0

    Interface FastEthernet0/0 queuing strategy: priority

    Output queue utilization (queue/count)
    high/19 medium/0 normal/363 low/0

    3.2.3.3. CQ. Custom queues.

    Custom Queuing (CQ) provides user-defined queues and control over the share of channel bandwidth allocated to each queue. 17 queues are supported. System queue 0 is reserved for high-priority control packets (routing, etc.) and is not available to the user.

    The queues are traversed sequentially, starting from the first. Each queue has a byte counter, which at the beginning of a pass holds a configured value and is decremented by the size of each packet sent from the queue. While the counter is above zero, the next packet is transmitted as a whole, not just the fragment equal to the counter's remainder.

    Command parameters.
    determining the queue bandwidth share:
    queue-list LIST_NUMBER queue QUEUE_NUMBER byte-count BYTE_COUNT

    determining queue sizes:
    queue-list LIST_NUMBER queue QUEUE_NUMBER limit QUEUE_SIZE

    designations:
    LIST_NUMBER – handler number
    QUEUE_NUMBER – queue number
    BYTE_COUNT – the number of bytes serviced from the queue on each pass
    QUEUE_SIZE – queue size in packets

    Setting algorithm.

    1. Defining queues
    access-list 110 permit ip host 192.168.0.100 any
    access-list 120 permit ip host 192.168.0.200 any

    queue-list 1 protocol ip 1 list 110
    queue-list 1 protocol ip 2 list 120
    queue-list 1 default 3

    queue-list 1 queue 1 byte-count 3000
    queue-list 1 queue 2 byte-count 1500
    queue-list 1 queue 3 byte-count 1000

    Additionally, you can set the queue sizes in packets
    queue-list 1 queue 1 limit 50
    queue-list 1 queue 2 limit 50
    queue-list 1 queue 3 limit 50
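    With the byte counts above, the channel is divided roughly in the proportion 3000:1500:1000, that is, queues 1, 2 and 3 receive about 55%, 27% and 18% of the bandwidth respectively (3000/5500, 1500/5500 and 1000/5500).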

    2. Linking to the interface
    !
    interface FastEthernet0/0
    ip address 192.168.0.2 255.255.255.0
    speed 100
    full-duplex
    custom-queue-list 1
    no cdp enable
    !

    3. View result
    #sh queuing custom

    Current custom queue configuration:

    List   Queue  Args
    1      3      default
    1      1      protocol ip     list 110
    1      2      protocol ip     list 120
    1      1      byte-count 1000
    1      2      byte-count 1000
    1      3      byte-count 2000

    #sh interface FastEthernet0/0

    Queuing strategy: custom-list 1

    #sh queuing interface fastEthernet 0/0
    Interface FastEthernet0/0 queuing strategy: custom

    Output queue utilization (queue/count)
    0/90 1/0 2/364 3/0 4/0 5/0 6/0 7/0 8/0
    9/0 10/0 11/0 12/0 13/0 14/0 15/0 16/0

    3.2.3.4. WFQ. Weighted fair queues.

    Weighted Fair Queuing (WFQ) automatically splits traffic into flows. By default their number is 256, but this can be changed (the dynamic-queues parameter of the fair-queue command). If there are more flows than queues, several flows are placed in one queue. A packet's membership in a flow (classification) is determined by ToS, protocol, source IP address, destination IP address, source port and destination port. Each flow uses a separate queue.

    The WFQ handler (scheduler) divides the bandwidth fairly among the existing flows. To do this, the available bandwidth is divided by the number of flows and each receives an equal share. In addition, each flow is assigned a weight: a coefficient inversely proportional to its IP priority (ToS). The handler takes the flow's weight into account as well.

    As a result, WFQ automatically divides the available bandwidth fairly, additionally honoring ToS. Flows with equal IP priorities receive equal shares of the bandwidth; flows with higher IP priority get a larger share. Under congestion, lightly loaded high-priority flows work unchanged, while low-priority, heavily loaded flows are restricted.

    RSVP works together with WFQ. By default, WFQ is enabled on low-speed interfaces.

    Setting algorithm.
    1. Mark the traffic in some way (set the IP priority in ToS) or receive it already marked

    2. Enable WFQ on the interface
    interface FastEthernet0/0
    fair-queue

    interface FastEthernet0/0
    fair-queue CONGESTIVE_DISCARD_THRESHOLD DYNAMIC_QUEUES

    Parameters:
    CONGESTIVE_DISCARD_THRESHOLD – the number of packets in each queue; when exceeded, further packets are dropped (default - 64)
    DYNAMIC_QUEUES – number of subqueues into which traffic is classified (default - 256)
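    For example, a sketch that raises the drop threshold to 100 packets and classifies traffic into 512 dynamic queues:

    interface FastEthernet0/0
    fair-queue 100 512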

    3. View the result
    #sh queuing fair
    # sh queuing interface FastEthernet0/0

    3.2.3.5. CBWFQ.

    Class Based Weighted Fair Queuing (CBWFQ) is a class-based queuing mechanism. All traffic is divided into up to 64 classes based on the following parameters: input interface, access list, protocol, DSCP value, MPLS QoS label.

    The total bandwidth of the output interface is distributed among the classes. The bandwidth allocated to each class can be defined either as an absolute value (bandwidth, in kbit/s) or as a percentage (bandwidth percent) of the bandwidth value set on the interface.

    Packets that do not fall into the configured classes end up in the default class, which can be configured further and which receives the remaining free channel bandwidth. If the queue of a class is full, that class's packets are dropped. The drop algorithm within each class can be selected: normal tail drop is enabled by default (the queue-limit parameter), or WRED (the random-detect parameter). Only for the default class can fair division of the bandwidth be enabled (the fair-queue parameter).

    CBWFQ supports interoperability with RSVP.

    Command parameters.

    criteria for assigning packets to a class:
    class-map match-all CLASS
    match access-group
    match input-interface
    match protocol
    match ip dscp
    match ip rtp
    match mpls experimental

    class definition:

    class CLASS
    bandwidth BANDWIDTH
    queue-limit QUEUE-LIMIT
    random-detect

    default class definition:

    class class-default
    bandwidth BANDWIDTH
    bandwidth percent BANDWIDTH_PERCENT
    queue-limit QUEUE-LIMIT
    random-detect
    fair-queue

    designations:
    CLASS – class name.
    BANDWIDTH – minimum bandwidth in kbit/s; an absolute value, independent of the bandwidth set on the interface.
    BANDWIDTH_PERCENT - percentage of bandwidth on the interface.
    QUEUE-LIMIT – maximum number of packets in the queue.
    random-detect - use WRED.
    fair-queue – fair division of the bandwidth, only for the default class

    By default, the absolute Bandwidth value in the CBWFQ class cannot exceed 75% of the Bandwidth value on the interface. This can be changed with the max-reserved-bandwidth command on the interface.
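    For example, to allow reserving up to 90% (the value here is arbitrary):

    interface FastEthernet0/0
    max-reserved-bandwidth 90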

    Setting algorithm.

    1. Distribution of packets by class - class-map

    class-map match-all Class1
    match access-group 101

    2. Description of the rules for each class - policy-map
    policy-map Policy1
    class Class1
    bandwidth 100
    queue-limit 20
    class class-default
    bandwidth 50
    random-detect

    3. Launch the specified policy on the interface - service-policy
    interface FastEthernet0/0
    bandwidth 256
    service-policy output Policy1

    4. View the result
    #sh class-map Class1
    #sh policy-map Policy1
    #sh policy-map interface FastEthernet0/0

    Example 1.

    Dividing the total bandwidth among classes by percentage (40, 30, 20).
    access-list 101 permit ip host 192.168.0.10 any
    access-list 102 permit ip host 192.168.0.20 any
    access-list 103 permit ip host 192.168.0.30 any

    class-map match-all Platinum
    match access-group 101
    class-map match-all Gold
    match access-group 102
    class-map match-all Silver
    match access-group 103

    policy-map Isp
    class Platinum
    bandwidth percent 40
    class Gold
    bandwidth percent 30
    class Silver
    bandwidth percent 20

    interface FastEthernet0/0
    bandwidth 256
    service-policy output Isp

    3.2.3.6. LLQ.

    Low Latency Queuing (LLQ) - low latency queuing. LLQ can be thought of as the CBWFQ mechanism with an added PQ priority queue (LLQ = PQ + CBWFQ).
    PQ within LLQ allows latency-sensitive traffic to be serviced. LLQ is recommended when there is voice (VoIP) traffic; it also works well with video conferencing.

    Setting algorithm.

    1. Distribution of packets by class - class-map
    access-list 101 permit ip any any precedence critical

    class-map match-all Voice
    match ip precedence 6
    class-map match-all Class1
    match access-group 101

    2. Description of the rules for each class - Policy-map

    Similar to CBWFQ, only for the priority class (there is only one) the priority parameter is specified.
    policy-map Policy1
    class Voice
    priority 1000
    class Class1
    bandwidth 100
    queue-limit 20
    class class-default
    bandwidth 50
    random-detect

    3. Launch the specified policy on the interface - Service-policy
    interface FastEthernet0/0
    bandwidth 256
    service-policy output Policy1

    Example 1.
    We assign the Voice class to PQ, and everything else to CBWFQ (the example assumes a VPN class-map defined elsewhere).
    !
    class-map match-any Voice
    match ip precedence 5
    !
    policy-map Voice
    class Voice
    priority 1000
    class VPN
    bandwidth percent 50
    class class-default
    fair-queue 16
    !
    interface X
    service-policy output Voice
    !

    Example 2.
    Additionally, we limit the overall speed for PQ in LLQ so that it does not monopolize the entire channel in case of incorrect operation.
    !
    class-map match-any Voice
    match ip precedence 5
    !
    policy-map Voice
    class Voice
    priority 1000
    police 1024000 32000 32000 conform-action transmit exceed-action drop
    class VPN
    bandwidth percent 50
    class class-default
    fair-queue 16
    !
    interface FastEthernet0/0
    service-policy output Voice
    !

    QoS is a big topic. Before talking about the intricacies of settings and various approaches to applying traffic processing rules, it makes sense to remind you what QoS is in general.

    Quality of Service (QoS) - a technology for providing different classes of traffic with different service priorities.

    Firstly, it is easy to understand that prioritization only makes sense where there is a queue for service. It is there, in the queue, that you can "slip through" first, using your right.
    A queue forms where it is narrow (such places are usually called bottlenecks, "bottle-neck"). A typical bottleneck is an office's Internet access, where computers connected to the LAN at no less than 100 Mbit/s all share a channel to the provider that rarely exceeds 100 Mbit/s and often amounts to a meager 1-2-10 Mbit/s. For everyone.

    Secondly, QoS is not a panacea: if the "neck" is too narrow, the physical buffer of the interface, where all packets due to leave through that interface are placed, often overflows. And then newly arriving packets will be dropped, even if they are critically important. Therefore, if the queue on an interface exceeds, on average, 20% of its maximum size (on Cisco routers the maximum queue size is usually 128-256 packets), there is reason to think hard about your network design: lay additional routes or widen the channel to the provider.

    Let's understand the components of technology


    Marking. The header fields of various network protocols (Ethernet, IP, ATM, MPLS, etc.) contain special fields dedicated to traffic marking. Traffic is marked so that it can be handled more simply in queues later on.

    Ethernet. Class of Service (CoS) field - 3 bits. Allows you to divide traffic into 8 streams with different markings

    IP. There are two standards: old and new. The old one had a ToS field (8 bits), from which 3 bits called IP Precedence were allocated. This value could be copied into the CoS field of the Ethernet header.
    Later a new standard was defined: the ToS field was renamed DiffServ, and 6 bits were allocated for the Differentiated Services Code Point (DSCP) field, which carries the service parameters required by the given type of traffic.

    It is best to label data close to the source of the data. For this reason, most IP phones independently add the DSCP = EF or CS5 field to the IP header of voice packets. Many applications also mark traffic themselves in the hope that their packets will be processed with priority. For example, peer-to-peer networks are guilty of this.

    Queues.

    Even if we use no prioritization technologies at all, this does not mean that queues do not arise. At a bottleneck a queue arises in any case and is served by the standard FIFO (First In First Out) mechanism. Such a queue obviously makes it possible not to drop packets immediately, storing them in a buffer before sending, but it gives no preference to, say, voice traffic.

    If you want to give some selected class absolute priority (i.e. packets of this class are always processed first), the technology is called Priority Queuing. All packets in the physical outgoing buffer of the interface are divided into 2 logical queues, and packets from the privileged queue are sent until it is empty. Only then do packets from the second queue begin to be transmitted. This technology is simple, rather crude, and can be considered outdated, because the processing of non-priority traffic is constantly being stopped. On Cisco routers you can create 4 queues with different priorities. They maintain a strict hierarchy: packets from less privileged queues are not serviced until all higher-priority queues are empty.

    Fair Queuing. A technology that gives every class of traffic the same rights. As a rule it is not used, because it does little to improve the quality of service.

    Weighted Fair Queuing (WFQ). A technology that gives different classes of traffic different rights (one can say that the "weight" of the queues differs) while servicing all queues simultaneously. "At a glance" it looks like this: all packets are divided into logical queues, using the IP Precedence field as the criterion. The same field also sets the priority (the higher, the better). Then the router calculates from which queue it is "faster" to transmit a packet, and transmits from it.

    It calculates this using the formula:

    dT = (t(i) - t(0)) / (1 + IPP)

    IPP - value of the IP Precedence field
    t(i) - time required for the interface to actually transmit the packet; it can be calculated as L/Speed, where L is the packet length and Speed is the interface transmission rate

    This queue is enabled by default on all interfaces of Cisco routers, except point-to-point interfaces (HDLC or PPP encapsulation).

    WFQ has a number of drawbacks: such queuing relies on packets marked earlier and does not allow you to define traffic classes and allocated bandwidth yourself. Moreover, as a rule nobody marks with the IP Precedence field anymore, so packets go unmarked and all end up in the same queue.

    An evolution of WFQ is Class-Based Weighted Fair Queuing (CBWFQ). In this queuing the administrator himself defines the traffic classes according to various criteria, for example by using ACLs as a template or by analyzing protocol headers (see NBAR). Then a "weight" is assigned to these classes, and the packets of their queues are serviced in proportion to the weight (more weight means more packets from this queue are serviced per unit of time).

    But such a queue does not strictly guarantee the passage of the most important packets (usually voice or packets of other interactive applications). Therefore a hybrid of Priority Queuing and Class-Based Weighted Fair Queuing appeared: PQ-CBWFQ, also known as Low Latency Queuing (LLQ). In this technology you can define up to 4 priority queues; the remaining classes are serviced by the CBWFQ mechanism.

    LLQ is the most convenient, flexible and frequently used mechanism. But it requires setting up classes, setting up policies, and applying policies on the interface.

    Thus, the process of providing quality of service can be divided into 2 stages:
    Marking - closer to the sources.
    Packet processing - placing packets into the physical queue on the interface, dividing them into logical queues, and giving those logical queues different resources.

    QoS technology is quite resource-intensive and loads the processor significantly. And it loads it the more, the deeper into the headers you have to go to classify packets. For comparison, it is much easier for a router to look into the IP packet header and analyze the 3 IPP bits there than to track the flow almost up to the application level, determining which protocol is inside (NBAR technology).

    To simplify further traffic processing, and to create a so-called "trusted boundary" inside which we trust all QoS-related headers, we can do the following:
    1. On access-level switches and routers (close to the client machines), catch packets and sort them into classes
    2. As the policy action, re-mark the headers our own way, or copy the values of higher-level QoS headers into the lower-level ones.

    For example, on the router we catch all packets from the guest WiFi domain (assuming there may be computers and software there that we do not manage and that may use non-standard QoS headers), reset any IP headers to the default values, and map the layer 3 headers (DSCP) to link-layer headers (CoS), so that downstream switches can prioritize traffic effectively using only the link-layer label.
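    A minimal sketch of such a border policy (the class, policy and interface names here are hypothetical):

    ! guest traffic is recognized by the interface it entered through
    class-map match-all GUEST
    match input-interface FastEthernet0/1
    ! reset whatever DSCP unmanaged devices may have set
    policy-map BORDER
    class GUEST
    set ip dscp default
    ! applied on the uplink, so guest packets are re-marked on the way out
    interface FastEthernet0/0
    service-policy output BORDER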

    Setting up LLQ

    Setting up queues consists of configuring classes, then defining the bandwidth parameters for those classes, and applying the entire construct to the interface.

    Creating classes:

    class-map NAME
    match ?

    access-group Access group
    any Any packets
    class-map Class map
    cos IEEE 802.1Q/ISL class of service/user priority values
    destination-address Destination address
    discard-class Discard behavior identifier
    dscp Match DSCP in IP(v4) and IPv6 packets
    flow Flow based QoS parameters
    fr-de Match on Frame-relay DE bit
    fr-dlci Match on fr-dlci
    input-interface Select an input interface to match
    ip IP specific values
    mpls Multi Protocol Label Switching specific values
    not Negate this match result
    packet Layer 3 Packet length
    precedence Match Precedence in IP(v4) and IPv6 packets
    protocol Protocol
    qos-group Qos-group
    source-address Source address
    vlan VLANs to match

    Packets can be sorted into classes by various attributes, for example by specifying an ACL as a template, by the DSCP field, or by singling out a specific protocol (this enables NBAR technology)

    Create a policy:

    policy-map POLICY
    class NAME1
    ?

    bandwidth Bandwidth
    compression Activate Compression
    drop Drop all packets
    log Log IPv4 and ARP packets
    netflow-sampler NetFlow action
    police Police
    priority Strict Scheduling Priority for this Class
    queue-limit Queue Max Threshold for Tail Drop
    random-detect Enable Random Early Detection as drop policy
    service-policy Configure Flow Next
    set Set QoS values
    shape Traffic Shaping


    For each class in the policy, you can either allocate a priority piece of the bandwidth:

    policy-map POLICY
    class NAME1
    priority ?

    Kilo Bits per second
    percent  % of total bandwidth


    and then packets of this class can always count on at least that piece.

    Or describe what “weight” this class has within CBWFQ

    policy-map POLICY
    class NAME1
    bandwidth ?

    Kilo Bits per second
    percent  % of total Bandwidth
    remaining  % of the remaining bandwidth


    In both cases you can specify either an absolute value or a percentage of the entire available bandwidth

    A reasonable question arises: where does the router get the value of the ENTIRE bandwidth? The answer is simple: from the bandwidth parameter on the interface. Even if it is not configured explicitly, it always has some value; you can view it with the sh int command.

    It is also necessary to remember that by default you control not the whole bandwidth, but only 75% of it. Packets that do not explicitly fall into other classes end up in class-default. This setting for the default class can be set explicitly:

    policy-map POLICY
    class class-default
    bandwidth percent 10

    (UPD, thanks to OlegD)
    You can change the maximum available bandwidth from the default 75% using a command on the interface

    max-reserved-bandwidth

    Routers jealously watch that the administrator does not accidentally hand out more bandwidth than actually exists, and complain about such attempts.

    It may seem that the policy gives the classes no more than what is written. However, this situation only occurs when all queues are full. If one is empty, the bandwidth allocated to it is divided among the loaded queues in proportion to their "weight".
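    For example, if two classes are configured with bandwidth percent 30 and 20, and a third class with 40 stands idle, the idle class's share is redistributed between the two loaded ones in the same 30:20 proportion, so they effectively receive about 54% and 36% of the bandwidth.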

    This whole structure will work like this:

    If there are packets from a class with priority configured, the router concentrates on transmitting those packets. Moreover, since there may be several such priority queues, the bandwidth is divided between them in proportion to the specified percentages.

    Once all the priority packets are gone, it is CBWFQ's turn. In each time interval, the share of packets specified in the settings for its class is "scooped" from each queue. If some of the queues are empty, their bandwidth is divided among the loaded queues in proportion to the class "weight".

    Applying to the interface:

    int s0/0
    service-policy POLICY

    But what should you do if you need to strictly cut down packets from the class that exceed the permitted speed? After all, specifying bandwidth only distributes bandwidth between classes when the queues are busy.

    To solve this problem, for a traffic class inside the policy there is the mechanism

    police [speed] [burst] conform-action [action] exceed-action [action]

    It lets you explicitly specify the desired average speed, and the maximum burst, that is, the amount of data that can be transmitted per unit of time above the average. The larger the burst, the more the actual transmission speed may deviate from the desired average. It also specifies an action for normal traffic that does not exceed the
    given speed, and an action for traffic that exceeds the average speed. The actions can be these:

    police 100000 8000 conform-action ?

    drop drop packet
    exceed-action action when rate is within conform and
    conform + exceed burst
    set-clp-transmit set atm clp and send it
    set-discard-class-transmit set discard-class and send it
    set-dscp-transmit set dscp and send it
    set-frde-transmit set FR DE and send it
    set-mpls-exp-imposition-transmit set exp at tag imposition and send it
    set-mpls-exp-topmost-transmit set exp on topmost label and send it
    set-prec-transmit rewrite packet precedence and send it
    set-qos-transmit set qos-group and send it
    transmit transmit packet

    Often another challenge also arises. Let's assume that we need to limit the flow going towards a neighbor with a slow channel.

    In order to accurately predict which packets will reach the neighbor and which will be destroyed due to channel congestion on the “slow” side, it is necessary to create a policy on the “fast” side that would process queues in advance and destroy redundant packets.

    And here we run into one very important thing: to solve this task we need to emulate the "slow" channel. For this emulation it is not enough just to sort packets into queues; we also need to emulate the physical buffer of the "slow" interface. Each interface has a packet transmission rate, i.e. each interface can transmit no more than N packets per unit of time. Usually the physical buffer of an interface is sized to provide "autonomous" operation of the interface for several units of time. Therefore the physical buffer of, say, a GigabitEthernet interface is tens of times larger than that of some Serial interface.

    What is wrong with being able to store a lot? Let's take a closer look at what happens if the buffer on the fast sending side is significantly larger than the buffer of the receiving side.

    For simplicity, assume there is one queue. On the “fast” side we emulate a low transmission rate. This means that packets falling under our policy will begin to accumulate in the queue. Since the physical buffer is large, the logical queue will become impressive. Applications working over TCP will be late to learn that some packets were not received and will keep sending with a large window size for a long time, loading the receiving side. That is the ideal case, when the transmission rate is equal to or less than the reception rate. But the receiving interface may itself be loaded with other packets,
    and then the small queue on the receiving side will not be able to hold all the packets sent to it from the center. Losses will begin, entailing retransmissions, while the transmit buffer still holds a solid “tail” of previously accumulated packets that will be transmitted “idly”: the receiving side did not wait for the earlier packet, so the later ones will simply be ignored.

    Therefore, to correctly solve the problem of reducing the transmission speed to a slow neighbor, the physical buffer must also be limited.

    This is done with the command

    shape average
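    A minimal sketch of such a shaper (the 512 kbps rate is an arbitrary assumption for illustration):

    policy-map SHAPE
     class class-default
      ! emulate a 512 kbps channel: buffer and delay the excess
      ! instead of dropping it, as police would
      shape average 512000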

    Now the most interesting part: what if, in addition to emulating the physical buffer, we need to create logical queues inside it? For example, to prioritize voice?

    For this, a so-called nested policy is created: it is applied inside the main one and divides whatever arrives from the parent policy into logical queues.

    It's time to look at a real example based on the picture above.

    Suppose we are going to create stable voice channels over the Internet between CO and Remote. For simplicity, assume that the Remote network (172.16.1.0/24) communicates only with CO (10.0.0.0/8). The interface speed at Remote is 1 Mbit/s, and 25% of this speed is allocated to voice traffic.

    First we need to define a priority traffic class on both sides and create a policy for this class. At CO we will additionally create a class describing the traffic between the offices:

    class-map RTP
     match protocol rtp

    policy-map RTP
     class RTP
      priority percent 25

    ip access-list extended CO_REMOTE
     permit ip 10.0.0.0 0.255.255.255 172.16.1.0 0.0.0.255

    class-map CO_REMOTE
     match access-group name CO_REMOTE


    At Remote we will do it differently: suppose we cannot use NBAR because of feeble hardware, so we can only describe the RTP ports explicitly:

    ip access-list extended RTP
     permit udp 172.16.1.0 0.0.0.255 range 16384 32768 10.0.0.0 0.255.255.255 range 16384 32768

    class-map RTP
     match access-group name RTP

    policy-map QoS
     class RTP
      priority percent 25

    At CO, in turn, we nest the RTP policy inside the shaping one:

    policy-map QoS
     class CO_REMOTE
      shape average 1000000
      service-policy RTP


    and apply the policy on CO's interface:

    int g0/0
    service-policy output QoS

    At Remote, set the bandwidth parameter (in kbit/s) to match the interface speed. Let me remind you that the 25% will be calculated from this parameter. And apply the policy:

    int s0/0
    bandwidth 1000
    service-policy output QoS
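    To check that classification, shaping and the priority queue actually work, it is usually enough to look at the per-class counters with a standard IOS command (the output format varies by platform):

    CO# show policy-map interface g0/0
    Remote# show policy-map interface s0/0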

    The story would not be complete without covering the capabilities of switches. Obviously, pure L2 switches cannot look that deep into packets and divide them into classes by the same criteria.

    On smarter L2/L3 switches, routed interfaces (i.e. either an interface vlan, or a port taken out of Layer 2 with the no switchport command) use the same constructions that work on routers; but if the port or the whole switch operates in L2 mode (true for the 2950/60 models), then only police can be used for a traffic class, while priority and bandwidth are unavailable.
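    On such an L2 port this boils down to policing on ingress; a rough sketch (the class name, ACL number, port and numeric values are illustrative, and the exact police syntax and allowed rate/burst values differ between switch models and IOS versions):

    access-list 101 permit tcp any any eq 6881

    class-map HEAVY
     match access-group 101

    policy-map L2-POLICE
     class HEAVY
      ! limit the class to ~1 Mbps on ingress, dropping the excess
      police 1000000 8192 exceed-action drop

    int f0/1
     service-policy input L2-POLICE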

    By the way, QoS tools also help against network worms. A worm often spreads through ports that are needed for normal operation (TCP/135, 445, 80, etc.). Simply closing these ports on the router would be reckless, so it is more humane to do this:

    1. Collect statistics on network traffic: via NetFlow, NBAR, or SNMP.

    2. Identify the profile of normal traffic: say, according to the statistics, HTTP takes up no more than 70% on average, ICMP no more than 5%, and so on. Such a profile can be created either manually or from the statistics accumulated by NBAR. Moreover, you can even automatically create the classes and policies and apply them on the interface
    with the auto qos command :)

    3. Then limit the bandwidth for atypical network traffic. If we do catch an infection through a non-standard port, it will not be a great disaster for the gateway: on a loaded interface the infection will take no more than its allocated share.

    4. Having built the construction (class-map - policy-map - service-policy), you can react quickly to an atypical burst of traffic by manually creating a class for it and severely limiting that class's bandwidth, as sketched below.
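    For instance, if an atypical surge shows up on TCP port 445, the quick reaction might look like this (a sketch: the port, names and figures are assumptions):

    ip access-list extended WORM
     permit tcp any any eq 445

    class-map WORM
     match access-group name WORM

    policy-map QoS
     class WORM
      ! strangle the outbreak: 64 kbps average, drop the rest
      police 64000 8000 conform-action transmit exceed-action drop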


    Back, then, to Windows and its QoS. If we had unlimited network resources, we could transmit all application traffic at the required speed, with zero latency, zero jitter, and no loss. But network resources are not unlimited.

    The QoS mechanism controls the allocation of network resources to application traffic in order to meet its transmission requirements.

    Basic QoS service resources and traffic processing methods

    Networks that connect hosts use a variety of network devices, including host network adapters, routers, switches, and hubs. Each of them has network interfaces. A network interface can receive and transmit traffic at a finite rate. If the rate at which traffic is sent to an interface exceeds the rate at which the interface can forward it onward, congestion occurs.

    Network devices can handle congestion by queueing traffic in the device's memory (buffer) until the congestion passes. In other cases, the device may drop traffic to relieve the congestion. As a result, applications experience increased latency variation (since traffic sits in queues on the interfaces) or outright traffic loss.

    The capacity of interfaces to forward traffic and the memory available for storing traffic inside network devices constitute the fundamental resources required to provide QoS to application traffic flows.

    Distribution of QoS service resources across network devices

    Devices that support QoS use network resources intelligently when transmitting traffic: traffic from applications that are more tolerant of delay is held in the buffer, while traffic from delay-critical applications is forwarded first.

    To solve this problem, the network device must first identify the traffic by classifying packets, and it must create queues and service them with appropriate mechanisms.

    Mechanisms and methods of traffic processing

    The vast majority of local area networks are based on IEEE 802 technologies: Ethernet, Token Ring, and so on. 802.1p is the traffic-handling mechanism for supporting QoS in such networks.

    802.1p defines a field (at Layer 2 of the OSI model) in the 802 frame header that carries a priority value. As a rule, hosts or routers, when sending traffic into a LAN, mark each packet they send with a priority value. Switches, bridges, and hubs are expected to process the packets accordingly by means of queueing mechanisms. The scope of this mechanism is limited to the LAN. As soon as a packet crosses the LAN boundary (through OSI Layer 3), the 802.1p priority is removed.

    The Layer 3 mechanism is Diffserv, which defines a field in the Layer 3 header of IP packets called the DSCP (Diffserv Code Point).
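    On the Cisco gear discussed in the first half of this article, setting both kinds of marks takes a couple of lines in a policy (a sketch; class RTP is the one defined earlier, and cos 5 / dscp ef are merely values commonly chosen for voice):

    policy-map MARK
     class RTP
      ! 802.1p / CoS priority in the L2 header (lives only within the LAN)
      set cos 5
      ! DSCP in the L3 IP header (survives routing)
      set dscp ef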

    Intserv is a complete package of services that defines a guaranteed service and a controlled-load service. The guaranteed service can carry a certain volume of traffic with bounded latency. The controlled-load service carries a certain volume of traffic under “light network congestion”. Both are measurable services in the sense that they are defined to provide measurable QoS to a certain volume of traffic.

    Because ATM fragments packets into relatively small cells, it can offer very low latency. If a packet needs to be transmitted urgently, the ATM interface is always free for transmission within the time it takes to send one cell.

    QoS also has quite a few complex mechanisms at its disposal that make all of this work. Let us note just one very important point: for QoS to function, it must be supported and configured along the entire path from the starting point to the final one.

    Let us assume the following:

    • all routers along the path support the necessary protocols;
    • a first QoS session requiring 64 kbps is initiated between hosts A and B;
    • a second session, also requiring 64 kbps, is initiated between hosts A and D;
    • to simplify the diagram, the routers are configured so that they can reserve all of their network resources.

    What matters to us is that the first 64 kbps reservation request had to pass through three routers along the data path between hosts A and B. The second 64 kbps request had to pass through three routers between hosts A and D. The routers could satisfy these reservation requests because they did not exceed the specified maximum. If, instead, hosts B and C had each initiated a 64 kbps QoS session with host A, the router serving these hosts would likely have rejected one of the connections.

    Now imagine that the network administrator turns off QoS processing in the three routers serving hosts B, C, D, and E. In that case, requests for resources would be granted regardless of which hosts take part, and the quality guarantees would be low, since traffic for one host would harm the traffic of another. Quality of service could be preserved if the topmost router limited all requests to 64 kbps, but that would make network resource usage extremely inefficient.

    On the other hand, we could raise the capacity of all links in the network to 128 kbps. But the increased capacity would be used only when two hosts demand resources simultaneously. If not, network resources would once again be used extremely inefficiently.

    Microsoft QoS components

    Windows 98 contains only user-level QoS components:

    • the QoS service provider;
    • Winsock 2 (the GQoS API);
    • a number of application components.

    More recent Microsoft operating systems contain all of the above plus the following components:

    • Traffic.dll – programmatic traffic control;
    • Rsvpsp.dll, the rsvp.exe service, and QoS ACS – not used in Windows XP and 2003;
    • Msgpc.sys – the packet classifier, which determines the service class a packet belongs to;

    • Psched.sys – the QoS packet scheduler. It defines QoS parameters for a specific data stream; traffic is marked with a certain priority value, and the scheduler determines the queueing schedule for each packet and handles competing requests between queued packets that need simultaneous access to the network.


    Final chord

    None of the above answers the question of where that notorious 20 percent goes (which, by the way, no one has ever accurately measured). From everything stated above, it simply should not happen. But opponents have a fallback argument: the QoS system is fine, it is the implementation that is bad, so 20% is lost anyway. Apparently the question wore down the software giant itself, since it refuted such fabrications long ago.