How to use QoS to ensure the quality of Internet access

QoS is the ability of a network to provide a special level of service for specific users or applications without compromising the rest of the traffic. The main goal of QoS is to provide more predictable behavior of the data transmission network when working with one or another type of traffic, by providing the necessary bandwidth, control over delay and jitter, and improving performance in case of packet loss. QoS algorithms achieve these goals by limiting traffic, making more efficient use of transmission channels, and assigning policies to traffic. QoS enables intelligent transmission over the corporate network and, when properly configured, improves performance.

QoS Policies

Traffic type                                | QoS requirement                            | Security          | When
Voice                                       | Latency under 150 ms one way               | Voice encryption  | Monday-Friday
Enterprise resource planning (ERP) system   | At least 512 kbit/s of available bandwidth | Encrypted         | 24 hours a day, 7 days a week, 365 days a year
Traffic from machine and equipment software | At least 256 kbit/s of available bandwidth | Unencrypted       | Monday-Friday
Internet browsing (HTTP/HTTPS)              | Best Effort (non-guaranteed delivery)      | HTTP proxy server | Monday-Friday, 8 am-9 pm

Implementing QoS in Unified Communications Networks

The process of implementing QoS in Unified Communications networks can conventionally be divided into three stages:

  1. Determine the types of traffic on the network and their requirements. At this stage, the network must be taught to recognize traffic types so that specific QoS algorithms can be applied to them;
  2. Group traffic into classes with the same QoS requirements. For example, you can define four types of traffic: voice, high-priority traffic, low-priority traffic, and web-browsing traffic;
  3. Assign QoS policies to the classes defined in step 2.

In modern corporate networks, voice traffic always requires minimal latency. Traffic generated by business-critical applications (for example, banking systems) also requires low latency. Other types of information, such as file transfers or e-mail, are less sensitive to latency. Routine personal use of the Internet at work may be restricted or even prohibited.

According to these principles, three QoS policies can be conditionally distinguished:

  • No delay: assigned to voice traffic;
  • Best service: assigned to the highest-priority traffic;
  • Remainder: assigned to low-priority and web-browsing traffic.

Step 1: Determine the type of traffic

The first step towards implementing QoS is identifying the types of traffic on the network and defining the specific requirements of each type. Before implementing QoS, it is highly recommended to conduct a network audit to fully understand how and what applications work on the corporate network. If QoS policies are implemented without a full understanding of the corporate network segment, the results can be dire.

Next, identify the problems users experience with particular network applications: for example, an application responds slowly, resulting in poor productivity. Measure network traffic during busy hours using special utilities. To understand what is happening in the network, it is also necessary to measure the CPU load of each piece of active network equipment during the busiest period, so that you know where problems can potentially arise.

After that, it is necessary to define business goals and work models and draw up a list of business requirements. Based on the results of these actions, each of the items in the list can be compared with one or another traffic class.

Finally, it is necessary to define the service levels that are required for different types of traffic depending on the required availability and performance.

Step 2: Group traffic into classes

After identifying the network traffic, you must use the business requirements list from the first step to determine the traffic classes.

Voice traffic is always given a separate class. Cisco has developed QoS mechanisms for voice traffic, such as Low Latency Queuing (LLQ), whose purpose is to ensure that voice receives preferential service. Once the most critical applications have been identified, the traffic classes are defined using the list of business requirements.

Not every application needs its own class of service: applications with similar QoS requirements are grouped into a single class.

Traffic classification example

A typical corporate landscape defines the following traffic classes:

  • Voice: highest priority, for VoIP traffic;
  • Critical: a small set of business-critical applications;
  • Transactions: database services, interactive traffic, and privileged network traffic;
  • Non-guaranteed delivery: works on the Best Effort principle, literally "best effort". This class includes Internet traffic and e-mail.

Step 3: Define QoS policies

The third step is to describe the QoS policies for each of the traffic classes, which include the following steps:

  • Assign the minimum size of the guaranteed bandwidth;
  • Assign the maximum size of the bandwidth;
  • Assign priorities for each of the classes;
  • Use QoS technologies such as queue control algorithms to manage congestion.

Let's walk through a concrete example of defining QoS policies for each class:

  1. Voice: the available bandwidth is 1 Mbit/s. Use the Differentiated Services Codepoint (DSCP) value EF. The EF (Expedited Forwarding) label means that packets carrying it are queued with the least possible delay. The LLQ algorithm is used in addition;
  2. Critical: the minimum bandwidth is 1 Mbit/s. Use the DSCP value AF31 (011010 in the DSCP field), which gives the lowest probability of packet drops. Parallel use of the CBWFQ algorithm guarantees the necessary bandwidth for the tagged traffic;
  3. Non-guaranteed delivery: the maximum bandwidth is 500 kbit/s. Use the DSCP Default value (000000 in the DSCP field), which provides default service. CBWFQ delivers this traffic on a best-effort basis, at lower priority than the Voice and Critical classes.
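The DSCP arithmetic behind these three policies can be sketched in a few lines. This is an illustrative snippet, not vendor code; it only shows the DSCP values named above and how a 6-bit DSCP maps into the 8-bit IP ToS byte:

```python
# Illustrative sketch: DSCP values from the three policies above,
# and how a 6-bit DSCP maps into the 8-bit IP ToS byte.
DSCP_POLICY = {
    "voice":       {"dscp": 46, "label": "EF"},       # Expedited Forwarding
    "critical":    {"dscp": 26, "label": "AF31"},     # Assured Forwarding 31
    "best_effort": {"dscp": 0,  "label": "Default"},  # default service
}

def dscp_to_tos(dscp: int) -> int:
    """DSCP occupies the upper 6 bits of the ToS byte (the lower 2 are ECN)."""
    return dscp << 2

for name, p in DSCP_POLICY.items():
    print(f"{name}: {p['label']} -> ToS 0x{dscp_to_tos(p['dscp']):02x}")
```

For example, EF (46) appears on the wire as ToS byte 0xb8, which is what packet sniffers typically display.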


There is hardly a person who has not read a FAQ on Windows XP at least once. And if so, then everyone knows about the supposedly harmful Quality of Service, QoS for short. When configuring the system, it is strongly recommended to disable it, because it allegedly limits network bandwidth to 20% by default, and this problem supposedly exists in Windows 2000 as well.

These are the lines:

Q: How to completely disable the QoS (Quality of Service) service? How do I set it up? Is it true that it limits the network speed?
A: Indeed, by default Quality of Service reserves 20% of the channel bandwidth for its needs (of any channel - even a 14400 modem, even Gigabit Ethernet). Moreover, even if you remove the QoS Packet Scheduler service from the connection's Properties, this bandwidth is not released. You can release it, or simply configure QoS, as follows. Launch the Group Policy applet (gpedit.msc). Under Local Computer Policy, open Administrative Templates. Select Network - QoS Packet Scheduler. Enable Limit reservable bandwidth. Now reduce the Bandwidth limit from 20% to 0%, or simply disable the setting. If desired, other QoS parameters can also be configured here. To apply the changes, reboot.

20% is, of course, a lot. Truly, Microsoft "must die" ("Mazdai", as the Russian original puts it). Statements of this kind wander from FAQ to FAQ, from forum to forum, from publication to publication, and are used in all kinds of "tweakers" - programs for "tuning" Windows XP (by the way, open "Group Policy" and "Local Security Policy", and no "tweaker" can match them in the wealth of configuration options). Unfounded allegations of this kind should be exposed carefully, which we will now do using a systematic approach: we will study the question thoroughly, relying on official primary sources.

What is a quality service network?

Let's adopt the following simplified definition of a networked system. Applications run on hosts and exchange data with each other. Applications hand data to the operating system for transmission over the network. Once the data has been handed to the operating system, it becomes network traffic.

Network QoS relies on the ability of the network to handle this traffic so that the requests of some applications are fulfilled. This requires a fundamental mechanism for handling network traffic that can identify the traffic eligible for special treatment and control how it is handled.

QoS functionality is designed to satisfy two network actors: network applications and network administrators, whose interests often conflict. The network administrator limits the resources used by a specific application, while the application tries to grab as many network resources as possible. Their interests can be reconciled because the network administrator has the final say over all applications and users.

Basic QoS parameters

Different applications have different requirements for the handling of their network traffic. Applications are more or less tolerant of delay and traffic loss. These requirements are expressed through the following QoS-related parameters:

  • Bandwidth - the rate at which an application's traffic must be transmitted over the network;
  • Latency - the delay an application can tolerate in the delivery of a packet;
  • Jitter - the variation in that delay;
  • Loss - the percentage of data lost.

If infinite network resources were available, then all application traffic could be transmitted at the required rate, with zero latency, zero latency variation, and zero loss. However, network resources are not limitless.

The QoS mechanism controls the allocation of network resources for application traffic to meet its transmission requirements.

Fundamental QoS Resources and Traffic Handling Mechanisms

The networks that connect hosts use a variety of networking devices including host network adapters, routers, switches, and hubs. Each of them has network interfaces. Each network interface can receive and transmit traffic at a finite rate. If the rate at which traffic is directed to an interface is higher than the rate at which the interface is forwarding traffic, congestion occurs.

Network devices can handle the congestion condition by queuing traffic in the device memory (in a buffer) until the congestion is over. In other cases, network equipment can drop traffic to ease congestion. As a result, applications are faced with a change in latency (as traffic is stored in queues on interfaces) or with traffic loss.

The ability of network interfaces to forward traffic and the availability of memory to store traffic on network devices (until traffic can be sent further) constitute the fundamental resources required to provide QoS for application traffic streams.

Allocating QoS Resources to Network Devices

Devices that support QoS use network resources intelligently to carry traffic. That is, traffic from applications that are more tolerant of latency is queued (stored in a buffer in memory), and traffic from applications that are sensitive to latency is forwarded on.

To accomplish this task, a network device must identify traffic by classifying packets, and must have queues and mechanisms for serving them.

Traffic processing engine

The traffic processing mechanism includes:

  • 802.1p;
  • Differentiated per-hop-behaviors (diffserv PHB);
  • Integrated services (intserv);
  • ATM, etc.

Most local area networks are based on IEEE 802 technology including Ethernet, token-ring, etc. 802.1p is a traffic processing mechanism to support QoS in such networks.

802.1p defines a field (layer 2 in the OSI networking model) in an 802 packet header that can carry one of eight priority values. Typically, hosts or routers, when sending traffic to the local network, mark each packet sent, assigning it a certain priority value. It is expected that network devices such as switches, bridges, and hubs will handle packets appropriately using queuing mechanisms. 802.1p is limited to a local area network (LAN). As soon as the packet crosses the LAN (via OSI layer 3), 802.1p priority is removed.

Diffserv is a Layer 3 mechanism. It defines a field in Layer 3 of the header of IP packets called the diffserv codepoint (DSCP).

Intserv is a family of services comprising Guaranteed Service and Controlled Load Service. Guaranteed Service promises to carry a given amount of traffic with a measurable, bounded latency. Controlled Load Service agrees to carry a given amount of traffic under conditions resembling a "lightly loaded network". These are measurable services in the sense that they are defined to provide measurable QoS to a specific amount of traffic.

Because ATM technology fragments packets into relatively small cells, it can offer very low latency. If you need to send a packet urgently, the ATM interface can always be free for transmission for the time it takes to transmit one cell.

QoS includes many more complex mechanisms that make this technology work. Let's note just one important point: for QoS to work, it must be supported and appropriately configured along the entire path from source to destination.


Currently, along with the systematic increase in data transmission rates in telecommunications, the share of interactive traffic, which is extremely sensitive to the parameters of the transportation environment, is increasing. Therefore, the task of ensuring the quality of service (QoS) is becoming more and more urgent.

Consideration of an issue of this complexity is best started with simple and straightforward equipment configuration examples, for instance from Cisco. The material presented here certainly cannot compete with www.cisco.com; our task is to organize a large amount of information in a compact form to ease understanding and further study.

1. Definitions and terms.

There are so many definitions of the term QoS that we will choose the only correct one - Cisco's, of course: "QoS refers to the ability of a network to provide better service to selected network traffic over various underlying technologies ...". This can be translated as: "QoS is the ability of a network to provide the required service to given traffic within a certain technological framework."

The required service is described by many parameters; the most important of them are listed below.

Bandwidth (BW) - the nominal bandwidth of the transmission medium; determines the width of the channel. Measured in bit/s (bps), kbit/s (kbps), Mbit/s (Mbps).

Delay - packet transmission delay.

Jitter - fluctuation (variation) of the packet transmission delay.

Packet Loss - packet loss. Determines the number of packets dropped by the network during transmission.

Most often, channel throughput is described by analogy with a water pipe: Bandwidth is the width of the pipe, and Delay is its length.

Time to transmit a packet through a channel: Transmit time [s] = packet size [bits] / BW [bit/s].

For example, let's find the transmission time of a 64-byte packet over a 64 kbit/s channel:

Packet size = 64 * 8 = 512 bits; Transmit time = 512 / 64000 = 0.008 s
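The same serialization-delay arithmetic can be wrapped in a small helper, here as an illustrative Python sketch:

```python
def transmit_time_s(packet_bytes: int, bw_bps: int) -> float:
    """Serialization delay: time to push one packet onto a link of the given bandwidth."""
    return packet_bytes * 8 / bw_bps

# The example from the text: a 64-byte packet over a 64 kbit/s channel
print(transmit_time_s(64, 64_000))  # 0.008
```

The same helper shows, for instance, that a 1500-byte packet needs 12 ms on a 1 Mbit/s link, which is why large packets ahead of voice traffic matter on slow links.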

2. Service QoS models.

2.1. Best Effort Service.

Non-guaranteed delivery: a complete absence of QoS mechanisms. All available network resources are used without any separation of traffic classes or rate control. It is often held that the best QoS mechanism is simply more throughput. This is correct in principle, but some types of traffic (for example, voice) are very sensitive to packet delay and its variation. The Best Effort model, even with large capacity reserves, allows congestion during sudden traffic bursts. Therefore, other approaches to QoS were developed.

2.2. Integrated Service (IntServ).

Integrated Service (IntServ, RFC 1633) is the integrated service model. It can provide end-to-end quality of service, guaranteeing the required bandwidth. IntServ uses the RSVP signaling protocol, allowing applications to express end-to-end resource requirements and providing mechanisms to satisfy them. IntServ can be summarized as resource reservation.

2.3. Differentiated Service (DiffServ).

Differentiated Service (DiffServ, RFC 2474/2475) is the differentiated service model. It defines QoS provisioning through well-defined components combined to deliver the required services. The DiffServ architecture assumes classifiers and traffic conditioners at the network edge, and support for resource allocation in the network core to implement the required Per-Hop Behavior (PHB) policy. It divides traffic into classes, introducing multiple QoS levels. DiffServ consists of the following functional blocks: edge traffic conditioners (packet classification, marking, rate control) and PHB policy implementers (resource allocation, packet drop policy). DiffServ can be summarized as traffic prioritization.

3. Basic QoS functions.

The basic QoS functions provide the required service parameters and are defined with respect to traffic as: classification, marking, congestion management, congestion avoidance, and rate limiting. Functionally, classification and marking are most often performed at the input ports of the equipment, while congestion management and avoidance are performed at the output ports.

3.1. Classification and Marking

Packet Classification is a mechanism for assigning a packet to a specific traffic class.

Another equally important task in packet processing is Packet Marking - assigning the packet the corresponding priority (label).

Depending on the OSI layer under consideration, these tasks are solved in different ways.

3.1.1. Layer 2 Classification and Marking.

Ethernet switches (Layer 2) use link-layer protocols. Plain Ethernet has no priority field, so on ordinary Ethernet ports (Access Ports) only internal (to the switch) classification by ingress port number is possible, and there is no marking.

A more flexible solution is the IEEE 802.1P standard, which was developed together with 802.1Q. The hierarchy here is as follows: 802.1D describes bridging and is the baseline for 802.1Q and 802.1P; 802.1Q describes virtual LAN (VLAN) technology; 802.1P provides quality of service. In general, enabling 802.1Q support (a trunk carrying VLANs) automatically enables 802.1P. According to the standard, 3 bits of the Layer 2 header are used, called Class of Service (CoS). Thus, CoS can take values from 0 to 7.
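The bit layout of the 802.1Q tag can be sketched as follows (an illustrative snippet, with field widths taken from the standard: 3-bit PCP carrying the CoS, 1-bit DEI, 12-bit VLAN ID):

```python
def build_tci(pcp: int, dei: int, vlan_id: int) -> int:
    """802.1Q Tag Control Information: 3-bit PCP (the 802.1p CoS),
    1-bit DEI, 12-bit VLAN ID."""
    assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vlan_id <= 4095
    return (pcp << 13) | (dei << 12) | vlan_id

def cos_of(tci: int) -> int:
    """Extract the CoS value (top 3 bits) back out of the tag."""
    return (tci >> 13) & 0x7

tci = build_tci(pcp=5, dei=0, vlan_id=100)
print(hex(tci), cos_of(tci))  # 0xa064 5
```

The 3-bit width of PCP is exactly why CoS is limited to the values 0-7 mentioned above.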

3.1.2. Layer 3 Classification and Marking.

Routing equipment (Layer 3) operates on IP packets, whose header provides a dedicated field for marking - the IP Type of Service (ToS) byte. The ToS can be populated with an IP Precedence or a DSCP classifier, depending on the task. IP Precedence (IPP) is 3 bits long (values 0-7). DSCP belongs to the DiffServ model and occupies 6 bits (values 0-63).

Besides the numeric form, DSCP values can be expressed with special keywords: BE (Best Effort, delivery when possible), AF (Assured Forwarding, guaranteed delivery), and EF (Expedited Forwarding, urgent delivery). In addition to these three classes there are class-selector codepoints, which are backward compatible with IPP. For example, a DSCP value of 26 can equivalently be written as AF31.
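The AFxy naming follows simple arithmetic (DSCP = 8·x + 2·y, where x is the AF class and y the drop precedence), which a short sketch makes explicit:

```python
def af_to_dscp(af_class: int, drop_prec: int) -> int:
    """AFxy codepoint arithmetic: DSCP = 8*x + 2*y,
    where x is the class (1-4) and y the drop precedence (1-3)."""
    assert 1 <= af_class <= 4 and 1 <= drop_prec <= 3
    return 8 * af_class + 2 * drop_prec

print(af_to_dscp(3, 1))  # 26 -- i.e. AF31, matching the example in the text
```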

MPLS contains a QoS indicator within the label in the corresponding MPLS EXP bits (3 bits).

There are different ways to mark IP packets with QoS value: PBR, CAR, BGP.

Example 1. Marking PBR

Policy-Based Routing (PBR) can be used for marking by performing it in the corresponding route-map (a route-map can contain a set ip precedence clause):

!
interface FastEthernet0/0
ip policy route-map MARK
speed 100
full-duplex
no cdp enable
!
!
route-map MARK permit 10
match ip address 1
set ip precedence priority
!

At the output of the interface, you can see the result (for example, with the tcpdump program for unix):

# tcpdump -vv -n -i em0
... IP (tos 0x20 ...)
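The `tos 0x20` that tcpdump prints corresponds to the `priority` precedence set by the route-map: IP precedence occupies the top 3 bits of the ToS byte. An illustrative decoding sketch:

```python
# Sketch: decode IP precedence from the ToS byte shown by tcpdump.
PRECEDENCE_NAMES = ["routine", "priority", "immediate", "flash",
                    "flash-override", "critical", "internet", "network"]

def precedence_of(tos: int) -> str:
    """IP precedence occupies the top 3 bits of the ToS byte."""
    return PRECEDENCE_NAMES[(tos >> 5) & 0x7]

print(precedence_of(0x20))  # priority -- the value set by the route-map above
```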

Example 2. Marking CAR.

The Committed Access Rate (CAR) mechanism is designed to limit the rate, but it can additionally mark packets (set-prec-transmit parameter in rate-limit):

!
interface FastEthernet0/0
ip address 192.168.0.2 255.255.255.252
rate-limit input access-group 1 1000000 10000 10000 conform-action set-prec-transmit 3 exceed-action set-prec-transmit 3
no cdp enable
!
access-list 1 permit 192.168.0.0 0.0.0.255
!

#sh interface FastEthernet0/0 rate-limit

3.2. Congestion Management. Queuing mechanism.

3.2.1. Congestion.

Congestion occurs when the output buffers of traffic-forwarding equipment overflow. Its main causes are traffic aggregation (the rate of incoming traffic exceeds the rate of outgoing traffic) and speed mismatches between interfaces.

Bandwidth management at points of congestion (bottlenecks) is carried out using a queuing mechanism: packets are placed in queues, which are processed in an orderly manner according to a certain algorithm. In essence, congestion management is determining the order in which packets leave the interface based on priority. If there is no congestion, the queues are idle (and not needed). Let's list the queue-processing methods.

3.2.2. Layer 2 Queuing.

The physical structure of a classic switch can be simplified as follows: a packet arrives at the input port, is processed by the switching mechanism, which decides where to send the packet, and enters the hardware queues of the output port. Hardware queues are fast memory that stores packets before they go directly to the output port. Then, according to a certain processing mechanism, packets are removed from the queues and leave the switch. Initially, the queues are equal and it is the queue processing mechanism (Scheduling) that determines the prioritization. Typically, each port on a switch contains a limited number of queues: 2, 4, 8, and so on.

In general terms, setting prioritization is as follows:

1. Initially, the queues are equal, so you must first configure them, that is, determine the order (or the relative volume) of their processing. This is most often done by mapping 802.1P priorities to queues.

2. You need to configure a queue handler (Scheduler). The most commonly used are Weighted Round Robin (WRR) or Strict Priority Queuing.

3. Assigning priority to incoming packets: by the input port, by CoS or, in the case of additional capabilities (Layer 3 switch), by some IP fields.

It all works as follows:

1. A packet enters the switch. If it is a plain Ethernet packet (from a client Access Port), it carries no priority label, and the switch can assign one if necessary, for example by ingress port number. If the ingress port is a trunk (802.1Q or ISL), the packet may already carry a priority tag, which the switch can accept or replace with the required value. Either way, at this stage the packet has entered the switch and carries the necessary CoS marking.

2. After the switching process, the packet is placed, according to its CoS priority label, into the corresponding output-port queue (classification). For example, critical traffic goes to a high-priority queue and less important traffic to a low-priority one.

3. The scheduling mechanism extracts packets from queues according to their priorities. More packets will be sent from the high-priority queue per unit of time to the output port than from the low-priority one.
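The Weighted Round Robin scheduler mentioned earlier can be sketched in a few lines. This is one possible illustrative implementation, not switch firmware: per cycle, up to `weight` packets are sent from each non-empty queue until the transmit budget is spent.

```python
from collections import deque

def wrr_schedule(queues, weights, budget):
    """Weighted Round Robin sketch: per cycle, dequeue up to `weight`
    packets from each non-empty queue until the budget is exhausted."""
    queues = [deque(q) for q in queues]
    sent = []
    while budget > 0 and any(queues):
        for q, w in zip(queues, weights):
            n = min(w, len(q), budget)
            for _ in range(n):
                sent.append(q.popleft())
            budget -= n
            if budget == 0:
                break
    return sent

print(wrr_schedule([["h1", "h2", "h3"], ["l1", "l2"]], weights=[3, 1], budget=10))
# ['h1', 'h2', 'h3', 'l1', 'l2']
```

Note how the weight 3:1 lets the high-priority queue send more packets per cycle without completely starving the low-priority one.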


3.2.3. Layer 3 Queuing.

Router devices operate on packets at the third OSI layer (Layer 3). Most of the time, queue support is provided by software. This means, in most cases, there are no hardware restrictions on their number and more flexible configuration of processing mechanisms. The general QoS Layer 3 paradigm includes marking and classification of packets at the entrance (Marking & Classification), allocation to queues and their processing (Scheduling) according to certain algorithms.

And once again we emphasize that prioritization (queues) is mainly required only in narrow, congested places, when the channel capacity is not enough to transmit all incoming packets and you need to somehow differentiate their processing. In addition, prioritization is also necessary in the case of preventing the impact of network spikes on delay-sensitive traffic.

Let's classify Layer 3 QoS by queue processing methods.

3.2.3.1. FIFO.

An elementary queue passing packets sequentially on the first-in-first-out (FIFO) principle; the Russian equivalent is "whoever gets up first gets the slippers". There is essentially no prioritization here. FIFO is enabled by default on interfaces faster than 2 Mbps.

3.2.3.2. PQ. Priority queues.

Priority Queuing (PQ) gives some packets unconditional priority over others. There are 4 queues: high, medium, normal and low. Processing proceeds sequentially (from high to low), starting with the high-priority queue, and does not move on to lower-priority queues until the current one is completely empty. Thus, the channel can be monopolized by high-priority queues. Traffic whose priority is not explicitly specified goes to the default queue.
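The strict-priority semantics described above can be sketched as follows (an illustrative model, not IOS code): the scheduler always drains the highest non-empty queue, which is exactly why high-priority traffic can starve the rest.

```python
from collections import deque

class PriorityQueuing:
    """PQ semantics sketch: four strict queues; the scheduler always
    serves the highest non-empty queue, so high traffic can starve low."""
    LEVELS = ("high", "medium", "normal", "low")

    def __init__(self):
        self.queues = {lvl: deque() for lvl in self.LEVELS}

    def enqueue(self, level, packet):
        self.queues[level].append(packet)

    def dequeue(self):
        for lvl in self.LEVELS:
            if self.queues[lvl]:
                return self.queues[lvl].popleft()
        return None  # all queues empty

pq = PriorityQueuing()
pq.enqueue("low", "bulk")
pq.enqueue("high", "voice")
print(pq.dequeue())  # voice -- the high queue is always drained first
```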

Command parameters.
Distributing protocols among the queues:
priority-list LIST_NUMBER protocol PROTOCOL (high | medium | normal | low) list ACCESS_LIST_NUMBER
Defining the default queue:
priority-list LIST_NUMBER default (high | medium | normal | low)
Setting queue sizes (in packets):
priority-list LIST_NUMBER queue-limit HIGH_QUEUE_SIZE MEDIUM_QUEUE_SIZE NORMAL_QUEUE_SIZE LOW_QUEUE_SIZE

Designations:
LIST_NUMBER - PQ handler (list) number
PROTOCOL - protocol
ACCESS_LIST_NUMBER - access list number
HIGH_QUEUE_SIZE - HIGH queue size
MEDIUM_QUEUE_SIZE - MEDIUM queue size
NORMAL_QUEUE_SIZE - NORMAL queue size
LOW_QUEUE_SIZE - LOW queue size

Configuration procedure.

1. Define 4 queues
access-list 110 permit ip any any precedence network
access-list 120 permit ip any any precedence critical
access-list 130 permit ip any any precedence internet
access-list 140 permit ip any any precedence routine

priority-list 1 protocol ip high list 110
priority-list 1 protocol ip medium list 120
priority-list 1 protocol ip normal list 130
priority-list 1 protocol ip low list 140
priority-list 1 default low


priority-list 1 queue-limit 30 60 90 120

2. We bind to the interface

!
interface FastEthernet0/0
ip address 192.168.0.2 255.255.255.0
speed 100
full-duplex
priority-group 1
no cdp enable
!

3. Viewing the result
# sh queueing priority

Current priority queue configuration:

List   Queue    Args
1      low      default
1      high     protocol ip  list 110
1      medium   protocol ip  list 120
1      normal   protocol ip  list 130
1      low      protocol ip  list 140

#sh interfaces fastEthernet 0/0

Queueing strategy: priority-list 1


Interface FastEthernet0/0 queueing strategy: priority


high/19 medium/0 normal/363 low/0

3.2.3.3. CQ. Custom queues.

Custom Queuing (CQ) provides user-defined queues and controls each queue's share of the channel bandwidth. 17 queues are supported. The system queue 0 is reserved for high-priority control packets (routing, etc.) and is not available to the user.

The queues are served sequentially, starting with the first. Each queue has a byte counter, which at the start of each pass holds the configured value and is decremented by the size of each packet sent from that queue. While the counter is above zero, the next packet is sent in its entirety, not a fragment equal to the counter's remainder.
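The byte-counter rule above, including the "whole packet even if it overruns the counter" behavior, can be sketched like this (illustrative only):

```python
from collections import deque

def cq_pass(queues, byte_counts):
    """One Custom Queuing pass (sketch): each queue sends packets until its
    byte counter is exhausted; the last packet goes out whole even if it
    overruns the counter, exactly as described above."""
    sent = []
    for packets, budget in zip(queues, byte_counts):
        q = deque(packets)
        remaining = budget
        while q and remaining > 0:
            name, size = q.popleft()
            sent.append(name)
            remaining -= size  # may go negative: whole packets only
    return sent

# byte-count 2000 lets "b" (1500 bytes) through whole although only 500 remained
print(cq_pass([[("a", 1500), ("b", 1500), ("c", 1500)]], [2000]))  # ['a', 'b']
```

This overrun is why the actual bandwidth shares under CQ only approximate the configured byte-count ratios.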

Command parameters.
Defining the queue byte counters (bandwidth shares):
queue-list LIST_NUMBER queue QUEUE_NUMBER byte-count BYTE_COUNT

Setting queue sizes:
queue-list LIST_NUMBER queue QUEUE_NUMBER limit QUEUE_SIZE

Designations:
LIST_NUMBER - handler number
QUEUE_NUMBER - queue number
BYTE_COUNT - byte counter (bytes sent from the queue per pass)
QUEUE_SIZE - queue size in packets

Configuration procedure.

1. Defining queues
access-list 110 permit ip host 192.168.0.100 any
access-list 120 permit ip host 192.168.0.200 any

queue-list 1 protocol ip 1 list 110
queue-list 1 protocol ip 2 list 120
queue-list 1 default 3

queue-list 1 queue 1 byte-count 3000
queue-list 1 queue 2 byte-count 1500
queue-list 1 queue 3 byte-count 1000

Additionally, you can set the queue sizes in packets:
queue-list 1 queue 1 limit 50
queue-list 1 queue 2 limit 50
queue-list 1 queue 3 limit 50

2. Bind to the interface
!
interface FastEthernet0/0
ip address 192.168.0.2 255.255.255.0
speed 100
full-duplex
custom-queue-list 1
no cdp enable
!

3. View result
#sh queueing custom

Current custom queue configuration:

List   Queue   Args
1      3       default
1      1       protocol ip  list 110
1      2       protocol ip  list 120
1      1       byte-count 1000
1      2       byte-count 1000
1      3       byte-count 2000

#sh interface FastEthernet0/0

Queueing strategy: custom-list 1

#sh queueing interface fastEthernet 0/0
Interface FastEthernet0/0 queueing strategy: custom

Output queue utilization (queue / count)
0/90 1/0 2/364 3/0 4/0 5/0 6/0 7/0 8/0
9/0 10/0 11/0 12/0 13/0 14/0 15/0 16/0

3.2.3.4. WFQ. Weighted fair queues.

Weighted Fair Queuing (WFQ) automatically splits traffic into flows. By default there are 256 queues, but this can be changed (the dynamic-queues parameter of the fair-queue command). If there are more flows than queues, several flows share one queue. A packet is assigned to a flow (classified) based on ToS, protocol, source IP address, destination IP address, source port, and destination port. Each flow uses a separate queue.

The WFQ scheduler provides fair sharing of bandwidth among the existing flows: the available bandwidth is divided by the number of flows, and each receives an equal share. In addition, each flow is given a weight, with a coefficient inversely proportional to its IP precedence (ToS); the scheduler takes this weight into account as well.

As a result, WFQ automatically distributes the available bandwidth fairly, additionally taking ToS into account. Flows with the same IP precedence receive equal shares of bandwidth; flows with higher IP precedence receive larger ones. During congestion, lightly loaded high-priority flows operate unchanged, while heavily loaded low-priority flows are throttled.
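The resulting split can be illustrated with simplified arithmetic. This is not the exact IOS weight formula, only a sketch of the principle: give each flow a share proportional to (IP precedence + 1), so equal precedences split the link equally and higher precedences get more.

```python
def wfq_shares(flow_precedences, link_bw_kbps):
    """Illustrative arithmetic only (not the exact IOS weight formula):
    each flow's share is proportional to (IP precedence + 1)."""
    total = sum(p + 1 for p in flow_precedences)
    return [link_bw_kbps * (p + 1) / total for p in flow_precedences]

# Two routine (precedence 0) flows and one critical (precedence 5) flow on 1 Mbit/s
print(wfq_shares([0, 0, 5], 1000))  # [125.0, 125.0, 750.0]
```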

RSVP works with WFQ. By default, WFQ is enabled on low speed interfaces.

Configuration procedure.
1. Mark the traffic by any means (set the IP precedence / ToS) or receive it already marked

2. Turn on WFQ on the interface
interface FastEthernet0/0
fair-queue

interface FastEthernet0/0
fair-queue CONGESTIVE_DISCARD_THRESHOLD DYNAMIC_QUEUES

Options:
CONGESTIVE_DISCARD_THRESHOLD - the number of packets in each queue above which arriving packets are dropped (default 64)
DYNAMIC_QUEUES - the number of sub-queues by which the traffic is classified (by default - 256)

3. Viewing the result
# sh queueing fair
# sh queueing interface FastEthernet0/0

3.2.3.5. CBWFQ.

Class Based Weighted Fair Queuing (CBWFQ) is a class-based queuing mechanism. Traffic is divided into up to 64 classes based on parameters such as input interface, access list, protocol, DSCP value, and MPLS QoS label.

The total bandwidth of the egress interface is distributed across the classes. The bandwidth allocated to each class can be defined either as an absolute value (bandwidth, in kbit/s) or as a percentage (bandwidth percent) of the value configured on the interface.

Packets that do not match any configured class fall into the default class, which can be customized further and which receives the remaining free bandwidth. If the queue of any class overflows, packets of that class are dropped. The drop algorithm within each class is selectable: ordinary tail drop, enabled by default (the queue-limit parameter), or WRED (the random-detect parameter). Fair queuing (the fair-queue parameter) can be enabled only for the default class.

CBWFQ supports interoperability with RSVP.

Command parameters.

criteria for assigning packets to a class:
class-map match-all CLASS
match access-group
match input-interface
match protocol
match ip dscp
match ip rtp
match mpls experimental

class definition:

class CLASS
bandwidth BANDWIDTH
queue-limit QUEUE-LIMIT
random-detect

default class definition:

class class-default
bandwidth BANDWIDTH
bandwidth percent BANDWIDTH_PERCENT
queue-limit QUEUE-LIMIT
random-detect
fair-queue

designations:
CLASS - class name.
BANDWIDTH - minimum bandwidth in kbit/s, an absolute value independent of the interface bandwidth.
BANDWIDTH_PERCENT - percentage of the interface bandwidth.
QUEUE-LIMIT - maximum number of packets in the queue.
random-detect - use WRED.
fair-queue - fair queuing; default class only.

By default, the total absolute bandwidth of the CBWFQ classes cannot exceed 75% of the bandwidth configured on the interface. This can be changed with the max-reserved-bandwidth command on the interface.

Tuning algorithm.

1. Assign packets to classes - class-map

class-map match-all Class1
match access-group 101

2. Description of the rules for each class - policy-map
policy-map Policy1
class Class1
bandwidth 100
queue-limit 20
class class-default
bandwidth 50
random-detect

3. Apply the policy to the interface - service-policy
interface FastEthernet0/0
bandwidth 256
service-policy output Policy1

4. View the result
#sh class-map Class1
#sh policy-map Policy1
#sh policy-map interface FastEthernet0/0

Example 1.

Dividing the total bandwidth among classes by percentage (40, 30, 20).
access-list 101 permit ip host 192.168.0.10 any
access-list 102 permit ip host 192.168.0.20 any
access-list 103 permit ip host 192.168.0.30 any

class-map match-all Platinum
match access-group 101
class-map match-all Gold
match access-group 102
class-map match-all Silver
match access-group 103

policy-map Isp
class Platinum
bandwidth percent 40
class Gold
bandwidth percent 30
class Silver
bandwidth percent 20

interface FastEthernet0/0
bandwidth 256
service-policy output Isp
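
The arithmetic behind this example can be checked with a short Python sketch (the function name is my own; this only reproduces the percentage math, not the scheduler itself): on the 256 kbps interface, Platinum, Gold, and Silver are guaranteed 40/30/20 percent, and whatever remains goes to class-default.

```python
def cbwfq_allocation(interface_kbps, percent_by_class):
    """Compute the guaranteed kbit/s per class when `bandwidth percent`
    is taken relative to the interface bandwidth; the leftover share
    goes to the default class."""
    alloc = {cls: interface_kbps * pct / 100 for cls, pct in percent_by_class.items()}
    alloc["class-default"] = interface_kbps - sum(alloc.values())
    return alloc

policy = cbwfq_allocation(256, {"Platinum": 40, "Gold": 30, "Silver": 20})
# Platinum: 102.4 kbps, Gold: 76.8 kbps, Silver: 51.2 kbps, class-default: ~25.6 kbps
```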

3.2.3.6. LLQ.

Low Latency Queuing (LLQ) is queuing with low delay. LLQ can be thought of as CBWFQ with an added PQ priority queue (LLQ = PQ + CBWFQ).
The PQ component lets LLQ serve delay-sensitive traffic first. LLQ is recommended for voice (VoIP) traffic, and it also works well for video conferencing.

Tuning algorithm.

1. Assign packets to classes - class-map
access-list 101 permit ip any any precedence critical

class-map match-all Voice
match ip precedence 6
class-map match-all Class1
match access-group 101

2. Describe the rules for each class - policy-map

The policy is the same as in CBWFQ, except that the priority parameter is specified for the priority class (there can be only one priority class).
policy-map Policy1
class Voice
priority 1000
class Class1
bandwidth 100
queue-limit 20
class class-default
bandwidth 50
random-detect

3. Apply the policy to the interface - service-policy
interface FastEthernet0/0
bandwidth 256
service-policy output Policy1

Example 1.
We assign the Voice class to PQ, and everything else to CBWFQ.
!
class-map match-any Voice
match ip precedence 5
!
policy-map Voice
class Voice
priority 1000
class VPN
bandwidth percent 50
class class-default
fair-queue 16
!
interface X
service-policy output Voice
!

Example 2.
Additionally, we limit the overall rate of the PQ in LLQ so that it cannot monopolize the entire channel if something misbehaves.
!
class-map match-any Voice
match ip precedence 5
!
policy-map Voice
class Voice
priority 1000
police 1024000 32000 32000 conform-action transmit exceed-action drop
class Vpn
bandwidth percent 50
class class-default
fair-queue 16
!
interface FastEthernet0/0
service-policy output Voice
!
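
The police statement above is a token-bucket limiter. The following Python sketch illustrates the idea in the spirit of `police 1024000 32000 ... exceed-action drop`; the class name is my own, and the real IOS policer (with separate conform/exceed buckets) is more involved.

```python
class TokenBucketPolicer:
    """Single-bucket policer sketch: tokens (in bytes) refill at
    rate_bps/8 bytes per second up to burst_bytes; a packet conforms
    if enough tokens remain, otherwise it is dropped."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate, bytes per second
        self.burst = burst_bytes        # bucket depth, bytes
        self.tokens = burst_bytes       # bucket starts full
        self.last = 0.0                 # time of the previous packet

    def offer(self, t, size_bytes):
        # refill tokens for the time elapsed since the last packet
        self.tokens = min(self.burst, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return "transmit"           # conform-action
        return "drop"                   # exceed-action
```

With rate 1,024,000 bps and a 32,000-byte burst, a full burst at t=0 is transmitted, an immediate extra byte is dropped, and after 0.25 s the bucket has refilled enough to pass another full burst.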

There is hardly a person who has not read a Windows XP FAQ at least once. And if so, then everyone "knows" that there is a harmful thing called Quality of Service - QoS for short. When configuring the system, it is strongly recommended to disable it, because by default it supposedly limits network bandwidth by 20%, and the problem allegedly exists in Windows 2000 as well.

These are the lines:

"Q: How do I completely disable the QoS (Quality of Service) service? How do I configure it? Is it true that it limits network speed?
A: Indeed, by default Quality of Service reserves 20% of the channel bandwidth for its needs (on any link - even a 14400 modem, even gigabit Ethernet). And even if you remove the QoS Packet Scheduler service from the connection Properties, this bandwidth is not released. You can release it, or simply configure QoS, as follows. Launch the Group Policy applet (gpedit.msc). Under Local Computer Policy, open Administrative Templates, then Network - QoS Packet Scheduler. Enable Limit reservable bandwidth and reduce the 20% Bandwidth limit to 0%, or simply disable the setting. Other QoS parameters can also be configured here if desired. A reboot is required to apply the changes."

20% is, of course, a lot. Microsoft, as usual, gets the blame. Statements of this kind wander from FAQ to FAQ, from forum to forum, from one publication to another, and are built into all kinds of "tweakers" - programs for "tuning" Windows XP (incidentally, open "Group Policy" and "Local Security Policy": no "tweaker" can match them in the wealth of configuration options). Unfounded claims of this kind deserve careful scrutiny, which is what we will do now with a systematic approach - that is, by studying the question thoroughly, relying on official primary sources.

What is a quality service network?

Let us adopt the following simplified view of a networked system. Applications run on hosts and exchange data with each other. Applications hand data to the operating system for transmission over the network. Once the data is handed to the operating system, it becomes network traffic.
Network QoS relies on the ability of the network to handle this traffic so that the requests of certain applications are fulfilled. This requires a fundamental traffic-handling mechanism that can identify the traffic eligible for special handling and control that handling.

The QoS functionality is designed to satisfy two network actors: network applications and network administrators. They often have disagreements. The network administrator limits the resources used by a specific application, while the application tries to grab as many network resources as possible. Their interests can be reconciled, taking into account the fact that the network administrator plays a leading role in relation to all applications and users.

Basic QoS parameters

Different applications have different requirements for handling their network traffic. Applications are more or less tolerant of latency and traffic loss. These requirements have found application in the following QoS-related parameters:

  • Bandwidth - The rate at which traffic generated by the application must be transmitted over the network.
  • Latency - The latency that an application can tolerate in delivering a data packet.
  • Jitter - the variation in delay.
  • Loss - The percentage of data lost.

If infinite network resources were available, then all application traffic could be transmitted at the required rate, with zero latency, zero latency variation, and zero loss. However, network resources are not limitless.

The QoS mechanism controls the allocation of network resources for application traffic to meet its transmission requirements.
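
The four parameters above can be estimated from a packet capture. The sketch below is illustrative: the function name and the mean-absolute-difference definition of jitter are my own simplifying assumptions, not a standard metric.

```python
def qos_stats(sent, received_delays_ms):
    """Summarize latency, jitter, and loss for a stream.
    received_delays_ms maps packet sequence number -> one-way delay in ms;
    sequence numbers missing from the map are counted as lost."""
    delays = [received_delays_ms[s] for s in sorted(received_delays_ms)]
    latency = sum(delays) / len(delays)
    # jitter taken here as the mean absolute delay change between consecutive packets
    jitter = sum(abs(a - b) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)
    loss_pct = 100.0 * (sent - len(delays)) / sent
    return latency, jitter, loss_pct
```

For 5 packets sent and delays of 100, 110, 90, and 100 ms on the 4 that arrived, this yields 100 ms average latency, about 13.3 ms jitter, and 20% loss.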

Fundamental QoS Resources and Traffic Handling Mechanisms

The networks that connect hosts use a variety of networking devices including host network adapters, routers, switches, and hubs. Each of them has network interfaces. Each network interface can receive and transmit traffic at a finite rate. If the rate at which traffic is directed to an interface is higher than the rate at which the interface is forwarding traffic, congestion occurs.

Network devices can handle the congestion condition by queuing traffic in the device memory (in a buffer) until the congestion is over. In other cases, network equipment can drop traffic to ease congestion. As a result, applications are faced with a change in latency (as traffic is stored in queues on interfaces) or with traffic loss.

The ability of network interfaces to forward traffic and the availability of memory to store traffic on network devices (until traffic can be sent further) constitute the fundamental resources required to provide QoS for application traffic streams.

Allocating QoS Resources to Network Devices

Devices that support QoS use network resources intelligently to carry traffic. That is, traffic from applications that are more tolerant of latency is queued (stored in a buffer in memory), and traffic from applications that are sensitive to latency is forwarded on.

To accomplish this task, a network device must identify traffic by classifying packets, and must have queues and mechanisms for serving them.

Traffic processing engine

Traffic handling mechanisms include:

  • 802.1p
  • Differentiated per-hop-behaviors (diffserv PHB).
  • Integrated services (intserv).
  • ATM, etc.

Most local area networks are based on IEEE 802 technology including Ethernet, token-ring, etc. 802.1p is a traffic processing mechanism to support QoS in such networks.

802.1p defines a field (layer 2 in the OSI networking model) in an 802 packet header that can carry one of eight priority values. Typically, hosts or routers, when sending traffic to the local network, mark each packet sent, assigning it a certain priority value. It is expected that network devices such as switches, bridges, and hubs will handle packets appropriately using queuing mechanisms. 802.1p is limited to a local area network (LAN). As soon as the packet crosses the LAN (via OSI layer 3), 802.1p priority is removed.

Diffserv is a Layer 3 mechanism. It defines a field in Layer 3 of the header of IP packets called the diffserv codepoint (DSCP).

Intserv is a family of services that includes the guaranteed service and the controlled-load service. The guaranteed service promises to carry a certain amount of traffic with measurable, bounded latency. The controlled-load service agrees to carry a certain amount of traffic as if over a lightly loaded network. These are measurable services in the sense that they are defined to provide measurable QoS to a specific amount of traffic.

Because ATM technology fragments packets into relatively small cells, it can offer very low latency. If you need to send a packet urgently, the ATM interface can always be free for transmission for the time it takes to transmit one cell.

QoS involves many more complex mechanisms that make this technology work. Note just one important point: for QoS to work, the technology must be supported and configured appropriately along the entire path from the start point to the end point.

For clarity, consider Fig. 1.

We accept the following:

  • All routers are involved in the transmission of the required protocols.
  • One QoS session requiring 64 Kbps is provisioned between Host A and Host B.
  • Another session, requiring 64 Kbps, is initialized between Host A and Host D.
  • To simplify the diagram, we assume that the routers are configured so that all their network resources are available for reservation.

In our case, one 64 Kbps reservation request would reach three routers on the data path between Host A and Host B. Another 64 Kbps request would reach three routers between Host A and Host D. The routers would fulfill these resource reservation requests because they do not exceed the maximum. If, instead, each of hosts B and C were to simultaneously initiate a 64 Kbps QoS session with host A, then the router serving these hosts (B and C) would deny one of the connections.

Now suppose the network administrator disables QoS processing on the bottom three routers serving hosts B, C, D, E. In this case, requests for resources up to 128 Kbps would be satisfied regardless of the location of the host participating in the connection. However, the quality assurance would be low because traffic for one host would compromise traffic for another. QoS could be maintained if the upper router limited all requests to 64 Kbps, but this would result in inefficient use of network resources.

On the other hand, the bandwidth of all network connections could be increased to 128 Kbps. But the increased bandwidth will only be used when hosts B and C (or D and E) are simultaneously requesting resources. If this is not the case, then network resources will again be used inefficiently.
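
The admission decisions in this example can be sketched as a per-link check (a minimal illustration; the function name is my own, and real RSVP signaling spans every router on the path):

```python
def admit(reservations_kbps, link_capacity_kbps, request_kbps):
    """RSVP-style admission control on a single link: grant the request
    only if the existing reservations plus the new one fit in capacity."""
    if sum(reservations_kbps) + request_kbps <= link_capacity_kbps:
        reservations_kbps.append(request_kbps)
        return True
    return False

# A 64 kbps bottom link admits one 64 kbps session and denies a second;
# a 128 kbps top link admits both.
```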

Microsoft QoS Components

Windows 98 contains only user-level QoS components including:

  • Application components.
  • GQoS API (part of Winsock 2).
  • QoS service provider.

The Windows 2000 / XP / 2003 operating system contains all of the above and the following components:

  • Resource Reservation Protocol service provider (Rsvpsp.dll), the RSVP service (Rsvp.exe), and the QoS ACS service. Not used in Windows XP and 2003.
  • Traffic control (Traffic.dll).
  • Generic Packet Classifier (Msgpc.sys). The packet classifier determines the class of service to which a packet belongs and places the packet into the corresponding queue. The queues are managed by the QoS Packet Scheduler.
  • QoS Packet Scheduler (Psched.sys). Enforces the QoS parameters for a specific data flow. Traffic is tagged with a particular priority value. The QoS Packet Scheduler determines the queuing schedule for each packet and handles competing requests between queued packets that need simultaneous network access.

The diagram in Figure 2 illustrates the protocol stack, Windows components, and how they interact on a host. Items that were used in Windows 2000 but are not used in Windows XP / 2003 are not shown in the diagram.

Applications are at the top of the stack. They may or may not know about QoS. To harness the full power of QoS, Microsoft recommends using Generic QoS API calls in applications. This is especially important for applications requiring high quality service guarantees. Some utilities can be used to invoke QoS on behalf of applications that are not QoS aware. They work through the traffic management API. For example, NetMeeting uses the GQoS API. But for such applications, the quality is not guaranteed.

The last nail in the coffin

The theory above gives no reason for the notorious 20% to disappear anywhere (and no one, I note, has ever measured it precisely). Yet opponents advance a new argument: the QoS system is fine, but the implementation is crooked, and so 20% is eaten up after all. Apparently the issue wore on the software giant as well, since it long ago published a separate rebuttal of such fabrications.

So let us give the floor to the developers and present selected passages from the article "316666 - Windows XP Quality of Service (QoS) Enhancements and Behavior":

"One hundred percent of the network bandwidth is available for allocation among all programs, unless a program explicitly requests priority bandwidth. This "reserved" bandwidth is available to other programs whenever the requesting program is not sending data.

By default, programs can reserve up to 20% of the main connection speed on each computer interface. If the program that reserved the bandwidth is not sending enough data to use it up, the unused portion of the reserved bandwidth is available for other data streams.

There have been statements in various technical articles and newsgroups claiming that Windows XP always reserves 20% of the available bandwidth for QoS. These statements are wrong."

If anyone still believes that 20% of their bandwidth is being eaten, well, I can only advise them to keep using all those "tweakers" and crooked network drivers. They will lose even more.

And so, the QoS myth is dead!

We have already discussed what problems can occur in a network and how QoS can affect them. In this article we will talk about the QoS mechanisms themselves.

QoS mechanisms

Because applications may require different QoS levels, many models and mechanisms emerge to meet these needs.

Consider the following models:

  • Best Effort - non-guaranteed delivery, used on all networks by default. Its upside is that it requires no effort at all to implement: no QoS mechanisms are used, and all traffic is served on a first-come, first-served basis. This model is not suitable for modern network environments;
  • Integrated Services (IntServ) - the integrated services model uses resource reservation. For example, if a user wanted to place an 80 kbps VoIP call across a data network, a network built purely on the IntServ model would reserve 80 kbps on every network device between the two VoIP endpoints using the Resource Reservation Protocol (RSVP). For the duration of the call, those 80 kbps would be unavailable for anything other than that VoIP call. Although IntServ is the only model that offers guaranteed bandwidth, it also has scalability problems: if enough reservations are made, the network simply runs out of bandwidth;
  • Differentiated Services (DiffServ) - the differentiated services model is the most popular and flexible way to deploy QoS. In this model each device can be configured to apply different QoS methods depending on the traffic type. You specify which traffic belongs to a given class and how that class should be treated. Unlike IntServ, bandwidth is not absolutely guaranteed, because the network devices do not hard-reserve it. However, DiffServ achieves close-to-guaranteed bandwidth while solving IntServ's scalability problems, which has allowed it to become the standard QoS model;

QoS tools

The QoS mechanisms themselves are a set of tools that combine to provide the level of service that traffic needs. Each of these tools fits into one of the following categories:

  • Classification and Marking - these tools identify and label a packet so that network devices can easily recognize it as it traverses the network. Typically, the first device to receive the packet identifies it using tools such as access lists, inbound interfaces, or deep packet inspection (DPI), which examines the application data itself. These tools can be CPU-intensive and add latency, so once a packet is identified it is marked immediately. The mark can live in the Layer 2 (data link) header, where switches can read it, and/or in the Layer 3 (network) header, where routers can read it. At Layer 2 the 802.1p field is used; at Layer 3, the Type of Service field. As the packet then traverses the rest of the network, devices simply look at the mark to classify it instead of inspecting it deeply;
  • Congestion Management - congestion occurs when a device's buffers fill and packet processing times grow. Queuing policies define the rules a router enforces when congestion occurs. For example, if an E1 WAN interface were completely saturated with traffic, the router would start holding packets in memory (queues) and forward them as bandwidth becomes available. All queuing strategies aim to answer one question: "when bandwidth becomes available, which packet goes first?";
  • Congestion Avoidance - most QoS mechanisms apply only when the network is already congested. The goal of congestion avoidance tools is to drop enough non-essential (non-critical) traffic early so that severe congestion never occurs in the first place;
  • Policing and Shaping - these mechanisms limit the bandwidth of certain network traffic. Policing is useful against the usual "bandwidth eaters" on the network: p2p applications, web surfing, FTP, and the like. Shaping likewise limits bandwidth and is needed on networks where the contracted rate is lower than the physical speed of the interface. The difference between the two is that shaping queues the excess traffic to send it later, while policing typically drops it;
  • Link Efficiency - this group of tools focuses on delivering traffic as efficiently as possible. For example, some low-speed links may perform better if you spend time compressing network traffic before sending it (compression is one of the Link Efficiency tools);
Link Efficiency Mechanisms

There are two main problems when using slow interfaces:

  • Insufficient bandwidth makes it difficult to send the required amount of data on time;
  • Slow speeds can have a significant impact on end-to-end latency due to the serialization process (the amount of time it takes for a router to move a packet from the memory buffer to the network). On these slow links, the larger the packet, the longer the serialization delay;
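
The serialization delay mentioned above is just packet size over link rate; the following one-liner (function name my own) makes the effect of slow links concrete.

```python
def serialization_delay_ms(packet_bytes, link_kbps):
    """Time to clock one packet onto the wire, in milliseconds:
    bits in the packet divided by the link rate in kbit/s."""
    return packet_bytes * 8 / link_kbps
```

A 1500-byte packet takes 187.5 ms to serialize on a 64 kbps link, but only 0.12 ms on 100 Mbps FastEthernet, which is why large packets on slow links starve interleaved voice.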

To overcome these problems, the following Link Efficiency mechanisms were developed:

  • Payload Compression - compresses the application data sent over the network, so the router sends less data across a slow line;
  • Header Compression - some traffic (such as VoIP) carries a small amount of application data (RTP audio) in each packet but sends a great many packets. The header overhead then becomes significant and often consumes more bandwidth than the data itself. Header compression attacks this directly by eliminating many of the redundant fields in the packet header. Remarkably, RTP header compression, also called Compressed Real-time Transport Protocol (cRTP), reduces the 40-byte header to 2-4 bytes;
  • Link Fragmentation and Interleaving (LFI) - LFI addresses serialization delay by chopping large packets into smaller pieces before sending them. This lets the router slot delay-critical VoIP traffic between the fragments of other data (this is the voice "interleaving");
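
The cRTP savings are easy to quantify. The sketch below assumes a G.729-style call with 20-byte voice payloads at 50 packets per second and ignores Layer 2 overhead; the function name and these call parameters are illustrative assumptions.

```python
def voip_bandwidth_kbps(payload_bytes, header_bytes, packets_per_sec):
    """Per-call bandwidth in kbit/s for a fixed payload, header, and packet rate."""
    return (payload_bytes + header_bytes) * 8 * packets_per_sec / 1000

uncompressed = voip_bandwidth_kbps(20, 40, 50)  # full 40-byte IP/UDP/RTP header
compressed   = voip_bandwidth_kbps(20, 2, 50)   # cRTP's 2-byte header
# 24.0 kbps vs 8.8 kbps: the headers alone cost more than the voice payload
```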
Queue algorithms

Queuing defines the rules a router applies when congestion occurs. Most network interfaces use basic First-In, First-Out (FIFO) queuing by default: whichever packet arrives first is sent first. While that seems fair, not all network traffic is created equal. The main task of queuing is to ensure that network traffic serving mission-critical or delay-sensitive business applications is sent ahead of non-essential traffic. Beyond FIFO, three primary queuing algorithms are used:

  • Weighted Fair Queuing (WFQ) - WFQ tries to balance the available bandwidth fairly among all senders. With this method, a high-volume sender receives less priority than a low-volume sender;
  • Class-Based Weighted Fair Queuing (CBWFQ) - this queuing method lets you specify guaranteed bandwidth levels for different traffic classes. For example, you can specify that web traffic gets 20 percent of the bandwidth while Citrix traffic gets 50 percent (values can be given as a percentage or as an absolute amount of bandwidth). WFQ is then used for all unclassified traffic (the remaining 30 percent in this example);
  • Low Latency Queuing (LLQ) - LLQ is often called PQ-CBWFQ because it works like CBWFQ with a Priority Queuing (PQ) component added. If you direct certain traffic to the priority queue, the router not only guarantees bandwidth for that traffic but guarantees it the first bandwidth. For example, with pure CBWFQ, Citrix traffic may be guaranteed 50% of the bandwidth, but it may receive it only after the router satisfies some other guarantees. With LLQ, priority traffic is always sent before the other guarantees are fulfilled. This works very well for VoIP, making LLQ the preferred queuing algorithm for voice;

There are many other queuing algorithms; these three cover the methods used by most modern networks.
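
The "priority traffic always goes first" behavior of LLQ can be sketched in a few lines. This is a toy model: the function name is mine, and a simple first-non-empty scan stands in for the weighted CBWFQ scheduler that real LLQ uses for the non-priority classes.

```python
from collections import deque

def llq_dequeue(priority_q, class_qs):
    """One LLQ scheduling step: the priority queue is always served first;
    only when it is empty do the class queues get a turn (a plain scan
    here stands in for the weighted scheduler). Returns None when idle."""
    if priority_q:
        return priority_q.popleft()
    for q in class_qs:
        if q:
            return q.popleft()
    return None
```

With one voice packet in the priority queue and packets waiting in two class queues, the voice packet is dequeued first regardless of arrival order.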
