
International Journal of Information Studies, Volume 2, Issue 2, April 2010

    Extended Linux HTB Queuing Discipline Implementations

    Doru Gabriel BALAN, Dan Alin POTORAC

    Electrical Engineering and Computer Science Faculty

    Stefan cel Mare University of Suceava

    Romania

Abstract: In computer networks, traffic control is an essential management issue and a permanent challenge for network engineers. This paper evaluates the main QoS (Quality of Service) technologies used in network management. The article focuses mainly on classful queuing disciplines and the HTB (Hierarchy Token Bucket) Linux implementations. The shaping and prioritization mechanisms are explained, and three different practical solutions for implementing HTB, under a common Linux environment, are proposed for a defined QoS scenario.

Keywords: Network traffic, Linux implementation, Network service

    Received: 11 November, Revised: 18 December 2009, Accepted: 29 December 2009

    1. Introduction

This material is oriented toward the methods by which the HTB queuing discipline (Devera, 2003) can be implemented in a Linux environment with scalable and precise results, as shown by Ivancic, Hadjina, and Basch (2005). The following discussion focuses on three methods that can be used to implement QoS rules using HTB:

- a command line method,

- a text file method,

- a web interface method.

These methods are:

- first, the classic and traditional UNIX command line offered by a common shell (sh, csh, bash, etc.), using the tc (traffic control) command line tool from the iproute2 software package (Kuznetsov and Hemminger, 2002),

- second, the HTB-tools suite proposed by Spirlea, Subredu, and Stanimir (2007) to simplify the difficult process of bandwidth allocation, and

- third, a set of WEB interface tools: WebHTB (Delicostea, 2008) and T-HTB (Lazarov, 2009), used for shaping packets.

    2. QoS terms and background

Traffic control consists of the following series of actions (Hubert, 2002):

- Shaping

When traffic is shaped, its rate of transmission is under control. Shaping may be more than lowering the available bandwidth; it is also used to smooth out bursts in traffic for better network behavior. Shaping occurs on egress.

- Scheduling

By scheduling the transmission of packets it is possible to improve interactivity for traffic that needs it while still guaranteeing bandwidth to bulk transfers. Reordering is also called prioritizing, and happens only on egress.

- Policing

Where shaping deals with the transmission of traffic, policing pertains to arriving traffic, so it occurs on ingress.

- Dropping

Traffic exceeding a set bandwidth may also be dropped forthwith, both on ingress and on egress; a small ingress example is sketched after this list.
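For illustration only (the interface name and the rate are assumptions, not values taken from a scenario in this paper), policing with dropping on ingress can be sketched with the special ingress qdisc and a policing filter:

#tc qdisc add dev eth0 handle ffff: ingress
#tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 police rate 1mbit burst 10k drop flowid :1

The first command attaches the ingress qdisc to eth0; the second polices all arriving IP packets to 1 Mbit/s and drops the excess.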

Processing of traffic is controlled by three kinds of objects: qdiscs, classes and filters.


a. Qdiscs

Queueing disciplines are the basic elements for understanding traffic control. Whenever the kernel needs to send a packet to an interface, it is enqueued to the qdisc configured for that interface. Immediately afterwards, the kernel tries to get as many packets as possible from the qdisc, in order to hand them to the network adapter driver. The default qdisc in the Linux kernel is pfifo_fast, which does no processing at all and is a pure First In, First Out queue. It does, however, store traffic when the network interface can't handle it momentarily.

b. Classes

Some qdiscs can contain classes, which contain further qdiscs; traffic may then be enqueued in any of the inner qdiscs, which are within the classes. When the kernel tries to dequeue a packet from such a classful qdisc it can come from any of the classes. A qdisc may for example prioritize certain kinds of traffic by trying to dequeue from certain classes before others.

c. Filters

Filters reside within qdiscs; a filter is used by a classful qdisc to determine in which class a packet will be enqueued. Whenever traffic arrives at a class with subclasses, it needs to be classified. All filters attached to the class are called, until one of them returns with a verdict. If no verdict is reached, other classification criteria may be available.

There are two categories of queuing disciplines:

A. Classful queuing disciplines (contain classes and provide a handle to which to attach filters), and

B. Classless queuing disciplines (contain no classes, nor is it possible to attach filters to them).

Classless queuing disciplines can only accept data and reschedule, delay or drop it. They can be used to shape traffic for an entire interface, without any subdivisions (Hubert, 2004).

Each of these queuing disciplines can be used as the primary qdisc on an interface, or can be used inside a leaf class of a classful qdisc. They are the fundamental scheduler units used under Linux.

Some of the classless queuing disciplines used in Linux are:

- [p|b]fifo

The simplest usable qdisc, with pure First In, First Out behaviour.

- pfifo_fast

Standard qdisc for Advanced Router enabled kernels. Consists of a three-band queue which honors Type of Service flags, as well as the priority that may be assigned to a packet.

- RED

Random Early Detection simulates physical congestion by randomly dropping packets when nearing the configured bandwidth allocation. It is well suited to very large bandwidth applications.

- SFQ

Stochastic Fairness Queueing reorders queued traffic so each session gets to send a packet in turn.

- TBF

The Token Bucket Filter is suited for slowing traffic down to a precisely configured rate. It scales well to large bandwidths.
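As a minimal sketch of a classless discipline shaping an entire interface (the interface name and the figures are illustrative assumptions), TBF can be attached directly as the root qdisc:

#tc qdisc add dev eth0 root tbf rate 220kbit latency 50ms burst 1540

This limits all egress traffic on eth0 to 220 kbit/s, with at most 50 ms of queuing delay and a bucket of 1540 bytes.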

The classful queuing disciplines can have filters attached to them, allowing packets to be directed to particular classes and subqueues (Brown, 2006). Classful qdiscs are very useful when there are different types of traffic which should receive differing treatment.

Some of the classful queuing disciplines used in Linux are:

- CBQ

Class Based Queueing implements a rich link-sharing hierarchy of classes. It contains shaping elements as well as prioritizing capabilities. Shaping is performed using link idle time calculations based on average packet size and underlying link bandwidth.


- HTB

The Hierarchy Token Bucket implements a rich link sharing hierarchy of classes with an emphasis on conforming to existing practices. HTB facilitates guaranteeing bandwidth to classes, while also allowing specification of upper limits to inter-class sharing. It contains shaping elements, based on TBF, and can prioritize classes.

- PRIO

The PRIO qdisc is a non-shaping container for a configurable number of classes, which are dequeued in order. This allows for easy prioritization of traffic, where lower classes are only able to send if higher ones have no packets available. To facilitate configuration, Type Of Service (TOS) bits from the IP header are honored by default.
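A minimal PRIO sketch (the interface name and the choice of leaf qdiscs are illustrative assumptions):

#tc qdisc add dev eth0 root handle 1: prio
#tc qdisc add dev eth0 parent 1:1 handle 10: sfq
#tc qdisc add dev eth0 parent 1:2 handle 20: sfq
#tc qdisc add dev eth0 parent 1:3 handle 30: sfq

The PRIO qdisc creates three classes (1:1, 1:2 and 1:3) by default; band 1:1 is always dequeued first, and here each band receives an SFQ leaf qdisc.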

Hierarchical Token Bucket (HTB) is a packet scheduler currently included in Linux kernels (net/sched/sch_htb.c in the kernel source tree). HTB is meant as a more understandable, intuitive and faster replacement for the CBQ (Class Based Queueing) qdisc in Linux (Devera, 2002).

Both CBQ and HTB help to control the use of the outbound bandwidth on a given link. Both allow using one physical link to simulate several slower links and to send different kinds of traffic on different simulated links. In both cases, the administrator has to specify how to divide the physical link into simulated links and how to decide which simulated link to use for a given packet to be sent.

Unlike CBQ, HTB shapes traffic based on the Token Bucket Filter (TBF) algorithm, which does not depend on interface characteristics and so does not need to know the underlying bandwidth of the outgoing interface (Devera and Hubert, 2002).

    3. Linux kernel resources

The Linux kernel provides a set of controls that are used to enable the QoS mechanism. When the kernel has several packets to send out over a network device, it has to decide which ones to send first, which ones to delay, and which ones to drop. This is the job of the queuing disciplines; several different algorithms for how to do this fairly have been proposed, and HTB is one of them (Torvalds, 2003).

The QoS mechanism is activated, on a Linux kernel, at configuration time (make menuconfig), by enabling one kernel variable: NET_SCHED. At this moment the default queuing discipline for a Linux kernel, pfifo_fast, identified by the NET_SCH_FIFO variable, is also activated.

After this step, the queuing disciplines that will be compiled together with the kernel code have to be selected. The HTB queuing discipline has a configuration correspondent identified by the kernel variable NET_SCH_HTB. In the same manner, other QoS algorithms can be activated at kernel configuration time to be available later in kernel space to provide packet management. Each algorithm has a kernel variable, for example: CBQ - NET_SCH_CBQ, RED - NET_SCH_RED, SFQ - NET_SCH_SFQ, TBF - NET_SCH_TBF, etc.

After kernel compilation the queuing disciplines are available for QoS implementations directly in the kernel code, in the case of a monolithic kernel compilation, or as kernel modules if the kernel was compiled with module support. These modules can be identified in the Linux file system under /lib/modules/<kernel-version>/kernel/net/sched/; for example, the HTB kernel module is the sch_htb.ko file.
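Whether a given kernel was built with these options, and whether the corresponding module is loaded, can be checked from the shell; the short sketch below assumes a distribution that installs the kernel configuration file under /boot:

#grep NET_SCH_HTB /boot/config-$(uname -r)
#modprobe sch_htb
#lsmod | grep sch_htb

The first command shows whether HTB was compiled into the kernel (=y) or as a module (=m), the second loads the sch_htb module if needed, and the third confirms that it is present.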

    4. Iproute2/tc tool

Iproute2 is a collection of utilities for controlling TCP/IP networking and traffic control in Linux (Kuznetsov and Hemminger, 2002), whose objective is to realize the QoS implementation in the Linux kernel.

Most network configuration manuals still refer to ifconfig and route as the primary network configuration tools, but ifconfig is known to behave inadequately in modern network environments (Kuznetsov and Hemminger, 2002). They should be deprecated, but most distros still include them. Most network configuration systems make use of ifconfig and thus provide a limited feature set. The /etc/net project aims to support most modern network technologies, as it doesn't use ifconfig and allows a system administrator to make use of all iproute2 features, including traffic control (Kuznetsov and Hemminger, 2002).

Iproute2 is usually shipped in a package called iproute or iproute2 and consists of several tools, of which the most important are ip and tc. ip controls IPv4 and IPv6 configuration and tc stands for traffic control.
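As a brief illustration of the ip tool replacing the classic ifconfig and route invocations (the addresses are assumptions made for the example):

#ip addr add 192.0.2.1/24 dev eth0
#ip route add default via 192.0.2.254
#ip -s link show dev eth0

The first two commands assign an address and a default route, and the last one displays interface statistics.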


The tc tool from iproute2 can be used to show or manipulate traffic control settings in a Linux router, so the tc tool is used to configure traffic control at the Linux kernel level (Hubert, 2002).

Let's look at a short example:

We have two customers, A and B, both connected to the Internet via eth0. We want to allocate 60 kbps to B and 40 kbps to A. Next we want to subdivide A's bandwidth into 30 kbps for WWW and 10 kbps for everything else (Devera, 2002).

To solve this situation we have to type, line by line, or collect into a script, the following tc commands:

#tc qdisc add dev eth0 root handle 1: htb default 12

This command attaches the HTB queuing discipline to eth0 and gives it the handle 1:. This is just a name or identifier with which to refer to it below. The default 12 parameter means that any traffic that is not otherwise classified will be assigned to class 1:12.

The immediate result can be visualized with:

#tc qdisc show dev eth0

output:

qdisc htb 1: root r2q 10 default 12 direct_packets_stat 2398

Next we can build the classes:

#tc class add dev eth0 parent 1: classid 1:1 htb rate 100kbps ceil 100kbps
#tc class add dev eth0 parent 1:1 classid 1:10 htb rate 30kbps ceil 100kbps
#tc class add dev eth0 parent 1:1 classid 1:11 htb rate 10kbps ceil 100kbps
#tc class add dev eth0 parent 1:1 classid 1:12 htb rate 60kbps ceil 100kbps

The first line creates a root class, 1:1, under the qdisc 1:. The definition of a root class is one with the htb qdisc as its parent. A root class, like other classes under an htb qdisc, allows its children to borrow from each other, but one root class cannot borrow from another. We could have created the other three classes directly under the htb qdisc, but then the excess bandwidth from one would not be available to the others. In this case we do want to allow borrowing, so we have to create an extra class to serve as the root and put the classes that will carry the real data under it (the next three command lines).

The immediate result can be viewed with:

#tc -s -d class show dev eth0

output:

class htb 1:11 parent 1:1 prio 0 rate 80000bit ceil 800000bit burst 1600b cburst 1599b
class htb 1:1 root rate 800000bit ceil 800000bit burst 1599b cburst 1599b
class htb 1:10 parent 1:1 prio 0 rate 240000bit ceil 800000bit burst 1599b cburst 1599b
class htb 1:12 parent 1:1 prio 0 rate 480000bit ceil 800000bit burst 1599b cburst 1599b
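Note that in tc notation kbps means kilobytes per second (while kbit means kilobits per second), and the rates in the output are printed in bits per second. This is why the 10kbps class 1:11 is displayed as rate 80000bit (10 x 1000 bytes/s x 8 = 80000 bit/s), the 30kbps class 1:10 as 240000bit, the 60kbps class 1:12 as 480000bit, and the 100kbps root rate and ceilings as 800000bit.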

We also have to describe which packets belong in which class using the tc filter options. The commands will look something like this:

#tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip src 1.2.3.4 match ip dport 80 0xffff flowid 1:10
#tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip src 1.2.3.4 flowid 1:11

The immediate result is:

#tc -s -d filter show dev eth0

output:

filter parent 1: protocol ip pref 1 u32


filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:10 (rule hit 98140 success 97403)
match 5060782c/ffffffff at 12 (success 98140)
match 00000050/0000ffff at 20 (success 97403)
filter parent 1: protocol ip pref 1 u32 fh 800::801 order 2049 key ht 800 bkt 0 flowid 1:11 (rule hit 483 success 483)
match 5060782c/ffffffff at 12 (success 483)

A more detailed output can be obtained by adding -s (statistics) and/or -d (details) to any tc show command, like this:

#tc -s -d qdisc show dev eth0
#tc -s -d class show dev eth0
#tc -s -d filter show dev eth0

We can optionally attach queuing disciplines to the leaf classes:

#tc qdisc add dev eth0 parent 1:10 handle 20: pfifo limit 5
#tc qdisc add dev eth0 parent 1:11 handle 30: pfifo limit 5
#tc qdisc add dev eth0 parent 1:12 handle 40: sfq perturb 10

    In this manner we can create any QoS scenario and implement it with the iproute2/tc tools.
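The whole scenario can also be collected into a single shell script; the following sketch simply repeats the commands above, with a preliminary deletion of any existing root qdisc added here as an assumption for repeatability (it prints a harmless error on a clean interface):

#!/bin/bash
# Remove any previous configuration on eth0
tc qdisc del dev eth0 root 2>/dev/null
# HTB root qdisc; unclassified traffic goes to 1:12
tc qdisc add dev eth0 root handle 1: htb default 12
# Root class and the three leaf classes (rates in kilobytes per second)
tc class add dev eth0 parent 1: classid 1:1 htb rate 100kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 30kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 10kbps ceil 100kbps
tc class add dev eth0 parent 1:1 classid 1:12 htb rate 60kbps ceil 100kbps
# Classification: customer A's WWW traffic to 1:10, the rest of A's traffic to 1:11
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip src 1.2.3.4 match ip dport 80 0xffff flowid 1:10
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip src 1.2.3.4 flowid 1:11
# Optional leaf qdiscs
tc qdisc add dev eth0 parent 1:10 handle 20: pfifo limit 5
tc qdisc add dev eth0 parent 1:11 handle 30: pfifo limit 5
tc qdisc add dev eth0 parent 1:12 handle 40: sfq perturb 10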

    5. HTB-Tools

HTB-tools Bandwidth Management Software is a software suite with several tools that help simplify the difficult process of bandwidth allocation, for both upload and download traffic, using the Linux kernel's HTB facility, proposed by Spirlea, Subredu, and Stanimir (2007).

It can generate and check configuration files and also provides a real-time traffic overview for each separate client.

The principal features of the HTB-tools are:

* bandwidth limitation using public IP addresses, using two configuration files, for upload and download
* bandwidth limitation using private IP addresses (SNAT), using a single configuration file
* match mark
* match mark in u32
* metropolitan/external limitation
* menu-based management software for configuration and administration of HTB-tools (starting with version 0.3.0).

The set of HTB-tools includes:

- q_parser: reads a configuration file (the file defines classes, clients and bandwidth limits) and generates an HTB settings script;
- q_checkcfg: checks configuration files;
- q_show: displays in the console the status of the traffic and the allocated bandwidth for each class/client defined in the configuration file;
- q_show.php: displays in a web page the status of the traffic and the allocated bandwidth for each class/client defined in the configuration file;
- wHTB-tools_cfg_gen: creates and generates configuration files from a web page (only in HTB-tools 0.3.0);
- htbgen: generates configuration files from the bash shell.

The configuration files can be created with the htbgen tool, or with any file editor, in separated mode for download and upload, as proposed by Rusu, Subredu, Sparlea, and Vraciu (2002).


The configuration file format must contain declarations for HTB classes and the clients of each class.

The class syntax must be like:

class class_1 {
    bandwidth 192;
    limit 256;
    burst 2;
    priority 1;
    que sfq;
}

The client syntax rules are:

client client1 {
    bandwidth 48;
    limit 64;
    burst 2; # or burst 0; only for HTB-tools 0.3
    priority 1;
    mark 20;
    dst {
        192.168.100.4/32
    }
};
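For illustration, a fragment for the two-customer scenario from section 4, written in the same syntax as above, could look like the sketch below (the class and client names and the destination addresses are assumptions made for the example, and the exact placement of client blocks relative to their class should follow the HTB-tools documentation):

class customers {
    bandwidth 100;
    limit 100;
    burst 2;
    priority 1;
    que sfq;
}

client client_A {
    bandwidth 40;
    limit 100;
    burst 2;
    priority 1;
    dst {
        1.2.3.4/32
    }
};

client client_B {
    bandwidth 60;
    limit 100;
    burst 2;
    priority 1;
    dst {
        1.2.3.5/32
    }
};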

The configuration files can be checked with the q_checkcfg tool, from the command line:

#q_checkcfg /etc/htb/eth1-qos.cfg

Verifications made by this tool include the calculation of the CIR (Committed Information Rate) and MIR (Maximum Information Rate) values for each traffic class defined in the configuration files, and look like this:

Default bandwidth: 8
Class class_1, CIR: 192, MIR: 256
** 4 clients, CIR2: 192, MIR2: 256
1 classes; CIR / MIR = 192 / 256;
CIR2 / MIR2 = 192 / 256.

Real-time visualization of the traffic can be obtained with the command line tool q_show. An example of the output of this tool is presented in Figure 1.

    6. WebHTB

WebHTB (Delicostea, 2008) is a software suite that helps in the process of QoS implementation using the HTB qdisc by simplifying the difficult process of bandwidth allocation, providing a simple and efficient web interface for online configuration.

It is an application with a web interface that can generate and check the configuration files, providing a real-time traffic overview for each separate client.

The front interface of WebHTB is presented in Figure 2.

The application's menu is intuitive, offering the administrator the possibility to add the interfaces to which traffic shaping will apply and to define classes and clients for each class.


    Figure 1. HTB-tools q_show output

    Figure 2. WebHTB interface


WebHTB stores the configuration settings in a MySQL database and saves the configuration file in XML format. The configuration file for the eth0 interface is stored in the xml directory and named xml/eth0-qos.xml.

The sample XML configuration file defines a class named download (with its bandwidth and limit values, burst, priority and an sfq leaf qdisc), a client named client_A belonging to it (with its own bandwidth and limit values and the destination IP address 80.96.120.5), and a default class.

Figure 3. Traffic show from WebHTB


The Show menu gives access to real-time monitoring of the shaped traffic. A short capture of this facility is presented in Figure 3.

    7. T-HTB WEB manager

The T-HTB WEB manager is another useful WEB frontend application that provides a very simple and intuitive method for generating the traffic control rules.

Figure 4 presents part of the web interface of the T-HTB application, used to create the traffic classes that will be implemented in the QoS scenario.

The T-HTB application uses a web server (Apache+PHP), an SQL server (MySQL) and a scripting language (Perl) to interact with the tc tools (iproute2), with the final result of obtaining two things:

- a system script (command line) with rules for traffic control (tc commands), named rc.rules, and
- a shell script, for the Linux crontab, to generate graphical statistics of the traffic classes (rc.graph).

Figure 4. T-HTB interface

Traffic control rules from rc.rules look like this:

#!/bin/bash
#Flush mangle table
/sbin/iptables -t mangle -D POSTROUTING -j SHARE_USERS
/sbin/iptables -t mangle -F SHARE_USERS
/sbin/iptables -t mangle -X SHARE_USERS
/sbin/iptables -t mangle -N SHARE_USERS
#Shaper interfaces: eth0
/sbin/tc qdisc del dev eth0 root
/sbin/tc qdisc add dev eth0 root handle 1: htb r2q 2
#Root class:


/sbin/tc class add dev eth0 parent 1: classid 1:1 htb rate 90 ceil 90
#Class::USV
/sbin/tc class add dev eth0 parent 1:1 classid 1:1001 htb rate 100mbps ceil 100kbps burst 2Kbit prio 3
/sbin/tc qdisc add dev eth0 parent 1:1001 handle 1001: sfq perturb 10
/sbin/iptables -t mangle -A SHARE_USERS -o eth0 --protocol all -s 0.0.0.0/0 -d 0.0.0.0/0 -j CLASSIFY --set-class 1:1001
#Class::dcti
/sbin/tc class add dev eth0 parent 1:1 classid 1:1002 htb rate 30kbps ceil 80kbps burst 2Kbit prio 3
/sbin/tc qdisc add dev eth0 parent 1:1002 handle 1002: sfq perturb 10
/sbin/iptables -t mangle -A SHARE_USERS -o eth0 --protocol all -s 80.96.120.0/22 -j CLASSIFY --set-class 1:1002
#Class::client
/sbin/tc class add dev eth0 parent 1:1002 classid 1:1003 htb rate 40kbps ceil 50kbps burst 2Kbit prio 3
/sbin/tc qdisc add dev eth0 parent 1:1003 handle 1003: sfq perturb 10
/sbin/iptables -t mangle -A SHARE_USERS -o eth0 --protocol all -s 172.20.12.15/32 -d 80.96.120.30/32 -j CLASSIFY --set-class 1:1003
#IPTABLES run
/sbin/iptables -t mangle -A POSTROUTING -j SHARE_USERS
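The statistics script mentioned above is meant to be run periodically from cron; a hypothetical system crontab entry (the installation path and the interval are assumptions) could look like this:

*/5 * * * * root /etc/t-htb/rc.graph

This would regenerate the graphical statistics of the traffic classes every five minutes.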

    8. Conclusions

Implementations of the QoS mechanism based on the HTB queuing discipline with the presented solutions are very scalable and competitive, having been tested and used in various environments.

The preferred platform for the implementations was the Fedora Linux operating system, but the presented solutions can run on any Linux-based router.

The solutions cover the full range of current QoS implementation methods, starting with the oldest and most common one, the CLI (Command Line Interface), passing through the configuration of a Linux network service (/etc/init.d/htb) using text configuration files (/etc/htb/eth0-qos.cfg), and ending with the most recent method, the Web interface.

Personal tests and implementations made in real environments (public institutions and private corporations), using all three of these methods, provided eloquent results which prove that HTB implementations can fully satisfy any complex QoS requirements.

    References

Devera, M. (2003). Hierarchical Token Bucket Theory. http://luxik.cdi.cz/~devik/qos/htb/

Kuznetsov, A., Hemminger, S. (2002). NET: Iproute2. http://www.linuxfoundation.org/en/Net:Iproute

Spirlea, I., Subredu, M., Stanimir, V. (2007). HTB-Tools. http://htb-tools.skydevel.ro

Delicostea, D. (2008). WebHTB. http://webhtb.sourceforge.net

Devera, M. (2002). HTB manual - user guide. http://luxik.cdi.cz/~devik/qos/htb/manual/userg.htm

Hubert, B. (2002). iproute2 tc. Linux man page (8).

Hubert, B. (2004). Linux Advanced Routing & Traffic Control HOWTO. http://lartc.org/howto/

Brown, M. A. (2006). Traffic Control HOWTO. http://linux-ip.net/articles/Traffic-Control-HOWTO/

Devera, M., Hubert, B. (2002). iproute2 HTB. Linux man page (8).

Rusu, O., Subredu, M., Sparlea, I., Vraciu, V. (2002). Implementing Real Time Packet Forwarding Policies Using HTB. First RoEduNet International Conference, Cluj-Napoca, Romania.

Ivancic, D., Hadjina, N., Basch, D. (2005). Analysis of precision of the HTB packet scheduler. Applied Electromagnetics and Communications - ICECom 2005, Dubrovnik, Croatia.

Torvalds, L. (2003). The Linux Kernel Archives. http://www.kernel.org

Lazarov, T. (2009). T-HTBmanager. http://sourceforge.net/apps/mediawiki/t-htbmanager