
Appendix C. Delay Pools




1. Overview


Delay pools are essentially "bandwidth buckets." A response is delayed until some amount of bandwidth is available from an appropriate bucket. The buckets don't actually store bandwidth (e.g., 100 Kbit/s), but rather an amount of traffic (e.g., 384 KB). Squid adds some amount of traffic to each bucket every second. Cache clients take traffic out when they receive data from an upstream source (origin server or neighbor).
The size of a bucket determines how much burst bandwidth is available to a client. If a bucket starts out full, a client can take as much traffic as it needs until the bucket becomes empty. The client then receives traffic allotments at the fill rate.
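The fill-and-drain behavior described above can be sketched as a classic token bucket. This is an illustrative model only; the class and method names are invented and are not Squid internals:

```python
# Hypothetical sketch of the token-bucket idea behind delay pools
# (not Squid's actual code): the bucket holds bytes of "credit",
# refilled at a fixed rate and capped at a maximum size.

class Bucket:
    def __init__(self, rate, size, level=None):
        self.rate = rate                  # fill rate, bytes per second
        self.size = size                  # maximum level, bytes
        self.level = size if level is None else level

    def tick(self):
        """Each second, Squid adds rate bytes, capped at size."""
        self.level = min(self.size, self.level + self.rate)

    def take(self, wanted):
        """A client takes up to `wanted` bytes of credit."""
        granted = min(wanted, self.level)
        self.level -= granted
        return granted

b = Bucket(rate=2000, size=8000)   # a full bucket allows an 8000-byte burst
burst = b.take(10000)              # -> 8000; the bucket is now empty
b.tick()                           # one second later: 2000 bytes of credit
steady = b.take(10000)             # -> 2000; limited to the fill rate
```

Note how the burst is bounded by the bucket size, while sustained throughput is bounded by the fill rate, matching the behavior described above.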
The mapping between Squid clients and actual buckets is a bit complicated. Squid uses three different constructs to do it: access rules, delay pool classes, and types of buckets. First, Squid checks a client request against the delay_access list. If the request is a match, it points to a particular delay pool. Each delay pool has a class: 1, 2, or 3. The classes determine which types of buckets are in use. Squid has three types of buckets: aggregate, individual, and network:
  • A class 1 pool has a single aggregate bucket.
  • A class 2 pool has an aggregate bucket and 256 individual buckets.
  • A class 3 pool has an aggregate bucket, 256 network buckets, and 65,536 individual buckets.
As you can probably guess, the individual and network buckets correspond to IP address octets. In a class 2 pool, the individual bucket is determined by the last octet of the client's IPv4 address. In a class 3 pool, the network bucket is determined by the third octet, and the individual bucket by the third and fourth octets.
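The octet-to-bucket mapping can be sketched as follows. The function and key names are invented for illustration, and the index arithmetic is inferred from the octet rules above rather than taken from Squid's source:

```python
# Hypothetical illustration of how a client's IPv4 address selects
# its buckets in class 2 and class 3 delay pools.

def bucket_indices(addr):
    o = [int(x) for x in addr.split(".")]
    return {
        "class2_individual": o[3],              # last octet: 0..255
        "class3_network":    o[2],              # third octet: 0..255
        "class3_individual": o[2] * 256 + o[3]  # third+fourth: 0..65535
    }

print(bucket_indices("192.168.5.17"))
# -> {'class2_individual': 17, 'class3_network': 5, 'class3_individual': 1297}
```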
For the class 2 and 3 delay pools, you can disable buckets you don't want to use. For example, you can define a class 2 pool with only individual buckets by disabling the aggregate bucket.
When a request goes through a pool with more than one bucket type, it takes bandwidth from all buckets. For example, consider a class 3 pool with aggregate, network, and individual buckets. If the individual bucket has 20 KB, the network bucket 30 KB, but the aggregate bucket only 2 KB, the client receives only a 2-KB allotment. Even though some buckets have plenty of traffic, the client is limited by the bucket with the smallest amount.
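That "smallest bucket wins" rule can be sketched like this, using the 2-KB/30-KB/20-KB levels from the example above. This is a hypothetical model, not Squid code:

```python
# The allotment is bounded by the emptiest bucket the request draws
# from; every applicable bucket is then debited by the granted amount.

def allotment(wanted, levels):
    granted = min(wanted, *levels.values())
    for name in levels:
        levels[name] -= granted
    return granted

levels = {"aggregate": 2048, "network": 30720, "individual": 20480}
print(allotment(8192, levels))   # -> 2048, limited by the aggregate bucket
```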


2. Configuring Squid


Before you can use delay pools, you must enable the feature when compiling. Use the --enable-delay-pools option when running ./configure. You can then use the following directives to set up the delay pools.

2.1 delay_pools

The delay_pools directive tells Squid how many pools you want to define. It should go before the other delay pool configuration directives in squid.conf. For example, if you want to have five delay pools:
delay_pools 5
The next two directives actually define each pool's class and other characteristics.

2.2 delay_class

You must use this directive to define the class for each pool. For example, if the first pool is class 3:
delay_class 1 3
Similarly, if the fourth pool is class 2:
delay_class 4 2
In theory, you should have one delay_class line for each pool. However, if you omit the delay_class line for a particular pool, Squid doesn't complain.

2.3 delay_parameters
Finally, this is where you define the interesting delay pool parameters. For each pool, you must tell Squid the fill rate and maximum size for each type of bucket. The syntax is:
delay_parameters N rate/size [rate/size [rate/size]]
The rate value is given in bytes per second, and size in total bytes. If you think of rate in terms of bits per second, you must remember to divide by 8.
Note that if you divide the size by the rate, you'll know how long (in seconds) it takes the bucket to go from empty to full when no clients are using it.
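A quick sanity check of the units, using the 512 Kbit/s figure that appears in the examples later in this appendix:

```python
# rate is bytes/second, so a bits-per-second target must be divided
# by 8; size/rate gives the empty-to-full refill time in seconds.

target_bits_per_sec = 512 * 1024     # 512 Kbit/s
rate = target_bits_per_sec // 8      # 65536 bytes/s for delay_parameters
size = 1048576                       # a 1-MB bucket
refill_seconds = size / rate         # how long to refill an empty bucket
print(rate, refill_seconds)          # -> 65536 16.0
```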
A class 1 pool has just one bucket and might look like this:
delay_class 2 1
delay_parameters 2 2000/8000
For a class 2 pool, the first bucket is the aggregate, and the second is the group of individual buckets. For example:
delay_class 4 2
delay_parameters 4 7000/15000 3000/4000
Similarly, for a class 3 pool, the aggregate bucket is first, the network buckets are second, and the individual buckets are third:
delay_class 1 3
delay_parameters 1 7000/15000 3000/4000 1000/2000

2.4 delay_initial_bucket_level

This directive sets the initial level for all buckets when Squid first starts or is reconfigured. It also applies to individual and network buckets, which aren't created until first referenced. The value is a percentage. For example:
delay_initial_bucket_level 75%
In this case, each newly created bucket is initially filled to 75% of its maximum size.
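A quick sketch of the arithmetic; the helper function is hypothetical, not something Squid exposes:

```python
# With delay_initial_bucket_level 75%, a newly created bucket starts
# at 75% of its maximum size rather than full or empty.

def initial_level(size, percent):
    return size * percent // 100

print(initial_level(8000, 75))   # -> 6000 bytes available at startup
```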

2.5 delay_access

This list of access rules determines which requests go through which delay pools. Requests that are allowed go through the delay pools, while those that are denied aren't delayed at all. If you don't have any delay_access rules, Squid doesn't delay any requests.
The syntax for delay_access is similar to the other access rule lists (see Section 6.2), except that you must put a pool number before the allow or deny keyword. For example:
delay_access 1 allow TheseUsers

delay_access 2 allow OtherUsers
Internally, Squid stores a separate access rule list for each delay pool. If a request is allowed by a pool's rules, Squid uses that pool and stops searching. If a request is denied, however, Squid continues examining the rules for remaining pools. In other words, a deny rule causes Squid to stop searching the rules for a single pool but not for all pools.
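The search order can be modeled roughly like this. It is a hypothetical sketch in which ACLs are reduced to simple predicates; Squid's real ACL machinery is far richer:

```python
# Each pool has its own rule list. An "allow" match selects that pool
# and stops the search entirely; a "deny" match only stops the search
# within that pool, so the remaining pools are still examined.

def match_pool(request, pools):
    """pools: list of (pool_number, rules); rules: list of (action, acl)."""
    for number, rules in pools:
        for action, acl in rules:
            if acl(request):
                if action == "allow":
                    return number   # use this pool
                break               # denied here: move on to the next pool
    return None                     # no pool matched: request is not delayed

these_users = lambda req: req in {"alice", "bob"}
everyone = lambda req: True
pools = [(1, [("deny", these_users), ("allow", everyone)]),
         (2, [("allow", everyone)])]
print(match_pool("alice", pools))   # -> 2: denied by pool 1, allowed by pool 2
print(match_pool("carol", pools))   # -> 1
```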

2.6 cache_peer no-delay Option

The cache_peer directive has a no-delay option. If set, it makes Squid bypass the delay pools for any requests sent to that neighbor.
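As a sketch (the neighbor hostname is hypothetical), a peer whose traffic should never be subject to delay pools can be declared like this:

```
# Hypothetical parent cache; 3128 and 3130 are the usual HTTP and
# ICP ports. Responses fetched from this neighbor bypass delay pools.
cache_peer parent.example.com parent 3128 3130 no-delay
```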

3. Examples

Let's start off with a simple example. Suppose that you have a saturated Internet connection, shared by many users. You can use delay pools to limit the amount of bandwidth that Squid consumes on the link, thus leaving the remaining bandwidth for other applications. Use a class 1 delay pool to limit the bandwidth for all users. For example, this limits everyone to 512 Kbit/s and keeps 1 MB in reserve if Squid is idle:
delay_pools 1
delay_class 1 1
delay_parameters 1 65536/1048576
acl All src 0/0
delay_access 1 allow All
One of the problems with this simple approach is that some users may receive more than their fair share of the bandwidth. If you want to try something more balanced, use a class 2 delay pool that has individual buckets. Recall that the individual bucket is determined by the fourth octet of the client's IPv4 address. Thus, if you have more than a /24 subnet, you might want to use a class 3 pool instead, which gives you 65,536 individual buckets. In this example, I won't use the network buckets. While the overall bandwidth is still 512 Kbit/s, each individual user is limited to 128 Kbit/s:
delay_pools 1
delay_class 1 3
delay_parameters 1 65536/1048576 -1/-1 16384/262144
acl All src 0/0
delay_access 1 allow All
You can also use delay pools to provide different classes of service. For example, you might have important users and unimportant users. In this case, you could use two class 1 delay pools. Give the important users a higher bandwidth limit than everyone else:
delay_pools 2
delay_class 1 1
delay_class 2 1
delay_parameters 1 65536/1048576
delay_parameters 2 10000/50000
acl ImportantUsers src 192.168.8.0/22
acl All src 0/0
delay_access 1 allow ImportantUsers
delay_access 2 allow All
    
    
    
    

4. Issues

Squid's delay pools are often useful, but not perfect. You need to be aware of a few drawbacks and limitations before you use them.

4.1 Fairness

One of the most important things to realize about the current delay pools implementation is that it does nothing to guarantee fairness among all users of a single bucket. This is especially important for aggregate buckets (where sharing is high), but less so for individual buckets (where sharing is low).
Squid generally services requests in order of increasing file descriptors. Thus, a request whose server-side TCP connection has a lower file descriptor may receive more bandwidth from a shared bucket than it should.

4.2 Application Versus Transport Layer

Bandwidth shaping and rate limiting usually operate at the network transport layer. There, the flow of packets can be controlled very precisely. Delay pools, however, are implemented in the application layer. Because Squid doesn't actually send and receive TCP packets (the kernel does), it has less control over the flow of individual packets. Rather than controlling the transmission and receipt of packets on the wire, Squid controls only how many bytes to read from the kernel.
This means, for example, that incoming response data is queued up in the kernel. The TCP/IP stack can buffer some number of bytes that haven't yet been read by Squid. On most systems, the default TCP receive buffer size is usually between 32 KB and 64 KB. In other words, this much data can arrive over the network very quickly, regardless of anything Squid can do. On the one hand, it seems silly to read this data slowly even though it is already on your system. On the other hand, because the client doesn't receive the whole response right away, it is likely to postpone any future requests until the delayed responses are complete.
If you are concerned that the kernel buffers too much server-side data, you can decrease the TCP receive buffer size with the tcp_recv_bufsize directive. Even better, your operating system probably has a way to set this parameter for the whole system. On NetBSD/FreeBSD/OpenBSD, you can use the sysctl variable named net.inet.tcp.recvspace. For Linux, read about /proc/sys/net/ipv4/tcp_rmem in Documentation/networking/ip-sysctl.txt.
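For example (the values here are illustrative, not recommendations), the Squid-level knob and the system-wide alternatives look like this:

```
# squid.conf: cap the server-side TCP receive buffer at 16 KB
tcp_recv_bufsize 16384

# System-wide alternatives, outside Squid:
#   BSD:   sysctl -w net.inet.tcp.recvspace=16384
#   Linux: /proc/sys/net/ipv4/tcp_rmem holds "min default max" in bytes
```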

4.3 Fixed Subnetting Scheme

The current delay pools implementation assumes that your LAN uses /24 (class C) subnets and that all users are in the same /16 (class B) subnet. This might not be so bad, depending on how your network is configured. However, it would be nice if the delay pools subnetting scheme were fully customizable.
If your address space is larger than a /24 and smaller than a /16, you can always create a class 3 pool and treat it as a class 2 pool (this is one of the examples given earlier).
If you use just one class 2 pool with more than 256 users, some users will share the individual buckets. That might not be so bad, unless you happen to have a bunch of heavy users fighting over one measly bucket.
You might also create multiple class 2 pools and use delay_access rules to divide them up among all users. The problem with this approach is that you can't have all users share a single aggregate bucket. Instead, each subgroup has its own aggregate bucket. You can't make a single client go through more than one delay pool.

5. Monitoring Delay Pools

You can monitor the delay pool levels with the cache manager interface. Request the delay page from the CGI interface or with the squidclient utility:
% squidclient mgr:delay | less
See Section 14.2.1.44 for a description of the output.
