This feature appeared in squid 2.0 and is called delay pools. First of all, check whether it was enabled when squid was compiled, via the configure option --enable-delay-pools.
In squid.conf you will need to add something like:
acl comp src 192.168.10.15/255.255.255.255
delay_class1_access allow comp
delay_class1_aggregate_max 8000
delay_class1_aggregate_restore 8000
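(With these example values the host 192.168.10.15 is limited to a bucket of at most 8000 bytes that is refilled at 8000 bytes per second, i.e. roughly 64 kbit/s; the exact meaning of the parameters is explained in the letter quoted below.)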
Note that objects served from the cache are not slowed down.
Search the squid mailing list archives for the discussion of this topic, in particular http://www.cineca.it/proxy/search/html/9812/139.html, a letter from the author with detailed explanations. Reportedly, the delay pools implementation for class1 works poorly, so the rate should be limited with class2, and for class1 you should do the following:
delay_class1_access deny all
Delay pools are meant to provide a way of assigning tokens of bandwidth allowance to Squid users. Pools are assigned on a host basis (IP), which basically means that what you restrict is the WAN bandwidth pumped by Squid for any individual host accessing the Net through it. I am beginning from the most general case, which is the allowance for a group of hosts (a block of IP addresses). An individual host may be part of a group or be a group in itself.

In delay pools there are four basic parameters to be defined: 1) aggregate_max, 2) aggregate_restore, 3) individual_max and 4) individual_restore.

In order to understand the meaning of each parameter I will briefly explain the concept behind delay pools. For every host in one group there is one delay pool size descriptor (individual_max) and one increment level (individual_restore) associated. We can think of a delay pool as a bucket of tokens (bytes). The size of the bucket (for one host) is individual_max bytes. At any given time, the bucket can have at most that many tokens (bytes) accumulated. Before Squid fetches a page for that host, it looks up the number of tokens (bytes) available in the bucket. Let's call it the current bucket watermark. If there are enough tokens in the bucket, Squid gets the tokens it needs, lowers the bucket watermark accordingly and proceeds to read a number of bytes from the connection. If there are no tokens available, Squid is forced to defer the read until some tokens (bytes) become available in the bucket.

How is this done? This is where the increment level value comes into play: individual_restore defines the number of tokens (bytes) added to the current bucket watermark level every second.

To further visualise it, think of a real bucket under a faucet: the bucket can hold a certain amount of water (e.g. 1 litre). The faucet drips a certain amount of water each second (for example 10 cl). We start with the bucket full of water, but as we start taking water out of the bucket with a glass, the available water goes down. If we want to keep a steady watermark level we must take water at the same rate it drips from the faucet. If we take it at a higher rate, the bucket will soon become empty, and then we will have to wait until the faucet has dripped enough water into the bucket again. If we do not use any water for some time, the excess will overflow the bucket, so the bucket will continue to hold at most 1 litre of water for immediate use.

In telecomm parlance, this is a variation of the leaky bucket algorithm (no wonder I used the bucket as an example)! It is used for bandwidth policing in modern data networks (ATM, for example).
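To put some (purely illustrative) numbers on the analogy: with individual_max 24000 and individual_restore 1200, the values used in the example further below, each host's bucket holds at most 24000 bytes and gains 1200 bytes every second. A 100000-byte object would then be fetched roughly as follows: about 24000 bytes are read immediately from the full bucket, and the remaining 76000 bytes trickle in at about 1200 bytes per second, i.e. in a bit over a minute (ignoring protocol overhead and any aggregate limit).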
How to configure the delay pools in Squid now. At first decide what kind of grouping you need. If you want all your users to share only one bucket, then you go for delay_class1. If you want separate buckets for each user, then you must opt for delay_class2 or class3 (depending on your network). The aggregate_* settings work almost the same way as the individual ones, with one distinction: in class2 (this is what I use) and class3 the aggregate delay pool is used only if no individual pool is defined. IMHO it would be more efficient if the aggregate delay pool could provide tokens when an individual pool ran out of tokens at a certain time (this could be an additional measure to accommodate bursts for some users).

If you decide that you need class2, then remember to put the statements

acl all src 0.0.0.0/0.0.0.0
delay_class1_access deny all

At present this is needed in order to allow class2/class3.

Assume that you want class2 delay pools. Then you need something like the following:

acl hosts src xxx.xxx.xxx.a-xxx.xxx.xxx.b/255.255.255.255
delay_class2_access allow hosts
#delay_class2_aggregate_max 32000
#delay_class2_aggregate_restore 2000
delay_class2_individual_max 24000
delay_class2_individual_restore 1200

This defines a block of hosts with IP addresses in the class C net from a to b (but you can also add other individual hosts to the acl) that will share the following pool settings:

Maximum pool size: 24000 bytes
Increment level: 1200 bytes

I don't use the aggregate settings because they do not have any effect when individual pools are set. The individual settings more or less let the host fetch "immediately" pages that are slightly larger than 24000 bytes. If the host tries to fetch a large page, then after the initial burst (~24 KB) Squid will slow down to the rate defined by the increment value set by "delay_class2_individual_restore" (i.e. 1200 bytes/sec). If you do not want to allow for any burstiness in the delay pool, then simply make the max size of the pool equal to the restore size (but this is already documented).
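Putting the pieces of this message together, a complete class2 setup might look like the following sketch (the directive names are the ones used in this message; the address range is a placeholder to be filled in):

acl all src 0.0.0.0/0.0.0.0
acl hosts src xxx.xxx.xxx.a-xxx.xxx.xxx.b/255.255.255.255
delay_class1_access deny all
delay_class2_access allow hosts
delay_class2_individual_max 24000
delay_class2_individual_restore 1200

The class1 deny is the statement mentioned above that is currently required before class2/class3 pools take effect.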
Right now you can have only one effective delay pool setting. This means that you cannot set up more than one group of users in the same class. I hope that David will find the time to extend delay pools to handle more than one type of host group in one class; in my view this is necessary. After using delay pools for some time now, I strongly think that it is a powerful add-on to Squid. I think that if it is extended to handle multiple groups (with different pool settings in each group), its value will really be leveraged.

Cheers,
Evaghelos.

And one more addition from the author of delay pools:

Subject: Re: DELAY_POOLS: How to setup
Resent-Date: Wed, 11 Nov 1998 18:27:05 -0800 (PST)
Resent-From: squid-users@ircache.net
Date: Thu, 12 Nov 1998 09:29:04 +0800
From: David Luyer <luyer@ucs.uwa.edu.au>
To: Evaghelos Tsiotsios <etsiot@archetypon.gr>
CC: squid-users@ircache.net, tarkhil@synchroline.ru
Thanks to Evaghelos Tsiotsios for his description of delay pools. A few clarifying comments are below.

> The aggregate_* settings work almost the same way as the individual with one
> distinction: in class 2 (this is what I use) and class3 the aggregate delay
> pool is used only if no individual pool is defined.

This assumption (and related comments later) is incorrect (or, if it is correct, it is a coding error). The aggregate and individual totals both act to limit the traffic; if either is empty, then the request is delayed. For example, I run approx 64 kbps peak rate (with a small 'bucket size') on one of the delay_class2 pools and then give each user in it approx 8 kbps (with a large 'bucket size'). The large bucket size on individual users lets each user get a web page "instantly", but if they start downloading a large file it is slow. The small bucket size on the aggregate means that 64 kbps is the limit of bandwidth use (modulo overheads) for these users.
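For illustration only, a class2 configuration in the spirit of this description might look like the sketch below (the bucket sizes are invented; 8000 bytes/s corresponds to roughly 64 kbit/s and 1000 bytes/s to roughly 8 kbit/s):

# small shared bucket, ~64 kbit/s peak rate for the whole group
delay_class2_aggregate_max 16000
delay_class2_aggregate_restore 8000
# large per-host bucket, allows a burst, then ~8 kbit/s per host
delay_class2_individual_max 64000
delay_class2_individual_restore 1000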
Also, I didn't notice any mention of the 'no-delay' tag you can put on neighbors: this will prevent traffic fetched from that peer from being 'taken out of the bucket'. For example, if you have a fast, "free traffic" ATM network to local universities but anything further away costs money, and you peer caches with local universities, you can put the 'no-delay' tag on these peers' cache_host lines.
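For example, a hypothetical peer entry with the tag could look like this (the host name, peer type and ports are made up and should be adjusted to your own setup):

cache_host proxy.local-uni.example sibling 3128 3130 no-delay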
Regarding class2/3 delay pools: it could take some time before I'm able to do any work on multiple delay pools of a given class. How I think it should work is this (sorry, sparse documentation on this, but I hope it is enough to show anyone who's interested and understands the current system what it means):

delay_pools 3        # 3 delay pools
delay_class 1 1      # pool 1 is class 1
delay_class 2 1      # pool 2 is class 1
delay_class 3 3      # pool 3 is class 3
delay_access 1 allow staff
delay_access 1 deny all
delay_access 2 allow students
delay_access 2 deny all
delay_access 3 allow college
delay_access 3 deny all
delay_parameters 1 640000/640000
delay_parameters 2 64000/64000
delay_parameters 3 64000/64000 32000/64000 6400/32000
# ttl_rest/ttl_max net_rest/net_max ind_rest/ind_max

The acls and delay pool data could then be dynamically allocated. Maybe once this was done, delay pools could be put into squid by default, as the cost of disabled delay pools would be very close to zero (simply a few checks whether the delay_pools number was 0).

David.