If Squid is in httpd-accelerator mode, it will accept normal HTTP requests and forward them to an HTTP server, but it will not honor proxy requests. If you want your cache to also accept proxy-HTTP requests then you must enable this feature:
http_accel_with_proxy on
Alternately, you may have misconfigured one of your ACLs. Check the access.log and squid.conf files for clues.
I can't get local_domain
to work; Squid is caching the objects from my local servers.
The local_domain
directive does not prevent local
objects from being cached. It prevents the use of sibling caches
when fetching local objects. If you want to prevent objects from
being cached, use the cache_stoplist
or http_stop
configuration options (depending on your version).
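For example (a sketch for Squid-1.1; intranet.example.com is only a placeholder for a word that appears in your local URLs), you could add:
cache_stoplist cgi-bin ? intranet.example.com
Any URL containing one of the listed words is still fetched, but not stored in the cache.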
I get Connection Refused
when the cache tries to retrieve an object located on a sibling, even though the sibling thinks it delivered the object to my cache.
If the HTTP port number is wrong but the ICP port is correct, you
will send ICP queries correctly, and the ICP replies will fool your
cache into thinking the configuration is correct. But large objects
will fail since you don't have the correct HTTP port for the sibling
in your squid.conf file. If your sibling changed their
http_port, you could have this problem for some time
before noticing.
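For example (hostnames and port numbers here are only illustrative), if your sibling moved its HTTP service to port 8080 but kept ICP on port 3130, the corresponding line in your squid.conf must be updated to match:
cache_host sibling.example.com sibling 8080 3130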
If you see the Too many open files
error message, you
are most likely running out of file descriptors. This may be due
to running Squid on an operating system with a low file descriptor
limit. This limit is often configurable in the kernel or with
other system tuning tools. There are two ways to run out of file
descriptors: first, you can hit the per-process limit on file
descriptors. Second, you can hit the system limit on total file
descriptors for all processes.
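As a quick sanity check (the exact command depends on the shell that starts Squid), you can display the per-process limit from sh or bash with:
ulimit -n
or, under csh/tcsh:
limit descriptors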
Have a look at filehandle.patch by Michael O'Reilly
If your kernel version is 2.2.x or greater, you can read and write the maximum number of file handles and/or inodes simply by accessing the special files:
/proc/sys/fs/file-max
/proc/sys/fs/inode-max
So, to increase your file descriptor limit:
echo 3072 > /proc/sys/fs/file-max
If your kernel version is between 2.0.35 and 2.1.x (?), you can read and write the maximum number of file handles and/or inodes simply by accessing the special files:
/proc/sys/kernel/file-max /proc/sys/kernel/inode-max
While this does increase the current number of file descriptors, Squid's configure script probably won't figure out the new value unless you also update the include files, specifically the value of OPEN_MAX in /usr/include/linux/limits.h.
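For example, if you raised file-max to 3072 as shown above, you might edit /usr/include/linux/limits.h so the definition reads (this is only a sketch; keep a backup of the original header):
#define OPEN_MAX 3072
and then re-run Squid's configure script before recompiling.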
Add the following to your /etc/system file to increase your maximum file descriptors per process:
set rlim_fd_max = 4096
set rlim_fd_cur = 1024
You should also #define SQUID_FD_SETSIZE
in
include/config.h to whatever you set
rlim_fd_max
to. Going beyond 4096 may break things
in the kernel.
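For example, to match the rlim_fd_max value used above:
#define SQUID_FD_SETSIZE 4096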
Solaris' select(2)
only handles 1024 descriptors, so
if you need more, edit src/Makefile and enable
$(USE_POLL_OPT)
. Then recompile squid.
(version 1.1 only, version 2 automatically uses poll() on Solaris).
Do sysctl -a
and look for the value of
kern.maxfilesperproc
.
sysctl -w kern.maxfiles=XXXX
sysctl -w kern.maxfilesperproc=XXXX
Warning: You probably want
maxfiles
> maxfilesperproc
if you're going to be pushing the
limit.
I don't think there is a formal upper limit inside the kernel. All the data structures are dynamically allocated. In practice there might be unintended metaphenomena (kernel spending too much time searching tables, for example).
For most BSD-derived systems (SunOS, 4.4BSD, OpenBSD, FreeBSD, NetBSD, BSD/OS, 386BSD, Ultrix) you can also use the ``brute force'' method to increase these values in the kernel (requires a kernel rebuild):
Do pstat -T
and look for the files
value, typically expressed as the ratio of current/maximum.
One way is to increase the value of the maxusers
variable
in the kernel configuration file and build a new kernel. This method
is quick and easy but also has the effect of increasing a wide variety of
other variables that you may not need or want increased.
Another way is to find the param.c file in your kernel
build area and change the arithmetic behind the relationship between
maxusers
and the maximum number of open files.
Change the value of nfile
in /usr/kvm/sys/conf.common/param.c by altering this equation:
int nfile = 16 * (NPROC + 16 + MAXUSERS) / 10 + 64;
Where
NPROC
is defined by:
#define NPROC (10 + 16 * MAXUSERS)
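As a worked example (assuming MAXUSERS is set to 64): NPROC = 10 + 16 * 64 = 1034, so nfile = 16 * (1034 + 16 + 64) / 10 + 64 = 1846. Scale MAXUSERS up until nfile is comfortably larger than the number of descriptors Squid needs.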
Very similar to SunOS, edit /usr/src/sys/conf/param.c
and alter the relationship between maxusers
and the
maxfiles
and maxfilesperproc
variables:
int maxfiles = NPROC*2;
int maxfilesperproc = NPROC*2;
Where
NPROC
is defined by:
#define NPROC (20 + 16 * MAXUSERS)
The per-process limit can also be adjusted directly in the kernel
configuration file with the following directive:
options OPEN_MAX=128
Edit /usr/src/sys/conf/param.c
and adjust the
maxfiles
math here:
int maxfiles = 3 * (NPROC + MAXUSERS) + 80;
Where
NPROC
is defined by:
#define NPROC (20 + 16 * MAXUSERS)
You should also set the OPEN_MAX
value in your kernel
configuration file to change the per-process limit.
NOTE: After you rebuild/reconfigure your kernel with more file descriptors, you must then recompile Squid. Squid's configure script determines how many file descriptors are available, so you must make sure the configure script runs again as well. For example:
cd squid-1.1.x
make realclean
./configure --prefix=/usr/local/squid
make
For example:
97/01/23 22:31:10| Removed 1 of 9 objects from bucket 3913
97/01/23 22:33:10| Removed 1 of 5 objects from bucket 4315
97/01/23 22:35:40| Removed 1 of 14 objects from bucket 6391
These log entries are normal, and do not indicate that squid has
reached cache_swap_high
.
Consult your cache information page in cachemgr.cgi for a line like this:
Storage LRU Expiration Age: 364.01 days
Objects which have not been used for that amount of time are removed as
a part of the regular maintenance. You can set an upper limit on the
LRU Expiration Age
value with reference_age
in the config
file.
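For example (the value is only an illustration), to have objects removed after one month of disuse:
reference_age 1 month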
Why, yes you can! Select the following menus:
This will bring up a box with icons for your various services. One of them should be a little ftp ``folder.'' Double click on this.
You will then have to select the server (there should only be one) Select that and then choose ``Properties'' from the menu and choose the ``directories'' tab along the top.
There will be an option at the bottom saying ``Directory listing style.'' Choose the ``Unix'' type, not the ``MS-DOS'' type.
--Oskar Pearson <oskar@is.co.za>
You are receiving ICP MISSes (via UDP) from a parent or sibling cache whose IP address your cache does not know about. This may happen in two situations.
On your parent's squid.conf:
udp_outgoing_address proxy.parent.com
On your squid.conf:
cache_host proxy.parent.com parent 3128 3130
The standards for naming hosts (RFC 952, RFC 1101) do not allow underscores in domain names:
A "name" (Net, Host, Gateway, or Domain name) is a text string up to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus sign (-), and period (.).The resolver library that ships with recent versions of BIND enforces this restriction, returning an error for any host with underscore in the hostname. The best solution is to complain to the hostmaster of the offending site, and ask them to rename their host.
Some people have noticed that RFC 1033 implies that underscores are allowed. However, this is an informational RFC with a poorly chosen example, and not a standard by any means.
See the above question. The underscore character is not valid for hostnames.
Some DNS resolvers allow the underscore, so yes, the hostname might work fine when you don't use Squid.
To make Squid allow underscores in hostnames, add this line to src/squid.h:
#define ALLOW_HOSTNAME_UNDERSCORES 1
and then recompile.
The answer to this is somewhat complicated, so please hold on. NOTE: most of this text is taken from ICP and the Squid Web Cache.
An ICP query does not include any parent or sibling designation,
so the receiver really has no indication of how the peer
cache is configured to use it. This issue becomes important
when a cache is willing to serve cache hits to anyone, but only
handle cache misses for its paying users or customers. In other
words, whether or not to allow the request depends on if the
result is a hit or a miss. To accomplish this,
Squid acquired the miss_access
feature
in October of 1996.
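A minimal sketch of how miss_access is typically used (the address range is made up): other caches may retrieve hits from you, but only your own clients may generate misses:
acl localclients src 172.16.0.0/255.240.0.0
miss_access allow localclients
miss_access deny all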
The necessity of ``miss access'' makes life a little bit complicated,
and not only because it was awkward to implement. Miss access
means that the ICP query reply must be an extremely accurate prediction
of the result of a subsequent HTTP request. Ascertaining
this result is actually very hard, if not impossible to
do, since the ICP request cannot convey the
full HTTP request.
Additionally, there are more types of HTTP request results than there
are for ICP. The ICP query reply will either be a hit or miss.
However, the HTTP request might result in a ``304 Not Modified
'' reply
sent from the origin server. Such a reply is not strictly a hit since the peer
needed to forward a conditional request to the source. At the same time,
it's not strictly a miss either since the local object data is still valid,
and the Not-Modified reply is quite small.
One serious problem for cache hierarchies is mismatched freshness parameters. Consider a cache C using ``strict'' freshness parameters so its users get maximally current data. C has a sibling S with less strict freshness parameters. When an object is requested at C, C might find that S already has the object via an ICP query and ICP HIT response. C then retrieves the object from S.
In an HTTP/1.0 world, C (and C's client) will receive an object that was never subject to its local freshness rules. Neither HTTP/1.0 nor ICP provides any way to ask only for objects less than a certain age. If the retrieved object is stale by C's rules, it will be removed from C's cache, but it will subsequently be fetched from S so long as it remains fresh there. This configuration miscoupling problem is a significant deterrent to establishing both parent and sibling relationships.
HTTP/1.1 provides numerous request headers to specify freshness
requirements, which actually introduces
a different problem for cache hierarchies: ICP
still does not include any age information, neither in query nor
reply. So S may return an ICP HIT if its
copy of the object is fresh by its configuration
parameters, but the subsequent HTTP request may result
in a cache miss due to any
Cache-control:
headers originated by C or by
C's client. Situations now emerge where the ICP reply
no longer matches the HTTP request result.
In the end, the fundamental problem is that the ICP query does not provide enough information to accurately predict whether the HTTP request will be a hit or miss. In fact, the current ICP Internet Draft is very vague on this subject. What does ICP HIT really mean? Does it mean ``I know a little about that URL and have some copy of the object?'' Or does it mean ``I have a valid copy of that object and you are allowed to get it from me?''
So, what can be done about this problem? We really need to change ICP so that freshness parameters are included. Until that happens, the members of a cache hierarchy have only two options to totally eliminate the ``access denied'' messages from sibling caches:
1. Make sure all members use the same refresh_rules parameters.
2. Do not use miss_access at all. Promise your sibling cache
administrator that your cache is properly configured and that you
will not abuse their generosity. The sibling cache administrator can
check his log files to make sure you are keeping your word.
This means that another process is already listening on port 8080 (or whatever you're using). It could mean that you have a Squid process already running, or it could be from another program. To verify, use the netstat command:
netstat -naf inet | grep LISTEN
That will show all sockets in the LISTEN state. You might also try
netstat -naf inet | grep 8080
If you find that some process has bound to your port, but you're not sure which process it is, you might be able to use the excellent lsof program. It will show you which processes own every open file descriptor on your system.
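For example, recent versions of lsof can list just the owners of a given TCP port (adjust the port number to whatever you configured):
lsof -i TCP:8080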
This means that the client socket was closed by the client
before Squid was finished sending data to it. Squid detects this
by trying to read(2)
some data from the socket. If the
read(2)
call fails, then Squid knows the socket has been
closed. Normally the read(2)
call returns ECONNRESET: Connection reset by peer
and these are NOT logged. Any other error messages (such as
EPIPE: Broken pipe) are logged to cache.log. See the ``intro'' of
section 2 of your Unix manual for a list of all error codes.
These are caused by misbehaving Web clients attempting to use persistent connections. Squid-1.1 does not support persistent connections.
We are not sure. We were unable to find any detailed information on NTLM (thanks Microsoft!), but here is our best guess:
Squid transparently passes the NTLM request and response headers between clients and servers. The encrypted challenge and response strings most likely encode the IP address of the client. Because the proxy is passing these strings and is connected with a different IP address, the authentication scheme breaks down. This implies that if NTLM authentication works at all with proxy caches, the proxy would need to intercept the NTLM headers and process them itself.
If anyone knows more about NTLM and knows the above to be false, please let us know.
This message was received at squid-bugs:
If you have only one parent, configured as:
cache_host xxxx parent 3128 3130 no-query default
nothing is sent to the parent; neither UDP packets, nor TCP connections.
Simply adding default to a parent does not force all requests to be sent to that parent. The term default is perhaps a poor choice of words. A default parent is only used as a last resort. If the cache is able to make direct connections, direct will be preferred over default. If you want to force all requests to your parent cache(s), use the inside_firewall option:
inside_firewall none
``Hot Mail'' is proxy-unfriendly and requires all requests to come from the same IP address. You can fix this by adding to your squid.conf:
hierarchy_stoplist hotmail.com
This is most likely because Squid is using more memory than it should be for your system. When the Squid process becomes large, it experiences a lot of paging. This will very rapidly degrade the performance of Squid. Memory usage is a complicated problem. There are a number of things to consider.
First, examine the Cache Manager Info output and look at these two lines:
Number of TCP connections: 121104
Page faults with physical i/o: 16720
Note, if your system does not have the getrusage() function, then you will not see the page faults line.
Divide the number of page faults by the number of connections. In this case 16720/121104 = 0.14. Ideally this ratio should be in the 0.0 - 0.1 range. It may be acceptable to be in the 0.1 - 0.2 range. Above that, however, and you will most likely find that Squid's performance is unacceptably slow.
If the ratio is too high, you will need to make some changes to lower the amount of memory Squid uses.
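A common first step (the values here are only illustrative, and the exact units accepted depend on your Squid version) is to shrink the cache_mem setting and to cap the largest object Squid will cache:
cache_mem 8 MB
maximum_object_size 1024 KB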
This could be a permission problem. Does the Squid userid have permission to execute the dnsserver program?
You might also try testing dnsserver from the command line:
> echo oceana.nlanr.net | ./dnsserver
Should produce something like:
$name oceana.nlanr.net
$h_name oceana.nlanr.net
$h_len 4
$ipcount 1
132.249.40.200
$aliascount 0
$ttl 82067
$end
Bug reports for Squid should be sent to the squid-bugs alias. Any bug report must include the Squid version, your operating system type and version, and a clear description of the bug symptoms.
There are two conditions under which squid will exit abnormally and generate a coredump. First, a SIGSEGV or SIGBUS signal will cause Squid to exit and dump core. Second, many functions include consistency checks. If one of those checks fail, Squid calls abort() to generate a core dump.
Many people report that Squid doesn't leave a coredump anywhere. This is likely because of ``resource limits.'' These limits can usually be changed in shell scripts. The command to change the resource limits is usually either limit or limits. Sometimes it is a shell-builtin function, and sometimes it is a regular program. Also note that you can set resource limits in the /etc/login.conf file on FreeBSD and maybe other BSD systems.
To change the coredumpsize limit you might use a command like:
limit coredumpsize unlimited
or
limits coredump unlimited
The core dump file will be left in either one of two locations:
If you want the core file left in a predictable place, add
cd /tmp
to your script which starts Squid (e.g. RunCache).
Once you have located the core dump file, use a debugger such as dbx or gdb to generate a stack trace:
tirana-wessels squid/src 270% gdb squid /T2/Cache/core
GDB is free software and you are welcome to distribute copies of it
under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.15.1 (hppa1.0-hp-hpux10.10), Copyright 1995 Free Software Foundation, Inc...
Core was generated by `squid'.
Program terminated with signal 6, Aborted.
[...]
(gdb) where
#0 0xc01277a8 in _kill ()
#1 0xc00b2944 in _raise ()
#2 0xc007bb08 in abort ()
#3 0x53f5c in __eprintf (string=0x7b037048 "", expression=0x5f <Address 0x5f out of bounds>, line=8, filename=0x6b <Address 0x6b out of bounds>)
#4 0x29828 in fd_open (fd=10918, type=3221514150, desc=0x95e4 "HTTP Request") at fd.c:71
#5 0x24f40 in comm_accept (fd=2063838200, peer=0x7b0390b0, me=0x6b) at comm.c:574
#6 0x23874 in httpAccept (sock=33, notused=0xc00467a6) at client_side.c:1691
#7 0x25510 in comm_select_incoming () at comm.c:784
#8 0x25954 in comm_select (sec=29) at comm.c:1052
#9 0x3b04c in main (argc=1073745368, argv=0x40000dd8) at main.c:671
If possible, you might keep the coredump file around for a day or two. It is often helpful if we can ask you to send additional debugger output, such as the contents of some variables.
If you believe you have found a non-fatal bug (such as incorrect HTTP processing) please send us a section of your cache.log with debugging to demonstrate the problem. The cache.log file can become very large, so alternatively, you may want to copy it to an FTP or HTTP server where we can download it.
It is very simple to enable full debugging on a running squid process. Simply use the -k debug command line option:
% ./squid -k debug
This causes every debug() statement in the source code to write a line in the cache.log file. You also use the same command to restore Squid to normal debugging.
To enable selective debugging (e.g. for one source file only), you need to edit squid.conf and add to the debug_options line. Every Squid source file is assigned a different debugging section. The debugging section assignments can be found by looking at the top of individual source files, or by reading the file doc/debug-levels.txt (correctly renamed to debug-sections.txt for Squid-2). You also specify the debugging level to control the amount of debugging. Higher levels result in more debugging messages. For example, to enable full debugging of Access Control functions, you would use
debug_options ALL,1 28,9
Then you have to restart or reconfigure Squid.
Once you have the debugging captured to cache.log, take a look at it yourself and see if you can make sense of the behaviour which you see. If not, please feel free to send your debugging output to the squid-users or squid-bugs lists.
Squid normally tests your system's DNS configuration before it starts serving requests. Squid tries to resolve some common DNS names, as defined in the dns_testnames configuration directive. If Squid cannot resolve these names, it could mean that your DNS nameserver is unreachable or not running, or that your /etc/resolv.conf file contains incorrect information or is unreadable by the Squid userid.
To disable this feature, use the -D command line option.
Note, Squid does NOT use the dnsservers to test the DNS. The test is performed internally, before the dnsservers start.
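If the default test names simply cannot be resolved from your network (for example, behind a strict firewall), you can point the test at a name your resolver does handle; the hostname below is only a placeholder:
dns_testnames ns.example.com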
Starting with version 1.1.15, we have required that you first run
squid -z
to create the swap directories on your filesystem. If you have set the cache_effective_user option, then the Squid process takes on the given userid before making the directories. If the cache_dir directory (e.g. /var/spool/cache) does not exist, and the Squid userid does not have permission to create it, then you will get the ``permission denied'' error. This can be simply fixed by manually creating the cache directory.
# mkdir /var/spool/cache
# chown <userid>:<groupid> /var/spool/cache
# squid -z
Alternatively, if the directory already exists, then your operating system may be returning ``Permission Denied'' instead of ``File Exists'' on the mkdir() system call. This patch by Miquel van Smoorenburg should fix it.
Either (1) the Squid userid does not have permission to bind to the port, or (2) some other process has bound itself to the port. Remember that root privileges are required to open port numbers less than 1024. If you see this message when using a high port number, or even when starting Squid as root, then the port has already been opened by another process. Maybe you are running in the HTTP Accelerator mode and there is already an HTTP server running on port 80? If you're really stuck, install the way cool lsof utility to show you which process has your port in use.
This is explained in the Redirector section.
Squid keeps an in-memory bitmap of disk files that are available for use, or are being used. The size of this bitmap is determined at run time, based on two things: the size of your cache, and the average (mean) cache object size.
The size of your cache is specified in squid.conf, on the cache_dir lines. The mean object size can also be specified in squid.conf, with the 'store_avg_object_size' directive. By default, Squid uses 13 Kbytes as the average size.
When allocating the bitmaps, Squid allocates this many bits:
2 * cache_size / store_avg_object_size
So, if you exactly specify the correct average object size, Squid should have 50% filemap bits free when the cache is full. You can see how many filemap bits are being used by looking at the 'storedir' cache manager page. It looks like this:
Store Directory #0: /usr/local/squid/cache
First level subdirectories: 4
Second level subdirectories: 4
Maximum Size: 1024000 KB
Current Size: 924837 KB
Percent Used: 90.32%
Filemap bits in use: 77308 of 157538 (49%)
Flags:
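As a cross-check, these numbers agree with the formula above: 2 * 1024000 / 13 is approximately 157538, which is exactly the total number of filemap bits reported.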
Now, if you see the ``You've run out of swap file numbers'' message, then it means one of two things: either you have found a Squid bug, or your cache's average object size is significantly smaller than the 'store_avg_object_size' setting.
To check the average file size of objects currently in your cache, look at the cache manager 'info' page, and you will find a line like:
Mean Object Size: 11.96 KB
To make the warning message go away, set 'store_avg_object_size' to that value (or lower) and then restart Squid.
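For example, given the 'Mean Object Size' shown above (the value here is only an illustration), something like this would do:
store_avg_object_size 11 KB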
If I try, by way of a test, to access
ftp://username:password@ftpserver/somewhere/foo.tar.gz
I get
somewhere/foo.tar.gz: Not a directory.
Use this URL instead:
ftp://username:password@ftpserver/%2fsomewhere/foo.tar.gz
This means your pinger program does not have root privileges. You should either do this:
% su
# make install-pinger
or
# chown root /usr/local/squid/bin/pinger
# chmod 4755 /usr/local/squid/bin/pinger
A forwarding loop is when a request passes through one proxy more than once. You can get a forwarding loop if a cache forwards requests to itself, or if a pair or group of caches forward requests to each other.
Forwarding loops are detected by examining the Via request header. Each cache which "touches" a request must add its hostname to the Via header. If a cache notices its own hostname in this header for an incoming request, it knows there is a forwarding loop somewhere. NOTE: A pair of caches which have the same visible_hostname value will report forwarding loops.
When Squid detects a forwarding loop, it is logged to the cache.log file with the received Via header. From this header you can determine which cache (the last in the list) forwarded the request to you.
One way to reduce forwarding loops is to change a parent relationship to a sibling relationship.
Another way is to use cache_host_acl rules. For example:
# Our parent caches
cache_peer A.example.com parent 3128 3130
cache_peer B.example.com parent 3128 3130
cache_peer C.example.com parent 3128 3130

# An ACL list
acl PEERS src A.example.com
acl PEERS src B.example.com
acl PEERS src C.example.com

# Prevent forwarding loops
cache_host_acl A.example.com !PEERS
cache_host_acl B.example.com !PEERS
cache_host_acl C.example.com !PEERS

The above configuration instructs squid to NOT forward a request to parents A, B, or C when a request is received from any one of those caches.
This error message is seen mostly on Solaris systems. Mark Kennedy gives a great explanation:
Error 71 [EPROTO] is an obscure way of reporting that clients made it onto your server's TCP incoming connection queue but the client tore down the connection before the server could accept it. I.e. your server ignored its clients for too long. We've seen this happen when we ran out of file descriptors. I guess it could also happen if something made squid block for a long time.
Got these messages in my cache log - I guess it means that the index contents do not match the contents on disk.
1998/09/23 09:31:30| storeSwapInFileOpened: /var/cache/00/00/00000015: Size mismatch: 776(fstat) != 3785(object)
1998/09/23 09:31:31| storeSwapInFileOpened: /var/cache/00/00/00000017: Size mismatch: 2571(fstat) != 4159(object)
What does Squid do in this case?
NOTE, these messages are specific to Squid-2. These happen when Squid reads an object from disk for a cache hit. After it opens the file, Squid checks to see if the size is what it expects it should be. If the size doesn't match, the error is printed. In this case, Squid does not send the wrong object to the client. It will re-fetch the object from the source.
These messages are caused by buggy clients, mostly Netscape Navigator. What happens is, Netscape sends an HTTPS/SSL request over a persistent HTTP connection. Normally, when Squid gets an SSL request, it looks like this:
CONNECT www.buy.com:443 HTTP/1.0
Then Squid opens a TCP connection to the destination host and port, and the real request is sent encrypted over this connection. That's the whole point of SSL, that all of the information must be sent encrypted.
With this client bug, however, Squid receives a request like this:
GET https://www.buy.com/corp/ordertracking.asp HTTP/1.0
Accept: */*
User-agent: Netscape ...
...
Now, all of the headers and the message body have been sent, unencrypted, to Squid. There is no way for Squid to somehow turn this into an SSL request. The only thing we can do is return the error message.
Note, this browser bug does represent a security risk because the browser is sending sensitive information unencrypted over the network.