
14. System-Dependent Weirdnesses

14.1 Solaris

select()

select(2) won't handle more than 1024 file descriptors. If you need more than 1024 descriptors, compile Squid with -DUSE_POLL so that it uses poll(2) instead.
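
One way to pass the flag at build time is through CFLAGS before running configure (a sketch; the csh-style setenv matches other examples in this FAQ, so adjust for your shell):

        % setenv CFLAGS -DUSE_POLL
        % ./configure ...
        % make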

malloc

Solaris' libmalloc.a is leaky, so Squid's configure does not use -lmalloc on Solaris.

DNS lookups and nscd

by David J N Begley.

DNS lookups can be slow because of some mysterious thing called nscd (the name service cache daemon). You should edit /etc/nscd.conf and make it say:

        enable-cache            hosts           no

Apparently nscd serializes DNS queries, slowing everything down when an application (such as Squid) hits the resolver hard. You may notice something similar if you run a log processor that executes many DNS resolver queries - the resolver starts to slow... right... down...
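
Note that nscd must be restarted for the change to take effect; on most Solaris 2.x systems something like this works (the init script path is an assumption, check your system):

        # /etc/init.d/nscd stop
        # /etc/init.d/nscd start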

DNS lookups and /etc/nsswitch.conf

by Jason Armistead.

The /etc/nsswitch.conf file determines the order of searches for lookups (amongst other things). You might only have it set up to allow NIS and HOSTS files to work. You definitely want the "hosts:" line to include the word dns, e.g.:

        hosts:      nis dns [NOTFOUND=return] files 

DNS lookups and NIS

by Chris Tilbury.

Our site cache is running on a Solaris 2.6 machine. We use NIS to distribute authentication and local hosts information around and, in common with our multiuser systems, we run a slave NIS server on it to help the response of NIS queries.

We were seeing very high name->ip lookup times (avg ~2 sec) and ip->name lookup times (avg ~8 sec), although there didn't seem to be much of a problem with response times for valid sites until the cache was placed under high load. Then, performance went down the toilet.

After some time, and a bit of detective work, we found the problem. On Solaris 2.6, if you have a local NIS server running (ypserv) and you have NIS in your /etc/nsswitch.conf hosts entry, then check the flags it is being started with. The 2.6 ypstart script checks to see if there is a resolv.conf file present when it starts ypserv. If there is, then it starts it with the -d option.

This has the same effect as putting the YP_INTERDOMAIN key in the hosts table -- namely, that failed NIS host lookups are tried against the DNS by the NIS server.

This is a bad thing(tm)! If NIS itself tries to resolve names using the DNS, then the requests are serialised through the NIS server, creating a bottleneck (This is the same basic problem that is seen with nscd). Thus, one failing or slow lookup can, if you have NIS before DNS in the service switch file (which is the most common setup), hold up every other lookup taking place.

If you're running in this kind of setup, then you will want to make sure that

  1. ypserv doesn't start with the -d flag.
  2. you don't have the YP_INTERDOMAIN key in the hosts table (find the B=-b line in the yp Makefile and change it to B=)
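
For example, the relevant stanza of the yp Makefile (typically /var/yp/Makefile; the exact path is an assumption for your system) changes like this:

        # before: ypserv falls back to DNS for failed host lookups
        B=-b

        # after: DNS fallback disabled
        B=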

We changed these here, and saw our average lookup times drop by up to an order of magnitude (~150 msec for name->ip queries and ~1.5 sec for ip->name queries; the latter is still high, I suspect, because more of these fail and time out, since they are made less often and the entries are frequently non-existent anyway).

Tuning

Solaris 2.x - tuning your TCP/IP stack and more by Jens-S. Vöckler

disk write error: (28) No space left on device

You might get this error even if your disk is not full, and is not out of inodes. Check your syslog logs (/var/adm/messages, normally) for messages like either of these:

        NOTICE: realloccg /proxy/cache: file system full
        NOTICE: alloc: /proxy/cache: file system full

In a nutshell, the UFS filesystem used by Solaris can't cope very well with the workload squid presents to it. The filesystem ends up becoming highly fragmented, until it reaches a point where there are insufficient free blocks left to create files with, and only fragments available. At this point, you'll get this error and squid will revise its idea of how much space is actually available to it. You can do a "fsck -n raw_device" (no need to unmount; this checks in read-only mode) to look at the fragmentation level of the filesystem. It will probably be quite high (>15%).

Sun suggest two solutions to this problem. One costs money, the other is free but may result in a loss of performance (although Sun do claim it shouldn't, given the already highly random nature of squid disk access).

The first is to buy a copy of VxFS, the Veritas Filesystem. This is an extent-based filesystem and it's capable of having online defragmentation performed on mounted filesystems. This costs money, however (VxFS is not very cheap!)

The second is to change certain parameters of the UFS filesystem. Unmount your cache filesystems and use tunefs to change optimization to "space" and to reduce the "minfree" value to 3-5% (under Solaris 2.6 and higher, very large filesystems will almost certainly have a minfree of 2% already and you shouldn't increase this). You should be able to get fragmentation down to around 3% by doing this, with an accompanied increase in the amount of space available.
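
A sketch of the tunefs changes (the mount point and device name here are assumptions; substitute your own):

        # umount /proxy/cache
        # tunefs -o space /dev/rdsk/c0t1d0s0
        # tunefs -m 5 /dev/rdsk/c0t1d0s0
        # mount /proxy/cache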

Thanks to Chris Tilbury.

Solaris X86 and IPFilter

by Jeff Madison

Important update regarding Squid running on Solaris x86. I have been working for several months to resolve what appeared to be a memory leak in squid when running on Solaris x86 regardless of the malloc that was used. I have made 2 discoveries that anyone running Squid on this platform may be interested in.

Number 1: There is not a memory leak in Squid, even though after the system runs for some amount of time (this varies depending on the load the system is under) top reports that there is very little memory free. True to the claims of the Sun engineer I spoke to, this statistic from top is incorrect. The odd thing is that you do begin to see performance suffer substantially as time goes on, and the only way to correct the situation is to reboot the system. This leads me to discovery number 2.

Number 2: There is some type of resource problem, memory or other, with IPFilter on Solaris x86. I have not taken the time to investigate what the problem is because we are no longer using IPFilter. We have switched to an Alteon ACE 180 Gigabit switch, which will do the trans-proxy for you. After moving the trans-proxy redirection process out to the Alteon switch, Squid has run for 3 days straight under a huge load with no problem whatsoever. We currently have 2 boxes with 40 GB of cached objects on each box. This 40 GB was accumulated in the 3 days; from this you can see what type of load these boxes are under. Prior to this change we were never able to operate for more than 4 hours.

Because the problem appears to be with IPFilter, I would guess that you would only run into this issue if you are trying to run Squid as a transparent proxy using IPFilter. If anyone has information indicating my findings are incorrect, I am willing to investigate further.

Changing the directory lookup cache size

by Mike Batchelor

On Solaris, the kernel variable for the directory name lookup cache size is ncsize. In /etc/system, you might want to try

        set ncsize = 8192
or even higher. The kernel variable ufs_inode - the size of the inode cache itself - scales with ncsize in Solaris 2.5.1 and later. Previous versions of Solaris required both to be adjusted independently; on 2.5.1 and later it is not recommended to adjust ufs_inode directly.

You can set ncsize quite high, but at some point - dependent on the application - a too-large ncsize will increase the latency of lookups.

Defaults are:

        Solaris 2.5.1 : (max_nprocs + 16 + maxusers) + 64
        Solaris 2.6/Solaris 7 : 4 * (max_nprocs + maxusers) + 320
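
To check the value actually in use on a running system, something like this should work (assuming adb -k is available, as it is on stock Solaris 2.x):

        # echo "ncsize/D" | adb -k /dev/ksyms /dev/mem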

The priority_paging algorithm

by Mike Batchelor

Another new tuneable (actually a toggle) in Solaris 2.5.1, 2.6, and Solaris 7 is the priority_paging algorithm. This is actually a complete rewrite of the virtual memory system on Solaris. If you turn it on (set priority_paging = 1 in /etc/system), it will page out filesystem pages first and application data last. As you may know, the Solaris buffer cache grows to fill available pages, and under the old VM system, applications could get paged out to make way for the buffer cache, which can lead to swap thrashing and degraded application performance. The new priority_paging helps keep application and shared library pages in memory, preventing the buffer cache from paging them out until memory gets REALLY short.

Solaris 2.5.1 requires patch 103640-25 or higher and Solaris 2.6 requires 105181-10 or higher to get priority_paging. Solaris 7 needs no patch, but all versions have it turned off by default.
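
To enable it, add this to /etc/system and reboot (comment lines in /etc/system begin with an asterisk):

        * enable priority paging (needs the patches noted above)
        set priority_paging = 1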

14.2 FreeBSD

T/TCP bugs

We have found that with FreeBSD-2.2.2-RELEASE, there are some bugs with T/TCP. FreeBSD will try to use T/TCP if you've enabled the ``TCP Extensions.'' To disable T/TCP, use sysinstall to disable TCP Extensions, or edit /etc/rc.conf and set

        tcp_extensions="NO"             # Allow RFC1323 & RFC1544 extensions (or NO).

or add this to your /etc/rc files:

        sysctl -w net.inet.tcp.rfc1644=0

mbuf size

We noticed an odd thing with some of Squid's interprocess communication. Often, output from the dnsserver processes would NOT be read in one chunk. With full debugging, it looks like this:

1998/04/02 15:18:48| comm_select: FD 46 ready for reading
1998/04/02 15:18:48| ipcache_dnsHandleRead: Result from DNS ID 2 (100 bytes)
1998/04/02 15:18:48| ipcache_dnsHandleRead: Incomplete reply
....other processing occurs...
1998/04/02 15:18:48| comm_select: FD 46 ready for reading
1998/04/02 15:18:48| ipcache_dnsHandleRead: Result from DNS ID 2 (9 bytes)
1998/04/02 15:18:48| ipcache_parsebuffer: parsing:
$name www.karup.com
$h_name www.karup.inter.net
$h_len 4
$ipcount 2
38.15.68.128
38.15.67.128
$ttl 2348
$end

Interestingly, it is very common to get only 100 bytes on the first read. When two read() calls are required, this adds additional latency to the overall request. On our caches running Digital Unix, the median dnsserver response time was measured at 0.01 seconds. On our FreeBSD cache, however, the median latency was 0.10 seconds.

Here is a simple patch to fix the bug:

===================================================================
RCS file: /home/ncvs/src/sys/kern/uipc_socket.c,v
retrieving revision 1.40
retrieving revision 1.41
diff -p -u -r1.40 -r1.41
--- src/sys/kern/uipc_socket.c  1998/05/15 20:11:30     1.40
+++ /home/ncvs/src/sys/kern/uipc_socket.c       1998/07/06 19:27:14     1.41
@@ -31,7 +31,7 @@
  * SUCH DAMAGE.
  *
  *     @(#)uipc_socket.c       8.3 (Berkeley) 4/15/94
- *     $Id: uipc_socket.c,v 1.40 1998/05/15 20:11:30 $
+ *     $Id: uipc_socket.c,v 1.41 1998/07/06 19:27:14 $
  */
 
 #include <sys/param.h>
@@ -491,6 +491,7 @@ restart:
                                mlen = MCLBYTES;
                                len = min(min(mlen, resid), space);
                        } else {
+                               atomic = 1;
 nopages:
                                len = min(min(mlen, resid), space);
                                /*

Another technique which may help, but does not fix the bug, is to increase the kernel's mbuf size. The default is 128 bytes. The MSIZE symbol is defined in /usr/include/machine/param.h. To change it, we added this line to our kernel configuration file:

        options         MSIZE="256"

Dealing with NIS

/var/yp/Makefile has the following section:

        # The following line encodes the YP_INTERDOMAIN key into the hosts.byname
        # and hosts.byaddr maps so that ypserv(8) will do DNS lookups to resolve
        # hosts not in the current domain. Commenting this line out will disable
        # the DNS lookups.
        B=-b

You will want to comment out the B=-b line so that ypserv does not do DNS lookups.

14.3 OSF1/3.2

If you compile both libgnumalloc.a and Squid with cc, the mstats() function returns bogus values. However, if you compile libgnumalloc.a with gcc, and Squid with cc, the values are correct.

14.4 BSD/OS

gcc/yacc

Some people report difficulties compiling squid on BSD/OS.

process priority

I've noticed that my Squid process seems to stick at a nice value of four, and clicks back to that even after I renice it to a higher priority. However, looking through the Squid source, I can't find any instance of a setpriority() call, or anything else that would seem to indicate Squid's adjusting its own priority.

by Bill Bogstad

BSD Unices have traditionally auto-niced non-root processes to 4 after they have used a lot (4 minutes???) of CPU time. My guess is that it's BSD/OS, not Squid, that is doing this. I don't know offhand if there is a way to disable this on BSD/OS.

by Arjan de Vet

You can get around this by starting Squid with a nice level of -4 (or another negative value).
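
For example (a sketch; setting a negative nice level requires root, and the path to the squid binary is an assumption):

        # nice -n -4 /usr/local/squid/bin/squid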

14.5 Linux

Cannot bind socket FD 5 to 127.0.0.1:0: (49) Can't assign requested address

Try a different version of Linux. We have received many reports of this ``bug'' from people running Linux 2.0.30. The bind(2) system call should NEVER give this error when binding to port 0.

FATAL: Don't run Squid as root, set 'cache_effective_user'!

Some users have reported that setting cache_effective_user to nobody under Linux does not work. However, it appears that using any cache_effective_user other than nobody will succeed. One solution is to create a user account for Squid and set cache_effective_user to that. Alternatively, you can change the UID for the nobody account from 65535 to 65534.

Another problem is that RedHat 5.0 Linux seems to have a broken setresuid() function. There are two ways to fix this. Before running configure:

        % setenv ac_cv_func_setresuid no
        % ./configure ...
        % make clean
        % make install
Or, after running configure, manually edit include/autoconf.h and change the HAVE_SETRESUID line to:

        #define HAVE_SETRESUID 0

Also, some users report this error is due to a NIS configuration problem. Adding compat to the passwd and group lines of /etc/nsswitch.conf makes the problem go away. (Ambrose Li)
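
That is, the relevant lines of /etc/nsswitch.conf would look something like this (a sketch; your other lookup sources may differ):

        passwd:     compat
        group:      compat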

Russ Mellon notes that these problems with cache_effective_user are fixed in version 2.2.x of the Linux kernel.

Large ACL lists make Squid slow

The regular expression library which comes with Linux is known to be very slow. Some people report it entirely fails to work after long periods of time.

To fix, use the GNUregex library included with the Squid source code. With Squid-2, use the --enable-gnuregex configure option.
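
For example, when building Squid-2:

        % ./configure --enable-gnuregex ...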

14.6 HP-UX

StatHist.c:74: failed assertion `statHistBin(H, min) == 0'

working on it...

14.7 IRIX

dnsserver always returns 255.255.255.255

There is a problem with GCC (2.8.1 at least) on Irix 6 which causes inet_ntoa() to always return the string 255.255.255.255 for _ANY_ address. If this happens to you, compile Squid with the native C compiler instead of GCC.
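
One way to do that (following the csh-style examples used elsewhere in this FAQ):

        % setenv CC cc
        % ./configure ...
        % make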

14.8 SCO-UNIX

by F.J. Bosscha

To make squid run comfortably on SCO Unix you need to do the following:

Increase the NOFILES parameter and the NUMSP parameter, then recompile squid. Although squid said in the cache.log file that it had 3000 file descriptors, I had problems with messages saying that no more file descriptors were available. After I also increased the NUMSP value, the problems were gone.

One thing left is the number of TCP connections the system can handle. The default is 256, but I increased that as well because of the number of clients we have.
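
On SCO, kernel parameters such as NOFILES are typically adjusted with the idtune utility, followed by a kernel relink and reboot (a sketch under that assumption; the value shown is illustrative and the paths vary by release):

        # /etc/conf/bin/idtune NOFILES 3000
        # /etc/conf/cf.d/link_unix
        # reboot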

