the djb way

daemontools


softlimit resources

Long-running processes, such as the background services managed by daemontools, must be carefully written. Programming errors that leak memory or leave files open can lead to unexpected resource exhaustion. Processes that chew through resources may degrade the performance of the server over time, and ultimately crash the whole system.

The softlimit utility may be used to mitigate the effects of such problems, and constrain the use of system resources within specific limits. It is an important tool to protect your server from runaway processes.

Usage of softlimit is common in daemontools "run" scripts. The general syntax is:

softlimit options program ...

That is, softlimit sets the resource limits described by options, then runs program within this constrained environment. The new limits are inherited by program and by any child processes it subsequently spawns.
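
For example, here is a sketch of how softlimit typically appears in a daemontools run script; the daemon name "dummyd", the limit values, and the banner text are placeholders for illustration:

#!/bin/sh
# sketch of a run script: send stderr to the log, announce startup,
# then replace this shell with the daemon, constrained to a 2 MB
# data segment and 64 open file descriptors
exec 2>&1
echo "*** Starting dummyd..."
exec softlimit -d 2000000 -o 64 dummyd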

Here is a summary of softlimit options. The "<sys/resource.h>" column gives the associated rlimit constant. See getrlimit(2) for more information, and to find out which of these constraints are supported on your system.

option              mnemonic               category   <sys/resource.h>  signal
softlimit -d bytes  data segment           memory     RLIMIT_DATA
softlimit -s bytes  stack segment          memory     RLIMIT_STACK      SIGSEGV
softlimit -l bytes  locked physical pages  memory     RLIMIT_MEMLOCK
softlimit -a bytes  all segments           memory     RLIMIT_AS
softlimit -m bytes  shorthand for:         memory     (see above)
                    -d bytes -s bytes
                    -l bytes -a bytes
softlimit -r bytes  resident set size      memory     RLIMIT_RSS
softlimit -o n      open file descriptors  files      RLIMIT_NOFILE
softlimit -f bytes  file size              files      RLIMIT_FSIZE      SIGXFSZ
softlimit -c bytes  core file size         files      RLIMIT_CORE       SIGXFSZ
softlimit -p n      processes per uid      processes  RLIMIT_NPROC
softlimit -t secs   cpu time               cpu        RLIMIT_CPU        SIGXCPU

This table shows the syntax used for setting a limit on a particular resource. That is, to set the soft limit on the data segment of a process to 3 million bytes, use:

softlimit -d 3000000
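
To see the effect of such a limit, run a shell's ulimit built-in (covered later in this page) under softlimit. Note that bash reports the data segment limit in kilobytes, so 3000000 bytes shows up as 2929, the same rounded figure that appears in the listings below:

$ softlimit -d 3000000 bash -c 'ulimit -d'
2929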

Each system resource actually has two associated limits, described as "soft" and "hard". The softlimit utility uses the capabilities of the setrlimit(2) system call to set only the soft limits. Note: the softlimit utility provides no way to change the hard limit of a resource.

We mention this difference because softlimit actually sets the soft limit of a resource to the lesser of:

  * the value given as the argument to the option, and
  * the current hard limit for that resource.

That is, a soft limit may never be greater than its hard limit. If you want to increase the value of a soft limit up to its hard limit, use an equal sign ("=") as an argument to the option, in place of a value. This sets the soft limit to the current value of the hard limit:

softlimit -d =

We have never seen this usage in a run script, however. Generally, one uses softlimit to reduce resource limits in a process environment, rather than increase them.
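
Should you want to try it anyway, the effect is easy to observe with the shell's ulimit built-in, introduced next; here the open-descriptor soft limit is raised to the hard limit, which happens to be 7293 on the system used for the examples below:

$ softlimit -o = bash -c 'ulimit -n'
7293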

Of course, resource limits may also be viewed and set by shell built-in commands. Use ulimit in Bourne-type shells, and limit in C-type shells. For example, to see the resource constraints in a Bash shell environment:

$ ulimit -a
core file size        (blocks, -c) unlimited
data seg size         (kbytes, -d) 524288
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) unlimited
max memory size       (kbytes, -m) unlimited
open files                    (-n) 7293
pipe size          (512 bytes, -p) 1
stack size            (kbytes, -s) 65536
cpu time             (seconds, -t) unlimited
max user processes            (-u) 3646
virtual memory        (kbytes, -v) unlimited

Notice that the options and arguments to ulimit are similar to those of softlimit, but not identical. In particular, ulimit reports most limits in kilobytes or 512-byte blocks, whereas softlimit always works in bytes.

Open a new shell under softlimit and check the results:

$ softlimit -c 0 -m 3000000 -o 256 bash -l
$ ulimit -a
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) 2929
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) 2929
max memory size       (kbytes, -m) unlimited
open files                    (-n) 256
pipe size          (512 bytes, -p) 1
stack size            (kbytes, -s) 2929
cpu time             (seconds, -t) unlimited
max user processes            (-u) 3646
virtual memory        (kbytes, -v) 2929
$ ulimit -Ha
core file size        (blocks, -c) unlimited
data seg size         (kbytes, -d) 524288
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) unlimited
max memory size       (kbytes, -m) unlimited
open files                    (-n) 7293
pipe size          (512 bytes, -p) 1
stack size            (kbytes, -s) 65536
cpu time             (seconds, -t) unlimited
max user processes            (-u) 3646
virtual memory        (kbytes, -v) unlimited

Here is a new shell with decreased soft limits: we can see the lower limits for memory, open files, and the size of the core dump file.

Notice, however, that the hard limits have not changed.

If a resource limit is set lower than a process requires, the process will usually fail to run:

$ softlimit -m 50000 bash -l
Abort trap

If your run script is failing, an overly tight softlimit is a common source of the error. One telltale sign is a daemon restarting once every second, continuously repeating the "*** Starting ..." banner we echo into every service log, as supervise restarts it each time it dies. If this is happening, relax or temporarily remove the softlimit settings in the run script until the service starts cleanly, then tighten them again step by step.

This is a trial-and-error process, and each service has its own resource requirements, some greater than others. But you will quickly converge on some basic values that are generally satisfactory for a given platform. These can then be used as a starting point for other run scripts on the system.
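
One practical way to do the trial and error is to run the service's command line by hand, outside of supervise, adjusting the limits until the daemon runs cleanly. The daemon name "dummyd" and the values shown here are placeholders:

$ softlimit -m 2000000 dummyd     # dies immediately
$ softlimit -m 4000000 dummyd     # still dies
$ softlimit -m 8000000 dummyd     # runs; use this value in the run script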


Copyright © 2004, Wayne Marshall.
All rights reserved.

Last edit 2004.03.15, wcm.