For security purposes, the Linux kernel has a mechanism to set resource limits on a process-by-process basis. These resource limits are called ulimits. The defaults are reasonable for average use but need to be adjusted for most corporate and enterprise applications. As your organization and applications scale, you will need to address these limits.

    You can check the ulimits for any process ID by reading /proc/<pid>/limits, where <pid> is replaced by the numeric pid of the process. New processes will inherit the ulimits of the parent process.
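    For example, you can inspect the current shell's own limits ($$ expands to the shell's PID):

```shell
# Inspect the resource limits of the current shell ($$ is its PID).
cat /proc/$$/limits

# For any other process, substitute its numeric PID, e.g.:
# cat /proc/1234/limits
```

    The output lists each limit with its soft value, hard value, and units, one per line.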

    The ulimit command is a shell builtin, and its behavior is specific to each shell: the arguments are not the same in bash, sh, and zsh. Keep this in mind when changing init scripts for daemons. If you get an invalid-argument error, you are likely running the ulimit command in a different shell than the one you tested with.
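    A quick way to avoid surprises is to test the exact ulimit invocation under the same shell the init script uses. For example, checking the open-files limit under bash and sh (the soft limit is inherited, so the numbers should match):

```shell
# Run the same query under each shell an init script might use.
bash -c 'ulimit -n'   # soft open-files limit as bash reports it
sh -c 'ulimit -n'     # same limit as reported by /bin/sh
```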

    All ulimits come with a hard and a soft limit. The soft limit is what the kernel actually enforces; the hard limit is the ceiling up to which an unprivileged process may raise its own soft limit. Unless you’re intimately familiar with them, there’s no need to set a different number for hard and soft limits. Use the same number for both.
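    In bash, the -S and -H flags select the soft and hard limit respectively; with neither flag, a new value is applied to both. A sketch using the open-files limit (-n):

```shell
# In bash: -S selects the soft limit, -H the hard limit.
ulimit -Sn        # show the current soft open-files limit
ulimit -Hn        # show the current hard open-files limit
ulimit -n 4096    # no -S/-H: set soft and hard together
```

    Note that lowering a hard limit is irreversible within that shell and its children; only root can raise it again.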



    As long as processes fluctuate within their limits, everything works as expected. It’s not until a limit is reached that problems begin to happen. When any one of the resource caps is exceeded, the offending request is rejected, and you’ll notice the following types of problems:

    • Unable to accept incoming network connections
    • Unable to spawn new processes or threads
    • Unable to open files or network sockets
    • Databases shutting down unexpectedly
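    When you suspect a process is hitting its open-files cap, a useful check is to count its current file descriptors and compare against its limit. A sketch, using the current shell as a stand-in for the affected process:

```shell
pid=$$    # hypothetical: substitute the PID of the affected process
# Count the file descriptors the process currently holds open:
ls /proc/$pid/fd | wc -l
# Compare against its cap:
grep 'Max open files' /proc/$pid/limits
```

    If the count is at or near the limit, new open(), socket(), or accept() calls will start failing.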


    Quick Fix

    Set all ulimits, both hard and soft, to 65536 or “unlimited” for the affected processes. Restart the processes and verify that the change took effect.
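    Because new processes inherit the limits of their parent, one way to apply the quick fix is to raise the limits in the shell that restarts the daemon. A sketch (the daemon path is hypothetical; raising a hard limit above its current value requires root):

```shell
# Raise limits in the launching shell; the restarted daemon inherits them.
ulimit -n 65536    # open files, soft and hard together in bash
ulimit -u 65536    # max user processes
ulimit -n          # verify before restarting
# /etc/init.d/mydaemon restart   # hypothetical daemon init script
```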

    For a system-wide or per-user setting, change the defaults in /etc/security/limits.conf.
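    Entries in limits.conf take the form domain, type, item, value. A sketch of what the quick-fix values might look like (appuser is a hypothetical account name):

```
# /etc/security/limits.conf — example entries; appuser is hypothetical
# <domain>   <type>   <item>    <value>
appuser      soft     nofile    65536
appuser      hard     nofile    65536
appuser      soft     nproc     65536
appuser      hard     nproc     65536
```

    These defaults are applied by pam_limits at login, so they take effect for new sessions, not ones already running.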


    Thorough Fix

    Set ulimits according to calculated usage and application configuration. Allow a 10% buffer for safety.
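    The calculation itself is simple arithmetic from the application’s configuration. A sketch for a hypothetical network daemon, where every number is an assumption to be replaced with your own figures:

```shell
# Hypothetical sizing for a network daemon; all numbers are assumptions.
max_clients=10000               # configured concurrent connections
fds_per_client=2                # e.g. one client socket plus one backend socket
base_fds=200                    # logs, libraries, listening sockets
need=$(( max_clients * fds_per_client + base_fds ))
nofile=$(( need + need / 10 ))  # add the 10% safety buffer
echo "set nofile to at least $nofile"
```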

    For daemons, change the appropriate init script or config file.
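    For classic SysV-style init scripts this usually means adding a ulimit line before the daemon is launched; on systemd systems, the per-service equivalent is a Limit* directive in the unit file. A hypothetical override (mydaemon is a placeholder service name):

```
# systemctl edit mydaemon — drop-in override; mydaemon is hypothetical
[Service]
LimitNOFILE=65536
LimitNPROC=65536
```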

    For common applications, follow best practices in the application’s online documentation.