How to Configure the Linux OOM Killer to Stop Crashing MySQL
Your pager goes off at 3:14 AM. The monitoring dashboard is a sea of red. You groggily SSH into your production database server, run htop, and see that MySQL is nowhere to be found. The service is down. You restart it, everything comes back up, and you start digging through the logs.
You run dmesg -T and there it is. The dreaded Out of Memory: Killed process (mysqld) message. The Linux kernel’s Out of Memory (OOM) Killer decided your server was running critically low on RAM and mercilessly assassinated your most important process to save the operating system.
If you manage any infrastructure, from a single Ubuntu VPS on DigitalOcean to a massive fleet of Red Hat Enterprise Linux instances on AWS, you will eventually face this exact scenario. The OOM killer is a necessary evil in Linux memory management, but its default behavior is a disaster for database servers. When push comes to shove, it almost always targets the database.
I’ve spent over a decade in Linux DevOps and administration, and I’ve seen this exact failure mode take down everything from critical e-commerce backends to internal GitLab CI runners. Today, I’m going to show you exactly how to prevent the OOM killer from killing a critical process on Linux, specifically focusing on shielding MySQL, MariaDB, and PostgreSQL from its wrath.
Understanding the Linux OOM Killer: Why It Hates Your Database
To stop the OOM killer, you first have to understand how it thinks. The Linux kernel uses a memory allocation strategy called overcommit. When an application asks for memory, the kernel says “yes” and allocates virtual memory, even if it doesn’t have enough physical RAM to back it up at that exact moment.
This is generally a great feature for Linux desktop environments and general Linux server workloads because applications rarely use all the memory they request. It’s an efficient way to maximize hardware utilization.
But what happens when all those applications suddenly try to use the memory they were promised? The system runs out of physical RAM and swap space. If the kernel does nothing, the entire operating system will hard lock, requiring a physical reboot.
Enter the OOM Killer. When the kernel detects critical memory exhaustion, it scans all running processes and calculates an oom_score for each one. The process with the highest score gets SIGKILL’d. No graceful shutdown, no flushing buffers to disk. Immediate termination.
How is the oom_score calculated? The heuristic is complex, but it essentially boils down to: kill the process using the most memory. Because MySQL (specifically the InnoDB storage engine) is designed to cache as much data in RAM as possible for performance, it is almost always the fattest target on the server. The OOM killer looks at MySQL, sees a massive memory footprint, and snipes it.
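You can watch this ranking happen: the kernel exposes each process’s live score at /proc/<pid>/oom_score. A quick sketch that lists the biggest targets (the sort/head formatting is just for readability):

```shell
#!/bin/sh
# List the five processes the kernel currently considers the "fattest"
# OOM targets, by reading each one's live oom_score from /proc.
for pid in /proc/[0-9]*; do
    # Processes can exit mid-loop; skip any whose files vanish.
    score=$(cat "$pid/oom_score" 2>/dev/null) || continue
    comm=$(cat "$pid/comm" 2>/dev/null) || continue
    echo "$score $comm (pid ${pid#/proc/})"
done | sort -rn | head -5
```

On a database server, mysqld will almost always sit at or near the top of this list.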
Diagnosing an OOM Kill Event
Before making system-wide changes, you need to prove the OOM killer is actually the culprit. MySQL can crash for plenty of other reasons (corrupt tables, storage failures, segmentation faults). We need to check the Linux kernel logs.
Use the dmesg command or journalctl to search for OOM events. Run this on your Linux terminal:
sudo dmesg -T | grep -i 'out of memory'
Alternatively, if you’re on a modern systemd-based distro (like Ubuntu 22.04, Debian 12, or Rocky Linux 9), query the journal directly:
sudo journalctl -k | grep -i -e memory -e oom
You’ll see an output block that looks something like this:
[Tue Oct 24 03:14:12 2023] mysqld invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
[Tue Oct 24 03:14:12 2023] CPU: 2 PID: 14592 Comm: mysqld Not tainted 5.15.0-86-generic #96-Ubuntu
...
[Tue Oct 24 03:14:12 2023] Out of memory: Killed process 14592 (mysqld) total-vm:4589312kB, anon-rss:3145728kB, file-rss:0kB, shmem-rss:0kB, UID:114 pgtables:8216kB oom_score_adj:0
That Out of memory: Killed process line is your smoking gun. Now, let’s fix it.
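If you want to quantify the damage, the kill line itself records how much resident memory the victim held. A small sketch that parses the anon-rss field out of the sample line above (adjust the grep if your kernel formats the line differently):

```shell
#!/bin/sh
# Extract the resident memory (anon-rss, in kB) from an OOM kill line.
# The sample line is the one from the dmesg output above.
line='Out of memory: Killed process 14592 (mysqld) total-vm:4589312kB, anon-rss:3145728kB, file-rss:0kB, shmem-rss:0kB, UID:114 pgtables:8216kB oom_score_adj:0'
rss_kb=$(echo "$line" | grep -o 'anon-rss:[0-9]*' | cut -d: -f2)
echo "mysqld was holding $((rss_kb / 1024)) MB of resident memory"
# → mysqld was holding 3072 MB of resident memory
```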
How to Prevent the OOM Killer From Killing a Process on Linux via Systemd
The most direct way to protect a critical process is to manipulate its oom_score_adj value. This is an adjustment factor applied to the final OOM score. It ranges from -1000 to +1000.
- A value of +1000 means “kill me first.”
- A value of -1000 makes the process completely immune to the OOM killer.
If you are trying to figure out how to prevent the OOM killer from killing a process on Linux, setting this value to -1000 (or something very low, like -900) is the most direct lever you have. Since almost all modern Linux distributions use systemd, we can apply this adjustment seamlessly using systemd service drop-ins.
Do not edit the main /lib/systemd/system/mysql.service file. Your changes will be overwritten the next time the apt or dnf package manager updates MySQL. Instead, use the systemctl edit command to create a safe override file.
sudo systemctl edit mysql.service
Note: If you are using MariaDB, the service is usually mariadb.service.
This will open your default Linux text editor (usually Nano or Vim). Add the following lines exactly as shown at the top of the file, between the generated comments:
[Service]
OOMScoreAdjust=-1000
Save the file and exit. This creates a drop-in file at /etc/systemd/system/mysql.service.d/override.conf. Next, reload the systemd daemon to recognize the change and restart MySQL:
sudo systemctl daemon-reload
sudo systemctl restart mysql
To verify the change took effect, find the PID of your MySQL process and check its oom_score_adj file in the /proc filesystem:
MYSQL_PID=$(pgrep -x mysqld)
cat /proc/$MYSQL_PID/oom_score_adj
It should output -1000. Congratulations, the OOM killer will now completely ignore MySQL. If the server runs out of memory, the kernel will kill other processes (like Apache, Nginx, or SSH sessions) before it touches your database.
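If you need to shield a process that is already running, without restarting it, you can also write the value straight into /proc. The change dies with the process, which is why the systemd drop-in is the persistent fix. A minimal sketch (shield_pid is a hypothetical helper name, not a standard tool):

```shell
#!/bin/sh
# Runtime-only adjustment: the value is lost when the process restarts,
# so treat this as a stopgap, not a replacement for the systemd drop-in.
shield_pid() {
    # $1 = PID (or "self"), $2 = score adjustment (-1000 to +1000).
    # Lowering a score below its current value requires root;
    # raising it does not.
    echo "$2" > "/proc/$1/oom_score_adj"
}

# Example, shielding a running mysqld (run as root for negative values):
#   shield_pid "$(pgrep -x mysqld)" -1000
```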

The Danger of OOM Immunity: Why You Must Tune MySQL
I need to be very clear here: setting OOMScoreAdjust=-1000 is a powerful bandage, but it does not cure the underlying disease. You have simply shifted the target.
If MySQL is the application actually leaking memory, and you make it immune to the OOM killer, the kernel will kill everything else on the server until the OS itself panics and crashes. To truly fix Linux performance issues, you must configure MySQL to stay within the boundaries of your physical RAM.
MySQL memory usage is heavily dictated by a few key variables in your my.cnf (or mysqld.cnf) file. The formula for MySQL’s maximum potential memory usage looks like this:
Max Memory = innodb_buffer_pool_size + key_buffer_size + (read_buffer_size + sort_buffer_size + read_rnd_buffer_size + join_buffer_size) * max_connections
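That formula is easy to sanity-check with shell arithmetic. The figures below are illustrative placeholders (roughly 1 MB per per-thread buffer); substitute the real values from SHOW VARIABLES output, converted to MB:

```shell
#!/bin/sh
# Rough worst-case memory calculator for the formula above.
# All figures in MB; these are illustrative, not recommended settings.
innodb_buffer_pool_size=2048
key_buffer_size=16
read_buffer_size=1
sort_buffer_size=1
read_rnd_buffer_size=1
join_buffer_size=1
max_connections=150

per_thread=$((read_buffer_size + sort_buffer_size + read_rnd_buffer_size + join_buffer_size))
max_mem=$((innodb_buffer_pool_size + key_buffer_size + per_thread * max_connections))
echo "Worst-case MySQL memory: ${max_mem} MB"
# → Worst-case MySQL memory: 2664 MB
```

If that worst-case figure is larger than your physical RAM minus OS overhead, you are one traffic spike away from an OOM event.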
1. Size the InnoDB Buffer Pool Correctly
The innodb_buffer_pool_size is the largest chunk of memory MySQL uses. It holds cached data and indexes. A common rule of thumb you’ll read online is to set this to 80% of your system RAM. Do not do this on a small server.
If you have a 2GB VPS and give 1.6GB to the buffer pool, the remaining 400MB is not enough for the Linux kernel, systemd, SSH, logging daemons, and MySQL’s per-thread memory. You are guaranteeing an OOM event.
For servers under 8GB of RAM, allocate no more than 50-60% to the buffer pool. Open your MySQL configuration file (usually /etc/mysql/mysql.conf.d/mysqld.cnf) and set it appropriately:
[mysqld]
innodb_buffer_pool_size = 2G
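To pick a sane figure for your own box, read MemTotal from /proc/meminfo and take the 50% mark discussed above as a starting point:

```shell
#!/bin/sh
# Suggest a buffer pool size at the conservative 50% mark for small
# servers. Adjust the divisor for larger, dedicated database hosts.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "Total RAM: $((mem_kb / 1024)) MB"
echo "Suggested innodb_buffer_pool_size: $((mem_kb / 2 / 1024)) MB"
```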
2. Control Your Max Connections
Every time a web application or worker script connects to MySQL, a new thread is spawned. Each thread allocates its own buffers (sort, read, join). If you have max_connections set to 1000, and a traffic spike hits, MySQL will spawn hundreds of threads, massive amounts of memory will be requested, and the system will OOM.
Keep max_connections as low as your application can tolerate. Use connection pooling (like ProxySQL, or connection pooling built into Python/Go/Java) instead of opening thousands of raw database connections.
max_connections = 150
Tuning Linux Memory Management: Sysctl Tweaks
Beyond MySQL tuning, you can adjust how the Linux kernel handles memory allocation and swapping to make the system more resilient. We do this by editing /etc/sysctl.conf.
Adjusting vm.swappiness
The vm.swappiness parameter controls how aggressively the kernel moves memory pages from physical RAM to swap space on disk. It accepts a value from 0 to 100. The default on most Linux distros (like Ubuntu and CentOS) is 60.
For a database server, you want physical RAM prioritized for the database cache, but you do want the kernel to swap out idle processes (like an unused bash shell or a cron daemon) to free up RAM. However, a value of 60 is often too aggressive and can cause disk I/O bottlenecks.
I recommend setting it to a low value, like 10, for database servers.
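You can check and change the value at runtime before committing it to configuration; a runtime change does not survive a reboot:

```shell
#!/bin/sh
# Read the current swappiness (most distros default to 60).
cat /proc/sys/vm/swappiness
# Lower it immediately (requires root), then persist it in
# /etc/sysctl.conf as shown in this section:
#   sudo sysctl vm.swappiness=10
```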
Disabling Memory Overcommit
Remember how I said the kernel blindly promises memory it doesn’t have? You can tell it to stop doing that by changing the vm.overcommit_memory setting. It takes three values:
- 0 (Default): Heuristic overcommit. The kernel guesses if it has enough memory.
- 1: Always overcommit. Never refuse a memory allocation (a recipe for disaster on DB servers).
- 2: Don’t overcommit. The kernel will refuse to allocate memory if the request exceeds the size of swap plus a percentage of physical RAM.
Setting this to 2 is the safest configuration for a dedicated PostgreSQL or MySQL Linux server. The database will receive a standard “Out of Memory” error directly from the kernel if it asks for too much, rather than being mysteriously assassinated by the OOM killer later. One caveat: the commit limit becomes swap plus vm.overcommit_ratio percent of RAM, and if that total is too small, MySQL may fail to allocate memory (or even fail to start), so check your headroom before enabling it.
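Checking that headroom is a one-liner against /proc/meminfo: with vm.overcommit_memory=2, allocations start failing once Committed_AS reaches CommitLimit.

```shell
#!/bin/sh
# CommitLimit = swap + overcommit_ratio% of RAM; Committed_AS is what
# the kernel has already promised. If the two are close, raise
# vm.overcommit_ratio or add swap before enabling mode 2.
awk '/^CommitLimit:|^Committed_AS:/ {printf "%s %d MB\n", $1, $2/1024}' /proc/meminfo
```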
Apply these settings by editing /etc/sysctl.conf:
vm.swappiness = 10
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
Apply the changes instantly without rebooting:
sudo sysctl -p
Add a Swap File: The Ultimate Safety Net
I am constantly surprised by how many developers deploy Linux cloud infrastructure (especially on AWS EC2 or DigitalOcean Droplets) without configuring swap space. Cloud providers often ship their base OS images with zero swap configured to save on disk I/O.
Swap is not a replacement for physical RAM. If your database actively relies on swap to serve queries, your performance will tank. However, swap acts as a pressure relief valve. When memory spikes unexpectedly, the kernel can page out inactive memory to disk, giving you time to receive a Prometheus or Grafana alert and investigate, rather than the server instantly crashing.
If your server doesn’t have swap, add a 4GB swap file immediately. Here is the safest way to do it on ext4 or XFS (on Btrfs, a swap file needs copy-on-write disabled first, e.g. by creating an empty file and running chattr +C on it before allocating):
# Create a 4GB file
sudo fallocate -l 4G /swapfile
# Set restrictive file permissions (critical for Linux security)
sudo chmod 600 /swapfile
# Format the file as swap
sudo mkswap /swapfile
# Enable the swap file
sudo swapon /swapfile
To make this permanent across reboots, add it to your /etc/fstab file:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
Verify it’s working by running free -h or htop. You should now see 4GB of available swap space.
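A quick way to verify from a script rather than eyeballing htop:

```shell
#!/bin/sh
# List each active swap area, then the combined total from /proc/meminfo.
cat /proc/swaps
awk '/^SwapTotal:/ {printf "SwapTotal: %d MB\n", $2/1024}' /proc/meminfo
```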
Using Systemd Cgroups to Sandbox MySQL
If you are running multiple services on the same host (e.g., a classic LAMP stack with Apache, PHP, and MySQL all sharing one box), adjusting the OOM score might just cause Apache to die instead of MySQL. That is preferable to losing the database, but it still means downtime for part of your stack.
A more modern Linux DevOps approach is to use systemd’s cgroups v2 integration to set hard memory limits on the service itself. This creates a sandbox. If MySQL tries to use more memory than allowed, systemd will constrain it, and if it triggers an OOM event, only the cgroup is affected, leaving the rest of the OS pristine.
Open the systemd override file again:
sudo systemctl edit mysql.service
Add memory limits. For example, if you have an 8GB server and want to guarantee MySQL can never use more than 5GB of RAM (leaving 3GB safely for the OS and web servers):
[Service]
MemoryHigh=4.5G
MemoryMax=5G
- MemoryHigh: When MySQL crosses this threshold, the kernel throttles its allocations and aggressively reclaims (and swaps out) its memory, slowing it down to prevent a crash.
- MemoryMax: This is a hard limit. If MySQL hits 5GB, the kernel’s OOM killer fires inside the cgroup only, so MySQL dies but the rest of the system survives.
This is much safer than letting the global Linux kernel OOM killer run wild, as it isolates the failure.
Monitoring and Observability
Configuring the kernel and tuning MySQL are reactive measures. As a senior engineer, you need to be proactive. If you are relying on dmesg to tell you your server ran out of memory, you’re already too late.
Implement a proper Linux observability stack. I recommend the Prometheus and Grafana ecosystem. Install the node_exporter on your Linux servers to track system-level metrics like RAM usage, swap activity, and load averages. Install the mysqld_exporter to track MySQL-specific metrics like buffer pool utilization and active connections.
Set up an alert in Prometheus Alertmanager to ping your Slack or PagerDuty when available memory drops below 15%, or when swap utilization exceeds 50%. This gives you the runway to scale up your instance or optimize heavy database queries before the OOM killer ever needs to wake up.
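As a sketch, a Prometheus alerting rule for the low-memory condition might look like this (the rule name and thresholds are illustrative; it assumes node_exporter metrics are being scraped):

```yaml
# Illustrative Prometheus alerting rule: fire when available memory has
# been below 15% of total for five minutes.
groups:
  - name: memory-pressure
    rules:
      - alert: HostLowMemory
        expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes < 0.15
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Available memory below 15% on {{ $labels.instance }}"
```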
FAQ
Why does the OOM killer target MySQL instead of the process causing the memory leak?
The Linux kernel’s OOM killer doesn’t track which process is leaking memory rapidly; it primarily looks at total memory consumption. Because database engines like MySQL and PostgreSQL intentionally cache large amounts of data in RAM for performance, they almost always have the highest oom_score, making them the primary target during any system memory crisis.
Is it safe to set OOMScoreAdjust to -1000 for all my services?
No, this is highly dangerous. If you make every process immune to the OOM killer, the kernel will have no way to free up memory during an exhaustion event. This will result in a kernel panic and a complete system freeze, requiring a hard hardware reboot. Only shield essential data stores.
How do I know if my MySQL configuration is using too much memory?
You can calculate your maximum potential memory footprint by adding your global buffers (like innodb_buffer_pool_size) to the product of your per-thread buffers multiplied by max_connections. Alternatively, use tools like MySQLTuner (a Perl script) to automatically analyze your running database and suggest memory limits based on your hardware.
Can adding swap space completely replace physical RAM for a database?
Absolutely not. Swap space uses your storage disk (SSD or HDD), which is orders of magnitude slower than physical RAM. While having a swap file prevents sudden OOM crashes by providing an overflow buffer, if your database starts actively reading and writing from swap (thrashing), your query latency will skyrocket and the application will become practically unusable.
Conclusion
Dealing with the Linux OOM killer is a rite of passage for every sysadmin. When your database vanishes in the middle of the night, the immediate fix to prevent the OOM killer from killing your process is to use systemd to set OOMScoreAdjust=-1000. This guarantees your data layer stays online while you investigate.
However, an OOM kill is always a symptom, never the root cause. The ultimate solution requires a holistic approach: calculate your MySQL memory footprint accurately, size your innodb_buffer_pool_size safely, disable kernel overcommit, and ensure you have a fallback swap file. Combine these system configurations with robust Prometheus monitoring, and you’ll never have to wake up to a dead database at 3 AM again.
