2026-03-23 · CronLinuxDevOps

Cron Job Best Practices That Actually Matter

Most cron problems are invisible. The job appears to run, the logs are empty, and the output was silently discarded.

Most cron jobs are also set up once and never touched again. That's fine when they're simple. It stops being fine when you have six of them all running at midnight, one of them silently failing for three weeks, and nobody notices until the backup you needed doesn't exist.

Here are the patterns that actually matter — things I've seen go wrong on real servers.

Schedule them so they don't pile up

The instinct is to run things at midnight or at the top of the hour. Everyone does this. The result is a server that idles all day then spikes at 00:00 when five jobs hit simultaneously.

Spread them out. Not by much — even five minutes apart makes a difference:

# Bad — everything at midnight:
0 0 * * * /usr/local/bin/backup.sh
0 0 * * * /usr/local/bin/cleanup.sh
0 0 * * * /usr/local/bin/report.sh

# Better — staggered:
0 0 * * *  /usr/local/bin/backup.sh
5 0 * * *  /usr/local/bin/cleanup.sh
15 0 * * * /usr/local/bin/report.sh

The stagger keeps the backup's disk I/O from competing with cleanup's, provided each job finishes inside its slot. If a job can overrun its slot, staggering alone isn't enough; pair it with flock, covered next.
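
If you push the same crontab to many hosts, staggering by hand stops scaling. One common trick, sketched here with an illustrative script path, is to derive a stable per-host minute from the hostname so each machine picks its own slot:

```shell
#!/bin/sh
# Derive a repeatable minute (0-59) from the hostname's checksum, so every
# host in a fleet fires at a different but stable time.
minute=$(( $(hostname | cksum | cut -d' ' -f1) % 60 ))
echo "$minute 0 * * * /usr/local/bin/backup.sh"
```

If you're on a systemd host, systemd timers solve the same problem natively with RandomizedDelaySec.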

Use flock — always

A cron job scheduled to run every 5 minutes that sometimes takes 7 minutes will eventually run twice at the same time. This is not hypothetical. It will happen.

When it does, two instances of your script will be writing to the same files, hitting the same database, or sending the same emails twice.

# Wrap any job that shouldn't run concurrently:
*/5 * * * * flock -n /tmp/sync.lock /usr/local/bin/sync.sh

flock -n means non-blocking: if the lock is already held (the previous run is still going), the new invocation exits immediately with status 1 without running your command. No error message, no data corruption, no duplicate emails.

The lock file is just a regular file. It doesn't need to exist before you run the command. And when the process exits — even if it crashes — the kernel releases the lock automatically. No cleanup required.

Use a unique lock file per job: /tmp/sync.lock and /tmp/backup.lock, not both sharing /tmp/job.lock. On systems that periodically clean /tmp, a path under /var/lock or /run is safer for root-owned jobs, because if a held lock file gets deleted, the next invocation locks a brand-new file and runs concurrently anyway.
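
You can watch the non-blocking behaviour locally. This sketch (the lock path is a throwaway for the demo) holds the lock in a background instance, then shows a second flock -n attempt bouncing off it:

```shell
#!/bin/sh
lock=/tmp/flock-demo.lock          # throwaway path for the demo
flock -n "$lock" sleep 3 &         # first instance: holds the lock for 3s
sleep 1
if flock -n "$lock" true; then     # second instance: tries while it's held
    echo "acquired"
else
    echo "lock busy"               # first instance still has the lock
fi
wait
```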

Capture output or you'll never know it failed

By default, cron emails each job's output to the local mailbox of the crontab's owner. On most servers, nobody reads that. So when your job starts failing, the errors disappear into a mailbox that hasn't been opened in two years.
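
An alternative worth knowing before the redirects below: cron's MAILTO variable, set at the top of the crontab, sends that mail to an address someone actually reads. This assumes the server has a working MTA; the address here is a placeholder:

```shell
# crontab fragment: route job output to a monitored address
MAILTO=ops@example.com
0 2 * * * /usr/local/bin/backup.sh
```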

Two options — pick one:

# Option 1: Append all output to a log file:
0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1

# Option 2: Silence successful runs, keep errors (my preference):
0 2 * * * /usr/local/bin/backup.sh >/dev/null 2>>/var/log/backup-errors.log
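
Neither redirect adds timestamps on its own. If you want them, a small wrapper function like this sketch (the name and crontab paths are illustrative) can prefix each line before it hits the log:

```shell
#!/bin/sh
# with_timestamps CMD...: run CMD and prefix every output line with an
# ISO-8601 timestamp. Forks date once per line, so it suits quiet jobs.
with_timestamps() {
    "$@" 2>&1 | while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%Y-%m-%dT%H:%M:%S')" "$line"
    done
}

# Crontab usage, assuming this is installed as a wrapper script:
# 0 2 * * * /usr/local/bin/with-ts /usr/local/bin/backup.sh >> /var/log/backup.log
```

For chatty jobs, the ts utility from moreutils does the same thing with far less overhead.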

Then set up log rotation so these files don't grow forever:

# /etc/logrotate.d/backup
/var/log/backup.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
}

Set the PATH explicitly

Cron runs with a stripped-down environment. The PATH is minimal — usually just /usr/bin:/bin. If your script calls anything in /usr/local/bin or uses a tool installed via npm or pip, it won't be found.

The fix is either to use absolute paths everywhere in your scripts, or to set PATH at the top of your crontab:

# At the top of crontab -e:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Now your jobs can call things in /usr/local/bin:
0 3 * * * certbot renew --quiet

Test before you rely on it

The most common cron debugging mistake is editing the crontab and waiting to see if the job runs at the scheduled time. You might be waiting hours.

Test it now:

# Run the script the way cron would: via sh, in a minimal environment:
env -i HOME=/root PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin /bin/sh -c '/usr/local/bin/your-script.sh'

The env -i strips your current environment variables. If the script works in your shell but fails here, you've found your problem — usually a missing PATH entry or an environment variable your script assumed would exist.

@reboot jobs need special care

Jobs with @reboot run once, when cron starts during boot. Depending on service ordering, that can be before your database is up, before the network is fully configured, and before network or external drives are mounted.

# Add a sleep to let things settle first:
@reboot sleep 30 && /usr/local/bin/start-service.sh

30 seconds is usually enough for a standard VPS. If your startup script depends on a database, add a proper wait loop instead:

@reboot sleep 10 && until pg_isready; do sleep 2; done && /usr/local/bin/start-service.sh
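
pg_isready is Postgres-specific. For anything that listens on a TCP port, a generic poll with a retry cap does the same job. This sketch assumes nc is installed; the host, port, and installed script name are placeholders:

```shell
#!/bin/sh
# wait_for_port HOST PORT [TRIES]: poll until the port accepts connections,
# sleeping 2s between attempts; give up with exit status 1 after TRIES tries.
wait_for_port() {
    host=$1 port=$2 tries=${3:-30} i=0
    until nc -z "$host" "$port" 2>/dev/null; do
        i=$((i + 1))
        if [ "$i" -ge "$tries" ]; then
            echo "gave up waiting for $host:$port" >&2
            return 1
        fi
        sleep 2
    done
}

# Crontab usage, assuming it's installed as a standalone script:
# @reboot sleep 10 && wait-for-port 127.0.0.1 5432 && /usr/local/bin/start-service.sh
```

The retry cap matters: without it, a service that never comes up leaves the wait loop running forever, invisibly.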
