
PostgreSQL FOR UPDATE SKIP LOCKED: The One-Liner Job Queue

Jay

If you've ever built a job queue backed by a database, you've probably hit the same wall everyone does: how do you prevent two workers from grabbing the same job?

The naive approach seems obvious - just SELECT a pending job and UPDATE its status to "processing". But there's a race condition hiding in plain sight. Between the SELECT and the UPDATE, another worker can grab the same job. Suddenly you're processing the same email twice, charging a credit card twice, or sending duplicate webhooks.

You could reach for Redis, RabbitMQ, or a dedicated queue service. But if you're already using PostgreSQL, there's an elegant solution hiding in plain sight: FOR UPDATE SKIP LOCKED.

The Problem: Race Conditions in Job Queues

Let's look at what goes wrong with the naive approach. Here's a simple jobs table:

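A minimal sketch of such a table might look like this (the exact column names and types here are an assumption; they just need to line up with the queries used later in the post):

SQL
-- A minimal jobs table (assumed schema)
CREATE TABLE jobs (
    id          BIGSERIAL PRIMARY KEY,
    job_type    TEXT NOT NULL,
    payload     JSONB NOT NULL DEFAULT '{}',
    status      TEXT NOT NULL DEFAULT 'pending',  -- 'pending' | 'processing' | 'completed'
    worker_id   TEXT,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- A partial index keeps "find a pending job" fast as completed jobs pile up
CREATE INDEX jobs_pending_idx ON jobs (created_at) WHERE status = 'pending';

-- A few jobs to play with
INSERT INTO jobs (job_type, payload) VALUES
    ('send_email',   '{"to": "alice@example.com"}'),
    ('send_email',   '{"to": "bob@example.com"}'),
    ('resize_image', '{"image_id": 42}');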

Now imagine two workers trying to grab a job at the exact same time. Both run:

SQL
-- Worker A and Worker B both run this query simultaneously
SELECT * FROM jobs WHERE status = 'pending' LIMIT 1;

Both workers see job #1. Both think "great, I'll take this one!" Both try to process it. Chaos ensues.
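
To make the race concrete, the naive claim is two separate statements, and the gap between them is the problem (the worker name and job ID here are illustrative):

SQL
-- Step 1: find a job (worker A and worker B both see job #1)
SELECT id FROM jobs WHERE status = 'pending' LIMIT 1;

-- ...race window: the other worker runs the same SELECT right here...

-- Step 2: claim it (both workers "win")
UPDATE jobs
SET status = 'processing', worker_id = 'worker_a'
WHERE id = 1;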

The Solution: FOR UPDATE SKIP LOCKED

PostgreSQL provides a feature specifically designed for this scenario. Here's the magic incantation:

SQL
SELECT * FROM jobs
WHERE status = 'pending'
FOR UPDATE SKIP LOCKED
LIMIT 1;

Let's break down what each part does:

  • FOR UPDATE - Locks the selected rows. Other transactions that try to lock or update the same rows will block until this transaction completes (ordinary reads are unaffected).

  • SKIP LOCKED - Instead of blocking, skip over any rows that are already locked by another transaction. This is the key innovation - workers don't wait, they just grab the next available job.

  • LIMIT 1 - Only grab one job at a time (you can adjust this if workers should batch).

The beauty is that this happens atomically. There's no window for race conditions. If two workers query simultaneously, they'll each get different unlocked rows.

The Full Pattern

Here's what a complete job processing transaction looks like in PostgreSQL:

SQL
BEGIN;

-- Atomically select and lock one pending job
SELECT * FROM jobs
WHERE status = 'pending'
FOR UPDATE SKIP LOCKED
LIMIT 1;

-- Your application processes the job here...

-- Mark the job as completed
UPDATE jobs
SET status = 'completed', worker_id = 'worker_1'
WHERE id = 1;  -- Use the ID from the SELECT above

COMMIT;

If any worker crashes mid-processing, the transaction rolls back, and the job becomes available again. No lost jobs, no double-processing.
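
A variation worth knowing (not shown above, but common in production queues) collapses the claim into a single statement: wrap the locking SELECT in an UPDATE ... RETURNING so the worker flips the status to 'processing' and fetches the job in one round trip:

SQL
UPDATE jobs
SET status = 'processing', worker_id = 'worker_1'
WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'pending'
    ORDER BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING *;

Run inside a transaction that commits only after the work is done, this keeps the crash-safety described above. Run in autocommit mode, it releases the lock immediately, so you'd also want a reaper that resets jobs stuck in 'processing' past some timeout.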

Simulating the Pattern

While SQLite doesn't support FOR UPDATE SKIP LOCKED, we can still simulate the concept with SQLite to understand the workflow. The key is the state machine: jobs move from pending → processing → completed.

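The claim step can be approximated with a plain UPDATE and a scalar subquery, which runs in SQLite as well (assuming a jobs table like the one sketched earlier; the worker name is just an example):

SQL
-- "Claim" the oldest pending job for worker_1
UPDATE jobs
SET status = 'processing', worker_id = 'worker_1'
WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'pending'
    ORDER BY created_at
    LIMIT 1
);

-- See who holds what
SELECT id, job_type, status, worker_id FROM jobs;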

In PostgreSQL with FOR UPDATE SKIP LOCKED, this claiming process is atomic and race-free. Each worker gets a different job, guaranteed.

Worker Completion

Let's see what happens as workers complete their jobs:

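Completion is just another status transition - roughly this (the ID and worker name are illustrative):

SQL
-- worker_1 finishes its job
UPDATE jobs
SET status = 'completed'
WHERE id = 1 AND worker_id = 'worker_1';

-- Check the overall state of the queue
SELECT status, COUNT(*) AS job_count
FROM jobs
GROUP BY status;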

Real-World Use Cases

This pattern is perfect for:

  • Background job processing - Email sending, image processing, data imports
  • Webhook delivery - Reliable delivery with automatic retries
  • Task distribution - Spread work across multiple worker processes
  • Event processing - Handle events in order without duplicates
  • Scheduled jobs - Cron-like tasks with safe concurrent execution

Many popular job queue libraries (like Que for Ruby, or Oban for Elixir) use exactly this pattern under the hood.

Adding Priorities and Retries

The pattern extends naturally to support priorities and retry logic:

SQL
-- Add priority and retry columns
ALTER TABLE jobs ADD COLUMN priority INT DEFAULT 0;
ALTER TABLE jobs ADD COLUMN attempts INT DEFAULT 0;
ALTER TABLE jobs ADD COLUMN max_attempts INT DEFAULT 3;

-- Grab the highest-priority job that hasn't exceeded retries
SELECT * FROM jobs
WHERE status = 'pending'
  AND attempts < max_attempts
ORDER BY priority DESC, created_at ASC
FOR UPDATE SKIP LOCKED
LIMIT 1;
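
On failure, the worker records the attempt and either puts the job back in the queue or gives up. One way to sketch it (the 'failed' status value is an assumption on top of the schema above):

SQL
-- Record a failed attempt, then retry or give up
UPDATE jobs
SET attempts  = attempts + 1,
    worker_id = NULL,
    status    = CASE
        WHEN attempts + 1 >= max_attempts THEN 'failed'  -- out of retries
        ELSE 'pending'                                   -- back in the queue
    END
WHERE id = 1;  -- the job this worker had claimed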

Why Not Just Use Redis?

You might be wondering why bother with this when Redis queues exist. Fair question! Here are some reasons to consider the PostgreSQL approach:

  1. Transactional integrity - Your job can be created in the same transaction as the data it operates on. No risk of jobs pointing to data that was rolled back (see the sketch after this list).
  2. One less system - If you're already running PostgreSQL, you don't need to operate Redis just for queuing.
  3. SQL query power - Need to find all failed jobs for a specific user? That's just a WHERE clause.
  4. ACID guarantees - Jobs won't disappear or duplicate during crashes.
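
To make point 1 concrete, here's the kind of thing a same-database queue allows (the orders table is made up for illustration):

SQL
BEGIN;

-- The business write...
INSERT INTO orders (customer_id, total_cents)
VALUES (42, 1999);

-- ...and the job that depends on it, in the same transaction
INSERT INTO jobs (job_type, payload)
VALUES ('send_order_confirmation', '{"customer_id": 42}');

COMMIT;  -- either both rows exist, or neither does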

That said, if you need millions of jobs per second, a dedicated queue might make sense. But for most applications, PostgreSQL handles this beautifully.

Sandbox

Here's a sandbox to experiment with the job queue pattern. Try simulating multiple workers claiming and completing jobs:

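If you want to follow along locally, a small seed script gives you something to experiment on (this assumes the jobs table and the priority/attempts columns from earlier):

SQL
-- Start fresh and seed a handful of jobs with mixed priorities
DELETE FROM jobs;

INSERT INTO jobs (job_type, payload, priority) VALUES
    ('send_email',   '{"to": "alice@example.com"}', 0),
    ('send_email',   '{"to": "bob@example.com"}',   0),
    ('charge_card',  '{"amount_cents": 4999}',      10),
    ('resize_image', '{"image_id": 7}',             5);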

Try these experiments (each one is sketched in SQL below):

  • Claim a job by updating its status to 'processing' with a worker_id
  • Complete a job by setting status to 'completed'
  • Simulate a failed job by incrementing attempts
  • Add more jobs with different priorities
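
One way each experiment might look as plain SQL (IDs, worker names, and priorities are arbitrary):

SQL
-- Claim a job
UPDATE jobs SET status = 'processing', worker_id = 'worker_2' WHERE id = 1;

-- Complete it
UPDATE jobs SET status = 'completed' WHERE id = 1;

-- Simulate a failure: count the attempt and release the job
UPDATE jobs SET attempts = attempts + 1, status = 'pending', worker_id = NULL WHERE id = 2;

-- Add a high-priority job
INSERT INTO jobs (job_type, payload, priority)
VALUES ('send_webhook', '{"url": "https://example.com/hook"}', 100);

-- Watch the queue
SELECT id, job_type, status, worker_id, priority, attempts
FROM jobs
ORDER BY priority DESC, id;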

Wrapping Up

FOR UPDATE SKIP LOCKED is one of those PostgreSQL features that solves a real problem elegantly. It turns your database into a reliable job queue with just a few lines of SQL.

The pattern is simple:

  1. SELECT ... FOR UPDATE SKIP LOCKED to atomically claim a job
  2. Process the job in your application
  3. UPDATE to mark it complete (or failed)
  4. COMMIT to release the lock

No race conditions. No double-processing. No deadlocks. Just reliable job processing with the database you already have.


Jay