Redis Streams: Complete Guide to Real-Time Data

JayJay

Redis Streams is one of the most underrated features in Redis. Introduced in Redis 5.0 (2018), it provides a log-like data structure that's perfect for event sourcing, message queues, activity feeds, and real-time data processing. If you've been using Redis Lists or Pub/Sub for messaging, Streams might be what you actually need.

What Are Redis Streams?

A Stream is an append-only log. You add entries to the end, and they stay in order forever (or until you trim them). Each entry has a unique ID and contains field-value pairs, basically key-value data with guaranteed ordering.

REDIS
# Add an entry to a stream
XADD mystream * user alice action login ip 192.168.1.1

# Redis returns the entry ID
"1704067200000-0"

The * tells Redis to auto-generate the ID in timestamp-sequence format: the millisecond Unix timestamp, a dash, and a sequence number that distinguishes entries added within the same millisecond. The entry contains three fields: user, action, and ip.
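
Since IDs encode a timestamp, you can decode them client-side. Here's a small sketch (the helper name is mine, not part of any library):

```javascript
// Split an entry ID like "1704067200000-0" into its millisecond
// timestamp and sequence number.
function parseStreamId(id) {
  const [ms, seq] = id.split('-').map(Number);
  return { ms, seq, date: new Date(ms) };
}

// parseStreamId('1704067200000-0') → { ms: 1704067200000, seq: 0, ... }
```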

Unlike Pub/Sub, Stream entries persist. Unlike Lists, Streams support consumer groups for distributed processing. It's the best of both worlds.

Why Streams Over Pub/Sub?

Redis Pub/Sub has a fundamental limitation: if a subscriber isn't connected when a message is published, the message is lost. There's no history, no replay, no persistence.

Streams solve this:

REDIS
# Publisher adds events
XADD events * type order_created order_id 12345
XADD events * type payment_received order_id 12345
XADD events * type order_shipped order_id 12345

# Consumer can read from any point in history
XRANGE events - +
# Returns all events

XREAD STREAMS events 0
# Also returns all events

XREAD BLOCK 5000 STREAMS events $
# Block and wait for new events

A consumer that crashes and restarts can pick up where it left off. New consumers can replay the entire history. This changes what's possible.

Basic Stream Operations

Adding Entries

REDIS
# Auto-generated ID (recommended)
XADD orders * customer_id 123 product_id 456 quantity 2

# Custom ID (must be greater than last ID)
XADD orders 1704067200000-0 customer_id 123 product_id 456 quantity 2
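
If you generate custom IDs, you have to enforce that "greater than last ID" rule yourself. A sketch of one way to do it (function name is hypothetical):

```javascript
// Produce a custom ID strictly greater than the stream's last ID:
// use the current timestamp if the clock has advanced, otherwise
// reuse the last timestamp and bump the sequence number.
function nextStreamId(lastId, nowMs = Date.now()) {
  const [lastMs, lastSeq] = lastId.split('-').map(Number);
  return nowMs > lastMs ? `${nowMs}-0` : `${lastMs}-${lastSeq + 1}`;
}
```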

Reading Entries

REDIS
# Read range (- is minimum, + is maximum)
XRANGE orders - +
XRANGE orders 1704067200000-0 1704067300000-0

# Read in reverse
XREVRANGE orders + -

# Read specific count
XRANGE orders - + COUNT 10

Blocking Reads

REDIS
# Wait for new entries (5 second timeout)
XREAD BLOCK 5000 STREAMS orders $

# Wait forever
XREAD BLOCK 0 STREAMS orders $

# Read from multiple streams
XREAD BLOCK 5000 STREAMS orders events payments $ $ $

The $ means "only new entries." You can also specify an ID to read entries after that point.
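
A common pattern is to track the last ID you saw and pass it to the next XREAD call instead of $, so a restart never misses entries. Assuming the ioredis reply shape ([[stream, [[id, fields], ...]], ...]), a helper like this (name is mine) extracts the resume cursors:

```javascript
// Given an XREAD reply, return the last delivered ID per stream
// so the next XREAD can resume exactly where this one left off.
function lastDeliveredIds(reply) {
  const cursors = {};
  for (const [stream, entries] of reply ?? []) {
    if (entries.length > 0) {
      cursors[stream] = entries[entries.length - 1][0];
    }
  }
  return cursors; // e.g. { orders: '1704067200000-2' }
}
```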

Consumer Groups

Consumer groups enable distributed processing. Multiple consumers share the workload, and Redis tracks what each consumer has processed.

Creating a Consumer Group

REDIS
# Create group starting from the beginning
XGROUP CREATE orders order-processors 0 MKSTREAM

# Create group starting from now (only new entries)
XGROUP CREATE orders order-processors $ MKSTREAM

Reading with Consumer Groups

REDIS
# Consumer "worker-1" reads pending entries
XREADGROUP GROUP order-processors worker-1 COUNT 10 STREAMS orders >

# The ">" means "entries never delivered to any consumer in this group"

Multiple workers can call XREADGROUP simultaneously. Redis distributes entries across them automatically.
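
One practical note: ioredis returns each entry's fields as a flat array like ['user', 'alice', 'action', 'login']. A tiny converter (my own helper, not an ioredis API) makes them easier to work with:

```javascript
// Convert a flat [name, value, name, value, ...] field array,
// as returned by ioredis, into a plain object.
function fieldsToObject(flat) {
  const obj = {};
  for (let i = 0; i < flat.length; i += 2) {
    obj[flat[i]] = flat[i + 1];
  }
  return obj;
}

// fieldsToObject(['user', 'alice', 'action', 'login'])
// → { user: 'alice', action: 'login' }
```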

Acknowledging Entries

After processing, acknowledge the entry:

REDIS
XACK orders order-processors 1704067200000-0

Unacknowledged entries can be claimed by other consumers if the original consumer crashes (using XPENDING and XCLAIM).

Handling Failed Consumers

REDIS
# Check pending entries (unacknowledged)
XPENDING orders order-processors

# See details about pending entries
XPENDING orders order-processors - + 10

# Claim entries from dead consumer (idle > 60 seconds)
XCLAIM orders order-processors worker-2 60000 1704067200000-0
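
Selecting which pending entries to claim is pure bookkeeping. Assuming the XPENDING detail reply shape ([id, consumer, idleMs, deliveryCount] per row), a sketch like this (helper name is mine) picks the candidates for XCLAIM:

```javascript
// Given XPENDING detail rows ([id, consumer, idleMs, deliveryCount]),
// return the IDs idle longer than minIdleMs — candidates for XCLAIM.
function claimableIds(pendingRows, minIdleMs) {
  return pendingRows
    .filter(([, , idleMs]) => idleMs >= minIdleMs)
    .map(([id]) => id);
}
```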

Real-World Use Cases

Event Sourcing

Store every state change as an event:

JAVASCRIPT
// Node.js with ioredis
const Redis = require('ioredis');
const redis = new Redis();

// Record events
async function recordOrderEvent(orderId, eventType, data) {
  await redis.xadd(
    `order:${orderId}:events`,
    '*',
    'type', eventType,
    'data', JSON.stringify(data),
    'timestamp', Date.now().toString()
  );
}

// Replay to rebuild state
async function rebuildOrderState(orderId) {
  const events = await redis.xrange(`order:${orderId}:events`, '-', '+');

  let state = {};
  for (const [id, fields] of events) {
    // ioredis returns fields as a flat array:
    // ['type', <type>, 'data', <json>, 'timestamp', <ts>]
    const event = {
      type: fields[1],
      data: JSON.parse(fields[3])
    };
    state = applyEvent(state, event);
  }
  return state;
}
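
The applyEvent reducer above is left abstract; a minimal version for this order example (the event types and state shape are illustrative assumptions) might look like:

```javascript
// Minimal event reducer: fold one event into the current state.
// Unknown event types are ignored so old events stay replayable.
function applyEvent(state, event) {
  switch (event.type) {
    case 'order_created':
      return { ...state, ...event.data, status: 'created' };
    case 'payment_received':
      return { ...state, status: 'paid' };
    case 'order_shipped':
      return { ...state, status: 'shipped' };
    default:
      return state;
  }
}
```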

Activity Feeds

JAVASCRIPT
// Add activity
async function addActivity(userId, activity) {
  await redis.xadd(
    `user:${userId}:activity`,
    'MAXLEN', '~', '1000',  // Keep roughly 1000 entries
    '*',
    'type', activity.type,
    'data', JSON.stringify(activity.data)
  );
}

// Get recent activity
async function getRecentActivity(userId, count = 50) {
  const entries = await redis.xrevrange(
    `user:${userId}:activity`,
    '+', '-',
    'COUNT', count
  );
  return entries.map(([id, fields]) => ({
    id,
    type: fields[1],
    data: JSON.parse(fields[3])
  }));
}

Work Queue with Retries

JAVASCRIPT
// Producer
async function enqueueJob(jobType, payload) {
  await redis.xadd(
    'jobs',
    '*',
    'type', jobType,
    'payload', JSON.stringify(payload),
    'attempts', '0'
  );
}

// Worker
async function processJobs(workerName) {
  while (true) {
    const result = await redis.xreadgroup(
      'GROUP', 'job-workers', workerName,
      'BLOCK', 5000,
      'COUNT', 1,
      'STREAMS', 'jobs', '>'
    );

    if (!result) continue;

    const [[stream, entries]] = result;
    for (const [id, fields] of entries) {
      try {
        const job = {
          id,
          type: fields[1],
          payload: JSON.parse(fields[3])
        };
        await handleJob(job);
        await redis.xack('jobs', 'job-workers', id);
      } catch (error) {
        console.error(`Job ${id} failed:`, error);
        // Will be retried via XPENDING/XCLAIM
      }
    }
  }
}
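
To keep poison jobs from being retried forever, pair the XPENDING/XCLAIM retry path with a delivery-count cutoff. A sketch of that policy (helper name and cutoff are mine), again assuming XPENDING detail rows of [id, consumer, idleMs, deliveryCount]:

```javascript
// Split pending jobs into "retry" (claim and reprocess) and
// "deadLetter" (delivered too many times; park them elsewhere).
function triageByDeliveryCount(pendingRows, maxDeliveries = 3) {
  const retry = [];
  const deadLetter = [];
  for (const [id, , , deliveries] of pendingRows) {
    (deliveries >= maxDeliveries ? deadLetter : retry).push(id);
  }
  return { retry, deadLetter };
}
```

Dead-lettered IDs can then be XACKed in the jobs stream and copied to a separate stream for inspection.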

Stream Management

Trimming

Streams grow forever by default. Trim them:

REDIS
# Keep exactly 1000 entries
XTRIM orders MAXLEN 1000

# Keep approximately 1000 (faster, slight over-retention)
XTRIM orders MAXLEN ~ 1000

# Trim on add (recommended)
XADD orders MAXLEN ~ 1000 * customer_id 123 product_id 456

Getting Stream Info

REDIS
# Stream length
XLEN orders

# Stream info
XINFO STREAM orders

# Consumer group info
XINFO GROUPS orders

# Consumer info
XINFO CONSUMERS orders order-processors

Performance Considerations

Streams are efficient, but keep these in mind:

Memory: Each entry stores field names. If you have millions of entries with the same fields, consider shorter field names or use JSON in a single field.

Consumer Groups: Add overhead for tracking delivery. Note that they provide at-least-once delivery, not exactly-once; if you don't need acknowledgements and redelivery, simple XREAD is faster.

Trimming: Use MAXLEN ~ (approximate) rather than exact for better performance. The ~ allows Redis to trim in larger batches.

Blocking Reads: Use reasonable timeouts. Very long blocks can cause connection issues with some clients.

When to Use Redis Streams

Good fits:

  • Event sourcing and audit logs
  • Activity feeds and timelines
  • Work queues with retry support
  • Real-time analytics pipelines
  • Chat message history
  • IoT sensor data

Consider alternatives:

  • If you need complex routing: RabbitMQ or Kafka
  • If you need strong durability guarantees: Kafka or a database (Redis persistence is configurable, but can lose recent writes)
  • If you need pub/sub without history: Redis Pub/Sub
  • If you need massive scale (millions of messages/second): Kafka

Summary

Redis Streams fills a gap between simple message queues and full event streaming platforms. It's powerful enough for serious use cases but simple enough to understand in an afternoon.

Key takeaways:

  • Streams are append-only logs with persistent storage
  • Consumer groups enable distributed, fault-tolerant processing
  • Use MAXLEN ~ to control memory usage
  • The XPENDING/XCLAIM pattern handles failed consumers
  • Perfect for event sourcing, activity feeds, and work queues

If you're currently using Redis Lists for queuing or Pub/Sub for events, Streams is worth evaluating. The persistence and consumer group features solve real problems.