Memcached vs Redis: Caching Solutions Compared

JayJay

Memcached and Redis both live in memory and both serve cached data fast. They're often mentioned together, but they solve different problems at different scales of complexity. Memcached is a focused caching tool. Redis is a caching tool that grew into a data structure server, a message broker, and more.

Choosing between them depends on what you need beyond "store this value and give it back quickly."

Quick comparison

Before getting into details, here's the high-level picture:

| Feature | Memcached | Redis |
|---|---|---|
| Data structures | Strings only | Strings, lists, sets, sorted sets, hashes, streams, bitmaps, HyperLogLog, geospatial |
| Persistence | None | RDB snapshots, AOF, or both |
| Replication | None (client-side) | Built-in primary/replica |
| Clustering | Client-side sharding | Redis Cluster (automatic partitioning) |
| Threading | Multi-threaded | Single-threaded (with I/O threads in 6.0+) |
| Memory efficiency | Better for plain strings | Higher overhead per key |
| Max key size | 250 bytes | 512 MB |
| Max value size | 1 MB (default, configurable) | 512 MB |
| Pub/Sub | No | Yes |
| Lua scripting | No | Yes |
| Transactions | No | Yes (MULTI/EXEC) |
| Eviction policies | LRU only | 8 policies including LRU, LFU, TTL-based |

Architecture differences

Memcached: multi-threaded simplicity

Memcached was built with one job in mind: cache key-value pairs in memory. Its architecture reflects that focus.

It uses a multi-threaded, event-driven model. Multiple worker threads handle client connections and process requests in parallel. This means Memcached can take advantage of multiple CPU cores out of the box, which matters on modern hardware with 8, 16, or more cores.

Memory allocation uses a slab allocator. Memory is divided into slabs of different size classes. When you store a value, Memcached picks the smallest slab class that fits. This reduces memory fragmentation but can waste space if your values don't align well with slab boundaries.

The protocol is straightforward: get, set, delete, incr, decr, and a few others. There's no concept of data types beyond byte strings. There's no persistence, no replication, no scripting. The server does one thing and does it well.

Redis: single-threaded versatility

Redis takes a different approach. A single main thread handles all commands sequentially, which eliminates the need for locks and makes the code simpler. This design choice has trade-offs.

On the positive side, every operation is atomic by default. You never need to worry about race conditions between commands. Complex operations on data structures (like pushing to a list while trimming it) are safe without explicit locking.

The downside is that one slow command blocks everything. A large KEYS * call or a Lua script that runs too long will stall all other clients. Redis 6.0 added I/O threading to handle network reads and writes on multiple threads, but command execution still happens on the main thread.

Redis also has a richer internal architecture: a pub/sub system, a stream engine, built-in replication, and optional persistence. This means more moving parts, but also more capability.

Data structures

This is the biggest difference between the two, and often the deciding factor.

Memcached: strings and nothing else

Memcached stores opaque byte strings. You set a key to a value, you get the value back by key. That's it.

If you need to store a list, you serialize it to JSON or MessagePack, store the whole thing, retrieve it, deserialize, modify, re-serialize, and store it again. This works fine when your cached objects are read-heavy and rarely modified.

Redis: a data structure server

Redis supports a wide range of data types, each with its own set of operations:

Strings are the basic type. Same as Memcached, plus atomic increment/decrement, bit operations, and range operations.

Lists are linked lists of strings. You can push and pop from either end, trim to a range, or get elements by index. Useful for queues, recent activity feeds, and bounded logs.

Sets are unordered collections of unique strings. You can add, remove, check membership, and perform set operations (union, intersection, difference). Good for tagging, unique visitor tracking, and relationship modeling.

Sorted sets associate a score with each member. Elements are ordered by score, so you can retrieve ranges, find rank, and query by score range. This powers leaderboards, priority queues, and time-series indexing.

Hashes are maps of field-value pairs. You can get/set individual fields without retrieving the whole object. Ideal for storing objects with multiple attributes (user profiles, configuration, session data).

Streams are append-only log structures with consumer groups. They enable event sourcing, message queuing, and activity logging with guaranteed delivery.

Bitmaps let you perform bit-level operations on strings. Useful for tracking boolean states across large populations (daily active users, feature flags).

HyperLogLog provides probabilistic counting of unique elements using minimal memory (at most 12 KB, regardless of cardinality). Good for counting unique visitors, unique searches, or unique events.

Geospatial indexes store longitude/latitude pairs and support radius queries, distance calculations, and member searches. Useful for location-based features.

When data structures matter

If you're caching serialized API responses or rendered HTML fragments, Memcached's string-only model is fine. You're treating the cache as a simple lookup table.

But if you need to modify cached data without replacing the entire value, Redis's data structures save you from the read-modify-write cycle. Examples:

  • Adding an item to a user's cart (LPUSH to a list)
  • Incrementing a view counter (INCR on a string)
  • Checking if a user has permission (SISMEMBER on a set)
  • Fetching the top 10 players (ZREVRANGE on a sorted set)
  • Updating a single field in a cached profile (HSET on a hash)

Each of these is a single atomic operation in Redis. In Memcached, each would require a full read-modify-write cycle with potential race conditions.
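To make the contrast concrete, here is a small sketch of the read-modify-write cycle Memcached forces, using a plain dict as a stand-in for a string-only cache (the `memcached_style_add_to_cart` helper and the key names are illustrative, not from any library):

```python
import json

# Stand-in for a cache that only stores opaque strings (Memcached's model).
cache = {}

def memcached_style_add_to_cart(cache, user_id, item):
    """Read-modify-write: fetch, deserialize, modify, re-serialize, store.
    Between the get and the set, a concurrent writer's update can be lost."""
    key = f"cart:{user_id}"
    raw = cache.get(key, "[]")
    cart = json.loads(raw)          # deserialize the whole value
    cart.append(item)               # modify one element
    cache[key] = json.dumps(cart)   # write the whole value back

# With Redis the same update is one server-side command, for example:
#   r.rpush(f"cart:{user_id}", item)   # atomic; no read-modify-write cycle
memcached_style_add_to_cart(cache, 42, "book")
memcached_style_add_to_cart(cache, 42, "pen")
print(json.loads(cache["cart:42"]))  # ['book', 'pen']
```

The whole-value round trip is where both the extra bandwidth and the race window come from.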

Persistence

Memcached: pure volatile cache

Memcached has no persistence. When the process restarts, all data is gone. This is by design. Memcached is a cache, and caches are expected to be rebuilt from the source of truth.

This keeps the codebase simple and the performance predictable. There's no background save process consuming CPU or I/O.

Redis: optional durability

Redis offers two persistence mechanisms:

RDB (Redis Database) creates point-in-time snapshots. Redis forks the process and writes the entire dataset to disk. You configure how often snapshots happen (for example, "save after 60 seconds if at least 1000 keys changed"). RDB files are compact and fast to load, but you can lose data between snapshots.

AOF (Append-Only File) logs every write command. On restart, Redis replays the log to rebuild state. You can configure the fsync policy:

  • always: fsync after every write (safest, slowest)
  • everysec: fsync once per second (good balance)
  • no: let the OS decide (fastest, least safe)

You can use both together. RDB for fast restarts and AOF for minimal data loss.
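The combined setup described above maps to a handful of real redis.conf directives; the specific thresholds here are illustrative:

```conf
# redis.conf — illustrative persistence settings

# RDB: snapshot if at least 1000 keys changed within 60 seconds
save 60 1000

# AOF: log every write, fsync once per second
appendonly yes
appendfsync everysec

# Pure-cache mode instead: disable both
# save ""
# appendonly no
```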

When persistence helps:

Redis persistence is useful when your cached data is expensive to rebuild. If warming a cache takes hours (think: precomputed aggregations, ML model predictions, or large recommendation sets), losing that data on restart is painful. Persistence lets Redis recover without a cold start.

It also enables Redis to function as a primary data store for certain use cases, not as a replacement for a relational database, but for data that fits naturally in Redis's structures (session state, rate limiting counters, real-time leaderboards).

When persistence hurts:

The RDB fork can cause latency spikes, especially with large datasets. The child process copies the page table, and on a 50 GB dataset, this can freeze Redis for hundreds of milliseconds. AOF rewrites have a similar (though usually smaller) impact.

If you're using Redis purely as a cache with easy-to-rebuild data, disable persistence entirely. You'll get Memcached-like simplicity with Redis's data structures.

Clustering and replication

Memcached: client-side distribution

Memcached has no built-in clustering. Distribution is handled by the client using consistent hashing. The client determines which Memcached server holds a given key and sends the request directly.

This is a simple model. There's no cluster coordination, no gossip protocol, and no rebalancing. But it has limitations:

  • Adding or removing servers changes the hash ring, invalidating a portion of cached keys
  • There's no replication. If a server dies, its cached data is lost
  • The client must know about all servers and handle failover

Some organizations use tools like mcrouter (Facebook's Memcached proxy) to add connection pooling, replication, and routing logic. But these are external solutions, not part of Memcached itself.
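A minimal version of the consistent hashing that Memcached client libraries perform might look like the following sketch (the server names and the choice of 100 virtual nodes are arbitrary assumptions; real clients such as libmemcached use tuned hash functions and weights):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal client-side hash ring of the kind Memcached clients use.
    Virtual nodes smooth out the key distribution across servers."""

    def __init__(self, servers, vnodes=100):
        self.ring = []  # sorted list of (hash, server)
        for server in servers:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{server}#{i}"), server))
        self.ring.sort()
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def server_for(self, key):
        """Walk clockwise to the first virtual node at or after the key's hash."""
        idx = bisect.bisect(self.keys, self._hash(key)) % len(self.keys)
        return self.ring[idx][1]

servers = ["cache1:11211", "cache2:11211", "cache3:11211"]
ring = ConsistentHashRing(servers)
# The same key always routes to the same server
assert ring.server_for("user:42") == ring.server_for("user:42")
```

The upside of this scheme is that removing one server only remaps the keys that hashed to it, rather than reshuffling the entire keyspace.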

Redis: built-in clustering and replication

Redis has two levels of distribution:

Replication creates read replicas of a primary. Replicas receive a continuous stream of commands from the primary and maintain a copy of the dataset. This provides read scaling and failover capability. Redis Sentinel can monitor primaries and automatically promote a replica if the primary fails.

Redis Cluster partitions data across multiple primaries, each responsible for a subset of the 16,384 hash slots. Each primary can have its own replicas. The cluster handles automatic failover, resharding, and client redirection.

Redis Cluster trade-offs:

  • Multi-key operations only work if all keys are in the same hash slot (use hash tags to control this)
  • Lua scripts must operate on keys in the same slot
  • There's a performance cost for cross-slot redirections
  • Cluster management adds operational complexity

For most caching workloads, replication with Sentinel is enough. Redis Cluster becomes necessary when your dataset exceeds a single server's memory.
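The slot assignment and hash-tag behavior described above can be reproduced in a few lines. Redis Cluster uses CRC-16/XMODEM modulo 16,384, and honors the first non-empty `{...}` tag in a key; this sketch follows that published scheme:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of Redis Cluster's 16,384 hash slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:      # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot, so multi-key ops on them work
assert hash_slot("{user:42}:cart") == hash_slot("{user:42}:profile")
```

This is why hash tags are the standard trick for keeping related keys eligible for MULTI/EXEC and Lua scripts in a cluster.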

Performance benchmarks

Performance comparisons between Memcached and Redis are nuanced. Raw throughput depends on workload characteristics, connection count, value sizes, and hardware.

Throughput

For simple GET/SET operations on small values (under 1 KB), both systems deliver similar throughput: hundreds of thousands of operations per second on a single server.

| Benchmark scenario | Memcached | Redis |
|---|---|---|
| GET (100-byte values, single thread) | ~100K ops/sec | ~100K ops/sec |
| SET (100-byte values, single thread) | ~100K ops/sec | ~100K ops/sec |
| GET (100-byte values, multi-threaded) | ~500K+ ops/sec | ~100K ops/sec (single main thread) |
| GET (1 KB values) | ~80K ops/sec | ~80K ops/sec |
| GET (10 KB values) | ~30K ops/sec | ~30K ops/sec |
| Pipelined GET (100-byte values) | N/A | ~500K+ ops/sec |

The numbers above are approximate and vary by hardware. The key takeaway: Memcached's multi-threading gives it an advantage when you have many concurrent connections and multiple CPU cores available. Redis's single-threaded model caps its throughput on a single instance, but pipelining (sending multiple commands without waiting for responses) can close the gap.

Redis 6.0+ with I/O threads enabled can handle more concurrent connections, though command execution remains single-threaded.

Latency

Both systems typically respond in sub-millisecond time for simple operations. Median latency is comparable.

Where they differ:

  • Memcached has more consistent tail latency because there's no background persistence, no fork, and no complex data structure operations
  • Redis can experience latency spikes during RDB saves (fork), AOF rewrites, or when processing slow commands (like KEYS * or large sorted set operations)

If you disable Redis persistence and avoid slow commands, latency profiles are similar.

Connection handling

Memcached handles high connection counts more gracefully due to its multi-threaded architecture. With thousands of concurrent connections, Memcached distributes work across threads.

Redis handles connections on a single thread (with I/O threading in 6.0+). For extremely high connection counts (10K+), consider a connection proxy like Twemproxy or connection pooling in your client library.

Memory efficiency

Memcached's slab allocator

Memcached pre-allocates memory in slabs. Each slab class holds items of a specific size range. This approach is fast (no per-item malloc/free) but can waste memory through internal fragmentation.

For example, if you store a 120-byte item, it goes into the 128-byte slab class, wasting 8 bytes. If your values are uniformly sized, the waste is minimal. If sizes vary widely, you might lose 20-30% to fragmentation.
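The slab-class selection above can be sketched in a few lines. Note that the doubling size classes below are a simplification chosen to match the 120 → 128 example; real Memcached classes grow by a configurable growth factor (1.25 by default), so actual boundaries differ:

```python
# Illustrative slab classes; real Memcached chunk sizes grow by a
# configurable factor (default 1.25), so these exact sizes are assumed.
SLAB_CLASSES = [64, 128, 256, 512, 1024]

def pick_slab(item_size: int) -> int:
    """Return the smallest slab class that fits the item."""
    for chunk in SLAB_CLASSES:
        if item_size <= chunk:
            return chunk
    raise ValueError("item too large for any slab class")

chunk = pick_slab(120)
print(chunk, chunk - 120)  # 128-byte class, 8 bytes of internal fragmentation
```

With uniform item sizes the per-item waste stays small and constant; with widely varying sizes, the gap between an item and its chunk boundary is where the 20-30% loss comes from.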

Memcached also stores less metadata per item. A typical item has around 48 bytes of overhead (key, flags, expiration, CAS token, pointers).

Redis's memory overhead

Redis stores each key-value pair with more metadata: a redisObject wrapper, an SDS (Simple Dynamic String) for the key, type-specific encoding, and a dictionary entry. Small objects can have 100+ bytes of overhead.

Redis does optimize small data structures. Small hashes, lists, sets, and sorted sets use compact encodings (ziplist or listpack) when they're below configured thresholds. Once they grow past those thresholds, they switch to standard data structures with higher memory usage.

Practical memory comparison

For workloads storing millions of small key-value pairs (like session tokens or feature flags):

| Metric | Memcached | Redis |
|---|---|---|
| Overhead per item | ~48 bytes | ~80-120 bytes |
| 10M items (100-byte values) | ~1.5 GB | ~2.0-2.5 GB |
| 10M items (1 KB values) | ~10.5 GB | ~11-12 GB |

The percentage difference shrinks as value size grows. For 1 KB+ values, the overhead difference is negligible. For tiny values (under 100 bytes), Memcached uses noticeably less memory per key.
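The numbers in the table above come from simple arithmetic, which is easy to redo for your own item counts (this sketch ignores key bytes, so treat it as a lower bound):

```python
def total_gb(items: int, value_bytes: int, overhead_bytes: int) -> float:
    """Rough cache sizing: items × (value + per-item overhead), in decimal GB."""
    return items * (value_bytes + overhead_bytes) / 1e9

# 10M 100-byte values with the per-item overheads from the table above
print(total_gb(10_000_000, 100, 48))   # Memcached: ~1.48 GB
print(total_gb(10_000_000, 100, 100))  # Redis (low end of overhead): 2.0 GB
```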

Redis's MEMORY USAGE command lets you inspect per-key memory consumption. There's no equivalent in Memcached, which makes capacity planning harder on the Memcached side.

Eviction policies

Memcached: LRU with slabs

Memcached uses LRU (Least Recently Used) eviction within each slab class. When a slab class runs out of space, the least recently used item in that class is evicted.

This per-slab LRU can lead to unintuitive behavior. A rarely used slab class might hold stale items while a busy slab class evicts frequently accessed items. Memcached 1.5+ added a "modern" LRU with multiple sub-queues that improves on this, but the fundamental per-slab limitation remains.

Redis: eight eviction policies

Redis gives you more control over what gets evicted:

  • noeviction: Return errors when memory is full (no eviction)
  • allkeys-lru: Evict least recently used keys from all keys
  • volatile-lru: Evict LRU keys, but only from keys with TTL set
  • allkeys-lfu: Evict least frequently used keys (added in Redis 4.0)
  • volatile-lfu: Evict LFU keys with TTL
  • allkeys-random: Random eviction
  • volatile-random: Random eviction from keys with TTL
  • volatile-ttl: Evict keys with the shortest remaining TTL

The LFU (Least Frequently Used) policies are particularly useful for caching. LFU keeps hot keys in memory even if they haven't been accessed in the last few seconds, while LRU would evict them in favor of a key accessed once. This makes LFU better for workloads with skewed access patterns (some keys are accessed orders of magnitude more than others).
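The difference is easy to see in a toy simulation. The two cache classes below are deliberately simplified stand-ins (real Redis LFU uses probabilistic counters with decay, and Memcached's LRU is per-slab), but they show why a one-off scan hurts LRU and not LFU:

```python
from collections import Counter, OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.data = capacity, OrderedDict()
    def access(self, key):
        if key in self.data:
            self.data.move_to_end(key)          # mark as most recently used
        else:
            if len(self.data) >= self.capacity:
                self.data.popitem(last=False)   # evict least recently used
            self.data[key] = True

class LFUCache:
    def __init__(self, capacity):
        self.capacity, self.data, self.freq = capacity, set(), Counter()
    def access(self, key):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.data, key=lambda k: self.freq[k])
            self.data.discard(victim)           # evict least frequently used
        self.data.add(key)
        self.freq[key] += 1

lru, lfu = LRUCache(2), LFUCache(2)
for cache in (lru, lfu):
    for _ in range(10):
        cache.access("hot")       # heavily accessed key
    cache.access("scan1")         # one-off keys, e.g. from a batch scan
    cache.access("scan2")

print("hot" in lru.data, "hot" in lfu.data)  # False True
```

LRU evicts the hot key the moment two cold keys pass through; LFU keeps it because its access count dwarfs theirs.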

Use cases

When Memcached is the better choice

Simple page/object caching. You're caching serialized objects, rendered HTML, or API responses. You read them frequently and regenerate them when they expire. Memcached's string model is a perfect fit, and its multi-threaded performance shines under high connection counts.

Extremely high throughput on multi-core hardware. If you're running on a machine with 32 cores and need to saturate all of them with cache operations, Memcached can do it with a single instance. Redis would require multiple instances behind a proxy.

Reducing operational complexity. Memcached has fewer knobs to turn. No persistence to configure, no replication to manage, no eviction policies to choose between. If you want a cache that requires minimal tuning, Memcached is simpler to operate.

Memory-constrained environments with small values. When you're storing tens of millions of small items (session tokens, user IDs, short strings), Memcached's lower per-item overhead means you fit more data in the same amount of RAM.

When Redis is the better choice

You need data structures. Any time you find yourself serializing a list or set to store in a cache, and then deserializing it to modify one element, you should be using Redis. The ability to manipulate data in place saves bandwidth, reduces latency, and eliminates race conditions.

You need persistence. If cache warm-up takes significant time and a cold restart is disruptive, Redis persistence helps. This is common with precomputed data, recommendation engines, and session stores.

You need pub/sub or message queuing. Redis Streams and pub/sub provide messaging capabilities without deploying a separate message broker. For lightweight event distribution or task queuing, Redis can handle it.

You need atomic multi-step operations. Redis transactions (MULTI/EXEC) and Lua scripting let you execute multiple commands atomically. This is essential for rate limiting, distributed locking, and other patterns that require read-then-write atomicity.

You want clustering with automatic failover. Redis Sentinel and Redis Cluster provide high availability out of the box. If cache availability is critical (not all workloads can tolerate cache misses during failover), Redis has built-in solutions.

Session management. Storing user sessions in Redis is common. You get persistence (sessions survive restarts), TTL-based expiration, and the ability to store structured session data in hashes.

Common deployment patterns

Memcached patterns

Simple cache tier. Application servers connect to a pool of Memcached instances via consistent hashing. The client library handles distribution. This is the classic Memcached deployment and works well for read-heavy web applications.

Multi-layer caching. Use a local in-process cache (like a Go map or Python dict with TTL) as L1 and Memcached as L2. Reduces network round trips for the hottest keys.

mcrouter (Facebook's proxy). Adds features like replication, failover, and warm-up to Memcached. If you need Memcached's simplicity but also need some operational features, mcrouter fills the gaps.

Redis patterns

Cache with fallback. Application checks Redis first, falls back to the database on cache miss, and populates Redis on the read path. TTL handles expiration. This is the most common pattern.
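The cache-with-fallback flow can be sketched as below. A plain dict stands in for the Redis client so the sketch is self-contained, and `load_from_db` is a hypothetical placeholder for the real source-of-truth query:

```python
import time

cache = {}  # key -> (value, expires_at); stand-in for a Redis client

def load_from_db(key):
    return f"row-for-{key}"  # placeholder for a real database query

def get(key, ttl=300):
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                        # cache hit
    value = load_from_db(key)                  # miss: fall back to the DB
    cache[key] = (value, time.time() + ttl)    # populate on the read path
    return value

assert get("user:1") == "row-for-user:1"   # first call misses and populates
assert get("user:1") == "row-for-user:1"   # second call hits the cache
```

With redis-py, the hit/populate pair would typically be `r.get(key)` and `r.setex(key, ttl, value)`.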

Write-behind caching. Application writes to Redis, and a background process persists changes to the database. Reduces database write load but adds complexity around failure handling.

Session store. Each session is a Redis hash. Individual session fields can be read and updated without loading the entire session. TTL handles session expiration automatically.

Rate limiter. Using INCR with EXPIRE (or a Lua script for atomicity), Redis can enforce rate limits per user, per IP, or per API key. The sliding window pattern with sorted sets provides more accurate rate limiting.
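A fixed-window version of this rate limiter can be sketched as follows. A dict stands in for Redis so the sketch runs without a server; the comments note the corresponding Redis commands (in production the INCR/EXPIRE pair should go in one Lua script, since it isn't atomic on its own):

```python
import time

store = {}  # key -> [count, window_expires_at]; stand-in for Redis

def allow(key: str, limit: int, window_secs: int, now=None) -> bool:
    """Fixed-window rate limit: at most `limit` requests per window."""
    now = time.time() if now is None else now
    entry = store.get(key)
    if entry is None or entry[1] <= now:
        store[key] = [1, now + window_secs]  # INCR creates the key, EXPIRE sets TTL
        return True
    entry[0] += 1                            # INCR within the current window
    return entry[0] <= limit

assert all(allow("ip:1.2.3.4", 3, 60, now=0) for _ in range(3))
assert not allow("ip:1.2.3.4", 3, 60, now=1)   # fourth request: denied
assert allow("ip:1.2.3.4", 3, 60, now=61)      # window expired: allowed again
```

The fixed window allows bursts at window boundaries; the sorted-set sliding window mentioned above avoids that at the cost of more memory per key.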

Distributed lock. The Redlock algorithm uses multiple Redis instances to implement distributed locks. Libraries like Redisson (Java) and redis-py (Python) provide Redlock implementations.

If you're working with Redis and want a GUI that helps you inspect keys, monitor performance, and run commands, DB Pro supports Redis connections alongside other databases.

Operational considerations

Monitoring

Memcached exposes stats via the stats command: hit rate, eviction count, connection count, bytes stored, and per-slab metrics. The stat output is flat and easy to parse.

Key metrics to watch:

  • Hit ratio (should be above 90% for most workloads)
  • Eviction rate (high evictions mean you need more memory or shorter TTLs)
  • Connection count (approaching the max can cause dropped connections)

Redis provides the INFO command with detailed metrics organized by section: server, clients, memory, persistence, stats, replication, CPU, and more. Redis also has a SLOWLOG for identifying slow commands and LATENCY MONITOR for tracking latency events.

Key metrics to watch:

  • Hit ratio (keyspace_hits / (keyspace_hits + keyspace_misses))
  • Memory usage and fragmentation ratio
  • Connected clients and blocked clients
  • RDB/AOF status and last save time
  • Replication lag (if using replicas)
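The hit-ratio formula above is worth wiring into monitoring directly. The field names below match Redis INFO `stats` output; the sample numbers are made up:

```python
def hit_ratio(info: dict) -> float:
    """Compute cache hit ratio from Redis INFO `stats` fields."""
    hits = info["keyspace_hits"]
    misses = info["keyspace_misses"]
    total = hits + misses
    return hits / total if total else 0.0

sample = {"keyspace_hits": 9_500_000, "keyspace_misses": 500_000}
print(f"{hit_ratio(sample):.1%}")  # 95.0%
```

With redis-py, `r.info("stats")` returns a dict containing these fields, so the function can consume it directly.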

Backup and recovery

Memcached has no backup story. If you need the data, it must be rebuildable from the source of truth.

Redis can be backed up by copying RDB files. You can also use BGSAVE to trigger a snapshot, then copy the dump file. For point-in-time recovery, AOF provides a more granular option.

Security

Both systems were designed for trusted network environments and lack strong built-in security.

Memcached has no authentication in the standard build. SASL authentication is available as a compile-time option, and TLS support (added in 1.5.13) likewise must be enabled at build time. In practice, always run Memcached behind a firewall or on a private network.

Redis added password authentication (the AUTH command) and, in Redis 6.0, ACLs (Access Control Lists) with per-user command and key restrictions. TLS encryption is supported in Redis 6.0+. Redis is more secure out of the box, but still should not be exposed to the public internet.

Upgrades and compatibility

Memcached has been remarkably stable. The protocol hasn't changed significantly in years. Upgrading is typically straightforward, and clients written a decade ago still work.

Redis evolves faster. New data types, new commands, and new modules appear regularly. This means more features but also more breaking changes. Redis 7.0, for example, changed several behaviors from 6.x. Always read the release notes before upgrading.

Migration considerations

Moving from Memcached to Redis

If you're considering switching from Memcached to Redis:

  1. Protocol compatibility. Redis does not speak the Memcached protocol natively; bridging the two requires a third-party proxy or a client-side shim. Such a bridge can ease migration but doesn't give you access to Redis-specific features.

  2. Client library changes. You'll need to swap Memcached client libraries for Redis ones. Most languages have mature Redis clients (redis-py, Jedis, ioredis, go-redis).

  3. Memory planning. Expect 20-50% higher memory usage for the same dataset due to Redis's per-key overhead. Profile your actual workload to get accurate numbers.

  4. Performance testing. Run benchmarks with your actual workload before switching. Pay attention to tail latency, not averages.

  5. Gradual rollout. Consider running both systems in parallel and using feature flags to route traffic. This lets you compare performance and catch issues before fully committing.

Moving from Redis to Memcached

This is less common, but it happens when teams realize they're only using Redis for simple key-value caching and want to reduce complexity.

  1. Audit your Redis usage. Check which commands your application uses. If it's all GET/SET/DEL with TTL, Memcached can handle it. If you're using lists, sets, or pub/sub, you'll need to find alternatives.

  2. Handle the lack of persistence. If you rely on Redis persistence, you need another solution for that data. Move it to a database or accept cold starts.

  3. Replace clustering. If you're using Redis Cluster or Sentinel, you need a client-side sharding strategy or a proxy like mcrouter.

Redis forks and alternatives

The Redis ecosystem includes several forks and compatible alternatives worth knowing about:

Valkey is the Linux Foundation fork of Redis, created after Redis changed its license in 2024. It's API-compatible with Redis and backed by AWS, Google, and Oracle. If Redis licensing concerns you, Valkey is the primary alternative.

KeyDB is a multi-threaded fork of Redis. It addresses Redis's single-threaded limitation by processing commands on multiple threads. This can provide higher throughput on multi-core machines while maintaining Redis compatibility.

Dragonfly is a Redis/Memcached compatible in-memory store built from scratch with a multi-threaded architecture. It claims significantly higher throughput than Redis and lower memory usage. Worth evaluating if you need extreme performance.

Garnet is Microsoft's cache-store built in C#. It implements the RESP protocol and is compatible with Redis clients. It focuses on high throughput and low latency with a thread-scalable architecture.

Bottom line

Memcached and Redis are both excellent at caching, but they serve different needs.

Choose Memcached when:

  • You need a straightforward key-value cache with no frills
  • Multi-threaded performance on a single instance matters
  • You want the smallest memory footprint per cached item
  • Operational simplicity is a priority

Choose Redis when:

  • You need data structures beyond strings
  • Persistence is valuable for your workload
  • You want built-in replication and clustering
  • You need atomic operations, scripting, or pub/sub
  • You're building features on top of the cache (rate limiting, leaderboards, queues)

For most new projects, Redis is the safer default. It can do everything Memcached does (with slightly more memory overhead), and when your requirements grow beyond simple caching, Redis grows with you. The only strong case for Memcached is when you know you'll never need more than key-value caching and you want maximum simplicity or maximum multi-threaded throughput on a single box.

If you're running both or either system at scale, measure your own workload. Synthetic benchmarks tell you what's possible. Production metrics tell you what's happening. The right choice depends on your specific access patterns, value sizes, consistency requirements, and operational capacity.