
How to Fix Slow MySQL Queries

Jay

A query that took 50ms in development now takes 30 seconds in production. Your API is timing out. Users are complaining. Sound familiar?

Slow MySQL queries are one of the most common performance problems in web applications. The good news: most slow queries can be fixed with a few straightforward techniques. This guide covers the diagnostic tools and fixes that work in the real world.

Step 1: Find the slow queries

Before you can fix slow queries, you need to find them. MySQL has a built-in slow query log that records any query exceeding a time threshold.

Enable the slow query log

Check if it's already enabled:

SQL
SHOW VARIABLES LIKE 'slow_query%';
SHOW VARIABLES LIKE 'long_query_time';

To enable it:

SQL
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- Log queries taking more than 1 second
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';

For persistent configuration, add to your my.cnf:

INI
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
log_queries_not_using_indexes = 1

The log_queries_not_using_indexes option is useful for catching queries that might be fast now but will slow down as your data grows.
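
One caveat: on a busy server this option can flood the log with harmless queries against tiny tables. Pairing it with min_examined_row_limit keeps the noise down; a sketch, where the 1,000-row threshold is an arbitrary starting point:

SQL
-- Skip logging for queries that examine fewer than 1,000 rows,
-- so lookups against tiny tables don't flood the slow log
SET GLOBAL min_examined_row_limit = 1000;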

Analyze the slow query log

The log file can get large. Use mysqldumpslow to summarize it:

BASH
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log

This shows the top 10 queries sorted by total time. The -s flag accepts:

  • t - sort by total time
  • c - sort by count
  • l - sort by lock time
  • r - sort by rows sent
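
If you can't touch the log configuration at all, the Performance Schema (on by default since MySQL 5.6) offers another route. A sketch of pulling the top statements by total time; the timer columns are in picoseconds, hence the division:

SQL
-- Top 10 normalized statements by total execution time
SELECT DIGEST_TEXT AS normalized_query,
       COUNT_STAR AS calls,
       SUM_TIMER_WAIT / 1e12 AS total_seconds,
       AVG_TIMER_WAIT / 1e12 AS avg_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;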

Step 2: Understand the query with EXPLAIN

Once you have a slow query, run EXPLAIN to see how MySQL executes it:

SQL
EXPLAIN SELECT * FROM orders
WHERE customer_id = 12345
AND status = 'pending';

This returns a breakdown of the query execution plan:

+----+-------------+--------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table  | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+--------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | orders | ALL  | NULL          | NULL | NULL    | NULL | 500000 | Using where |
+----+-------------+--------+------+---------------+------+---------+------+--------+-------------+

Key columns to watch

type - This is the most important column. From best to worst:

| Type | Meaning | Action needed? |
|------|---------|----------------|
| system | Table has one row | No |
| const | Single row match via primary key | No |
| eq_ref | One row per join, using unique index | No |
| ref | Multiple rows via non-unique index | Usually fine |
| range | Index range scan | Usually fine |
| index | Full index scan | Maybe |
| ALL | Full table scan | Yes, add an index |

rows - The estimated number of rows MySQL will examine. If this number is close to your total row count, you're doing a full table scan.

key - Which index MySQL chose. If this is NULL, no index is being used.

Extra - Look for these warning signs:

  • Using filesort - MySQL is sorting results without an index
  • Using temporary - MySQL created a temporary table
  • Using where - Filtering after fetching rows (not always bad)
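
If key is NULL, first confirm which indexes the table actually has, and ask the optimizer for its detailed reasoning:

SQL
-- List every index on the table, with cardinality estimates
SHOW INDEX FROM orders;
-- Detailed cost breakdown, including why indexes were rejected
EXPLAIN FORMAT=JSON SELECT * FROM orders WHERE customer_id = 12345;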

Use EXPLAIN ANALYZE for real numbers

EXPLAIN shows estimates. EXPLAIN ANALYZE (MySQL 8.0.18+) runs the query and shows actual execution times:

SQL
EXPLAIN ANALYZE SELECT * FROM orders
WHERE customer_id = 12345
AND status = 'pending';

Output includes actual row counts and timing:

-> Filter: ((orders.customer_id = 12345) and (orders.status = 'pending'))
    (cost=50234 rows=500000) (actual time=0.052..2834.123 rows=23 loops=1)
    -> Table scan on orders
        (cost=50234 rows=500000) (actual time=0.043..2341.532 rows=500000 loops=1)

This tells you the query scanned 500,000 rows to find 23 matches. That's the problem.

Step 3: Add the right indexes

Most slow queries are missing an index. An index lets MySQL find rows without scanning the entire table.

Single-column indexes

For a simple filter:

SQL
-- Slow: full table scan
SELECT * FROM orders WHERE customer_id = 12345;

-- Add an index
CREATE INDEX idx_orders_customer_id ON orders(customer_id);

-- Now fast: index lookup
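
Re-run EXPLAIN after creating the index to confirm MySQL actually uses it. Roughly, you should see:

SQL
EXPLAIN SELECT * FROM orders WHERE customer_id = 12345;
-- Expect type = ref, key = idx_orders_customer_id, and a rows
-- estimate near the number of matching rows instead of 500,000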

Composite indexes for multiple conditions

If you filter on multiple columns, a composite index is more effective than multiple single-column indexes:

SQL
-- Query filters on two columns
SELECT * FROM orders
WHERE customer_id = 12345
AND status = 'pending';

-- Single indexes help, but a composite index is better
CREATE INDEX idx_orders_customer_status ON orders(customer_id, status);

Column order matters

In a composite index, column order determines what queries it can help. The index (customer_id, status) helps these queries:

SQL
-- Uses the index (both columns)
WHERE customer_id = 12345 AND status = 'pending'

-- Uses the index (leftmost column)
WHERE customer_id = 12345

-- Cannot use the index (missing leftmost column)
WHERE status = 'pending'

Put the most selective column first, or the column you always filter on.
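
If you're not sure which column is more selective, compare distinct-value ratios. A quick check using the running example; note it scans the whole table, so run it off-peak on large tables:

SQL
-- Higher ratio = more selective = stronger leading-column candidate
SELECT COUNT(DISTINCT customer_id) / COUNT(*) AS customer_id_selectivity,
       COUNT(DISTINCT status) / COUNT(*) AS status_selectivity
FROM orders;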

Covering indexes

A covering index includes all columns the query needs, so MySQL never reads the actual table:

SQL
-- Query only needs these columns
SELECT customer_id, status, total FROM orders
WHERE customer_id = 12345;

-- Covering index
CREATE INDEX idx_orders_covering ON orders(customer_id, status, total);

Check for Using index in the EXPLAIN output to confirm.

When not to add indexes

Indexes aren't free. They slow down INSERT, UPDATE, and DELETE operations. Avoid indexing:

  • Columns with low cardinality (few unique values, like boolean flags)
  • Tables that are write-heavy with few reads
  • Columns you rarely filter or sort on

Step 4: Rewrite the query

Sometimes the query itself is the problem.

Avoid SELECT *

Fetching all columns when you only need a few wastes memory and network bandwidth:

SQL
-- Slow: fetches all 50 columns
SELECT * FROM orders WHERE customer_id = 12345;

-- Faster: only the columns you need
SELECT id, status, total, created_at FROM orders WHERE customer_id = 12345;

This also enables covering indexes.

Use LIMIT for pagination

If you only display 20 results, don't fetch 10,000:

SQL
-- Slow: fetches everything, application discards most
SELECT * FROM orders WHERE status = 'pending' ORDER BY created_at DESC;

-- Fast: database does the limiting
SELECT * FROM orders WHERE status = 'pending' ORDER BY created_at DESC LIMIT 20;

For deep pagination, use keyset pagination instead of OFFSET:

SQL
-- Slow: OFFSET still scans skipped rows
SELECT * FROM orders ORDER BY id LIMIT 20 OFFSET 10000;

-- Fast: start from a known point
SELECT * FROM orders WHERE id > 12345 ORDER BY id LIMIT 20;

Avoid functions on indexed columns

Wrapping an indexed column in a function prevents index usage:

SQL
-- Slow: function prevents index usage
SELECT * FROM orders WHERE YEAR(created_at) = 2025;

-- Fast: rewrite as range
SELECT * FROM orders
WHERE created_at >= '2025-01-01'
AND created_at < '2026-01-01';

Same problem with string functions:

SQL
-- Slow: LOWER() prevents index usage
SELECT * FROM users WHERE LOWER(email) = 'user@example.com';

-- Better: store normalized data, or use a generated column with an index
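
Here is what both fixes look like in practice; the VARCHAR(255) size is an assumption about the schema, so adjust to yours:

SQL
-- Option 1 (MySQL 5.7+): stored generated column plus a regular index
ALTER TABLE users
  ADD COLUMN email_lower VARCHAR(255)
  GENERATED ALWAYS AS (LOWER(email)) STORED;
CREATE INDEX idx_users_email_lower ON users(email_lower);
SELECT * FROM users WHERE email_lower = 'user@example.com';
-- Option 2 (MySQL 8.0.13+): functional index, no extra column;
-- the query expression must match the index expression exactly
CREATE INDEX idx_users_lower_email ON users((LOWER(email)));
SELECT * FROM users WHERE LOWER(email) = 'user@example.com';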

Optimize JOINs

Join order and index usage matter:

SQL
-- Make sure join columns are indexed
SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE o.status = 'pending';

-- Both tables need indexes
CREATE INDEX idx_orders_status ON orders(status);
CREATE INDEX idx_customers_id ON customers(id);  -- Usually already exists if it's a primary key

For large result sets, MySQL should drive the join from the table that produces fewer rows and probe the other table through an index.
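
You can check the join order with the tree format (MySQL 8.0.16+):

SQL
EXPLAIN FORMAT=TREE
SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE o.status = 'pending';
-- The first table under the nested loop is the driving (outer) table;
-- the inner table access should show an index lookup, not a scan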

Break up complex queries

A single query doing too much can be slower than multiple simple queries:

SQL
-- Complex query with multiple joins and subqueries
SELECT ... FROM orders o
JOIN customers c ON ...
JOIN products p ON ...
WHERE o.id IN (SELECT order_id FROM order_items WHERE ...)
AND EXISTS (SELECT 1 FROM payments WHERE ...);

-- Sometimes faster: break it up
-- 1. Get the order IDs you need
-- 2. Fetch order details
-- 3. Fetch related data

This depends on your specific case. Profile both approaches.
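
As a sketch of the broken-up version, reusing tables from the running example (the product_id filter is hypothetical):

SQL
-- 1. Get the order IDs you need
SELECT DISTINCT order_id FROM order_items WHERE product_id = 42;
-- 2. Fetch order details for those IDs (ID list built by the application)
SELECT id, status, total FROM orders WHERE id IN (1001, 1002, 1003);
-- 3. Fetch related data the same way
SELECT * FROM payments WHERE order_id IN (1001, 1002, 1003);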

Step 5: Check your configuration

MySQL's default configuration is conservative. A few settings can make a big difference.

Buffer pool size (InnoDB)

The buffer pool caches table and index data in memory. If it's too small, MySQL reads from disk constantly.

SQL
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

For a dedicated database server, set this to 70-80% of available RAM:

INI
[mysqld]
innodb_buffer_pool_size = 12G

Check buffer pool hit rate:

SQL
SHOW STATUS LIKE 'Innodb_buffer_pool_read%';

Calculate: 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests). If this is below 99%, your buffer pool is too small.
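
You can compute the hit rate in one query by reading the counters from performance_schema (MySQL 5.7+):

SQL
-- Buffer pool hit rate: aim for 0.99 or higher
SELECT 1 - (
         (SELECT VARIABLE_VALUE FROM performance_schema.global_status
          WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads') /
         (SELECT VARIABLE_VALUE FROM performance_schema.global_status
          WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests')
       ) AS hit_rate;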

Query cache (MySQL 5.7 and earlier)

The query cache was removed in MySQL 8.0 because it often caused more problems than it solved. If you're on 5.7, consider disabling it for write-heavy workloads:

INI
[mysqld]
query_cache_type = 0
query_cache_size = 0

Connection limits

If you're running out of connections:

SQL
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';

But increasing max_connections is usually the wrong fix. Look for:

  • Connection leaks in your application
  • Queries holding connections too long
  • Missing connection pooling
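
A quick way to spot connection hogs is the process list; this sketch lists active connections by age:

SQL
-- Active connections ordered by how long they've been working
SELECT id, user, db, time AS seconds, state, LEFT(info, 80) AS running_query
FROM information_schema.PROCESSLIST
WHERE command != 'Sleep'
ORDER BY time DESC;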

Common slow query patterns

The N+1 query problem

Your ORM fetches a list, then runs a query for each item:

SQL
-- 1 query to get orders
SELECT * FROM orders WHERE customer_id = 123;

-- N queries to get items (one per order)
SELECT * FROM order_items WHERE order_id = 1;
SELECT * FROM order_items WHERE order_id = 2;
SELECT * FROM order_items WHERE order_id = 3;
-- ... repeated 100 times

Fix: Use a JOIN or batch the lookups:

SQL
-- Single query with JOIN
SELECT o.*, oi.* FROM orders o
JOIN order_items oi ON o.id = oi.order_id
WHERE o.customer_id = 123;

-- Or batch with IN
SELECT * FROM order_items WHERE order_id IN (1, 2, 3, ...);

Missing index on foreign key

Foreign key columns are constantly filtered and joined on. InnoDB creates an index automatically when you declare a FOREIGN KEY constraint, but schemas that skip the constraint (common with ORMs) leave the column unindexed:

SQL
-- This is slow without an index on order_id
SELECT * FROM order_items WHERE order_id = 12345;

-- Add the missing index
CREATE INDEX idx_order_items_order_id ON order_items(order_id);

Sorting without an index

ORDER BY on an unindexed column forces a filesort:

SQL
-- EXPLAIN shows "Using filesort"
SELECT * FROM orders WHERE status = 'pending' ORDER BY created_at DESC;

-- Composite index covers both filter and sort
CREATE INDEX idx_orders_status_created ON orders(status, created_at);

LIKE with leading wildcard

Leading wildcards can't use indexes:

SQL
-- Slow: full table scan
SELECT * FROM products WHERE name LIKE '%widget%';

-- Faster: only trailing wildcard uses index
SELECT * FROM products WHERE name LIKE 'widget%';

-- For full-text search, use FULLTEXT indexes
CREATE FULLTEXT INDEX idx_products_name ON products(name);
SELECT * FROM products WHERE MATCH(name) AGAINST('widget');

Quick checklist

When you encounter a slow query:

  1. Run EXPLAIN - Is type showing ALL? Is key NULL?
  2. Check the rows estimate - Is MySQL examining far more rows than it returns?
  3. Look for filesort/temporary - Can you add an index to avoid these?
  4. Review the query - Are you using SELECT *? Functions on indexed columns? Missing LIMIT?
  5. Check indexes exist - Do your WHERE, JOIN, and ORDER BY columns have indexes?
  6. Verify index usage - The index exists, but is MySQL using it?

Most slow queries fall into a few categories: missing index, wrong index, or query pattern that prevents index usage. Start with EXPLAIN, add the right index, and verify the improvement.
