Just Add an Index!

Written by Dan Redington on December 2, 2025

You've got a Slack message. Production is slow. New Relic is lighting up. A customer-facing query that should take milliseconds is timing out at 10+ seconds. You know it's a database issue, but you're not sure where to start.

This is one of the most common performance problems Rails engineers face, and it's where a lot of teams make costly mistakes. The difference between a slow production app and a fast one often comes down to a few well-placed indexes—but the path to finding them is rarely obvious.

Let's walk through how to diagnose and fix slow ActiveRecord queries the right way, without guessing or adding indexes that don't help.

The Common Mistakes

Before we get to the solution, let's talk about what doesn't work.

No Indexes at All

This is rare in mature apps, but it happens. A table gets created during a migration sprint, ships to production, and nobody thinks about query patterns until it's too late. Without indexes, PostgreSQL has to scan every row in the table to find what it needs. On a table with 100K+ rows, that's a problem.
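If you suspect this is happening, PostgreSQL's statistics views can give you a rough map of where sequential scans are piling up. A quick sketch to run in psql (the column names are standard pg_stat_user_tables fields):

-- Tables sorted by how often they've been sequentially scanned.
-- A large table with a high seq_scan count and a low idx_scan count
-- is a strong hint that queries against it are missing an index.
SELECT relname, n_live_tup, seq_scan, idx_scan
FROM pg_stat_user_tables
ORDER BY seq_scan DESC
LIMIT 10;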

A Simple Index on Every Column

This is the opposite mistake, and it's more insidious. Someone hears "indexes make things fast" and decides to add a simple index to every column on a table. Now you've got 10+ indexes on a single table, each one taking up disk space and slowing down writes.

Why this is a problem: It creates the illusion of performance work. You've "done something," but you haven't actually solved for the query patterns your app uses. Meanwhile, every insert and update now has to maintain all of those indexes, which bloats your database and competes for cache space.
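You can audit this directly: PostgreSQL tracks how big each index is and how often the planner actually uses it. A sketch against the statistics views (swap in your own table name):

-- Every index on a table, with its size and usage count.
-- An index whose idx_scan is still 0 after weeks in production
-- is pure write overhead.
SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'orders'
ORDER BY idx_scan;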

The Knee-Jerk Index

A query is slow. You look at the WHERE clause and see it filters on user_id and created_at. You add an index on both columns and call it a day.

Why this is a problem: You're shooting in the dark. You might get lucky, but you probably won't. Without understanding how PostgreSQL is actually executing the query, you're just adding more bloat and hoping for the best.

The Right Way: Reproduce, Analyze, Fix, Verify

Here's the process I use every time I need to fix a slow query.

Step 1: Reproduce the Problem

First, you need to isolate the exact query that's causing the issue. If you're using an APM tool like New Relic or Skylight, you can usually grab the SQL directly from the transaction trace. If not, you can log the query in development or staging.
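If you need to capture it yourself, ActiveRecord will hand you the exact SQL without executing anything. A quick sketch (Order here is a stand-in model):

# .to_sql returns the SQL a relation would execute, without running it.
puts Order.where(status: 'pending').limit(50).to_sql

# Or send every query in a console session to stdout:
ActiveRecord::Base.logger = Logger.new($stdout)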

Once you have the query, run it in the production environment if at all possible. Use a read replica if you're nervous about impacting customers. The production dataset is what matters—query performance on 100 rows in development tells you nothing about 10 million rows in prod.

Here's an example of a slow query I recently debugged:

# ActiveRecord query
Order.where(user_id: user_id, status: 'pending')
     .where('created_at > ?', 30.days.ago)
     .order(created_at: :desc)
     .limit(50)

This translated to SQL like:

SELECT "orders".*
FROM "orders"
WHERE "orders"."user_id" = 12345
  AND "orders"."status" = 'pending'
  AND (created_at > '2024-11-02 00:00:00')
ORDER BY "orders"."created_at" DESC
LIMIT 50

In production, this query was taking about 8 seconds. Unacceptable.

Step 2: Run EXPLAIN ANALYZE

This is where the real work happens. PostgreSQL's EXPLAIN ANALYZE command shows you exactly how the database is executing your query. It's not a guess—it's the truth.

Run it like this:

EXPLAIN ANALYZE
SELECT "orders".*
FROM "orders"
WHERE "orders"."user_id" = 12345
  AND "orders"."status" = 'pending'
  AND (created_at > '2024-11-02 00:00:00')
ORDER BY "orders"."created_at" DESC
LIMIT 50;
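If you'd rather stay in a Rails console, ActiveRecord relations can run the same plan for you. A sketch, with one caveat: passing options like :analyze through to PostgreSQL requires Rails 7.1 or newer.

# Plain .explain works on older Rails versions, but without timings;
# .explain(:analyze) forwards the option to PostgreSQL on Rails 7.1+.
Order.where(user_id: user_id, status: 'pending')
     .where('created_at > ?', 30.days.ago)
     .order(created_at: :desc)
     .limit(50)
     .explain(:analyze)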

Either way, you'll get output that looks like this:

Limit  (cost=0.00..1234.56 rows=50 width=200) (actual time=8234.123..8234.456 rows=50 loops=1)
  ->  Sort  (cost=0.00..5678.90 rows=12345 width=200) (actual time=8234.100..8234.200 rows=50 loops=1)
        Sort Key: created_at DESC
        ->  Seq Scan on orders  (cost=0.00..12345.67 rows=12345 width=200) (actual time=0.123..8230.456 rows=12345 loops=1)
              Filter: ((user_id = 12345) AND (status = 'pending') AND (created_at > '2024-11-02 00:00:00'))
              Rows Removed by Filter: 9876543
Planning Time: 1.234 ms
Execution Time: 8234.567 ms

The red flag here is the Seq Scan (sequential scan). This means PostgreSQL is reading every single row in the table to find the ones that match your WHERE clause. On a table with millions of rows, that's a disaster. The Rows Removed by Filter line confirms it: PostgreSQL churned through nearly 10 million rows to return 50.

Tip
If you're not comfortable reading EXPLAIN ANALYZE output, paste it into ChatGPT or Claude and ask for an explanation. AI tools are surprisingly good at identifying performance bottlenecks in query plans.

Step 3: Add the Right Index

Now that you know what's slow (the sequential scan), you can fix it with a targeted index.

In this case, we're filtering on user_id, status, and created_at, and then sorting by created_at. That means we need a compound index that covers all three columns, in the right order.

Here's the migration:

class AddIndexToOrdersForUserPendingQueries < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    add_index :orders,
              [:user_id, :status, :created_at],
              algorithm: :concurrently,
              name: 'index_orders_on_user_status_created'
  end
end

⚠️ Warning
Use algorithm: :concurrently and disable DDL transactions. This prevents PostgreSQL from locking the table while it builds the index. On a large production table, a regular index creation can lock writes for minutes or even hours. A concurrent index takes longer to build, but it doesn't block your app.
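One related gotcha: if a concurrent build fails partway through (a timeout, a deploy rollback), PostgreSQL leaves behind an invalid index that consumes space but is never used. You can check for leftovers before retrying:

-- Concurrent index builds that fail leave an INVALID index behind;
-- drop it before re-running the migration.
SELECT c.relname AS index_name
FROM pg_class c
JOIN pg_index i ON i.indexrelid = c.oid
WHERE NOT i.indisvalid;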

Why this index works: PostgreSQL can now use the index to quickly find all rows matching user_id and status, and because created_at is part of the index, it can also use it for sorting. This turns a full table scan into an index scan.

A few notes on compound indexes:

  • Order matters. Put equality filters first (user_id, status), then the range/sort column last (created_at). PostgreSQL walks the leading columns for exact matches, and can then use the trailing column for both the range filter and the ORDER BY.
  • Don't over-index. If this table is heavily written to, every index slows down inserts and updates. Be strategic.
  • Consider partial indexes. If you're only ever querying status = 'pending', you can add a where: "status = 'pending'" clause to the index to save space; see the sketch below.
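Here's what that partial version might look like, as a sketch (the migration and index names are illustrative). Because the index only contains pending rows, status drops out of the key entirely:

class AddPendingOrdersIndex < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    # Only rows with status = 'pending' are stored in this index,
    # so status no longer needs to be a key column. The index stays
    # smaller, and writes to non-pending orders never touch it.
    add_index :orders,
              [:user_id, :created_at],
              where: "status = 'pending'",
              algorithm: :concurrently,
              name: 'index_orders_on_user_created_pending'
  end
end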

Step 4: Verify the Fix

After the index is built, run the query again and check the difference.

Before:

  • Execution Time: 8234 ms (~8 seconds)
  • Seq Scan on millions of rows

After:

  • Execution Time: ~20 ms
  • Index Scan using index_orders_on_user_status_created

This is the fun part. A 400x performance improvement from a single index.

Run EXPLAIN ANALYZE again to confirm PostgreSQL is using the new index:

Limit  (cost=0.56..12.34 rows=50 width=200) (actual time=0.123..18.456 rows=50 loops=1)
  ->  Index Scan using index_orders_on_user_status_created on orders  (cost=0.56..123.45 rows=12345 width=200) (actual time=0.120..18.400 rows=50 loops=1)
        Index Cond: ((user_id = 12345) AND (status = 'pending') AND (created_at > '2024-11-02 00:00:00'))
Planning Time: 0.234 ms
Execution Time: 18.567 ms

Perfect. The Seq Scan is gone, replaced by an Index Scan. Query time is now measured in milliseconds, not seconds.

Summary

Fixing slow ActiveRecord queries isn't about adding more indexes—it's about adding the right indexes. Here's the process:

  1. Reproduce the slow query in production or a production-like environment.
  2. Run EXPLAIN ANALYZE to see exactly how PostgreSQL is executing it.
  3. Identify the bottleneck, usually a sequential scan on a large table.
  4. Add a targeted compound index that matches your query's filters and sort order. Use algorithm: :concurrently.
  5. Verify the fix with another EXPLAIN ANALYZE and measure the performance improvement.

Don't guess. Don't add indexes blindly. Use the tools PostgreSQL gives you to understand the problem, then fix it with precision.

And when you see that execution time drop from 8 seconds to 20 milliseconds? That's the good stuff.
