Database Profiling and Tuning
Most Rails applications use Active Record, a simple object-relational mapping (ORM) strategy that maps tables to classes, rows to objects, and columns to object attributes. Like any other ORM implementation, Active Record makes a lot of data-related tasks easy, but it's also easy to run into performance problems if you treat the database as little more than a junk drawer. Many common database-related bottlenecks are the result of slow queries or the application executing too many queries.
As fast and powerful as modern relational database engines are, your database won't perform well without monitoring and an occasional tune-up.
Many databases provide a "slow query log," which logs every query that takes more than a specified amount of time. Two seconds is a good place to start to find the worst offenders, but you should soon be looking for queries that take as little as 250 milliseconds (0.25s). Another option is to use a profiler or simply tail the log, examining the execution time of each query.
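In MySQL, for example, the slow query log can be enabled with a few server settings; a sketch of the relevant my.cnf entries (the log file path is illustrative, and option names vary between MySQL versions, so check your server's documentation):

```ini
# my.cnf -- enable the slow query log
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 0.25   # log any query slower than 250 ms
```

Starting with a higher `long_query_time` (such as 2) and ratcheting it down keeps the log manageable while you work through the worst offenders.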
Why is the query slow? There's one way to find out for sure: use your database's "explain plan" feature to see how the query is being processed. Chances are you'll find a full-table scan (sometimes called a "sequential scan") that you could fix by adding an index on some (or all) of the columns that you're filtering or ordering by. You might also find that an index is already being used, but a better one could be added.
There are too many variables for me to tell you what index would be best, so I encourage you to experiment and research the indexing strategies available in your choice of database. Add an index that you think will help and run the explain plan again. Try modifying the query to obtain the results in a slightly different, more efficient way.
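As an illustration of this workflow, here is what the experiment might look like in PostgreSQL; the table, column, and cost figures are hypothetical, and the plan output shown in comments is representative rather than exact:

```sql
-- See how the database executes the query
EXPLAIN SELECT * FROM comments WHERE post_id = 43;
--   Seq Scan on comments  (cost=0.00..155.00 rows=50 width=64)
--     Filter: (post_id = 43)

-- Add an index on the filtered column, then check the plan again
CREATE INDEX index_comments_on_post_id ON comments (post_id);
EXPLAIN SELECT * FROM comments WHERE post_id = 43;
--   Index Scan using index_comments_on_post_id on comments  (...)
```

When the plan switches from a sequential scan to an index scan, you know the index is actually being used; if it doesn't, the experiment cost you nothing but a `DROP INDEX`.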
When you've tuned the query and found the right index, apply it to your project by modifying the code that builds the query and create a migration to add the index.
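In a Rails application, the index belongs in a migration so every environment picks it up. A sketch, assuming the comments example that follows (the class name and migration style will depend on your Rails version):

```ruby
class AddIndexToCommentsOnPostId < ActiveRecord::Migration
  def self.up
    add_index :comments, :post_id
  end

  def self.down
    remove_index :comments, :post_id
  end
end
```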
Now consider a blog post, which can have many comments, each comment by a different user. Scanning the log file, you might find a query like this:
User Load (0.000366) SELECT * FROM "users" WHERE ("users"."id"=7)
This log entry shows that the User model was used to load the user with ID 7 from the database, and the query took 0.000366 seconds, which is quite fast. However, a red flag should go up if you see a series of similar queries like this:
Comment Load (0.004620) SELECT * FROM "comments" WHERE ("comments"."post_id" = 43)
User Load (0.000366) SELECT * FROM "users" WHERE ("users"."id"=7)
User Load (0.000306) SELECT * FROM "users" WHERE ("users"."id"=3)
User Load (0.000426) SELECT * FROM "users" WHERE ("users"."id"=9)
User Load (0.000378) SELECT * FROM "users" WHERE ("users"."id"=5)
User Load (0.000452) SELECT * FROM "users" WHERE ("users"."id"=8)
This pattern indicates an n+1 problem. The list of comments for a blog post is fetched in one query, but fetching the user who wrote each comment (presumably in a loop) is n more queries (where n is the number of comments), so a post commented on by 50 readers will require no fewer than 51 queries to fetch all of the data. Even if each of those queries is relatively fast, they can quickly add up. The following code demonstrates the problem:
# posts_controller.rb
def show
  @post = Post.find params[:id]
end

# view - show.html.erb
<% for comment in @post.comments %>
  <p>
    <%=h comment.user.name %> says...<br />
    <%=h comment.text %>
  </p>
<% end %>
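The usual cure is eager loading: tell Active Record up front which associations the view will need, so they are fetched in a handful of queries rather than one per comment. A sketch using the classic :include option (newer Rails versions spell this includes):

```ruby
# posts_controller.rb
def show
  # One query for the post, one for its comments, and one for all
  # of the commenting users -- three queries no matter how many
  # comments the post has
  @post = Post.find params[:id], :include => { :comments => :user }
end
```

The view code does not change at all; `comment.user` now reads from the already-loaded objects instead of issuing a fresh query.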