After you've downloaded the Slow Query log for your site, you may find that it's unlike some of your other logs. We've included information here about the different pieces of the slow query log and how to read them.
Information about the log
The first part of a slow query log is the digest information.
# 1.6s user time, 20ms system time, 32.08M rss, 61.39M vsz
# Current date: Tue Aug 7 13:10:57 2012
# Hostname: staging-1047.prod.hosting.acquia.com
# Files: mysqld-slow.log
# Overall: 3.16k total, 404 unique, 0.14 QPS, 0.80x concurrency __________
# Attribute total min max avg 95% stddev median
# ========= ======= ======= ======= ======= ======= ======= =======
# Exec time 18105s 1s 935s 6s 12s 28s 2s
# Lock time 29s 0 2s 9ms 4ms 67ms 0
# Rows sent 62.37M 0 6.03M 20.19k 33.17k 196.65k 0.99
# Rows exam 73.95M 0 6.03M 23.94k 92.42k 197.17k 0.99
# Time range 2012-08-07 06:47:17 to 2012-08-07 13:05:53
# bytes 3.54M 11 1.30M 1.15k 537.02 32.70k 88.31
# Rows affe 89.04k 0 3.11k 28.83 0.99 250.23 0
# Rows read 75.64M 0 6.03M 24.49k 97.04k 198.56k 0.99
The log includes the following items (with specific references from the example):
- When was it run?
Tue Aug 7 13:10:57 2012
- On what hostname?
staging-1047.prod.hosting.acquia.com
- How many lines were processed?
3.16k total, 404 unique
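The items above can be pulled out of a digest header mechanically. Here's a minimal Python sketch (the `parse_digest_header` helper is hypothetical, not part of pt-query-digest) that extracts the date, hostname, and query counts from a header like the one shown:

```python
import re

# Digest header lines copied from the example above.
header = """\
# Current date: Tue Aug 7 13:10:57 2012
# Hostname: staging-1047.prod.hosting.acquia.com
# Overall: 3.16k total, 404 unique, 0.14 QPS, 0.80x concurrency __________
"""

def parse_digest_header(text):
    """Extract date, hostname, and query counts from a digest header."""
    info = {}
    for line in text.splitlines():
        if line.startswith("# Current date:"):
            # Split only on the first colon; the timestamp contains colons.
            info["date"] = line.split(":", 1)[1].strip()
        elif line.startswith("# Hostname:"):
            info["hostname"] = line.split(":", 1)[1].strip()
        elif line.startswith("# Overall:"):
            m = re.search(r"([\d.]+k?) total, (\d+) unique", line)
            if m:
                info["total"], info["unique"] = m.group(1), m.group(2)
    return info

print(parse_digest_header(header))
```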
Identifying problems
The next section provides an overview of what queries were the slowest. They're ranked in descending order, with the worst offender having a Rank of 1:
# Profile
# Rank Query ID Response time Calls R/Call Item
# ==== ================== ================ ===== ======== ================
# 1 0x67A347A2812914DF 3689.6381 20.4% 351 10.5118 SELECT watchdog
# 2 0x2FA97A9508A98849 3337.4335 18.4% 18 185.4130 SELECT watchdog
# 3 0x13071EB1C98EA90B 1356.7068 7.5% 58 23.3915 SELECT variable
# 4 0x30FE5C2032672827 1093.2228 6.0% 804 1.3597 SHOW TABLE STATUS
# 5 0x748DEEF28EA5992D 952.6171 5.3% 8 119.0771 DELETE cache_fpe
# 6 0xF080ABC5341CBEE1 320.7495 1.8% 81 3.9599 UPDATE apachesolr_index_entities_node taxonomy_index
# 7 0x7AF789F9862CBC08 301.1504 1.7% 40 7.5288 SELECT search_total search_index
# 8 0x6BCD96962189B14D 261.2722 1.4% 22 11.8760 SELECT apachesolr_search_node node
# 9 0xCE1FFB25CB1ACF8E 191.9873 1.1% 47 4.0848 SELECT cache_bootstrap
# 10 0xD92183ACD7A21D6A 185.5956 1.0% 43 4.3162 SELECT field_data_body
The remaining column headers are described as follows:
Query ID
- A hash of the query. You're not likely to need this.
Response time
- The total response time for this query, and its percentage of the overall total.
Calls
- The number of times the query was executed.
R/Call
- The mean response time per execution.
Item
- A truncated version of the query.
From this, you can determine that watchdog queries occupy the top two slots, accounting for nearly 40% of the total response time.
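That kind of roll-up can be computed directly from the profile table. The following Python sketch (an illustration, not a pt-query-digest feature) totals the response-time percentage of the watchdog queries from the example profile:

```python
# Rows transcribed from the profile above:
# (rank, query_id, response_pct, calls, item)
profile = [
    (1, "0x67A347A2812914DF", 20.4, 351, "SELECT watchdog"),
    (2, "0x2FA97A9508A98849", 18.4, 18, "SELECT watchdog"),
    (3, "0x13071EB1C98EA90B", 7.5, 58, "SELECT variable"),
]

# Sum the response-time percentage of every watchdog query.
watchdog_pct = sum(pct for _, _, pct, _, item in profile if "watchdog" in item)
print(f"watchdog share of response time: {watchdog_pct:.1f}%")
```

Running this against the first three rows confirms the two watchdog queries together account for 38.8% of the total response time.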
In-depth analysis
After the overview, the script provides an in-depth detail section about each query. Some of it is of interest, while other parts may be less useful. Let's look at the #2 ranked query from the previous section. Because of the amount of output, we'll break it into several parts:
Runtime statistics
One of the first lines in this section provides the column headers for the query's table of information. Pay attention to the following columns:
pct
- (Percentage) The percent of the total for the whole analysis run.
total
- The actual value of the specified metric.
95%
- The 95th percentile; 95% of the values are less than or equal to this value.
stddev
- (Standard deviation) How tightly grouped the values are.
The standard deviation and median are both calculated from the 95th percentile, discarding the extremely large values.
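The 95% column can be illustrated with a short sketch. This Python example (the `pct95` helper is hypothetical and mirrors the stated definition, not pt-query-digest's exact implementation) finds the value below which roughly 95% of a set of execution times fall, and shows how discarding everything above it removes the extreme outlier:

```python
import statistics

# Hypothetical per-execution times in seconds, sorted, with one extreme outlier.
exec_times = [1, 1, 2, 2, 2, 3, 3, 4, 5, 6, 8, 12, 15, 20, 28, 31, 45, 120, 340, 935]

def pct95(values):
    """Return the value at (approximately) the 95th percentile."""
    ordered = sorted(values)
    idx = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[idx]

p95 = pct95(exec_times)
# Discarding values above the 95th percentile removes the 935s outlier,
# so the standard deviation reflects the typical spread.
within = [v for v in exec_times if v <= p95]
print(p95, statistics.pstdev(within))
```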
# Query 2: 0.00 QPS, 0.15x concurrency, ID 0x2FA97A9508A98849 at byte 1909177
# This item is included in the report because it matches --limit.
# Attribute pct total min max avg 95% stddev median
# ========= ====== ======= ======= ======= ======= ======= ======= =======
# Count 0 18
# Exec time 18 3337s 1s 652s 185s 314s 145s 167s
# Lock time 0 56ms 0 56ms 3ms 0 13ms 0
# Rows sent 0 0 0 0 0 0 0 0
# Rows exam 1 1.34M 1003 97.67k 76.19k 97.04k 39.91k 97.04k
You can also examine the ratio of Rows sent to Rows exam (rows examined) as an indicator of query problems. Ideally, these numbers should be close to a 1:1 ratio. A large number of rows examined but a small number of rows sent means that the query is reading much more data than it returns, which may indicate an inefficient query. Indexing may help with queries like this.
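The check described above can be sketched in a few lines of Python. The `examine_ratio` helper and the threshold of 100 are illustrative assumptions, not part of pt-query-digest:

```python
def examine_ratio(rows_examined, rows_sent):
    """Return rows examined per row sent (None when nothing was sent)."""
    if rows_sent == 0:
        return None
    return rows_examined / rows_sent

# Values loosely based on the "Rows exam" average in the excerpt above.
ratio = examine_ratio(rows_examined=76_190, rows_sent=10)
if ratio is None or ratio > 100:  # threshold is an arbitrary example
    print("query examines far more rows than it sends; consider an index")
```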
Users, databases, and time range
The next section displays statistics regarding the users, databases, and time range for the query. The users and databases are displayed as a count of distinct values, followed by the values. If there's only one, it's displayed alone; if there are many, the section displays each of the most frequent ones, followed by the number of times that it appears:
# Users 3 website1... (14/77%), websitestg (3/16%)... 1 more
# Hosts 1 staging-1047.prod.hosting.acquia.com
# Databases 6 website1add... (7/38%), website1add... (7/38%)... 4 more
# Time range 2012-08-07 07:03:41 to 2012-08-07 13:05:51
# bytes 0 6.82k 388 388 388 388 0 388
# Rows affe 0 0 0 0 0 0 0 0
# Rows read 0 0 0 0 0 0 0 0
Visual breakdown of execution time
The log then displays a graph that provides a visual breakdown of the amount of time this query typically needed to complete. This query sometimes needed only approximately one second to run, but generally needed over 10 seconds to complete. The one-second mark is where Acquia Support generally starts looking for issues with queries, as queries at this level can begin to affect performance.
# Query_time distribution
# 1us
# 10us
# 100us
# 1ms
# 10ms
# 100ms
# 1s ##################
# 10s+ ################################################################
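Each row of the graph above is a power-of-ten bucket. This Python sketch (a simplified illustration of the bucketing idea, not pt-query-digest's code) shows which bucket a given execution time lands in:

```python
import math

def bucket(seconds):
    """Label the power-of-ten bucket a query time falls into (capped at 10s+)."""
    if seconds >= 10:
        return "10s+"
    # floor(log10) gives the exponent of the bucket's lower bound.
    exp = math.floor(math.log10(seconds))
    labels = {-6: "1us", -5: "10us", -4: "100us", -3: "1ms",
              -2: "10ms", -1: "100ms", 0: "1s"}
    return labels[exp]

print(bucket(0.002), bucket(1.4), bucket(185))
```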
Query review
Finally, the log includes the query that was problematic. The \G at the end of each statement instructs the mysql client to display the results vertically, one column per line, for easier reading. The lines beginning with # are comments, and are not part of the query being examined.
Here's how an example query would be presented:
# Tables
# SHOW TABLE STATUS FROM `website1addb2886` LIKE 'watchdog'\G
# SHOW CREATE TABLE `website1addb2886`.`watchdog`\G
# EXPLAIN /*!50100 PARTITIONS*/
SELECT w.message AS message, w.variables AS variables
FROM
watchdog w
WHERE (w.message LIKE 'login attempty failed%' ESCAPE '\\') AND (w.timestamp > '1344294002')
AND (w.message NOT IN ('UPGRADE.txt', 'MAINTAINERS.txt', 'README.txt', 'INSTALL.pgsql.txt',
'INSTALL.txt', 'LICENSE.txt', 'INSTALL.mysql.txt', 'COPYRIGHT.txt', 'CHANGELOG.txt'))
ORDER BY w.timestamp DESC
LIMIT 10 OFFSET 0\G
Slow queries grouped by database
You can also use pt-query-digest to provide output that is grouped by database:
pt-query-digest --limit 100% --group-by db slow.log > slow-db.txt
This returns information similar to what's been previously described on this page, with an added Item field that identifies the database.
# Profile
# Rank Query ID Response time Calls R/Call Item
# ==== ======== ================== ====== ======== ===============
# 1 0x 2128815.3618 76.9% 161869 13.1515 db1
# 2 0x 289770.3525 10.5% 10281 28.1850 db2
# 3 0x 156921.2035 5.7% 5567 28.1877 db3
# 4 0x 50476.0517 1.8% 1151 43.8541 db4