One of the great features of application performance management (APM) tools is the ability to track SQL queries. For example, Retrace tracks SQL queries across multiple database providers, including SQL Server. Retrace tells you how many times a query was executed, how long it takes on average, and what transactions called it.
Query performance also depends on data volume and transaction concurrency. Executing the same query on a table with millions of records requires more time than performing the same operation on a table with only thousands of records.
Milena is a SQL Server professional with more than 20 years of experience in IT. She started programming in high school and continued at university. She has been working with SQL Server since 2005 and has experience with SQL Server 2000 through SQL Server 2014. Her favorite SQL Server topics are disaster recovery, auditing, and performance monitoring.
Whenever I find that data retrieval from my database is slow, I try to figure out which part of my SQL query has the problem, optimize it, and add some indexes to the table. But this does not always solve the problem.
Monitoring and diagnosing SQL Server performance requires not only tracking performance metric values, but also understanding these metrics and their relation to other metrics, knowing their normal values, monitoring resource-intensive processes and queries, and so on.
Resource-intensive processes consume a lot of processor time, memory, and disk while they execute. Finding them is necessary for performance monitoring and tuning. The next step is to analyze these expensive SQL Server queries and optimize them where possible.
A SQL query's cost shows how many resources (processor time, memory, and disk) the query used during its execution. An expensive query uses a lot of processor time and memory, and performs many I/O operations. Therefore, a query's cost can be analyzed from a processor, memory, and disk perspective.
A slow or long-running query holds hardware resources for a long time and prevents other queries from using them, which can eventually lead to blocking. A query that executes quickly uses memory, processor, and disk for a short time and quickly releases them, so other queries can use them. Common causes of blocking are bad SQL query execution plans, lack of proper indexes, poor application design, bad SQL Server configuration, etc.
From the context menu, you can open the query in a new Query Editor tab, so you can analyze or modify it, and view the execution plan, which is useful for identifying why the query uses so many resources.
As described, Activity Monitor is available without any additional setup. Besides the commonly monitored performance metrics, it provides a list of recently used queries, their code, and execution plans. These queries are shown in real-time, without an option to save them for later analysis. The grid with the queries can be filtered and ordered. Although Activity Monitor provides sufficient information about expensive queries for query analysis and troubleshooting, other performance parameters necessary in a complete monitoring solution are not available.
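Activity Monitor's list of recent and active queries can also be approximated directly with T-SQL, which is handy when you want to capture the results for later analysis. The following is a minimal sketch, assuming you have the VIEW SERVER STATE permission; it lists currently executing requests, ordered by CPU time:

```sql
-- Currently executing requests, most CPU-intensive first
-- (a rough T-SQL equivalent of Activity Monitor's process view)
SELECT r.session_id,
       r.status,
       r.cpu_time,              -- ms of CPU consumed so far
       r.logical_reads,
       r.total_elapsed_time,    -- ms since the request started
       r.blocking_session_id,   -- non-zero if blocked by another session
       st.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.session_id <> @@SPID    -- exclude this monitoring query itself
ORDER BY r.cpu_time DESC;
```

Unlike Activity Monitor's grid, the result set of this query can be inserted into a table to build your own history of expensive queries.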
Data Collection in SQL Server Management Studio is another native tool that provides performance metrics monitoring and a list of expensive queries. It has three built-in data collection sets: server activity, disk usage, and query statistics. In this article, we will focus on the last one.
Unlike Activity Monitor, it has to be configured to start collecting performance metrics. It uses Management Data Warehouse and SQL Server Agent, and is available in SQL Server 2008 and higher, the Standard, Enterprise, Business Intelligence, and Web editions.
The list shows the 10 most expensive queries. Each query is represented by a link that opens the Query Details report, where the complete query, along with query execution statistics, is shown in tables and graphs. The Edit Query Text link opens the query in a new Query Editor tab, so you can modify it.
Although Data Collection provides enough useful information about recent expensive queries, the built-in data collection sets cannot easily be modified, nor new ones added without coding, so a user is limited to a predefined set of collected metrics. This makes the feature useful only for basic performance monitoring.
One of the most important DMVs regarding your worst performing SQL Server queries is sys.dm_exec_query_stats. For every cached execution plan, SQL Server stores detailed information about how that execution plan performed at runtime. In addition, SQL Server tells you how much CPU time and how much I/O the specific query consumed. This is one of the DMVs that I use on a regular basis when I have to troubleshoot badly performing SQL Server installations.
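The original query is not reproduced here, but a sketch of this kind of CPU-focused query against sys.dm_exec_query_stats might look like the following (the TOP (20) limit is an arbitrary choice for illustration):

```sql
-- Top 20 statements by cumulative CPU (worker) time,
-- with the statement text and the cached execution plan
SELECT TOP (20)
    qs.execution_count,
    qs.total_worker_time,                              -- microseconds
    qs.total_worker_time / qs.execution_count AS avg_worker_time,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)        -- -1 means "to the end"
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;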
As you can see, I just do a simple ORDER BY total_worker_time DESC to get back the most CPU-intensive queries. In addition, I also grab the SQL statement and the execution plan itself by calling the DMFs sys.dm_exec_sql_text and sys.dm_exec_query_plan. The following query shows how to find your worst performing queries regarding I/O consumption.
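Again, the original I/O query is not reproduced here; a sketch in the same spirit simply changes the ordering column to cumulative logical reads and writes:

```sql
-- Top 20 statements by cumulative logical I/O
SELECT TOP (20)
    qs.execution_count,
    qs.total_logical_reads,
    qs.total_logical_writes,
    (qs.total_logical_reads + qs.total_logical_writes)
        / qs.execution_count AS avg_logical_io,
    st.text AS batch_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY (qs.total_logical_reads + qs.total_logical_writes) DESC;
```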
SQL Server is an amazing product: it can immediately give you very good answers to your questions. You only have to know where to search for your answer. Regarding poor performing queries you should always start by analyzing the DMV sys.dm_exec_query_stats, where SQL Server stores runtime statistics about your execution plans.
To tackle performance problems with applications, you first find the queries that constitute a typical workload, using SQL Profiler. Then, from the trace, you find the queries or stored procedures that are having the most impact. After that, it is down to examining the execution plans and query statistics to identify queries that need tuning and indexes that need creating. You then see what effects you've had and maybe repeat the process. Gail explains all, in a two-part article.
The most important data columns are TextData, CPU, Reads, Writes and Duration. Other columns, such as the LoginName, ApplicationName and HostName, may be useful for identifying where the query comes from, but they are not essential for identifying which queries are performing poorly.
A query that runs only once a day and takes minutes to run is, in general, less of an overall problem than one that runs 10 times a second and takes half a second to run each time. The queries that I need to optimise most urgently are the ones that consume the highest cumulative amount of time, or CPU, or number of reads, over the period monitored.
If ad-hoc SQL is involved, calculating the cumulative time, reads or CPU can be extremely non-trivial, as the slightly different query structures, and different values that may be hard-coded within the query, can make it very difficult to identify queries of the same pattern. Fortunately, this application uses just stored procedures, and as such getting the cumulative execution characteristics is relatively easy.
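For a stored-procedure-only workload, the cumulative figures can be aggregated directly from a trace that has been saved to a table. The following is a sketch only; the table name dbo.WorkloadTrace is a hypothetical placeholder, and it assumes the trace captured the standard ObjectName, CPU, Reads, Writes and Duration columns:

```sql
-- Cumulative execution characteristics per stored procedure,
-- from a Profiler trace saved to a table (hypothetical name below)
SELECT
    ObjectName                  AS ProcedureName,
    COUNT(*)                    AS Executions,
    SUM(CPU)                    AS TotalCPUms,
    SUM(Reads)                  AS TotalReads,
    SUM(Writes)                 AS TotalWrites,
    SUM(Duration) / 1000        AS TotalDurationMs  -- Duration is in microseconds
FROM dbo.WorkloadTrace
WHERE ObjectName IS NOT NULL
GROUP BY ObjectName
ORDER BY SUM(Duration) DESC;
```

Sorting the same aggregate by SUM(CPU) or SUM(Reads) gives the CPU-heavy and read-heavy views of the workload mentioned above.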
Gail Shaw, famous for her forum contributions under the pen name 'gilamonster', is from Johannesburg, Gauteng, and works as a database consultant in performance tuning and database optimisation. Before moving to consulting she worked at a large South African investment bank and was responsible for the performance of the major systems there.
In this article, you'll learn how to detect and remove a common cause of SQL Server query performance problems: reliance on implicit datatype conversions. We'll use a combination of plan cache queries, extended events, and SQL Monitor.
If you have good performance testers, all they need to do is find the code that relies on implicit conversions, by running the database through a range of integration tests. You can then look in the plan cache for query plans, from the current database, where there has been an implicit conversion on the table side of the query, as demonstrated by Jonathan Kehayias. This provides all the information you need about the offending queries and columns.
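Kehayias's full plan cache query is considerably more precise (it uses XQuery to pinpoint the converted column); as a much-simplified sketch of the idea, the following searches cached plans for the current database whose plan XML mentions an implicit conversion:

```sql
-- Simplified sketch: cached plans in the current database
-- whose plan XML contains an implicit conversion
SELECT st.text       AS query_text,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE qp.dbid = DB_ID()   -- current database only
  AND CAST(qp.query_plan AS nvarchar(max)) LIKE N'%CONVERT_IMPLICIT%';
```

Note that a simple LIKE match can flag harmless conversions too; the full version restricts the search to conversions on table columns.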
My preferred way to spot this problem is to run an extended events session that captures the sqlserver.plan_affecting_convert event. The great thing about running these sessions is that the places where an implicit conversion has ruined a good execution plan appear instantly when you run the code.
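A minimal sketch of such a session follows; the session name, target file name, and the filtered database name 'MyDatabase' are all hypothetical placeholders:

```sql
-- Capture plan_affecting_convert events to an event file
CREATE EVENT SESSION ImplicitConversions ON SERVER
ADD EVENT sqlserver.plan_affecting_convert
(
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE (sqlserver.database_name = N'MyDatabase')   -- hypothetical database
)
ADD TARGET package0.event_file (SET filename = N'ImplicitConversions');
GO

ALTER EVENT SESSION ImplicitConversions ON SERVER STATE = START;
```

With the session started, run the workload and watch the live data or the event file; each captured event carries the offending statement text.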
If you suspect implicit conversions are a strong contributing factor to performance problems, you might consider setting up a custom metric, using a query such as the following, which returns a count of the number of cached plans, for queries executed in the last 10 minutes that took more than 0.1 seconds to execute and which contain implicit conversion warnings.
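The metric query itself is not reproduced here; a sketch of a query matching that description, counting recently executed cached plans with implicit-conversion warnings, might look like:

```sql
-- Count of cached plans executed in the last 10 minutes, averaging
-- more than 0.1 s elapsed, whose plan carries an implicit-conversion warning
WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT COUNT(*) AS plans_with_implicit_conversions
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qs.last_execution_time > DATEADD(MINUTE, -10, GETDATE())
  AND qs.total_elapsed_time / qs.execution_count > 100000  -- avg > 0.1 s (microseconds)
  AND qp.query_plan.exist(N'//Warnings/PlanAffectingConvert') = 1;
```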
If you see an upward trend or sudden rise in the value of this metric during periods of server slowdown, you can run the following query to list all the queries that contributed to the figure in the custom metric:
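Again as a sketch, the listing version uses the same filter but returns the statements and plans instead of a count:

```sql
-- List the recently executed statements whose plans carry
-- implicit-conversion warnings
WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT st.text AS query_text,
       qp.query_plan,
       qs.execution_count,
       qs.last_execution_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qs.last_execution_time > DATEADD(MINUTE, -10, GETDATE())
  AND qp.query_plan.exist(N'//Warnings/PlanAffectingConvert') = 1
ORDER BY qs.last_execution_time DESC;
```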
This quick tip illustrates SQL Monitor's built-in set of Performance Rules for static code analysis. These rules are designed to highlight SQL syntax that could potentially cause performance problems, and so indicate ways to improve the overall quality and performance of the workload over time.
This article describes how to handle a performance issue that database applications may experience when using SQL Server: slow performance of a specific query or group of queries. The following methodology will help you narrow down the cause of the slow queries issue and direct you towards resolution.
To establish that you have query performance issues on your SQL Server instance, start by examining queries by their execution time (elapsed time). Check whether the time exceeds a threshold you have set (in milliseconds) based on an established performance baseline. For example, in a stress testing environment, you may have established a threshold for your workload of no more than 300 ms, and you can use this threshold. Then you can identify all queries that exceed it, focusing on each individual query and its pre-established baseline duration. Ultimately, business users care about the overall duration of database queries; therefore, the main focus is on execution duration. Other metrics, such as CPU time and logical reads, are gathered to help narrow down the investigation.
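A hedged sketch of this first step, using the cached plan statistics to flag statements whose average elapsed time exceeds the example 300 ms threshold, might look like:

```sql
-- Statements whose average elapsed time exceeds a 300 ms baseline
-- (total_elapsed_time is reported in microseconds)
SELECT
    st.text AS statement_text,
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.total_elapsed_time / qs.execution_count > 300 * 1000  -- 300 ms
ORDER BY avg_elapsed_ms DESC;
```

The 300 ms figure is only the example baseline from the text; substitute whatever threshold your own baseline establishes.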