Collecting and Storing Poor Performing SQL Server Queries for Analysis
Problem
In an ideal world all of our queries would be optimized before they ever make it to a production SQL Server environment, but this is not always the case. Smaller data sets, different hardware, schema differences, and so on all affect the way our queries perform. This tip will look at a method of automatically collecting and storing poor performing SQL statements so they can be analyzed at a later date.
Solution
With the new Dynamic Management Views and functions available starting in SQL Server 2005, capturing information regarding the performance of your SQL queries is a pretty straightforward task. The following view and functions give you all the information you need to determine how the SQL in your cache is performing: sys.dm_exec_query_stats, sys.dm_exec_sql_text and sys.dm_exec_query_plan.
Using the view and functions above we can create a query that will pull out all the SQL queries that are currently in the cache. Along with the query text and plan, we can also extract some important statistics on the performance of each query and the resources it used during execution. Here is the query:
-- Note: sys.dm_exec_query_stats reports total_worker_time and total_elapsed_time
-- in microseconds, so divide by 1000 if you need the averages in milliseconds.
-- The thresholds in the WHERE clause are starting points; tune them for your system.
SELECT TOP 20
    GETDATE() AS "Collection Date",
    qs.execution_count AS "Execution Count",
    SUBSTRING(qt.text, qs.statement_start_offset/2 + 1,
        (CASE WHEN qs.statement_end_offset = -1
              THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
              ELSE qs.statement_end_offset
         END - qs.statement_start_offset)/2) AS "Query Text",
    DB_NAME(qt.dbid) AS "DB Name",
    qs.total_worker_time AS "Total CPU Time",
    qs.total_worker_time/qs.execution_count AS "Avg CPU Time (ms)",
    qs.total_physical_reads AS "Total Physical Reads",
    qs.total_physical_reads/qs.execution_count AS "Avg Physical Reads",
    qs.total_logical_reads AS "Total Logical Reads",
    qs.total_logical_reads/qs.execution_count AS "Avg Logical Reads",
    qs.total_logical_writes AS "Total Logical Writes",
    qs.total_logical_writes/qs.execution_count AS "Avg Logical Writes",
    qs.total_elapsed_time AS "Total Duration",
    qs.total_elapsed_time/qs.execution_count AS "Avg Duration (ms)",
    qp.query_plan AS "Plan"
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qs.execution_count > 50
   OR qs.total_worker_time/qs.execution_count > 100
   OR qs.total_physical_reads/qs.execution_count > 1000
   OR qs.total_logical_reads/qs.execution_count > 1000
   OR qs.total_logical_writes/qs.execution_count > 1000
   OR qs.total_elapsed_time/qs.execution_count > 1000
ORDER BY qs.execution_count DESC,
         qs.total_elapsed_time/qs.execution_count DESC,
         qs.total_worker_time/qs.execution_count DESC,
         qs.total_physical_reads/qs.execution_count DESC,
         qs.total_logical_reads/qs.execution_count DESC,
         qs.total_logical_writes/qs.execution_count DESC
Also, if you run this query in SQL Server Management Studio, clicking the XML in the "Plan" column will display the execution plan in graphical format in a new tab.
To store these results for later analysis, we first need a table to hold the captured data. For this example the table lives in a utility database named [DBA] and is called [My_Poor_Query_Cache].
CREATE TABLE [DBA].[dbo].[My_Poor_Query_Cache] (
    [Collection Date]      [datetime] NOT NULL,
    [Execution Count]      [bigint] NULL,
    [Query Text]           [nvarchar](max) NULL,
    [DB Name]              [sysname] NULL,
    [Total CPU Time]       [bigint] NULL,
    [Avg CPU Time (ms)]    [bigint] NULL,
    [Total Physical Reads] [bigint] NULL,
    [Avg Physical Reads]   [bigint] NULL,
    [Total Logical Reads]  [bigint] NULL,
    [Avg Logical Reads]    [bigint] NULL,
    [Total Logical Writes] [bigint] NULL,
    [Avg Logical Writes]   [bigint] NULL,
    [Total Duration]       [bigint] NULL,
    [Avg Duration (ms)]    [bigint] NULL,
    [Plan]                 [xml] NULL
) ON [PRIMARY]
GO
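With the table in place, collection is just a matter of wrapping the query from above in an INSERT and running it on a schedule, for example as a SQL Server Agent job that fires every few hours. Here is one way it might look (a sketch, not the only option; the schedule and thresholds are up to you):
-- Capture the current worst offenders from the plan cache into the history table.
-- Column order matches the [My_Poor_Query_Cache] definition above.
INSERT INTO [DBA].[dbo].[My_Poor_Query_Cache]
SELECT TOP 20
    GETDATE(),
    qs.execution_count,
    SUBSTRING(qt.text, qs.statement_start_offset/2 + 1,
        (CASE WHEN qs.statement_end_offset = -1
              THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
              ELSE qs.statement_end_offset
         END - qs.statement_start_offset)/2),
    DB_NAME(qt.dbid),
    qs.total_worker_time,
    qs.total_worker_time/qs.execution_count,
    qs.total_physical_reads,
    qs.total_physical_reads/qs.execution_count,
    qs.total_logical_reads,
    qs.total_logical_reads/qs.execution_count,
    qs.total_logical_writes,
    qs.total_logical_writes/qs.execution_count,
    qs.total_elapsed_time,
    qs.total_elapsed_time/qs.execution_count,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qs.execution_count > 50
   OR qs.total_worker_time/qs.execution_count > 100
   OR qs.total_physical_reads/qs.execution_count > 1000
   OR qs.total_logical_reads/qs.execution_count > 1000
   OR qs.total_logical_writes/qs.execution_count > 1000
   OR qs.total_elapsed_time/qs.execution_count > 1000
ORDER BY qs.total_elapsed_time/qs.execution_count DESC
GO
Keep in mind that statements drop out of sys.dm_exec_query_stats when their plans are evicted from the cache, which is exactly why capturing snapshots on a schedule is useful: the history survives cache churn and server restarts.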
That's it. Now, whenever you have spare time you can query this table and start tuning.
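As a starting point for that analysis, a query like the following (a sketch against the table above; the aggregation choices are illustrative) surfaces the statements that keep showing up with the worst average duration:
-- Rank captured statements by their worst observed average duration.
-- "Times Captured" is a rough signal of a recurring problem rather than a one-off.
SELECT [Query Text],
       [DB Name],
       COUNT(*)                 AS "Times Captured",
       MAX([Execution Count])   AS "Max Execution Count",
       MAX([Avg Duration (ms)]) AS "Worst Avg Duration",
       MAX([Avg Logical Reads]) AS "Worst Avg Logical Reads"
FROM [DBA].[dbo].[My_Poor_Query_Cache]
GROUP BY [Query Text], [DB Name]
ORDER BY MAX([Avg Duration (ms)]) DESC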