Category: SQL Server 2008

Jan 23 2017

Data About Execution Plans

If you look at the Properties for the first operator of a graphical execution plan, you get all sorts of great information. I’ve talked about the data available there and how important it is in this older post. Checking out the properties of a plan you’re working on is a fundamental part of tuning that plan. What happens when you don’t know which plan you should be working on? What do you do, for example, if you want to see all the plans that are currently using ARITHABORT=FALSE or some other plan-affecting setting?

The “easy” answer to this question is to run an XQuery against the XML of the query plan itself. You can identify these properties and retrieve the appropriate values from within the plan. However, XQuery consumes quite a bit of resources and you might not want to run this on a production system that’s already under stress. Now what?

sys.dm_exec_plan_attributes

There is a DMV that doesn’t get used much because the information in it frequently doesn’t have a lot of bearing on solving fundamental query tuning issues such as out-of-date statistics, bad or missing indexes, or poorly structured T-SQL. This DMV, sys.dm_exec_plan_attributes, contains a bunch of values that the optimizer uses to identify a plan in cache, such as the object_id (if any), the database_id, and the compatibility level (compat_level). In addition to these clear, easy-to-understand attributes, there’s one more, set_options, that’s not immediately clear.
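
To get a feel for what the DMV returns, you can point it at any plan handle pulled from the cache. Here’s a quick sketch that just grabs an arbitrary plan handle and lists its attributes:

DECLARE @PlanHandle VARBINARY(64);

-- grab an arbitrary plan handle from the cache
SELECT TOP (1) @PlanHandle = decp.plan_handle
FROM sys.dm_exec_cached_plans AS decp;

-- list the attributes the optimizer stored with that plan
SELECT depa.attribute,
       depa.value,
       depa.is_cache_key
FROM sys.dm_exec_plan_attributes(@PlanHandle) AS depa;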

set_options

Follow the link above and you’ll find that the set_options column is a bitmask. It packs a number of settings into a single value. I won’t argue whether this is a good (or bad) design. That’s what it is. The question is, how do we use it? Here’s a simple query that shows all the queries that have ANSI_WARNINGS set to true:

SELECT detqp.query_plan,
       depa.attribute,
       depa.value
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_text_query_plan(
                                          deqs.plan_handle,
                                          deqs.statement_start_offset,
                                          deqs.statement_end_offset
                                       ) AS detqp
CROSS APPLY sys.dm_exec_plan_attributes(deqs.plan_handle) AS depa
WHERE depa.attribute = 'set_options'
      AND (CAST(depa.value AS INT) & 16) = 16;

If you were looking for queries that didn’t have ANSI_WARNINGS enabled, you could change the comparison so that the result of the bitwise AND has to equal 0 instead of 16. Use the bit values from the documentation link above to check on the other settings in the same way.
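
For example, to answer the ARITHABORT question from the top of the post, the same pattern applies. If I’m reading the documentation correctly, ARITHABORT maps to the 4096 bit (treat that value as an assumption and verify it against the docs), so plans compiled with the setting off would look like this:

-- plans compiled with ARITHABORT off (assuming the documented bit value of 4096)
SELECT detqp.query_plan,
       depa.attribute,
       depa.value
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_text_query_plan(
                                          deqs.plan_handle,
                                          deqs.statement_start_offset,
                                          deqs.statement_end_offset
                                       ) AS detqp
CROSS APPLY sys.dm_exec_plan_attributes(deqs.plan_handle) AS depa
WHERE depa.attribute = 'set_options'
      AND (CAST(depa.value AS INT) & 4096) = 0;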

NOTE: One of the values is ‘Parallel’. When I was investigating this, I became very excited that this would be a way to programmatically identify parallel execution plans. However, like the other attributes, it reflects how a plan could be compiled, not how it actually was. Seeing the parallel bit set here just means that a given plan could be parallel, not that it is.

Conclusion

You don’t want to be completely dependent on the query plan when it comes to investigating and identifying problem queries. Instead, you want a systematic approach. Using sys.dm_exec_plan_attributes, you can query for this kind of information about your queries.

 

Jan 10 2017

Query Optimizer and Data Definition Language Queries

Data Definition Language queries don’t go through the optimizer, right? Normally, my short answer to this question in the past would have been yes. However, with testing comes knowledge, and I want to be sure about that statement.

I’m working with a team of people to completely rewrite the SQL Server Execution Plans book. It will probably be published in April or May. It’s going to be amazing. The structure will be completely different, and the book will represent five years of additional knowledge about how execution plans work and how to read and interpret them, gained since the last edition was written. However, enough on that. Let’s answer the question about Data Definition Language.

First of all, we need to quickly define our terms. Data Definition Language (DDL) is the syntax for queries that build your databases and the data structures within them. If you read the documentation at MSDN, you’ll see that this includes CREATE, ALTER, DROP, ENABLE/DISABLE TRIGGER, TRUNCATE TABLE and UPDATE STATISTICS. These commands are separate from the Data Manipulation Language (DML) queries that are used to SELECT, UPDATE, DELETE and INSERT your data (amongst other things).

Testing the Data Definition Language Query

Here is an example of one of the simplest possible DDL queries:

CREATE TABLE dbo.MyNewTable (  MyNewTableID INT PRIMARY KEY IDENTITY(1, 1),
                               MyNewValue NVARCHAR(50)
                            );
GO

We can check whether or not you’re going to get an execution plan for this in two easy ways: capture the estimated plan, or run the query and capture the actual plan. Since it’s usually preferable to work with the added runtime information that an actual plan provides, let’s start there. However, if you execute the query above and try to capture an actual plan, you will not get one. So, let’s capture the estimated plan instead. Here’s what we see:

[Figure: the estimated plan]

Not much of a plan really. The devil is in the details though, so let’s open the Properties page (always use the Properties page; the ToolTip just doesn’t display enough information):

[Figure: properties of the Data Definition Language plan]

Those are ALL the properties that are available. The implication is simple: SQL Server doesn’t create execution plans for DDL statements.

Further, the lack of a plan suggests that these queries must not go through the query optimization process. We can check this. There is a Dynamic Management View (DMV), sys.dm_exec_query_optimizer_info, that shows aggregate values for the work done by the optimizer. It’s a little tricky to capture the metrics of a single query, but it is possible (NOTE: don’t run this on production, please):

DBCC FREEPROCCACHE();
GO

-- compile and immediately drop the SELECT ... INTO once up front so that
-- its own optimization doesn't show up in the comparison below
SELECT *
INTO OpInfoAfter
FROM sys.dm_exec_query_optimizer_info AS deqoi;
GO

DROP TABLE OpInfoAfter;
GO

--gather the existing optimizer information
SELECT *
INTO OpInfoBefore
FROM sys.dm_exec_query_optimizer_info AS deqoi;
GO

--run a query
CREATE TABLE dbo.MyNewTable (  MyNewTableID INT PRIMARY KEY IDENTITY(1, 1),
                               MyNewValue NVARCHAR(50)
                            );
GO

SELECT *
INTO OpInfoAfter
FROM sys.dm_exec_query_optimizer_info AS deqoi;
GO

--display the data that has changed
SELECT oia.counter,
       (oia.occurrence - oib.occurrence) AS ActualOccurence,
       (oia.occurrence * oia.value - oib.occurrence * oib.value) AS ActualValue
FROM OpInfoBefore AS oib
JOIN OpInfoAfter AS oia
   ON oib.counter = oia.counter
WHERE oia.occurrence <> oib.occurrence;
GO

DROP TABLE OpInfoBefore;
DROP TABLE OpInfoAfter;
GO

If we run this, we won’t see any values. This is because this CREATE TABLE statement doesn’t go through the optimizer. There’s no optimization possible, so you won’t see an execution plan of any kind.

Does this mean that my first answer is correct then? Does the optimizer skip making execution plans for DDL statements?

Changing the Data Definition Language Query

Let’s modify the query so that we’re doing an ALTER table instead of a CREATE:

ALTER TABLE Sales.Customer  WITH CHECK ADD  CONSTRAINT SomeTest FOREIGN KEY(CustomerID)
REFERENCES [dbo].[Agent] ([AgentId]);

If we check the estimated and actual plans using the same methods as above, we don’t get an execution plan (well, the estimated plan is the same T-SQL placeholder that we saw). What happens if we look at what the optimizer did? This time, we get results:

counter ActualOccurence ActualValue
elapsed time 2 0
final cost 2 0.139429282
insert stmt 1 1
maximum DOP 2 0
optimizations 2 2
search 1 2 2
search 1 tasks 2 117
search 1 time 2 0
tables 2 2
tasks 2 117
view reference 1 1

I’ll tell you right now, I don’t know what this represents. I suspect I’d need to run the debugger to see it. Maybe it’s a plan for the CHECK process of the check constraint. The optimizer is involved in this process twice: two different statements were optimized, one involving inserting data and one involving referencing a view. However, at the end of the process, we still don’t see an execution plan displayed.

How About Extended Events

Can we capture the events to see what’s happening? We can try. I’ll spin up an Extended Events session and capture the following events (a rough sketch of the session definition follows the list):

query_post_execution_showplan
sql_statement_completed
sp_statement_completed
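
Here’s a rough sketch of what that session definition could look like in T-SQL (the session name and the ring_buffer target are my choices, not details from the original setup; query_post_execution_showplan is expensive, so don’t leave this running on a busy server):

CREATE EVENT SESSION QueryBehavior
ON SERVER
    ADD EVENT sqlserver.query_post_execution_showplan,
    ADD EVENT sqlserver.sp_statement_completed,
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
WITH (TRACK_CAUSALITY = ON); -- causality tracking provides the event correlation discussed below
GO

ALTER EVENT SESSION QueryBehavior ON SERVER STATE = START;
GO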

In addition, I’ll enable event correlation so that we can see the order in which events occurred. Now, I’ll try running the ALTER TABLE command to see what we can find out:

name statement attach_activity_id.guid attach_activity_id.seq
query_post_execution_showplan NULL 65A0A74B-E5D5-460C-846D-87808C334283 1
sp_statement_completed SELECT * FROM [Sales].[Customer] 65A0A74B-E5D5-460C-846D-87808C334283 2
sql_statement_completed ALTER TABLE Sales.Customer  WITH CHECK ADD  CONSTRAINT SomeTest FOREIGN KEY(CustomerID)  REFERENCES [dbo].[Agent] ([AgentId]) 65A0A74B-E5D5-460C-846D-87808C334283 3

And there we have it. We can see that a plan was compiled first, then a procedure call was made with a single statement: SELECT * FROM Sales.Customer. Finally, the SQL batch statement that ran the ALTER TABLE completed. Everything is grouped by the activity GUID, and the events are in the sequence determined by attach_activity_id.seq. Here’s the execution plan:

[Figure: the execution plan for the constraint validation]

In short, there was a plan generated here, but it is, as I guessed, the plan for the check constraint validation process. There is no plan for the Data Definition Language statement itself.

Conclusion

Just saying something doesn’t make it true. You have to be able to prove it. From all the evidence that Microsoft supplies us, DDL statements do not generate an execution plan or go through the query optimization process. We tested this by attempting to capture a plan, by observing the optimizer through the sys.dm_exec_query_optimizer_info DMV, and through Extended Events, so we can safely say that none of the statements tested created any kind of plan for the DDL statement itself.

Have I tested every variation on every possible DDL statement? No. Is it possible that other statements could have execution plans associated with them? I won’t say that’s impossible, because nothing is impossible. Therefore, I’ll simply say, that’s unlikely.

Dec 12 2016

OPTIMIZE FOR Hints When Parameter Sniffing is Turned Off

While presenting recently and talking about dealing with bad parameter sniffing, I got the question: what happens to OPTIMIZE FOR hints when parameter sniffing is disabled?

This is my favorite kind of question because the answer is simple: I don’t know.

Parameter Sniffing

For those who don’t know, parameter sniffing is when SQL Server uses the precise values passed into a query as parameters (which means stored procedures or prepared statements) and looks those values up in the statistics in order to generate an execution plan. Most of the time, parameter sniffing is either helping you or not hurting you. Sometimes, parameter sniffing turns bad and hurts you quite severely.

Usually, but not always, this is because you either have severely skewed data (some data is very different from the rest, lots of rows versus few or vice versa), or your statistics are out of date. The real issue is deciding how best to resolve the problem if you are dealing with bad parameter sniffing.

OPTIMIZE FOR

There are a bunch of ways to deal with parameter sniffing when it goes bad. All of them revolve around controlling what type of execution plan gets generated. The three most common methods for resolving bad parameter sniffing are: plan forcing using Query Store (currently available in Azure SQL Database and SQL Server 2016; I have written extensively on this topic), using a RECOMPILE hint to get a new plan every time, or using the OPTIMIZE FOR hint to get a plan based on a specific value or on the average of values.

Using OPTIMIZE FOR is pretty straightforward. Here’s an example query that uses the OPTIMIZE FOR hint to force the optimizer to choose a particular execution plan:

CREATE PROC dbo.AddressByCity @City NVARCHAR(30)
AS
   SELECT a.AddressID,
      a.AddressLine1,
      a.AddressLine2,
      a.City,
      sp.Name AS StateProvinceName,
      a.PostalCode
   FROM Person.Address AS a
   JOIN Person.StateProvince AS sp
      ON a.StateProvinceID = sp.StateProvinceID
   WHERE a.City = @City 
OPTION(OPTIMIZE FOR(@City='Mentor'));

Any time this stored procedure is called and that query gets compiled, or even if the statement gets recompiled, it will use the value of ‘Mentor’ to look at the statistics and determine how many rows are likely to be returned. This lets me take control away from the parameter sniffing process.

Turning Off Parameter Sniffing Entirely

We can just completely eliminate parameter sniffing. There’s a trace flag that we can set:

DBCC TRACEON (4136,-1);

This doesn’t require a server reboot or anything. It’ll just stop using parameter sniffing… on the whole server. I strongly advocate against using this without very extensive testing to confirm that you’re not benefitting from parameter sniffing.
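
If you do experiment with it, at least know how to check whether the flag is enabled and how to turn it back off again:

-- check the current status of the trace flag
DBCC TRACESTATUS (4136);

-- turn it back off globally
DBCC TRACEOFF (4136, -1);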

Let’s remove the hint from the query above and then turn off parameter sniffing. If I capture the execution plan and look at the SELECT operator’s properties, that’s where I would normally see the Compile Time values for the parameters. However, as you can see here, I don’t have a Compile Time value, just a Run Time value:

[Figure: parameter properties showing only a Run Time value]

What Happens To The OPTIMIZE FOR Hint

Let’s put the OPTIMIZE FOR hint back into the stored procedure and recompile it so that we can see what happens.
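
Recreating the procedure with the hint in place is one way to do that; this is just the original definition again, switched to ALTER (altering the procedure also invalidates the cached plan, which takes care of the recompile):

ALTER PROC dbo.AddressByCity @City NVARCHAR(30)
AS
   SELECT a.AddressID,
      a.AddressLine1,
      a.AddressLine2,
      a.City,
      sp.Name AS StateProvinceName,
      a.PostalCode
   FROM Person.Address AS a
   JOIN Person.StateProvince AS sp
      ON a.StateProvinceID = sp.StateProvinceID
   WHERE a.City = @City
OPTION(OPTIMIZE FOR(@City='Mentor'));
GO

Then, I’m going to call the procedure, but I’m going to pass a value that would result in a different execution plan: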

EXEC dbo.AddressByCity @City = N'London';

Now, let’s take a look at the properties:

[Figure: parameter properties showing both a Compile Time value and a Run Time value]

You can see that I have both a compile time value and a run time value.

Conclusion

Just because parameter sniffing is disabled doesn’t mean the query hints stop working. If I used OPTIMIZE FOR UNKNOWN instead of OPTIMIZE FOR a specific value, the results would be the same as when parameter sniffing is disabled (the optimizer just uses an average instead of specific values). However, OPTIMIZE FOR a specific value still pushes the optimizer to use that value, effectively making it sniff the parameter even though parameter sniffing has been disabled. One other point: I also tested using OPTIMIZE FOR through a Plan Guide while parameter sniffing was disabled. This worked fine too.

Dec 05 2016

Presentations for SQL Server Beginners

#iwanttohelp

[Image: tired from all those blog posts]

For my final blog post in the #gettingstarted, #iwanttohelp series, I decided to curate a set of presentations from the PASS Virtual Chapters. This content is available online. It’s free. Most importantly for the person just getting started as a SQL Server data pro, it’s good. I’m going to pair each of the presentations with one of my eleven blog posts in this series.

  1. The Importance of a Full Backup in SQL Server
    For this one I’m going to recommend Tim Radney’s session Understanding SQL Server Backup and Restore. I know Tim personally and guarantee this is a good session.
  2. Why Is The Server Slow
    Jes Borland is a very close personal friend and an absolutely outstanding presenter (and person). She has a session all about getting you started on wait statistics Beyond CXPacket: Understanding Wait Statistics.
  3. SQL Server Backups Are a Business Decision
    Paul Randal is, well, Paul Randal. If you’re not familiar with his work, you really should be. He presented a session called Building the Right Backup Strategy that should coincide nicely with my blog post.
  4. A View Is Not A Table
    OK, I couldn’t find a single session talking about views, so I’m going to come at this from the side, just a little (and I might start working on a presentation on views & other structures). This session by Kenneth Ureña, who I’ve talked with several times, is about structures and performance. It should do the trick. Indexing Strategies and Good Physical Designs for Performance Tuning.
  5. Use The Correct Data Type
    Silent Killers Lurking in Your Schema by (yes, a good friend) Mickey Steuwe is absolutely going to cover data types, among other things.
  6. Choosing the Right SQL Server Edition
    John Martin (yeah, another friend, what can I say, I try to stay on the good side of smart & capable people) has a presentation that’s a bit more focused on SQL Server internals but will absolutely address the edition of SQL Server as part of the work. Get SQL Server Set Up Right! The First Time.
    I’m also going to add a session by Jes Borland, again, that introduces Azure, especially talking about Azure in a hybrid scenario. Azure and SQL Server. Plus, I just want to share more Azure links. If you’re just getting started with the Microsoft Data Platform, start with Azure.
  7. Monitor Query Performance
    There are tons of choices here. I’m going with Kicking and Screaming: Replacing Profiler With Extended Events because I know that most people learn Profiler first. I also know that we need to get people to start using Extended Events instead. I will further add that Erin Stellato really knows her stuff on this topic, as well as many others. She is also a wonderful presenter (and a friend).
  8. Azure SQL Database For Your First Database
    As I said, I really believe in Azure as an entry into the land of Data Platform. It’s easy. It’s inexpensive. It’s available almost anywhere. Tim Radney has a session called Azure SQL Database for the Production DBA that will help to get you started.
  9. The Clustered Index is Vital To Your Database Design
    The clustered index is so important that everyone presents on it in one fashion or another. I’ve met Ayman El-Ghazali a few times. We’re not friends yet, but we get along (he’s very smart, see above). His session Indexing Fundamentals seems ideal for this slot.
  10. Statistics Are Vital For Performance
    Erin Stellato has a session called Statistics Starters. It is so important to understand what statistics are within SQL Server, how to use them, and how to maintain them, that it’s hard to over-emphasize the topic.
  11. PowerShell To Test a Query
    If you’re just getting going with Powershell, you’re going to need tons of help. Luckily it’s out there. I could have picked any number of excellent sessions by amazing people (many of them friends). I’m going with the PowerShell Tips and Tricks for SQL Server Administration by Mike Fal. Mike (a friend) recently helped me with some of my own PowerShell scripts, so he’s kind of top of mind at the moment.

Conclusion

For my final post I wanted to do three things. First, a recap of the last year’s worth of #gettingstarted blog posts. Believe it or not, I put a little work into each of these and it’s nice to see them all collected like this. Second, I wanted to illustrate the vast quantity of high quality learning that is available through PASS and the PASS Virtual Chapters. I pulled these sessions from a bunch of different chapters. There are alternatives I could have picked for every topic (except for views, something to work on). Which leads to my third, and final, reason for doing my last post in the series like this: I wanted to keep helping. You now know a great resource to go to and search through for more information and learning.

Before we go, I want to launch a small challenge of my own. If you’re a blogger, pick a topic (or a collection of topics like I did), and put together a blog post that curates the content using only PASS resources. Link back to this blog. Use the hashtag #PASScurated. Let’s see what other information can be put together.

 

 

Nov 23 2016

PowerShell to Test a Query

So you want to do some tuning, but you’re not sure how to test a query’s performance. Not a problem. Here’s a very rough script that I used to do some recent testing.

This script to test a query is post #11 of the #entrylevel #iwanttohelp effort started by Tim Ford (b|t). Read about it here.

The Script

The goal here is to load a bunch of parameter values from one table and then use those values to run a query to test it. To do this I connect up to my SQL Server instance, naturally. Then I retrieve the values I’m interested in and set up the query I want to test. Finally, I loop through the data set, calling the query once for each value.

[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null
# Get the connection
$SqlConnection = New-Object System.Data.SqlClient.SqlConnection
$SqlConnection.ConnectionString = 'Server=WIN-3SRG45GBF97\DOJO;Database=WideWorldImporters;trusted_connection=true'

# Retrieve test data
$BillToCustomerCmd = New-Object System.Data.SqlClient.SqlCommand
$BillToCustomerCmd.CommandText = "SELECT  DISTINCT i.BillToCustomerID
FROM Sales.Invoices as i;"
$BillToCustomerCmd.Connection = $SqlConnection
$SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
$SqlAdapter.SelectCommand = $BillToCustomerCmd
$BillToCustomerList = New-Object System.Data.DataSet
$SqlAdapter.Fill($BillToCustomerList)

# Set up test query
$SQLCmd = New-Object System.Data.SqlClient.SqlCommand
$SQLCmd.Connection = $SqlConnection
$SQLCmd.CommandText = "DECLARE @sqlquery NVARCHAR(MAX);
SET @sqlquery
   = N'SELECT si.StockItemName,
   i.InvoiceDate,
   i.SalespersonPersonID
FROM Sales.Invoices AS i
JOIN Sales.InvoiceLines AS il
   ON il.InvoiceID = i.InvoiceID
JOIN Warehouse.StockItems AS si
   ON si.StockItemID = il.StockItemID
WHERE i.BillToCustomerID = @BillToCustomerID;';

DECLARE @parms NVARCHAR(MAX);
SET @parms = '@BillToCustomerID int';

EXEC sys.sp_executesql @stmt = @sqlquery,
   @params = @parms,
   @BillToCustomerID = @btc;"
$SQLCmd.Parameters.Add("@btc",[System.Data.SqlDbType]"Int")

# Run the tests
foreach($row in $BillToCustomerList.Tables[0])
{
    $SqlConnection.Open()
    $SQLCmd.Parameters["@btc"].Value = $row[0]    
    $SQLCmd.ExecuteNonQuery() | Out-Null
    $sqlconnection.Close()
    
}

I’m using ExecuteNonQuery here so I can ignore the result set because, in this case, I don’t care about it. I just want to be able to capture the query metrics (using Extended Events, naturally). If I wanted the results to come back I could use ExecuteReader instead.

Some Explanation

This is a very simple and simplistic way to do testing. I’m not providing this as a mechanism for all your tests. I’m not suggesting this should be your primary testing tool. This is just a simple way to do some basic testing.

You can easily mix this up to get more realistic tests or add to the tests. Throw in a command to pull the query out of the cache after each call. Now you’ll see how the compile works. Change the order of the retrieved data to make it random. Toss in other queries. Run a set of other queries on a loop in a different PowerShell script to generate load. The sky is the limit once you start playing with this.
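
As a sketch of that first suggestion, here’s one way to pull just the test query’s plan out of cache between calls; the LIKE filter is a rough way to locate the plan, and passing the plan handle to DBCC FREEPROCCACHE removes only that one plan:

DECLARE @PlanHandle VARBINARY(64);

-- find the plan handle for the test query
SELECT @PlanHandle = deqs.plan_handle
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
WHERE dest.text LIKE N'%BillToCustomerID = @BillToCustomerID%'
      AND dest.text NOT LIKE N'%dm_exec_query_stats%';

-- evict just that plan so the next call has to compile again
IF @PlanHandle IS NOT NULL
    DBCC FREEPROCCACHE(@PlanHandle);

You could append that to the CommandText in the loop, or run it from a separate connection between calls.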

The reason I go to PowerShell for this instead of running all these commands as T-SQL through SSMS is the more direct control over behavior that I get with PowerShell. The ability to ignore the result set is just one example.

Conclusion

If you really want to do load testing and evaluation, I’d suggest setting up Distributed Replay and putting it to work. I’ve used it very successfully for that kind of thorough and complete testing of a system. If you really just want to know how this one query is going to fare, the PowerShell script above will let you run a basic test against it. Just remember to capture the metrics when you’re doing any kind of test so that you can compare the results.


Want to play some more with execution plans and query tuning? I’ll be doing an all-day seminar on execution plans and query tuning before SQLSaturday Providence in Rhode Island, December 2016, so if you’re interested, sign up here.

Nov 07 2016

sp_executesql Is Not Faster Than an Ad Hoc Query

This requires an immediate caveat. You should absolutely be using sp_executesql over any type of non-parameterized execution of T-SQL. You must parameterize your T-SQL because the lack of parameters in building up and executing strings is a classic SQL Injection attack vector. Using straight ad hoc T-SQL is an extremely poor coding choice because of SQL Injection, not because there is something that makes one method faster than the other.

Yet, I see in performance checklists that you should be using sp_executesql over straight ad hoc T-SQL because it will perform faster. That statement is incorrect.

Some Discussion

Let me reiterate the caveat before we continue. I 100% advocate for the use of sp_executesql. It is preferred over straight ad hoc SQL because, used properly (and isn’t proper use usually one of the main problems?), you can both build a query dynamically and use parameters in order to avoid SQL Injection. The security implications of SQL Injection are kind of hard to over-emphasize. SQL Injection has been a primary vector for hacking for close on to twenty years now. We know the best way to avoid it is to use parameterized queries with data validation around the parameters. Why this is continually ignored is hard for me to understand.

However, despite the importance of using sp_executesql, I’m not advocating for its use as a performance improvement mechanism. I’m unclear as to how this comes to be on a performance checklist, with no discussion of taking advantage of parameter sniffing and/or plan reuse (possible performance advantages). I can only assume this is yet another example of Cargo Cult Programming. People know that they are supposed to use sp_executesql (and yes, you are supposed to use it), but don’t really understand why, so they start guessing.

The tests are going to run primarily from T-SQL in order to compare a straight EXECUTE of a query string to sp_executesql. However, for the sake of protecting against SQL Injection, let me also mention that when calling your database from code, you can take two approaches (well, several, but we’ll focus on two in order to keep this blog post to a reasonable size, since I can’t caveat and explain every possible permutation of every database access method while still making anything approaching a coherent point): building up ad hoc T-SQL and executing that against the server directly, or using a mechanism to parameterize your queries. You absolutely should be using the parameterized methods in order to validate your input and avoid SQL Injection.

The Simplest Test

Let’s start with a very simple, and simplified, query in order to illustrate the point:

DECLARE @adhocquery NVARCHAR(max) 
SET @adhocquery = N'SELECT si.StockItemName,
   i.InvoiceDate,
   il.Description
FROM Sales.Invoices AS i
JOIN Sales.InvoiceLines AS il
   ON il.InvoiceID = i.InvoiceID
JOIN Warehouse.StockItems AS si
ON si.StockItemID = il.StockItemID;'

EXEC (@adhocquery);


DECLARE @sqlquery NVARCHAR(max) 
SET @sqlquery = N'SELECT si.StockItemName,
   i.InvoiceDate,
   il.Description
FROM Sales.Invoices AS i
JOIN Sales.InvoiceLines AS il
   ON il.InvoiceID = i.InvoiceID
JOIN Warehouse.StockItems AS si
ON si.StockItemID = il.StockItemID;'

EXEC sys.sp_executesql @stmt = @sqlquery;

That’s the same query executed using the two methods in question. The results are an identical execution plan and exactly the same number of reads. If I execute either of them thousands of times then the execution times don’t vary. They have matching query hash and plan hash values. These are identical queries in every possible way. Even if I compare the performance across thousands of executions and include the compile time there is no difference in the outcome.
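
If you want to verify the matching hashes for yourself, the plan cache will show them. A quick sketch (the LIKE filters are just a rough way to pick out the two test statements):

SELECT dest.text,
       deqs.query_hash,
       deqs.query_plan_hash,
       deqs.execution_count
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
WHERE dest.text LIKE N'%StockItemName%'
      AND dest.text NOT LIKE N'%dm_exec_query_stats%';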

At the simplest possible level, these are identical mechanisms for executing a basic query. The only performance difference comes about because of parameters.

Test With Parameters

Instead of just running the query over and over again, I really want to test actual, meaningful, behavior this time. I’m going to load all the values for the BillToCustomerID column of the Invoices table using a PowerShell script. Then, I’ll execute the queries once for each of these values, using the two different execution methods.

To make aggregating the results easier, I put each query into a procedure:

CREATE PROCEDURE dbo.AdHoc (@BillToCustomerID INT)
AS
   DECLARE @Query NVARCHAR(MAX);

   SET @Query
      = N'SELECT si.StockItemName,
   i.InvoiceDate,
   il.Description
FROM Sales.Invoices AS i
JOIN Sales.InvoiceLines AS il
   ON il.InvoiceID = i.InvoiceID
JOIN Warehouse.StockItems AS si
   ON si.StockItemID = il.StockItemID
WHERE i.BillToCustomerID = ' + CAST(@BillToCustomerID AS NVARCHAR(10)) + ';';

   EXEC (@Query);
GO


CREATE PROCEDURE dbo.ExecSQL (@BillToCustomerID INT)
AS
   DECLARE @sqlquery NVARCHAR(MAX);

   SET @sqlquery
      = N'SELECT si.StockItemName,
   i.InvoiceDate,
   il.Description
FROM Sales.Invoices AS i
JOIN Sales.InvoiceLines AS il
   ON il.InvoiceID = i.InvoiceID
JOIN Warehouse.StockItems AS si
   ON si.StockItemID = il.StockItemID
WHERE i.BillToCustomerID = @BillToCustomerID;';

   DECLARE @parms NVARCHAR(MAX);

   SET @parms = '@BillToCustomerID int';

   EXEC sys.sp_executesql @stmt = @sqlquery,
      @params = @parms,
      @BillToCustomerID = @BillToCustomerID;
GO

The results are fun.

Execution Type Average Duration
sp_executesql AVG: 57946.03187251
Ad Hoc AVG: 14788.8924302789

What’s going on? Is the conclusion that, in fact, ad hoc queries are faster than sp_executesql?

Absolutely not.

I cheated.

I intentionally picked a data set with a pretty interesting distribution. Depending on the value passed for BillToCustomerID there is the possibility of one of three different execution plans:

[Figure: the three possible execution plans]

In fact, the data is such that the first value that would be called is going to generate the worst possible plan for all the other data sets, because it leads to the plan that simply consists of three scans. Even if I choose to force one of the other plans first, something I did several times while testing, the fact that the ad hoc queries compile a fresh plan, and therefore get the best plan, for each value results in better overall performance for ad hoc in this instance.

Please don’t mistake me. I could skew the data in another direction in order to make sp_executesql into the better performing mechanism. The initial premise was that you should use sp_executesql over ad hoc because it will be faster. That’s not the case. In fact, it completely depends on a number of factors as to which of these methods will be faster. That said, my preferred mechanism is to use sp_executesql because it creates parameterized queries where I can ensure, with certainty, that I’m avoiding SQL Injection. To achieve parity on execution times, I could simply include a RECOMPILE hint in the statement and then I would have the same speed as the ad hoc approach while still ensuring my security.
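
For reference, adding that hint to the sp_executesql version is just a matter of appending it to the statement text. Here’s a sketch based on the procedure above (1048 is simply one valid BillToCustomerID value):

DECLARE @sqlquery NVARCHAR(MAX);
DECLARE @parms NVARCHAR(MAX);

SET @sqlquery
   = N'SELECT si.StockItemName,
   i.InvoiceDate,
   il.Description
FROM Sales.Invoices AS i
JOIN Sales.InvoiceLines AS il
   ON il.InvoiceID = i.InvoiceID
JOIN Warehouse.StockItems AS si
   ON si.StockItemID = il.StockItemID
WHERE i.BillToCustomerID = @BillToCustomerID
OPTION (RECOMPILE);';

SET @parms = N'@BillToCustomerID int';

EXEC sys.sp_executesql @stmt = @sqlquery,
   @params = @parms,
   @BillToCustomerID = 1048;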

Oh, and to add another wrinkle, you could always turn on ‘optimize for ad hoc workloads’. That shaves a few more milliseconds off the ad hoc approach over the sp_executesql approach in this example.
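
For completeness, that’s the server-level ‘optimize for ad hoc workloads’ setting, and enabling it looks like this (it’s an advanced option, so ‘show advanced options’ has to be on first):

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sys.sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;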

Conclusion

I know I’ve over-emphasized this throughout this discussion, but I’m going to repeat it again, SQL Injection is dangerous and a purely ad hoc approach to queries leads to unsafe servers. You must validate your inputs and use a querying mechanism that ensures that SQL Injection won’t be an issue. This is accomplished by using parameterized queries, which includes sp_executesql.

However, should you be using sp_executesql over ad hoc queries because performance will improve? No. Clearly that’s not the case. Instead you should be using sp_executesql because it’s a safer, saner approach to writing your queries.


I love talking performance tuning. In fact, I’ll be doing an all-day seminar on execution plans and query tuning before SQLSaturday Providence in Rhode Island, December 2016, so if you’re interested, sign up here.

Nov 03 2016

Reinforcing the Importance of Statistics on Row Estimate

I recently wrote an introductory post about the importance of statistics. I just received a reinforcement on how important they are during my own work.

Bad Estimate

I hit a weird problem while I was setting up a query to illustrate a point (blog to be published next week). Let’s take the basis of the problem and explain it. I wanted data with distribution skew, so I ran this query to find out if there was a wide disparity between the top and bottom of the range:

SELECT i.BillToCustomerID,
   COUNT(i.BillToCustomerID) AS TestCount
FROM Sales.Invoices AS i
GROUP BY i.BillToCustomerID
ORDER BY TestCount ASC;

Sure enough, the bottom of the range returned three (3) rows and the top returned 21,551. If I then run a query to retrieve just a few rows like this:

SELECT *
FROM Sales.Invoices AS i
WHERE i.BillToCustomerID = 1048;

I get the following execution plan:

[Figure: execution plan with an index seek and key lookup]

I’m happy because this is the plan I expected. With this plan in hand, I don’t bother looking at anything else.

Creating a Problem

I expand out the query initially as follows:

SELECT i.InvoiceID,
   il.InvoiceLineID,
   si.StockItemName
FROM Sales.Invoices AS i
JOIN Sales.InvoiceLines AS il
   ON il.InvoiceID = i.InvoiceID
JOIN Warehouse.StockItems AS si
ON si.StockItemID = il.StockItemID
WHERE i.BillToCustomerID = 1048;

The execution plan now looks like this:

[Figure: execution plan now using a scan and hash join]

Frankly, I’m puzzled. Why on earth did we go from a key lookup operation to a scan on the Invoices table? I rebuild the query a couple of times and it keeps going to a scan. Finally, I pause a moment and look at the row estimate (you know, like I should have done the first moment I was puzzled):

[Figure: row estimate showing 258.181 rows]

258 rows? Wait, that’s wrong. The number of rows for this value is three. Why on earth would it be showing 258? There’s no reason. I haven’t done any kind of calculation on the columns. I double check the structures. No hidden views or constraints, or anything else that would explain why the estimate was so wrong. However, it’s clear that the estimate of 258.181 is causing the nested loops join and key lookup to go away in favor of a hash join and scan once I add complexity to the query.

After thinking about it a while, I finally ran DBCC SHOW_STATISTICS:

[Figure: DBCC SHOW_STATISTICS output]
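
For reference, the command itself is just the following; the index name is my best guess at the WideWorldImporters foreign key index on BillToCustomerID, so adjust it to whichever index or statistics object you’re actually looking at:

DBCC SHOW_STATISTICS ('Sales.Invoices', FK_Sales_Invoices_BillToCustomerID);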

Note the highest point on the histogram, 1047. Yet I’m passing in 1048.

So, what’s happening?

While the number of rows for 1048 was the lowest, at 3, it seems that, unfortunately, the 1048 values were added to the table after the statistics for the index had been updated. Instead of using something from the histogram, my value fell outside the values in the histogram. When the value is outside the histogram, the Cardinality Estimator uses the average number of rows across the entire histogram, 258.181 (at least for any database on SQL Server 2014 or greater that isn’t running in a lower compatibility mode), as the row estimate.
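
In a real system, the more general fix would be to bring the statistics up to date so the histogram covers the newer values, rather than changing the query. A minimal example:

UPDATE STATISTICS Sales.Invoices WITH FULLSCAN;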

I then change the query to use the value 1047, and the execution plan changes to look like this:

[Figure: execution plan back to a seek and key lookup]

The new plan reflects the behavior I was going for when I was setting up the test. The row estimates are now accurate, and small, therefore I get a key lookup operation instead of a scan.

Conclusion

Statistics drive the decisions made by the optimizer. The very first moment you’re looking at an execution plan and you’re seeing a scan where you thought, for sure, you should have seen a seek, check the row estimates (OK, not the first moment, it could be a coding issue, structural issue, etc.). It could be that your statistics are off. I just received my own reminder to pay more attention to the row estimates and the statistics.


I love playing with statistics and execution plans and queries. As a result, I also like teaching how to do this stuff. If you’re interested, I’m putting on a class in Rhode Island, December 2016. Sign up here.

Nov 01 2016

Stored Procedures Are Not Faster Than Views

A performance tuning tip I saw recently said, “Views don’t perform as well as stored procedures.”

<sigh>

Let’s break this down, just a little.

Definitions

A view is nothing but a query. The definition given by Microsoft is that it’s a virtual table that’s defined by a query. It’s a query that is used to mask data, perform a complex join, or similar behaviors. Views are queries that get stored in the database. Views can be easily referred to as if they were tables. That’s it. I’ve written in the past about views, including how they can possibly perform poorly.

A stored procedure is also a query, or a series of queries, or a whole lot more. Microsoft’s definition of a stored procedure basically describes it as a programming object that can accept input through parameters, perform actions, and provide various types of output. Stored procedures are also stored in the database, but that’s about the end of the direct correlation to a view. Heck, you can call views from stored procedures, so I’m really at a loss as to where this tip comes from.

In short, yes, both these objects have in them queries, but these are fundamentally different objects. You can’t really say that using one or the other is faster because they each do different things. Further, you can write code that will perform poorly using either construct.

Test Setup To Compare Performance

Here’s the view definition I’m going to use for the tests:

CREATE VIEW dbo.CustomerDeliveryInfo
AS
SELECT c.CustomerName,
   c.DeliveryRun,
   c.RunPosition,
   dm.DeliveryMethodName,
   cid.CityName AS DeliveryCity,
   cip.CityName AS PostalCity,
   c.CustomerID
FROM Sales.Customers AS c
JOIN Application.DeliveryMethods AS dm
   ON dm.DeliveryMethodID = c.DeliveryMethodID
JOIN Application.Cities AS cid
   ON cid.CityID = c.DeliveryCityID
JOIN Application.Cities AS cip
   ON cip.CityID = c.PostalCityID;

We’re going to compare that with a stored procedure that uses the same query. The procedure also takes advantage of the fact that it is a stored procedure using a parameter for input values:

CREATE PROCEDURE dbo.CustomerDeliveryInformation 
(@CustomerID INT)
AS
BEGIN
   SELECT c.CustomerName,
      c.DeliveryRun,
      c.RunPosition,
      dm.DeliveryMethodName,
      cid.CityName AS DeliveryCity,
      cip.CityName AS PostalCity,
      c.CustomerID
   FROM Sales.Customers AS c
   JOIN Application.DeliveryMethods AS dm
      ON dm.DeliveryMethodID = c.DeliveryMethodID
   JOIN Application.Cities AS cid
      ON cid.CityID = c.DeliveryCityID
   JOIN Application.Cities AS cip
      ON cip.CityID = c.PostalCityID
   WHERE c.CustomerID = @CustomerID;
END;

We’re also going to create another stored procedure that uses the view:

CREATE PROCEDURE dbo.InfoCustomerDelivery 
(@CustomerID INT)
AS
BEGIN
   SELECT * FROM dbo.CustomerDeliveryInfo AS cdi
   WHERE cdi.CustomerID = @CustomerID;
END;

Because stored procedures and views are different, we’ll have to call these different objects in different ways:

SELECT *
FROM dbo.CustomerDeliveryInfo AS cdi
WHERE cdi.CustomerID = 556;

EXEC dbo.CustomerDeliveryInformation @CustomerID = 556;

EXEC dbo.InfoCustomerDelivery @CustomerID = 556;

In this way we can run each of these queries independently and compare the results between them.

Results Comparing Stored Procedures With Views

If you run each of the queries above, you will find that they all create a nearly identical execution plan:

[Figure: the execution plan, identical for all three queries]

If we compare all the different plans, one set of details does stand out:

[Figure: plan property differences, showing compile time]

There is a difference in the compile time between the view by itself and the stored procedures (they were almost identical). Let’s look at performance over a few thousand executions:

Query duration
View AVG: 210.431431431431
Stored Proc w/ View AVG: 190.641641641642
Stored Proc AVG: 200.171171171171

This is measured in microseconds, so the variation we’re seeing is likely just some disparity in I/O, CPU or something else, since the differences are trivial at 10mc, or 5%. While that may seem like the view is suffering, please note that the view inside the procedure actually ran faster by 5%. Again, this is explained by the fact that we’re only talking about a 10 microsecond difference. I’m not sure if that’s within the margin for error on the Extended Events sql_batch_completed event or not (I couldn’t find documentation stating what it might be), but I’ll bet it’s close. I believe it’s safe to say that the average performance of these queries is identical.

All three queries had 8 logical reads.

What about execution time including compile time, since there is a difference:

Query duration
View AVG: 10089.3226452906
Stored Proc AVG: 9314.38877755511
Stored Proc w/ View AVG: 9938.05410821643

The difference in the performance including compile time for the procedure alone is 700mc better on average than the view. That’s an 8% difference. It was almost that high for the procedure that used the view, at 7%.

If we’re just talking compile time then, there is a significant win if we avoid the view. This is no doubt because of the extra work involved in unpacking the view and going through the simplification process within the optimizer. Plus, the view alone in our query was parameterized by the optimizer in order to assist its performance over time (as we saw in the average results without the recompile). All that extra work explains the 8% difference.

Let’s Break It

What if we change the query around a little? I decide that all I want to see right now from the view is the CustomerID:

SELECT cdi.CustomerID
FROM dbo.CustomerDeliveryInfo AS cdi
WHERE cdi.CustomerID = 556;

When I execute this, I get a whole new execution plan:

[Figure: the simpler execution plan when only CustomerID is selected]

The execution time drops a little to around 190mc on average and the reads go from 8 to 2. The stored procedure would have to get rewritten to only return CustomerID. Does that mean that views are faster than stored procs? Absolutely not. It just means that there is some degree of flexibility built into the view, as a construct, that’s not there in a stored procedure, as a construct. These are fundamentally different objects.

What if we change the query against the view again:

SELECT *
FROM dbo.CustomerDeliveryInfo AS cdi
WHERE cdi.CustomerName = 'Om Yadav';

Once more the execution plan will change to something different than before:

[Figure: the execution plan when filtering on CustomerName]

Performance drops to about 300mc and we get 10 reads instead of 8. Does that mean that views are slower than stored procedures? No. We’re attempting to compare two different objects that perform two different functions within SQL Server.

Conclusion

Since a stored procedure can actually query a view, suggesting that we use stored procedures instead of views becomes quite problematic. With the exception of the differences in compile time, we see that views actually perform exactly the same as stored procedures, if the query in question is the same. There are reasons to use views as well as reasons to not use them. There are reasons to use stored procedures as well as reasons to not use them. Neither of these objects is preferred above the other because of performance concerns.


Want to play some more with execution plans and query tuning? I’ll be doing an all-day seminar on execution plans and query tuning before SQLSaturday Providence in Rhode Island, December 2016, so if you’re interested, sign up here.

Oct 24 2016

A Sub-Query Does Not Hurt Performance

The things you read on the internet, for example, “don’t use a sub-query because that hurts performance.”

Truly?

Where do people get these things?

Let’s Test It

I’ve written before about the concept of cargo cult data professionals. They see one issue, one time, and consequently extrapolate that to all issues, all the time. It’s the best explanation I have for why someone would suggest that a sub-query is flat out wrong and will hurt performance.

Let me put a caveat up front (which I will reiterate in the conclusion, just so we’re clear), there’s nothing magically good about sub-queries just like there is nothing magically evil about sub-queries. You can absolutely write a sub-query that performs horribly, does horrible things, runs badly, and therefore absolutely screws up your system. Just as you can with any kind of query. I am addressing the bad advice that a sub-query is to be avoided because they will inherently lead to poor performance.

Let’s start with a simple test, just to validate the concept of how a sub-query performs within SQL Server:

SELECT sd.OrderQty,
   pr.Name
FROM
   (SELECT *
    FROM Sales.SalesOrderDetail AS sod
   ) AS sd
JOIN
   (SELECT *
    FROM Production.Product AS p
   ) AS pr
   ON pr.ProductID = sd.ProductID
WHERE sd.SalesOrderID = 52777;

SELECT sod.OrderQty,
   p.Name
FROM Sales.SalesOrderDetail AS sod
JOIN Production.Product AS p
   ON p.ProductID = sod.ProductID
WHERE sod.SalesOrderID = 52777;

If there is something inherently wrong with a sub-query, then there is something twice as wrong with two sub-queries. Here are the resulting execution plans:

[Figure: the sub-query plan matches the plain query plan]

Huh, they look, sort of, I don’t know, almost identical. Let’s compare the plans using the new SSMS plan comparison utility:

[Figure: only slight differences between the two plans]

Well, darn. Displayed in pink are the common sets of operations between the two plans. In other words, for these plans, everything except the properties of the SELECT operator is exactly the same. Let’s take a look at those properties:

[Figure: SELECT operator properties for the two plans]

OK. Now we have some interesting differences, and especially, some interesting similarities. Let’s start with the similarities. First of all, we have exactly the same QueryPlanHash value in both plans. In addition, we also have identical estimated rows and costs. In short, the optimizer created two identical execution plans. Now, this is where things get a little bit interesting. See, the optimizer actually worked a little harder to create the first plan than the second. It took an extra tic on the CPU and just a little more CompileMemory and CompileTime. Interesting.

What about execution times? With a few runs on average, the execution times were identical at about 149mc with 11 reads. However, running a query once or twice isn’t testing. Let’s get a few thousand runs of both queries. The average results from the Extended Events sql_batch_completed event were 75.9 microseconds for both queries.

However, what about that extra little bit of compile time in the query that used sub-queries? Let’s add in a statement to free the procedure cache on each run and retry the queries. There is a measurable difference now:

Query duration
Sub-query AVG: 5790.20864172835
Query AVG: 4539.49289857972

More work is done by the optimizer on the sub-query to compile the same execution plan. We’re adding work to the optimizer, requiring it to unpack the, admittedly, silly query written above.  When we refer only to the compile time and not the execution time, there is a performance hit. Once the query is compiled, the performance is identical. Whether or not you get a performance hit from a sub-query then, in part, depends on the degree to which you’re experiencing compiles or recompiles. Without the recompile, there is no performance hit. At least in this example.

Let’s Test It Again, Harder

I firmly believe in the old adage: if you ain’t cheatin’, you ain’t fightin’. It’s time to put the boot in.

Let’s go with much more interesting queries that are more likely to be written than the silly example above. Let’s assume some versioned data like in this article on Simple-Talk. The article shows three ways to express a query that brings back a single version of one of the documents. We’re just going to mess with two of them: one that uses a sub-query, and one that does not:

--no sub-query
SELECT TOP 1 d.DocumentName,
   d.DocumentID,
   v.VersionDescription,
   v.VersionID,
   ROW_NUMBER() OVER (ORDER BY v.VersionID DESC) AS RowNum
FROM dbo.Document d
JOIN dbo.Version v
   ON d.DocumentID = v.DocumentID
WHERE d.DocumentID = 9729;

--sub-query
SELECT  d.[DocumentName],
        d.[DocumentId],
        v.[VersionDescription],
        v.[VersionId]
FROM    dbo.[Document] d
        CROSS APPLY (SELECT TOP (1)
                            v2.VersionId,
                            v2.VersionDescription
                     FROM   dbo.[Version] v2
                     WHERE  v2.DocumentId = d.DocumentId
                     ORDER BY v2.DocumentId,
                            v2.VersionId DESC
                    ) v
WHERE   d.[DocumentId] = 9729;

As per usual, we can run these once and compare results, but that’s not really meaningful. We’ll run them thousands of times. Also, to be sure we’re comparing apples to apples, we’ll force a recompile on every run, just like in the first set of tests. The results this time:

Query duration
Sub-query AVG: 1852.14114114114
Query AVG: 2022.62162162162

You’ll note that, even with the compile on each execution, the query using a sub-query actually out-performed the query that was not using a sub-query. The results are even more dramatic when we take away the compile time:

Query duration
Sub-query AVG: 50.8368368368368
Query AVG: 63.3103103103103

We can also look to the execution plans to get an understanding of how these queries are being resolved:

[Figure: the two different execution plans]

The plan on top is the sub-query plan, and the plan on the bottom is the plan for just the plain query. You can see that the regular query is doing a lot more work to arrive at an identical set of data. The differences are visible in the average execution time, about a 20% improvement.

You could argue that we’re comparing two completely different queries, but that’s not true. Both queries return exactly the same result set. It just so happens that the query using the sub-query performs better overall in this instance. In short, there’s no reason to be scared of using a sub-query.

Sub-Query Conclusion

Is it possible for you to write horrid code inside of a sub-query that seriously negatively impacts performance? Yes. Absolutely. I’m not arguing that you can’t screw up your system with poor coding practices. You absolutely can. The query optimization process within SQL Server deals well with common coding practices. Therefore, the queries you write can be fairly sophisticated before, by nature of that sophistication, you begin to get serious performance degradation.

You need to have a method of validation for some of what you read on the internet. People should provide both the queries they are testing with and the numbers that their tests showed. If you’re just seeing completely unsupported, wildly egregious statements, they’re probably not true.

In conclusion, it’s safe to use sub-queries. Just be careful with them.


If you’re finding any of this useful and you’d like to dig down a little more, you can, because I’ll be putting on an all-day seminar on execution plans and query tuning. The event takes place before SQLSaturday Providence in Rhode Island, December 2016, so if you’re interested, sign up here.

Oct 17 2016

SELECT * Does Not Hurt Performance

I read all the time how SELECT * hurts performance. I even see where people have said that you just have to supply a column list instead of SELECT * to get a performance improvement. Let’s test it, because I think this is bunkum.

The Test

I have here two queries:

SELECT *
FROM Warehouse.StockItemTransactions AS sit;

--and

SELECT sit.StockItemTransactionID,
       sit.StockItemID,
       sit.TransactionTypeID,
       sit.CustomerID,
       sit.InvoiceID,
       sit.SupplierID,
       sit.PurchaseOrderID,
       sit.TransactionOccurredWhen,
       sit.Quantity,
       sit.LastEditedBy,
       sit.LastEditedWhen
FROM Warehouse.StockItemTransactions AS sit;

I’m basically going to run this a few hundred times each from PowerShell. I’ll capture the executions using Extended Events and we’ll aggregate the results.
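
The aggregation step can be done in T-SQL by reading the Extended Events output. This is only a sketch: it assumes the session captures sql_batch_completed to a file target at the path shown, both of which are my assumptions rather than details from the actual test rig:

WITH xedata
AS (SELECT CAST(fxe.event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file('C:\XEOutput\QueryPerformance*.xel', NULL, NULL, NULL) AS fxe),
     parsed
AS (SELECT event_data.value('(event/data[@name="batch_text"]/value)[1]', 'NVARCHAR(4000)') AS batch_text,
           event_data.value('(event/data[@name="duration"]/value)[1]', 'BIGINT') AS duration
    FROM xedata)
SELECT parsed.batch_text,
       COUNT(*) AS executions,
       AVG(parsed.duration) AS avg_duration_microseconds
FROM parsed
WHERE parsed.batch_text LIKE N'%StockItemTransactions%'
GROUP BY parsed.batch_text;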

The Results

I ran the test multiple times because, funny enough, I kept seeing some disparity in the results. One test would show a clear bias for one method, another test would show the opposite. However, averaging the averages we see that things broke down as follows:

SELECT *: 167.247ms
Column list: 165.500ms

That’s after about 2000 separate executions of each query. There’s a 2ms bias towards the Column List query as opposed to the *. That’s an improvement, if you want to call it that, of 1%. It’s hardly worth the bother, assuming that with more testing this continued to hold true. In multiple tests, the SELECT * ran faster. I just feel honor bound to put up the full results. They show an improvement, but not one I’d get excited about. Oh, and the reads, the execution plan, everything else… identical.

SELECT * Conclusion

Don’t get me wrong, there are lots of reasons to not use SELECT *. Yes, performance is one of the reasons to not use SELECT *. However, when most people suggest that maybe using SELECT * is a bad idea for performance reasons, what they’re saying is you ought to only move the columns you need and the data you are actually using, not everything. I’m not aware of anyone with experience and knowledge suggesting that using the complete column list instead of SELECT * is faster. As we can see in the tests above, it isn’t (or is by so small a margin, who cares).


I love talking performance tuning. In fact, I’ll be doing an all-day seminar on execution plans and query tuning before SQLSaturday Providence in Rhode Island, December 2016, so if you’re interested, sign up here.