May 03 2016

Implicit Conversion and Performance

Letting SQL Server change data types automatically can seriously hurt performance. Because the conversion is a function applied to the column in every row, you can’t get an index seek. Instead, you’re forced into a scan. I can demonstrate this pretty simply. Here’s a script that sets up a test table with three columns and three indexes and tosses a million rows in:

CREATE TABLE dbo.ConvertTest (
     BigIntColumn BIGINT NOT NULL,
     IntColumn INT NOT NULL,
     DateColumn VARCHAR(30)
    );

CREATE INDEX BigIntIndex 
ON dbo.ConvertTest 
(BigIntColumn);
CREATE INDEX IntIndex 
ON dbo.ConvertTest 
(IntColumn);
CREATE INDEX DateIndex 
ON dbo.ConvertTest
(DateColumn);

WITH    Nums
          AS (SELECT TOP (1000000)
                        ROW_NUMBER() OVER (ORDER BY (SELECT 1
                                                    )) AS n
              FROM      master.sys.all_columns ac1
              CROSS JOIN master.sys.all_columns ac2
             )
    INSERT  INTO dbo.ConvertTest
            (BigIntColumn,
             IntColumn,
             DateColumn
            )
    SELECT  Nums.n,
            Nums.n,
            DATEADD(HOUR, Nums.n, '1/1/1900')
    FROM    Nums;

If I run a very simple query like this:

DECLARE @param DATETIME;

SET @param = '3/15/1963 8:00:00';

SELECT  ct.DateColumn
FROM    dbo.ConvertTest AS ct
WHERE   ct.DateColumn = @param;

I get an execution plan that looks like this:

[Image: convert_scan — execution plan with an index scan]

The issue is that, while I put dates into DateColumn, the column’s data type is VARCHAR, not DATETIME. Passing a DATETIME value results in an implicit conversion that we can see in the Index Scan Predicate property:

[Image: ConverImplicitHurtsPerformance — the Index Scan Predicate showing the implicit conversion]

This is a function against a column and that is what results in the scan. If I modify the code as follows:

DECLARE @param VARCHAR(30);

SET @param = '3/15/1963 8:00:00';

SELECT  ct.DateColumn
FROM    dbo.ConvertTest AS ct
WHERE   ct.DateColumn = @param;

I get an execution plan that looks like this:

[Image: convert_seek — execution plan with an index seek]

No change in data type is needed, so the index can be used. It’s a very simple demonstration of how implicit conversions hurt performance. Now, what about the integer columns? What happens when I do this with the code:

DECLARE @param INT;

SET @param = 650323;

SELECT  ct.BigIntColumn
FROM    dbo.ConvertTest AS ct
WHERE   ct.BigIntColumn = @param;

Here’s the execution plan:

[Image: convert_intseek — execution plan with an index seek]

And here is the predicate:

[Image: convert_predicate — the seek predicate showing the conversion]

In this case, we’re still getting an implicit conversion of the data type, but SQL Server is smart enough to realize that, hey, it’s still just an integer, so it can convert the @param instead of the column, resulting in a seek. If I change the test so that it runs a query against the INT column passing a BIGINT value, it doesn’t even do a conversion of any kind.
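For reference, that last test follows the same pattern as the others; here’s a minimal sketch of it, reusing the same illustrative value:

DECLARE @param BIGINT;

SET @param = 650323;

--BIGINT parameter compared against the INT column
SELECT  ct.IntColumn
FROM    dbo.ConvertTest AS ct
WHERE   ct.IntColumn = @param;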

In short, not all implicit conversions are a problem. However, it’s still a best practice to use the right data type for columns. You should also use the same data type for your parameters and local variables as the column data type. Taking these steps avoids issues with implicit conversion.


For lots more information on query tuning, in August I’ll be doing an all-day pre-con at the SQLServerGeeks Annual Summit in Bangalore, India.

I’m also going to be doing an all day training course before SQL Saturday Oslo in September.

Feb 24 2016

Query Store and Recompile

One of the many advantages of SQL Cruise is the ability to have enough time during a presentation to be able to answer questions from the people there in great detail. One question came up while I was showing the new functionality of Query Store (available soon in SQL Server 2016, available right now in Azure SQL Database).

What happens to plan forcing when you have OPTION (RECOMPILE) on a query?

Great question. I have a favorite procedure I use to illustrate the functionality of parameter sniffing:

ALTER PROC dbo.spAddressByCity @City NVARCHAR(30)
AS
SELECT  a.AddressID,
        a.AddressLine1,
        a.AddressLine2,
        a.City,
        sp.Name AS StateProvinceName,
        a.PostalCode
FROM    Person.Address AS a
JOIN    Person.StateProvince AS sp
        ON a.StateProvinceID = sp.StateProvinceID
WHERE   a.City = @City;

If this procedure is called with the value of ‘Mentor’ you get an execution plan that looks like this:

[Image: execution plan when called with ‘Mentor’]

If you remove that plan from cache and then call the procedure with the value of ‘London’ then the plan looks like this:

[Image: execution plan when called with ‘London’]
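As an aside, one way to remove just this procedure’s plan, rather than clearing the whole cache, is sp_recompile; a quick sketch using the procedure from this example:

--Mark the procedure so its cached plan is discarded and recompiled
--on the next execution.
EXEC sys.sp_recompile N'dbo.spAddressByCity';

You could also look up the specific plan_handle in sys.dm_exec_cached_plans and pass it to DBCC FREEPROCCACHE.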

It’s parameter sniffing in action. One of the cool pieces of functionality that comes with the Query Store is the ability to choose a plan and make that plan get used, regardless of the parameter values. You do this through the following command:

sys.sp_query_store_force_plan

You pass it a query_id and a plan_id  from the information in the Query Store and now that plan will get used. But, what happens when you have attempted to fix your query using a query hint, such as RECOMPILE? Let’s modify our procedure:

ALTER PROC dbo.spAddressByCity @City NVARCHAR(30)
AS
SELECT  a.AddressID,
        a.AddressLine1,
        a.AddressLine2,
        a.City,
        sp.Name AS StateProvinceName,
        a.PostalCode
FROM    Person.Address AS a
JOIN    Person.StateProvince AS sp
        ON a.StateProvinceID = sp.StateProvinceID
WHERE   a.City = @City
OPTION  (RECOMPILE);

If I run the procedure, regardless of the parameter values passed in, I get a new plan for each value. Nothing is stored in cache, but, it is stored in the Query Store.

Let’s pull the query_id and plan_id out of Query Store for this procedure:

SELECT  qsq.query_id,
        qsp.plan_id,
        CAST(qsp.query_plan AS XML) AS sqlplan
FROM    sys.query_store_query AS qsq
JOIN    sys.query_store_plan AS qsp
        ON qsp.query_id = qsq.query_id
WHERE   qsq.object_id = OBJECT_ID('dbo.spAddressByCity');

Currently, on my system, this brings back four rows: two distinct query_id values and four different plan_id values. This is because, when I modified the text of the query within the procedure, I kept the object_id the same, but the query text is different (the RECOMPILE hint). That means a different record in the Query Store. We’ll use the newer query_id and pick one of the plans to force it:

EXEC sys.sp_query_store_force_plan
    42,
    44;

Now, no matter what value I pass it, the RECOMPILE hint is effectively ignored. I have the same plan every time (in my case, the London plan). Ah, but is the RECOMPILE hint ignored? In fact, no, it isn’t. If I capture the extended event sql_statement_recompile, I can see every single time I execute this procedure that it’s going through a full recompile… which is then tossed and the plan I chose is forced upon the optimizer. Every time.
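For reference, here’s a minimal sketch of an Extended Events session that captures that event (the session name and file target are illustrative):

--Watch statement-level recompiles; add filters as appropriate for your system.
CREATE EVENT SESSION RecompileWatch ON SERVER
ADD EVENT sqlserver.sql_statement_recompile
    (ACTION (sqlserver.sql_text, sqlserver.database_name))
ADD TARGET package0.event_file (SET filename = N'RecompileWatch');
GO

ALTER EVENT SESSION RecompileWatch ON SERVER STATE = START;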

This isn’t a revelation. It makes sense. However, the Query Store represents one more thing that we have to think through.


If you want to talk query tuning, execution plans, the Query Store, and more, I’ll be presenting an all day pre-conference seminar at SQLDay in Wroclaw Poland on May 16th, 2016. Let’s talk.

Feb 22 2016

Why Is The Server Slow?

This is blog post #2 in support of Tim Ford’s (b|t) #iwanttohelp, #entrylevel.

If you haven’t been working in SQL Server for very long, you may not have got this phone call yet, but you will:

Hi, yeah, the server is slow. Thanks. Bye.

Let’s pretend for a moment that you know which server they’re referring to (because just finding out that piece of information can be a challenge). Now what?

The list of tools and mechanisms within SQL Server for gathering metrics is extremely long:

Performance Monitor
Dynamic Management Views & Functions
System Views
Extended Events
Trace Events
Activity Monitor
Data Collector
Execution Plans
3rd Party Tools

I’m leaving out lots of stuff in that list. So where do you start when you get this phone call? Where is the server slow?

The best place to start is by looking at the wait statistics.

A simple, and simplified, explanation for how processes work within SQL Server is that each process gets access to the various resources for a little while, then has to pass off access to another process, each of them working together to get the work done. But, some processes take longer than others. When a process has to wait for anything in SQL Server, this information gets logged. Depending on the system you’re working with you can access these wait statistics from one of three locations:

sys.dm_os_wait_stats: for SQL Server
sys.dm_db_wait_stats: for Azure SQL Database
sys.dm_pdw_nodes_os_wait_stats: for Azure SQL Data Warehouse

Getting information from these system views is extremely simple. Here’s an example query:

SELECT  *
FROM    sys.dm_os_wait_stats AS dows
ORDER BY dows.wait_time_ms DESC;

The results of this query look like this:

[Image: waits — sample results from sys.dm_os_wait_stats]

The first column lists the wait types. The second column provides a count of the tasks that have had that wait type. The third column, wait_time_ms, is the cumulative amount of time, in milliseconds, that tasks have spent waiting within the system. Next is max_wait_time_ms, the maximum time that any one task has waited. Finally, signal_wait_time_ms shows the time the waits have spent waiting for access to the CPU (also known as time spent on the runnable queue). The importance of a wait is not simply the time spent waiting, but also the number of tasks and the max time. Using all these values together gives meaning to the individual wait.

Here is where things actually get difficult. The wait types are arcane, difficult to understand, and difficult to interpret. Further, a lot of the wait types actually don’t mean anything at all. The waits are not indicative of an issue. So, while the query above is simple, the results it provides are very weak. Instead, I strongly recommend you use the query provided by Paul Randal, located here. It will filter out the wait statistics that you shouldn’t care about.

Now, you have a meaningful list of wait statistics that will tell you exactly why, if not where, your server is running slow. Unfortunately, these waits still need to be interpreted. If you read further on Paul’s blog, you’ll see he has a number of waits and their causes documented. That’s your best bet to start understanding what’s happening on your system (although, I hear, Paul might be creating a more complete database of wait stats. I’ll update this blog post should that become available).

One other thing to consider: these waits are cumulative since the last time the server was started (or failed over, or the database was failed over in Azure, or the values were reset). This means that simply looking at the list doesn’t necessarily give you enough information. Instead, running this more than once during the day and comparing the data sets can show you what’s been slow over time. If you want to know what a given user or process is experiencing, you can also use sys.dm_exec_session_wait_stats to see the waits for a single session. Don’t just look at the list and think you’re done. The best thing to do is get to a point of proactive monitoring (monitoring tools make this easier).
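To compare over time, one simple approach is to snapshot the view into a table and diff later captures against it; a sketch (the table name and the session_id are made up for the example):

--First capture; switch to INSERT ... SELECT for subsequent captures.
SELECT  GETUTCDATE() AS capture_time,
        dows.wait_type,
        dows.waiting_tasks_count,
        dows.wait_time_ms,
        dows.signal_wait_time_ms
INTO    dbo.WaitStatsSnapshot
FROM    sys.dm_os_wait_stats AS dows;

--And the waits for a single session:
SELECT  sws.wait_type,
        sws.waiting_tasks_count,
        sws.wait_time_ms
FROM    sys.dm_exec_session_wait_stats AS sws
WHERE   sws.session_id = 52;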

So, when you get the phone call that says the server is slow, you know how to get started on understanding exactly why that may be true.

Jan 25 2016

Finding Your Query in Query Store

Query Store is pretty amazing. I’m loving working with it. I think it’s likely to change how query tuning will be done in the future. Lots of people are probably going to just use the reports and tools in SQL Server Management Studio. However, a pretty healthy chunk of us will start using the system views in order to programmatically access the information stored in Query Store. One of the first things you’re going to want to do is track down your query.

The primary views you’ll want are sys.query_store_query and sys.query_store_query_text. They join together based on the query_text_id. Let’s take four scenarios and see if we can retrieve the correct query. First up, an ad hoc query:

SELECT  e.NationalIDNumber,
        p.LastName,
        p.FirstName,
        a.City,
        bea.AddressTypeID
FROM    HumanResources.Employee AS e
JOIN    Person.BusinessEntityAddress AS bea
        ON bea.BusinessEntityID = e.BusinessEntityID
JOIN    Person.Address AS a
        ON a.AddressID = bea.AddressID
JOIN    Person.Person AS p
        ON p.BusinessEntityID = e.BusinessEntityID
WHERE   p.LastName = 'Hamilton';

If we wanted to retrieve this from the Query Store in AdventureWorks2014, we’d run a query like this:

SELECT * FROM sys.query_store_query_text AS qsqt
WHERE qsqt.query_sql_text = 'SELECT  e.NationalIDNumber,
        p.LastName,
        p.FirstName,
        a.City,
        bea.AddressTypeID
FROM    HumanResources.Employee AS e
JOIN    Person.BusinessEntityAddress AS bea
        ON bea.BusinessEntityID = e.BusinessEntityID
JOIN    Person.Address AS a
        ON a.AddressID = bea.AddressID
JOIN    Person.Person AS p
        ON p.BusinessEntityID = e.BusinessEntityID
WHERE   p.LastName = ''Hamilton''';

Of note, I had to drop the statement terminator, the semicolon, from the text of the query in order to retrieve the correct row. That’ll be good to know in a moment. This retrieves the query information I requested just fine.

For our next example, let’s simplify things a whole bunch:

SELECT  *
FROM    Production.BillOfMaterials AS bom
WHERE   bom.BillOfMaterialsID = 2363;

If I then attempt to retrieve the information from Query Store like this:

SELECT  *
FROM    sys.query_store_query_text AS qsqt
WHERE   qsqt.query_sql_text = 'SELECT  *
FROM    Production.BillOfMaterials AS bom
WHERE   bom.BillOfMaterialsID = 2363';

It actually doesn’t work. Note, I took off the statement terminator, just like before. In fact, the problem here can be identified if we look at the T-SQL from the execution plan from the second example:

SELECT * FROM [Production].[BillOfMaterials] [bom] WHERE [bom].[BillOfMaterialsID]=@1

This query has gone through simple parameterization. So, in order to retrieve the information from Query Store, we have a function, sys.fn_stmt_sql_handle_from_sql_stmt that we have to incorporate like this:

SELECT  qsqt.*
FROM    sys.query_store_query_text AS qsqt
JOIN    sys.query_store_query AS qsq
        ON qsq.query_text_id = qsqt.query_text_id
CROSS APPLY sys.fn_stmt_sql_handle_from_sql_stmt('SELECT  *
FROM    Production.BillOfMaterials AS bom
WHERE   bom.BillOfMaterialsID = 2363;', qsq.query_parameterization_type) AS fsshfss

And note, I have the statement terminator back in place, but this function takes that into account. I joined to the query_store_query table in order to get the parameterization type value. With the function figuring out the statement handle based on the text I originally ran the query through, everything is hunky dory.

Let’s look at one more example:

EXEC dbo.spAddressByCity
    @City = N'London';

This is a stored procedure, so I can just do this:

SELECT  qsqt.*
FROM    sys.query_store_query_text AS qsqt
JOIN    sys.query_store_query AS qsq
        ON qsq.query_text_id = qsqt.query_text_id
WHERE   qsq.object_id = OBJECT_ID('dbo.spAddressByCity');

However, you may have multi-statement stored procs, so you might want to query based on the text within the procedure like this:

SELECT  *
FROM    sys.query_store_query_text AS qsqt
WHERE   qsqt.query_sql_text = 'SELECT  a.AddressID,
        a.AddressLine1,
        a.AddressLine2,
        a.City,
        sp.Name AS StateProvinceName,
        a.PostalCode
FROM    Person.Address AS a
JOIN    Person.StateProvince AS sp
        ON a.StateProvinceID = sp.StateProvinceID
WHERE   a.City = @City'

Bad news. That doesn’t work. If you look at the qsqt.query_sql_text value returned by the previous Query Store query (the one using the OBJECT_ID function), you’ll see that the query looks like this within the Query Store:

(@City nvarchar(30))SELECT  a.AddressID,          a.AddressLine1,          a.AddressLine2,          a.City,          sp.Name AS StateProvinceName,          a.PostalCode  FROM    Person.Address AS a  JOIN    Person.StateProvince AS sp          ON a.StateProvinceID = sp.StateProvinceID  WHERE   a.City = @City

But, if you look at the documentation for sys.fn_stmt_sql_handle_from_sql_stmt that I linked above, it only works with simple or forced parameterization (this, despite having options for none and user, 0 & 1 respectively, in the documents). That means you may have to resort to using LIKE to retrieve particular statements:

SELECT  qsqt.*
FROM    sys.query_store_query_text AS qsqt
WHERE qsqt.query_sql_text LIKE '%SELECT  a.AddressID,
        a.AddressLine1,
        a.AddressLine2,
        a.City,
        sp.Name AS StateProvinceName,
        a.PostalCode
FROM    Person.Address AS a
JOIN    Person.StateProvince AS sp
        ON a.StateProvinceID = sp.StateProvinceID
WHERE   a.City = @City%';

At least, that’s where we’re at with the current, public, CTP.

Tracking down your query can be a little bit of work and hopefully these tips will make it a little easier.

Sep 09 2015

Yes, Foreign Keys Help Performance

Several years ago I created an example that illustrates how foreign key constraints can help performance. It’s a contrived example, granted, but I feel like it illustrates the point.

However, over the years, people have questioned one aspect of it. The optimizer uses the foreign keys to figure out which tables can be eliminated from the query, making for a more efficient plan and making the query run faster. The pushback has always been, “Yeah, Grant, but nobody writes T-SQL where they include extra tables that they don’t need.”

My initial response, after I stop laughing, is to point out any number of ORM tools. But, you know what, let’s assume that’s correct. No one would ever create a giant catch-all view that has all their JOINs in one place so they don’t have to write them for different queries. Stuff like that never happens. Everyone will only, ever, write a query based on the exact tables they need, so this simplification process of the optimizer is a bad example. I agree.

Please go read through the other post for all the details of the set up. I’m just going to focus on one query in this post.

First, in the example, the SELECT only references the Person table and there is no WHERE clause. So we could completely eliminate all the other tables. But, I still want to illustrate how foreign keys help, so let’s assume, for some reason, we need the JOIN (yes, contrived, but it’s going to illustrate the point, again):

SELECT  p.LastName + ',' + p.FirstName AS PersonName
FROM Person.BusinessEntityAddress AS bea
JOIN Person.Person AS p
ON p.BusinessEntityID = bea.BusinessEntityID;
GO
SELECT  p.LastName + ',' + p.FirstName AS PersonName
FROM dbo.MyBusinessEntityAddress AS bea
JOIN dbo.MyPerson AS p
ON p.BusinessEntityID = bea.BusinessEntityID;
GO

The first query goes against a pair of tables with enforced referential constraints. The other query goes against an identical set of tables (indexes are all the same, so is the data and the statistics), but no foreign keys (the final point at the old blog post). If we capture performance metrics on these two queries we see the following:

                 Execution Time (Avg µs)   Reads
Foreign Key      134095.6667               155
No Foreign Key   193937.6667               3929

On average, the second query is about 60ms slower and has roughly 25 times as many reads. Why? Let’s look at the execution plans:

[Image: ForeignKeysHelpPerformance — the two execution plans compared]

The only difference between these tables is the existence of foreign key constraints, which leads to differences in the estimated number of rows. This occurs because the optimizer knows that the INNER JOIN is going to limit the data coming back to only matching rows AND, because there’s an enforced referential constraint, it knows exactly how many rows that will be. Armed with that knowledge, it chooses a more efficient execution plan.

Foreign keys help performance in SQL Server.

May 05 2015

Effects of Persisted Columns on Performance

I live for questions. And my favorite questions are the ones where I’m not completely sure of the answer. Those are the questions that make me stop presenting in order to take a note so I can try to answer the question later, usually in a blog post. Guess where we are today?

I was asked at SQL Bits in London about the direct impact of the PERSISTED keyword on calculated columns, for both inserts and selects. I didn’t have a specific answer, so I wrote it down for later (and asked the, self-described, persisting Dane to email me to remind me). He did, so I put together a few tests to try to answer his question.

First, I created three tables:

CREATE TABLE dbo.PersistTest (
PersistTestID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
RandomValue1 DECIMAL(5,3),
RandomValue2 DECIMAL(5,3),
CalcValue AS (RandomValue1 + RandomValue2),
PersistValue AS (RandomValue1 + RandomValue2) PERSISTED
);
GO

CREATE TABLE dbo.PersistTest2 (
PersistTestID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
RandomValue1 DECIMAL(5,3),
RandomValue2 DECIMAL(5,3),
CalcValue AS (RandomValue1 + RandomValue2)
);
GO

CREATE TABLE dbo.PersistTest3 (
PersistTestID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
RandomValue1 DECIMAL(5,3),
RandomValue2 DECIMAL(5,3),
PersistValue AS (RandomValue1 + RandomValue2) PERSISTED
);
GO

The first table has two calculated columns, one PERSISTED and one not. The second table has a non-persisted column only. The third table has a persisted column only. This way, I can see the behavior across all these tables and compare them to see where PERSISTED adds to, or removes from, performance.

I took a very simple set of queries and ran these a few times on each table to get an average execution time:

INSERT dbo.PersistTest (
RandomValue1,
RandomValue2) VALUES (
42.2 , -- RandomValue1 - decimal
44.4  -- RandomValue2 - decimal
);
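The equivalent inserts against the other two tables follow the same pattern (the values are arbitrary):

INSERT dbo.PersistTest2 (RandomValue1, RandomValue2)
VALUES (42.2, 44.4);

INSERT dbo.PersistTest3 (RandomValue1, RandomValue2)
VALUES (42.2, 44.4);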

The results were:

                Persist   Persist 2   Persist 3
Avg time (µs)   3580.5    1626        2260.5

On average, across about five executions, you can see a distinct difference of roughly 600 microseconds between Persist 2 and Persist 3, and an even larger average for the Persist table, which has both columns. Let’s try the same thing with a few SELECT statements:

SELECT  pt.CalcValue,
        pt.PersistValue
FROM    dbo.PersistTest AS pt
WHERE   pt.PersistTestID = 3;
GO
SELECT  pt.CalcValue
FROM    dbo.PersistTest2 AS pt
WHERE   pt.PersistTestID = 3;
GO
SELECT  pt.PersistValue
FROM    dbo.PersistTest3 AS pt
WHERE   pt.PersistTestID = 3;
GO

While the results are less spectacular:

                Persist   Persist 2   Persist 3
Avg time (µs)   115.5     123.5       109.5

You can see that making the calculation on the fly is more costly, with both Persist and Persist 2 coming in higher than Persist 3, which has no calculation to perform in order to retrieve the data.

Now, let’s run the same thing, but, how about, 10,000 times so we can really see what the differences are over time. First, the inserts:

                       Persist   Persist 2   Persist 3
Cumulative time (µs)   84854     68783       73817

Still measured in microseconds, you can see that the cumulative totals are higher for the persisted columns than they are for the non-persisted column. So, there is an overhead for storing the extra information. But, what about queries?

                  Persist    Persist 2   Persist 3
Average (µs)      221.25     222.16      100.54
Cumulative (µs)   2203431    2254238     1005469

I’d say that’s pretty conclusive. When I hit the SELECT statement 10,000 times instead of just five or six, we clearly have differences in behavior. Interestingly enough, the logical reads were identical at 20,000 each. So it really did come down to the time spent calculating the values versus the time spent just retrieving them.
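For reference, the repeated executions can be driven any number of ways; here’s a minimal sketch for the insert side, using the SSMS/sqlcmd batch-repeat trick (GO followed by a count) and arbitrary random values:

INSERT dbo.PersistTest (RandomValue1, RandomValue2)
VALUES (CAST(RAND() * 50 AS DECIMAL(5,3)),
        CAST(RAND() * 50 AS DECIMAL(5,3)));
GO 10000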

Caveats. This is a very simple test with simple calculations and small persisted values. Even doing something 10,000 times isn’t the same as doing it one million or one hundred million. In short, your mileage may vary, some assembly required, for queries lasting longer than four hours call your physician, it depends.

You can see that a PERSISTED column is going to negatively impact INSERT but positively impact SELECT. The question is, which one is giving you pain at the moment?

UPDATE: Transposed the data in the final table between Persist 2 & Persist 3. Fixed it.

Apr 06 2015

Constraints and SELECT Statements

I’ve posted previously about how a foreign key constraint can change how a SELECT query behaves. Logically that just makes sense. But other types of constraints don’t affect execution plans, do they?

Yes.

Let’s take this constraint as an example:

ALTER TABLE Sales.SalesOrderDetail WITH CHECK 
ADD  CONSTRAINT CK_SalesOrderDetail_UnitPrice 
CHECK  ((UnitPrice>=(0.00)))

That will ensure that no values less than zero can slip in there. We can even validate it:

INSERT Sales.SalesOrderDetail
        (SalesOrderID,
         CarrierTrackingNumber,
         OrderQty,
         ProductID,
         SpecialOfferID,
         UnitPrice,
         UnitPriceDiscount,
         rowguid,
         ModifiedDate
        )
VALUES  (60176, -- SalesOrderID - int
         N'XYZ123', -- CarrierTrackingNumber - nvarchar(25)
         1, -- OrderQty - smallint
         873, -- ProductID - int
         1, -- SpecialOfferID - int
         -22, -- UnitPrice - money
         0.0, -- UnitPriceDiscount - money
         NEWID(), -- rowguid - uniqueidentifier
         GETDATE()  -- ModifiedDate - datetime
        );

Will give me an error:

Msg 547, Level 16, State 0, Line 470
The INSERT statement conflicted with the CHECK constraint “CK_SalesOrderDetail_UnitPrice”. The conflict occurred in database “AdventureWorks2014”, table “Sales.SalesOrderDetail”, column ‘UnitPrice’.

Let’s look at a SELECT query now. If we run this:

SELECT  soh.OrderDate,
        soh.ShipDate,
        sod.OrderQty,
        sod.UnitPrice,
        p.Name AS ProductName
FROM    Sales.SalesOrderHeader AS soh
JOIN    Sales.SalesOrderDetail AS sod
        ON sod.SalesOrderID = soh.SalesOrderID
JOIN    Production.Product AS p
        ON p.ProductID = sod.ProductID
WHERE   p.Name = 'Water Bottle - 30 oz.';

The resulting execution plan looks like this:

[Image: Constraint — the resulting execution plan]

But, if I modify the query to look like this, adding an additional AND filter on the constrained UnitPrice column:

SELECT  soh.OrderDate,
        soh.ShipDate,
        sod.OrderQty,
        sod.UnitPrice,
        p.Name AS ProductName
FROM    Sales.SalesOrderHeader AS soh
JOIN    Sales.SalesOrderDetail AS sod
        ON sod.SalesOrderID = soh.SalesOrderID
JOIN    Production.Product AS p
        ON p.ProductID = sod.ProductID
WHERE   p.Name = 'Water Bottle - 30 oz.'
        AND sod.UnitPrice > $0.0;

You know what happens to the execution plan? Nothing. It stays exactly the same. The optimizer knows that, in order to satisfy the query, it can safely ignore the additional predicate in the WHERE clause. In fact, you can look at the SELECT operator properties for the two different plans and note that while the Query Hash value changes, the Plan Hash value stays the same. The plans are identical.
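If you’d rather not open the plan properties, here’s a rough way to compare the hashes from the DMVs (the LIKE filter just finds these two statements, and the second condition keeps this query from matching itself):

SELECT  qs.query_hash,
        qs.query_plan_hash,
        SUBSTRING(st.text, 1, 60) AS statement_start
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE   st.text LIKE '%Water Bottle - 30 oz.%'
        AND st.text NOT LIKE '%dm_exec_query_stats%';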

With that knowledge, I’m going to modify the query to look like this, reversing the UnitPrice reference to look for data that violates the constraint:

SELECT  soh.OrderDate,
        soh.ShipDate,
        sod.OrderQty,
        sod.UnitPrice,
        p.Name AS ProductName
FROM    Sales.SalesOrderHeader AS soh
JOIN    Sales.SalesOrderDetail AS sod
        ON sod.SalesOrderID = soh.SalesOrderID
JOIN    Production.Product AS p
        ON p.ProductID = sod.ProductID
WHERE   p.Name = 'Water Bottle - 30 oz.'
        AND sod.UnitPrice < $0.0;

And now we have a new execution plan:

[Image: Constraint_scan — the new execution plan with a Constant Scan]

 

The optimizer recognized that there is no way any data can be returned with the WHERE clause above, because there is an enforced constraint (note the use of the WITH CHECK clause on the constraint). This completely changes the execution plan in every possible way. Now, instead of attempting to access the data, a Constant Scan operator is put in as a placeholder for an empty result set.
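Since all of this depends on the constraint being enforced, it can be worth confirming that SQL Server considers it trusted; a quick illustrative check against the catalog views:

--is_not_trusted = 0 means the optimizer can rely on the constraint.
SELECT  cc.name,
        cc.is_not_trusted,
        cc.definition
FROM    sys.check_constraints AS cc
WHERE   cc.name = 'CK_SalesOrderDetail_UnitPrice';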

To sum up, yes, constraints absolutely affect the choices made by the optimizer when those constraints would have an effect on the plan, even for a SELECT query. Also, I would argue, this means that the use of enforced constraints can be a performance enhancer, since the optimizer can make intelligent choices about how a given query is dealt with.

UPDATE: Fixed a typo that said the constraint prevented data that was equal to or less than zero. It’s only for data less than zero.

Jan 06 2015

Execution Plans, Performance Tuning and Rum

[Image: Captain Jack Sparrow]

This is me on the last cruise

In just a few more weeks I’ll be setting sail on the first of two SQL Cruise events this year. I’m honored beyond my ability to appropriately express it to be included as one of the Technical Leads for both the cruise in February (7-14, 2015, I think there’s a seat or two left) to the Caribbean and the one in June (14-21, 2015, definitely a couple of seats left) to the Mediterranean. Lest you think that this is just an excuse to hang out and drink, you ought to know a little about how sessions are presented on the cruise and the sessions I’m presenting.

Don’t mistake sessions on the boat for your typical one hour session at a SQL Saturday event. These sessions are two hours long. That means two things. First, I can spend quite a bit more time on my topic so that I get to put out some real in-depth information rather than the high level or glossed presentations you can do in one hour. Second, you get to ask questions, make comments, directly interact in an environment where, even if I say we’ll have to talk about this after the session, I’m right there so we’ll be talking about it after the session. Plus, you’ll still have the other Technical Leads there to help answer questions.

That matters because the most important aspect of SQL Cruise is your access to both the Technical Leads and the other cruisers (including MVPs, MCMs, and just awesome and amazing people): the ability to sit down with people like Kevin Kline and pick his brain, without pressure, without the need to run to another session, and without the ability of Kevin to escape (ha, not that he’d try, he’s one of the most helpful and kind people I know). The same goes for all the other cruisers and Tech Leads.

I’m doing two sessions on the cruise. The first is called “Execution Plans, What Can You Do With Them.” Yeah, yeah, yeah, query tuning. Whatever. You’re right, it is one of the important aspects of what execution plans do. But, in this session, instead of exploring an execution plan to try to improve performance, we’re going to explore execution plans to see what they tell us about how SQL Server works. What happens when you insert data? What happens when there’s a calculated column? How do the queries and the internals of the structures get resolved by SQL Server? Well, I can’t tell you everything about that because the execution plan is still an abstraction layer. But, there’s a ton of really interesting information available within these things. We’re going to dig around in them and see what we can see and just talk about what an execution plan shows us and how we can explore it. I’ve been working obsessively on a bunch of, what I think, are interesting plans with fun little details to explore. And, I’ll share all the scripts so you can explore them on your own.

Next, I’m doing a session on a performance checklist. I’m going to just flat out give away the stuff I go through and look at when setting up a server, designing a database, or consulting on performance problems. But more importantly, I want to discuss with you why I’m doing everything I’m doing. I’m counting on the fact that with all the MVPs and MCMs in the room, I’m going to be able to say a few things that they’re going to jump into the middle of. Oh yes, we’re going to have some excellent conversations during this session. I promise. I make a few suggestions that almost no one agrees with. We’re going to have fun. You won’t want to miss it.

Oh, yeah, and we get to do all this awesome learning, sharing, talking and laughing while basking in warm sunshine and sipping on the beverage of your choice (make mine a Mojito, no, wait, a Mulata Daisy, hang on, how about a Caipirinha? Yeah, I know, that’s not rum, it’s still good, and who knows, the rum might be gone).

SQL Cruise quite literally changes the life trajectory of people involved with it. The direct, extended, involvement between the people on the cruise energizes people to make positive changes in their lives. And we learn stuff. And we have fun. Don’t miss out.

Dec 19 2014

“Pretty Plans vs. Performance” or “Grant Gets Pwned”

If you get an execution plan that looks like this:

[Image: prettyplan_uglyplan — a large, complex execution plan]

I wouldn’t blame you for immediately thinking about query tuning. Especially if the code that generated it looks like this:

SELECT  soh.OrderDate,
        sod.OrderQty,
        sod.LineTotal
FROM    Sales.SalesOrderHeader AS soh
INNER JOIN Sales.SalesOrderDetail AS sod
        ON sod.SalesOrderID = soh.SalesOrderID
WHERE   soh.SalesOrderID IN (@p1, @p2, @p3, @p4, @p5, @p6, @p7, @p8, @p9, @p10,
                             @p11, @p12, @p13, @p14, @p15, @p16, @p17, @p18,
                             @p19, @p20, @p21, @p22, @p23, @p24, @p25, @p26,
                             @p27, @p28, @p29, @p30, @p31, @p32, @p33, @p34,
                             @p35, @p36, @p37, @p38, @p39, @p40, @p41, @p42,
                             @p43, @p44, @p45, @p46, @p47, @p48, @p49, @p50,
                             @p51, @p52, @p53, @p54, @p55, @p56, @p57, @p58,
                             @p59, @p60, @p61, @p62, @p63, @p64, @p65, @p66,
                             @p67, @p68, @p69, @p70, @p71, @p72, @p73, @p74,
                             @p75, @p76, @p77, @p78, @p79, @p80, @p81, @p82,
                             @p83, @p84, @p85, @p86, @p87, @p88, @p89, @p90,
                             @p91, @p92, @p93, @p94, @p95, @p96, @p97, @p98,
                             @p99);

Let’s replace this with a table variable, maybe even one passed in as a parameter. The plan then looks like this:

[Image: prettyplan_pretty1 — the simpler plan using the table variable]
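For reference, the table variable version looked something like this (a sketch; the literal values are just examples standing in for the 99 parameters):

DECLARE @OrderIDs TABLE (SalesOrderID INT PRIMARY KEY);

INSERT  @OrderIDs (SalesOrderID)
VALUES  (43659), (43660), (43661); -- ...and so on

SELECT  soh.OrderDate,
        sod.OrderQty,
        sod.LineTotal
FROM    Sales.SalesOrderHeader AS soh
INNER JOIN Sales.SalesOrderDetail AS sod
        ON sod.SalesOrderID = soh.SalesOrderID
INNER JOIN @OrderIDs AS ids
        ON ids.SalesOrderID = soh.SalesOrderID;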

Ah, much prettier. I’m happy now, all’s right with the world… But, just in case, let’s look at performance. The first query ran in about 2.2ms and had 599 reads. The second query ran in about 24ms and had 598 reads… crud.

Well, let’s modify everything again. Instead of a table variable, we’ll use a temporary table and get some statistics into this puppy which will clean things right up. Here’s the new plan:

[Image: prettyplan_pretty2 — the plan using the temporary table]

Looks pretty familiar, although there are slight differences in the cost estimates between this plan and the preceding one. But the run time is 85ms with 714 reads, AND I had to create the temporary table, which added time to the whole thing.

Doggone it, that other plan is heinous and ugly and so is the query which uses an IN clause. Shouldn’t the cleaner, simpler, execution plan be an indicator that we’re going to get better performance?

Nope.

The thing is, just because the execution plan is simple and easy to understand does not mean it’s going to perform well. You can’t simply look at an execution plan to understand performance. You have to measure the run times of the query, look at the resources it’s going to use in order to understand where waits are likely, look at its reads, and take all of this into account, along with understanding what the execution plan is doing, in order to make the appropriate choices for performance on your system.

UPDATE:

I kept working with this because I was convinced I could get faster performance. The main difference as I saw it was that the optimizer sorted the data in the IN clause and I wasn’t explicitly sorting the data in any of my replacement queries. But nothing I did resulted in better execution times. And that was upsetting.

First, when you’re tuning a query, you’re going to look at the execution plans, as I did above. But, when you want to measure the performance of queries, it’s a very good idea to turn off execution plans and just capture the query metrics. I knew this and was doing it; you could see the results in the Extended Events session where I was capturing each statement for the SPID I was working within. I also had SET STATISTICS IO and SET STATISTICS TIME enabled for the query. Since each execution would cause those to fire as part of the statements, and they were making my Extended Events window messy, I decided to turn them off… WHOA! Query execution times radically changed.

In fact, my first attempt at tuning the query, substituting a table parameter, was suddenly faster than the original. The fastest was when I pre-sorted the data in a temporary table (discounting the costs of sorting and inserting the data into the temp table just for the moment). In fact, the prettiest plan was indeed the fastest.

Experimenting further, it was the STATISTICS IO that completely changed the execution times.

In short, pay no attention to my original post above, instead, let the lesson be that I need to be very cautious about the Observer Effect.
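For reference, the kind of statement-level capture I mean looks roughly like this (the session name and the session_id filter are illustrative):

--Capture completed statements for one session, including duration and reads,
--without STATISTICS output or actual plans skewing the timings.
CREATE EVENT SESSION QueryTimes ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text)
     WHERE (sqlserver.session_id = 53))
ADD TARGET package0.event_file (SET filename = N'QueryTimes');
GO

ALTER EVENT SESSION QueryTimes ON SERVER STATE = START;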

Sep 23 2014

SQL Server Query Performance Tuning

The latest update to my book, SQL Server Query Performance Tuning was released last week. This is the fourth edition of the book, and the third edition that I’ve been responsible for. At the urging of my editor, Jonathan Gennick, I have completely restructured the book for this release. The chapters have been broken up and rearranged so that they’re smaller, more easily consumed. Yes, I’ve worked with my technical editor, Joe Sack, to add lots of new information and to ensure that the existing information is more accurate and more useful. But, we’ve also added new chapters on topics that weren’t given enough attention in the previous versions of the book, such as parameter sniffing. It can certainly seem like these book releases are just some incremental changes on top of existing information, but not this time. This is a new book, with a new structure and new material (almost 80 pages worth), but, hopefully, with all the usefulness of the old book fundamentally intact. Please check it out.

If reading books isn’t your thing, or, you just want some personal interaction to assist your learning, I’ll be doing an all day, pre-conference seminar on query tuning at the PASS Summit this year, 2014. Go here to register.