Category: SQL Server 2012

Jul 25 2016

Monitor Query Performance

Blog post #7 in support of Tim Ford’s (b|t) #iwanttohelp, #entrylevel. Read about it here.

Sooner or later when you’re working with SQL Server, someone is going to complain that the server is slow. I already pointed out the first place you should look when this comes up. But what if they’re more precise? What if, you know, or at least suspect, you have a problem with a query? How do you get information about how queries are behaving in SQL Server?

Choices For Query Metrics

It's not enough to know that you have a slow query or queries. You need to know exactly how slow they are. You must measure. You need to know how long they take to run, and you need to know how many resources are used while they run. You need these numbers so that, after you do something to try to help the query, you can determine whether or not you've actually improved performance. To measure the performance of queries, you have a number of choices. Each choice has positives and negatives associated with it. I'm going to run through my preferred mechanisms for measuring query performance and outline why. I'll also list some of the other mechanisms you have available and tell you why I don't like them. Let's get started.

Dynamic Management Views

Since SQL Server 2005, Dynamic Management Views (DMV) and Functions (DMF) have been available for accessing all sorts of information about the server. Specifically, there are a few DMVs that are focused on queries and query performance. If you go back through my blog, you can find tons of examples where I illustrate their use. You can also see them at work in commercial tools and free tools. Adam Machanic's sp_WhoIsActive, a free tool, makes extensive use of DMVs. To learn more about DMVs, you can download a free book, Performance Tuning with SQL Server Dynamic Management Views. DMVs are available in Azure SQL Database, Azure SQL Data Warehouse, and all editions of SQL Server.

The information captured by the DMVs is an aggregation of all the times the query has been run. This means you can't find out how long the query ran at 3PM yesterday. You can, though, see the minimum and maximum time the query took, as well as the average. The ability to see this information is what makes DMVs useful. However, another important point about DMVs is that they only collect information while a query is in memory. As soon as the plan leaves the cache (the area of memory it is stored in), so does all the aggregated information about the query in the DMVs.
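As a quick sketch of what that aggregated view looks like, a query along these lines against sys.dm_exec_query_stats returns the execution count plus minimum, maximum, and average elapsed time for each cached statement (the TOP value and the sort column are arbitrary choices):

SELECT TOP (10)
        SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,
                  ((CASE deqs.statement_end_offset
                      WHEN -1 THEN DATALENGTH(dest.text)
                      ELSE deqs.statement_end_offset
                    END - deqs.statement_start_offset) / 2) + 1) AS StatementText,
        deqs.execution_count,
        deqs.min_elapsed_time,  -- times are reported in microseconds
        deqs.max_elapsed_time,
        deqs.total_elapsed_time / deqs.execution_count AS avg_elapsed_time,
        deqs.total_logical_reads
FROM    sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY deqs.total_elapsed_time DESC;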

You use the DMVs for a general understanding of how a query is behaving. They're not meant for detailed or long-term collection of information about queries. For that, we use other tools.

Extended Events

Introduced in SQL Server 2008, Extended Events (ExEvents) are a mechanism for capturing detailed information about SQL Server and the processes within. One of those processes is how queries behave. I have multiple examples on this blog on using ExEvents. You can’t go wrong reading about them on Jonathan Kehayias’ blog. Extended events are available in Azure SQL Database and all editions of SQL Server.

When you need to know every query against a database, or each time a particular query is called, and all the details associated with the query (reads, writes, duration), ExEvents are the way to go. ExEvents are very lightweight on the server (but not free) and can be filtered so that you capture just the information you need. The information is detailed and not aggregated. Instead, it's raw. The real issue with capturing this data is the amount of data you'll be capturing. Testing and careful filtering to ensure you're not dealing with too much information are important. Prior to SQL Server 2012, there was no graphical user interface for reading ExEvent data, so you would have been forced to run queries against the XML that the information is captured within. With the tools available in SQL Server Management Studio, this is no longer the case.
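As a rough sketch of that kind of filtered capture (the session name, the target file name, the database name, and the one-second duration threshold are all arbitrary choices here, not a recommendation):

CREATE EVENT SESSION QueryPerformance
ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.sql_text)
     -- duration is measured in microseconds, so this captures batches over one second
     WHERE (sqlserver.database_name = N'WideWorldImporters'
            AND duration > 1000000))
ADD TARGET package0.event_file
    (SET filename = N'QueryPerformance')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO

ALTER EVENT SESSION QueryPerformance ON SERVER STATE = START;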

You use ExEvents when you need specific and detailed information about a query. ExEvents are not so good for generalized monitoring.

Query Store

Introduced in Azure SQL Database, and first released in SQL Server with 2016, Query Store is another mechanism for capturing aggregated information about queries. As before, I have examples on how to work with Query Store on my blog. You can also find quite a bit on it over at Simple-Talk. Query Store is pretty specialized still and only available in Azure and SQL Server 2016, but it is in all editions of SQL Server 2016.

Query Store captures information similar to what is available in the DMVs. However, unlike the DMVs, the information that Query Store captures is kept around, even after a query ages out or is removed from cache. This persistence makes Query Store very exciting. You do have to choose to turn it on for each database you wish to capture queries for. It's not automatic like the DMVs. The capture processes are asynchronous, so they should be relatively lightweight for most databases.
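Enabling it and reading the aggregated metrics back out looks roughly like this (the database name and the sizing options here are just examples, not recommendations):

ALTER DATABASE WideWorldImporters
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
     MAX_STORAGE_SIZE_MB = 100,
     QUERY_CAPTURE_MODE = AUTO);
GO

-- Aggregated runtime metrics persist here even after the plan leaves cache
SELECT  qsq.query_id,
        qsp.plan_id,
        qsrs.count_executions,
        qsrs.avg_duration
FROM    sys.query_store_query AS qsq
JOIN    sys.query_store_plan AS qsp
        ON qsp.query_id = qsq.query_id
JOIN    sys.query_store_runtime_stats AS qsrs
        ON qsrs.plan_id = qsp.plan_id
ORDER BY qsrs.avg_duration DESC;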

You use the Query Store when you need to capture query metrics over the long term, but you don't need detailed information and aggregation works well for you.

Others

There are other ways to measure query performance. You can use the Profiler GUI, but that seriously and negatively impacts the server. You can bring a server down by using it, so it should be avoided. Profiler can also generate server-side trace scripts, which can be used to monitor your server. However, trace events have a much higher impact than ExEvents and they're on the deprecation list. Microsoft is not adding new trace events for new functionality, so they're becoming less and less useful with each release. You also can't use trace against Azure. If you're writing a query and you just want to see how long it takes to run, you can use SET STATISTICS TIME ON to capture the execution time. This is a handy way to quickly measure performance. There is also the ability to capture reads and writes using SET STATISTICS IO ON, but, while this does capture the metrics we need, it adds considerable overhead to the query, skewing the performance measurement. This is why I stick to ExEvents when I need an accurate measure.
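For completeness, that quick-and-dirty check looks like this (the query itself is just a stand-in against a WideWorldImporters table; the timings show up on the Messages tab):

SET STATISTICS TIME ON;

SELECT  i.InvoiceID,
        i.InvoiceDate
FROM    Sales.Invoices AS i
WHERE   i.CustomerID = 42;

SET STATISTICS TIME OFF;
-- CPU time and elapsed time for the statement are reported in milliseconds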

Conclusion

Honest people can disagree about the best way to capture query performance. I have my preferences as you can see. However, I’m fairly certain that everyone would agree that it’s important to know how to capture performance metrics in order to be able to assert that performance has increased or decreased in a measured fashion. You don’t want to guess at query performance, you want to know.

Jul 18 2016

Common Table Expression, Just a Name

The Common Table Expression (CTE) is a great tool in T-SQL. The CTE provides a mechanism to define a query that can be easily reused over and over within another query. The CTE also provides a mechanism for recursion which, though a little dangerous and overused, is extremely handy for certain types of queries. However, the CTE has a very unfortunate name. Over and over I’ve had to walk people back from the “Table” in Common Table Expression. The CTE is just a query. It’s not a table. It’s not providing a temporary storage space like a table variable or a temporary table. It’s just a query. Think of it more like a temporary view, which is also just a query.

Every time I explain this, there are people who don’t believe me. They point to the “Table” in the name, “See. Says so right there. It’s a table.”

It’s not and I can prove it. Let’s create a relatively simple CTE and use it in a query:

WITH    MyCTE
          AS (SELECT    c.CustomerName,
                        cc.CustomerCategoryName
              FROM      Sales.Customers AS c
              JOIN      Sales.CustomerCategories AS cc
              ON        cc.CustomerCategoryID = c.CustomerCategoryID
              WHERE     c.CustomerCategoryID = 4)
    SELECT  *
    FROM    MyCTE;

Now, I’m going to run the query within the CTE and the CTE together as two statements in a batch and capture the execution plans:

ExecPlans

On the top, the CTE, on the bottom, the query. You’ll note that the execution plans are identical. They each have the exact same Query Plan Hash value in the properties, 0x88EFD2B7C165E667, even though they have different Query Hash values, 0x192FFC125A08CC35 and 0xFEB7F2BCAC853CD5, respectively. Further, if I capture the query metrics using extended events, I get identical reads and, on average, identical execution times:

duration

This is because there is no table being created. The data is not treated differently. A CTE is just a query, not some type of temporary storage.

Heck, let’s do one more thing. Let’s use the latest SSMS plan comparison tool and highlight one of the operators to see what differences there are internally in the plan:

 

plancompare

I don’t see a lot of differences. In fact, I don’t see any. That’s because the optimizer recognizes these two queries as identical. If it was loading data into temporary storage, you would see differences in something. We don’t. This is because, despite the somewhat unfortunate emphasis that gets placed on the Table portion of the name, the emphasis of the name, Common Table Expression, should be on the word Expression.

I will point out an interesting difference, especially useful for those who plug in CTEs everywhere, whether it’s needed or not. Let’s look at the properties of the two plans:

peroperties

You can see the similarities and differences that I pointed out earlier in the Statement, Query Hash and Query Plan Hash, as well as the Estimated Subtree Cost and others. What's truly interesting is that the CompileCPU, CompileMemory and CompileTime for the CTE are higher than for the regular query. While the CTE is just a query, it's a query that adds a non-zero overhead when used, and therefore should only be used where appropriate (good gosh, I've seen people put it EVERYWHERE, on every single query, don't do that).

Hopefully, this is enough to establish, truly, completely, and thoroughly, that the Common Table Expression is an expression, not a table.

Yeah, I did this before, but it keeps coming up, so I tried a different approach. Let’s see if the word gets out. Your Common Table Expression is not a table.


I love talking about execution plans and query tuning. I’ll be doing this at an all day pre-con at SQLServer Geeks Annual Summit in Bangalore India.

Don't miss your chance to attend an all day training course on execution plans before SQL Saturday Oslo in September.

Jul 11 2016

Same Query, Different Servers, Different Performance. Now What?

Based on the number of times I see this question on forums, it must be occurring all the time. You have two different servers that, as far as you know, are identical in terms of their options and setup (although not necessarily in terms of power, think a test or pre-production system versus production). On these servers you have a database on each that, as far as you know, is the same as the other in terms of options, objects, maybe even data (although, this does mean that you have unmasked production information in your QA environment, which potentially means you’re going to jail, might want to address this, especially now that I’ve told you about it, mens rea, you’re welcome). On each database you run, as far as you know, the exact same query (whether a straight up ad hoc query, stored procedure, parameterized query, whatever). On your production server, let’s call it PROD, the query runs for 3 hours before it returns, but on the test server, let’s call it TEST, it returns in about 3 seconds.

What. The. Hell.

This is especially troubling because PROD has 8 processors that are much more powerful than TEST, which has only 2 (either can go parallel, so this is the same). TEST only has 16GB of RAM and PROD has 64GB. Further, you know that the databases, data, and statistics are the same because you restored the TEST database from PROD (although, see that jail thing above). However, you’re on top of things. You capture the query metrics so you know exactly the amount of time and the reads or writes from each query and you even do this using extended events so you’re minimizing observer effects. You also have monitoring in place, so you know that there is no blocking in PROD. The query is just a pig there, but not on TEST. You’ve even gone so far as to get an estimated execution plan from each server and they are absolutely different.

Now what?

Well obviously something somewhere is different. Start by comparing everything on both servers and both databases down to… hang on, here, we’ll write a PowerShell script and then….

Wait, wait, wait!

You have the execution plans? Before we start digging through all the properties everywhere and comparing everything to everything, updating statistics 14 times, and all the rest, what if we look at the execution plans? They're different, so we should start looking at scans & indexes & statistics &….

Wait!

Let’s start simple and clear. Let’s take a look at the properties of the first operator:

AllProperties

This contains a ton of information about the settings of the systems where each plan was generated. Let’s assume that, probably, the databases are the same, as is the data and the statistics, but the problem is a system or database setting. These differences can result in different execution plans and therefore different execution times. If we are dealing with two different servers and we are fairly sure the data, structure and the statistics are the same, the properties of the first operator are a great starting point for understanding what went wrong.

Oh, and the first operator is this one with the red square around it:

first operator

Let’s use the new SSMS Compare Showplan to see the differences between our two execution plans that we captured:

properties compared

(clicking on that will make it bigger)

There are a bunch of differences highlighted, but one ought to jump out pretty quick. That’s right, these two databases have different compatibility levels which resulted in one of them using the new optimizer and one using the old optimizer as evidenced by the CardinalityEstimatorModelVersion. Fun point, both have identical query hash values. Not surprising, but additional, quick, validation that we really are talking about the same query on each server (and one of them isn’t using a trace flag to change the cardinality estimator). You now have enough information to go and make some changes to your system without lots of further fumbling and work.

When dealing with the same query from two servers that ought to result in similar behavior, but doesn't, get the execution plans (estimated plans are fine here) and compare the properties of the first operator. That's the quickest way to identify the issues that could be leading to the differences between the servers.


Want to talk more about execution plans and query tuning? Let’s do it.

In August, I’ll be doing an all day pre-con at SQLServer Geeks Annual Summit in Bangalore India.

I’m also going to be in Oslo Norway for a pre-con before SQL Saturday Oslo in September.

Jun 28 2016

CASE Statement in GROUP BY

Set based operations means you should put everything into a single statement, right?

Well, not really. People seem to think that having two queries is really bad, so when faced with logical gaps, they just cram them into the query they have. This is partly because SQL Server and T-SQL support letting you do this, and it's partly because it looks like a logical extension of code reuse to arrive at a query structure that supports multiple logic chains. However, let's explore what happens when you do this in one particular situation: a CASE statement in a GROUP BY clause.

You see this a lot because a given set of data may be needed in slightly different contexts by different groups within the company. Like many of my example queries, this could be better written. Like many of my example queries, it mirrors what I see in the wild (and for those following along at home, I'm using the WideWorldImporters database for tests now):

CREATE PROCEDURE dbo.InvoiceGrouping (@x INT)
AS
SELECT  SUM(il.UnitPrice),
        COUNT(i.ContactPersonID),
        COUNT(i.AccountsPersonID),
        COUNT(i.SalespersonPersonID)
FROM    Sales.Invoices AS i
JOIN    Sales.InvoiceLines AS il
        ON il.InvoiceID = i.InvoiceID
GROUP BY CASE WHEN @x = 7 THEN i.ContactPersonID
              WHEN @x = 15 THEN i.AccountsPersonID
              ELSE i.SalespersonPersonID
         END;
GO
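
To see this in action, the procedure can be executed with each of the values the CASE expression tests for, plus something that falls through to the ELSE branch (these calls are just for illustration):

EXEC dbo.InvoiceGrouping @x = 7;   -- groups by ContactPersonID
EXEC dbo.InvoiceGrouping @x = 15;  -- groups by AccountsPersonID
EXEC dbo.InvoiceGrouping @x = 42;  -- any other value groups by SalespersonPersonID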

Running this for any given value above, 7, 15 or other, you’ll get the same execution plan, regardless of the column used in the GROUP BY. However, Parameter Sniffing is still something of a factor. When you group this data by SalesPersonID, you only get 10 rows back. This will be shown as the estimated number of rows returned if some value other than 7 or 15 is used as a parameter. However, this is always the plan:

coreplan

You can click on that to expand it into something readable. We can eliminate the Parameter Sniffing from the equation if we want to by modifying the query thus:

CREATE PROCEDURE dbo.InvoiceGrouping_NoSniff (@x INT)
AS
DECLARE @x2 INT;
SET @x2 = @x;

SELECT  SUM(il.UnitPrice),
        COUNT(i.ContactPersonID),
        COUNT(i.AccountsPersonID),
        COUNT(i.SalespersonPersonID)
FROM    Sales.Invoices AS i
JOIN    Sales.InvoiceLines AS il
        ON il.InvoiceID = i.InvoiceID
GROUP BY CASE WHEN @x2 = 7 THEN i.ContactPersonID
              WHEN @x2 = 15 THEN i.AccountsPersonID
              ELSE i.SalespersonPersonID
         END;
GO

However, except for some deviation on the estimated rows (since it’s averaging the rows returned), the execution plan is the same.

What’s the big deal right? Well, let’s break down the code into three different procedures:

CREATE PROCEDURE dbo.InvoiceGrouping_Contact
AS
SELECT  SUM(il.UnitPrice),
        COUNT(i.ContactPersonID),
        COUNT(i.AccountsPersonID),
        COUNT(i.SalespersonPersonID)
FROM    Sales.Invoices AS i
JOIN    Sales.InvoiceLines AS il
        ON il.InvoiceID = i.InvoiceID
GROUP BY i.ContactPersonID;
GO

CREATE PROCEDURE dbo.InvoiceGrouping_Sales
AS
SELECT  SUM(il.UnitPrice),
        COUNT(i.ContactPersonID),
        COUNT(i.AccountsPersonID),
        COUNT(i.SalespersonPersonID)
FROM    Sales.Invoices AS i
JOIN    Sales.InvoiceLines AS il
        ON il.InvoiceID = i.InvoiceID
GROUP BY i.SalespersonPersonID;
GO

CREATE PROCEDURE dbo.InvoiceGrouping_Account
AS
SELECT  SUM(il.UnitPrice),
        COUNT(i.ContactPersonID),
        COUNT(i.AccountsPersonID),
        COUNT(i.SalespersonPersonID)
FROM    Sales.Invoices AS i
JOIN    Sales.InvoiceLines AS il
        ON il.InvoiceID = i.InvoiceID
GROUP BY i.AccountsPersonID;
GO

Interestingly enough, these three queries produce a nearly identical execution plan. The one big difference is that the Compute Scalar operator used to generate a value for the Hash Match Aggregate is no longer in the plan:

specificplan

The same basic set of structures, scans against both tables, is used to arrive at the data. Cost estimates between the two plans are very different though, with the targeted queries having a much lower estimated cost.

Performance-wise, interestingly enough, the first query, only returning the 10 rows, averages 157ms, while the query grouping directly on the SalesPersonID averages about 190ms. Now, the reads tell a slightly different story, with 17428 on the generic query and 5721 on the specific query. So, maybe a server under load will see a significant performance increase. However, let's deal with what we have in front of us and say that, at least for these tests, the catch-all GROUP BY query performs well.

Now let’s change the paradigm slightly. Let’s add an index:

CREATE INDEX TestingGroupBy ON Sales.Invoices (SalespersonPersonID);

Frankly, this isn’t a very useful index. However, after adding it, the execution plan for the InvoiceGrouping_Sales query changes. Instead of scanning the table, it’s now scanning the index. Despite recompiles and attempts to force it using hints, the original InvoiceGrouping query will not use this index. Duration of the InvoiceGrouping_Sales query drops to 140ms on average and the reads drop a little further to 5021. Getting an 11% increase on performance is a win.

This is a pretty simplified example; however, making the CASE statement more complex won't improve performance or further assist the optimizer to make good choices. Instead of trying to cram multiple different logical groupings into a single query, a better approach would be to create the three new procedures that I did above, and make the original InvoiceGrouping procedure into a wrapping procedure that chooses which of the individual procedures to call, as sketched below. This way, if you do add indexes in support of each of the different possible groupings, you would realize a positive outcome in your performance.
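A minimal sketch of that wrapper, matching the parameter values from the original procedure (the procedure name and the fall-through to the Sales version are my assumptions):

CREATE PROCEDURE dbo.InvoiceGrouping_Wrapper (@x INT)
AS
IF @x = 7
    EXEC dbo.InvoiceGrouping_Contact;
ELSE IF @x = 15
    EXEC dbo.InvoiceGrouping_Account;
ELSE
    EXEC dbo.InvoiceGrouping_Sales;
GO

Each inner procedure gets its own plan, so an index added in support of one grouping can actually be used by the query that needs it.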


Want to talk more about execution plans and query tuning? In August, I'll be doing an all day pre-con at SQLServer Geeks Annual Summit in Bangalore India.

I’m also going to be in Oslo Norway for a pre-con before SQL Saturday Oslo in September.

May 12 2016

Use The Correct Data Type

Blog post #5 in support of Tim Ford’s (b|t) #iwanttohelp, #entrylevel. Read about it here.

Saying that you should use the correct data type seems like something that should be very straightforward. Unfortunately, it's very easy for things to get confusing. Let's take a simple example from AdventureWorks. If I run this query:

SELECT  a.ModifiedDate
FROM    Person.Address AS a
WHERE   a.AddressID = 42;

The output looks like this:

2009-01-20 00:00:00.000

Normal right? You see the year, the month and the day followed by the time in hours, minutes, and seconds as a decimal. Ah, but there is an issue. This query is supposed to be for the reporting system, and the business only cares about the date that the values in the Person.Address table have been modified, so they don’t want to see the time. Also, the person in charge is kind of picky. They really don’t like seeing the date formatted that way. They prefer to see “January 20, 2009.”

Far too often, then, the easy answer is to just change the column to varchar. You can trim the time and output in exactly the format needed by the business. Problem solved, and it was easy…

Well, until someone inputs "Janry 20, 2009," slightly mangling the spelling, and suddenly your report looks all messed up. Or they ask you to start filtering on just the last two weeks, regardless of when the report was run, and you now can't easily do date math on the column. Even after you get over that problem with a little formatting using CAST (along with ISDATE to try to catch all those other typos that are in the system now), you notice that the performance is really slow, so you go to put an index on the column. Now you have an index key that is 50 bytes wide instead of the 3 bytes that the DATE data type would have been, making the index less efficient (not to mention that sorting the data is going to put February ahead of January, so more formatting).

The list goes on and on. You're going to hit issue after issue, and all you tried to do was a little formatting, making the data look pretty by using the VARCHAR data type in place of DATETIME or DATE. This kind of thing happens all the time, especially to people just getting started. The rule should be very simple: use the correct data type for the data you're collecting. However, it's easy to get distracted. It's easy to get pushed. The business says "put it in this format" and the only way you can do that is by changing the data type so…

Don't do it. Take the time to understand the implications of mangling the data types. It's not just date and time that cause issues either. Lots and lots of stuff can be easily stuffed into a VARCHAR field that more properly should be an INT or a DECIMAL or some other data type. Be able to communicate with the business so that they understand that you're not simply being difficult; you're trying to protect the functionality and integrity of the data. Show how formatting can be done using client tools, or at the presentation layer of the query itself, as sketched below. Explain all the downstream issues that this decision will have. Then, use the correct data type for the information being stored.
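A minimal sketch of that conversation: the column stays a DATETIME (or DATE), and any pretty formatting is applied only on the way out (FORMAT requires SQL Server 2012 or later; the styles here are just examples):

SELECT  a.ModifiedDate,                                          -- native DATETIME: sortable, indexable, date math works
        CONVERT(VARCHAR(11), a.ModifiedDate, 106) AS UkStyle,    -- '20 Jan 2009'
        FORMAT(a.ModifiedDate, 'MMMM dd, yyyy') AS ReportStyle   -- 'January 20, 2009'
FROM    Person.Address AS a
WHERE   a.AddressID = 42;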

May 03 2016

Implicit Conversion and Performance

Letting SQL Server change data types automatically can seriously impact performance in a negative way. Because a conversion has to be run against the column for every row, you can't get an index seek. Instead, you're forced to use a scan. I can demonstrate this pretty simply. Here's a script that sets up a test table with three columns and three indexes and tosses a million rows in:

CREATE TABLE dbo.ConvertTest (
     BigIntColumn BIGINT NOT NULL,
     IntColumn INT NOT NULL,
     DateColumn VARCHAR(30)
    );

CREATE INDEX BigIntIndex 
ON dbo.ConvertTest 
(BigIntColumn);
CREATE INDEX IntIndex 
ON dbo.ConvertTest 
(IntColumn);
CREATE INDEX DateIndex 
ON dbo.ConvertTest
(DateColumn);

WITH    Nums
          AS (SELECT TOP (1000000)
                        ROW_NUMBER() OVER (ORDER BY (SELECT 1
                                                    )) AS n
              FROM      master.sys.all_columns ac1
              CROSS JOIN master.sys.all_columns ac2
             )
    INSERT  INTO dbo.ConvertTest
            (BigIntColumn,
             IntColumn,
             DateColumn
            )
    SELECT  Nums.n,
            Nums.n,
            DATEADD(HOUR, Nums.n, '1/1/1900')
    FROM    Nums;

If I run a very simple query like this:

DECLARE @param DATETIME;

SET @param = '3/15/1963 8:00:00';

SELECT  ct.DateColumn
FROM    dbo.ConvertTest AS ct
WHERE   ct.DateColumn = @param;

I get an execution plan that looks like this:

convert_scan

The issue is, while I put dates into the DateColumn, the data type there is VARCHAR, not DATETIME. Passing a DATETIME value results in an implicit conversion that we can see in the Index Scan Predicate property:

ConverImplicitHurtsPerformance

This is a function against a column and that is what results in the scan. If I modify the code as follows:

DECLARE @param VARCHAR(30);

SET @param = '3/15/1963 8:00:00';

SELECT  ct.DateColumn
FROM    dbo.ConvertTest AS ct
WHERE   ct.DateColumn = @param;

I get an execution plan that looks like this:

convert_seek

No change in data type is needed, so the index can be used. It’s a very simple demonstration of how implicit conversions hurt performance. Now, what about the integer columns? What happens when I do this with the code:

DECLARE @param INT;

SET @param = 650323;

SELECT  ct.BigIntColumn
FROM    dbo.ConvertTest AS ct
WHERE   ct.BigIntColumn = @param;

Here’s the execution plan:

convert_intseek

And here is the predicate:

convert_predicate

In this case, we’re still getting an implicit conversion of the data type, but SQL Server is smart enough to realize that, hey, it’s still just an integer, so it can convert the @param instead of the column, resulting in a seek. If I change the test so that it runs a query against the INT column passing a BIGINT value, it doesn’t even do a conversion of any kind.

In short, not all implicit conversions are a problem. However, it’s still a best practice to use the right data type for columns. You should also use the same data type for your parameters and local variables as the column data type. Taking these steps avoids issues with implicit conversion.


For lots more information on query tuning, in August, I’ll be doing an all day pre-con at SQLServer Geeks Annual Summit in Bangalore India.

I’m also going to be doing an all day training course before SQL Saturday Oslo in September.

Apr 12 2016

A View Is Not A Table

Blog post #4 in support of Tim Ford’s (b|t) #iwanttohelp, #entrylevel

In SQL Server, in the T-SQL you use to query it, a view looks just like a table (I’m using the AdventureWorks2014 database for all these examples):

SELECT  *
FROM    Production.vProductAndDescription AS vpad;

 

SELECT  vpad.Name,
        vpad.Description,
        vpmi.Instructions
FROM    Production.vProductAndDescription AS vpad
JOIN    Production.Product AS p
        ON p.ProductID = vpad.ProductID
JOIN    Production.vProductModelInstructions AS vpmi
        ON vpmi.ProductModelID = p.ProductModelID
WHERE   vpad.ProductID = 891
        AND vpad.CultureID = 'fr';

The above query actually combines two views and a table. This is what is commonly referred to as a "code smell". A code smell is a coding practice that works, but that can lead to problems. In this case, we're talking about performance problems. The performance problems when using views to join to tables and other views as if they were real tables come about because a standard view is not a table. It's a query. For example, the second view introduced, vProductModelInstructions, looks like this:

ALTER VIEW [Production].[vProductModelInstructions] 
AS 
SELECT 
    [ProductModelID] 
    ,[Name] 
    ,[Instructions].value(N'declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"; 
        (/root/text())[1]', 'nvarchar(max)') AS [Instructions] 
    ,[MfgInstructions].ref.value('@LocationID[1]', 'int') AS [LocationID] 
    ,[MfgInstructions].ref.value('@SetupHours[1]', 'decimal(9, 4)') AS [SetupHours] 
    ,[MfgInstructions].ref.value('@MachineHours[1]', 'decimal(9, 4)') AS [MachineHours] 
    ,[MfgInstructions].ref.value('@LaborHours[1]', 'decimal(9, 4)') AS [LaborHours] 
    ,[MfgInstructions].ref.value('@LotSize[1]', 'int') AS [LotSize] 
    ,[Steps].ref.value('string(.)[1]', 'nvarchar(1024)') AS [Step] 
    ,[rowguid] 
    ,[ModifiedDate]
FROM [Production].[ProductModel] 
CROSS APPLY [Instructions].nodes(N'declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"; 
    /root/Location') MfgInstructions(ref)
CROSS APPLY [MfgInstructions].ref.nodes('declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"; 
    step') Steps(ref);

GO

That's a query against the XML stored in the ProductModel table. The view was created to mask the complexity of the necessary XPath code, while providing a mechanism for retrieving the data from the XML. This is a common use of views. However, when we then treat the view as a table, and join it to other tables and views, we present a problem for the optimizer. Because a view is not a table, but is instead a query, the optimizer has to resolve this query in combination with any other views or tables to arrive at an execution plan for the whole combined mess. While the optimizer is very good at what it does, the additional, unnecessary processing needed to figure out which parts of the view are not required to satisfy the query adds complexity, and the optimizer can make poor choices. That can result in poor performance.

If I were to rewrite the query, it would look something like this:

SELECT  p.Name,
        pd.Description,
        pm.Instructions.value(N'declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"; 
        (/root/text())[1]', 'nvarchar(max)') AS Instructions
FROM    Production.Product AS p
JOIN    Production.ProductModelProductDescriptionCulture AS pmpdc
        ON pmpdc.ProductModelID = p.ProductModelID
JOIN    Production.ProductDescription AS pd
        ON pd.ProductDescriptionID = pmpdc.ProductDescriptionID
JOIN    Production.ProductModel AS pm
        ON pm.ProductModelID = p.ProductModelID
CROSS APPLY Instructions.nodes(N'declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"; 
    /root/Location') MfgInstructions (ref)
CROSS APPLY MfgInstructions.ref.nodes('declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"; 
    step') Steps (ref)
WHERE   p.ProductID = 891
        AND pmpdc.CultureID = 'fr';

That’s a lot more complex than the query we had above that only referenced three objects and had only two JOIN operations. However, if you capture the I/O and the execution time for these queries, you’ll see a different story.

I used two methods for measuring performance. For one set of tests, I set SET STATISTICS IO and SET STATISTICS TIME to ON for the queries. For another set, I used Extended Events. Consistently, the execution time for the query with the view was around 110ms. The query that didn't reference any views was around 37ms. The reads were 155 for the query with views, but only 109 for the query without. If you look at the individual table I/O, you can start to see where the differences come from. These are the results from the query with the views:

Table ‘ProductDescription’. Scan count 0, logical reads 56
Table ‘ProductModelProductDescriptionCulture’. Scan count 28, logical reads 56
Table ‘xml_index_nodes_418100530_256001’. Scan count 13, logical reads 37
Table ‘ProductModel’. Scan count 0, logical reads 2
Table ‘Product’. Scan count 0, logical reads 4

These are the results for the query without the view:

Table ‘ProductDescription’. Scan count 0, logical reads 56
Table ‘xml_index_nodes_418100530_256001’. Scan count 13, logical reads 37
Table ‘ProductModelProductDescriptionCulture’. Scan count 6, logical reads 12
Table ‘ProductModel’. Scan count 0, logical reads 2
Table ‘Product’. Scan count 0, logical reads 2

You can see the differences in both ProductModelProductDescriptionCulture and Product. This is because of the differences in the execution plans caused by differences in the choices made by the optimizer.

A standard view is not a table. There is such a thing as a materialized, or indexed, view, which is persisted like a table. That's not what we're talking about here. While you can use a view as if it were a table, don't mistake it for a table. A view is just a mask in front of a query. It shouldn't be treated as an object that exists simply so you can avoid rewriting the same JOINs. That will lead to issues for the optimizer, as this simple set of examples showed. Don't shy away from using views, just understand what their real behavior is. A view is a query, not a table.

Apr 05 2016

Views and Simplification

I’ve been getting lots of questions on views lately. Must be something in the water.

Because SQL Server allows you to treat a view as if it were a table, lots of people pretty much assume that it is a table, since they get to treat it that way. The thing is, a view is not a table. It's a query. Let's explore this just a little bit. Here's a relatively straightforward view:

CREATE VIEW dbo.PersonInfo
AS
SELECT  a.AddressLine1,
        a.City,
        a.PostalCode,
        a.SpatialLocation,
        p.FirstName,
        p.LastName,
        be.BusinessEntityID,
        bea.AddressID,
        bea.AddressTypeID
FROM    Person.Address AS a
JOIN    Person.BusinessEntityAddress AS bea
        ON a.AddressID = bea.AddressID
JOIN    Person.BusinessEntity AS be
        ON bea.BusinessEntityID = be.BusinessEntityID
JOIN    Person.Person AS p
        ON be.BusinessEntityID = p.BusinessEntityID;
GO

I can query this view like this:

SELECT  *
FROM    dbo.PersonInfo AS pni
WHERE   pni.LastName LIKE 'Ran%';

The resulting execution plan looks like this:

viewSimple1

You don't even need to expand it for what I'm about to show. Now, let's modify the query against the view so that it only references a couple of the view's columns, rather than selecting everything.
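Something along these lines will do (the specific columns picked here are an assumption; the point is that columns from some of the underlying tables are no longer referenced):

SELECT  pni.FirstName,
        pni.LastName
FROM    dbo.PersonInfo AS pni
WHERE   pni.LastName LIKE 'Ran%';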

viewSimple2

Again, you can expand these, but you don’t need to. Notice, the first plan had four tables being referenced, which represent the four tables from the view. The second query only has two tables. This is because the optimizer looked at the query that the view represents, not simply the query that I used to call the view. It then recognized that simplification could be used to eliminate unnecessary JOIN operations from the execution plan and still get the same data because of foreign key constraints on the tables.

The important point to note is that the optimizer is absolutely not treating the view like a table. The optimizer is treating the view like a query, which is all it is. This has both positive and negative impacts when it comes to query performance tuning of this view. You could spend all sorts of time "tuning" the view, only to find all that tuning tossed out the window when a query doesn't reference a column in the view and that causes the optimizer to rearrange the plan. I don't want to convey that this is an issue. It's not. I'm just trying to emphasize the point that a view is just a query.

Now, when we get into treating a view exactly like a table in JOINs or calling a view from a view (known as nesting), then we’re talking about issues. I’ll put up another post on a JOIN and views.


For lots more information on query tuning, I’m presenting an all day pre-con at SQL Day in Wroclaw Poland on May 16.

Mar 07 2016

SQL Server Backups Are A Business Decision

Blog post #3 in support of Tim Ford’s (b|t) #iwanttohelp, #entrylevel

Read more about Tim’s challenge here.

It’s very easy to think of SQL Server backups as a technical problem. You have so much stuff going on, BACKUP DATABASE commands, recovery models, BACKUP LOG commands, Differential backups. Getting them all into the correct order and automating the processes sure seems like a technical problem. It isn’t. It’s all about the business. If you’re taking on the duties of a DBA whether you’re an accidental DBA, a reluctant DBA or you were voluntold into the DBA position, you need to plan to sit down with responsible parties from the business and get an understanding with them regarding RPO and RTO.

RPO is a TLA for Recovery Point Objective. The easiest way to describe RPO is to ask, “In terms of time, how much data are we willing to lose?” The immediate answer is always going to be zero. Here is where we have to be honest. You won’t be able to guarantee zero data loss (yeah, there are probably ways to do this, but #entrylevel). Talk with the business. Most of the time, you’ll find that they’d actually be OK with 15 minutes, or maybe 5 minutes, or even an hour of lost data. It really varies, not only from business to business, but from database to database within the business (allow for this flexibility). You need to establish this number. RPO is going to help you figure out how to set up your backups, your recovery model, your logs and their backups. All that stuff that seemed so technical, it’s all based on this extremely important number that you’re going to work with the business to arrive at.

Oh, but we're not done. Once you've managed to get the business comfortable (as comfortable as they can be) with the idea that they could lose data, you also have to prepare them for the idea that, in the event of a disaster, restoring the database from backups is not going to be instant. It's going to take time. This is where we have to define the RTO, or Recovery Time Objective. This is our goal for how quickly we can restore the database. RTO is not so much a negotiation with the business as it is an education for the business. You see, you can only restore so fast on your hardware. Further, the RESTORE DATABASE process is dependent on the size of the backups. Even further, it's dependent on the types of restore operations we're running and whether or not we use WITH RECOVERY in the RESTORE operations. You may have to test a few restores to get an idea how fast things are with your system. Regardless, the RTO has to be arrived at and agreed on. You may also have to readdress the RTO as the number, size and volume of your data changes over time. Be prepared for this as well.

With the RPO established, you can now decide on the recovery model. Let's take an example. If the business says that they can afford a day of data loss, depending on the size of your database, you can put this database into SIMPLE recovery, run a full backup once a day and walk away a winner. Another example: the business decides that it could live with up to 15 minutes worth of data loss. Now you have to go to FULL recovery and you have to set up log backups in addition to your full backups. Then you start to factor in the RTO. Let's say your outage occurs at 8PM and you run your backups at midnight. You now have to restore 20 hours worth of log backups. That can take a long time. So, in order to make the RTO as short as possible, you toss in a differential backup every day at noon. Now you'll only ever have to restore at most 12 hours worth of log backups, so you can define a rough RTO for the business.
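As a sketch of that second example (the database name and file paths are placeholders, and in practice these commands run on a schedule from SQL Agent jobs with time-stamped file names):

-- Once a day at midnight: full backup
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase_Full.bak';

-- Once a day at noon: differential, so a restore never needs more than ~12 hours of logs
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase_Diff.bak'
WITH DIFFERENTIAL;

-- Every 15 minutes: log backup to meet the 15 minute RPO (requires FULL recovery model)
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_Log_201607251015.trn';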

These simplified, and somewhat simplistic, examples are just the start of the process of figuring out how best to do your backups. However, that’s the technical part of the problem. The fundamental definitions that you have to have in order to start solving this technical issue are business decisions that you must get your business people involved with. Define the RPO and RTO, then start defining your recovery strategy.

Feb 22 2016

Why Is The Server Slow?

This is blog post #2 in support of Tim Ford’s (b|t) #iwanttohelp, #entrylevel.

If you haven’t been working in SQL Server for very long, you may not have got this phone call yet, but you will:

Hi, yeah, the server is slow. Thanks. Bye.

Let’s pretend for a moment that you know which server they’re referring to (because just finding out that piece of information can be a challenge). Now what?

The list of tools and mechanisms within SQL Server for gathering metrics is extremely long:

Performance Monitor
Dynamic Management Views & Functions
System Views
Extended Events
Trace Events
Activity Monitor
Data Collector
Execution Plans
3rd Party Tools

I’m leaving out lots of stuff in that list. So where do you start when you get this phone call? Where is the server slow?

The best place to start is by looking at the wait statistics.

A simple, and simplified, explanation for how processes work within SQL Server is that each process gets access to the various resources for a little while, then has to pass off access to another process, each of them working together to get the work done. But, some processes take longer than others. When a process has to wait for anything in SQL Server, this information gets logged. Depending on the system you’re working with you can access these wait statistics from one of three locations:

sys.dm_os_wait_stats: for SQL Server
sys.dm_db_wait_stats: for Azure SQL Database
sys.dm_pdw_nodes_os_wait_stats: for Azure SQL Data Warehouse

Getting information from these system views is extremely simple. Here’s an example query:

SELECT  *
FROM    sys.dm_os_wait_stats AS dows
ORDER BY dows.wait_time_ms DESC;

The results of this query look like this:

waits

The first column lists the wait types. The second column provides a count of the tasks that have had that wait type. The third column, wait_time_ms, is the amount of time in milliseconds that tasks have been waiting within the system, cumulative. Next is the maximum wait time that any one task has waited, max_wait_time_ms. Finally, in signal_wait_time_ms, you see the time that tasks have had to wait for access to the CPU (also known as time spent on the Runnable queue). The importance of a wait is not simply the time spent waiting, but also the number of tasks and the max time. Using all these values together gives meaning to the individual wait.

Here is where things actually get difficult. The wait types are arcane, difficult to understand, and difficult to interpret. Further, a lot of the wait types actually don’t mean anything at all. The waits are not indicative of an issue. So, while the query above is simple, the results it provides are very weak. Instead, I strongly recommend you use the query provided by Paul Randal, located here. It will filter out the wait statistics that you shouldn’t care about.

Now, you have a meaningful list of wait statistics that will tell you exactly why, if not where, your server is running slow. Unfortunately, these waits still need to be interpreted. If you read further on Paul’s blog, you’ll see he has a number of waits and their causes documented. That’s your best bet to start understanding what’s happening on your system (although, I hear, Paul might be creating a more complete database of wait stats. I’ll update this blog post should that become available).

One other thing to consider: these waits are cumulative since the last time the server was started (or failed over, or the database was failed over in Azure, or the values were reset). This means that simply looking at the list doesn't necessarily give you enough information. Instead, running this more than once during a day can show you what's been slow over time by comparing the two data sets. You can also use sys.dm_exec_session_wait_stats to see what any given session is experiencing, if you want to know about a specific user or process. Don't just look at the list and think you're done. The best thing to do is get to a point of proactive monitoring (for this, monitoring tools make it easier).
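A rough sketch of that compare-over-time approach (the table name and the capture schedule are arbitrary choices here): snapshot the view into a table, then diff the current values against it later.

-- First capture: snapshot the cumulative wait stats into a table
SELECT  GETUTCDATE() AS capture_time,
        dows.wait_type,
        dows.waiting_tasks_count,
        dows.wait_time_ms,
        dows.signal_wait_time_ms
INTO    dbo.WaitStatsSnapshot
FROM    sys.dm_os_wait_stats AS dows;

-- Later in the day: see which waits grew the most since the snapshot
SELECT  dows.wait_type,
        dows.wait_time_ms - wss.wait_time_ms AS wait_time_ms_delta,
        dows.waiting_tasks_count - wss.waiting_tasks_count AS task_count_delta
FROM    sys.dm_os_wait_stats AS dows
JOIN    dbo.WaitStatsSnapshot AS wss
        ON wss.wait_type = dows.wait_type
ORDER BY wait_time_ms_delta DESC;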

So, when you get the phone call that says the server is slow, you know how to get started understanding exactly why that may be the truth.