Aug 22 2016

Run For the PASS Board

When the word comes around for elections to the PASS Board, are you one of those people who, even in passing, considers running? If you are, my advice is to, well, do it.

By the way, here’s that word.

In order to make this a stronger, even more vibrant organization that continues to provide support to its members, we need more good leadership. A healthy set of choices, meaning lots of people, is necessary. That means we need you to run. There is no commitment required immediately. We’ve just started the process. What I’m asking you to do now, is start thinking about running, and thinking about it in a positive fashion. Track down a board member, or former board member, and find out what they think of the experience. You can ask them what they didn’t like, sure. More importantly, ask them what they got out of the experience. I’ve talked to a lot of former board members (and did so before I ran). Each can tell you a horror story. However, every single one of them also talked about the positive aspects of their time on the board.

Personally, my first year, while a learning experience (and we all know what they’re like), was unremittingly positive. I’m excited about what I got done while I was on the board. I’m excited about what I may get done. I’m VERY excited about my new role. My positive experience comes from two places. First, I’ve been able to make a useful impact on the organization that has done so much for me personally, paying forward some of what I’ve received. Next, I’ve been working on the skills needed to take part in running a very large organization, which is hugely personally beneficial. I won’t tell you that everything has been easy. It hasn’t. This is work, and extra work, on top of your job, family, career, etc. However, it has been, and continues to be, a rewarding experience.

I’m asking you, as Grant, please, run for the PASS Board.

Aug 17 2016

Updates to Azure Data Platform Learning Resources

I’m acting as the curator to a list of Azure Data Platform Learning Resources. It’s speakers and teachers, blogs, articles and more, all focused around the Azure Data Platform. I have it hosted on GitHub so that anyone can contribute. There have been several updates recently that you may want to take a look at.

If you’re looking for where and how to get started in the Azure Data Platform, this is a great resource to get you going.

If you’re teaching the Azure Data Platform, anything from PowerBI to DocumentDB to Azure SQL Data Warehouse and all the stuff in between, please add your information so that this list is up to date.

Aug 15 2016

Query Store, Force Plan and Dropped Objects

I love the Query Store. Seriously. It’s a huge leap forward in the capabilities of Azure SQL Database and SQL Server in support of performance monitoring and query optimization. One of my favorite aspects of the Query Store is the ability to force plans. Frankly though, it’s also the scariest part of the Query Store. I do believe that plan forcing will be one of the most ill-used functions in SQL Server since the multi-statement table-valued user-defined function (don’t get me started). However, unlike the UDF, this ill-use will be because of poor understanding on the part of the user, not a fundamental design issue. No, plan forcing and the Query Store are very well constructed. Let me give you an example of just how well constructed they are.

Let’s imagine that you have a situation, such as bad parameter sniffing, where you’ve determined that, of the multiple possible execution plans for a query against a table, there is one preferred plan. Enabling plan forcing to ensure that plan gets used is a no-brainer. Let’s further imagine that you have a junior DBA who is… let’s just say overly aggressive in their duties, such that they do silly things occasionally. What happens when your pretty plan, which uses a particular index, meets your junior DBA who just dropped that index?

Here’s the setup. We’re using the WideWorldImporters database and we have this query:

SELECT  *
FROM    Warehouse.StockItemTransactions AS sit
WHERE   sit.TransactionOccurredWhen BETWEEN '9/9/2015'
                                    AND     '9/11/2015';

This query, with the default configuration, will scan the existing table, so I’ll add an index:

CREATE INDEX TransactionOccurredWhenNCI
ON Warehouse.StockItemTransactions
(TransactionOccurredWhen);

For a limited range such as the one I’m passing above, I’ll get a plan with a key lookup operation which runs faster than the scan, so I’m happy. For a broader range, I’m likely to see a scan again, but since most of my queries have a very narrow range, I’d sure like to be able to force the plan to always compile to the seek and key lookup. To do this I need to find the query_id and plan_id from the Query Store (assuming I’m not using the GUI):

SELECT  qsp.plan_id,
        qsp.query_id,
        qsqt.query_sql_text,
        qsp.count_compiles
FROM    sys.query_store_plan AS qsp
JOIN    sys.query_store_query AS qsq
        ON qsq.query_id = qsp.query_id
JOIN    sys.query_store_query_text AS qsqt
        ON qsqt.query_text_id = qsq.query_text_id
WHERE   qsqt.query_sql_text LIKE 'SELECT  *
FROM    Warehouse.StockItemTransactions AS sit%';

With those values, I can force the execution plan so that it will always use the plan I want:

EXEC sys.sp_query_store_force_plan 42460,463;

That’s it. I’m happy because I’m going to see the execution plan used over and over, despite any values passed during a recompile.
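
If you want to confirm that the forcing took, sys.query_store_plan exposes it directly. A quick check, using the query_id from the forcing call above (a sketch; adjust the ids for your own system), might look like this:

-- is_forced_plan should now show 1 for the plan we forced
SELECT  qsp.plan_id,
        qsp.query_id,
        qsp.is_forced_plan,
        qsp.last_force_failure_reason_desc
FROM    sys.query_store_plan AS qsp
WHERE   qsp.query_id = 42460;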

Then…

Along comes our aggressive junior DBA who decides that there are “too many” indexes on the server. No, I don’t know what that means either, but they evidently read it on the internet or something so they drop the index we created before:

DROP INDEX TransactionOccurredWhenNCI ON Warehouse.StockItemTransactions;
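
Before we look at what happens next, here’s roughly how I’m watching it: a minimal sketch of an Extended Events session covering the two events discussed below, with causality tracking turned on (the session name and file target are my own choices):

-- Sketch of a session to capture recompiles and plan forcing failures
CREATE EVENT SESSION QueryStoreForcing ON SERVER
ADD EVENT sqlserver.sql_statement_recompile,
ADD EVENT sqlserver.query_store_plan_forcing_failed
ADD TARGET package0.event_file (SET filename = 'QueryStoreForcing')
WITH (TRACK_CAUSALITY = ON);
GO

ALTER EVENT SESSION QueryStoreForcing ON SERVER STATE = START;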

What now happens to our lovely execution plan and the plan forcing? We’ll take a look at two events in Extended Events, sql_statement_recompile and query_store_plan_forcing_failed. Nothing happens immediately on dropping the index. The plans associated with that object, if any, are marked as invalid in the cache. The next time we call the query it’s going to recompile and we can see the event:

[Figure: sql_statement_recompile event details]

The most important part of the event is the recompile_cause, which is “Schema changed”. However, I would also note the attach_activity_id.guid. I’ve chosen to enable causality tracking in this Extended Events session. This will cause all events associated with a common activity to get a GUID and then a sequence. This is interesting because, after the recompile event, we get the query_store_plan_forcing_failed event:

[Figure: query_store_plan_forcing_failed event details]

The guid value is the same as in the event above and the accompanying .seq number is now 2, showing that, for these events, the recompile event occurred first and then this event occurred. That makes perfect sense. The plan is marked for recompile, so it’s going to be recompiled. I have enabled plan forcing though, so I have a particular plan that I want the optimizer to use. However, thanks to my “helpful” junior DBA, the plan is now invalid. You even get a description of what happened in the message field for the event:

Index ‘WideWorldImporters.Warehouse.StockItemTransactions.TransactionOccurredWhenNCI’, specified in the USE PLAN hint, does not exist. Specify an existing index, or create an index with the specified name.

The first question now is, what happens with this query and the execution plan? Does the new plan generated now that the index is missing get stored in cache? Yes, it does. We can validate that by querying the cache, or, when capturing the actual execution plan, checking the “Retrieved from cache” property.
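
One way to do that validation against the cache (the LIKE filter on the text is just my shortcut for finding the statement) might be:

-- Find the newly cached plan for our query
SELECT  deqs.execution_count,
        deqp.query_plan,
        dest.text
FROM    sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
WHERE   dest.text LIKE '%StockItemTransactions%';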

Because plan forcing is enabled, do we see a recompile every time this query is called? The answer to that question is slightly complex. Under normal circumstances, no. As long as that plan remains in cache, it’s simply reused. No other recompiles occur. A normal recompile event will cause another attempt at applying the invalid execution plan and we would see yet another query_store_plan_forcing_failed event for each recompile on the query. However, during testing, Joey D’Antoni (who was helping me play with this when we discussed what would happen when a plan was made invalid) had severe memory pressure on his server. He saw intermittent recompiles with a cause message that said plan forcing had failed. So if your server is under extreme stress and you cause this issue, you might see different messages. Just remember, the cause of the recompiles was not the plan forcing, but the memory pressure.

The fun thing is, as long as I don’t remove the plan forcing or take the query and plan out of the Query Store manually, if I recreate the index on my table with the same name and definition as the plan expects, the Query Store will simply reapply the plan and successfully force it during any subsequent recompile. This is because the Query Store is persisted with the database and, barring outside activity, the information there will remain, just like the rest of the data in the database.
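
For reference, the “outside activity” I’m talking about would be something along these lines, using the query_id and plan_id from earlier (shown here mostly so you know what to avoid doing by accident):

-- Stop forcing the plan, but leave it in the Query Store
EXEC sys.sp_query_store_unforce_plan 42460, 463;

-- Or remove the plan from the Query Store entirely
EXEC sys.sp_query_store_remove_plan 463;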

All of this means that Query Store works exactly the way we would expect, not forcing additional recompiles when you, or your junior DBA, inadvertently invalidate a plan. It also works as expected in that a forced plan is stored with your database so that, assuming you don’t remove that plan from the Query Store, it will simply be reapplied after you fix the problem. It’s fun to see the thought that went into the design of the behavior of Query Store. However, please, use plan forcing judiciously.

Aug 09 2016

Azure SQL Database For Your First Database

This is post #8 supporting Tim Ford’s (b|t) initiative on #iwanttohelp, #entrylevel. Read about it here.

In post #6, I talked about Azure SQL Database as one of the choices you have when you’re picking the type of SQL Server instance you want to run. I want to expand on why you should be considering moving into Azure SQL Database at the start of your career and some of the important differences you’ll have to be aware of as you get going.

Since you are right at the start of your career, you may as well plan on maximizing the life of the knowledge and skills you’re building. By this, I mean spend your time learning the newest and most advanced software rather than older technology. Is there still work for people who only know SQL Server 2000? Sure. However, if you’re looking at the future, I strongly advocate for going with online, cloud-based systems. This is because, more and more, you’re going to be working with online, connected applications. If the app is in the cloud, so should the data be. Azure and the technologies within it are absolutely the cutting edge today. Spending your limited learning time on this technology is an investment in your future.

The important point to remember when you start to look at something like Azure SQL Database is that, superficial resemblance aside, it’s not actually SQL Server. Oh sure, in many ways it is. Those of us who have been teaching and supporting it for years emphasize the similarities. You know how to create a table in SQL Server? Great, same syntax in Azure SQL Database. Same thing for so much of the behavior of Azure SQL Database. However, at its core, Azure SQL Database is not the same as SQL Server. It’s a platform as a service offering. This means you must change the way you think about what you’re going to implement on it and how you’re going to implement it.

Take one example: backups. Since Azure SQL Database is a managed service, you don’t have access to the same type of functionality for backups as you do with regular SQL Server. That’s not to say that you can’t get backups. First, Microsoft itself has backups running on your databases for you. You have the capability for point-in-time recovery built into the service directly. You can also ensure protection of your databases through geo-replication. Further, if you really do want to take a backup for some reason, you can. It’s not through the traditional BACKUP DATABASE command, but it’s still a fully-recoverable copy of your data and structure through a BACPAC (just make sure you create a copy of your database first). This does mean that you can’t back up your Azure SQL Database through traditional means and then run a RESTORE command locally, but that isn’t how you should be working with Azure SQL Database anyway. Remember, it’s a platform as a service and is therefore not the same. This is just one example of several in how you have to think about Azure SQL Database differently.
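
As an example, creating that copy is a single T-SQL statement run in the master database of your Azure SQL Database server (the database names here are made up); the BACPAC export itself is then done from the copy through the portal, PowerShell, or SqlPackage:

-- Create a transactionally consistent copy to export from
CREATE DATABASE MyDatabase_Copy AS COPY OF MyDatabase;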

Expect that some of your assumptions will have to be questioned as you move into this new space. It does require a modification in your approach, but one that, as someone just starting your career, you’re well positioned to make.

Aug 01 2016

There Is No Difference Between Table Variables, Temporary Tables and Common Table Expressions

I actually saw the above statement posted online. The person making the claim further stated that choosing between these three constructs was “personal preference” and didn’t change at all the way SQL Server would choose to deal with them in a query.


Let’s immediately say, right up front, the title is wrong. Yes, there are very distinct differences between these three constructs. Yes, SQL Server will absolutely deal with these three constructs in different ways. No, picking which one is correct in a given situation is not about personal preference, but rather about the differences in behavior between the three.

To illustrate just a few of the differences between these three constructs, I’ll use variations of this query:

SELECT * FROM Sales.Orders AS o
JOIN Sales.OrderLines AS ol
ON ol.OrderID = o.OrderID
WHERE ol.StockItemID = 227;

The execution plan for this query looks like this:

[Figure: execution plan for the original query]

The number of reads is 1,269 and the duration is around 234ms on average.

Let’s modify the query to use a table variable. Note, I do include a primary key with the table variable which can be used by the optimizer to make decisions based on unique values.

DECLARE @OrderLines TABLE
(OrderLineID INT NOT NULL PRIMARY KEY,
OrderID INT NOT NULL,
StockItemID INT NOT NULL,
Description NVARCHAR(100) NOT NULL,
PackageTypeID INT NOT NULL,
Quantity INT NOT NULL,
UnitPrice DECIMAL(18,2) NULL,
TaxRate DECIMAL(18,3) NOT NULL,
PickedQuantity INT NOT NULL,
PickingCompletedWhen DATETIME2 NULL,
LastEditedBy INT NOT NULL,
LastEditedWhen DATETIME2 NOT NULL);

INSERT @OrderLines
        (OrderLineID,
         OrderID,
         StockItemID,
         Description,
         PackageTypeID,
         Quantity,
         UnitPrice,
         TaxRate,
         PickedQuantity,
         PickingCompletedWhen,
         LastEditedBy,
         LastEditedWhen
        )
SELECT *
FROM Sales.OrderLines AS ol
WHERE ol.StockItemID = 227;

SELECT * FROM Sales.Orders AS o
JOIN @OrderLines AS ol
ON ol.OrderID = o.OrderID
WHERE ol.StockItemID = 227;

I’m not concerned with how long it takes the data to load, only the behavior of the query after I load the data. Here’s the execution plan:

[Figure: execution plan using the table variable]

Not much to say. Clearly it’s different from the regular query, but that shouldn’t be a shock since we’re dealing with different tables. Overall, the number of reads goes to 1,508 because we’re messing with the data twice, and performance for the whole process is about 260ms. Breaking it down by statement within the batch, so that we can get a very fair comparison, the active part of the query we’re concerned with, the JOIN between the table and the table variable, runs in about 250ms and has only 356 reads.

Modifying the query again for temporary tables, it looks like this:

CREATE TABLE #OrderLines
(OrderLineID INT NOT NULL PRIMARY KEY,
OrderID INT NOT NULL,
StockItemID INT NOT NULL,
Description NVARCHAR(100) NOT NULL,
PackageTypeID INT NOT NULL,
Quantity INT NOT NULL,
UnitPrice DECIMAL(18,2) NULL,
TaxRate DECIMAL(18,3) NOT NULL,
PickedQuantity INT NOT NULL,
PickingCompletedWhen DATETIME2 NULL,
LastEditedBy INT NOT NULL,
LastEditedWhen DATETIME2 NOT NULL);

INSERT #OrderLines
        (OrderLineID,
         OrderID,
         StockItemID,
         Description,
         PackageTypeID,
         Quantity,
         UnitPrice,
         TaxRate,
         PickedQuantity,
         PickingCompletedWhen,
         LastEditedBy,
         LastEditedWhen
        )
SELECT * FROM Sales.OrderLines AS ol
WHERE ol.StockItemID = 227;

SELECT * FROM Sales.Orders AS o
JOIN #OrderLines AS ol
ON ol.OrderID = o.OrderID
WHERE ol.StockItemID = 227;

DROP TABLE #OrderLines;

The new execution plan looks like this:

[Figure: execution plan using the temporary table]

Don’t go getting all excited. I recognize that these two plans look similar, but they are different. First, let me point out that we have more reads, 1,546, and an increase in duration to 273ms. This comes from two places. First, we’re creating statistics on the data in the temporary table where none exist on the table variable. Second, because I want to run this script over and over, I’m including the DROP TABLE statement, which is adding overhead that I wouldn’t see if I treated it like the table variable (which I could, but not here). However, breaking it down to the statement level, I get a 250ms duration, just like with the table variable, but I see 924 reads.

What’s going on?

Note first the estimated costs between the two exec plans, 50/50 for the query with the table variable and 2/98 for the temporary table. Why? Well, let’s compare the two plans (and yeah, I LOVE the new SSMS plan compare functionality). Specifically, let’s look at each Clustered Index Scan operation. There are a number of differences, but the most telling is right here:

[Figure: plan comparison of the Clustered Index Scan properties]

On the left is the temporary table. On the right is the table variable. Note the TableCardinality values. The table variable shows zero because there are no statistics, despite the table having been created with a primary key. In this case, it doesn’t make an appreciable difference from a pure performance standpoint (250ms versus 250ms), but you can clearly see the differences in behavior.

Oh, and the CTE? It had the same execution plan as the original query because a CTE is not a table, it’s an expression.
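
If you want to check that for yourself, the CTE version of the query is a straightforward rewrite (the CTE name is mine):

WITH OrderLinesCTE
AS (SELECT *
    FROM Sales.OrderLines AS ol
    WHERE ol.StockItemID = 227)
SELECT * FROM Sales.Orders AS o
JOIN OrderLinesCTE AS ol
ON ol.OrderID = o.OrderID
WHERE ol.StockItemID = 227;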

In short, yes, there are very distinct differences in behavior between a table variable, a temporary table, and a common table expression. These are not constructs that are interchangeable on a whim. You need to understand what each does in order to use each appropriately.


Next week I’ll be talking about execution plans and query tuning at an all-day pre-con at the SQLServer Geeks Annual Summit in Bangalore, India.

Don’t miss your chance to attend an all-day training course on execution plans before SQL Saturday Oslo in September.

Jul 25 2016

Monitor Query Performance

Blog post #7 in support of Tim Ford’s (b|t) #iwanttohelp, #entrylevel. Read about it here.

Sooner or later when you’re working with SQL Server, someone is going to complain that the server is slow. I already pointed out the first place you should look when this comes up. But what if they’re more precise? What if you know, or at least suspect, that you have a problem with a query? How do you get information about how queries are behaving in SQL Server?

Choices For Query Metrics

It’s not enough to know that you have a slow query or queries. You need to know exactly how slow they are. You must measure. You need to know how long they take to run and you need to know how many resources are used while they run. You need these numbers so that, after you do something to try to help the query, you can tell whether or not you’ve improved performance. To measure the performance of queries, you have a number of choices. Each choice has positives and negatives associated with it. I’m going to run through my preferred mechanisms for measuring query performance and outline why. I’ll also list some of the other mechanisms you have available and tell you why I don’t like them. Let’s get started.

Dynamic Management Views

Since SQL Server 2005, Dynamic Management Views (DMVs) and Functions (DMFs) have been available for accessing all sorts of information about the server. Specifically, there are a few DMVs that are focused on queries and query performance. If you go back through my blog, you can find tons of examples where I illustrate their use. You can also see them at work in commercial tools and free tools. Adam Machanic’s sp_WhoIsActive, a free tool, makes extensive use of DMVs. To learn more about DMVs, you can download a free book, Performance Tuning with SQL Server Dynamic Management Views. DMVs are available in Azure SQL Database, Azure SQL Data Warehouse, and all editions of SQL Server.

The information captured by DMVs is an aggregation of all the times the query has been run. This means you can’t find out how long the query ran at 3PM yesterday. You can, though, see the minimum and maximum time the query took, as well as the average. The ability to see this information is what makes DMVs useful. However, another important point about DMVs is that they only collect information while a query is in memory. As soon as it leaves the cache (the area of memory it is stored in), so does all the aggregated information about the query in the DMVs.
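
For example, a query along these lines (one variation of many) pulls those aggregated numbers straight from sys.dm_exec_query_stats:

-- Aggregated runtime metrics for queries currently in cache
-- (elapsed times are reported in microseconds)
SELECT TOP 10
        dest.text,
        deqs.execution_count,
        deqs.min_elapsed_time,
        deqs.max_elapsed_time,
        deqs.total_elapsed_time / deqs.execution_count AS avg_elapsed_time,
        deqs.total_logical_reads
FROM    sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY deqs.total_elapsed_time DESC;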

You use the DMVs for a general understanding of how a query is behaving. They’re not meant for detailed, long-term collection of information about queries. For that we use other tools.

Extended Events

Introduced in SQL Server 2008, Extended Events (ExEvents) are a mechanism for capturing detailed information about SQL Server and the processes within. One of those processes is how queries behave. I have multiple examples on this blog on using ExEvents. You can’t go wrong reading about them on Jonathan Kehayias’ blog. Extended events are available in Azure SQL Database and all editions of SQL Server.

When you need to know every query against a database, or each time a particular query is called, and all the details associated with the query (reads, writes, duration), ExEvents are the way to go. ExEvents are very lightweight on the server (but not free) and can be filtered so that you capture just the information you need. The information is detailed and not aggregated. Instead, it’s raw. The real issue with capturing this data is the amount of data you’ll be capturing. Testing and careful filtering to ensure you’re not dealing with too much information are important. Prior to SQL Server 2012, there was no graphical user interface for reading ExEvent data, so you would have been forced to run queries against the XML that the information is captured within. With the tools available in SQL Server Management Studio, this is no longer the case.
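
A minimal sketch of that kind of session might look like the following (the database name filter and the file target are assumptions you’d change for your own system):

-- Capture completed batches and RPCs for one database
CREATE EVENT SESSION QueryPerformance ON SERVER
ADD EVENT sqlserver.rpc_completed
    (WHERE (sqlserver.database_name = N'WideWorldImporters')),
ADD EVENT sqlserver.sql_batch_completed
    (WHERE (sqlserver.database_name = N'WideWorldImporters'))
ADD TARGET package0.event_file (SET filename = 'QueryPerformance');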

You use ExEvents when you need specific and detailed information about a query. ExEvents are not so good for generalized monitoring.

Query Store

Introduced in Azure SQL Database, and first released in SQL Server with 2016, Query Store is another mechanism for capturing aggregated information about queries. As before, I have examples on how to work with Query Store on my blog. You can also find quite a bit on it over at Simple-Talk. Query Store is pretty specialized still and only available in Azure and SQL Server 2016, but it is in all editions of SQL Server 2016.

Query Store captures information similar to what is available in the DMVs. However, unlike the DMVs, the information that Query Store captures is kept around, even after a query ages out or is removed from cache. This persistence makes Query Store very exciting. You do have to choose to turn it on for each database you wish to capture queries for. It’s not automatic like the DMVs. The capture processes are asynchronous, so they should be relatively lightweight for most databases.
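
Turning it on for a database is a single statement; the database name here is just the sample database used elsewhere on this blog, and the options shown are only examples of what you can adjust:

ALTER DATABASE WideWorldImporters
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE, QUERY_CAPTURE_MODE = AUTO);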

You use the Query Store when you need to capture query metrics over the long term, but you don’t need detailed information and aggregated data works well for you.

Others

There are other ways to measure query performance. You can use the Profiler GUI, but that has a serious negative impact on the server. You can bring a server down by using it, so it should be avoided. Profiler can also generate scripts for server-side traces, which can be used to monitor your server. However, traces have a much higher impact than ExEvents and they’re on the deprecation list. Microsoft is not adding new trace events for new functionality, so traces become less and less useful with each release. You also can’t use trace against Azure. If you’re writing a query and you just want to see how long it takes to run, you can use SET STATISTICS TIME ON to capture the execution time. This is a handy way to quickly measure performance. There is also the ability to capture reads and writes using SET STATISTICS IO ON, but, while this does capture the metrics we need, it adds considerable overhead to the query, skewing performance measurement. This is why I stick to ExEvents when I need an accurate measure.
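
As a quick illustration of the SET STATISTICS TIME approach mentioned above (the query is just the WideWorldImporters example used earlier; parse/compile and elapsed times show up on the Messages tab):

SET STATISTICS TIME ON;

SELECT * FROM Sales.Orders AS o
JOIN Sales.OrderLines AS ol
ON ol.OrderID = o.OrderID
WHERE ol.StockItemID = 227;

SET STATISTICS TIME OFF;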

Conclusion

Honest people can disagree about the best way to capture query performance. I have my preferences as you can see. However, I’m fairly certain that everyone would agree that it’s important to know how to capture performance metrics in order to be able to assert that performance has increased or decreased in a measured fashion. You don’t want to guess at query performance, you want to know.

Jul 18 2016

Common Table Expression, Just a Name

The Common Table Expression (CTE) is a great tool in T-SQL. The CTE provides a mechanism to define a query that can be easily reused over and over within another query. The CTE also provides a mechanism for recursion which, though a little dangerous and overused, is extremely handy for certain types of queries. However, the CTE has a very unfortunate name. Over and over I’ve had to walk people back from the “Table” in Common Table Expression. The CTE is just a query. It’s not a table. It’s not providing a temporary storage space like a table variable or a temporary table. It’s just a query. Think of it more like a temporary view, which is also just a query.

Every time I explain this, there are people who don’t believe me. They point to the “Table” in the name, “See. Says so right there. It’s a table.”

It’s not and I can prove it. Let’s create a relatively simple CTE and use it in a query:

WITH    MyCTE
          AS (SELECT    c.CustomerName,
                        cc.CustomerCategoryName
              FROM      Sales.Customers AS c
              JOIN      Sales.CustomerCategories AS cc
              ON        cc.CustomerCategoryID = c.CustomerCategoryID
              WHERE     c.CustomerCategoryID = 4)
    SELECT  *
    FROM    MyCTE;
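
For reference, the “query within the CTE” is just the inner SELECT run on its own:

SELECT  c.CustomerName,
        cc.CustomerCategoryName
FROM    Sales.Customers AS c
JOIN    Sales.CustomerCategories AS cc
ON      cc.CustomerCategoryID = c.CustomerCategoryID
WHERE   c.CustomerCategoryID = 4;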

Now, I’m going to run the query within the CTE and the CTE together as two statements in a batch and capture the execution plans:

[Figure: execution plans for the CTE and the standalone query]

On the top, the CTE, on the bottom, the query. You’ll note that the execution plans are identical. They each have the exact same Query Plan Hash value in the properties, 0x88EFD2B7C165E667, even though they have different Query Hash values, 0x192FFC125A08CC35 and 0xFEB7F2BCAC853CD5, respectively. Further, if I capture the query metrics using extended events, I get identical reads and, on average, identical execution times:

[Figure: duration and reads captured through extended events]

This is because there is no table being created. The data is not treated differently. A CTE is just a query, not some type of temporary storage.

Heck, let’s do one more thing. Let’s use the latest SSMS plan comparison tool and highlight one of the operators to see what differences there are internally in the plan:

 

[Figure: SSMS plan comparison with an operator highlighted]

I don’t see a lot of differences. In fact, I don’t see any. That’s because the optimizer recognizes these two queries as identical. If it were loading data into temporary storage, you would see differences in something. We don’t. This is because, despite the somewhat unfortunate attention that gets paid to the Table portion of the name, the emphasis in Common Table Expression should be on the word Expression.

I will point out an interesting difference, especially useful for those who plug in CTEs everywhere, whether it’s needed or not. Let’s look at the properties of the two plans:

[Figure: properties of the two execution plans]

You can see the similarities and differences that I pointed out earlier in the Statement, Query Hash and Query Plan Hash, as well as the Estimated Subtree Cost and others. What’s truly interesting is that the CompileCPU, CompileMemory and CompileTime for the CTE are higher than for the regular query. While the CTE is just a query, it’s a query that adds a non-zero overhead when used, and therefore should only be used where appropriate (good gosh, I’ve seen people put it EVERYWHERE, on every single query, don’t do that).

Hopefully, this is enough to establish, truly, completely, and thoroughly, that the Common Table Expression is an expression, not a table.

Yeah, I did this before, but it keeps coming up, so I tried a different approach. Let’s see if the word gets out. Your Common Table Expression is not a table.


I love talking about execution plans and query tuning. I’ll be doing this at an all-day pre-con at the SQLServer Geeks Annual Summit in Bangalore, India.

Don’t miss your chance to attend an all-day training course on execution plans before SQL Saturday Oslo in September.

Jul 11 2016

Same Query, Different Servers, Different Performance. Now What?

Based on the number of times I see this question on forums, it must be occurring all the time. You have two different servers that, as far as you know, are identical in terms of their options and setup (although not necessarily in terms of power, think a test or pre-production system versus production). On these servers you have a database on each that, as far as you know, is the same as the other in terms of options, objects, maybe even data (although, this does mean that you have unmasked production information in your QA environment, which potentially means you’re going to jail, might want to address this, especially now that I’ve told you about it, mens rea, you’re welcome). On each database you run, as far as you know, the exact same query (whether a straight up ad hoc query, stored procedure, parameterized query, whatever). On your production server, let’s call it PROD, the query runs for 3 hours before it returns, but on the test server, let’s call it TEST, it returns in about 3 seconds.

What. The. Hell.

This is especially troubling because PROD has 8 processors that are much more powerful than TEST, which has only 2 (either can go parallel, so this is the same). TEST only has 16GB of RAM and PROD has 64GB. Further, you know that the databases, data, and statistics are the same because you restored the TEST database from PROD (although, see that jail thing above). However, you’re on top of things. You capture the query metrics so you know exactly the amount of time and the reads or writes from each query and you even do this using extended events so you’re minimizing observer effects. You also have monitoring in place, so you know that there is no blocking in PROD. The query is just a pig there, but not on TEST. You’ve even gone so far as to get an estimated execution plan from each server and they are absolutely different.

Now what?

Well obviously something somewhere is different. Start by comparing everything on both servers and both databases down to… hang on, here, we’ll write a PowerShell script and then….

Wait, wait, wait!

You have the execution plans? Before we start digging through all the properties everywhere and comparing everything to everything, updating statistics 14 times, and all the rest, what if we look at the execution plans? They’re different, so we should start looking at scans & indexes & statistics &….

Wait!

Let’s start simple and clear. Let’s take a look at the properties of the first operator:

[Figure: properties of the first operator]

This contains a ton of information about the settings of the systems where each plan was generated. Let’s assume that, probably, the databases are the same, as are the data and the statistics, but the problem is a system or database setting. These differences can result in different execution plans and therefore different execution times. If we are dealing with two different servers and we are fairly sure the data, structure and the statistics are the same, the properties of the first operator are a great starting point for understanding what went wrong.

Oh, and the first operator is this one with the red square around it:

[Figure: execution plan with the first operator highlighted]

Let’s use the new SSMS Compare Showplan to see the differences between our two execution plans that we captured:

[Figure: first operator properties compared in SSMS]


There are a bunch of differences highlighted, but one ought to jump out pretty quick. That’s right, these two databases have different compatibility levels which resulted in one of them using the new optimizer and one using the old optimizer as evidenced by the CardinalityEstimatorModelVersion. Fun point, both have identical query hash values. Not surprising, but additional, quick, validation that we really are talking about the same query on each server (and one of them isn’t using a trace flag to change the cardinality estimator). You now have enough information to go and make some changes to your system without lots of further fumbling and work.
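
If the compatibility level does turn out to be the difference, checking and aligning it is straightforward (the database name is a placeholder and 130 is just an example value; pick the level you’ve actually tested against):

-- See what each database is currently set to
SELECT name, compatibility_level
FROM sys.databases;

-- Align the database in question with the other server
ALTER DATABASE WideWorldImporters SET COMPATIBILITY_LEVEL = 130;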

When dealing with the same query from two servers that ought to result in similar behavior, but doesn’t, get the execution plans (estimated plans are fine here) and compare the properties of the first operator. That’s the quickest way to identify the issues that could be leading to the differences between the servers.


Want to talk more about execution plans and query tuning? Let’s do it.

In August, I’ll be doing an all-day pre-con at the SQLServer Geeks Annual Summit in Bangalore, India.

I’m also going to be in Oslo, Norway for a pre-con before SQL Saturday Oslo in September.

Jul 05 2016

Carpenters vs. DBAs

Let’s get the caveat out of the way up front, I work for a tool vendor.

If you look around at the tools landscape for the Microsoft Data Platform, it’s really interesting. There are a few tools vendors, primarily clustered around monitoring tools, and then there are a bunch of point tools for helping with various aspects of operations against the Data Platform (mostly SQL Server). Some of these are free tools. Some are pay only. Some are a mix. There is variability in the quality of these tools, and I’m sure not going to comment on that. Instead, I find one thing really interesting.

Let’s step back a bit. My neighbors have both worked as carpenters (well, one carpenter, and one general contractor who also does carpentry). They both go out of their way to ensure that their basic tool set is what they consider the best (want to start a fight, ask about hammers, it’s fun). They pay for these basic tools themselves. They also might pay for some of the smaller power tools themselves. The larger tools will be supplied at the work site.

Compare this with your average DBA or Database Developer. Some of the tools available for the Data Platform are clearly “work site” tools, especially the monitoring tools. These must be supplied by the organization (gods above & below, the last thing we want is each DBA or data pro to bring in a different monitoring tool). Then there are the “hand tools” of the Data Platform pro. Interestingly enough, many of these support a floating license such that you could purchase the tool and then “carry” it with you from job site to job site, like a carpenter. Instead, most every Data Pro I know will insist that the company has to supply them with these tools, or that the tools be free (although, you then see the company that won’t let you use free, but unapproved, tools on their site). They refuse to purchase any tool with their own cash. Even though having this tool could improve their work, maybe even make them look better than their peers.

I find this mind-set fascinating. It’s especially so because the average salary for a carpenter and the average for a DBA are somewhat far apart (40k to 70k). You’d think that making nearly double the amount of money, a DBA wouldn’t hesitate to purchase a piece of software that would make them better, as an individual. By and large though, you’d be wrong.

Yeah, I know what you’re thinking. This is just a pitch from a tools vendor. Well, duh. Yeah, it is. However, I also have purchased my own software in the past and it has helped me be better at my job than my peers. As a wise man once said, “If you ain’t cheatin’, you ain’t fightin'”. As much as our community is all about sharing, and I believe in sharing, there is still a competition when it comes to getting and keeping jobs. You’d think that you’d go for any advantage you can get. I sure have. That includes buying a piece of software that helps me do my job better, out of my own pocket. While I do want you to buy tools because I’m selfish and want to keep earning a pay check, I also think you should be purchasing tools so that you can become better at your job. Embrace the healing power of “And.”

Oh, why would you buy anything but a straight claw hammer?

Jun 28 2016

CASE Statement in GROUP BY

Set-based operations mean you should put everything into a single statement, right?

Well, not really. People seem to think that having two queries is really bad, so when faced with logical gaps, they just cram them into the query they have. This is partly because SQL Server and T-SQL support letting you do this, and it’s partly because it looks like a logical extension of code reuse to arrive at a query structure that supports multiple logic chains. However, let’s explore what happens when you do this in one particular situation: a CASE statement in a GROUP BY clause.

You see this a lot because a given set of data may be needed in slightly different context by different groups within the company. Like many of my example queries, this could be better written. Like many of my example queries, it mirrors what I see in the wild (and for those following along at home, I’m using the WideWorldImporters database for tests now):

CREATE PROCEDURE dbo.InvoiceGrouping (@x INT)
AS
SELECT  SUM(il.UnitPrice),
        COUNT(i.ContactPersonID),
        COUNT(i.AccountsPersonID),
        COUNT(i.SalespersonPersonID)
FROM    Sales.Invoices AS i
JOIN    Sales.InvoiceLines AS il
        ON il.InvoiceID = i.InvoiceID
GROUP BY CASE WHEN @x = 7 THEN i.ContactPersonID
              WHEN @x = 15 THEN i.AccountsPersonID
              ELSE i.SalespersonPersonID
         END;
GO

Running this for any given value above, 7, 15 or other, you’ll get the same execution plan, regardless of the column used in the GROUP BY. However, Parameter Sniffing is still something of a factor. When you group this data by SalesPersonID, you only get 10 rows back. This will be shown as the estimated number of rows returned if some value other than 7 or 15 is used as a parameter. However, this is always the plan:

[Figure: execution plan for the InvoiceGrouping procedure]

We can eliminate the Parameter Sniffing from the equation if we want to by modifying the query thus:

CREATE PROCEDURE dbo.InvoiceGrouping_NoSniff (@x INT)
AS
DECLARE @x2 INT;
SET @x2 = @x;

SELECT  SUM(il.UnitPrice),
        COUNT(i.ContactPersonID),
        COUNT(i.AccountsPersonID),
        COUNT(i.SalespersonPersonID)
FROM    Sales.Invoices AS i
JOIN    Sales.InvoiceLines AS il
        ON il.InvoiceID = i.InvoiceID
GROUP BY CASE WHEN @x2 = 7 THEN i.ContactPersonID
              WHEN @x2 = 15 THEN i.AccountsPersonID
              ELSE i.SalespersonPersonID
         END;
GO

However, except for some deviation on the estimated rows (since it’s averaging the rows returned), the execution plan is the same.

What’s the big deal right? Well, let’s break down the code into three different procedures:

CREATE PROCEDURE dbo.InvoiceGrouping_Contact
AS
SELECT  SUM(il.UnitPrice),
        COUNT(i.ContactPersonID),
        COUNT(i.AccountsPersonID),
        COUNT(i.SalespersonPersonID)
FROM    Sales.Invoices AS i
JOIN    Sales.InvoiceLines AS il
        ON il.InvoiceID = i.InvoiceID
GROUP BY i.ContactPersonID;
GO

CREATE PROCEDURE dbo.InvoiceGrouping_Sales
AS
SELECT  SUM(il.UnitPrice),
        COUNT(i.ContactPersonID),
        COUNT(i.AccountsPersonID),
        COUNT(i.SalespersonPersonID)
FROM    Sales.Invoices AS i
JOIN    Sales.InvoiceLines AS il
        ON il.InvoiceID = i.InvoiceID
GROUP BY i.SalespersonPersonID;
GO

CREATE PROCEDURE dbo.InvoiceGrouping_Account
AS
SELECT  SUM(il.UnitPrice),
        COUNT(i.ContactPersonID),
        COUNT(i.AccountsPersonID),
        COUNT(i.SalespersonPersonID)
FROM    Sales.Invoices AS i
JOIN    Sales.InvoiceLines AS il
        ON il.InvoiceID = i.InvoiceID
GROUP BY i.AccountsPersonID;
GO

Interestingly enough, these three queries produce nearly identical execution plans. The one big difference is that the Compute Scalar operator used to generate a value for the Hash Match Aggregate is no longer in the plan:

[Figure: execution plan for the targeted grouping procedures]

It’s the same basic set of structures, scans against both tables, to arrive at the data. Cost estimates between the two plans are very different though, with the targeted queries having a much lower estimated cost.

Performance-wise, interestingly enough, the first query, only returning the 10 rows, averages 157ms, while the query grouping directly on the SalesPersonID averages about 190ms. Now, the reads tell a slightly different story, with 17,428 on the generic query and 5,721 on the specific query. So, maybe a server under load will see a significant performance increase. However, let’s deal with what we have in front of us and say that, at least for these tests, the catch-all GROUP BY query performs well.

Now let’s change the paradigm slightly. Let’s add an index:

CREATE INDEX TestingGroupBy ON Sales.Invoices (SalespersonPersonID);

Frankly, this isn’t a very useful index. However, after adding it, the execution plan for the InvoiceGrouping_Sales query changes. Instead of scanning the table, it’s now scanning the index. Despite recompiles and attempts to force it using hints, the original InvoiceGrouping query will not use this index. Duration of the InvoiceGrouping_Sales query drops to 140ms on average and the reads drop a little further to 5,021. Getting an 11% improvement in performance is a win.

This is a pretty simplified example; however, making the CASE statement more complex won’t improve performance or further assist the optimizer to make good choices. Instead of trying to cram multiple different logical groupings into a single query, a better approach would be to create the three new procedures that I did above, and make the original InvoiceGrouping procedure into a wrapping procedure that chooses which of the individual procedures to call. This way, if you do add indexes in support of each of the different possible groupings, you would realize a positive outcome in your performance.
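
A sketch of that wrapper, reusing the three procedures above (the 7 and 15 values mirror the original CASE logic), might look like this:

ALTER PROCEDURE dbo.InvoiceGrouping (@x INT)
AS
IF @x = 7
    EXEC dbo.InvoiceGrouping_Contact;
ELSE IF @x = 15
    EXEC dbo.InvoiceGrouping_Account;
ELSE
    EXEC dbo.InvoiceGrouping_Sales;
GO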


Want to talk more about execution plans and query tuning? In August, I’ll be doing an all-day pre-con at the SQLServer Geeks Annual Summit in Bangalore, India.

I’m also going to be in Oslo, Norway for a pre-con before SQL Saturday Oslo in September.