Category: SQL Server 2014

Aug 20 2014

The Red Gate Way…

As companies go, Red Gate is a little different. That’s readily apparent in our tools and the philosophy behind them: ingeniously simple. But we do a lot of other things too. There’s the Simple-Talk web site where we publish serious articles on all aspects of development and database administration across platforms and programming languages. There’s SQL Server Central, the single largest SQL Server community on the planet. There’s Ask SQL Server where you can get direct answers to your direct questions about SQL Server. If all that’s not enough, there are all the books, which we give away for free, on, again, all aspects of programming and database administration. But we like to do more, so we also bring you training, the Red Gate way, at the SQL in the City events.

We’ve got two more SQL in the City events coming up soon. First, we’re back in London again on Friday, October 24, 2014. This event is one of my favorites, every year. We’re bringing in MVPs like Steve Jones, Ike Ellis, Brian Randell and others, all to teach you about SQL Server, but we’re doing it the Red Gate way. So please, register for this event. I’ll see you there and we can share a frothy beverage (it’s Red Gate).

Next, I’m thrilled to say that we’re going to be in Seattle on Monday, November 3, 2014. That’s right, just before the PASS Summit. If you wanted a reason to get out to Seattle early, here it is. We’re bringing a lot of the same crew from the London event over to Seattle. You’ll be able to experience what the London people did and more. This is SQL Server training done right, that is the Red Gate Way. Let’s get together and talk and share a frothy beverage in the States. It’s a free event, but there’s limited room, so please register now.

These are unique and popular events. We pull out all the stops to make them fun, special, educational, useful, helpful, doggone it, good. Please, come out, talk to me, talk to the Red Gate team, help influence the tools that you use every day, and learn about SQL Server.

Aug 14 2014

A Full Day of Query Tuning

I’m excited to be able to say that I’ve been given the opportunity to put on a full day workshop at SQL Connections on Friday, September 19th, 2014. The title is “Query Performance Tuning in SQL Server 2014”, but I assure you we’re going to cover things that are applicable if you’re still working on SQL Server 2005. We’ll start the day covering the different mechanisms you have to capture query metrics. We’ll go over the dynamic management objects and extended events that are incredibly important in understanding which queries you need to tune. We’ll get an introduction to how the optimizer works and the importance that statistics, indexes and constraints play in helping the optimizer make the choices it makes. I promise, execution plans will be covered throughout the day, in great detail, because they are a fundamental part of how you’re going to understand the choices the optimizer made. Then, we’re going to go through common problems, how you can identify them, troubleshoot them, and solve them. We’ll get rid of a lot of myths and deal with some easily fixed issues. Throughout the day we’ll be covering both older versions of SQL Server as well as SQL Server 2014. Then, to finish out the day, we’ll go over some of the new opportunities that are unique to SQL Server 2014, their use and their shortcomings.
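
Just to give a taste of the DMO side of things (this isn’t the workshop material itself, just the kind of query we’ll build from), here’s a quick sketch that lists the top CPU consumers sitting in the plan cache:

-- Top 10 cached statements by average CPU time, a common starting point for
-- deciding which queries are worth tuning.
SELECT TOP 10
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                  ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                    END - qs.statement_start_offset) / 2) + 1) AS StatementText,
        qs.execution_count,
        qs.total_worker_time / qs.execution_count AS AvgWorkerTime,
        qs.total_elapsed_time / qs.execution_count AS AvgElapsedTime
FROM    sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY AvgWorkerTime DESC;

Extended events give you a much more precise picture, but that’s the two minute version.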

In short, I’m trying to take you from not knowing which queries you need to tune, to identifying those problematic queries, understanding what the problems are and how to solve them. You’re going to know where to go to get started reading execution plans. You’re going to walk away with a love and fascination for extended events. You’re going to get tools and methods that you can apply to your own code, your own applications, your own servers. And, this all takes place after an amazing week of learning at the IT/Dev Connections event in Vegas. Please click here now to register.

Jul 15 2014

Execution Plan Details

I wouldn’t say it’s common knowledge that you should look at execution plans when tuning queries, but it’s not exactly uncommon knowledge either. But, people tend to get focused on just looking at the graphical part of the plan and there’s just not enough information there. Let’s take a look at a query:

SELECT  pc.Name,
        ps.Name,
        v.Name,
        pr.ReviewerName,
        p.Name,
        v.ModifiedDate,
        p.Color
FROM    Production.Product AS p
        LEFT JOIN Production.ProductReview AS pr
        ON pr.ProductID = p.ProductID
        JOIN Production.ProductSubcategory AS ps
        ON p.ProductSubcategoryID = ps.ProductSubcategoryID
        JOIN Production.ProductCategory AS pc
        ON pc.ProductCategoryID = ps.ProductCategoryID
        JOIN Purchasing.ProductVendor AS pv
        JOIN Purchasing.Vendor AS v
        ON v.BusinessEntityID = pv.BusinessEntityID
        ON pv.ProductID = p.ProductID
WHERE   v.ModifiedDate = '2006-02-17 00:00:00.000'
AND p.Color LIKE 'Y%';

This generates an execution plan that looks like this:

[Figure: PlanDetails]

Neither v.ModifiedDate nor p.Color has an index. Yet, we only see a single scan in this plan, on the Purchasing.Vendor table. Why? Well, we can’t tell from the GUI directly, so you have to look to the tool tips or the properties. The tool tip in this case actually proves helpful, as do the properties:

[Figure: PlanDetailsProperties]

In short, the clustered index is used to seek out a number of rows and then a secondary predicate is placed on those few rows to further filter the results. You would never even guess at that using just the GUI alone. You have to look to the details of the execution plan to understand that. There are lots of other examples where you can only get the information from the properties because not everything is available in the tooltip. This can include things like Optimization Level, Reason For Early Termination, Scan Direction, and a bunch of other things. Just remember to drill down to the properties in order to better understand your execution plans.
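
By the way, if you’d rather script this than mouse through the Properties window, those details are sitting in the plan XML. Here’s a little sketch of my own (assuming the plan is still in cache) that pulls the Optimization Level and Reason For Early Termination out of it:

-- Pull a couple of properties-only details straight out of the cached plan XML.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT  st.text,
        qp.query_plan.value('(//StmtSimple/@StatementOptmLevel)[1]',
                            'VARCHAR(25)') AS OptimizationLevel,
        qp.query_plan.value('(//StmtSimple/@StatementOptmEarlyAbortReason)[1]',
                            'VARCHAR(50)') AS ReasonForEarlyTermination
FROM    sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE   st.text LIKE '%ProductReview%';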

Why do you need information like this? Are we going to tune this query? Maybe we could help it by adding an index to the Vendor table. But, at least for this query with this date range, it appears we don’t need an index on the Product table. But that may change depending on the number of rows returned from the Vendor table. A different data set can result in an entirely different execution plan with entirely different performance characteristics.
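
If we did decide to help the Vendor access along, a minimal sketch of the kind of index I mean might look like this (the name and the INCLUDE column are my own guesses, not something the plan is asking for):

-- Hypothetical index to support the ModifiedDate filter on Purchasing.Vendor.
CREATE NONCLUSTERED INDEX IX_Vendor_ModifiedDate
ON Purchasing.Vendor (ModifiedDate)
INCLUDE (Name);

Whether that’s worth the maintenance overhead is exactly the kind of judgment call the plan details help you make.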

Query tuning is tough. That’s why I’ve put together a one-day seminar to get you started on it. There’s still room in Albany, on July 25th. You can register here. I’ll also be teaching at SQL Connections in September in Las Vegas. Go here to register for this great event. In Belgium in October, I’ll be doing an all day session on just execution plans at SQL Server Days. Go here to register for this event. When I did a similar session at the PASS Summit two years ago, it sold out. I’ll be presenting at Summit this year in Seattle in November. Go here to register. But do it early or you may be put on a waiting list.

Jul 02 2014

Statistics and Natively Compiled Procedures

Statistics are one of the single most important driving factors for the behavior of the query optimizer. The cardinality estimates stored within the statistics drive costing and costing drives the decision making of the optimizer. So, how does this work with the new SQL Server 2014 natively compiled procedures? Differently.

In-memory tables do not maintain their statistics automatically. Further, you can’t run DBCC SHOW_STATISTICS to get information about those statistics, so you can’t tell whether they’re out of date or what the distribution of the data within them looks like. So, if I create some memory-optimized tables, skip loading any data into them, and then run this standard query:

SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    dbo.Address AS a
                JOIN dbo.StateProvince AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN dbo.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = 'London';

The estimates in the execution plan for the query show some pretty wild values:

[Figure: Estimates]

That’s an estimated number of rows of 65,536 and an actual of zero in a table scan because I created my table without an index. If I recreate it with an index, but still no data, the estimates change and we have an index seek operation now:

[Figure: EstimatesIndex]

That’s suggesting that the optimizer thinks there are 256 rows. But there’s no data here. So, let’s load some data into the table. Here are the new estimates from the index seek operator:

[Figure: estimateswdata]

I haven’t yet updated the statistics, so the optimizer still thinks there are zero rows in the table, or at least, it has no statistics. Well, not true:

SELECT  s.name,
        s.auto_created,
        s.user_created,
        s.filter_definition,
        sc.column_id,
        c.name AS ColumnName
FROM    sys.stats AS s
        JOIN sys.stats_columns AS sc
        ON sc.stats_id = s.stats_id
           AND sc.object_id = s.object_id
        JOIN sys.columns AS c
        ON c.column_id = sc.column_id
           AND c.object_id = s.object_id
WHERE   s.object_id = OBJECT_ID('dbo.Address');

If I run this, I’ll see statistics on the table, including system generated statistics:

[Figure: stats]

One point, while statistics don’t update automatically, they clearly can still be created automatically. But I can’t run DBCC SHOW_STATISTICS to see what’s in there. So, let’s see what estimates look like in the natively compiled procedure. I’ll take the same query code above and convert it to a proc. Then, when I capture the estimated plan from the procedure (no actual plans allowed), the index seek operator shows these estimates:

[Figure: estimatescompiled]

Now, we have an accurate estimate on the number of rows this query is likely to return. Interesting. So, let’s drop the procedure, update the statistics and then rerun the query and the procedure. First the query. The estimates don’t change. I’m still seeing estimated values as 256 while the actual is 434. So, let’s free the procedure cache and try again:

[Figure: EstimatesUpdated]

Ah, there we go. The plan itself came out the same way, but we clearly have more accurate estimates now. On to the procedure. I’ll recreate it and then get the estimated plan. Here are the estimate values from the same index seek operation:

[Figure: Estimatescompiledupdated]

Oops. Still estimated 0 rows.

What’s all this mean? I’m not sure. The documentation from Microsoft in this area is sketchy. During the most recent 24 Hours of PASS, I was able to ask Microsoft about the impact of statistics on natively compiled plans. They suggested that it was not necessarily going to be the same as we see in standard queries. These tests make that fairly evident. Also, it looks like the default values of estimated rows for in-memory tables are different. If I create standard tables, empty, and run the same query against them, the estimated number of rows is what I expect, 1. But in the case of in-memory tables it’s 256 with an index and 65,536 without one (or at least that’s what I’m seeing). However, the estimates for the natively compiled procedure never changed in this test case, always at 0. This is hardly shocking, but it seems that different rules apply for in-memory tables and their statistics, as well as for natively compiled procedures and their consumption of those statistics. And, since Microsoft has changed the default estimated number of rows for table variables from 1 to 100 in SQL Server 2014, it seems we have another instance where they’re defaulting to an even higher value, and another where the values seem to just disappear.

The behavior of statistics within in-memory tables is extremely interesting because you may see it change as your queries get more complex and your data changes. It makes a very strong case for making sure that you update your statistics on a regular basis on these tables.
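
If you want to script that maintenance, here’s a minimal sketch of my own that refreshes statistics on every memory-optimized table in the current database, using the same FULLSCAN and NORECOMPUTE options these tables expect:

-- Build and run UPDATE STATISTICS statements for every memory-optimized table.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT  @sql = @sql + N'UPDATE STATISTICS '
            + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
            + N' WITH FULLSCAN, NORECOMPUTE;' + NCHAR(13) + NCHAR(10)
FROM    sys.tables AS t
WHERE   t.is_memory_optimized = 1;

EXEC sys.sp_executesql @sql;

Just remember that the natively compiled procedures won’t see the new statistics until you drop and recreate them.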

I’m taking this show on the road. If you want to get an all day class on query tuning, I’ve got a lot of opportunities coming up. I believe that Albany, on July 25th, is not yet sold out. You can register here. I’m teaching an all day session at SQL Connections in September in Las Vegas. Go here to register for this great event. In Belgium in October, I’ll be doing an all day session on execution plans at SQL Server Days. Go here to register for this event. I’m excited, and more than a little humbled, to get the opportunity to present an all day pre-conference seminar at the PASS Summit in Seattle in November. Go here to register.

Jun 25 2014

The Utility of Execution Plans in Natively Compiled Procedures

I’m actually having problems identifying the utility of execution plans when working with natively compiled procedures. Or, put another way, why bother? I’ve posted a couple of times on natively compiled procedures and SQL Server execution plans. I’ve found the differences interesting and enlightening, but I’m seriously questioning why I should bother, at least currently. I’m sure there will be many changes to the behaviors of the natively compiled procedures and their relationship with execution plans. But right now, well, let’s look at an example. I have three simple tables stored in-memory. Here’s the definition of one:

CREATE TABLE dbo.Address
    (
     AddressID INT IDENTITY(1, 1)
                   NOT NULL
                   PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 50000),
     AddressLine1 NVARCHAR(60) NOT NULL,
     AddressLine2 NVARCHAR(60) NULL,
     City NVARCHAR(30) COLLATE Latin1_General_100_BIN2 NOT NULL,
     StateProvinceID INT NOT NULL,
     PostalCode NVARCHAR(15) NOT NULL,
     ModifiedDate DATETIME
        NOT NULL
        CONSTRAINT DF_Address_ModifiedDate DEFAULT (GETDATE())
    )
    WITH (MEMORY_OPTIMIZED = ON);
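
One assumption behind that script: the database already has a memory-optimized filegroup. If you’re building a test database from scratch, something like this has to come first (the database and file names here are just placeholders):

-- Hypothetical setup: add a memory-optimized filegroup and a container to the
-- test database before creating any tables WITH (MEMORY_OPTIMIZED = ON).
ALTER DATABASE InMemoryTest
ADD FILEGROUP InMemoryTest_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE InMemoryTest
ADD FILE (NAME = 'InMemoryTest_mod', FILENAME = 'C:\Data\InMemoryTest_mod')
TO FILEGROUP InMemoryTest_mod;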

I can then create the following code as a natively compiled procedure:

CREATE PROC [dbo].[AddressDetails] @City NVARCHAR(30)
    WITH NATIVE_COMPILATION,
         SCHEMABINDING,
         EXECUTE AS OWNER
AS
    BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    dbo.Address AS a
                JOIN dbo.StateProvince AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN dbo.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = @City;
    END
GO

When I call for an estimated plan (remember, no actual plans) I’ll get this:

[Figure: Scan]

If you click on it, you’ll note that there’s an index scan. But the costs are all zero. Everything is FREE! Or not. The execution time is 93ms. If I put an index on the City column, the execution plan changes to the one I showed previously, an index seek, and the execution time goes to 42ms. Clearly, the scans are costing something. Scans aren’t necessarily bad and seeks aren’t necessarily good, but it’s hard to spot issues in execution plans with no costing involved at all. Which makes me wonder, should we bother with execution plans for the natively compiled procs? I’m honestly unsure.
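
For reference, “putting an index on the City column” isn’t a simple ALTER in SQL Server 2014. Memory-optimized tables can’t be altered, so the index has to go into a drop and recreate of the table (and the data has to be reloaded). A sketch of what that looks like, with an index name of my own invention:

-- Drop any natively compiled procedures that are schema-bound to the table first,
-- then drop and recreate dbo.Address with a range index on City.
DROP TABLE dbo.Address;
GO
CREATE TABLE dbo.Address
    (
     AddressID INT IDENTITY(1, 1)
                   NOT NULL
                   PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 50000),
     AddressLine1 NVARCHAR(60) NOT NULL,
     AddressLine2 NVARCHAR(60) NULL,
     City NVARCHAR(30) COLLATE Latin1_General_100_BIN2 NOT NULL,
     StateProvinceID INT NOT NULL,
     PostalCode NVARCHAR(15) NOT NULL,
     ModifiedDate DATETIME
        NOT NULL
        CONSTRAINT DF_Address_ModifiedDate DEFAULT (GETDATE()),
     INDEX IX_Address_City NONCLUSTERED (City)
    )
    WITH (MEMORY_OPTIMIZED = ON);
GO

The BIN2 collation on City, by the way, is what makes an index on an NVARCHAR column legal on a memory-optimized table in 2014.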

For most query tuning, statistics matter a lot. I understand we still have room in Albany on July 25th. You can register here. I’m doing an all day session at SQL Connections in September in Las Vegas. Go here to register for this great event. In Belgium in October, I’ll be doing an all day session on execution plans at SQL Server Days. Go here to register for this event. I’d love to talk query tuning with you all day long.

Jun 17 2014

Natively Compiled Procedures and Bad Execution Plans

I’ve been exploring how natively compiled procedures are portrayed within execution plans. There have been two previous posts on the topic, the first discussing the differences in the first operator, the second discussing the differences everywhere else. Now, I’m really interested in generating bad execution plans. But, the interesting thing is, I wasn’t able to. Or, rather, I couldn’t see evidence of plans changing based on silly things I did to my queries and data. To start with, here’s a procedure:

CREATE PROC [dbo].[AddressDetails] @City NVARCHAR(30)
    WITH NATIVE_COMPILATION,
         SCHEMABINDING,
         EXECUTE AS OWNER
AS
    BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    dbo.Address AS a
                JOIN dbo.StateProvince AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN dbo.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = @City;
    END
GO

And this is a nearly identical procedure, but with some stupid stuff put in:

CREATE PROC [dbo].[BadAddressDetails] @City VARCHAR(30)
    WITH NATIVE_COMPILATION,
         SCHEMABINDING,
         EXECUTE AS OWNER
AS
    BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    dbo.Address AS a
                JOIN dbo.StateProvince AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN dbo.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = @City;
    END
GO

I’ve changed the primary filter parameter to a VARCHAR when the data is NVARCHAR. This difference is likely to lead to differences in an execution plan, although not necessarily. If I load my tables up and update my statistics, then create the procedures and run them both with the same parameter values, I should detect any differences, right? Here’s the resulting execution plan:

[Figure: ActualPlan]

It’s an identical plan for both queries. In fact, the only difference I can find is a CAST in the Index Seek operator for the BadAddressDetails procedure, as expected. But that didn’t lead to any other difference in the plan. However, execution is something else entirely. And this is where things get a little strange. There are two ways to execute a procedure:

EXEC dbo.AddressDetails @City = 'London';
EXEC dbo.AddressDetails 'London';

Interestingly enough, the first one is considered to be the slow way of passing a parameter. The second one is the preferred mechanism for natively compiled procedures. Now, if I execute these two versions of calling the procedure, I actually see different performance. The first call, the slow one, ran somewhere in the neighborhood of 342 µs. The other ran in about 255 µs. Granted, we’re only talking about ~100 µs, but we’re also talking about a 25% speed increase, and that’s HUGE! But that’s not the weird bit. The weird bit was that when I ran the good and bad queries together, the slow call on the bad query was consistently faster than the slow call on the good query. The fast call reversed that trend. And, speaking of which, the bad query, with the CAST, ran in about 356 µs, or roughly 25% slower.
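
In case you’re wondering where those timings came from, here’s a sketch of one way to capture them (it requires sysadmin rights and adds overhead, so treat it as a test-server trick): turn on execution statistics collection for natively compiled procedures and then read sys.dm_exec_procedure_stats:

-- Enable stats collection for natively compiled procedures (off by default).
EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 1;

EXEC dbo.AddressDetails @City = 'London';   -- named parameter, the "slow" call
EXEC dbo.AddressDetails 'London';           -- ordinal parameter, the "fast" call

SELECT  OBJECT_NAME(ps.object_id) AS ProcedureName,
        ps.execution_count,
        ps.total_worker_time,
        ps.total_elapsed_time,
        ps.last_elapsed_time
FROM    sys.dm_exec_procedure_stats AS ps
WHERE   ps.object_id = OBJECT_ID('dbo.AddressDetails');

-- Turn collection back off when you're done; it isn't free.
EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 0;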

The execution plan really didn’t show any indication that this would be slower, which made me do the next thing I did. I updated my Address table so that all the values were equal to ‘London.’ Then, because statistics are not maintained on in-memory tables automatically, I updated the statistics:

UPDATE STATISTICS dbo.Address WITH FULLSCAN, NORECOMPUTE;

With the statistics up to date, I dropped and recreated the procedure (there is no recompile with natively compiled procedures, something to keep in mind… maybe, more in a second). So now, the selectivity on the index was 1. The most likely outcome, an index scan. Guess what happened? Nothing. The execution plan was the same. I then went nuts, I converted all my tables so that a horrific mishmash of data would be brought back instead of clean data sets and I put data conversions in and… nothing. Index Seeks and Nested Loops joins. Weirdness.
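
For anyone recreating the test, the whole sequence boils down to something like this (using the dbo.Address table and dbo.AddressDetails procedure from the scripts above):

-- Skew the data so every row matches the parameter value.
UPDATE dbo.Address SET City = N'London';

-- Statistics won't refresh themselves on an in-memory table.
UPDATE STATISTICS dbo.Address WITH FULLSCAN, NORECOMPUTE;

-- There's no recompile for a natively compiled procedure, so drop it and then
-- re-run the CREATE PROC dbo.AddressDetails script from above so it compiles
-- against the new statistics.
DROP PROCEDURE dbo.AddressDetails;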

I’m actually unsure why this is happening. I’m going to do more experimenting with it to try to figure out what’s up. But that lack of recompile maybe doesn’t matter if, regardless of data distribution, you’re going to get the same plan anyway. I’m really not positive that looking at the execution plan for natively compiled procedures does much of anything right now. However, these tests were a little bit subtle. I’ll load up more data, get a more complex query, and then really mess around with the code to see what happens. I’ll post more of my experiments soon.

I promise not to experiment on you though when I’m teaching my all day query tuning seminars. There are a bunch coming up, so if you’re interested in learning more, here’s where to go.  Just a couple of days left before Louisville and I’m not sure if there’s room or not, but it’s happening on the 20th of June. Go here to register. Albany will be on July 25th, but we’re almost full there as well. You can register here. SQL Connections is a pretty cool event that takes place in September in Las Vegas. In addition to regular sessions I’ll be presenting an all-day session on query tuning on the Friday of the event. Go here to register for this great event. In Belgium in October, I’ll be doing an all day session on execution plans at SQL Server Days. Go here to register for this event. Let’s get together and talk.

Jun 10 2014

Differences In Native Compiled Procedures Execution Plans

All the wonderful functionality that in-memory tables and natively compiled procedures provide in SQL Server 2014 is pretty cool. But changes to the core of the engine result in changes to things that we may have developed a level of comfort with. In my post last week I pointed out that you can’t see an actual execution plan for natively compiled procedures. There are more changes than just the type of execution plan available. There are also changes to the information available within the plans themselves. For example, I have a couple of stored procedures, one running in AdventureWorks2012 and one in an in-memory enabled database with a few copies of AdventureWorks tables:

--natively compiled
CREATE PROC dbo.AddressDetails @City NVARCHAR(30)
    WITH NATIVE_COMPILATION,
         SCHEMABINDING,
         EXECUTE AS OWNER
AS
    BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    dbo.Address AS a
                JOIN dbo.StateProvince AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN dbo.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = @City;
    END
GO

--standard
CREATE PROC dbo.AddressDetails @City NVARCHAR(30)
AS
        SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    Person.Address  AS a
                JOIN Person.StateProvince  AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN Person.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = @City;
GO

The execution plans are obviously a little bit different, one going against in-memory tables and indexes and the other going against standard ones. However, that’s not the point here. This is the point. One of the first things I always check when looking at a new execution plan is the first operator, the SELECT/INSERT/UPDATE/DELETE operator. Here it is from the estimated plan of the query against the standard tables:

[Figure: StandardSelectProperties]

All the juicy goodness of the details is on display including the Optimization Level and Reason for Early Termination, row estimates, etc. It’s a great overview of how the plan was put together by the optimizer, some of the choices made, useful information such as the parameters used, etc. It’s great. Here’s the same thing for the natively compiled procedure:

[Figure: NativeSelectProperties]

Uhm… where are all my wonderful details? I mean, honestly, everything is gone. All of it. Further, what’s left, I’m pretty sure, is nothing but a lie. Zero cost? No, but obviously not from the standard optimizer estimates either, so, effectively zero. I’m pretty sure Physical Operation is just there as an oversight. In short, this is a different game. Yes, you will still need to evaluate execution plans for natively compiled procedures, but we’re talking a whole different approach now. I mean, great googly moogly, there are no parameter compile time values. Are those just ignored now? Are the days of bad parameter sniffing behind us, or are the days of good parameter sniffing gone forever? And it’s not just the SELECT operator. Here are the properties for a Nested Loops operator. First the standard set:

[Figure: StandardNestedLoops]

And, the natively compiled procedure:

[Figure: NativeNestedLoops]

Now, except for the fact that everything is FREE, the differences here are easier to explain. Execution Mode is applicable to columnstore indexes, and none of those are available yet in in-memory storage, so I’m not shocked to see that property removed. Same for the others. But this complete lack of costing is going to make using execution plans, always a problematic proposition with only estimated values available for so many things, even harder. It might even make it so that all you really need to do is look at the graphical plan. Drilling down on the properties, until meaningful data starts to appear there, might be a waste of time for natively compiled procedures.

I’ll keep working on these. Next up, can you get a “bad” execution plan with a natively compiled procedure? We’ll find out.

Just a reminder that I’m taking this show on the road. I’m doing a number of all day seminars on query tuning at various events in multiple countries. Louisville has almost filled the room we have available on the 20th of June. Go here to register. But don’t wait. I’m also going to be in Albany on July 25th, but we’re almost full there as well. You can register here. If you were thinking about attending SQL Connections in September in Las Vegas, in addition to the regular sessions I’ll be doing a day on query tuning there. Go here to register for this great event. In Belgium in October, I’ll be doing an all day session on execution plans at SQL Server Days. Go here to register for this event.

Jun 05 2014

Natively Compiled Procedures and Execution Plans

The combination of in-memory tables and natively compiled procedures in SQL Server 2014 makes for some seriously screaming fast performance. Add in all the cool functionality around optimistic locking, hash indexes and all the rest, and we’re talking about a fundamental shift in behavior. But… Ah, you knew that was coming. But, you can still write bad T-SQL, or your statistics can get out of date, or you can choose the wrong index, or any of the other standard problems can come up and negatively impact all those lovely performance enhancements. Then what? Well, same as before, take a look at the execution plan to understand how the optimizer has resolved your queries. But… Yeah, another one. But, things are a little different with natively compiled procedures and execution plans. I have a very simple little example in-memory database with just a few tables put up into memory and a straightforward procedure that I’ve natively compiled:

CREATE PROC dbo.AddressDetails @City NVARCHAR(30)
    WITH NATIVE_COMPILATION,
         SCHEMABINDING,
         EXECUTE AS OWNER
AS
    BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    dbo.Address AS a
                JOIN dbo.StateProvince AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN dbo.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = @City;
    END
GO

The fun thing is, even with these really small examples, the performance differences are staggering when compared to standard tables or just in-memory tables alone. Anyway, this is what the estimated plan for this procedure looks like:

[Figure: ActualPlan]

Looks like a pretty standard execution plan right? Here’s the actual plan:

 

 

No, don’t bother refreshing your browser, that’s just a blank couple of lines because there is no actual plan. You’re not dealing with a standard query, remember. There are only a couple of reasons to get an actual plan. First, if you’re experiencing recompiles, you might want to see the plan that was ultimately executed. An actual plan will reflect this, as will a plan pulled from cache. Second, you want to see some of the run-time metrics: actual rows, actual executions, runtime parameter values. Well, the first is not an issue, since you’re not going to see these things recompile. It’s a DLL. The second could be an issue. I’d like to see actual versus estimated to understand how the optimizer made its choices. Regardless, the actual plan won’t generate in SSMS when you execute the natively compiled procedure.
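
You can still get the estimated plan without going through the SSMS button, though. Here’s a quick sketch using SHOWPLAN_XML, which has to sit alone in its own batch:

SET SHOWPLAN_XML ON;
GO
-- Returns the estimated plan as XML instead of executing the procedure.
EXEC dbo.AddressDetails @City = N'London';
GO
SET SHOWPLAN_XML OFF;
GO

That at least gives you something you can save off and compare as the code changes.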

There are some more differences between the plans for natively compiled procedures and standard procedures. I’ll go over a few more in another blog post.

Hey, if you do want to talk query tuning, I’m taking my one day seminar on the road to a bunch of different events. There’s still time to get to the event in Louisville on the 20th of June. That’s right before the SQL Saturday there. Go here to register. I’m also going to be putting this on the day before SQL Saturday Albany. You can register here. I’m very honored to have been selected to speak at SQL Connections in September in Las Vegas. This includes an all day seminar on query tuning. Go here to register for this great event. I’m also very excited to be able to say that I’m going to be doing a different seminar in Belgium for SQL Server Days. I’ll be presenting an all day seminar on execution plans, including lots of details on SQL Server 2014. Go here to register for this event.

That’s four opportunities to get together and spend an entire day talking about query tuning, execution plans, statistics, the optimizer, extended events, oh, all sorts of things.

May 13 2014

Add an Instance to SQL Server Azure Virtual Machine

How do you add an instance to your local SQL Server installation? You run the executable that you probably downloaded from MSDN or maybe from a CD. Works the same on an Azure VM, right? Sure… but wait. Do I have to go and download the software to my VM instance? Let’s assume that you’re running one of the VMs from the Gallery; then the answer is “No.” Just navigate to C:\SQLServer_12.0_Full. There you’ll find the full installation setup for SQL Server. And you’re off and running… until you realize that you don’t have the Product Key for this thing. What happens when you get to this screen:

[Figure: CDKey]

You can look around all you want and you won’t see a product key anywhere. At least nowhere that I could find. So what do you do? The same question was asked and answered over on this forum at SQL Server Central. The trick is to get the product key from SQL Server itself. I tried several different methods, the ones you’ll find if you search for how to get the product key from an existing copy of SQL Server. But finally, as was posted on the forum, a method that worked was found. I tested it out and I was able to add an instance to a VM from the Gallery.

Which brings up the next question. Did I just violate some type of licensing with Microsoft? Lordy, I hope not. But I did some research. This definition of the support policy at Microsoft says that anything not explicitly denied in that documentation, and that is normally supported, is still supported. There’s nothing in there about multiple instances. There’s nothing in the basic Azure Licensing FAQ. There’s nothing against this in the Pricing details either. And since the standard iron version of SQL Server allows you to run as many instances on a given server as you want, from what I can tell, this still applies here.

Personally, I don’t think I’d want to run multiple instances on a single Azure VM. I wouldn’t really want to run multiple instances on a VM or, in some cases, even on iron. Multiple instances frequently have difficulty playing nice. I can’t see that getting any better inside Azure. However, there’s nothing to keep you from doing it except tracking down that Product Key. Get that, and you’re golden.

Apr 10 2014

I’m a Traveling Man

We are coming into quite a busy time for my speaking schedule. I’m hitting the road. It does one thing for me that I truly love: I get to talk to people. So, if you have questions, want to chat, or need to call me a pompous know-it-all to my face, I’ve got some opportunities for you.

Next week, April 13-16, is SQL Intersection. You can register by clicking here. The following week, I’ve got two events. First, on Friday April 25th, Red Gate Software is hosting a free half day SQL in the City Seminar in the Chicago area. We’ll be talking database deployment all day. Go here to register, but don’t wait, seats absolutely are limited. And, since this is a Red Gate event, at the end of the day, I’ll buy you a beverage or two while we exchange war stories. The next day, Saturday April 26th, is SQL Saturday Chicago, where I’ll be presenting a session. Check out the lineup and get yourself registered. That’s it for April.

May gets really fun. Saturday May 3 is SQL Saturday Atlanta. This is one of those “I was there” events for the Atlanta area. I’ll be there. Saturday May 17 is SQL Saturday Detroit. This one, at the moment, looks pretty intimate, but that means you get to hang out with Jeff Moden, Ginger Ford, Allen White, Tim Ford and ask questions until you run out of questions. I wouldn’t miss it if I lived in that area. Heck, I don’t live in that area and I’ll be there too. Then I get to go on my Carolina Cruise. I’m visiting three user groups in three days in the Carolinas. First up is Raleigh at the Triangle SQL Server User Group on the 20th. Then I get to Charlotte (and that was a great city for hosting the PASS Summit) on the 21st. Finally I’m off to Columbia and the Midland PASS Chapter on the 22nd. That’s going to be a blast. And we’re not done with May. On the 27th and 28th I’m going to hop the pond to speak at Techorama in Belgium. I’m terribly excited about this event. Maybe it’s just because I like Belgian beer, but it really does look pretty cool. Go here to get registered. And I love the count-down clock on the web page. That’s exactly how I feel.

In June I come back over to my side of the pond. There are some events we’re still setting up. But the one I know I’m going to is SQL Saturday Louisville on the 21st of June. The day before, on the 20th, I have an all day seminar on query tuning. Click here to register. We should have another SQL in the City Seminar set up for June as well as a couple more SQL Saturday events. I’ll post once I learn more.

July is still pretty open (please, please, please, OH, PLEASE, I want to go to SQL Bits), but I do have another all day seminar on query tuning set up for Albany. You can go here to register. That’s the day before SQL Saturday Albany. It’s going to be their first event, so let’s help make it a great one.

As the schedule for June and July solidifies I’ll publish another listing. Let’s get together and talk.