Category: Azure

Jul 29 2015

Hey Kids! Let’s Put on a Show at the Old Barn

Alternate Title: I’m traveling a bunch. Let’s get together and talk.

A bunch of trips and presentations are coming up, so I thought I'd share. First, I'll be at SQL Saturday Omaha for my first time ever presenting in Nebraska. I'm excited to add this state to my list (which is closing in on 40 now). If you're not doing anything August 15th, let's have a chat. Next, fingers crossed, I'll get selected to fly back to my home state, Oklahoma, for SQL Saturday OKC. These guys put on a great event, and hey, it's Oklahoma, so how can it be bad? I hope they announce soon; I need to schedule my flights. This one is on August 29th.

September also has several events. First, I'll be at SQL Saturday Las Vegas on the 12th of September. It's my first time at this event, so I'm looking forward to it. That's followed immediately by a pre-con and a couple of sessions at ITDev Connections. I love inter-disciplinary conferences because they give you a chance to branch out and broaden your knowledge across the stack. That's extremely useful if you're getting into DevOps (which you should be). My pre-con is on query tuning and execution plans, using one to help the other. This is a great event with a lot of excellent speakers. Go here to register. But I'm not done. Down in San Diego, at their SQL Saturday on the 19th, I'll also be doing a pre-con on query performance tuning, on the 18th. I'll see if I can't squeeze some 2016 stuff in there too. Click here now to register for the pre-con.

But we're not done. In October, I'm going to hop the pond for one of my absolute favorite events, Red Gate's very own SQL in the City: London. It's a great event with a great collection of speakers. I won't lie, we're going to focus on Database Lifecycle Management (DLM), but there'll be sessions on query tuning and all sorts of other stuff as well. It's focused on the Red Gate tools and, let's face it, it's a Red Gate style event, so it's fun and entertaining. If you really do want to drill down on DLM, there are also the all-day seminars that Red Gate is hosting that week. We've put a lot of work into these, and I think they're some pretty amazing hands-on classes that will get you started automating your own database deployments. Also, while I'm over there, I think I may find my way to a SQL Relay event, so keep your eyes peeled.

Later in October, well, it's the PASS Summit. 'Nough said. I'm doing a session focused on the beginner that's all about statistics. They're so important to everything you do with queries that it's a good idea to spend some time understanding them before you get into all the craziness of query tuning. If you're attending Summit, please stop by my session. Click here to register for the PASS Summit. Fingers crossed, I might also be at SQL Saturday Portland, if I get accepted.

One more that I don't want to leave off; in fact, I think it deserves its own paragraph. Right before the PASS Summit, we're bringing SQL in the City back to Seattle. That's right, another shot at seeing the great stuff we're bringing to London, plus a few different things (we mix up the speakers and sessions to help keep it all fresh).

Jul 20 2015

SQL Server Management Studio – Footloose and Fancy Free

That’s right. There’s been a divorce. SQL Server Management Studio (SSMS) has been divorced from the server product. In fact, Microsoft is inviting you to the new SSMS coming out party.

I’m pretty excited about this. While I’m very comfortable in SSMS, to a large degree, it’s like that old pair of jeans that you’ve worn for the last 10 years. They’re comfortable too. Well, maybe a little tight when you pull them on out of the wash. One of the knees is gone. The legs are frayed so much it almost looks intentional. You just noticed a hole in the bottom. The zipper is acting up… Yeah, OK. These jeans have had it. So has SSMS.

The plan from Microsoft is to upgrade SSMS independently of the boxed product. In fact, since one of the goals is to coordinate functionality within SSMS with releases of Azure SQL Database, I think it's pretty safe to say that we'll be seeing very regular and frequent updates. That's a good thing for SSMS, and it's a good thing for Azure SQL Database, but this level of frequent updates might not be a good thing for individual data pros if you're not used to actively pursuing new knowledge and new tooling all the time. Or, it can act as an incentive to keep your knowledge up to date, because you're more employable if you're maintaining your skill set.

Let this act as a prod to go out and learn some new functionality. Oh, and buy a new pair of jeans.


May 26 2015

Database Engine Tuning Advisor

I would love to see the Database Engine Tuning Advisor (DTA) pulled from the product. Completely. Heck, I feel bad that I included a chapter on it in my query tuning book (all updated for SQL Server 2014 by the way). Let me tell you why we need to pull this tool.

First, I understand its purpose. It’s supposed to be a fast and easy way to get some performance tuning done for people who just don’t have the time or knowledge to go through the full process of gathering metrics, evaluating poor performers, understanding root causes and applying indexes to fix those causes. I also readily acknowledge that it actually is an amazing piece of software. If you don’t agree with that, go read this white paper. With those acknowledgements in place, I still think there are two distinct problems with the approach and a serious fundamental issue with the premise. Let’s talk about the problems of the approach first.

Many, probably even most, of your query performance problems are in your T-SQL. Some are in your table structure. Some are in your indexes and statistics, but not a majority. The DTA can suggest indexes, statistics, materialized views, and partitioning (partitioning for performance, I might add, not data management, which opens up a completely different can of worms that we can't address here). In short, it can't address your number one problem: your code. Since it can't address the number one problem, all it can do is work around the edges. You'll run this thing and think you've addressed your issues, but your issues are still there, and now you're just as stuck as before you found the DTA and ran it. It just doesn't fix the core issue, your code.

The second problem I see is that it doesn't have enough intelligence built into its functionality. I say that with some reluctance, because it really is a pretty amazing piece of functional code. But it suffers from a couple of shortcomings there. It's completely dependent on the load provided to it. If that load is off, its recommendations are off, because it simply can't have the intelligence to recognize that a poorly representative load has been provided. This lack of intelligence is supposed to be offset by the person running the DTA, who is expected to gather the right information and to interpret and test the resulting recommendations. Which brings us to the fundamental issue with the premise.

This is supposed to be run by people with no internals knowledge. Right? BUT! These people are also supposed to judge whether or not the recommendations should be accepted. Further, they should test all the recommendations prior to applying them to their production server. They also must gather a well-structured, meaningful, representative load from their production system in order to supply the DTA with good information. On top of that, the very sophisticated set of tests around the DTA makes a fundamental assumption that could be radically off: that the person designing the database has done a good and thorough job of correctly normalizing the structures. Do you agree with all these assumptions on the part of the DTA? Am I alone in thinking that we have a problem here? If people lack any understanding of the internals, they won't be able to judge those recommendations. If people don't have the ability to gather and interpret performance metrics, they won't be able to test the DTA's suggestions or provide it with the necessary load. In short, the DTA can't be relied on to solve the problem it's supposed to solve because of the root cause of that problem: people's lack of knowledge and understanding.

I won’t even get into finding indexes with names like this:

[_dta_index_TD_13_15_1334295813__K44_K1_K8_K57_K17_K60_K16_2_3_4_5_6_7_9_10_11_12_13_14_15_18_19_20_21_22_23_24_25_26_27_28_29_]
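As an aside, if you want to see how much DTA debris has piled up in a database, a quick query like the following will find it. This is just a minimal sketch that leans on the '_dta' prefix the tool uses by default; if someone renamed the objects, it finds nothing.

-- Find indexes and statistics left behind by the DTA,
-- relying on the tool's default '_dta' naming prefix
SELECT  OBJECT_NAME(i.object_id) AS TableName,
        i.name AS ObjectName,
        'Index' AS ObjectType
FROM    sys.indexes AS i
WHERE   i.name LIKE '[_]dta%'
UNION ALL
SELECT  OBJECT_NAME(s.object_id),
        s.name,
        'Statistics'
FROM    sys.stats AS s
WHERE   s.name LIKE '[_]dta%';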

Instead of evaluating the suggestions made by the DTA and applying just those that make sense and will have a positive impact, people assume that every single suggestion from the tool is Gospel. They apply them all, without thinking, without knowledge, without appreciation of the possibility, sometimes even the likelihood, of serious negative impact.

I recognize that many people are stuck. They have a SQL Server instance that's causing them pain and they don't have the knowledge necessary to fix it. Further, I know a few of you have used this tool successfully in some situations. I agree that there ought to be some way to mechanically and automatically tune the server. However, the DTA is not that tool, despite its clear and obvious sophistication. Let's get rid of it.

Want to learn how to tune queries? I’m putting on an all day seminar at Connections in September. Click here right now to register. We won’t use the DTA.

Apr 16 2015

Azure SQL Database v12 and SQL Magazine

I spend many of my evenings researching and writing. Sometimes it's writing new books. Sometimes it's fixing and rewriting old books. Occasionally it's a blog post like this one. Lately, it's been a series of articles for SQL Magazine all about the new functionality available in Azure SQL Database v12, which was released in February for most data centers in Azure. It's a whole new ball game. Check out my introductory article for v12 and the one on DMVs in Azure. I have more coming up on CLR in Azure, getting started, PowerShell, DBCC, T-SQL enhancements, the Premium tier, and more. I'll also get into "old" functionality like backup and restore, and I'll keep exploring new functionality as it becomes widely available; Azure is nothing if not dynamic.

I know a lot of you have been putting off exploring Azure, especially Azure SQL Database until it was “ready.” Guess what? It’s that time. Feel free to lean on me here, and over at SQL Mag, for questions, suggestions, thoughts, concerns, anything you have about Azure.

Apr 13 2015

Azure SQL Database Firewall Settings

The new portal for managing Azure is pretty. I’m not sure I’m in love with it, but it’s pretty.

However, one thing that I have to access regularly is the firewall settings for my Azure SQL Database. I do demos from all over the place. I never know what my IP address is going to be. Rather than expose everything, I just set up whatever IP address I’m on and then remove it later. The old portal made this easy. The new one… not so much.

So, let’s get this down real quick. Assuming you connect to the new portal and go straight to your database, you’ll see this image showing you the database and the server it’s on:

DatabaseAndServer

You won’t see anything else that suggests FIREWALL. But, you can click on the server. When you do, you’ll see another panel open up to the right of the one you’re on. It’ll have an image like this:

Server

Still nothing that screams FIREWALL, but if you click on the little gear icon that says SETTINGS you’ll get yet another panel opening to the right that resembles this image:

ServerSettings

There she blows, Firewall. Click on that and you’ll see the standard screen for editing your IP address access into the firewall:

ServerSettingsFirewall

Just remember that after adding a new IP address to your firewall you must hit the save icon at the top of the screen, or you still won’t be able to access your database.
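If you'd rather skip hunting through the portal at all, you can also manage the rules in T-SQL against the master database on the server. Here's a minimal sketch (the rule name and IP address are made up for the example, and the obvious catch is that you need a connection from an already-allowed address to run it):

-- Run against the master database on the Azure SQL Database server.
-- Creates the rule if the name is new, updates it if it already exists.
EXECUTE sp_set_firewall_rule @name = N'DemoLocation',
    @start_ip_address = '131.107.0.1',
    @end_ip_address = '131.107.0.1';

-- Clean up when the demo is done.
EXECUTE sp_delete_firewall_rule @name = N'DemoLocation';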

Apr 07 2015

Error: Unknown Property ismemoryoptimized

If you're starting the process of moving your Azure SQL Database databases to v12, you need to do one thing: make sure you have SQL Server 2014 CU5 or better installed on the machine running Management Studio (current list of updates). Otherwise, like me, you'll be looking at the above error.

This is just a quick blog post to help out. I saw this error, did a search, and didn't hit a single entry telling me what to do. I started the install of CU6 (I needed to catch up on cumulative updates anyway). While that was happening, I went to Twitter and posted to #sqlhelp to see if anyone else had hit this. I got a response from Amit Banarjee pointing me to this MSDN blog on the topic, reinforcing the decision I had already made. I just wish they had posted the error along with everything else in the blog post. It would have made things easier.

Apr 06 2015

Constraints and SELECT Statements

I've posted previously about how a foreign key constraint can change how a SELECT query behaves. Logically, that just makes sense. But other types of constraints don't affect execution plans, do they?

Yes.

Let’s take this constraint as an example:

ALTER TABLE Sales.SalesOrderDetail WITH CHECK 
ADD  CONSTRAINT CK_SalesOrderDetail_UnitPrice 
CHECK  ((UnitPrice>=(0.00)))

That will ensure that no values less than zero can slip in there. We can even validate it:

INSERT Sales.SalesOrderDetail
        (SalesOrderID,
         CarrierTrackingNumber,
         OrderQty,
         ProductID,
         SpecialOfferID,
         UnitPrice,
         UnitPriceDiscount,
         rowguid,
         ModifiedDate
        )
VALUES  (60176, -- SalesOrderID - int
         N'XYZ123', -- CarrierTrackingNumber - nvarchar(25)
         1, -- OrderQty - smallint
         873, -- ProductID - int
         1, -- SpecialOfferID - int
         -22, -- UnitPrice - money
         0.0, -- UnitPriceDiscount - money
         NEWID(), -- rowguid - uniqueidentifier
         GETDATE()  -- ModifiedDate - datetime
        );

Will give me an error:

Msg 547, Level 16, State 0, Line 470
The INSERT statement conflicted with the CHECK constraint "CK_SalesOrderDetail_UnitPrice". The conflict occurred in database "AdventureWorks2014", table "Sales.SalesOrderDetail", column 'UnitPrice'.

Let’s look at a SELECT query now. If we run this:

SELECT  soh.OrderDate,
        soh.ShipDate,
        sod.OrderQty,
        sod.UnitPrice,
        p.Name AS ProductName
FROM    Sales.SalesOrderHeader AS soh
JOIN    Sales.SalesOrderDetail AS sod
        ON sod.SalesOrderID = soh.SalesOrderID
JOIN    Production.Product AS p
        ON p.ProductID = sod.ProductID
WHERE   p.Name = 'Water Bottle - 30 oz.';

The resulting execution plan looks like this:

Constraint

But, if I modify the query to look like this, adding an additional AND filter on the constrained UnitPrice column:

SELECT  soh.OrderDate,
        soh.ShipDate,
        sod.OrderQty,
        sod.UnitPrice,
        p.Name AS ProductName
FROM    Sales.SalesOrderHeader AS soh
JOIN    Sales.SalesOrderDetail AS sod
        ON sod.SalesOrderID = soh.SalesOrderID
JOIN    Production.Product AS p
        ON p.ProductID = sod.ProductID
WHERE   p.Name = 'Water Bottle - 30 oz.'
        AND sod.UnitPrice > $0.0;

You know what happens to the execution plan? Nothing. It stays exactly the same. The optimizer knows that, in order to satisfy the query, it can safely ignore the additional predicate in the WHERE clause. In fact, you can look at the SELECT operator properties for the two different plans and note that while the Query Hash value changes, the Plan Hash value stays the same. The plans are identical.
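If you'd like to verify that for yourself rather than trusting the operator properties, the plan cache DMVs expose both hashes. A minimal sketch (the LIKE filter is just a lazy way to pick out these two queries):

-- Compare query_hash and query_plan_hash for the cached queries
SELECT  qs.query_hash,
        qs.query_plan_hash,
        st.text
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE   st.text LIKE '%Water Bottle%';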

With that knowledge, I’m going to modify the query to look like this, reversing the UnitPrice reference to look for data that violates the constraint:

SELECT  soh.OrderDate,
        soh.ShipDate,
        sod.OrderQty,
        sod.UnitPrice,
        p.Name AS ProductName
FROM    Sales.SalesOrderHeader AS soh
JOIN    Sales.SalesOrderDetail AS sod
        ON sod.SalesOrderID = soh.SalesOrderID
JOIN    Production.Product AS p
        ON p.ProductID = sod.ProductID
WHERE   p.Name = 'Water Bottle - 30 oz.'
        AND sod.UnitPrice < $0.0;

And now we have a new execution plan:

Constraint_scan


The optimizer recognized that there is no way any data can be returned with the WHERE clause above, because there is an enforced constraint (note the use of the WITH CHECK clause on the constraint). This completely changes the execution plan in every possible way. Now, instead of attempting to access the data, a Constant Scan operator is put in as a placeholder for an empty result set.
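One caveat worth checking on your own systems: the optimizer will only take this shortcut when the constraint is trusted. Enable a constraint WITH NOCHECK and it gets marked as untrusted, and this behavior goes away. A quick check, as a sketch:

-- is_not_trusted = 0 means the optimizer can rely on the constraint
SELECT  cc.name,
        cc.is_not_trusted
FROM    sys.check_constraints AS cc
WHERE   cc.name = 'CK_SalesOrderDetail_UnitPrice';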

To sum up: yes, constraints absolutely affect the choices made by the optimizer when those constraints would have an effect on the plan, even for a SELECT query. I would also argue that this means the use of enforced constraints can be a performance enhancer, since the optimizer can make intelligent choices about how a given query is dealt with.

UPDATE: Fixed a typo that said the constraint prevented data that was equal to or less than zero. It’s only for data less than zero.

May 13 2014

Add an Instance to SQL Server Azure Virtual Machine

How do you add an instance to your local SQL Server installation? You run the executable that you probably downloaded from MSDN, or maybe from a CD. Works the same on an Azure VM, right? Sure… but wait. Do I have to go and download the software to my VM instance? Assuming you're running one of the VMs from the Gallery, the answer is "No." Just navigate to C:\SQLServer_12.0_Full. There you'll find the full installation setup for SQL Server, and you're off and running… until you realize that you don't have the Product Key for this thing. What happens when you get to this screen:

CDKey

You can look around all you want and you won't see a product key anywhere. At least nowhere that I could find. So what do you do? The same question was asked and answered over on this forum at SQL Server Central. The trick is to get the product key from SQL Server itself. I tried several different methods, the ones you'll find if you search for how to get the product key from an existing copy of SQL Server, but finally, the method posted on the forum worked. I tested it out and was able to add an instance to a VM from the Gallery.

Which brings up the next question: did I just violate some type of licensing with Microsoft? Lordy, I hope not. But I did some research. This definition of the support policy at Microsoft says that anything that is normally supported and not explicitly denied in that documentation is still supported, and there's nothing in there about multiple instances. There's nothing in the basic Azure Licensing FAQ. There's nothing against this in the Pricing details either. And since the standard iron version of SQL Server allows you to run as many instances on a given server as you want, from what I can tell, this still applies here.

Personally, I don’t think I’d want to run multiple instances on a single Azure VM. I wouldn’t really want to run multiple instances on a VM or, in some cases, even on iron. Multiple instances frequently have difficulty playing nice. I can’t see that getting any better inside Azure. However, there’s nothing to keep you from doing it except tracking down that Product Key. Get that, and you’re golden.

Apr 23 2014

Azure Automation

I introduced Azure Automation in a previous post. I’ve spent some more time exploring it.

There's a set of documentation available, as I noted before. Unfortunately, after reading through the full set of documentation, I have some criticisms to offer. The layout of the documentation goes through "Common runbook tasks," more or less laying things out as I did (inadvertently, I assure you) in my previous blog post. The problem with that, as I found in that post, is that while the administration of the runbooks seems fairly straightforward from the screens, you can't do a darned thing with any of it until you have a runbook. Further, you can't have a runbook until that thing has some code in it. And the documentation doesn't cover the code. Instead, we just get a page with a list of samples, but no links to that code, nor an indication of where it might be. The scripts are located here. But man, that ought to be in the documentation. There's also no clearly documented method for how to start development. It's not strictly necessary, since the GUI leads you inevitably to the Draft screen we saw in my other post, but documentation is generally supposed to tell you what to do, where to look, and so on.

There is another set of documentation just on authoring runbooks. Lots and lots more meat there. I’ll go through it and follow up further.

Enough criticism, let’s play with some code.

I'm going to start with the "Hello World" code set. It's supposed to be an introduction to how everything works. You can't open it from the Azure Portal; instead, you have to download it to your machine and then either upload it into a new runbook or copy and paste it into the Draft editor window. Presumably this is so you can do the coding locally using the PowerShell ISE or other tools. The documentation for the script is clear. Its description:

If you are brand new to Automation in Azure, you can use this runbook to explore testing and publishing capabilities.


Well, let's just say that description is a little grandiose for what is, literally, a "Hello $Name" example. But it'll get your feet wet. I took the script and pasted it into my "RunningScare" runbook. From there, I have the capacity to Save, Test, or Publish. Being a good paranoid type, I ran Test first. It popped up a window to input the parameter and then showed the output in the Output Pane (which I hadn't actually noticed before):

OutputPane

I can't tell you why it output multiple times, but it did from a single test of the script. To see the rest of the functionality, scheduling, etc., I went ahead and hit Publish. That moved it from Draft to Published, where all I can see is a faded outline of the actual script and a Start button at the bottom of the screen. I went ahead and ran it from there. It actually takes a surprisingly long time for such a silly small script to complete. There's even the ability to view the Job as it's running:

JobSummary

So that works. Next up: scheduling. It's pretty straightforward to walk through the GUI in the Portal (although now I want to see if I can programmatically control the Automation interface; more to explore). I'm going to try to run this script once an hour, so I'll give the schedule a name, unique to my account: Hourly (imagination knows no bounds). And then things get weird. I can only schedule this for a "One Time" run or "Daily." No other options are available:

Schedule

There's nothing in the core documentation about the details of scheduling. Checking the authoring doc (which has tons of stuff in it), there is a PowerShell command for directly controlling this (oh yes, much more to explore): Set-SmaSchedule. But it's not clear whether the command supports intervals other than a day. I'll have to test it to see. The Portal recognized that parameters were necessary, so I put one in and scheduled my runbook. Worked great.

With that, I have my first run book set up, tested and scheduled. So far, this is looking really interesting.


Apr 16 2014

Microsoft Azure Automation

Automation

Microsoft just announced a new mechanism for managing your Azure resources: Automation. You can check out the documentation on it here. It's a mechanism for creating runbooks using PowerShell, which you can then combine with other runbooks inside a runbook, and so on. Let's check it out. I'm doing everything you see here without consulting the documentation; I want to see how easy it is to put this stuff together. First, because it's still in preview, you have to sign up. Once you're accepted into the program, you get a new icon in your Management Portal.

Next, you'll have to create an automation account. That's pretty straightforward: it's just a name, your selected region, and the subscription you're putting it under. There's no immediate guidance on where, when, or whether the region matters that much:

AutomationAccount

When you get into the account, nothing is there. No default runbooks or anything. Now, I know you can go and get sample runbooks, and I suspect there will eventually be a clearinghouse for runbooks through GitHub or somewhere. But right now we're flying by the seat of our pants, so let's just create our own runbook. It's Azure, so the New button is right there in the lower left. Clicking on it, we can do a quick create for runbooks, code to be completed later:

Runbook

Nice and easy so far, although this thing doesn't do anything yet. So now I have a runbook. If I open it up, it shows a dashboard with several tabs: Jobs, Author, Schedule, Configure. Clicking on Jobs, I don't see anything interesting displayed. Obviously I should click on Author next, but where's the fun in that? So I click on Schedule. That brings up a message: "You must publish this runbook before you can add a schedule. Click AUTHOR to author and publish this runbook." So I go ahead and click on Configure (yeah, I'm that guy). It's not that interesting. So let's go back to where we should have started: Author. Clicking there, I get this:

Published

OK, fine. Let’s click on Draft. Which brings me to, well, I think it’s a fascinating screen:

Script

It’s a script. And that’s PowerShell it expects me to type in there, but I really don’t know what kind of commands I should be using, so… Here endeth the beginning of my exploration of Azure Automation. I need to actually go and read the docs. More to come.