Apr 01 2014

SQL Server 2014 New Defaults

Today, April 1st, 2014, marks the release of SQL Server 2014. There are tons and tons of great new methods and functions and processes within the product. We’re all going to be learning about them for quite a while to come. One of the most exciting, though, is a change to the defaults. In the past there’s been a lot of debate around how best to configure your databases. The cost threshold for parallelism, the max degree of parallelism, memory settings, auto growth, and all sorts of other settings affect how your databases work. But, Microsoft has finally done something smart. They’ve bowed to the pressure of hundreds and hundreds of DBAs, Database Developers and Developers around the world. They’ve finally done the one thing that will improve everyone’s code once and for all. Expect to see massive performance improvements in SQL Server 2014 thanks to this one default change.

What have they done, you ask? What miracle is this that is going to result in both better code and better performance? Simple: by default, all connections to the database now use the transaction isolation level READ UNCOMMITTED. In a single stroke, we are no longer forced to put WITH (NOLOCK) on every single table reference in every single query. All the pain and suffering caused by blocking from locks has been removed from the product. We can look forward to a much cleaner code base and better query performance. Thanks Microsoft.
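In case you’ve forgotten what we’re being saved from, a quick sketch (the table is a placeholder, obviously):

-- The hint we've all been sprinkling on every single table reference...
SELECT *
FROM dbo.MyTable WITH (NOLOCK);

-- ...and its session-level equivalent, now (allegedly) the default:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT *
FROM dbo.MyTable;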

Please, note the date carefully.

Feb 12 2014

SQL Server 2014 and the New Cardinality Estimator

Cardinality, basically the number of rows being processed by an operation within the optimizer, is a calculation predicated on the statistics available for the columns in question. The statistics used are generally either the values from the histogram or the density. Prior to SQL Server 2014, and going all the way back to SQL Server 7.0 (in the Dark Ages when we had to walk uphill to our cubicles through 15 feet of snow battling Oracle DBAs and Fenris the whole way), there’s been one cardinality estimator (although you can modify the behavior somewhat with a trace flag in 2008R2 and 2012). Not anymore. There’s a possibility that, for really complex, edge-case queries, you may run into a regression from this.

You control whether or not you get the new cardinality estimator by setting the Compatibility Level of the database to SQL Server 2014 (120 for the picky amongst us). This could lead to regression issues. So, you’re going to pretty quickly want to know if your execution plan is using the new Cardinality Estimation Model, right? It’s not tricky. Just look at the properties of the first operator in the plan (I told you to use that first operator). You’ll find one value there that will tell you what you need to know:

CardinalityEstimator

Just check this value (which you can also get from the XML behind the graphical plan) to see what calculations the optimizer used to arrive at the plan you’re observing.
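And if you do spot a regression, flipping between the two models is a single setting. A minimal sketch, using a placeholder database name:

-- Move the database to the 2014 compatibility level to get the new
-- cardinality estimator...
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 120;

-- ...or drop back to the 2012 level to return to the old estimation
-- model while you sort out any regressions.
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 110;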

Dec 19 2013

How to Set Up Managed Backups in SQL Server 2014

Earlier this week I introduced the concept of Managed Backups (and caused less of a turmoil than I thought I would). Now I want to show you how it works. It’s really simple and quite well documented. Before you get to the insanely simple task of actually enabling Managed Backup, you will need to go through the prerequisites. First, and this should be obvious, but I’ll state it just in case: you need to set up an Azure storage account. That’s so insanely straightforward that I’m not going to say more. Then, you have to set up encryption on your system. I used these commands to prep it:

CREATE MASTER KEY ENCRYPTION BY PASSWORD =
    '$qlserver2012queryperformancetuning';

CREATE CERTIFICATE CloudDojoCert
WITH SUBJECT = 'Backup Encryption Certificate';
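Since losing that certificate means losing the ability to restore any backup encrypted with it, I’d also back the certificate itself up somewhere safe. A quick, hedged sketch (file paths and password are placeholders):

-- Back up the certificate and its private key to files you can store
-- off the server. Without these, the encrypted backups are so much noise.
BACKUP CERTIFICATE CloudDojoCert
TO FILE = 'C:\Certs\CloudDojoCert.cer'
WITH PRIVATE KEY (
    FILE = 'C:\Certs\CloudDojoCert.pvk',
    ENCRYPTION BY PASSWORD = 'An0therStrongPassword!');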

Again, shouldn’t have to say this, but covering my bases, don’t lose that password. You might need it later. You’ll also need to create a credential (I had one created for Backup to URL already):

CREATE CREDENTIAL MyCredentialName
WITH IDENTITY = 'MyStorageAccountName',
SECRET = 'MyAccessKey';

The actual command to get the backups started is crazy simple:

EXEC smart_admin.sp_set_db_backup @database_name = 'MyDatabase',
    @retention_days = 10, @credential_name = 'CloudDojo',
    @encryption_algorithm = 'AES_128', @encryptor_type = 'Certificate',
    @encryptor_name = 'CloudDojoCert', @enable_backup = 1;

That’s it. By setting @enable_backup to 1, I’m up and running. I can repeat the script for multiple databases and I’m done. Backups are now automatic. There’s auditing built in. For example, I seem to get errors on RESTORE HEADERONLY occasionally:

DECLARE @eventresult TABLE
    (
     event_type NVARCHAR(512),
     event NVARCHAR(512),
     timestamp DATETIME
    )

INSERT  INTO @eventresult
        EXEC smart_admin.sp_get_backup_diagnostics @begin_time = '12/1/2013',
            @end_time = '12/19/2013';

SELECT  *
FROM    @eventresult
WHERE   event_type LIKE '%admin%';


[FileRetentionAdminXevent] Type = SqlError, Code = 3292, Stage = Undefined, Message = A failure occurred while attempting to execute Backup or Restore with a URL device specified. Consult the Windows Event Log for details.
RESTORE HEADERONLY is terminating abnormally., Additional Info = N/A.

Browsing through the files (using Cerebrata, because it makes my life so much easier), I can see that it’s an intermittent problem, not a consistent one. And, since it’s just the verification of the backup, I’m not sweating it that much.


It’s the kind of intermittent error you’re likely to get when working with cloud-based technology. Code accordingly.

But that’s it. You can see that I’ve been running backups for days. Managed Backups are easy to set up and easy to turn on. You can use RESTORE FROM URL to pull these back into your database just as easily as I put them out there. There’s really nothing to it.
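If you want to double-check what’s configured where, there’s a function for that too. A quick sketch, assuming the smart_admin configuration function behaves as documented (database name is a placeholder):

-- Review the managed backup settings for a single database.
SELECT *
FROM smart_admin.fn_backup_db_config('MyDatabase');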

Again, this is not for everyone. It’s just another tool in the toolbox. Yet another way of getting things done. Explore it. Let me know what you think.

Dec 17 2013

Introducing Managed Backups in SQL Server 2014

Some of the new functionality of 2014 is straightforward, non-controversial and easily welcomed by the community. Think, updateable columnstore indexes. Some of the new functionality is going to raise an eyebrow or three (most of the time, not on one person, but you know the #sqlfamily, we have some interesting mutations represented). Think… managed backups.

Now, why on earth would a process that takes backups for you, including log backups, does it automatically per a schedule and/or when data has changed sufficiently, stores them offsite for protection, and is easy to set up and maintain cause so much controversy? Hey, I’m wrong, it won’t, move along… Or, maybe, I’m right, and this is going to raise more than eyebrows.

We’re talking about surrendering control over your backup processes and letting Microsoft decide if and when you need a full backup or a log backup. We’re also talking about surrendering control over your storage and letting Microsoft manage your files in Azure Blob Storage. OMG! I said Azure. Several people vomited, just a little, into their mouths. Yep, Microsoft is going to take away your backups and your ability to control them. It will then store them in a way you can’t access and Microsoft will not let you have any control over them ever again…

Oh good gosh, really? You believe that? I do have a bridge for sale that I can let you have pretty cheap.

Let’s talk about what managed backup really is, who it might be for, where it could work extremely well and where it might not work well at all.

Managed backup is a service running on your system that is enabled, if you so desire, through T-SQL commands. That’s right, it’s not even turned on if you don’t want it to be. The service is pretty simple. When you enable it, you define a few parameters: which database or databases, and a retention period. That’s pretty much it. From there it runs your first full backup on each of the databases you’ve put under management (no system databases, and the backup size has to be less than 1tb, a limitation of Azure Blob Storage). Then, it runs log backups at least every two hours, or after 5mb or more of transactions. Heck, if your log backup chain is broken, it’ll reset and start over. It’s smart. It’ll also run a full backup after 1gb of data growth, or once a week. And again, if you break the log backup chain, it’ll take a full and keep going. All the files go to Azure Blob Storage where you can use backup/restore from a URL to retrieve them. The backup format is, wait for it, SQL Server backups. Because they’re SQL Server backups you can restore them to a SQL Server machine. That’s it. It’s very simple, simplistic even.
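You can also set the defaults at the instance level, which, per the documentation, gets newly created databases picked up automatically. A hedged sketch using the same smart_admin procedures (the credential name is a placeholder):

-- Enable managed backup defaults for the whole instance rather than
-- one database at a time.
EXEC smart_admin.sp_set_instance_backup
    @retention_days = 10,
    @credential_name = 'MyCredentialName',
    @enable_backup = 1;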

Who is this for? Well, if you’re one of those people with the funny taste in your mouth right now, probably not you (although, you know I think you should start learning it). If you’re managing one of the larger database systems, over 1tb in size, you’re out. In fact, if you have a sophisticated backup routine in place now combining differentials and file/filegroup backups, you’re out as well. This is meant for those people who don’t want to, or can’t, take all the time and trouble to set up a sophisticated backup routine. It’s for people who are already running their servers on Azure VMs. It’s for small shops that don’t have the money to set up multiple off-site storage locations for their backups. It’s for consultants who want a really simple and reliable method for getting clients to have protected backups with off-site storage in a minimal number of steps. In short, there are a lot of people who are going to find this pretty handy. Again, not everyone. But that’s OK; no feature is for everyone. That mom & pop database that’s about 8gb in size and running on a 32bit server just isn’t going to benefit from in-memory table storage either. Does that negate that functionality? Nope. Nor should the fact that some large enterprise clients will never use this.

If you’ve never set up log backups and aren’t sure how often your system should have them, this is going to be pretty handy since it does it for you. If you don’t have access to a DBA… well, you may still need one, because setup is currently T-SQL only, no GUI, so it might not work well for everyone, but I’m sure someone will write an app or a script for it soon. If you’re already running your databases on Azure VMs, this thing is fast. Really fast. In fact, I think it works quicker than my local disks (the spinning ones, not the SSDs). And, the storage is right there in the cloud, so you won’t have to pay for transfer down to your machine (although plucking a weekly backup out of the blob storage might be a good idea).

Now, there are a few things I don’t like about it. First, I’d like to have the option to back up locally first and then copy to the cloud (that’s how Red Gate implemented their cloud backup solution, and it works great). And, I’d like to be able to tweak the knobs on when the backups occur. I’m also still not convinced I like how the monitoring is done with it, although since the schedule is maintained by SQL Agent, you can set up alerts, secondary monitors, all that fun stuff, all on your own. While some sophisticated DBA types will absolutely use this, I think the prime audience is non-DBAs. As such, it needs a clean and extremely simple GUI. It would see a lot more adoption that way. Heck, I’d say it ought to be a check box in the SQL Server install process, one you have to uncheck to prevent getting it set up when you install the instance. That way, Next-Next-Next deployments that probably never even consider recovery models or any of the rest of that fun stuff will get backups as part of their installation.

Overall though, I’m excited about the added functionality. I’m especially excited because this functionality is additive. It’s not replacing anything. I can still do all my backups my way, completely and utterly. Or, I have another automation tool that I can put to work. I’ve been running this on a test platform for over a month with no issues (while running other tests for other new functionality; I’m finally digging into 2014). I’ll write another post on how to configure this, although I don’t really need to because the Books Online are pretty clear. But, I wanted to get the “They’re taking away my control and putting it all on Azure” freak out done before I posted the technical part of the info. Proceed with the meltdown.

Dec 11 2013

SQL Server 2014 Backup to URL

I’m absolutely in love with the concept of being able to create a backup directly to a protected, off-site location. Yeah, you can spend all sorts of money on terribly wonderful technology to enable that within your enterprise. And if you have that kind of money, great. But, what if you’re like most everyone else and you just want a little more protection without mortgaging the house? Let’s take a look at one possibility, backup to URL.

There have been ways to back up to hosted storage, whether it was DropBox, AWS or Azure blob storage, for quite a while. But, every method I tried out involved setting up some type of drive on your system. As soon as you had your K:\ drive mapped out to AWS or whatever, you could run a SQL Server backup directly to that location. Done. But, I’ve long wanted to just do it straight. Just back up right to that location, no additional spoofing required. As long as you’re willing to keep your stuff within Azure blob storage, it’s now possible. Further, within SQL Server 2014 you can do it directly from T-SQL, no additional application required (there is an app for 2005, the 2008s and 2012). The how is easy. Let’s talk about why for a minute.

Backups are not a technological problem. Oh, I know they can both literally and figuratively keep us nerds up at night, but the fundamental definition of the backup is something that the business must provide. They need to tell us how much data to risk and how long we can spend on recovery, through things such as the Recovery Point Objective and Recovery Time Objective. I don’t want to get too deep in the weeds defining these terms. What I do want to point out is that your disk is at risk. No, not the one that’s storing your database (although it’s at risk too). I’m talking about the one storing your backups. That thing is just waiting for the most opportune moment to die. And by opportune I mean, of course, when it will be the hugest pain for you. Plus, are you in a flood zone? A wind zone? Earthquakes? Fire? Oh yeah, the gods and the universe well and truly hate your orderly little backup disk. So, it’s a good idea to have a second location for those backups. Back in the day, we paid a company to take our tapes to an off-site storage facility. Now, you can just go to a hosted storage site like, oh, I don’t know, Azure blob storage.

Here’s how it works. You have to have already set up an Azure account. You should also create a container to keep your backups in (if you need help dealing with blob storage, check out Cerebrata). While you’re there, make sure you get the URL for your storage location and the Access Key for your storage. Now, let’s go back to T-SQL. First, we need to create a CREDENTIAL. This gets us onto our URL without having to paste sensitive information into our backup processes:

CREATE CREDENTIAL MyCredentialName
WITH IDENTITY = 'MyStorageAccountName',
SECRET = 'MyAccessKey';

With that done, the rest is easy:

BACKUP DATABASE MyNewDB
TO URL = N'http://myserver.blob.core.windows.net/scarybu/MyNewDB.bak'
WITH CREDENTIAL = N'MyCredentialName',
    NAME = N'MyNewDB-Full Database Backup',
    STATS = 10;

The syntax is pretty standard. You’re going to BACKUP a DATABASE named MyNewDB to your URL. In the URL, make sure you supply not just the URL to your blob storage account, but also the container and the backup file name. My container is ‘scarybu’ and the file name is ‘MyNewDB.bak’. In the WITH clause you’ll specify the CREDENTIAL that you created earlier.

Yes. It’s that easy. Restore is the same thing only in reverse.
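To spell it out, a minimal sketch of the reverse, using the same placeholder names:

-- Pull the backup straight back out of blob storage.
RESTORE DATABASE MyNewDB
FROM URL = N'http://myserver.blob.core.windows.net/scarybu/MyNewDB.bak'
WITH CREDENTIAL = N'MyCredentialName';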

Now you can protect your backups by ensuring that you also put a copy in an offsite location, and you can do it directly from within your area of expertise and control, SQL Server and Management Studio. For more details refer back to the Books Online.

Aug 07 2013

Be Cautious When Criticizing Guidance

I recently posted some comments about some guidance offered by Microsoft when talking about the CXPACKET wait type. Nothing I said was wrong, no. But, evidently there are a few whip-smart and agile people who work for Microsoft. Specifically, Rick Byham, who works for the team tasked with taking care of the SQL Server Books Online. Why am I calling Mr. Byham smart and agile? Evidently he saw the blog post and has completely updated the description for CXPACKET at MSDN:

Occurs with parallel query plans when trying to synchronize the query processor exchange iterator. If waiting is excessive and cannot be reduced by tuning the query (such as adding indexes), consider adjusting the cost threshold for parallelism or lowering the degree of parallelism.

WHOOP! That’s awesome work. Thanks Mr. Byham.

Of course, one of my best and most popular recent blog posts is now completely incomprehensible if people follow the link to MSDN, but I can live with that. Well done.

And the caution in this case? You never know who is going to read this stuff, so try to be nice when offering criticisms. I could be a little more respectful with my criticisms, especially since I’ve put my full share of mistakes and weak advice out in books, articles and on this blog. My apologies if my flippant approach ruffled too many feathers. I got two lessons out of one blog post.

Jul 01 2013

Getting Started With SQL Server 2014 the Easy Way

You know you want to at least take a look at the new Community Technology Preview (CTP) of SQL Server 2014. I don’t blame you either. I want to spend hours swimming through it too. But, you’re thinking to yourself, “Heck, I’d have to download the silly thing, provision a new VM, walk through the install… Nah. Too much work.” I don’t blame you. I found myself on the road the day the software was released, so I was going to attempt to do all that work on a hotel wireless system. In short, I was going to have to wait. No options. Or were there? Actually, there is a much easier option: Azure Virtual Machines.

And no, it’s not just that I can get a Windows Azure VM ready to go faster than I can a local one (and, depending on just how I set up and maintain my local servers, that might be true). No, it’s that I can immediately get a copy of SQL Server 2014, no download required. It’s that I can, within about five (5) minutes, have a server up and running with SQL Server 2014 installed and ready to go. How? Microsoft maintains a gallery of images for quick setups of Azure Virtual Machines. A couple of those images include SQL Server 2014.


To get started on this, and not pay a penny, you need to make sure that you pass the MSDN permissions listed at that link. I know that some people won’t, and I’m sorry. However, get your MSDN subscription set up and link it to an Azure account, and then you’re ready to go. Throughout this post I’ll refer to paying for Azure; if you’re running through MSDN, just substitute “using up my credits” for “paying” and it should all make sense.

First, click on the Virtual Machines icon.

This will show a list of VMs on your account, if any. We’re going to add one, so we’ll click on the little plus sign in the lower left corner of your screen.

Clicking on the New button gives you options. Reading the screen you can tell that you have a list of different services that you can add: Compute, Data Services, App Services, Networks and Store. By default, if you’ve opened this listing from the VM list, you’re going to already have Compute selected. That provides a second list of options: Web Site, Virtual Machine, Mobile Service and Cloud Service. Again, if you’ve opened these options from the VM list you’re going to have Virtual Machine selected. If not, make sure that is what gets selected. The final two options you have are Quick Create and From Gallery. For our purposes we’re going to use the Gallery, but let me first tell you what the difference here is. Your licenses for SQL Server, Windows Server, and most Microsoft products (so far as I know) are transferable between Azure and your on-premises machines. This means you can create an empty virtual machine on Azure and then load your software on to it. You don’t pay additional licensing fees. But, you can also use the images in the Gallery. Here you can set up a VM for whatever is listed, and you get those machines and their software for an additional cost, but with no additional license required. In short, you can pay a little bit more to get access to SQL Server or what have you without having to buy an additional license. It’s a great deal.


Worry about paying for it all later. We’re going to click on the From Gallery selection. This opens up a new window showing all the different possibilities you have for your VMs. You can install anything from Ubuntu to SharePoint to several different flavors of SQL Server. You can even add your own Hyper-V images to this listing (although that does mean paying for licensing on any created VMs). Scroll down until you see SQL Server 2014 CTP1. On my listing currently, there are two copies: one that runs on Windows Server 2012 and one that runs on Windows Server 2012 R2. If you want a Start button on your screen, pick the second one. You’ll then be walked through the wizard to get this thing created. Click on the right arrow at the bottom of the screen after selecting a VM.


Now you need to supply a machine name. It needs to be unique within your account. You’ll also have to pick the size of machine you want. This, and the size of the data you store, is what you pay for. You’ll need to decide how you want to test 2014, small or large. For my simple purposes, exploring 2014, I’m going with Medium. That currently means 2 cores and 3.5gb of memory. You can go all the way up to 8 cores and 56gb of memory, but you will be paying for that, just so we’re clear. You also have to create a user and password for the system. Strict password rules are enforced, so you’ll need a special character and a number in addition to your string.


You need to configure how this machine will behave on the network. You need to supply it with a DNS name, your storage account, and your region. I would strongly recommend making sure that your servers and your storage are all configured for exactly the same region. Otherwise, you’ll pay extra for the data moving between regions, and you may see somewhat slower performance.


Finally, you can, if you want to, add this server to an Availability Group. For our test purposes we’ll just leave that set to None. But, you can make this a part of an AG in Azure, or take a mixed, hybrid approach with it as an async secondary to your on-premises servers. Oh yes, the capabilities are pretty slick. I would also suggest leaving PowerShell remoting enabled so that you can take advantage of all that it will offer you in terms of managing your VMs and the processes running within them.


Click on the check mark and you’re done. You’ll go back to the VM window and at the bottom of the screen you’ll see a little green icon indicating activity. It will take about five minutes for your VM to be created. While it’s running you can, if you choose, watch the process, but it’s a bit like watching paint dry. You’ll see the steps it takes to create your machine and provision it with the OS and SQL Server version you chose.

Once it’s completed, you’ll have a VM with a single disk, ready to go. But, you need to connect to it. Remember that user name and password? We’re going to use that to create a Remote Desktop connection to the server. When the process is completed, the server will be in a Running state. Click on that server in the Management Portal and click on the Dashboard selection at the top of the screen. This will show you some performance metrics about the machine and, at the bottom, give you some control over what is happening. The main thing we’re looking for is the Connect button.

Click on that button and you will download an RDP file from the Azure server. Open that file (and yes, your system may give you security warnings; click past them) and you’ll arrive at a login screen, configured for your Azure account. That’s not what you want. Instead, you’re going to click on “Use another account.” Then, in that window, type in your machine name and user name along with the password. Once you click OK, you’ll be in an RDP session on your SQL Server 2014 CTP1 VM. Have fun!


Remember, you can stop the VM when you’re not using it and you stop paying for it (or using up your MSDN credits). Just go to the dashboard and use the “Shut Down” option at the bottom of your screen.

If you found this useful and you’d like to learn a lot more about the capabilities of using Azure within your environment, I’d like to recommend you sign up for my all day pre-conference seminar at PASS 2013 in Charlotte. I’ll cover this sort of thing and one heck of a lot more about the future of being a DBA working in the hybrid environment of Azure and on-premises servers.

Jun 10 2013

Praise and a Suggestion for AlwaysOn

One of my favorite additions to SQL Server 2012 is Availability Groups, referred to as AlwaysOn. These things are awesome. It’s basically shared-nothing clustering at a database-by-database level. You can set up a series of secondary machines that will enable you to fail over a database in the event of an outage. This is a huge advantage in high availability and disaster recovery scenarios. We’re talking serious business continuity. Further, you can set up one of those secondary machines to allow for reads, meaning you get a reporting database that allows you to offload read queries from a transactional machine. Another giant win. But wait, it gets better.
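Before we get to that, if you haven’t seen the syntax yet, here’s a minimal, hedged sketch of an Availability Group with a readable secondary. All the server, database, and endpoint names are placeholders, and the real setup also involves HADR-enabled instances, endpoints, and joining the secondary:

-- One synchronous primary with automatic failover, plus an asynchronous
-- secondary that accepts read-only connections for reporting.
CREATE AVAILABILITY GROUP MyAG
FOR DATABASE MyDatabase
REPLICA ON
    N'PRIMARYSRV' WITH (
        ENDPOINT_URL = N'TCP://primarysrv.mydomain.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SECONDARYSRV' WITH (
        ENDPOINT_URL = N'TCP://secondarysrv.mydomain.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));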

Now, with the capabilities that are coming with Azure Virtual Machines and Azure Virtual Networks, you can go even further. It’s now possible to have an off-site failover server for your mission critical databases, but one for which you don’t have to purchase hardware and rack space. This is just absolutely huge and amazing. The capability of ensuring your business continues without a hiccup in the event of serious catastrophes has just shot through the roof. Now how much would you pay? But wait, there’s still more.

There are going to be even more capabilities of this type built into the upcoming SQL Server 2014 that was announced last week at the TechEd North America conference. Lots of enhancements between Azure, Azure VMs and Azure SQL Databases are going to be a part of the new release of SQL Server. SLAM!

I’m taking a semi-joking tone here, but I’m actually quite serious. This is the kind of technology you want to put into place in order to keep your business up and running. The capabilities in this area just keep expanding and expanding and I’ve got to say, as a paranoid DBA, I think it’s wonderful.

So, enough praise, here’s my friendly suggestion for Microsoft (I know, no one there is going to read this and it won’t matter a lick, but I’ll put it out there anyway). All this exciting stuff requires an Enterprise license. There are smaller businesses out there that may not be able to afford that. But, those businesses need a method for ensuring their continuity as much as larger businesses, maybe even more so. What I’d like to see is the capability in the Standard edition of SQL Server 2014 to have a single failover secondary machine, Azure VM only, as a part of the product. This way, there are a ton of businesses that can take advantage of the cheap (look it up, it really is) Azure VM as a means of setting up an HA environment. The one downside for Microsoft will absolutely be some businesses that choose to skip getting an Enterprise license and instead go with Standard because all they wanted was a little HA. And I can see how that could hurt the bottom line. But I suspect that would be more than offset by all the people paying for Azure VMs in order to ensure their business continuity. Plus, make it so that it’s not read capable. That will ensure that Enterprise still retains a very clear edge.

There it is. My suggestion. Not worth much, but I’d sure love to see it happen. And now back to our regularly scheduled programs.