May 07 2013

How to Tell Your Windows Azure SQL Database Moved

The very concept of the Windows Azure SQL Database (WASD) is predicated on the up-time created by having three active copies of your database. Should there be a hardware or software failure that would cause the primary replica to go down, your database gets moved to one of the secondary replicas which has been maintaining a copy of the data from the primary replica. In theory, the worst thing that happens is that you have to retry a query. In fact, building that sort of resilience into your code is a pretty fundamental aspect of working with WASD.

I was asked the question, how do you tell if your database has been moved between replicas. I didn’t have a complete answer, so I set out to find one. The first, more obvious part of the answer, is that you’ll receive a connection error to your app while this event is occurring. Microsoft has a very solid white paper on how to deal with your connections, Windows Azure SQL Database Connection Management. Scrolling down you can see that you’ll receive a very specific error number depending on why you lost your connection. These errors include 40197, which is the number you’ll receive during any of the events that trigger a failover to the secondary replicas. Answer done, right? Well, not exactly.
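That retry-on-transient-error logic might look something like the sketch below. This is Python purely for illustration; the error numbers come from the connection management white paper (40197 is the failover one), and `TransientSqlError` and `run_query` are hypothetical stand-ins for whatever exception type and data-access call your own stack uses:

```python
import time

# Transient error numbers listed in the connection-management white paper;
# 40197 is the one raised while a failover to a replica is in progress.
TRANSIENT_ERRORS = {40197, 40501, 40613}

class TransientSqlError(Exception):
    """Hypothetical stand-in for a provider exception carrying a SQL error number."""
    def __init__(self, number):
        super().__init__("SQL error %d" % number)
        self.number = number

def with_retry(run_query, max_attempts=5, base_delay=0.5):
    """Run run_query, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return run_query()
        except TransientSqlError as err:
            if err.number not in TRANSIENT_ERRORS or attempt == max_attempts:
                raise  # not transient, or out of retries: surface the error
            # Back off before reconnecting and trying again.
            time.sleep(base_delay * 2 ** (attempt - 1))
```

The point is simply that a failover surfaces as a retryable connection error, so the app keeps working; your production version would hang this off whatever client library you actually use.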

I knew about the error code, but the real question is, how do you tell if the move occurred while you weren’t connected? Is there a dynamic management object or system view that shows this information? The first thing I thought of is sys.event_log. That’s because it stores aggregated information about connectivity and errors such as deadlocks and throttling. Surely, if you were moved it would be there. But, reading through the documentation there’s no indication of it. And, it looks like that information is stored within the database and is therefore going to be moved (unlike all the information in cache which just goes away).

Speaking of cache, you could try running queries against sys.dm_exec_query_stats to see when the oldest compile time of any given query might be. But, remember, we’re on a shared server with a hundred other databases. Cache is highly volatile. I don’t think I’ve ever seen more than about one day’s worth of information within the cache-dependent set of DMOs. In fact, I seldom see more than about an hour’s worth there, and sometimes quite a bit less. So that’s not a possibility unless you set up a mechanism for monitoring your servers and can see a massive delta in what’s in cache from one look at the query stats to the next. But that could still be cache volatility.
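If you did want to build that monitoring mechanism, the comparison step could be sketched like this. Python for illustration only; it assumes each snapshot is a dict keyed by plan_handle, built from a periodic poll of sys.dm_exec_query_stats, and the 90% threshold is an arbitrary number I picked, not anything Microsoft documents:

```python
def cache_delta(before, after):
    """Fraction of previously cached plans missing from the newer snapshot.

    Each snapshot is assumed to be a dict keyed by plan_handle, pulled
    on a schedule from sys.dm_exec_query_stats."""
    if not before:
        return 0.0
    evicted = set(before) - set(after)
    return len(evicted) / len(before)

def possible_failover(before, after, threshold=0.9):
    # A near-total flush is consistent with a move to a replica, but
    # ordinary cache volatility on a busy shared server can look the same.
    return cache_delta(before, after) >= threshold
```

Even then, as the post says, a huge delta is only suggestive, never conclusive.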

Looking through the rest of the DMOs and system views I couldn’t identify anything that definitively answers this question. So, at this point, I’d say, if you don’t get the error, there is no real indication that you’ve moved from a primary to a replica. On the one hand, it concerns me that I don’t know. On the other, who cares. The whole idea behind this approach to development is that the guts of the things are handled by others allowing you to concentrate on building the database and the app code appropriately.

For lots more about Windows Azure SQL Database, sign up for the pre-conference seminar, How to be a Successful DBA in the Changing World of Cloud and On-Premise Data, at Tech Ed 2013 in New Orleans or Tech Ed Europe in Madrid.


Aug 20 2012

Another Execution Plan… In the Cloud!

A couple of weeks ago I posted about how different execution plans look when using the Azure Management Portal. I promised to look at a more complex plan and here we are. The query looks like this:

SELECT	*
FROM	dbo.Agent AS a
		JOIN dbo.Resource AS r
		ON a.AgentId = r.AgentId
		JOIN dbo.MovieStageResource AS msr
		ON r.ResourceId = msr.ResourceId
		JOIN dbo.MovieStage AS ms
		ON msr.Movieid = ms.MovieID
		   AND msr.MovieStageDefinitionId = ms.MovieStageDefinitionId
		JOIN dbo.Movie AS m
		ON ms.MovieID = m.MovieId
WHERE	a.AgentId = 42;

I didn’t go nuts in creating a crazy complex query. I just wanted to see a standard set of different operators to see how things changed. As before, I’m working off a controlled database that has identical structures and data in both the Azure database and my local copy of SQL Server. This way, the likelihood of identical plans is increased. Plus, if there are differences, it’s coming down to the differences in compilers between the two platforms, which makes things more interesting.

Running the query on my local server, the execution plan looks like this:


If you click on it to blow it up, you’ll see a pretty standard plan. If we read it in logical fashion (left-to-right, making Rob proud) you’ll see that a nested loop joins data between the Agent table and another nested loop. That loop joins between yet another nested loop and the MovieStage table. That loop is between one more nested loop and the Movie table. The final loop is between the MovieStageResource table and the Resource table. Nothing shocking or weird.

Same query, different (?) execution plan from Azure

The first thing that jumps out at you is the red operators. We’ll get to those in a minute. So the plan looks totally different, right? No, not really. It’s pretty much the same. The estimated operation costs are different, which is interesting, but the basic plan is the same.

First thing to check in every execution plan? The properties in the first operator. Here they are from SQL Server:

Zoom in to read this if you need to. It’s a Full optimization, but the reason for early termination is a TimeOut. This means we can’t really trust this plan. On another day you may even see the plan change for the same query. Certainly, if a service pack came out, or something else changed, the plan could change. However, this query looks like it’s properly optimized, even though the optimizer says it isn’t. Let’s see the properties from Azure. I’ve zoomed in to the “More” properties to get at some details:

The first point to note is the Estimated Rows value. Here, in Azure, it is 40.369. Meanwhile, back in SQL Server it is 40.8236. Now these differences are pretty subtle. I sure wouldn’t be sweating out-of-date statistics because of a .4546 discrepancy, but it gives us a clear indication why the estimated values for identical operations against identical data and structures are different. I’ll also note that the Optimization Level on the Azure query was also Full and the Reason for Early Termination was a timeout. These are fun and interesting results.

Now, let’s look at the most costly operator, the Clustered Index Seek operation. First the properties in SQL Server:

This is where things get a little weird. We have an estimated number of executions at 1000 and an estimated number of rows at 1. The actual number of rows was 22. The estimated cost for the operator is 2.24226. All well and good. But when we look at the same values in the Azure operator:

If you look at the Estimated Executions, Actual Rows, Average Row Size, Estimated I/O Cost and Estimated CPU Cost, they’re identical (with some rounding differences). But… the estimated cost of the operation is different; 2.24226 in SQL Server and 1.50594 in Azure. How do I account for the difference? I don’t. I think we’re looking at differences in how the optimizer establishes costs, because everything else looks the same.

Clearly, while the optimizers behave in a very similar manner (after all, we got effectively identical execution plans), they are not the same. The small difference in the cost assigned by the optimizer informs us of this. I’ll have to play around some more with genuinely complex queries to see what else can be seen.

If you enjoy this kind of drill down on queries, query tuning, execution plans and all that fun stuff, I’d like to suggest you sign up for my full-day pre-conference seminar at the PASS Summit 2012 this fall.

Aug 01 2012

Execution Plans … In the Cloud!

If you’re moving to a fully-fledged SQL database as part of Azure you may never even touch SQL Server Management Studio (SSMS). Instead you may do all the management of your SQL database through the Management Portal. You can actually do everything you need to through this tool. This includes writing and testing T-SQL statements. Believe it or not, it includes execution plans. But, if you’re used to seeing this from a query:


Then you may be a little bit surprised at this:


Believe it or not, those are identical plans. Yeah, the graphics are different, radically so. But the plans, those are the same.

I have a small test database that I have loaded into my local server and onto my Azure SQL Server database. It has identical structures and, thanks to SQL Data Generator, identical data sets. This means that when I run a query, assuming the statistics are the same, I should get identical plans.

For this post, I’m just going to scratch the surface and take a look at plans for this query:

SELECT * FROM dbo.Movie AS m
WHERE m.MovieId = 4138;

As you can see, both plans include the SELECT operator and a Clustered Index Seek operator. But as I’m constantly saying, just looking at the graphical part of the plan is not enough. You have to look at the properties. Let’s start with the SELECT operator. Here’s the properties from SSMS:


For comparison purposes, here’s what we get from the Management Portal. I’m going to post it up in pieces so that we can see it all.

When you click on an operator within the portal you’ll see a small set of properties, almost like the tool tip in SSMS. There’s a button there also “View More.” Clicking on that expands out the full set of properties. Here’s the first screen shot:


Obviously the order is different. But there are also some data differences. For example, the NodeID. In a plan in SSMS the nodes are numbered in a physical processing order, starting at zero. It’s one way to tell how the query was processed. In the Management Portal you can see that the first node has an ID of 1. The second one is 2. This follows a logical processing order. Also, the SELECT operator is given a NodeID. It does not have one, even in the XML, within SSMS. The other information is pretty standard and exists in both. I’ll keep scrolling down:


Here there are more differences. At the top, Statement Id. This is not displayed within the properties of an SSMS plan, but is available within the XML. It just shows which statement you’re working with if you have multiple T-SQL statements. Other pieces are the same, including exactly the same Subtree cost (which means the statistics are identical and the cost is estimated by the optimizer in both Azure and my SQL Server 2012 instance in the same way). Both plans were TRIVIAL and used simple parameterization. The Optimizer Hardware Dependent Properties show differences, but that’s to be expected. However, it does give you a glimpse at what is exposed to SQL Server within the Azure system. For example, the max degree of parallelism for this system is 3. I’ve got an EstimatedAvailableMemoryGrant of 4500062. Compare that with the 208838 on my 16GB laptop and you can see how much bigger the hardware is for the Azure system. Let me scroll down a little more:


The last major difference is more visible in this screen capture than the last one. The “Reason for NonParallel Plan” is not visible in the 2012 properties. It’s also not in the XML. This is new information exposed about this query and the execution plan generated.

With this one operator we can see that these plans, while similar, are not identical. I’ll post more about these in the coming weeks.
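As an aside, those two EstimatedAvailableMemoryGrant values quoted above are easier to compare once converted to gigabytes. Assuming the value is reported in kilobytes (that is how I read it; treat the unit as an assumption), a quick conversion looks like this:

```python
def grant_kb_to_gb(grant_kb):
    """Convert an EstimatedAvailableMemoryGrant value from KB to GB
    (assuming the property is reported in kilobytes)."""
    return grant_kb / (1024 * 1024)

azure_gb = grant_kb_to_gb(4500062)   # the Azure plan's value, roughly 4.3 GB
laptop_gb = grant_kb_to_gb(208838)   # the 16GB laptop's value, roughly 0.2 GB
```

That is over a twenty-fold difference in memory available for grants, which is the point of the comparison in the post.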

For more details on query tuning and execution plans, including Azure, please attend my pre-conference seminar at the PASS Summit 2012.

May 23 2011

SQL Azure Query Tuning

SQL Azure is still SQL Server at the end of the day. This means it is entirely possible to write queries against SQL Azure that really… what’s a good word… stink. So what do you do? It’s all in the cloud. You couldn’t possibly tune the queries, right? Wrong. Many of the same tools that you have available to you, such as execution plans and dynamic management objects, are still available in SQL Azure.

Let’s talk DMOs for a second. First off, don’t make the mistake I did of trying to run these outside the context of a specific database on SQL Azure. You’ll get extremely inconsistent results, trust me on this. Anyway, I did a quick run-down on some of the most used DMOs for performance tuning, the sys.dm_exec_* set. Here’s a complete listing of those objects and whether or not they’re available to you in SQL Azure:

[sourcecode language="sql"]SELECT * FROM sys.dm_exec_requests AS der;

SELECT * FROM sys.dm_exec_requests AS der
        CROSS APPLY sys.dm_exec_query_plan(der.plan_handle);

SELECT * FROM sys.dm_exec_requests AS der
        CROSS APPLY sys.dm_exec_sql_text(der.sql_handle);

SELECT * FROM sys.dm_exec_query_stats AS deqs;

SELECT * FROM sys.dm_exec_cached_plans AS decp;
--invalid object

SELECT * FROM sys.dm_exec_connections AS dec;

SELECT * FROM sys.dm_exec_cursors(0) AS dec;
--invalid object

SELECT * FROM sys.dm_exec_requests AS der
        CROSS APPLY sys.dm_exec_plan_attributes(der.plan_handle) AS depa;
--invalid object

SELECT * FROM sys.dm_exec_procedure_stats AS deps;
--invalid object

SELECT * FROM sys.dm_exec_query_memory_grants AS deqmg;
--invalid object

SELECT * FROM sys.dm_exec_query_optimizer_info AS deqoi;
--invalid object

SELECT * FROM sys.dm_exec_query_resource_semaphores AS deqrs;
--invalid object

SELECT * FROM sys.dm_exec_sessions AS des;

SELECT * FROM sys.dm_exec_requests AS der
        CROSS APPLY sys.dm_exec_text_query_plan(der.plan_handle, 0, -1) AS detqp;

SELECT * FROM sys.dm_exec_trigger_stats AS dets;
--invalid object

SELECT * FROM sys.dm_exec_xml_handles(@@SPID);
--invalid object[/sourcecode]

The good news: most everything you need is available, so you’re really going to be able to go to town on using DMOs as part of your query tuning. The bad news: “most everything” doesn’t include sys.dm_exec_query_optimizer_info. This does take away a tool. It’s not a vital tool, but it’s one that allows you to understand some of what’s happening on the system. I’m not going to cry and rend my clothing because I don’t have it, but I will be somewhat disappointed.

This is great news! The tools you’re learning and using (and you are using DMOs, right?) will continue to be available in SQL Azure.

May 09 2011

SQL Azure Migration Wizard

There’s a project over on CodePlex to come up with a mechanism for validating databases and generating the necessary scripts to allow you to move those databases, and data, over to SQL Azure. It’s called the SQL Azure Migration Wizard. There’s no real install yet since it’s just a beta. You can download the executable and run it.

It’s pretty straightforward stuff. It’s primarily focused on validating that a database, script, or series of calls (from a trace event file) will be properly compatible with SQL Azure, but it can also generate a deployment script from the database. Here’s the opening screen:


Not much to it. I’ll walk you through the Analyze and Migrate path which covers most of the functionality of the app. Clearly, you click Next from this screen. This will open a connection window that recommends you connect to Master and let the app bring back a list of databases. OK. Fine. Here’s the list that comes back from my server after a standard connection:


Select a particular database and move on. I chose a copy of my MovieManagement database that I had created for my DAC pac testing. Clicking on Next opens up the Choose Objects window. You can either take the default and move everything or change the radio button and select individual objects. I selected everything here just to show off the screen:


If you click on the Advanced button near the bottom of the screen, you get a whole bunch of scripting options. I won’t go into them all, but you get a surprising level of control over what gets created and how. The most interesting part is all the compatibility checks. You can almost guess at what Microsoft might be working on for upcoming releases, if you assume that checks included here can be switched on at a later date without compiling a new set of software:


Clicking Next on the Choose Objects screen brings up a summary showing all the choices made so far.


Clicking Next just generates the script. No real secrets. You can choose to save the script and the output from the process:


Clicking next, things start to get interesting. You have to connect up to a SQL Azure database. You’ll forgive me if I don’t share my connection information with you. I’m trying to keep the costs on this silly thing near zero as much as possible. You again get a list of databases with a few interesting buttons on the bottom of the screen.


I can change the connection to a new server. I can delete a database or I can create a database. Either way, the script I generated is going to run, as is. There is no comparison, so except for the ability to choose which objects get deployed, there is no incremental build, say, adding a column to a table. It’s all or nothing, drop & create or just create. Once you pick a database you get the choice to run your script and it finishes.


That’s it. Overall, I like this better than the DAC pac, at least for basic setup and operation. But there’s no way to script it, so you’re forced to walk through this wizard over & over again. However, that said, you have a mechanism that clearly offers more control and abilities than the DAC pac currently supports in SQL Azure.

But… I’m still underwhelmed. I’m still not getting my database into source control. I’m still not able to perform minor changes or in-place upgrades. This will move data as well as objects, which is good, but again, it’s all or nothing. No incremental deployments. We’re still not there with Microsoft tools, not yet.

May 02 2011

SQL Azure and the DAC Pac

When last we left our intrepid hero he had successfully deployed to SQL Azure using the Data-Tier Application Package, a DAC pac. It was easy and I had a database in place in nothing flat. There really weren’t any issues worth mentioning. I wasn’t crazy about the fact that unless I had Visual Studio 2010 I couldn’t edit the Data-Tier apps or get them into source control, but there you go.

So, assuming this is a real production application, I’ve just realized that I need to get a new procedure into my database. If I just go and run the script to create the procedure then I’ll be breaking the link between my database and the DAC pac (and yes, I still enjoy saying that, knowing that blood pressure is rising with each utterance), which would be bad. Instead, I need to create an updated DAC pac and then upgrade the existing deployment. There’s a bunch of information on how to do this right in the Books Online, so it shouldn’t be too hard.

Then, I right click on my DAC pac on my SQL Azure database and here’s my menu:


Hmm… But I’ve been testing DAC pacs on my local machine and when I right click on one of my tests there I get this menu:


What’s missing? And no, it’s not PowerShell for crying out loud. No Upgrade Data-tier Application menu choice on SQL Azure.

But, I thought I was supposed to deploy from the DAC pac? Well, yes, you’re supposed to deploy, but you shouldn’t need to upgrade, at least not on SQL Azure. And yes, all the words, especially the four-letter ones, that just went through your head have been going through mine. You can’t deploy changes to a SQL Azure database using a DAC pac.

But, Microsoft is aware of the issue with the DAC pac. I don’t know if they’re working on making it functional in SQL Azure, they may well be. But, they are working on the SQL Azure Migration Wizard. More on that in our next installment. Same Azure-Time. Same Azure-Channel.

Apr 28 2011

SQL University: SQL Azure & PowerShell

Welcome once more to the Miskatonic branch of SQL University. I see that most of you survived our last encounter… uh, class. Most of you even appear somewhat sane… somewhat. Today we’re going to talk about PowerShell and its use with SQL Azure. Which sounds something like an eldritch horror breaking in from some outer dimension… hmmm… that could explain certain things… So, back to PowerShell and SQL Azure.

You can’t run PowerShell from SQL Azure. Thank you for attending and enjoy your trip home.

Still here? I did try. Let’s clarify. You can’t run PowerShell from SQL Azure, but I didn’t say that you couldn’t use SQL Azure as a target for PowerShell. It is possible to connect to your SQL Azure databases running PowerShell from a local desktop or server. Let’s examine the basic connections.

The strength of PowerShell, at least in my mind, is in the pipe. The ability to pass information from one command to the next through the pipe is how PowerShell stands out from just another scripting language. SQL Server 2008 R2, and by extension, SQL Azure, has very limited pipe capability (we’re all holding out hope for Denali). What little you can do comes through Invoke-Sqlcmd (and yes, invoking things at Miskatonic is usually frowned upon).

Connecting to a SQL Azure database from PowerShell is just a question of configuring the connection appropriately. But appropriately isn’t that hard:

[sourcecode language="powershell"]Invoke-Sqlcmd -Query "SELECT * FROM sys.dm_exec_requests;" -ServerInstance "" -Username "MyUserName" -Password "APassword"[/sourcecode]

Yeah, it’s that easy. This will connect up and run the query. I’d show you the output, but it’s not all that thrilling.

The other way to use PowerShell is to connect to stuff through SMO. Clearly, if I can connect through Invoke-Sqlcmd, then making an SMO connection should be just as easy, and it is:

[sourcecode language="powershell"]
[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null

$cn = ";Database=master;User ID=MyUserName;Password=APassword;Trusted_Connection=False;"
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = $cn

$srv = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $conn
$db = $srv.Databases["master"]

$db.ExecuteWithResults("SELECT * FROM sys.dm_exec_requests;")
[/sourcecode]

Well, OK, maybe not as easy, but it works. I’m not a fan of SMO.

The key thing to remember: any functionality that is not supported in SQL Azure will not be supported through any PowerShell calls. But once you’re in, you’re in. You can begin making use of the abilities of PowerShell to perform automation on your SQL Azure database. Be aware, though, you’re still paying for data movement charges. There’s nothing magic here. Also, since PowerShell doesn’t yet run on the SQL Azure server itself, remoting is right out.

Thanks for coming to my class. I hope you begin to employ PowerShell with your SQL Azure implementations. I also hope you all get home. The building’s beginning to shake. Since we’re nowhere near an active fault line, it usually means yet another visitation from beyond space and time. Why can’t they pick on Harvard occasionally, or go down to Princeton? In New Jersey, who’d notice? I’m heading for my bunker… uh, I mean office. See you next time, maybe.

Apr 26 2011

SQL Azure Diagnostic Tool

One of the CSS Engineers from Microsoft has released a diagnostic tool for SQL Azure. It’s worth a look. It’s just using standard DMO queries to pull back data, but they’ve put them together inside a nice UI to help out. If you’re already a DMO super-star, this might not be useful, but if you’re looking for an easy way to gather data from your SQL Azure instance, this is a good choice. Just remember, although it wasn’t mentioned on the MS web site, I’m fairly certain you get charged for running these queries. It’s always something to keep in mind when working with SQL Azure.

Apr 25 2011

SQL Azure Deployments

You’ve set up your access to a SQL Azure site so you’re ready to go. Now how do you get your database up there? Well, TSQL works just fine, so you can script it all out to a file or set of files and apply them. Because you can connect with SSMS, you could use the GUI to build out your database. You can also use the Azure development platform and its web-based GUI to create your database structures. Personally, the scripting method doesn’t seem too bad to me because you can get your code into source control that way. But, Microsoft has something additional in mind. It’s called Data-tier Applications, or DAC for short (and yes, there are other things called DAC in SQL Server, what can I say, I don’t work for Microsoft).

A DAC is created by a DAC package, called a DAC pac (yes, yes, I know). A DAC pac can be created in one of two ways. You can use Visual Studio 2010 to create a data-tier application, similar to how you would create a regular database in Visual Studio using the Team System database tools. The other option is to create a database using TSQL and/or SSMS and then extract a DAC definition. For this blog post, we’ll just talk about the method of extracting a DAC and creating a DAC pac (and yes, I’m having fun saying this over & over. Can you see a SQL Server MVP nearby? His blood pressure is spiking every time he reads those letters).

I have a sample database I’ve been using for some different tests, it consists of a set of tables and primary & foreign keys. I sometimes load it up with data and other times leave it blank. Anyway, here is the list of tables in the database:


To turn this into a DAC package is pretty easy. I’m simply going to right click on the database itself, click on the “Tasks” menu choice and then select “Extract Data Tier Application” This opens a wizard:


It’s pretty simple from there. Click on the “Next” button to bring up a few options that you can set. Not much really. You can name the package, give it a version, include a description and give it a location to store the package.


Clicking next validates that what you’re making into a DAC pac is a valid set of structures:


And then the final click of the Next button builds out the package and you’re done.


You would use this approach because it does a couple of things. First, it lets you develop stuff locally. You’re working on your own server using tools you’re comfortable with. Second, you can pass this to another team or another company and they can deploy your database from this package. The concept is, this is all you ever need to deploy a new structure to the database. It’s supposed to do upgrades and everything, all on its own. I’m working on testing it.

To deploy this to an Azure database, after you connect to the server in SSMS, open the Management folder. The only thing in it for a SQL Azure server is Data Tier applications. Right click and select “Deploy Data-tier application…” This will open another wizard and you can start by selecting the DAC pac you just created:


You don’t really need to see all the screens because it’s just the other ones in reverse except you get to decide the name of the database you’re deploying. That’s it.

A couple of issues I have with this so far… no data. Microsoft is working on some other utilities for getting data out to the database, and I’ll cover those. No source control. Yeah, you can take the DAC pac itself, it’s just XML, and put that into source control, but that’s not the same thing as having individual tables, indexes and procs in source control as you can do using other tools.

Let me test these some more and I’ll post a few more bits of information about what I find.

Apr 18 2011

SQL Azure Cost

We’ve all heard the scary stories. A developer starts testing Azure and then suddenly gets a thousand dollar bill on their credit card. The whole idea behind Azure works off the premise that you are not paying for the infrastructure. Instead of buying servers and disks and racks and switches and routers, you just pay for access and storage. It’s a great model. That model is especially applicable to small businesses that just can’t afford the outlay, or to a larger company that wants to perform cheap experiments, or any number of other places where purchasing and maintaining hardware just doesn’t make sense. But what are the costs?

That’s a little tricky to answer, truth be told. You’re charged a monthly fee for each SQL Azure database, but the fee is amortized daily based on the size and number of databases you use at any point during the day. Easy, right? Yeah, me neither. Here’s the legal sheet with all the descriptions in detail. I’ve pulled out the relevant quote here:

We charge a monthly fee for each SQL Azure database, but we amortize that database fee over the month and charge for the number of databases used on a daily basis. This daily fee is dependent on which database type (e.g., 1 GB Web Edition, 10 GB Business Edition, etc.) and number of databases you use. You pay based on the total number of databases that existed at any point during a particular day. For presentation on your bill, all usage for Web Edition databases is converted into the 1 GB size and all Business Edition usage is converted into the 10 GB size. For example, if you utilized a 50 GB Business Edition database, your bill will display this as five 10 GB databases.

What are those fees? Here’s another bit from the web site. You’re measured based on the size of your database. Less than 5GB and it’s a Web Edition database. Since the rounding is up, anything greater than 5GB means a Business Edition database, and the rates go up:

  • Web Edition
    • $9.99 per database up to 1GB per month
    • $49.95 per database up to 5GB per month
  • Business Edition
    • $99.99 per database up to 10GB per month
    • $199.98 per database up to 20GB per month
    • $299.97 per database up to 30GB per month
    • $399.96 per database up to 40GB per month
    • $499.95 per database up to 50GB per month

For example, I currently have three databases I’m playing with. All are less than 1GB, so I’m getting charged $9.99 a month × 3, but that assumes I leave all three databases there all month. If they are only there part of a month, say 5 days, then I get charged $9.99 × (5/30), i.e. $0.333 a day, for a total of $1.665 for the month. I’m not a lawyer or an accountant, so please validate these numbers on your own, but the math works out.
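That arithmetic, plus the “converted into 10 GB databases” rule from the billing quote, can be sketched as a quick calculation. This is an illustration only, with a simplifying 30-day month; the function names are mine, not anything from the rate card:

```python
import math

DAYS_IN_MONTH = 30  # simplification; real amortization uses actual days


def monthly_fee(size_gb):
    """Monthly fee for one database, per the rate card above."""
    if size_gb <= 1:
        return 9.99              # Web Edition, up to 1 GB
    if size_gb <= 5:
        return 49.95             # Web Edition, up to 5 GB
    # Business Edition bills in 10 GB increments, up to 50 GB,
    # e.g. a 50 GB database shows up as five 10 GB databases.
    return 99.99 * math.ceil(size_gb / 10)


def charge_for_days(size_gb, days_present):
    """Amortize the monthly fee over the days the database existed."""
    return monthly_fee(size_gb) * days_present / DAYS_IN_MONTH
```

So a 1GB database that exists for 5 days of the month comes to $9.99 × (5/30), matching the $1.665 above.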

That’s just SQL Azure. Now we need to talk data transfers. The pricing in this area is confusing. Windows Azure has a whole different price structure and includes data transfers. Is that applicable to SQL Azure? I’m still not sure. It’s not in the documentation as a part of paying for SQL Azure, but I’m fairly certain it is.

Let’s work off the model, for a moment, that there are no transfer costs for SQL Azure. Assuming this is correct, an interesting thing occurs. You can just use SQL Azure and connect to it from your own app servers. Or, you can start using Windows Azure and pay the data transfer costs. What’s the interesting point? If you’re just accessing SQL Azure and you write really bad TSQL code that moves way too much data or has too many really sloppy transactions or something, you get away with it for free. But if you do the same thing from Windows Azure, you’re going to pay through the nose for your bad coding practices. However, I think there are transfer rates on SQL Azure too, so all that poorly written code (or all that nHibernate generated code) is going to cost serious bucks.

In short, who said the job of a DBA or a database developer was dead? Not me. In fact, if you’re decent at query tuning, your star might be rising. You are going to literally save companies money.

There is a data transfer cost, and the rate depends on where you are located:

  • North America and Europe regions: $0.10 per GB in / $0.15 per GB out
  • Asia Pacific Region: $0.10 per GB in / $0.20 per GB out
  • Inbound data transfers during off-peak times through June 30, 2011 are at no charge. Prices revert to our normal inbound data transfer rates after June 30, 2011.
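Using the rates above, working out a transfer charge is simple multiplication. A sketch, ignoring the temporary off-peak inbound waiver (region keys and function name are mine, for illustration):

```python
# Rates as listed above, in dollars per GB.
TRANSFER_RATES = {
    "north_america_europe": {"in": 0.10, "out": 0.15},
    "asia_pacific":         {"in": 0.10, "out": 0.20},
}


def transfer_cost(region, gb_in, gb_out):
    """Data transfer charge for a period, ignoring the off-peak inbound waiver."""
    rates = TRANSFER_RATES[region]
    return gb_in * rates["in"] + gb_out * rates["out"]
```

Moving 1GB in and 1GB out of a North America or Europe region, for instance, works out to a quarter.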

To date, I’ve been charged $.20 (that’s twenty cents). I’m still not sure why and since it was so low, I didn’t bother fighting it. The good news. If you’re just starting out and you have an MSDN Premium subscription, there is a way for you to experiment, within a narrow range of options, for free. Otherwise, plan on paying.

As I continue working with Azure, I’ll let you know, in general numbers, what it’s costing me.