Apr 23 2014

Azure Automation

I introduced Azure Automation in a previous post. I’ve spent some more time exploring it.

There’s a set of documentation available, as I noted before. Unfortunately, having read through the full set, I have some criticisms to offer. The documentation walks through “Common runbook tasks,” more or less laying things out as I did (inadvertently, I assure you) in my previous blog post. The problem with that, as I found in that post, is that administering runbooks looks fairly straightforward from the screens, but you can’t do a darned thing with any of it until you have a runbook. Further, you can’t have a runbook until that thing has some code in it. And the documentation doesn’t cover the code itself. Instead, we just get a page with a list of samples, but no links to that code, nor an indication of where it might be. The scripts are located here. But man, that ought to be in the documentation. There’s also no clearly documented way to start development. It’s not strictly necessary, since the GUI leads you inevitably to the Draft screen we saw in my other post. But documentation is generally supposed to tell you what to do, where to look, and so on.

There is another set of documentation just on authoring runbooks. Lots and lots more meat there. I’ll go through it and follow up further.

Enough criticism, let’s play with some code.

I’m going to start with the “Hello World” code set. It’s supposed to be an introduction to how everything works. You can’t open it from the Azure Portal. Instead, you have to download it to your machine and then either upload it into a new runbook or copy and paste it into the Draft editor window. Presumably this is so you can do the coding locally using the PowerShell ISE or other tools. Documentation for the script is clear. Its description:

If you are brand new to Automation in Azure, you can use this runbook to explore testing and publishing capabilities.
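For reference, the sample boils down to a PowerShell workflow shaped roughly like this (my paraphrase from memory, not Microsoft’s exact script):

workflow Write-HelloWorld
{
    param
    (
        # The name to greet, supplied when you start or test the runbook
        [Parameter(Mandatory = $true)]
        [string]$Name
    )

    Write-Output "Hello $Name"
}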


Well, let’s just say the description is a little grandiose for what is, literally, a “Hello $Name” example. But it’ll get your feet wet. I took the script and pasted it into my “RunningScare” runbook. From there, I have the capacity to Save, Test, or Publish. Being a good paranoid type, I ran a test first. It popped up a window to input the parameter and then showed the output in the Output Pane (which I hadn’t actually noticed before):

OutputPane

I can’t tell you why it output multiple times, but it did from one test of the script. To see the rest of the functionality, scheduling, etc., I went ahead and hit Publish. That moved it from Draft to Published, where all I can see is a faded outline of the actual script and a Start button at the bottom of the screen. I went ahead and ran it from there. It actually takes a surprisingly long time for such a silly small script to complete. There’s even the ability to view the Job as it’s running:

JobSummary

So that works. Next up, scheduling. It’s pretty straightforward to walk through the GUI in the Portal (although now I want to see if I can programmatically control the Automation interface; more to explore). I’m going to try to run this script once an hour. So, I’ll give the schedule the name, unique to my account, Hourly (imagination knows no bounds). And then things get weird. I can only schedule this for a “One Time” run or “Daily.” No other options are available:

Schedule

There’s nothing in the core documentation about the details of scheduling. Checking the authoring doc (which has tons of stuff in it), there is a PowerShell command for directly controlling this (oh yes, much more to explore), Set-SmaSchedule. But it’s not clear whether the command supports anything beyond a day interval. I’ll have to test it out to see. The Portal recognized that parameters were necessary, so I put one in and scheduled my runbook. Worked great.
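Based on the authoring doc, I’d expect the call to look something like this. This is a sketch only; I haven’t run it yet, and the parameter names are taken from the SMA module documentation, so verify them before relying on it:

# Sketch: create or update a daily schedule through the SMA cmdlets (untested).
# The endpoint URL is a placeholder for your own SMA web service.
Set-SmaSchedule -Name "Hourly" `
    -ScheduleType "DailySchedule" `
    -DayInterval 1 `
    -StartTime (Get-Date).AddHours(1) `
    -ExpiryTime (Get-Date).AddYears(1) `
    -WebServiceEndpoint "https://MySmaServer"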

With that, I have my first runbook set up, tested, and scheduled. So far, this is looking really interesting.


Apr 16 2014

Microsoft Azure Automation

Automation

Microsoft just announced a new mechanism for managing your Azure resources, Automation. You can check out the documentation on it here. It’s a mechanism for creating runbooks from PowerShell workflows, which you can then combine, calling runbooks from within other runbooks and so on. Let’s check it out. I’m doing everything you see here without consulting the documentation. I want to see how easy it is to put this stuff together. First, because it’s still in preview, you have to sign up. Once you’re accepted into the program, you get a new icon in your Management Portal.

Next, you’ll have to create an automation account. That’s pretty straightforward. It’s just a name, your selected region, and the subscription you’re putting it under. There’s no immediate guidance on where, when, or whether the region matters that much:

AutomationAccount

When you get into the account, nothing is there. No default runbooks or anything. Now, I know you can go and get sample runbooks, and I suspect there will even be a clearinghouse through GitHub or somewhere for runbooks. But right now we’re flying by the seat of our pants, so let’s just create our own runbook. It’s Azure, so the New button is right there in the lower left. Clicking on it, we can do a quick create for a runbook, with the code to be completed later:

Runbook

Nice and easy, although this thing doesn’t actually do anything yet. So now I have a runbook. If I open it up, it shows a dashboard with several tabs: Jobs, Author, Schedule, and Configure. Clicking on Jobs, I don’t see anything interesting displayed. Obviously I should click on Author next, but where is the fun in that? So I click on Schedule. That brings up a message that “You must publish this runbook before you can add a schedule. Click AUTHOR to author and publish this runbook.” So I go ahead and click on Configure (yeah, I’m that guy). It’s not that interesting. So, let’s go back to where we should have started, Author. Clicking there, I get this:

Published

OK, fine. Let’s click on Draft. Which brings me to, well, I think it’s a fascinating screen:

Script

It’s a script. And it’s PowerShell it expects me to type in there, but I really don’t know what kind of commands I should be using, so… Here endeth the beginning of my exploration of Azure Automation. I need to actually go and read the docs. More to come.
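In hindsight, what the Draft pane expects is a PowerShell workflow named for the runbook. Even something as trivial as this would do; the runbook and child runbook names here are hypothetical:

# Sketch: the shape of a runbook. The workflow name matches the runbook name.
workflow MyFirstRunbook
{
    Write-Output "Proving the plumbing works"

    # Combining runbooks is just invoking another published runbook inline,
    # e.g.: MyChildRunbook -SomeParameter "value"
}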


Mar 25 2014

Save Money On Your Training Server

Save Money

You can spend less money. Some of us are lucky. We work for very large corporations that can easily set aside a spare desktop, or even space on a rack, for a server on which we can train. Others of us are not as lucky. We work for smaller organizations that have to be more careful with their money. Not only do we not get the extra machine to train on, but our laptops may be weak things that can’t run two or more VMs. In this case, how can you go about learning stuff? Spend your own money? Sure, it’s an option.

There are some very cheap servers available out there that won’t cost you even $1,000 to set up. And for pretty cheap you can buy some network attached storage to have your own little SAN-style setup. That’s very doable. Let’s break it down a little:

HP Proliant MicroServer G8: $549
Added Memory to 16GB: $209
240GB SSD: $129
Lenovo/Iomega 1TB of storage NAS: $878

We’ve just spent $1,765 for a decent little setup. So now you could run 3-5 VMs on this machine and you’re good to go. Of course, now you’ve got to maintain that system: patching, upgrades. What happens when it gets old? You’ve got to replace it. And what if you’re not using it? That was a lot of money spent, then.

Ah, but wait. Software. We need to get Windows Server licensed, and SQL Server. Let’s see:

Windows Server 2012 R2 Fundamentals: $501
SQL Server Developer Edition: $44

We’re now up to $2,310. But… oh yeah, the licenses for the servers. That doesn’t include VM licensing, so let’s buy… 4. That’s enough for one server and 3 VMs. That’s an additional $1,500, so now we’re up to $3,810. Cool though, right? That’s not much money and we’re off and running.

Here’s a suggestion: even if you have to spend your own money, how about Azure? Currently I’ve left three servers running on my account (not something I recommend, but I’ve been doing this as an experiment), plus the storage they use, plus the SQL Databases I have, and I’m racking up a bill of about $80/month. That’s $960 in a year. Which means in about 3.9 years, I’ll have spent as much as you just did on that server that’s sitting under your desk.

Yeah, I know. That box runs somewhat faster… except when I burn a little cash and bump my servers up to 8 cores and 56GB of RAM for a test, then turn them back down, or even turn them off and deallocate them. Because you’re only going to pay for what you use. So if you just throw the VMs away between tests, you’re saving tons of money, way above and beyond what that hunk of iron under your desk cost. You can even estimate exactly what things are going to cost using the pricing calculator Microsoft provides.

But did I say pay? Not quite. You see, I have an MSDN account. That includes Azure credit, anywhere from $50 to $150 per month. So, for $1,199/yr, I can get $50 a month of Azure credit. That means, just buying an MSDN account, it’ll take me about three years to equal what I spent on that box under the desk.

Oh, and that’s before we get to the electricity you paid.

Look, there’s a reason to buy iron. I believe in it. But, there are also reasons not to buy iron. Testing, training, personal use… maybe iron. Or, maybe it’s time to step into the 21st Century.

Mar 21 2014

PASS DBA Virtual Chapter Talk

I almost forgot to tell you about the Database Administration Virtual Chapter meeting next week, March 26th, 2014. I’ll be doing a talk about query tuning in Windows Azure SQL Database. It’s a talk I’ve given before (it was in the top 10 at the PASS Summit last year). Come find out why you’ll need to tune queries in WASD, the tools you get, and the glorious fact that you’ll actually be actively saving your business money by tuning queries! Click here now to register.

Dec 19 2013

How to Set Up Managed Backups in SQL Server 2014

Earlier this week I introduced the concept of Managed Backups (and caused less turmoil than I thought I would). Now I want to show you how it works. It’s really simple and quite well documented. Before you get to the insanely simple task of actually enabling Managed Backup, you will need to go through the prerequisites. First, and this should be obvious, but I’ll state it just in case: you need to set up an Azure storage account. That’s so straightforward that I’m not going to say more. Then, you have to set up encryption on your system. I used these commands to prep it:

CREATE MASTER KEY ENCRYPTION BY PASSWORD =
    '$qlserver2012queryperformancetuning';

CREATE CERTIFICATE CloudDojoCert
WITH SUBJECT = 'Backup Encryption Certificate';

Again, shouldn’t have to say this, but covering my bases, don’t lose that password. You might need it later. You’ll also need to create a credential (I had one created for Backup to URL already):

CREATE CREDENTIAL MyCredentialName
WITH IDENTITY = 'MyStorageAccountName',
SECRET = 'MyAccessKey';

The actual command to get the backups started is crazy simple:

EXEC smart_admin.sp_set_db_backup @database_name = 'MyDatabase',
    @retention_days = 10, @credential_name = 'CloudDojo',
    @encryption_algorithm = 'AES_128', @encryptor_type = 'Certificate',
    @encryptor_name = 'CloudDojoCert', @enable_backup = 1;
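If you want to double-check what you just configured, there’s a companion function that reports the settings back. A quick sketch (the smart_admin objects live in msdb, so run it there or use the three-part name):

-- Verify the Managed Backup configuration for a database
SELECT  *
FROM    msdb.smart_admin.fn_backup_db_config('MyDatabase');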

And that really is it. With @enable_backup set to 1, I’m up and running. I can repeat the script for multiple databases and I’m done. Backups are now automatic. There’s auditing built in. For example, I seem to get errors on RESTORE HEADERONLY occasionally:

-- Pull the Managed Backup diagnostic events for a date range
DECLARE @eventresult TABLE
    (
     event_type NVARCHAR(512),
     event NVARCHAR(512),
     timestamp DATETIME
    );

INSERT  INTO @eventresult
        EXEC smart_admin.sp_get_backup_diagnostics @begin_time = '12/1/2013',
            @end_time = '12/19/2013';

-- Just the administrative events, which is where the errors show up
SELECT  *
FROM    @eventresult
WHERE   event_type LIKE '%admin%';

errors

[FileRetentionAdminXevent] Type = SqlError, Code = 3292, Stage = Undefined, Message = A failure occurred while attempting to execute Backup or Restore with a URL device specified. Consult the Windows Event Log for details.
RESTORE HEADERONLY is terminating abnormally., Additional Info = N/A.

Browsing through the files (using Cerebrata because it makes my life so much easier), I can see that it’s an intermittent problem, not a consistent one. And, since it’s just the verification of the backup, I’m not sweating it that much:

files

It’s the kind of intermittent error you’re likely to get when working with cloud-based technology. Code accordingly.

But that’s it. It’s so easy to get set up, and you can see that I’ve been running backups for days. Managed Backups are easy to set up and easy to turn on. You can use RESTORE FROM URL to pull these backups back into your database just as easily as I put them out there. There’s really nothing to it.

Again, this is not for everyone. It’s just another tool in the toolbox. Yet another way of getting things done. Explore it. Let me know what you think.

Dec 17 2013

Introducing Managed Backups in SQL Server 2014

Some of the new functionality of 2014 is straightforward, non-controversial, and easily welcomed by the community. Think updatable columnstore indexes. Some of the new functionality is going to raise an eyebrow or three (most of the time not on one person, but you know the #sqlfamily, we have some interesting mutations represented). Think… managed backups.

Now, why on earth would a process that takes backups for you, including log backups, does it automatically per a schedule and/or when data has changed sufficiently, stores them offsite for protection, and is easy to set up and maintain cause so much controversy? Hey, I’m wrong, it won’t, move along… Or maybe I’m right, and this is going to raise more than eyebrows.

We’re talking about surrendering control over your backup processes and letting Microsoft decide if and when you need a full backup or a log backup. We’re also talking about surrendering control over your storage and letting Microsoft manage your files in Azure Blob Storage. OMG! I said Azure. Several people vomited, just a little, into their mouths. Yep, Microsoft is going to take away your backups and your ability to control them. It will then store them in a way you can’t access and Microsoft will not let you have any control over them ever again…

Oh good gosh, really? You believe that? I do have a bridge for sale that I can let you have pretty cheap.

Let’s talk about what managed backup really is, who it might be for, where it could work extremely well and where it might not work well at all.

Managed backup is a service running on your system that is enabled, if you so desire, through T-SQL commands. That’s right, it’s not even turned on if you don’t want it to be. The service is pretty simple. When you enable it, you define a few parameters: which database or databases, and a retention period. That’s pretty much it. From there it runs a first full backup on each of the databases you’ve put under management (no system databases, and the backup size has to be less than 1TB, a limitation of Azure Blob Storage). Then it runs log backups at least every two hours, or after 5MB or more of transactions. Heck, if your log backup chain is broken, it’ll reset and start over. It’s smart. It’ll also run a full backup after 1GB of data growth, or once a week. And again, if you break the log backup chain, it’ll take a full and keep going. All the files go to Azure Blob Storage, where you can use backup/restore from a URL to retrieve them. The backup format is, wait for it, SQL Server backups. Because they’re SQL Server backups, you can restore them to a SQL Server machine. That’s it. It’s very simple and simplistic.
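To give a flavor of just how few parameters there are, there’s also an instance-level default, so that databases can be picked up automatically. A sketch, with the parameter names as I read them in Books Online (verify against your build, and run it in msdb):

-- Sketch: instance-wide Managed Backup defaults
EXEC smart_admin.sp_set_instance_backup
    @enable_backup = 1,
    @retention_days = 10,
    @credential_name = 'MyCredentialName';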

Who is this for? Well, if you’re one of those people with the funny taste in your mouth right now, probably not you (although you know I think you should start learning it). If you’re managing one of the larger database systems, over 1TB in size, you’re out. In fact, if you have a sophisticated backup routine in place now, combining differentials and file/filegroup backups, you’re out as well. This is meant for those people who don’t want to, or can’t, take all the time and trouble to set up a sophisticated backup routine. It’s for people who are already running their servers on Azure VMs. It’s for small shops that don’t have the money to set up multiple off-site storage locations for their backups. It’s for consultants who want a really simple and reliable method for getting clients protected backups with off-site storage in a minimal number of steps. In short, there are a lot of people who are going to find this pretty handy. Again, not everyone. But that’s OK; no feature is for everyone. That mom & pop database that’s about 8GB in size and running on a 32-bit server just isn’t going to benefit from in-memory tables either. Does that negate that functionality? Nope. Nor should the fact that some large enterprise clients will never use this.

If you’ve never set up log backups and aren’t sure how often your system should have them, this is going to be pretty handy, since it does it for you. If you don’t have access to a DBA… well, you may still need one, because setup is T-SQL only currently, no GUI. That might not work for everyone, but I’m sure someone will write an app or a script for it soon. If you’re already running your databases on Azure VMs, this thing is fast. Really fast. In fact, I think it works quicker than my local disks (the spinning ones, not the SSDs). And the storage is right there in the cloud, so you won’t have to pay for transfer down to your machine (although plucking a weekly backup out of the blob storage might be a good idea).

Now, there are a few things I don’t like about it. First, I’d like the option to back up locally first and then copy to the cloud (that’s how Red Gate implemented their cloud backup solution, and it works great). And I’d like to be able to tweak the knobs on when the backups occur. I’m also still not convinced I like how the monitoring is done, although since the schedule is maintained by SQL Agent, you can set up alerts, secondary monitors, all that fun stuff, on your own. While some sophisticated DBA types will absolutely use this, I think the prime audience is non-DBAs. As such, it needs a clean and extremely simple GUI; it would get far more adoption that way. Heck, I’d say it ought to be a checkbox in the SQL Server install process, one you have to uncheck to avoid setting it up when you install the instance. That way, Next-Next-Next deployments, which probably never even consider recovery models or any of the rest of that fun stuff, would get backups as part of their installation.

Overall though, I’m excited about the added functionality. I’m especially excited because this functionality is added. It’s not replacing anything. I can still do all my backups my way, completely and utterly. Or, I have another automation tool that I can put to work. I’ve been running this on a test platform for over a month with no issues (while running other tests for other new functionality; I’m finally digging into 2014). I’ll write another post on how to configure this, although I don’t really need to, because the Books Online are pretty clear. But I wanted to get the “they’re taking away my control and putting it all on Azure” freak-out done before I posted the technical part of the info. Proceed with the meltdown.

Dec 11 2013

SQL Server 2014 Backup to URL

I’m absolutely in love with the concept of being able to create a backup directly to a protected, off-site location. Yeah, you can spend all sorts of money on terribly wonderful technology to enable that within your enterprise. And if you have that kind of money, great. But, what if you’re like most everyone else and you just want a little more protection without mortgaging the house? Let’s take a look at one possibility, backup to URL.

There have been ways to back up to hosted storage, whether DropBox, AWS, or Azure blob storage, for quite a while. But every method I tried out involved setting up some type of drive on your system. As soon as you had your K:\ drive mapped out to AWS or whatever, you could run a SQL Server backup directly to that location. Done. But I’ve long wanted to just do it straight: back up right to that location, no additional spoofing required. As long as you’re willing to keep your stuff within Azure blob storage, it’s now possible. Further, with SQL Server 2014 you can do it directly from T-SQL, no additional application required (there is an app for 2005, the 2008s, and 2012). The how is easy. Let’s talk about the why for a minute.

Backups are not a technological problem. Oh, I know they can both literally and figuratively keep us nerds up at night, but the fundamental definition of a backup is something that the business must provide. They need to tell us how much data to risk and how long we can spend on recovery, through things such as the Recovery Point Objective and the Recovery Time Objective. I don’t want to get too deep in the weeds defining these terms. What I do want to point out is that your disk is at risk. No, not the one that’s storing your database (although it’s at risk too). I’m talking about the one storing your backups. That thing is just waiting for the most opportune moment to die. And by opportune I mean, of course, when it will be the hugest pain for you. Plus, are you in a flood zone? A wind zone? Earthquakes? Fire? Oh yeah, the gods and the universe well and truly hate your orderly little backup disk. So, it’s a good idea to have a second location for those backups. Back in the day, we paid a company to take our tapes to off-site storage. Now, you can just go to a hosted storage site like, oh, I don’t know, Azure blob storage.

Here’s how it works. You have to have already set up an Azure account. You should also create a container to keep your backups in (if you need help dealing with blob storage, check out Cerebrata). While you’re there, make sure you get the URL for your storage location and the Access Key for your storage. Now, let’s go back to T-SQL. First, we need to create a CREDENTIAL. This gets us onto our URL without having to paste sensitive information into our backup processes:

CREATE CREDENTIAL MyCredentialName
WITH IDENTITY = 'MyStorageAccountName',
SECRET = 'MyAccessKey';

With that done, the rest is easy:

BACKUP DATABASE MyNewDB
TO URL = N'http://myserver.blob.core.windows.net/scarybu/MyNewDB.bak'
WITH CREDENTIAL = N'MyCredentialName',
NAME = N'MyNewDB-Full Database Backup', STATS = 10;

The syntax is pretty standard. You’re going to BACKUP a DATABASE named MyNewDB to your URL. In the URL, make sure you supply not just the URL to your blob storage account, but the container and the backup file name. My container is ‘scarybu’ and the file name is ‘MyNewDB.bak’. In the WITH clause you’ll specify the CREDENTIAL that you created earlier.

Yes. It’s that easy. Restore is the same thing only in reverse.
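For completeness, here’s what the reverse looks like, as a sketch using the same names from above:

-- Sketch: restore straight from blob storage using the same credential
RESTORE DATABASE MyNewDB
FROM URL = N'http://myserver.blob.core.windows.net/scarybu/MyNewDB.bak'
WITH CREDENTIAL = N'MyCredentialName',
REPLACE, STATS = 10;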

Now you can protect your backups by ensuring that you also put a copy in an offsite location, and you can do it directly from within your area of expertise and control: SQL Server and Management Studio. For more details, refer back to the Books Online.

Nov 19 2013

Windows Azure Views

It’s kind of fun to see Azure development artifacts on display. I’ve posted about them before, a couple of times. I’m finally getting systematized about the whole thing, just so I can see stuff as it changes rather than discovering it by accident or being told about it by someone else. Here’s a little query I’m running to see when system views were last modified:

SELECT  av.name, av.create_date, av.modify_date
FROM    sys.all_views AS av
ORDER BY av.modify_date DESC;

The most recent stack of changes is here:

Views

I’ll keep an eye on them to see what I can spot about interesting new functionality.

I also compared the listing of all views in Azure to those on a SQL Server 2012 instance and came up with a list of differences. These are the system views that are only in Azure. The names are intriguing. Will we be getting extended events based on the sys.dm_xe_* views? Since the create and update dates on these are 1/1/2009, I’m of the opinion that development hasn’t moved forward a whole lot. What about the slo_* views? Service Level Objective? They have create dates of 8/20/2013 and modify dates of 9/19/2013, so they’re new. Nothing to add other than the interest.
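If you want to reproduce the comparison, capturing the two name lists and diffing them with EXCEPT is one way to do it. A sketch; the temp table names are mine:

-- Assumes sys.all_views.name from each instance was loaded
-- into #AzureViews and #Sql2012Views beforehand
SELECT name FROM #AzureViews
EXCEPT
SELECT name FROM #Sql2012Views
ORDER BY name;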

Here’s the list:

sys.audits
database_firewall_rules
database_query_store_options
dm_continuous_copy_status
dm_db_objects_impacted_on_version_change
dm_db_wait_stats
dm_federation_operation_error_members
dm_federation_operation_errors
dm_federation_operation_members
dm_federation_operations
dm_xe_database_session_event_actions
dm_xe_database_session_events
dm_xe_database_session_object_columns
dm_xe_database_session_targets
dm_xe_database_sessions
event_session_actions
event_session_events
event_session_fields
event_session_targets
event_sessions
federated_table_columns
federation_distribution_history
federation_distributions
federation_history
federation_member_distribution_history
federation_member_distributions
federation_member_history
federation_members
federations
query_context_settings
query_store_plan
query_store_query
query_store_query_text
query_store_runtime_stats
query_store_runtime_stats_interval
resource_stats
resource_usage
server_quotas
slo_assignment_history
slo_database_objectives
slo_dimension_settings
slo_objective_setting_selections
slo_service_dimensions
slo_service_objectives
sysdac_instances


Nov 05 2013

More Azure Goodies

Microsoft keeps sneaking little things under the door for Windows Azure SQL Database. This time it’s a couple of new views, a system view and a Dynamic Management View (DMV): sys.resource_stats and sys.dm_operation_stats.
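Both get queried from the logical master database of your server. Something as simple as this is enough to see what you get (a sketch; the columns are discussed below):

-- Run in the master database of the WASD server
SELECT TOP (10) *
FROM sys.resource_stats
ORDER BY start_time DESC;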

But I also learned another fun fact: not all this stuff rolls out at the same speed. For example, if I run sys.resource_stats on a database on a server located in the North Central US data center, the output looks like this:

view1_results

But if I run the same query against the same view with a database in a data center in East Asia (I experiment with where I put things), it looks like this (click on it to expand it; you’ll probably want to open it in a second tab or window so you can refer to it while you read):

view2_results

The first set of query results looks a lot like the data you’ll get from sys.resource_usage. Almost identical. But that second result set… that’s different. Let’s break down what we can see and what it implies.

  • Start and end times are pretty clearly in five-minute increments.
  • And we’re seeing this for each database in the system.
  • The sku refers to whether these are Business or Web databases.
  • The usage_in_seconds column is pretty interesting. It must mean active connections coming into the system and the amount of time they’ve spent doing whatever. That’s nice to know.
  • Storage… whatever.
  • avg_cpu_cores_used: this is a server on which I’ve been testing Premium, so I could see greater than one for these values, but most of them are clearly less than 1. It looks like a fractional measure of cores in use, because it’s not simply showing 1 or 2 or something, but a decimal, and one that varies between time frames.
  • avg_physical_read_iops: OK, I’m excited. This will give you a very good indication of the load you’re placing on the system, and you can chart it out over time. You can tell that I wasn’t running anything on these servers.
  • avg_physical_write_iops: more excitement.
  • active_memory_used_kb: that’s awesome. We’ll be able to see not only the IO activity against the disks on these databases, but how you’re using memory as well. These are real monitoring tools now for observing system behavior over time.
  • active_session_count: again, very useful, mainly in conjunction with the other counters, to understand the correlation between the number of sessions on the system and the level of activity.
  • active_worker_count: not sure what this means unless we’re talking threads? Workers versus sessions must refer to the fact that, since this is a Premium database (although not currently enabled), I can see multi-threading through the multiple available CPUs. Cool.

Now, the DMV will either return no data or an error if you attempt to run it in a data center that hasn’t been upgraded yet. Here’s the output from sys.dm_operation_stats from my database in East Asia:

view3_result

To tell the truth, I’m not that excited by this one, although maybe if there were different functions on display it might be more exciting. Let’s quickly break it down a little.

  • session_activity_id: ?
  • resource_type: I’ve got nothing here either. Not sure what that number suggests, except for the next column.
  • resource_type_desc: oh, it’s a database.
  • major_resource_id: AKA the database name… I love working with Microsoft, but their little… tic/penchant/obsession with renaming EVERYTHING on a regular basis can be quite frustrating.
  • minor_resource_id: the GUID used to internally identify the database, maybe? Interesting. Possibly useful. I wonder if it changes during a move. I wish there were a way to trigger a move automatically.
  • operation: interesting.
  • percent_complete: ooooh, I like this. Good to know where to go to possibly monitor certain operations. Although, I tested it, and it doesn’t respond to normal insert/update/delete or data definition language (DDL) changes. Still…
  • error_code: cool.
  • error_description: also cool.
  • error_severity: once more, cool. We’re talking about the ability to monitor this DMV as a means of tracking the system for errors (see the sketch after this list).
  • start_time: excellent. You can see when a given operation (any monitored by this thing) started.
  • last_modified_time: not only does it show the thing it names but, when combined with the preceding column, you can tell how long something ran. Another piece of monitoring/performance data.
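Given those error and timing columns, a polling query almost writes itself. A sketch only; I’m taking the view and column names exactly as they appear above, so verify them against your own server:

-- Sketch: check for errored operations (run in the master database)
SELECT major_resource_id,
       operation,
       error_code,
       error_description,
       error_severity,
       start_time
FROM sys.dm_operation_stats
WHERE error_code <> 0
ORDER BY start_time DESC;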

We now have more information, at least on those servers that support it, for tracking system usage and behavior within WASD. This offering is constantly growing and expanding.

Oct 15 2013

Monitoring Structure Changes

Most everyone I know works with environments that are carefully controlled and structured. All changes go through rigorous testing and full documentation. Absolutely nothing happens in a production environment that hasn’t been thought through, discussed, planned for and written down. But, there are a few, a very few, who work in a slightly different kind of environment that they refer to as “the real world.” In this “real world” changes to a production environment can happen without approval, planning or testing. Scary, right?

There’s good news for these poor benighted souls. Red Gate is testing out a new piece of software called SQL Lighthouse. It’s meant to monitor your systems for changes, so that you know what happened, and when, in case you don’t have a good source control system in place with a well-managed deployment process. Right now it’s only running against Windows Azure SQL Databases for testing, but you can try it out. It’s pretty easy. Go to SqlLighthouse.com and create an account. You can then add a database:

LighthouseAddDB


Everything should be pretty straightforward on this screen. The server name is the full name: xxx.database.windows.net. I recommend using the Test button after you put in your user name and password.

Representing a “real world” crazy DBA, I’m going to introduce a clearly unauthorized change to my database:

CREATE TABLE CrazyDBA
    (
      CrazyDBAID INT NOT NULL
                     IDENTITY(1, 1) ,
      CrazyDescription NVARCHAR(200) ,
      CrazyValue DECIMAL(19, 2) ,
      CrazyMoney MONEY ,
      CreateDate DATETIME
    );

A heap table, no primary key defined, poor data type choices like MONEY in use. No one ever does stuff like this. But, if we switch back to SQL Lighthouse after making the change, my monitored database looks like this:

LighthouseDBChanged


Which is, of course, clickable. Clicking on it, you’ll see a listing of changes that have happened to your database:

lighthouseHistory


Someone has been going nuts inside my database, because I’ve got a long list of changes that have all occurred today. You can see the breadcrumb trail forming at the top, which will allow us to navigate back to previous windows. We can configure alerts for this database from here (more on that in a moment). Each of the changes is clickable as well. Opening one up, I see this:

LightHouseDetail

And there’s my change from earlier when I created the table.

In the meantime, I might notice that I have new mail in my inbox. By default, I’ll get emails automatically with the changes that have occurred:

LighthouseEmail


And as it says, I can click on “More details” to go to the web site and see the details of the changes. Now, getting an alert for every change or set of changes might be a little chatty. So you can configure the alerts for the database:

LighthouseAlerts

This can help to reduce the noise if you’re dealing with a lot of changes occurring over time.

That’s it. It’s nice, clean, and simple. We’re just getting started with it, figuring out what it can do and where we can go with it. I know that most of you don’t have to deal with this so-called “real world,” but for those who do, this might prove extremely useful.