Jun 18 2014

The Curse of Relational Databases

Let’s face it, none of Information Technology is easy. Oh yeah, there are those few geniuses who have an absolute grasp over some small aspect of the stack, or those other geniuses who have a very shallow knowledge level but understand the entire stack. But the stack itself, it’s vast, deep, wide, utterly unfathomable. So what do you do? You cheat. You take shortcuts. You ignore things you don’t like/understand/appreciate. And then there are all the things you just don’t know. Or you cheat another way: you get experts who have drilled down on a particular technology so that they’ll provide you with the knowledge you need. Ah, but then you have to listen to them, and what happens when your local genius (deep or wide) doesn’t agree with your hired gun? Do you override your local person for the hired gun (I’ve seen this happen a ton, where consultants were favored over in-house staff), or do you go with your local person (I’ve also seen this, where the local person who has solved all the problems before may be in over their head now, but they’ve always been right and are therefore trusted)?

I just read (and I mean I finished about 90 seconds ago) this really interesting article on The Curse of the Excluded Middle. I won’t even pretend to you that I understood all of it. But I did get a pretty fundamental concept out of it: this programming stuff is very hard, we’re going to take shortcuts to get through it, and those shortcuts come with a cost. The argument being put forward isn’t to somehow find a magic solution. It’s simply to acknowledge that there really is a cost, maybe even a cost you don’t completely understand. Further, that cost, and especially your lack of understanding of it, will come up and bite you on the behind.

Which brings me around, finally, to developers and databases. Relational databases are a pain in the bottom. They really are. Speaking just of SQL Server (where I spend most of my time), you have to work with a ridiculous, archaic language, T-SQL, in order to manipulate the data. And the rules of normalization, yeah, we can all learn them, but applying them makes every single aspect of coding harder. Plus the language lets us do things that it then interprets in horrendous fashion. Oh, and don’t forget all the obscure and weird maintenance and configuration that you have to go through to keep the silly servers online and functioning correctly. Then there’s the whole object/relational impedance mismatch thing to chew on our behinds even further. In short, I completely understand why developers would like to burn the entire edifice to the ground (come see one of my presentations where I talk about the “data persistence layer” that a particular dev team wanted to build). And all that is just the technical side of this mess. I’m not even going to address the personnel issues that come with the different focuses of responsibility between a developer and a DBA.

So when the developers bring in an Object Relational Mapping (ORM) tool, or they explicitly attempt to lash out at DBAs by going after a NoSQL database (and no, despite the new twist, it means NO F’ING ESSQUEELL, not “Not Only SQL” as many are saying now), I understand why they would do this. It short-circuits all the issues. We get around the problem. We speed development by eliminating that thing that we didn’t completely understand and certainly didn’t like and… Hang on… Isn’t there a darn good chance we’re digging a hole here?

Yes.

Don’t get me wrong. I see the need for unstructured data stores, key/value pairs, speed over consistency, speed over durability, the need to move fast because your competition is sure as heck trying to move fast. So NoSQL databases serve an absolutely valuable purpose and, used correctly, fix unique and difficult problems. A well-structured ORM, properly applied, absolutely saves development time. But there’s this nasty little surprise hidden behind the need, the sometimes seemingly desperate need, to completely get rid of relational storage. That surprise? Relational storage actually works, and works well, when applied to the appropriate problems in the appropriate ways. It provides a means of collecting information fairly quickly (although not as fast as many NoSQL databases), storing it efficiently (although maybe not as efficiently as some object databases), and returning it to the users on demand (and here relational really does stand out). And it does it all in one place, not one for collection, another for reporting, or some of the other strange perambulations I’ve seen people go through with some NoSQL implementations (again, not all, some are awesome, but many are horrific).

About twice a year I get to read a “death of the DBA” article that points to a technology or process or tool that’s going to eliminate the need for those nasty, ugly, difficult relational databases and the freaks who try to keep them online and available. And about twice a year I see lists of the most needed workers in IT, and guess what’s almost always there? Yep, DBAs. The fact is, relational storage does work. And instead of trying to eliminate it, or the DBA, or the code necessary to interface with it, embrace the stuff and learn to use it, or hire someone who actually knows how to use it and then listen to them. I’ve just seen too many places where the push to eliminate relational storage and DBAs is driven by one of two things: I have a shiny new hammer and everything is a nail, or, databases and DBAs are a pain because they make us do stuff we don’t want to do, so let’s bypass them. Those are almost precisely the wrong reasons to go about moving to a NoSQL implementation, because you’re going to be ignoring stuff, as The Curse of the Excluded Middle talks about (and I know, it didn’t talk about databases, I’m extrapolating, hang with me here), and the things you ignore, or worse yet, don’t know about, are going to hurt, and may hurt badly.

Jun 17 2014

Natively Compiled Procedures and Bad Execution Plans

I’ve been exploring how natively compiled procedures are portrayed within execution plans. There have been two previous posts on the topic, the first discussing the differences in the first operator, the second discussing the differences everywhere else. Now, I’m really interested in generating bad execution plans. But, the interesting thing is, I wasn’t able to; or rather, I couldn’t see any evidence of plans changing based on silly things I did to my queries and data. To start with, here’s a procedure:

CREATE PROC [dbo].[AddressDetails] @City NVARCHAR(30)
    WITH NATIVE_COMPILATION,
         SCHEMABINDING,
         EXECUTE AS OWNER
AS
    BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    dbo.Address AS a
                JOIN dbo.StateProvince AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN dbo.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = @City;
    END
GO

And this is a nearly identical procedure, but with some stupid stuff put in:

CREATE PROC [dbo].[BadAddressDetails] @City VARCHAR(30)
    WITH NATIVE_COMPILATION,
         SCHEMABINDING,
         EXECUTE AS OWNER
AS
    BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    dbo.Address AS a
                JOIN dbo.StateProvince AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN dbo.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = @City;
    END
GO

I’ve changed the parameter for the primary filter to a VARCHAR when the column is NVARCHAR. This difference is likely to lead to differences in the execution plan, although not necessarily. If I load my tables up and update my statistics, then create the procedures and run them both with the same parameter values, I should detect any differences, right? Here’s the resulting execution plan:

[Screenshot: ActualPlan]

It’s an identical plan for both queries. In fact, the only difference in the plan that I can find is a CAST in the Index Seek operator for the BadAddressDetails procedure, as expected. But that CAST didn’t lead to any other difference in the plan. However, execution is something else entirely. And this is where things get a little strange. There are two ways to execute a procedure:

EXEC dbo.AddressDetails @City = 'London';
EXEC dbo.AddressDetails 'London';

Interestingly enough, the first one is considered to be the slow way of passing a parameter. The second one is the preferred mechanism for natively compiled procedures. Now, if I execute these two versions of calling the procedure, I actually see different performance. The first call, the slow one, ran somewhere in the neighborhood of 342µs. The other ran in about 255µs. Granted, we’re only talking about ~100µs, but we’re also talking about a 25% speed increase, and that’s HUGE! But that’s not the weird bit. The weird bit was that when I ran the good and bad queries together, the slow call on the bad query was consistently faster than the slow call on the good query. The fast call reversed that trend. And, speaking of which, the bad query, the one with the CAST, ran in about 356µs, or ~25% slower.
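
If you want to reproduce the comparison, here’s a minimal sketch of a timing harness, assuming the procedures above exist and that ‘London’ is a value present in the City column. It just loops each call style and averages the elapsed time; adjust the loop count (and discard result sets in SSMS) to taste.

-- Rough timing harness: loop each call style and average the elapsed time.
-- Run with "Discard results after execution" in SSMS, or drop the loop count.
DECLARE @i INT = 0,
        @loops INT = 1000,
        @start DATETIME2(7),
        @named BIGINT,
        @ordinal BIGINT;

SET @start = SYSDATETIME();
WHILE @i < @loops
BEGIN
    EXEC dbo.AddressDetails @City = N'London'; -- the "slow" named-parameter call
    SET @i += 1;
END;
SET @named = DATEDIFF(MICROSECOND, @start, SYSDATETIME());

SET @i = 0;
SET @start = SYSDATETIME();
WHILE @i < @loops
BEGIN
    EXEC dbo.AddressDetails N'London'; -- the "fast" ordinal call
    SET @i += 1;
END;
SET @ordinal = DATEDIFF(MICROSECOND, @start, SYSDATETIME());

SELECT @named / @loops AS avg_named_call_microseconds,
       @ordinal / @loops AS avg_ordinal_call_microseconds;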

The execution plan really didn’t show any indication that this would be slower, which made me do the next thing I did. I updated my Address table so that all the values were equal to ‘London.’ Then, because statistics are not maintained on in-memory tables automatically, I updated the statistics:

UPDATE STATISTICS dbo.Address WITH FULLSCAN, NORECOMPUTE;

With the statistics up to date, I dropped and recreated the procedure (there is no recompile with natively compiled procedures, something to keep in mind… maybe, more in a second). So now the selectivity on the index was 1. The most likely outcome: an index scan. Guess what happened? Nothing. The execution plan was the same. I then went nuts. I converted all my tables so that a horrific mishmash of data would be brought back instead of clean data sets, I put data conversions in, and… nothing. Index Seeks and Nested Loops joins. Weirdness.
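
For reference, the skew-the-data and drop-and-recreate steps look something like this. It’s just a sketch of what’s described above, using the table and procedure names from the earlier scripts:

-- Skew the data so that every row matches the filter value.
UPDATE dbo.Address
SET City = N'London';

-- Statistics on in-memory tables are not maintained automatically.
UPDATE STATISTICS dbo.Address WITH FULLSCAN, NORECOMPUTE;

-- There's no recompile for a natively compiled procedure, so drop it
-- and recreate it so a new plan gets compiled into the DLL.
DROP PROCEDURE dbo.AddressDetails;
GO
-- ...then rerun the CREATE PROC dbo.AddressDetails script from the top of this post.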

I’m actually unsure why this is happening. I’m going to do more experimenting with it to try to figure out what’s up. But, that lack of recompile, maybe it doesn’t matter if, regardless of data distribution, you’re going to get the same plan anyway. I’m really not positive that looking at the execution plan for natively compiled procedures does much of anything right now. However, these tests were a little bit subtle. I’ll load up more data, get a more complex query and then really mess around with the code to see what happens. I’ll post more of my experiments soon.

I promise not to experiment on you though when I’m teaching my all day query tuning seminars. There are a bunch coming up, so if you’re interested in learning more, here’s where to go.  Just a couple of days left before Louisville and I’m not sure if there’s room or not, but it’s happening on the 20th of June. Go here to register. Albany will be on July 25th, but we’re almost full there as well. You can register here. SQL Connections is a pretty cool event that takes place in September in Las Vegas. In addition to regular sessions I’ll be presenting an all-day session on query tuning on the Friday of the event. Go here to register for this great event. In Belgium in October, I’ll be doing an all day session on execution plans at SQL Server Days. Go here to register for this event. Let’s get together and talk.


Jun 10 2014

Differences In Native Compiled Procedures Execution Plans

All the wonderful functionality that in-memory tables and natively compiled procedures provide in SQL Server 2014 is pretty cool. But changes to the core of the engine result in changes to things that we may have developed a level of comfort with. In my post last week I pointed out that you can’t see an actual execution plan for natively compiled procedures. There are more changes than just the type of execution plan available. There are also changes to the information available within the plans themselves. For example, I have a couple of stored procedures, one running in AdventureWorks2012 and one in an in-memory enabled database with a few copies of the AdventureWorks tables:

--natively compiled
CREATE PROC dbo.AddressDetails @City NVARCHAR(30)
    WITH NATIVE_COMPILATION,
         SCHEMABINDING,
         EXECUTE AS OWNER
AS
    BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    dbo.Address AS a
                JOIN dbo.StateProvince AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN dbo.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = @City;
    END
GO

--standard
CREATE PROC dbo.AddressDetails @City NVARCHAR(30)
AS
        SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    Person.Address  AS a
                JOIN Person.StateProvince  AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN Person.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = @City;
GO

The execution plans are obviously a little bit different, one going against in-memory tables and indexes and the other going against standard ones. However, that’s not the point here. This is the point. One of the first things I always check when looking at a new execution plan is the first operator, the SELECT/INSERT/UPDATE/DELETE operator. Here it is from the estimated plan of the query against the standard tables:

[Screenshot: StandardSelectProperties]

All the juicy goodness of the details is on display including the Optimization Level and Reason for Early Termination, row estimates, etc. It’s a great overview of how the plan was put together by the optimizer, some of the choices made, useful information such as the parameters used, etc. It’s great. Here’s the same thing for the natively compiled procedure:

[Screenshot: NativeSelectProperties]

Uhm… where are all my wonderful details? I mean, honestly, everything is gone. All of it. Further, what’s left, I’m pretty sure, is nothing but a lie. Zero cost? No, but obviously not from the standard optimizer estimates either, so, effectively zero. I’m pretty sure Physical Operation is just there as an oversight. In short, this is a different game. Yes, you will still need to evaluate execution plans for natively compiled procedures, but we’re talking a whole different approach now. I mean, great googly moogly, there are no parameter compile-time values. Are those just ignored now? Are the days of bad parameter sniffing behind us, or are the days of good parameter sniffing gone forever? And it’s not just the SELECT operator. Here are the properties for a Nested Loops operator. First the standard set:

[Screenshot: StandardNestedLoops]

And, the natively compiled procedure:

[Screenshot: NativeNestedLoops]

Now, except for the fact that everything is FREE, the differences here are easier to explain. Execution Mode is applicable to columnstore indexes, and none of those are available yet in in-memory storage, so I’m not shocked to see that property removed. Same for the others. But this complete lack of costing is going to make using execution plans, always a problematic proposition with only estimated values available for so many things, even harder. It might even make it so that all you really need to do is look at the graphical plan. Drilling down on the properties, until meaningful data starts to appear there, might be a waste of time for natively compiled procedures.
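
If you want to compare the plans outside of SSMS’s graphical display, something like the following pulls the cached plan for the standard procedure. It’s only a sketch, assuming the procedure names from the scripts above, and don’t expect the natively compiled version to show up this way; as noted in last week’s post, the estimated plan is about all you get for those.

-- Pull the cached plan and execution counts for the standard (interpreted) procedure.
-- The natively compiled procedure won't show up here.
SELECT  OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
        ps.execution_count,
        qp.query_plan
FROM    sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE   ps.object_id = OBJECT_ID(N'dbo.AddressDetails');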

I’ll keep working on these. Next up, can you get a “bad” execution plan with a natively compiled procedure? We’ll find out.

Just a reminder that I’m taking this show on the road. I’m doing a number of all-day seminars on query tuning at various events in multiple countries. Louisville has almost filled the room we have available on the 20th of June. Go here to register. But don’t wait. I’m also going to be in Albany on July 25th, but we’re almost full there as well. You can register here. If you were thinking about attending SQL Connections in September in Las Vegas, in addition to regular sessions I’ll be doing a day on query tuning. Go here to register for this great event. In Belgium in October, I’ll be doing an all-day session on execution plans at SQL Server Days. Go here to register for this event.


Jun 06 2014

Speaker of the Month, June 2014

It’s not like I can’t find plenty of great presentations here in the US, but while I was over in Belgium at Techorama I checked out several of the presenters there. They were awesome. This was the first ever Techorama. It’s a developer-focused event, but there was stuff there for data-centric people too. They had a great international collection of speakers from all over. The venue was a movie theater, which was a lot of fun to present in, although maybe a little too comfy for watching presentations (I fell asleep in one; I sure hope I didn’t snore). It was such a great event that I decided to pick my speaker of the month from there. I saw a bunch of very good presentations (even the one I fell asleep in was good, the parts I saw), but one stood out for me, both because of the topic and the presentation of the topic. I’m giving my speaker of the month award to Tiago Pascoal (b|t) of Portugal for his presentation at Techorama, “My Code is Ready, Now What.”

Tiago is a Microsoft MVP for Application Lifecycle Management (ALM) from Portugal, or, as he himself put it, “on the ass of Europe.” Pardon the language, but that was funny. I loved watching Tiago present. He was really funny, which was excellent because discussing ALM can be pretty dry. He said several times as he was presenting stuff, “I should get a monkey to do this for me.” It was great. I loved the way he discussed things, stating matter-of-fact things like, regarding code in source control, “It’s 2014, everyone is doing this now,” and his ease and manner of just assuming that, of course, the database is treated the same way. I liked the way he talked about provisioning, comparing it to pets vs. cattle. Do you want to have to pamper and groom a server to get it online, or is it just one more cow in the herd? Great stuff. I also loved how free and easy he was with typing. He demoed in a raw, live manner and got it all to work too. His slides had great pictures that both made his point and were entertaining. I really loved it. His demonstration of Octopus was so smooth I’m actually pretty jealous.

I don’t have much to offer Tiago for improvements. I loved his slides, but the look and feel within them wasn’t completely consistent. Minor nit, but I have to say something. I loved how he typed through the demos instead of having them canned (which I do), but it did sometimes slow down the flow, just a little. Again, minor nit. The presentation was just that good.

I’ve no idea where he’s presenting next. He is on Lanyrd (yay), but doesn’t have anything upcoming. I can heartily recommend going to see him speak.

Jun 05 2014

Natively Compiled Procedures and Execution Plans

The combination of in-memory tables and natively compiled procedures in SQL Server 2014 makes for some seriously screaming fast performance. Add in all the cool functionality around optimistic locking, hash indexes and all the rest, and we’re talking about a fundamental shift in behavior. But… Ah, you knew that was coming. But, you can still write bad T-SQL, or your statistics can get out of date, or you can choose the wrong index, or any of the other standard problems can come up and negatively impact all those lovely performance enhancements. Then what? Well, same as before: take a look at the execution plan to understand how the optimizer has resolved your queries. But… Yeah, another one. But, things are a little different with natively compiled procedures and execution plans. I have a very simple little example in-memory database with just a few tables put up into memory and a straightforward procedure that I’ve natively compiled:

CREATE PROC dbo.AddressDetails @City NVARCHAR(30)
    WITH NATIVE_COMPILATION,
         SCHEMABINDING,
         EXECUTE AS OWNER
AS
    BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    dbo.Address AS a
                JOIN dbo.StateProvince AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN dbo.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = @City;
    END
GO

The fun thing is, even with these really small examples, the performance differences are staggering when compared to standard tables or just in-memory tables alone. Anyway, this is what the estimated plan for this procedure looks like:

[Screenshot: ActualPlan]

Looks like a pretty standard execution plan right? Here’s the actual plan:

 

 

No, don’t bother refreshing your browser; that’s just a blank couple of lines, because there is no actual plan. You’re not dealing with a standard query, remember. There are only a couple of reasons to get an actual plan. First, if you’re experiencing recompiles, you might want to see the plan that was ultimately executed. An actual plan will reflect this, as will a plan pulled from cache. Second, you want to see some of the run-time metrics: actual rows, actual executions, runtime parameter values. Well, the first is not an issue, since you’re not going to see these things recompile. It’s a DLL. The second could be an issue. I’d like to see actual versus estimated to understand how the optimizer made its choices. Regardless, the actual plan won’t generate in SSMS when you execute the natively compiled procedure.
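
If you’d rather capture that estimated plan as XML instead of through the graphical display, a sketch like this should do it. SET SHOWPLAN_XML returns the estimated plan without actually executing the procedure; the parameter value here is just an example.

-- Return the estimated plan as XML; the procedure is not actually executed.
SET SHOWPLAN_XML ON;
GO

EXEC dbo.AddressDetails @City = N'London';
GO

SET SHOWPLAN_XML OFF;
GO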

There are some more differences between the plans for natively compiled procedures and standard procedures. I’ll go over a few more in another blog post.

Hey, if you do want to talk query tuning, I’m taking my one-day seminar on the road to a bunch of different events. There’s still time to get to the event in Louisville on the 20th of June. That’s right before the SQL Saturday there. Go here to register. I’m also going to be putting this on the day before SQL Saturday Albany. You can register here. I’m very honored to have been selected to speak at SQL Connections in September in Las Vegas. This includes an all-day seminar on query tuning. Go here to register for this great event. I’m also very excited to be able to say that I’m going to be doing a different seminar in Belgium for SQL Server Days. I’ll be presenting an all-day seminar on execution plans, including lots of details on SQL Server 2014. Go here to register for this event.

That’s four opportunities to get together and spend an entire day talking about query tuning, execution plans, statistics, the optimizer, extended events, oh, all sorts of things.

May 13 2014

Add an Instance to SQL Server Azure Virtual Machine

How do you add an instance to your local SQL Server installation? You run the executable that you probably downloaded from MSDN or maybe from a CD. Works the same on an Azure VM, right? Sure… but wait. Do I have to go and download the software to my VM instance? Assuming you’re running one of the VMs from the Gallery, the answer is “No.” Just navigate to C:\SQLServer_12.0_Full. There you’ll find the full installation setup for SQL Server. And you’re off and running… until you realize that you don’t have the Product Key for this thing. What happens when you get to this screen:

[Screenshot: CDKey]

You can look around all you want and you won’t see a product key anywhere. At least nowhere that I could find. So what do you do? The same question was asked and answered over on this forum at SQL Server Central. The trick is to get the product key from SQL Server itself. I tried several different methods, the ones you’ll find if you search for how to get the product key from an existing copy of SQL Server. But finally, as was posted on the forum, a method that worked was found. I tested it out and I was able to add an instance to a VM from the Gallery.
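
Before going down that road, it’s worth confirming exactly what the Gallery image installed. A quick check with SERVERPROPERTY (this won’t reveal the product key, it just tells you what you’re dealing with) looks something like this:

-- Confirm edition, version, and instance name of the Gallery-installed SQL Server.
-- This does not expose the product key.
SELECT  SERVERPROPERTY('MachineName')    AS machine_name,
        SERVERPROPERTY('InstanceName')   AS instance_name,  -- NULL for the default instance
        SERVERPROPERTY('Edition')        AS edition,
        SERVERPROPERTY('ProductVersion') AS product_version;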

Which brings up the next question. Did I just violate some type of licensing with Microsoft? Lordy, I hope not. But I did some research. This definition of the support policy at Microsoft says that anything that is not explicitly denied in that documentation, and that is normally supported, is still supported. There’s nothing in there about multiple instances. There’s nothing in the basic Azure Licensing FAQ. There’s nothing against this in the Pricing details either. And since the standard iron version of SQL Server allows you to have as many instances running on a given server as you want, from what I can tell, this still applies here.

Personally, I don’t think I’d want to run multiple instances on a single Azure VM. I wouldn’t really want to run multiple instances on a VM or, in some cases, even on iron. Multiple instances frequently have difficulty playing nice. I can’t see that getting any better inside Azure. However, there’s nothing to keep you from doing it except tracking down that Product Key. Get that, and you’re golden.

May 09 2014

Carolina Cruise

I’m going to speak at three user groups in three days in North and South Carolina. Evidently this is known as the Carolina Cruise.  Here are all the details. And yeah, that’s me and a bunch of Scouts down in the Florida Keys at Seabase. I’m really looking forward to this event. If you’re in the area, let’s get together and talk.

May 02 2014

Speaker of the Month, May 2014

Whoa! Another month gone by already? I guess I better pick a speaker of the month then. I went to several events this month, so selection was difficult, getting to see so many great speakers. But, one stood out in my mind, partly because he’s the least experienced speaker I’ve seen in quite a while. But his inexperience didn’t show. Speaker of the Month for May 2014 is Andy Yun (t) and his presentation Every Byte Counts.

The session was about using the right data types. You’d think this is self-evident, but from the way Andy packed the room, along with the attention and questions from the attendees, it’s clearly a topic that needed attention. I really liked how he presented the problem space before showing his goals-for-the-session slide. It worked. He did a couple of good little tricks, leading people to make a poor choice of data type for vacation days and then explaining why. It was a great bit. He also did a fantastic job of repeating everyone’s questions. It was a small room, but it was packed, so the speaker needs to do this (and I need to get better at it). He had a great bunch of examples that worked well to illustrate his main points and I really enjoyed the whole thing.

Areas that he could work on to move this up a notch? ZoomIt. ‘Nough said. While he presented everything extremely well in clear, easily understood language, it was a little fast, so he might want to check every so often that everyone is still with him. And Andy, breathing is important. You should do a little during the presentation.

It was a great experience, especially when you consider it was his second time presenting. I know for an absolute fact I didn’t do that well my second time (or 10th), and I’m frankly shocked by the number of experienced speakers who wouldn’t have done as good a job as Andy did. It was a great presentation and worth checking out if you get the opportunity.

Where is he speaking next? I don’t know. No blog. Not on Lanyrd. He’s just going to appear somewhere, so be ready.

A couple of honorable mentions this month. I got to see Tim Ford do a great experiment with a presentation on query tuning. I think it worked well, but I know he’s going to modify it, so I’ll wait for the finished product to give him the speaker of the month. If you haven’t seen Kevin Kline speak… you’re just doing it wrong. Go get it fixed.

Apr 23 2014

Azure Automation

I introduced Azure Automation in a previous post. I’ve spent some more time exploring it.

There’s a set of documentation available, as I noted before. Unfortunately, reading through the full set of documentation, I have some criticisms to offer. The documentation goes through “Common runbook tasks,” more or less laying things out as I did (inadvertently, I assure you) in my previous blog post. The problem with that, as I found in that post, is that the administration of the runbooks seems fairly straightforward from the screens. But you can’t do a darned thing with any of it until you have a runbook. Further, you can’t have a runbook until that thing has some code in it. And the documentation doesn’t include documentation about code. Instead, we just get a page with a list of samples, but no links to that code, nor an indication of where it might be. The scripts are located here. But man, that ought to be in the documentation. There’s also no clearly documented method for how to start doing the development. It’s not really necessary, since the GUI leads you inevitably to the Draft screen we saw in my other post. But documentation is generally supposed to let you know what to do, where to look, etc.

There is another set of documentation just on authoring runbooks. Lots and lots more meat there. I’ll go through it and follow up further.

Enough criticism, let’s play with some code.

I’m going to start with the “Hello World” code set. It’s supposed to be an introduction to how everything works. You can’t open it from the Azure Portal. Instead you have to download it to your machine and then either upload it into a new runbook or copy and paste it into the Draft editor window. Presumably this is so you can do the coding locally using the PowerShell ISE or other tools. Documentation for the script is clear. Its description:

If you are brand new to Automation in Azure, you can use this runbook to explore testing and publishing capabilities.


Well, let’s just say that’s a little grandiose for what is, literally, a “Hello $Name” example. But it’ll get your feet wet. I took the script and pasted it into my “RunningScare” runbook. From there, I had the capacity to Save, Test, or Publish. Being a good paranoid type, I ran Test first. It popped up a window to input the parameter and then showed the output in the Output Pane (which I hadn’t actually noticed before):

[Screenshot: OutputPane]

I can’t tell you why it output multiple times, but it did from one test of the script. To see the rest of the functionality, scheduling, etc., I went ahead and hit Publish. That moved it from Draft to Published, where all I can see is a faded outline of the actual script and a Start button at the bottom of the screen. I went ahead and ran it from there. It actually takes a surprisingly long time for such a silly small script to complete. There’s even the ability to view the Job as it’s running:

[Screenshot: JobSummary]

So that works. Next up, scheduling. It’s pretty straightforward to walk through the GUI in the Portal (although now I want to see if I can programmatically control the Automation interface; more to explore). I’m going to try to run this script once an hour. So I’ll give the schedule a name, unique to my account, Hourly (imagination knows no bounds). And then things get weird. I can only schedule this for a “One Time” run or “Daily.” No other options are available:

[Screenshot: Schedule]

There’s nothing in the core documentation about the details of scheduling. Checking the authoring doc (which has tons of stuff in it), there is a PowerShell command for directly controlling this (oh yes, much more to explore), Set-SmaSchedule. But it’s not clear whether the command supports anything other than a day interval. I’ll have to test it out to see. The Portal recognized that parameters were necessary, so I put one in and scheduled my runbook. Worked great.

With that, I have my first runbook set up, tested, and scheduled. So far, this is looking really interesting.


Apr 22 2014

I Am Better Than You

That is a patently false statement and total BS. It sure does crawl up your spine though doesn’t it? Why then do we need to do this?

I read an article, “How DevOps is Killing the Developer,” and, frankly, was a little put off by this:

Good developers are smart people. I know I’m going to get a ton of hate mail, but there is a hierarchy of usefulness of technology roles in an organization. Developer is at the top, followed by sysadmin and DBA. QA teams, “operations” people, release coordinators and the like are at the bottom of the totem pole. Why is it arranged like this?

Because each role can do the job of all roles below it if necessary.

Nice to know I’m almost as good as a developer. Now, I could go off on a “bash the developers” rant, but I think that would be seriously silly and counter-productive, not to mention completely against my beliefs. I really do like, admire, and respect developers. I also respect system administrators and SAN admins and QA people, hell, even project managers. You can go find a web site or a blog specializing in any of these IT disciplines and locate the “DBAs are better than developers” article or the “QA people save the universe” article or the “Thank the gods project managers keep all you poo-flinging monkeys in line” article. They’re out there. And every single one of them is wrong.

My actual job title these days is Product Evangelist, but I’ve spent the last 15 years as a DBA, database developer and data architect. I feel like I have a handle on that job. I’ve specialized around the Microsoft stack, not because of any sort of religious belief, but because it pays the bills. Before that I worked as an application developer. Before that I was in tech support. And you know what, at no point in my career was any of the jobs I did literally more important than the others. You know why? They’re all in support of the same thing: the company succeeding.

Now, fine, developers are smart people. No question, no argument. Developers are capable of learning other jobs (I’m living proof). But you know what, so are people in all those other positions. How do I know this? Because I’ve worked with the QA person turned developer. She was a GREAT QA person. She was so great because she learned the full stack. She developed an understanding of databases and systems and code in support of doing a better and better job at QA. Then, she finally decided she wanted to fix the code a little earlier and switched over to development full time. I’ve also met the QA person turned sysadmin. I’ve met the developer turned SAN admin. The system admin turned DBA, the DBA turned developer, a million and one support desk people turned developer/dba/admin. Every one of these people followed similar trajectories. They started learning the full stack and found areas where they could specialize while using their knowledge of the full stack to make each position they were in better.

But all these jobs and all these people are, or all should be, focused on one thing: helping the business do what the business does, which for most businesses means supporting the customer. Regardless, the focus needs to be on the goals of the organization, not on the purity of a job, a process, a software stack, or a system. Purity and perfection are dangerous concepts within IT. We need to keep our focus where it belongs, not on MY code or on MY database or on MY servers, but on OUR BUSINESS.

And DevOps (I wish the term didn’t have such bad connotations) is about breaking down communication barriers, not just putting all the work on one person or team. Again, focus back on the business and what the business does.

So no, I am absolutely not better than you (the title is just click-bait). I’m not. But, you’re not better than me either. If my saying that makes you angry, maybe you need to reexamine your assumptions.