Category: Tools

Mar 25 2015

Toshiba Portege Z30-A Review

Last year I purchased a Lenovo W530. Nightmares then ensued. Nice laptop if you just want to use it as a laptop, and only a laptop. Horror show if you need to present from it. So, I replaced it with an HP… which also gave me some troubles (try presenting with your laptop bricked in front of your boss AND 100 people), but that’s OK because I dropped it the next day (totally by accident, it had just worked great all day, I was happy with it, mostly). So there I was, laptop-free and in need of something that would let me work, present, and be an all-round road warrior. PLUS, I had just broken a laptop and was feeling bad that the company would be replacing it. So I wanted to keep the costs down. After a lot of research, I decided to pull the trigger on the Toshiba Portege.

I couldn’t be happier.

This has been my best experience with a laptop since my Lenovo W520 (which just wore out from three years on the road). This thing is snappy fast to start up. It sleeps just fine. 16GB of RAM so I can run a couple of VMs (or one really fast VM). It’s got an SSD, so I/O is quick. The smaller footprint of the 13-inch monitor is great (and I’m not at all bothered dealing with reduced screen real estate on the road). Heck, I’m typing this blog post on the thing right now. The battery life is awesome (I’ve been working in a hotel lounge for about four hours at this point and it’s still got another four hours to go). Oh, and the power brick is tiny, so my overall weight when travelling has dropped.

Nothing is perfect. The touchpad tends to catch my palm and redirect where I’m typing sometimes. That’s because I’m a little lazy with my hands while typing, so, to a degree, that’s actually a plus, forcing me to watch where I drop my hands. I think the keyboard is just a tad tight for my hands, but not too bad. I’m still not used to the function key locations.

But, I’ve been presenting all over the world with this thing on both VGA and HDMI connections. No issues. Ever. Of any kind. I’m in absolute love with it as a presentation machine. Oh, and did I mention it has both HDMI and VGA plugs, so no dongle in either direction? Three USB ports. It’s just great.

Screen quality and resolution is good enough for my tired old eyes. It’s also touch screen so I can scroll & zoom when reading stuff on it. I’m just happy with my laptop.

Now, if I can just keep from dropping it.

Jan 15 2014

Database in Source Control

Many years ago, I was working with a great DBA. Seriously, a very smart and capable guy. He told me, “We need to put the database into source control, just like app code.” And I just laughed. Not because I disagreed with him. I knew he was right, but I had tried, several times, to do just that. See, I’m not really a DBA. I’m a developer. I knew that code (and all the T-SQL that describes databases is code) needed to be versioned, sourced, tracked and audited. But great googly moogly, it was not an easy thing to do.

I first tried just exporting the entire database into a script and then occasionally checking that script into source control. Yay! Mission Accomplished… Well, I had a database in source control, yes, but I didn’t have any of the great stuff that went with it, most of all, a way to deploy from source control.

Next, I tried just storing the stuff that changed most, procedures. But, I had to store everything as an ALTER, or I had to store it all as a DROP/CREATE and store the security settings and extended properties. I tried both. Neither approach satisfied me, and it was WAY too easy for someone else to modify a script the wrong way and bring the entire thing crashing down. Not to mention that any and all structural changes outside of stored procedures had to be built manually, or generated with a compare tool (but not the procs, cause we have those in source control, remember) by comparing prod & dev or qa & dev or something & something… Oh yeah, that was fun.
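For anyone who never tried it, the DROP/CREATE version of that pattern looked something like this sketch (the procedure, table, and role names here are made up for illustration):

[sourcecode language="sql"]-- Idempotent drop so the script can be re-run safely
IF OBJECT_ID('dbo.GetOrders', 'P') IS NOT NULL
    DROP PROCEDURE dbo.GetOrders;
GO
CREATE PROCEDURE dbo.GetOrders
    @CustomerId INT
AS
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @CustomerId;
GO
-- And don't forget: the DROP threw away the permissions,
-- so they have to be re-applied every single time
GRANT EXECUTE ON dbo.GetOrders TO SalesRole;
GO[/sourcecode]

Multiply that boilerplate by every procedure in the database and you can see why it was so easy for someone to get one of these scripts wrong.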

Man, it was painful back then. But now, there are several ways you can do this using Microsoft and/or 3rd party tools.

Why aren’t you?

Seriously, most of you aren’t. I’ve been going all over the country teaching a class on database deployments (next one is in Cleveland if you’re interested) and I know most people don’t put their databases into source control. Of course, I’m pretty sure most people don’t talk to their Dev teams if they can help it, and it does seem like most Dev teams are on less than a perfectly chatty basis with their DBAs. Is that the cause? The thing is, you are experiencing pain in terms of time spent, mistakes made, slower deployments, less frequent deployments, possibly even downtime, all because you don’t do your deployments right. And deployments start from source control.

Developers have spent the last 30 years or so figuring out better and better ways to get functional code into the hands of their customers (internal or external) as fast as possible, as accurately as possible. Over the same 30 years, DBAs have been figuring out how to better and better protect the information in our charge, ensuring that it’s backed up, available, performing well, and always on (to use a phrase). My suggestion to you, data pro: talk to your developers. Figure out what they do and how they do it. Take advantage of their years and years of process improvement and apply what they’ve learned to your database development and deployment.

There’s a new concept growing out there, one that’s fairly well established within the *nix communities: DevOps. It’s the realization that the world doesn’t begin and end with your database/server/application. Instead, your database/server/application is completely dependent on the databases/servers/applications that it needs in order to run. Notice, no one is more important here. We’re talking about creating mechanisms for teams to deliver functionality to their clients (internal or external). And it all starts with the realization that there are parts of the process that some of us are better at than others. Developers know how to develop and deploy within teams. Let’s learn from them. And the start, well, that’s source control.

So, is your database under source control… for real? If not, get it there. The excuses I used to have are gone. That means the excuses you have now are probably gone too.

Fair warning, I may use the term DevOps more in the future.

Aug 22 2012

24 Hours of PASS, Fall 2012

It’s time to get your learn on again. The schedule for the Fall 24 Hours of PASS is up and ready for registration. This is the Summit preview session, so many (most, all) of the speakers are showing off some of what you can learn at their sessions at the PASS Summit 2012 itself. It looks like a pretty exciting bunch of topics given by some of the best professionals in the industry.

I’ll be presenting Three Ways to Identify Slow Running Queries on September 20th, 1400 GMT. This is just a sub-set of the information that I’ll be presenting during my all-day pre-conference seminar, Query Performance Tuning: Start to Finish. In the full seminar I talk about how to measure the performance of your systems, identify which queries are causing you the most trouble, figure out what that trouble is, and show how to fix those queries. In this 24HOP session, I just focus down on three methods you can use, right now, to understand the most costly queries on your servers. If you want to know what to do about them, well, you’ll have to register for the seminar.

A lot of this information is derived from the new edition of my book, SQL Server 2012 Query Performance Tuning. So you can check that out too if you’re so inclined.

Jul 16 2012

SQL In The City: London 2012, Recap

[Photo: Presenting on Ring Buffers]

Wow!

How’s that for a recap?

The concept for the SQL in the City events is pretty simple. Put on a free event that instructs people on SQL Server, Azure, and related technologies along with a healthy smattering of Red Gate tools. All teaching is done by some of the best people in the business (and me).

This was the second event in London. The concept was launched there last year and succeeded quite well. This year the event filled its registrations so quickly that Red Gate felt obligated to add a second day, which almost completely filled up too. There were more than 350 people in attendance on Friday, and then, on Saturday, a day off, another 250+ people showed up. That’s well over 600 attendees over the two days. And what people! The UK audience is just excellent. These people really pay attention to what you’re saying. They don’t make many comments while the presentation is going on, but oh my gosh, the detailed questions you get afterwards. It’s just wonderful. Plus, these guys are part of my #sqlfamily. I got to see several people that I’ve met before in both the US and the UK. I love spending time with Tobiasz, Dave, Kev, Annette, Jonathan, Thomas, Neil. I also got to meet people that I had interacted with only online, and they’re wonderful in person. Thanks to everyone who sought me out, especially Colin and Stephanie. It was a real privilege to meet you two (although neither of you knows the other). It’s the interpersonal aspects of these events that make them great.

It’s so nice to be able to relax and show people something like how to get a sandbox environment set up, along with all the ways that using Red Gate tools to do it can make the job easier, faster and cheaper. Normally you can’t give an open answer when someone asks how to do something better or easier during a session. This venue makes that possible.

I put on three sessions during the day. One on how to improve performance, yours and your code, in T-SQL. I got to regale the crowd on all the evils of ddltbl (not a typo, you had to be there) as well as common, simple, mistakes made all the time in T-SQL code. I also did a session on sandbox deployments. While I’m personally against giving everyone & their brother a copy of the production database for development, I acknowledge that it is a good set of data to develop against. So, if you have to do it, you may as well use Virtual Restore to save some space. Finally, my last session was on some of the lesser used performance metrics that are actually more useful than people give them credit for.

I sat in on some of the other presentations and they were great. I really liked Steve Jones’ (blog|twitter) session on handling disasters. I also liked watching a new speaker, Annette Allen (twitter), stretch her legs for the first time. She was good. UK user groups take note. You have another resource available.

I had a blast presenting all these sessions, twice, and the crowds seemed receptive. I really appreciate everyone who attended and the excellent feedback that they politely (but firmly) provided. Then we had beer.

Yeah, you heard me. Wonderful, glorious, Red Gate beer served right there at the event. It was a great batch of Select * Ale. Highly recommended at the end of a hard day of T-SQL learning and networking.

It was a magnificently run event. The only complaint I heard was that we had an inadequate number of bathrooms for the men (which, I learned, are not called stalls in the UK, some humor doesn’t translate well). Thanks to Annabel Bradford and all the team at Red Gate who put the event together (even if I do work for them, it was a really well run event). You guys are magnificent.

If you missed a session while you were there or you want to see a session again, keep an eye on the SQL In The City web site. Videos of the sessions will be uploaded. If you weren’t there, you missed it. But, I have good news.

We’re taking the show on the road. We’re going to be hitting five cities in the US in September and October and then Seattle (which I think is still in the US) in November. It’s going to be a lot of the same people presenting the same topics, but it’s also going to include a ton of excellent local speakers in each of the cities we hit. This means the excitement and education that SQL In The City represents will be accessible to lots more of you soon.

May 02 2011

I’ve got 99 Problems, but a disk ain’t one

Tom LaRock has a new meme for Meme Monday. It’s all about the problems caused in your system other than disks. Thankfully, despite the title, I don’t have to list 99 separate things, only 9, but you know what, 99 is possible. I’m going to present the problems. You find the solutions on your own today. Let’s go.


Recompiles

I’ve seen queries so big that they take more than three minutes to compile. That’s the edge case, but as an edge case it is educational. The most important thing to remember about recompiles is that they are driven by data changes. Once a threshold is reached on any given set of statistics, all queries referencing that set of statistics get marked for recompile. The key, and tricky, phrase here is “any given set of statistics.” My edge case was a query against 86 tables. If the statistics on any of the 86 tables changed, the query went into its three-minute recompile. The problems come in because while it’s recompiling, it’s locked and everyone has to wait. And waiting is the performance problem.
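As an aside, if you’re stuck with a monster query like that, one possible mitigation (a sketch, not a cure; the table names below are made up) is the KEEPFIXED PLAN query hint, which tells SQL Server to skip the recompiles that are triggered by statistics changes:

[sourcecode language="sql"]SELECT o.OrderId, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
    ON c.CustomerId = o.CustomerId
-- ...imagine another 84 tables joined in here...
OPTION (KEEPFIXED PLAN); -- don't recompile just because statistics changed[/sourcecode]

Use it with your eyes open, though. You’re also giving up plans that adapt as the data changes, which is exactly what you want most of the time.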


Cost Threshold for Parallelism

The default cost threshold for parallelism is 5. This means that any query plan with an estimated cost greater than 5 could be marked as a parallel plan. Parallel execution is a good thing when the cost of breaking the work into disparate streams and gathering them back together is offset by the use of multiple processors. The problem is that, at such a low threshold, the cost of that breaking apart and gathering is frequently higher than the benefit of bringing multiple processors into play.
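If you want to see or change the setting yourself, it’s a server-level option (the value of 50 below is just a commonly used starting point, not a recommendation for your system; test against your own workload first):

[sourcecode language="sql"]EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- See the current value
EXEC sp_configure 'cost threshold for parallelism';

-- Raise it from the default of 5
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;[/sourcecode]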


Deadlocks

Yeah, deadlocks are a performance problem. You have to deal with the fact that at least one process gets part way through whatever it was doing and then has to roll back. The rollback process is costly, and then the work has to be resubmitted for processing again. These things can seriously kill performance.


ORM Tools

These things are great. These things are evil. I think of ORM tools like a firearm. In the proper hands, used the proper way, they’re an excellent tool. In the wrong hands, used the wrong way, mayhem ensues. Same thing goes for most of the ORM tools. Further, many of the ORM tools, very well built pieces of software, are advertised and pushed in a way that makes people use them incorrectly. It would be like someone showing off handgun twirling with loaded pistols as an example of safe gun handling. More education and more appropriate expectations are needed here.

Dynamic Queries

To a degree these are an offshoot of ORM tools, but they’ve been around a lot longer. You’ve all seen them and I think most of us have built them at one point or another; the catch-all query, the report with infinite flexibility, etc. They fill memory up and flush things out of cache and cause untold grief.

Database Engine Tuning Advisor

If Microsoft is going to put something out there that advises people on how to improve performance, you would expect it to work. It doesn’t. Oh, it can. I’ve seen it catch some of the simplest queries and fix them. I’ve also seen it miss the simplest queries and what it suggests for complex queries can be really, really scary. It will have you adding indexes and individual sets of statistics all over the place. You’ll duplicate existing indexes. All sorts of stuff. And, because it’s from Microsoft, people take what it tells them as gospel and implement it, with no testing or evaluation at all.


T-SQL

I don’t think the language is all that hard. But, evidently, it is. And it’s easy to muck it up and write code that is truly not good.


Backups

No, not the act of backing up databases, but the fact that people can’t seem to take backups. Don’t think that’s a performance issue? How fast are queries running when everything is offline? How good is performance when the database is offline because it’s corrupt?
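For the record, taking a backup, and then proving it’s readable, is only a couple of statements (the database name and path below are placeholders; adjust for your environment):

[sourcecode language="sql"]BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase.bak'
WITH CHECKSUM, INIT; -- CHECKSUM validates page checksums as they're read

-- Reads the backup file to confirm it's complete and readable
RESTORE VERIFYONLY
FROM DISK = 'D:\Backups\MyDatabase.bak'
WITH CHECKSUM;[/sourcecode]

VERIFYONLY isn’t the same thing as actually restoring the database somewhere and running DBCC CHECKDB against it, which is the real test, but it’s a whole lot better than nothing.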

Query Optimizer

And no, it’s not that I think the optimizer itself is a performance problem, but that so many people rely on it to perform miracles that it just can’t perform. I have an 86-table join, why isn’t everything running fast? I’ve got views nested 36 deep, why is the query slow? It’s the expectation that the optimizer can and will figure out anything for them quickly, efficiently, and accurately, so that they don’t have to do the things that they should, that becomes a problem. And I understand it. The optimizer does incredible amounts of processing in very short periods of time. It’s truly an amazing piece of software. But, because it does all this stuff so well, I think it’s taught people to rely on it for too much, and it just can’t deliver everything. It doesn’t have a “run faster” button that many people seem to think it has.

Mar 21 2011


It sure seems like there’s a lot of miscommunication between developers and database specialists. In fact, the communication can become so poor that outright hostility between the groups is common. At the end of the day we are all working towards a common goal: to add value to whatever organization we are working for. It’s a shame that we lose sight of this commonality and create such a false dichotomy between the groups. I think there are some ways that we, as database specialists, can attempt to cross that gap.

Prior to being suborned to the dark side, I was a developer. I had a little over 10 years of experience working in VB, Java & C#. I remember, distinctly, cursing our database team for being so problematic about how they did things. They slowed me down. They got in the way. When I had problems they were slow to respond, unless the problems were on production. I know I even instigated a few fights with them in an attempt to get them to move the way I wanted (hard to believe, I know). Then came the day when I shifted over to all database work.

Suddenly, I’m responsible for making sure the production system stays online and that the data is readily available to the business. Now I’m slowing down development, because I want a chance to review their design and validate their code to ensure it’ll work well and not affect production. Now I’m acting as a gatekeeper to prevent unauthorized access to the systems or at least keep people from making any of the 10,001 simple errors that could impact production. Now when a developer wants something fixed in dev, I’m the guy telling them they have to wait because something in production is wonky. And yeah, I’ve instigated fights from this side as I tried to get devs to understand that simply delivering code is not enough and that data persistence is there for a reason (again, shocking I’m sure).

Remember, both of these groups are more right than wrong, and both are working towards that common goal, value for the business. But they really don’t get along. What’s more, what they work on and how they work with it is frequently at odds. Ever heard of the object-relational impedance mismatch? How about the concept that you don’t have a database, but a persistence layer? What about managing data integrity within the application (one of my abiding favorites)? Never heard of those terms or concepts? Then you’re probably a database specialist and you’re probably not talking to your developers. If they haven’t already, they’ll soon be introducing an Object Relational Mapping tool to your enterprise. Best of luck.

A lot of these communication issues probably can’t be solved, but I know of one place where most database specialists are not communicating well with their dev teams, and database guys, it’s your fault. Source Control. Do you think of the structures and procedures within your database as code? You should, because, to a large degree, it is. The Data Definition Language (DDL) calls that make up your tables, views and procedures are code. That code needs to be checked into a source control management system. There, the individual objects can be versioned and managed. There you can create labeled releases of your code. There you can branch your code to create alternate development or support streams that contain variations of your database. There you can merge changes from multiple users and branches into a single main source for deployment to production. There you can keep your database directly in sync with application developers code.

Did you catch that last one? You can become more tightly coupled with your development team. Best of all, you can do this using their tools and their language. This is the communication problem I want you, the database professional to solve. Very few of us database types are using source control these days. This, despite the fact that there are fantastic tools and methods under development from different vendors that directly address the issue of getting and keeping database code within a source control system.

Years ago, when I first made the jump to databases, I was appalled that I couldn’t keep my code in source control. Then, as I worked more and more with databases, despite the problems, I abandoned the idea of managing the code in source because, frankly, it was way too hard. But several years ago new tools appeared on the market to make it possible (if still somewhat painful) to get the database into source control. I’ve been working that way for years now. It has completely eliminated one of the many problems I used to have with developers. They now know that my code is stored with theirs. That my versions are their versions. That their labels are my labels. That we branch the code together. It’s taken completely for granted, and we share a common language about change and deployment.

This has not solved every problem or conflict with database teams I’ve worked with. It has eliminated a source of friction. It has increased communication. It’s something that I could do, and you can do, to get a little closer to your development team. Not to mention the fact that you will now have your databases in a known state, all the time, that you’ll be deploying from a single location, that you can manage access to your code, and all the other things that having your databases in source control will bring.

For more details on the concept of putting your database in source control, and working better within teams in general, I’d recommend reading the SQL Server Team-based Development book. It’s a free download.

Addendum (3/27/2011): If you got this link through an email, could you post a comment below as to which distribution list it’s from? Thanks.

Jan 31 2011

Powershell to Automate SQL Compare

A question came up on the SQL Server Central Forums, how could you use Red Gate SQL Compare to automate creating scripts for all your databases at once. What’s the first thing that popped into my head? Yep, Powershell.

SQL Compare Pro comes with a command line utility that allows you to automate just about everything you can do with the GUI. I like to use it when I’m setting up deployment mechanisms for systems because it makes one click deployments possible. I won’t go into all the methods of that type of automation here. For a lot more information on that topic, check out the Team-Based Development book.

If you’re interested in exploring how you can use the command line, just type this:

[sourcecode language="sql"]sqlcompare /?[/sourcecode]

Now is a good time to note that I’ve added SQL Compare’s location to my path, because I use it so often from various locations on the disk that having to type the full path every time is just too much.

Here is the PowerShell script:

[sourcecode language="powershell"]$Server = "grant-red1\gfr1"
$Path = "c:\dbscripts"

Invoke-Sqlcmd -Query "sp_databases" -ServerInstance $Server | ForEach-Object { Invoke-Expression "sqlcompare.exe /s1:$Server /db1:$($_.DATABASE_NAME) /mkscr:$($Path)\$($_.DATABASE_NAME) /q" }[/sourcecode]

It’s just a single line of code, not counting setting the variables. I’ll be making it into a full-blown script and posting it on the Red Gate site (location to be determined) so you can just call it, passing variables. In the meantime, let me explain what I’ve done.

First, I use the Invoke-Sqlcmd cmdlet to pass the query sp_databases to the server. This retrieves a result set consisting of DATABASE_NAME, DATABASE_SIZE, and REMARKS for all user databases on the instance and any databases available through a gateway on that instance. From there I simply use the pipe, “|”, to pass the results to a ForEach-Object loop. Yes, I’d prefer not to use a loop, but sqlcompare.exe is a DOS command line utility, not a PowerShell cmdlet, which means it’s not aware of the beauties of the pipe. Invoke-Expression is used to call sqlcompare.exe for each row within the result set. I use the abbreviation for the parameter /Server1, /s1, to set the instance through the variable $Server. I then use the abbreviation for the parameter /Database1, /db1, with the database name coming from the pipeline object’s column, $_.DATABASE_NAME. Then the magic. The parameter /MakeScript, again abbreviated to /mkscr, along with a path that includes the database name in order to separate out each database, is all that’s needed to complete the command.

In short, three PowerShell commands and one call to SQL Compare’s command line are all that are needed.

When you run this script, you’ll get a folder for each database and inside that will be a complete set of scripts for recreating the database. You can use this to script out your databases for inclusion in a source control system, or for whatever else you might need.

This example is just a small taste of what’s possible to automate using the command line utility in SQL Compare. Further, you can see that it’s very possible, easy even, to integrate SQL Compare with the primary means of server automation within Microsoft today, PowerShell.

PS: I won’t normally be posting much Red Gate stuff on my personal blog. I’ll be doing that inside the official Red Gate area where there are a lot more eyeballs looking at it. But, until we get that all set up, I really wanted to share this little tidbit.

Dec 13 2010

Life/Work Balance

[Photo: Apple iPad – Work/Life Balance Tool]

Technology, especially information technology, is the greatest thing to ever happen to mankind, freeing us from toil and drudgery. Technology, especially information technology, is a pernicious evil taking over our lives, forcing us to work harder and longer. Depending on the time of day, the day of the week, my mood, my wife’s mood, or the direction the wind is blowing, either of these statements could be true.

The fact is, I love technology and I do have to wrestle with keeping it from taking over my life, but only because I have so much fun with the toys that technology brings. You want to know how much I love toys, ask me about my Droid sometime. Pull up a chair. We’re going to be here a while. The trick is, finding that sweet spot, where you use the tools presented to you in order to enhance your life while enhancing your work. Just enough of each and you can be a hero at home and on the job and have a blast doing both.

The one thing I really hate about being a DBA is being on call. I’m not sure why, but most systems fail at one of three times: right when I’m going to sleep, so I get to stay up another 1-3 hours fixing the issue; around 3AM, so I can spend about half an hour figuring out how to log into the network before I spend 1-3 hours fixing the issue; or when I’m halfway up a mountain with the Scouts, in which case I just have to call the boss and get someone else engaged (and yes, I do prefer these last failures to the others). The real trick is to get your systems set up so that you don’t have constant emergencies, regardless of the time of day. How do you do this? Proactive monitoring.

Red Gate handed me 10 iPads along with 10 licenses for SQL Monitor, their new monitoring tool. I’m to give these 10 devices away for the best responses in the comment section of this post to the question I’m going to put to you shortly. That’s right, you can get out in front of the issues you’re running into, avoid those calls whenever they come, and get an awesome toy at the same time.

The goal is life/work balance. Notice which one I put first. That’s the priority. Here’s your question:

What do you think the most common cause of server outages is, why, and how would being able to monitor your systems remotely help solve this issue, thereby improving the quality of your life?

The contest runs from now until 11:59 PM, December 17th, 2010. Please reply below, but keep it pithy. Don’t publish your version of War & Peace in the comments (I might delete it). I’m the sole judge and arbiter (which means I probably will delete anything resembling War & Peace). One entry only. Make sure there’s a means of contacting you in the post, or I’ll give your iPad to someone else. Remember, pithy is our watchword. You can answer this question in three well constructed sentences. If you win, I’ll want a picture of you using the iPad to monitor your systems remotely. Plan on sending me that picture by January 31st. An interesting picture. Something with you sitting in your cube at work just won’t fly.

That’s it. I’ll announce the winners in a new post on the blog at the end of the week. Here are the official rules:

  1. The contest is open to professionals with SQL Server monitoring responsibility. Entrants must be 18 years old or over.
  2. Entries must be received by Friday, December 17, 2010. The contest organizers accept no responsibility for corrupted or delayed entries.
  3. Employees of Red Gate, the contest organizers and their family members are not eligible to participate in the contest.
  4. Entries are limited to one per person across the three simultaneous contests hosted on SQLServerCentral.Com, BrentOzar.Com, and ScaryDba.Com.
  5. The organizers reserve the right, within their sole discretion, to disqualify nominations.
  6. The organizers’ decisions are final.
  7. Red Gate Software and those involved in the organization, promotion, and operation of the contest and in the awarding of prizes explicitly make no representations or warranties whatsoever as to the quality, suitability, merchantability, or fitness for a particular purpose of the prizes awarded and they hereby disclaim all liability for any loss or damage of any kind, including personal injury, suffered while participating in the contest or utilizing any prizes awarded. 
Dec 06 2010

Review: Idera SQL Doctor

Recently, a co-worker practically slammed me up against the wall, exclaiming “You have to check out this new tool, right now!” The piece of software he was so excited about was Idera’s SQL Doctor. Based on this assault, er, recommendation, I decided to take a little time & look the software over.

SQL Doctor, as the name implies, is a diagnostic tool. It runs a set of best practice rules against your server, your databases and your code. As the rules are executed, your system’s compliance with these best practices is evaluated and an interactive report is generated. With the report you can drill down on various aspects of your system to see where you may have gone wrong.

All that sounds very clinical, just laid out like that. But the fact of the matter is, if you have a lot of inherited systems, or systems that have not been maintained, designed, or developed as well as they should have been, you know you have problems all over the place. How exciting would it be to get a pretty fast evaluation of the systems? Wait. Don’t answer that yet. What if, after the evaluation, a set of reports was available? Hang on. It gets better. What if the reports included a pretty decent set of suggestions on how to resolve the issue? Yeah, that’s right, suggested solutions.

I know some people reading this are tuning monsters who won’t benefit from a piece of software like this. But a lot of you are not. For those, here’s how it works.

First, you connect to the server you’re interested in. Once connected, SQL Doctor asks what type of evaluation you’d like, a health check or an investigation into some type of system slow-down, as shown below:


For this example, I’ll just pick the health check. SQL Doctor then asks whether this is a production system. Then it asks what kind of system it is, OLTP or not, and then it runs the evaluation. Evaluating the rules is somewhat time-consuming. Once it completes, a new window showing the health of the system is displayed:


And this is where things get interesting. You can see the server name, the state of health, etc., but the good stuff is down near the bottom: the recommendations. On this particular server, chosen because I knew it would list interesting results, you can see 509 total suggestions: 427 Query Optimization issues, 48 Index Optimization issues, 2 network issues, and 2 memory issues. You can use the tabs to look directly at the recommendations, or you can click on the recommendation categories. Clicking on the Query Optimization category displays the list of optimizations:


The optimizations are listed by severity, meaning the damage they can cause to your system. At the very top are two instances of functions being used on columns in the WHERE clause of queries. I can’t drill down and show you the details on this system, but what you get is a precise listing of the procedures and the specific location where each problem exists, along with a recommendation for how to fix it.
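To illustrate the kind of issue SQL Doctor flags here, the following is a sketch of the function-on-a-column pattern and its fix. The table and column names are mine, not taken from the tool:

```sql
-- Hypothetical example: wrapping the column in a function makes the
-- predicate non-sargable, so an index on OrderDate can't be used for a seek.
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2010;

-- The same logic expressed as a range predicate allows an index seek.
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '20100101'
  AND OrderDate < '20110101';
```

The two queries return identical rows; only the second one gives the optimizer a predicate it can match directly against an index.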

This is momentous. Basically, what you’ve got is the ability to capture the low-hanging fruit. The easy, but painful, stuff that people who optimize queries all the time are fixing can be found fairly quickly, and you don’t have to be a tuning expert to make it happen. There’s quite a bit more to Idera SQL Doctor, but this is the main point. You can also flag certain optimizations, marking them for others so that they know what you consider to be a priority. You can block optimizations if you either know about them or don’t care about them. There’s quite a bit of control possible through the tool. But, I can’t emphasize the main point enough. You can get an automated basic health check on your system that will suggest the best ways to fix the problems that it finds.

There are recommendations I’m not crazy about. For example, it flags UNION operators as a potential performance problem. Yes, they might be, and if you could use UNION ALL instead, performance would improve, depending on the query we’re talking about, but having this one poking me in the eye over and over… well, at least around my shop, it’s a candidate for getting turned off. Further, the explanation of why it could be problematic is very short. It just doesn’t provide enough information for people to make up their minds based on a full set of data. I’m also a bit nervous (shocked and appalled are almost applicable) that they’ve included index usage stats in a tool for less experienced DBAs. Yes, that index may not have been used for the last six months, but tomorrow, the CIO is running his favorite report that he runs once every six months, and it had better be lightning fast… but you just dropped the index that the report depends on because some piece of software told you it’d be OK… oops. In short, there are areas within the software that could be improved.
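If you do want to act on an “unused index” suggestion, check the numbers yourself first. A sketch of that check against the standard DMV, with the big caveat built in as a comment:

```sql
-- sys.dm_db_index_usage_stats resets when the instance restarts, so a zero
-- here only means "not used since the last restart," not "never used."
-- That once-every-six-months report may simply not have run yet.
SELECT OBJECT_NAME(ius.object_id) AS table_name,
       i.name AS index_name,
       ius.user_seeks,
       ius.user_scans,
       ius.user_lookups,
       ius.user_updates
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i
    ON i.object_id = ius.object_id
   AND i.index_id = ius.index_id
WHERE ius.database_id = DB_ID();
```

An index with lots of user_updates and no seeks, scans, or lookups is a drop candidate, but only after the stats have covered a full business cycle.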

Most of the TSQL recommendations I saw were excellent. I already mentioned the functions making arguments non-sargable, and there are more good ones: use of SELECT *, unfiltered deletes (I’ll be talking to those developers real soon), ANSI settings that affect performance, differences between estimated execution times and actual times (neat use of DMOs), and more. In short, I think there’s one heck of a lot more good here than bad.
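For anyone unfamiliar with a couple of those patterns, here’s a hypothetical sketch (object names are mine) of what the SELECT * and unfiltered-delete recommendations are getting at:

```sql
-- SELECT * drags every column across the wire and defeats covering indexes;
-- listing only the columns you need is almost always the better choice.
SELECT CustomerId, CustomerName
FROM dbo.Customers;

-- A DELETE with no WHERE clause empties the entire table. Any delete should
-- make its scope explicit, like this 30-day retention cleanup.
DELETE FROM dbo.SessionLog
WHERE CreatedDate < DATEADD(DAY, -30, GETDATE());
```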

Is SQL Doctor magic? Are all your performance problems going to disappear overnight? No, absolutely not. If you’ve made poor choices in database design, if you’ve got stacks and stacks of bad code, this tool is not going to swoop in to clean up your mess. You’re still going to have to do a lot of hard work, depending on your situation. Further, don’t take the suggestions of any tool (or some yahoo on the web for that matter) as messages from the gods. Evaluate and test the suggestions independently.

However, the workload for tuning your systems was just reduced, because there are a lot of problems, possibly even the majority, that are very clearly defined, that can be found by running a search routine, and that absolutely do hurt the performance of your systems, badly. Those problems will be identified by Idera SQL Doctor, which will enable you to fix them in a much faster and more efficient manner. That’s a win.

Dec 02 2010

SQL Spackle

I previously mentioned how SQL Server Central was listing ideas for articles, primarily for short, quick, pointed articles that they were terming SQL Spackle. Spackle is a term in the US for the filler you put into drywall to smooth it out or fix small holes, so SQL Spackle is meant to fix small gaps in knowledge or documentation.

My first SQL Spackle article was published today. I kind of forgot it was coming. I wrote a quick and pointed outline of how to set up and schedule a server-side trace. This is something I’ve always advocated, and now I can point to a bit of spackle to fill in the gaps for those who don’t know how to get it done.
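For the curious, the core of a server-side trace looks something like the sketch below. The event and column IDs and the system procedures are standard (event 12 is SQL:BatchCompleted; columns 1, 12, and 13 are TextData, SPID, and Duration), but the file path and sizes are placeholders; the article walks through the full setup and scheduling:

```sql
-- A minimal server-side trace sketch: rollover files, one event, three columns.
DECLARE @TraceId INT;
DECLARE @MaxFileSize BIGINT = 50; -- MB per rollover file

-- Option 2 = TRACE_FILE_ROLLOVER; SQL Server appends .trc to the path.
EXEC sp_trace_create @TraceId OUTPUT, 2, N'C:\Traces\MyTrace', @MaxFileSize;

EXEC sp_trace_setevent @TraceId, 12, 1, 1;  -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @TraceId, 12, 12, 1; -- SPID
EXEC sp_trace_setevent @TraceId, 12, 13, 1; -- Duration

EXEC sp_trace_setstatus @TraceId, 1;        -- start the trace
-- Later: sp_trace_setstatus @TraceId, 0 to stop, then 2 to close and delete.
```

Wrap the start and stop calls in SQL Agent jobs and you have a scheduled, lightweight alternative to running the Profiler GUI against a production server.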