Category: SQL Server 2012

Jul 15 2014

Execution Plan Details

I wouldn’t say it’s common knowledge that you should look at execution plans when tuning queries, but it’s not exactly uncommon knowledge either. But people tend to get focused on just the graphical part of the plan, and there’s not enough information there. Let’s take a look at a query:

SELECT  pc.Name,
        ps.Name,
        v.Name,
        pr.ReviewerName,
        p.Name,
        v.ModifiedDate,
        p.Color
FROM    Production.Product AS p
        LEFT JOIN Production.ProductReview AS pr
        ON pr.ProductID = p.ProductID
        JOIN Production.ProductSubcategory AS ps
        ON p.ProductSubcategoryID = ps.ProductSubcategoryID
        JOIN Production.ProductCategory AS pc
        ON pc.ProductCategoryID = ps.ProductCategoryID
        JOIN Purchasing.ProductVendor AS pv
        JOIN Purchasing.Vendor AS v
        ON v.BusinessEntityID = pv.BusinessEntityID
        ON pv.ProductID = p.ProductID
WHERE   v.ModifiedDate = '2006-02-17 00:00:00.000'
AND p.Color LIKE 'Y%';

This generates an execution plan that looks like this:

[Figure: PlanDetails, the graphical execution plan for the query]

Neither v.ModifiedDate nor p.Color has an index. Yet we only see a single scan in this plan, on the Purchasing.Vendor table. Why? Well, we can’t tell from the GUI directly, so you have to look to the tool tips or the properties. The tool tip in this case actually proves helpful, as do the properties:

[Figure: PlanDetailsProperties, the operator tool tip and properties]

In short, the clustered index is used to seek out a number of rows, and then a secondary predicate is applied to those rows to further filter the results. You would never guess at that from the GUI alone. You have to look at the details of the execution plan to understand it. There are lots of other examples where you can only get the information from the properties, because not everything is available in the tool tip. This includes things like Optimization Level, Reason For Early Termination, Scan Direction, and a bunch of other things. Just remember to drill down to the properties in order to better understand your execution plans.
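
If you prefer to dig into those details away from the tool tips, everything the properties window shows lives in the plan XML. Here’s a minimal sketch that pulls the cached plan so you can open the XML and read the Seek Predicates and Predicate values yourself; the LIKE filter is just an illustrative way to find this query’s text in cache, so adjust it to your own query:

-- Pull the cached plan; clicking the query_plan column in SSMS opens it,
-- and the operator properties are all there in the XML.
SELECT  dest.text,
        deqp.query_plan
FROM    sys.dm_exec_query_stats AS deqs
        CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
        CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
WHERE   dest.text LIKE '%ProductReview%ProductVendor%';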

Why do you need information like this? Are we going to tune this query? Maybe we could help it by adding an index to the Vendor table. But, at least for this query with this date range, it appears we don’t need an index on the Product table. That may change depending on the number of rows returned from the Vendor table. A different data set can result in an entirely different execution plan with entirely different performance characteristics.

Query tuning is tough. That’s why I’ve put together a one-day seminar to get you started on it. There’s still room in Albany on July 25th. You can register here. I’ll also be teaching at SQL Connections in September in Las Vegas. Go here to register for this great event. In Belgium in October, I’ll be doing an all-day session on just execution plans at SQL Server Days. Go here to register for that event. When I did a similar session at the PASS Summit two years ago, it sold out. I’ll be presenting at Summit this year in Seattle in November. Go here to register. But do it early or you may be put on a waiting list.

Apr 10 2014

I’m a Traveling Man

We are coming into quite a busy time for my speaking schedule. I’m hitting the road. It does one thing for me that I truly love, I get to talk to people. So, if you have questions, want to chat, need to call me a pompous know-it-all to my face, I’ve got some opportunities for you.

Next week, April 13-16, is SQL Intersection. You can register by clicking here. The following week, I’ve got two events. First, on Friday, April 25th, Red Gate Software is hosting a free half-day SQL in the City Seminar in the Chicago area. We’ll be talking database deployment. Go here to register, but don’t wait; seats absolutely are limited. And, since this is a Red Gate event, at the end of the day I’ll buy you a beverage or two while we exchange war stories. The next day, Saturday, April 26th, is SQL Saturday Chicago, where I’ll be presenting a session. Check out the lineup and get yourself registered. That’s it for April.

May gets really fun. Saturday, May 3, is SQL Saturday Atlanta. This is one of those “I was there” events for the Atlanta area. I’ll be there. Saturday, May 17, is SQL Saturday Detroit. This one, at the moment, looks pretty intimate, but that means you get to hang out with Jeff Moden, Ginger Ford, Allen White, Tim Ford and ask questions until you run out of questions. I wouldn’t miss it if I lived in that area. Heck, I don’t live in that area and I’ll be there too. Then I get to go on my Carolina Cruise. I’m visiting three user groups in three days in the Carolinas. First up is Raleigh at the Triangle SQL Server User Group on the 20th. Then I get to Charlotte (and that was a great city for hosting the PASS Summit) on the 21st. Finally I’m off to Columbia and the Midland PASS Chapter on the 22nd. That’s going to be a blast. And we’re not done with May. On the 27th and 28th I’m going to hop the pond to speak at Techorama in Belgium. I’m terribly excited about this event. Maybe it’s just because I like Belgian beer, but it really does look pretty cool. Go here to get registered. And I love the countdown clock on the web page. That’s exactly how I feel.

In June I come back over to my side of the pond. There are some events we’re still setting up, but the one I know I’m going to is SQL Saturday Louisville on the 21st of June. The day before, on the 20th, I have an all-day seminar on query tuning. Click here to register. We should have another SQL in the City Seminar set up for June as well as a couple more SQL Saturday events. I’ll post once I learn more.

July is still pretty open (please, please, please, OH, PLEASE, I want to go to SQL Bits), but I do have another all-day seminar on query tuning set up for Albany. You can go here to register. That’s the day before SQL Saturday Albany. It’s going to be their first event, so let’s help make it a great one.

As the schedule for June and July solidifies I’ll publish another listing. Let’s get together and talk.

Mar 25 2014

Save Money On Your Training Server

You can spend less money. Some of us are lucky. We work for very large corporations that can easily set aside a spare desktop, or even space on a rack for a server, on which we can train. Others of us are not as lucky. We work for smaller organizations that have to be more careful with their money. Not only do we not get the extra machine to train on, but our laptops may be weak things that can’t run two or more VMs. In that case, how do you go about learning stuff? Spend your own money? Sure, it’s an option.

There are some very cheap servers available out there that won’t cost you even $1000 dollars to set up. And for pretty cheap you can buy some network attached storage to have your own little SAN-style setup. That’s very doable. Let’s break it down a little:

HP Proliant MicroServer G8: $549
Added Memory to 16GB: $209
240GB SSD: $129
Lenovo/Iomega 1TB NAS storage: $878

We’ve just spent $1765 for a decent little setup. So now you could run 3-5 VMs on this machine and you’re good to go. Of course, now you’ve got to maintain that system: patching, upgrades. What happens when it gets old? You’ve got to replace it. And what if you’re not using it? Then that was a lot of money spent.

Ah, but wait. Software. We need to get Windows server licensed and SQL Server. Let’s see:

Windows Server 2012 R2 Fundamentals: $501
SQL Server Developer Edition: $44

We’re now up to $2310. But… oh yeah, that server license doesn’t include licensing for the VMs, so let’s buy… 4. That’s enough for one server and 3 VMs. That’s an additional $1500, so now we’re up to $3810. Cool though, right? That’s not much money and we’re off and running.

Here’s a suggestion: even if you have to spend your own money, how about Azure? Currently, I’ve left three servers running on my account (not something I recommend, but I’ve been doing it as an experiment); between those, the storage they use, and the SQL Databases I have, I’m racking up a bill of about $80/month. That’s $960 in a year. Which means in about 3.9 years, I’ll have spent as much as you just did on that server sitting under your desk.

Yeah, I know. It runs somewhat faster, except when I burn a little cash and bump my servers up to 8 cores and 56GB of RAM for a test, then turn it back down, or even turn it off or deallocate it. Because you’re only going to pay for what you use. So if you just throw the VMs away between tests, you’re saving tons of money, way above and beyond what that hunk of iron under your desk cost. You can even estimate exactly what things are going to cost using the pricing calculator Microsoft provides.

But did I say pay? Not quite. You see, I have an MSDN account. That includes Azure credit. Anywhere from $50 to $150 per month. So, for $1199/yr, I can get $50 a month of Azure credit. That means, just buying an MSDN account, it’ll take me three years to equal what I spent on that box under the desk.

Oh, and that’s before we get to the electricity you paid.

Look, there’s a reason to buy iron. I believe in it. But, there are also reasons not to buy iron. Testing, training, personal use… maybe iron. Or, maybe it’s time to step into the 21st Century.

Mar 19 2014

Query Tuning Near You

It really is so much easier to just throw hardware at badly performing databases. Just buy a bigger, faster server with more and faster disks and you can put off the tuning work for another 6-9 months, easily. But for most of us, sooner or later, our performance problems get so big, or we just don’t have any more money to spend, that we’re stuck. We have to tune the queries. And frankly, query tuning is a pain in the nether regions.

But, after you’ve tuned queries 20 or 30 times, you start to recognize the patterns and it gets easier (never easy, just not as hard). But if you haven’t done it 20 or 30 times, what do you do? My suggestion: talk to someone who has done it 30 times (or even a couple of hundred times), like me, for example.

I have an all day session on tuning queries. It goes from understanding how the optimizer works (which will automatically lead you to write better queries), to how to gather performance metrics (so you know where the pain points are located), to reading execution plans (you need to know what has gone wrong with the query) to various mechanisms for fixing the query. This information is applicable to systems from SQL Server 2005 to SQL Server 2014 (sorry everyone still on 2000, it’s time to upgrade). The session is based on the books I’ve written about query tuning and execution plans, plus years and years of doing lots of query tuning.

Right now I’ve got two events scheduled. Before SQL Saturday #286 in Louisville, KY, I’ll be putting on this precon. Seating is limited, so don’t wait. You can go here to register. Then we can get together the next day at the SQL Saturday event to get some more education from all the great speakers there. Next, before SQL Saturday #302 in Albany, NY (their first one, ever), I’ll be hosting this. You can register by clicking here. Don’t miss the early bird special. Again, the next day will be filled with learning at the SQL Saturday event.

I’m working on taking this to other locations and venues. If you’re interested, please get in touch. I’ll do what I can to come to you.

If you have a particularly thorny query, bring it along with an actual execution plan. If we have time at the end of the day, I’ll take a look and make suggestions, live (uh, please, no sensitive patient data or anything like that).

Let’s get together and talk query tuning.

Mar 18 2014

Finding Mistakes

Ever had that moment where you start getting errors from code that you’ve tested a million times? I had that one recently. I had this little bit of code for pulling information directly from query plans in cache:

WITH XMLNAMESPACES(DEFAULT N'http://schemas.microsoft.com/sqlserver/2004/07/showplan'),
QueryPlans AS
(
SELECT RelOp.pln.value(N'@PhysicalOp', N'varchar(50)') AS OperatorName,
RelOp.pln.value(N'@NodeId',N'integer') AS NodeId,
RelOp.pln.value(N'@EstimateCPU', N'decimal(10,9)') AS CPUCost,
RelOp.pln.value(N'@EstimateIO', N'decimal(10,9)') AS IOCost,
dest.text
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
CROSS APPLY deqp.query_plan.nodes(N'//RelOp') RelOp (pln)
)

SELECT  qp.OperatorName,
        qp.NodeId,
        qp.CPUCost,
        qp.IOCost,
        qp.CPUCost + qp.IOCost AS EstimatedCost
FROM    QueryPlans AS qp
WHERE   qp.text = 'some query or other in cache'
ORDER BY EstimatedCost DESC;

I’ve probably run this… I don’t know how many times. But… I’m suddenly getting an error:

Msg 8114, Level 16, State 5, Line 7
Error converting data type nvarchar to numeric.

What the hell? There is nowhere this should be occurring. I dig through the query over and over and I can’t figure it out. Until… I finally notice that one of the operators in the plan has the CPUCost value stored as a float:

EstimateCPU="1e-006"

Ummmm, since when? Since forever. I’ve just been lucky with my code. I’d just never hit a sufficiently small cost in the plans before. I hadn’t bothered to look for the actual data type in use in the schema definition, although it’s right there:

<xsd:attribute name="EstimateCPU" type="xsd:double" use="required"/>

[Image: Samuel L. Jackson in The Long Kiss Goodnight]

I never did one thing right in my life, you know that? Not one. That takes skill.

What did I do wrong? I was looking at the data output from the queries and in the plans rather than looking at the structure to know what to expect. It’s the kind of thing I would never do with T-SQL. I would always look to the table structure to know what data type a given column was. But in this case, with the XML, I looked at the data and made an assumption. And we all know what that means. It makes an ass out of you and mption.

Or, in this case, me and mption.
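
If you want to see the conversion failure in isolation, here’s a minimal repro sketch (no plan cache required); the variable is just standing in for the value the plan handed me:

DECLARE @EstimateCPU nvarchar(25) = N'1e-006';

-- What my original query was effectively doing: fails with
-- Msg 8114, Error converting data type nvarchar to numeric.
--SELECT CONVERT(decimal(10,9), @EstimateCPU);

-- What the schema actually declares (xsd:double) converts just fine.
SELECT CONVERT(float, @EstimateCPU);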

Anyway, the corrected query is a pretty trivial change:

WITH XMLNAMESPACES(DEFAULT N'http://schemas.microsoft.com/sqlserver/2004/07/showplan'),
QueryPlans AS
(
SELECT RelOp.pln.value(N'@PhysicalOp', N'varchar(50)') AS OperatorName,
RelOp.pln.value(N'@NodeId',N'integer') AS NodeId,
RelOp.pln.value(N'@EstimateCPU', N'float') AS CPUCost,
RelOp.pln.value(N'@EstimateIO', N'float') AS IOCost,
dest.text
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
CROSS APPLY deqp.query_plan.nodes(N'//RelOp') RelOp (pln)
)

SELECT  qp.OperatorName,
        qp.NodeId,
        qp.CPUCost,
        qp.IOCost,
        qp.CPUCost + qp.IOCost AS EstimatedCost
FROM    QueryPlans AS qp
WHERE   qp.text = 'SELECT * FROM HumanResources.vEmployee AS ve'
ORDER BY EstimatedCost DESC;

But I do feel bad if anyone has been using this the way I showed it. ‘Cause, yeah, I’ve demonstrated with this code in the past. Oops. It does make a great point, though: especially when working with a public XML schema like this, it pays to go look at that schema the same way you would a table, in order to ensure that you’re using the correct data types.

Mar 11 2014

sp_updatestats Is Not Smart

No, I don’t mean the use of sp_updatestats is not smart. It’s a fine, quick mechanism for getting statistics updated in your system. But the procedure itself is not smart. I keep seeing stuff like “sp_updatestats knows which statistics need to be updated” and similar statements.

Nope.

Not true.

Wanna know how I know? It’s tricky. Ready? I looked at the query. It’s there, in full, at the bottom of the article (2014 CTP2 version, just in case yours is slightly different, like, for example, no Hekaton logic). Let’s focus on just this bit:

if ((@ind_rowmodctr <> 0) or ((@is_ver_current is not null) and (@is_ver_current = 0)))

The most interesting part is right at the front: @ind_rowmodctr <> 0. That value is loaded by the cursor and comes from the rowmodctr column in sys.sysindexes. In short, the “smart” aspect of sp_updatestats is that it assumes if there have been any modifications at all, then the statistics are worth updating. We can argue for hours over how exactly you determine whether or not statistics are sufficiently out of date to warrant an update, but I’d be willing to bet that the sophisticated answers are not going to include just finding everything that’s been touched.
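
If you want to make that judgment yourself, here’s a sketch that surfaces the actual modification counts per statistic so you can apply whatever threshold you like (this assumes sys.dm_db_stats_properties is available, so SQL Server 2008 R2 SP2 or 2012 SP1 and later):

-- Per-statistic modification counts on user tables; decide for yourself
-- what counts as "out of date" instead of treating any change as enough.
SELECT  OBJECT_NAME(s.object_id) AS TableName,
        s.name AS StatisticName,
        sp.last_updated,
        sp.rows,
        sp.modification_counter
FROM    sys.stats AS s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.modification_counter DESC;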

Now, don’t get me wrong. I’m not implying, suggesting or stating that sp_updatestats shouldn’t be used. It should. It’s fine. Just be very clear about what it does and how it does it.

Just a reminder, I’m putting on an all day seminar on query tuning in Louisville on June 20th, 2014. Seats are going fast, so please sign up early.

sp_updatestats:

USE [master]
GO
/****** Object:  StoredProcedure [sys].[sp_updatestats]    Script Date: 3/6/2014 8:09:58 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

ALTER procedure [sys].[sp_updatestats]
	@resample char(8)='NO'
as

	declare @dbsid varbinary(85)

	select @dbsid = owner_sid
		from sys.databases
		where name = db_name()

	-- Check the user sysadmin
	if not is_srvrolemember('sysadmin') = 1 and suser_sid() <> @dbsid
	begin
		raiserror(15247,-1,-1)
		return (1)
	end
	-- cannot execute against R/O databases  
	if DATABASEPROPERTYEX(db_name(), 'Updateability')=N'READ_ONLY'
	begin
		raiserror(15635,-1,-1,N'sp_updatestats')
		return (1)
	end

	if upper(@resample)<>'RESAMPLE' and upper(@resample)<>'NO'
	begin
		raiserror(14138, -1, -1, @resample)
		return (1)
	end

	-- required so it can update stats on ICC/IVs
	set ansi_warnings on
	set ansi_padding on
	set arithabort on
	set concat_null_yields_null on
	set numeric_roundabort off

	declare @exec_stmt nvarchar(4000)		-- "UPDATE STATISTICS [sysname].[sysname] [sysname] WITH RESAMPLE NORECOMPUTE"
	declare @exec_stmt_head nvarchar(4000)	-- "UPDATE STATISTICS [sysname].[sysname] "
	declare @options nvarchar(100)			-- "RESAMPLE NORECOMPUTE"

	declare @index_names cursor

	declare @ind_name sysname
	declare @ind_id int
	declare @ind_rowmodctr int
	declare @updated_count int
	declare @skipped_count int

	declare @sch_id int
	declare @schema_name sysname
	declare @table_name sysname
	declare @table_id int
	declare @table_type char(2)
	declare @schema_table_name nvarchar(640) -- assuming sysname is 128 chars, 5x that, so it's > 128*4+4

	declare @compatlvl tinyint

	-- Note that we looked up from sys.objects$ instead sys.objects since some internal tables are not visible in sys.objects
	declare ms_crs_tnames cursor local fast_forward read_only for
		select name, object_id, schema_id, type from sys.objects$ o
		where o.type = 'U' or o.type = 'IT'
	open ms_crs_tnames
	fetch next from ms_crs_tnames into @table_name, @table_id, @sch_id, @table_type

	-- determine compatibility level
	select @compatlvl = cmptlevel from sys.sysdatabases where name = db_name()

	while (@@fetch_status <> -1) -- fetch successful
	begin
		-- generate fully qualified quoted name
		select @schema_name = schema_name(@sch_id)
		select @schema_table_name = quotename(@schema_name, '[') +'.'+ quotename(rtrim(@table_name), '[')

		-- check for table with disabled clustered index
		if (1 = isnull((select is_disabled from sys.indexes where object_id = @table_id and index_id = 1), 0))
		begin
			-- raiserror('Table ''%s'': cannot perform the operation on the table because its clustered index is disabled', -1, -1, @tablename)
			raiserror(15654, -1, -1, @schema_table_name)
		end
		else
		begin
			-- filter out local temp tables
			if ((@@fetch_status <> -2) and (substring(@table_name, 1, 1) <> '#'))
			begin
				-- reset counters for this table
				select @updated_count = 0
				select @skipped_count = 0

				-- print status message
				--raiserror('Updating %s', -1, -1, @schema_table_name)
				raiserror(15650, -1, -1, @schema_table_name)

				-- initial statement preparation: UPDATE STATISTICS [schema].[name]
				select @exec_stmt_head = 'UPDATE STATISTICS ' + @schema_table_name + ' '

				-- using another cursor to iterate through
				-- indices and stats (user and auto-created)
				-- Hekaton indexes do not appear in sys.sysindexes so we need to use sys.stats instead
				-- Hekaton indexes do not support rowmodctr so we just return 1 which will force update stats
				-- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables
				if ((@table_type = 'U') and (1 = OBJECTPROPERTY(@table_id, 'TableIsMemoryOptimized')))	-- Hekaton tables
				begin
					set @index_names = cursor local fast_forward read_only for
						select name, stats_id, 1 as rowmodctr
						from sys.stats
						where object_id = @table_id and indexproperty(object_id, name, 'ishypothetical') = 0 
						order by stats_id
				end
				else
				begin
					set @index_names = cursor local fast_forward read_only for
						select name, indid, rowmodctr from sys.sysindexes
						where id = @table_id and indid > 0 and indexproperty(id, name, 'ishypothetical') = 0 
						and indexproperty(id, name, 'iscolumnstore') = 0
						order by indid
				end

				open @index_names
				fetch @index_names into @ind_name, @ind_id, @ind_rowmodctr

				-- if there are no stats, skip update
				if @@fetch_status < 0
					--raiserror('    %d indexes/statistics have been updated, %d did not require update.', -1, -1, @updated_count, @skipped_count)
					raiserror(15651, -1, -1, @updated_count, @skipped_count)
				else 
				begin
					while @@fetch_status >= 0
					begin
						-- create quoted index name
						declare @ind_name_quoted nvarchar(258)
						select @ind_name_quoted = quotename(@ind_name, '[')

						-- reset options
						select @options = ''

						declare @is_ver_current bit
						select @is_ver_current = stats_ver_current(@table_id, @ind_id)

						-- note that <> 0 should work against old and new rowmodctr logic (when it is always > 0)
						-- also, force a refresh if the stats blob version is not current
						if ((@ind_rowmodctr <> 0) or ((@is_ver_current is not null) and (@is_ver_current = 0)))
						begin
							select @exec_stmt = @exec_stmt_head + @ind_name_quoted

							-- Add FULLSCAN for hekaton tables
							-- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables
							if ((@table_type = 'U') and (1 = OBJECTPROPERTY(@table_id, 'TableIsMemoryOptimized')))	-- Hekaton tables
								select @options = 'FULLSCAN'

							-- add resample if needed
							else if (upper(@resample)='RESAMPLE')
								select @options = 'RESAMPLE '

							if (@compatlvl >= 90)
								-- put norecompute if local properties are set to AUTOSTATS = OFF
								-- note that ind name is unique within the object
								if ((select no_recompute from sys.stats where object_id = @table_id and name = @ind_name) = 1)
								begin
									if (len(@options) > 0) select @options = @options + ', NORECOMPUTE'
									else select @options = 'NORECOMPUTE'
								end

							if (len(@options) > 0)
								select @exec_stmt = @exec_stmt + ' WITH ' + @options

							--print @exec_stmt
							exec (@exec_stmt)
							--raiserror('    %s has been updated...', -1, -1, @ind_name_quoted)
							raiserror(15652, -1, -1, @ind_name_quoted)
							select @updated_count = @updated_count + 1
						end
						else
						begin
							--raiserror('    %s, update is not necessary...', -1, -1, @ind_name_quoted)
							raiserror(15653, -1, -1, @ind_name_quoted)
							select @skipped_count = @skipped_count + 1
						end
						fetch @index_names into @ind_name, @ind_id, @ind_rowmodctr
					end
					--raiserror('    %d index(es)/statistic(s) have been updated, %d did not require update/disabled.', -1, -1, @updated_count, @skipped_count)
					raiserror(15651, -1, -1, @updated_count, @skipped_count)
				end
				deallocate @index_names
			end
		end
		print ' '
		fetch next from ms_crs_tnames into @table_name, @table_id, @sch_id, @table_type
	end
	raiserror(15005,-1,-1)
	deallocate ms_crs_tnames
	return(0) -- sp_updatestats

Feb 27 2014

Let’s Talk Query Tuning

I spend quite a bit of time writing about query tuning on this blog. I’ve written (re-written and am actively re-writing) books on query tuning. But what I like most is talking about query tuning. I love giving sessions at various events on different aspects of query tuning, but, what I like the most is spending a whole day, trying to do a complete brain dump to get as much information out there as possible. Sound attractive? Then I’ve got a great deal for you. Come to Louisville on June 20th, 2014. We will talk query tuning at length. You have a specific question? Let’s get it answered. Then, the next day, we can all go to SQL Saturday 286 there in Louisville to get more learning and some serious networking. What’s not to like?

Feb 26 2014

SQL Intersection, Spring 2014

I am terribly jazzed to be involved with this amazing event, SQL Intersection. It’s featuring some truly amazing speakers presenting on important topics. It’s being held here on the East Coast, right near the Mouse, the Duck and the Dog. This is one of those conferences you need to get to. Check out the lineup. That is some of the smartest, most capable people I know. I’m quite humbled to be on the list with them, so I’ll do my level best to deliver good content. Look at the sessions. While I don’t know precisely when SQL Server 2014 is coming out, I’m sure it’s real soon, so this will be a great place to get a leg up on understanding what this new set of technology offers, or just to learn more about SQL Server in general, Azure, SSRS and SSIS.

Click here now to register for this special event.

Feb 18 2014

The CASE Statement and Performance

In case you don’t know, this query:

UPDATE dbo.Test1
SET C2 = 2
WHERE C1 LIKE '%33%';

Will run quite a bit slower than this query:

UPDATE dbo.Test1
SET C2 = 1
WHERE C1 LIKE '333%';

Or this one:

UPDATE dbo.Test1
SET C2 = 1
WHERE C1 = '333';

That’s because the last two queries have arguments in the filter criteria that allow SQL Server to use the statistics on an index to look for specific matching values, and then use the balanced tree (B-Tree) of the index to retrieve just those rows. The argument in the first query requires a full scan against the index because there is no way to know what values might match, and therefore no path through the index to simply retrieve them.

But, what if we do this:

UPDATE dbo.test1
SET C2 =
CASE
WHEN C1 LIKE '19%' THEN 3
WHEN C1 LIKE '25%' THEN 2
WHEN C1 LIKE '37%' THEN 1
END;

We’re avoiding that nasty wild card search, right? So the optimizer should just be able to immediately find those values and retrieve them… Whoa! Hold up there pardner. Let’s set up a full test:

IF (SELECT OBJECT_ID('Test1')) IS NOT NULL 
    DROP TABLE dbo.Test1; 
GO
CREATE TABLE dbo.Test1 (C1 VARCHAR(50),C2 INT, C3 INT IDENTITY);

SELECT TOP 1500
        IDENTITY( INT,1,1 ) AS n
INTO    #Nums
FROM    Master.dbo.SysColumns sC1,
        Master.dbo.SysColumns sC2;
        
INSERT  INTO dbo.Test1
        (C1,C2)
        SELECT  n, n
        FROM    #Nums;
        
DROP TABLE #Nums;

CREATE CLUSTERED INDEX i1 ON dbo.Test1 (C1) ;


UPDATE dbo.test1
SET C2 =
CASE
WHEN C1 LIKE '%42%' THEN 3
WHEN C1 LIKE '%24%' THEN 2
WHEN C1 LIKE '%36%' THEN 1
END

DBCC FREEPROCCACHE()

UPDATE dbo.test1
SET C2 =
CASE
WHEN C1 LIKE '19%' THEN 33
WHEN C1 LIKE '25%' THEN 222
WHEN C1 LIKE '37%' THEN 11
WHEN C1 LIKE '22%' THEN 5
END

I added the extra CASE evaluation in the second query in order to get a different query hash value.

Here are the execution plans from the two queries:

[Figure: TwoPlans, the execution plans for the two UPDATE statements]

They’re pretty much identical. Apart from the difference I forced in the hash values, the only distinction is in the details of the Compute Scalar operator. So what’s going on? Shouldn’t that second query use the index to retrieve the values? After all, it avoided that nasty wild card comparison, right? Well, yes, but… we introduced a function on the column. What function, you ask? The CASE statement itself.

This means you can’t use a CASE statement in this manner and expect the index to help: it bypasses the index and statistics in the same way that using functions against the columns does.
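
If you do need the index and its statistics in play, one option is to break the logic into separate, sargable updates. Here’s a sketch against the same dbo.Test1 test table; note that, unlike the CASE version, it leaves non-matching rows untouched rather than setting them to NULL:

-- Each predicate stays in the WHERE clause, where the optimizer can use
-- the clustered index on C1 and its statistics to find the matching rows.
UPDATE dbo.Test1 SET C2 = 3 WHERE C1 LIKE '19%';
UPDATE dbo.Test1 SET C2 = 2 WHERE C1 LIKE '25%';
UPDATE dbo.Test1 SET C2 = 1 WHERE C1 LIKE '37%';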

Jan 15 2014

Database in Source Control

Many years ago, I was working with a great DBA. Seriously, a very smart and capable guy. He told me, “We need to put the database into source control, just like app code.” And I just laughed. Not because I disagreed with him. I knew he was right, but I had tried, several times, to do just that. See, I’m not really a DBA. I’m a developer. I knew that code (and all the T-SQL that describes databases is code) needed to be versioned, sourced, tracked and audited. But great googly moogly, it was not an easy thing to do.

I first tried just exporting the entire database into a script and then occasionally checking that script into source control. Yay! Mission Accomplished… Well, I had a database in source control, yes, but I didn’t have any of the great stuff that went with it, most of all, a way to deploy from source control.

Next, I tried just storing the stuff that changed most: procedures. But I had to store everything as an ALTER, or I had to store it all as a DROP/CREATE along with the security settings and extended properties. I tried both. Neither approach satisfied, and it was WAY too easy for someone else to modify a script the wrong way and bring the entire thing crashing down. Not to mention that any and all structural changes outside of stored procedures had to be built manually, or generated with a compare tool (but not the procs, ’cause we have those in source control, remember) by comparing prod & dev or qa & dev or something & something… Oh yeah, that was fun.

Man, it was painful back then. But now, there are several ways you can do this using Microsoft and/or 3rd party tools.

Why aren’t you?

Seriously, most of you aren’t. I’ve been going all over the country teaching a class on database deployments (the next one is in Cleveland if you’re interested) and I know most people don’t put their databases into source control. Of course, I’m pretty sure most people don’t talk to their Dev teams if they can help it, and it does seem like most Dev teams are on less than a perfectly chatty basis with their DBAs. Is that the cause? The thing is, you are experiencing pain in terms of time spent, mistakes made, slower deployments, less frequent deployments, possibly even downtime, all because you don’t do your deployments right. And deployments start from source control.

Developers have spent the last 30 years or so figuring out better and better ways to get functional code into the hands of their customers (internal or external) as quickly and as accurately as possible. Over the same 30 years, DBAs have been figuring out how to better and better protect the information in our charge, ensuring that it’s backed up, available, performing well, and always on (to use a phrase). My suggestion to you, data pro: talk to your developers. Figure out what they do and how they do it. Take advantage of their years and years of process improvement and apply what they’ve learned to your database development and deployment.

There’s a new concept growing out there, one that’s already fairly well established within the *nix communities: DevOps. It’s the realization that the world doesn’t begin and end with your database/server/application. Instead, your database/server/application is completely dependent on the database/server/application it needs in order to run. Notice, no one is more important here. We’re talking about creating mechanisms for teams to deliver functionality to their clients (internal or external). And it all starts with the realization that there are parts of the process that some of us are better at than others. Developers know how to develop and deploy within teams. Let’s learn from them. And the start, well, that’s source control.

So, is your database under source control… for real? If not, get it there. The excuses I used to have are gone. That means the excuses you have now are probably gone too.

Fair warning, I may use the term DevOps more in the future.