Jul 23 2014

Challenge Accepted

There seemed to be some question whether my comfort level with my own masculinity would prevent me from wearing these:

Fluffy

Oh please! Couldn’t we have picked something a little more challenging? Anyway, here’s the deal: you donate to a good cause, Doctors Without Borders. We hit 10K and I’ll sport those lovely rainbow whatever-they-are at the PASS Summit 2014. Sound good?

I’ll go one better. You double the goal, make it hit 20K, and I’ll present my session while wearing the rainbow whosimawatchits. BOOM!

BRING-IT

Jul 21 2014

Victims of Success

I took part in the PASS Summit 2014 selection committee this year because I was really curious about seeing how the sausage gets made. I’ve seen how actual sausage gets made and I still eat sausage.  Despite a few hiccups and communication issues, internal and external, I think the selection process for the Summit went really well this year. But, there was still some controversy. Being a naturally pushy person, I got involved in the controversy, for good or ill, and subsequently have had conversations with many people about the selection process (which, reiterating, I think went extremely well overall). But, the one thing that kept coming up over and over was a simple question:

How come I/PersonX didn’t get picked?

The easy answer is because you/PersonX had a horrible abstract. But you know what, in probably most cases, that’s not true. Good abstracts by good people didn’t get selected, so what the heck? I think the more complex answer does not go back to the selection committee or the selection criteria or the selection process. Do I think some improvements are possible there? Yes, and I’m putting my foot where my mouth is (or something) and joining the committees to try to make some tweaks to the system to make it better (and really, tweaks are all we need; I want to repeat, at the risk of being ad nauseam about it, that the process went well and worked great and I’m happy I took part and I think the outcome is pretty darned good). No, the real problem lies elsewhere: SQL Saturdays.

I’m not saying SQL Saturdays are themselves a problem. What I’m saying is that PASS took on the whole SQL Saturday concept for several reasons, one of which was for it to act as a farm team for speakers. This will be my 10th Summit. Looking back to 10 years ago, while I really loved the event, oh good god have the speakers improved. I remember sitting in sessions with people who were mumbling through their presentations so much that, even with a microphone, you couldn’t hear half of what they said. Slide decks that consisted of 8-12 pages of text (yes, worse than Paul Randal’s slides, kidding, don’t hit me Paul). Speakers who really, clearly, didn’t have a clue what they were talking about. It was kind of rocky back then. I learned my second year that you had to talk to people to find out, not just which sessions sounded good, but which speakers were going to present those sessions well enough that it would be worthwhile. Why were there so many weak presenters? Well, because there was almost nothing between speaking at local user groups and speaking at Summit (I made the leap that way). There were a few code camps around, a couple of other major events, various schools and technical courses, and Summit. I don’t know how the old abstract/speaker review process worked (and I apologize to whoever read my first abstract because I know now just how horrific it was and I’m so sorry I wasted your time), but I’m pretty sure they were desperate to get enough submissions that sounded coherent with a speaker attached that probably could get the job done. Not any more.

Now, people are getting lots of opportunities to present at SQL Saturday events all over the world. And it’s working. We’re growing speakers. We’re growing good speakers. Don’t believe me? Then you go to two or three events in a month, sit through 8-12 sessions, mostly by newer people, not Brent Ozar, not Denny Cherry, not Kim Tripp, and you review them, each, individually, then go back and try to pick the best one. Oh yeah, there’s going to be a few dogs in the bunch, but overall, you’re going to find a great bunch of presentations by a great bunch of speakers. Our farm system is working and working well. But there’s a catch.

Because we have upped the bar pretty radically on all the introductory level speakers (and if you’re thinking about presenting, don’t let that slow you down, everyone starts at zero and goes up), that means the competition at the top (and yes, I do consider the Summit the top in many ways, not all, see SQLBits) is becoming more and more fierce. That means my abstracts probably need quite a bit more polish than they’re getting (and so do yours) because there’s a whole slew of speakers coming up who are writing killer abstracts. That means I need to really be concerned about the evaluations (despite the fact that I get dinged because the stage is low, the room is hot/cold, lunch didn’t have good vegetarian choices, England left the Cup early, all outside my control) because there are new speakers who are knocking it out of the park. In short, you/I/PersonX didn’t get picked because the competition has heated up in a major way.

In short, a sub-section of the community, defined by those who wish to speak, has become a victim of the success of the farm team system as represented by SQL Saturday. On the one hand, that sucks because I now need to work harder than ever on my abstracts; on the other, we’re going to see very few instances of really bad presentations at Summit. We’ve improved the brand and the community. It’s a good thing.

Jul 15 2014

Execution Plan Details

I wouldn’t say it’s common knowledge that you should look at execution plans when tuning queries, but it’s not exactly uncommon knowledge either. But, people tend to get focused on just looking at the graphical part of the plan and there’s just not enough information there. Let’s take a look at a query:

SELECT  pc.Name,
        ps.Name,
        v.Name,
        pr.ReviewerName,
        p.Name,
        v.ModifiedDate,
        p.Color
FROM    Production.Product AS p
        LEFT JOIN Production.ProductReview AS pr
        ON pr.ProductID = p.ProductID
        JOIN Production.ProductSubcategory AS ps
        ON p.ProductSubcategoryID = ps.ProductSubcategoryID
        JOIN Production.ProductCategory AS pc
        ON pc.ProductCategoryID = ps.ProductCategoryID
        JOIN Purchasing.ProductVendor AS pv
        JOIN Purchasing.Vendor AS v
        ON v.BusinessEntityID = pv.BusinessEntityID
        ON pv.ProductID = p.ProductID
WHERE   v.ModifiedDate = '2006-02-17 00:00:00.000'
AND p.Color LIKE 'Y%';

This generates an execution plan that looks like this:

PlanDetails

Neither v.ModifiedDate nor p.Color has an index. Yet, we only see a single scan in this plan, on the Purchasing.Vendor table. Why? Well, we can’t tell from the GUI directly, so you have to look to the tool tips or the properties. The tool tip in this case actually proves helpful, as do the properties:

PlanDetailsProperties

In short, the clustered index is used to seek out a number of rows and then a secondary (residual) predicate is applied to those few rows to further filter the results. You would never even guess at that using the GUI alone. You have to look to the details of the execution plan to understand it. There are lots of other examples where you can only get the information from the properties because not everything is available in the tooltip. This includes things like Optimization Level, Reason For Early Termination, Scan Direction, and a bunch of other things. Just remember to drill down to the properties in order to better understand your execution plans.
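
If you want those details without mousing around in the GUI, the plan is also available as XML. Here’s a rough sketch (not part of the original post) that pulls the cached plan for this query so you can read the Seek Predicates and the residual Predicate properties on that seek operator directly:

-- Sketch only: grab the cached plan XML for the query above. The filter on the
-- query text is just a convenient way to find it; adjust as needed.
SELECT  st.text,
        qp.query_plan
FROM    sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
        CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE   st.text LIKE '%Production.ProductReview%'
        AND st.text NOT LIKE '%dm_exec_cached_plans%';

Click the query_plan column in SSMS and you get the graphical plan back, but the XML underneath it carries every property, including the ones the tooltip leaves out.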

Why do you need information like this? Are we going to tune this query? Maybe we could help it by adding an index to the Vendor table. But, at least for this query with this date range, it appears we don’t need an index on the Product table. But, that may change depending on the number of rows returned from the Vendor table. A different data set can result in an entirely different execution plan with entirely different performance characteristics.

Query tuning is tough. That’s why I’ve put together a one-day seminar to get you started on it. There’s still room in Albany, on July 25th. You can register here. I’ll also be teaching at SQL Connections in September in Las Vegas. Go here to register for this great event. In Belgium in October, I’ll be doing an all day session on just execution plans at SQL Server Days. Go here to register for this event. When I did a similar session at the PASS Summit two years ago, it sold out. I’ll be presenting at Summit this year in Seattle in November. Go here to register. But do it early or you may be put on a waiting list.

Jul 11 2014

Speaker of the Month, July 2014

Another month, another bunch of great presentations. I almost don’t want to do this anymore. It’s hard. I sit through a presentation and I think, “Well, here’s the winner this month.” Then I go to another presentation and I think, “Well, fudge, now one of these people loses.” Then I go to a third and I’m simply blown away. And now I have to pick. Well, it’s hard. So let me do this: I’m going to declare two winners this month, but only review one of them. Hey, my blog, my rules.

First, I want to award speaker of the month for July 2014 to Wayne Sheffield(b|t) and his presentation Table Variables and Temp Tables that I saw at SQL Saturday 294.

What’s my measure? That I learned stuff and was entertained. Well, I pretty much guarantee, unless you too are an MCM (and none of you can ever become one, ever again), you’ll probably learn something from Wayne too. This was a great presentation. I liked the way he started off with a “What do you think” set of slides to kind of get the audience primed for the material that was coming. It was a good approach, much better in a lot of ways than simply giving an agenda. And the information, scads of it, just flowed out of the presentation. I picked up a number of things that I didn’t know about some of the innards of cardinality estimates on table variables (during recompiles). The demos were absolute perfection. They did an excellent job of making the points in an extremely clear manner. And Wayne told a joke that I immediately stole and used the very next week while presenting (sorry Wayne, it was that good; I did give you attribution, afterwards). In short, informative, structured, educational, entertaining, a great presentation.

There were a few issues that Wayne and I talked about. He has too much information to give out in one hour. Because of this, Wayne tends not to stop or pause to ensure the audience got his point and just moves on. It’s frankly a mixed bag. I think he ought to pause to ensure people are understanding his concepts (because they’re great), but he has a lot to present, so he can’t waste a lot of time (because his stuff is great). Tough conundrum, but I’d suggest trimming some material and ensuring full understanding, though that’s one that people can honestly disagree on. While his jokes are funny when he tells them, the overall presentation style was just a little dry and slightly monotone. I think that’s just a question of a little practice more than anything. Maybe chatting up the crowd before you start, to loosen yourself up, will help.

That’s it. It was a great presentation with tons and tons of wonderful information. I’m quite happy I attended it. You should track it down too.

Second, and not second place, just my other awardee of the month: Louis Davidson(b|t) simply blew my socks off with his presentation on Database Design Fundamentals at SQL Saturday 286. Louis is an MVP, author and frequent presenter, and oh my god, now I know why. It’s like a switch flips and Louis, who is a pretty quiet and unassuming guy, turns into SUPER-PRESENTER! And a tsunami wall of information comes hurtling at you along with quips and experience and stories. It was one of the single greatest presentations I’ve seen in, well, forever. So if I didn’t bring up that fact here on the blog, I’d be doing him an utter disservice. But… my blog, my rules; he’s so good, I didn’t want to just hand him the title and be done. It wouldn’t be fair to all the other amazing presenters I saw to have them upstaged by a consummate professional like Louis.

Please people, help me out, do really horrible presentations, at least some of the time, so I don’t have to actually think about this stuff. I’m lazy and you’re making me work.

Jul 08 2014

Worst Day of a DBA’s Life

Red Gate Software is running a campaign around coping with the worst day of a DBA’s life. We’ve been posting some really fun stories with, I hope, a kernel of useful information inside each. Chances are, if your DBA career has been like mine, your worst days don’t involve explosions and car chases. But now they’re asking people to write up their own stories: what was the worst day in your life as a DBA? I know, I know, first world problems, right? Regardless, I have yet to put a company out of business or kill anyone with any errors I’ve made, although I’ve worked at places where either was possible. But the one day that just stands out, well, it started about three weeks ahead of the bad day itself.

I was working for an internet startup. I was very much still learning the ropes as a DBA, although I had helped design a system on SQL Server 7.0 that was collecting about 1GB of data a day. Back in those days, that was big data. But, frankly, I wasn’t doing the monitoring correctly. I was doing a lot of manual checks and manual maintenance, stuff I should have taken the time to automate. Live & learn, right? Anyway, because I was taking a lot of time out of each day to do maintenance and run checks on the systems, I wasn’t spending lots of time supporting the development teams. One day, one of the managers came in and said, “No more maintenance. Things should be fine. Spend time on development.” And he was serious. I argued and lost. So I started spending a lot of time doing development and let the maintenance slide. Fast forward three weeks: things had largely been stable, but I didn’t have monitoring in place, so I wasn’t noticing that we were running a little hot on transactions. The transaction log was bigger than it had been. And then disaster struck. The backup drive filled. I didn’t notice. Transaction log backups started failing. I didn’t have alerts. The log drive filled, all the way, and our 24/7, zero-downtime web site went kablooey.

It was at 2 in the afternoon or something, so my fellow DBAs and I were right there. We immediately tried to back up the log, but it wouldn’t back up. We tried adding a log file. Didn’t work. Then we started getting creative. We tried every possible thing we could think of. Some of them failed quickly, so we tried something else. Some of them took hours to fail, making us think we had fixed the problem. It took us 48 hours of failed attempts before we finally admitted defeat, went to the last good backup and restored the database, losing about 12 hours’ worth of transactions. It was a nightmare.

The good news was, our directive to stop doing maintenance was lifted. We immediately went about setting up alerts and other checks so that we wouldn’t get surprised like that ever again. It wasn’t fun, but it was a great learning experience. Plus, all the troubleshooting for 48 hours straight built excellent camaraderie within the team. That said, I’d rather have avoided the whole thing, and could have with proper monitoring and alerting.
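
If you’re wondering where “proper monitoring” even starts, here’s a minimal sketch of the kind of checks that would have caught this early. The thresholds and the alerting mechanism (Agent job, email, a monitoring tool) are left to you; this is a starting point, not what we actually ran back then.

-- Log size and percent full for every database on the instance.
DBCC SQLPERF(LOGSPACE);

-- Databases whose log can't clear; LOG_BACKUP here means log backups
-- aren't keeping up (or aren't happening at all).
SELECT  name,
        log_reuse_wait_desc
FROM    sys.databases
WHERE   log_reuse_wait_desc <> 'NOTHING';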

Jul 03 2014

Reflections on the 2014 PASS Summit Selection Process

Oh we are a bunch of high school kids at heart. Maybe high school never ends (and there’s a nightmare, god I hated high school). But, there’s been drama about the 2014 PASS Summit sessions and the Selection Committee’s work. I was on the committee. I worked as a part of the team responsible for rating sessions for the Azure track (said track is gone, more on that later). As self-serving a statement as this is, I think we did a good job. Further, I think the process worked. You can read the official explanation of the process here. Amy did great work and deserves your thanks. All the volunteers who reviewed over 900 submissions from more than 300 people, ON THEIR OWN TIME, FOR FREE, also deserve your thanks. The vitriol directed at the PASS organization over the outcome of this selection process is not directed only at the Board. It’s also directed at the volunteers. And, as a volunteer, that sucks.

The team I worked on rated, I forget, 50 sessions I think. We had to read through them and give them a score based on several criteria. We also had to write comments on each and every session. I was dinged by HQ for not writing a comment on a session that I gave 5s to on the ratings (so I commented something like “Can’t wait to see this at Summit”). We were only given 10 slots to fill, so that means 40 sessions got kicked to the curb. That’s a lot of people who didn’t get selected. And not getting selected sucks (yes, I do know, I’ve been rejected by a number of events this year, big ones, even ones I’ve spoken at previously, not whining, just pointing out that I don’t have a secret method for getting accepted). Our track actually got eliminated and the sessions that we selected were distributed to other tracks. Also, a couple of sessions we rated highly didn’t do so well when the speaker scores were applied, so there was some shift there (one thing PASS could improve: give us some indication of the secret sauce there; we know there is one, but a little understanding of how it’s applied would help). But overall, the sessions we rated highly got selected. Congratulations and well done to those speakers. Just look at the people presenting, many for the first time. That’s going to be an absolutely awesome event. And once more, thank the volunteers for doing all that work.

So, some of you are now thinking, “Oh, Grant’s on the side of PASS” (well, actually, yes, I am, and so should you be), “Grant has been told to be nice and play good and not be critical” (even though I’ve already made a criticism about the magic numbers and I was tweeting almost literally threatening messages this week), “Grant got selected so he’s being a <insert bad name here> about the whole thing” (I may or may not be a <insert bad name here> but I don’t agree that I’m being one about this), or maybe you’re on the other side: “OMG! He’s criticizing PASS in any regard, the HUMANITY! Have you no decency, sir, at long last?” (no, not really).

Remember those comments that I had to write for every abstract, including the great ones? I put a small critical review of the abstract in every one (OK, not the one that I gave 5s to). I said what was wrong with the abstract in my subjective opinion. And let’s be perfectly clear about that (channeling President Obama): they’re my opinions. If I thought you didn’t define the problem space your presentation was meant to address, I said that. You disagree? OK. If I thought your very clever and witty title seriously detracted from the clarity of what the session was about, and it wasn’t that witty, I said that too. You’re the wittiest person you know and everyone says so? OK. My opinion may not jibe with yours. But, it’s the one thing I’ve seen everyone who has ever been rejected by the committee ask for: “Tell me what I can do to improve.” OK. I did. At least in my opinion. On every single abstract (except that one).

PASS didn’t release them.

And then, PASS did.

The volunteers (unpaid, remember) did the work, and now it gets to see the light of day.

This brings up a number of points. First, when I got rejected by those other events, did I get a reason for my rejection? Nope. Other events just reject you, thanks for playing. I think PASS, which is all about community, should be different. We should tell people why, not just that there were higher-rated sessions, but what they can do to improve. I’ve talked to people in the know; not all the comments provide that kind of information. I think we’ll get better next time. Second, people’s feelings are going to be hurt by these comments. Yes. Yes they are. Suck it up, buttercup. You want to know what you can do to improve so you can get selected, but your abstract is absolute perfection (in your opinion), so how dare someone else suggest that it’s not worthy of inclusion, blah, blah, blah. We’re going to see lots of blog posts where people disagree with these comments and that could reflect back in some negative way on the organization. I suppose so, but if we’re going to be about community and we’re going to try to raise up new speakers, we’re going to have to be able to deal with some degree of friction. That may even come from experienced people irked that they didn’t get picked. Everyone has a bad day. Again, I think we can weather this. Finally, the different teams and individuals on the teams probably gave substantially different levels of comments with varying degrees of quality. Some of the comments are just going to be useless. Further, my opinion probably doesn’t jibe with my teammates’ in every regard. Maybe a team didn’t put critical comments in at all (although they had to put in comments). Yes, these things are going to be uneven, maybe even contradictory. OK. Again, cope.

This blog post originally started off as a rebuke of the selection process around those comments. It’s not that now. I want to repeat, one more time: I think the committee did great work and selected an awesome set of presentations that will make for a wonderful Summit. Thank you for all your hard work. And thank you, Amy, for doing a great job organizing what is a daunting task. And thanks for releasing the comments.

Jul 02 2014

Statistics and Natively Compiled Procedures

Statistics are one of the most important driving factors behind the behavior of the query optimizer. The cardinality estimates stored within the statistics drive costing, and costing drives the decision making of the optimizer. So, how does this work with the new SQL Server 2014 natively compiled procedures? Differently.

In-memory tables do not maintain their statistics automatically. Further, you can’t run DBCC SHOW_STATISTICS to get information about those statistics, so you can’t tell whether they’re out of date or what the distribution of the data within them looks like. So, if I create some memory-optimized tables, skip loading any data into them, and then run this standard query:

SELECT  a.AddressLine1,
        a.City,
        a.PostalCode,
        sp.Name AS StateProvinceName,
        cr.Name AS CountryName
FROM    dbo.Address AS a
        JOIN dbo.StateProvince AS sp
        ON sp.StateProvinceID = a.StateProvinceID
        JOIN dbo.CountryRegion AS cr
        ON cr.CountryRegionCode = sp.CountryRegionCode
WHERE   a.City = 'London';

The estimates in the execution plan for the query show some pretty wild values:

Estimates

That’s an estimated number of rows of 65,536 and an actual of zero in a table scan, because I created my table without an index on the City column. If I recreate it with an index on City, but still no data, the estimates change and we have an index seek operation now:

EstimatesIndex

That’s suggesting that the optimizer thinks there are 256 rows. But there’s no data here. So, let’s load some data into the table. Here are the new estimates from the index seek operator:

estimateswdata

I haven’t yet updated the statistics, so the optimizer still thinks there are zero rows in the table, or at least, it has no statistics. Well, not true:

SELECT  s.name,
        s.auto_created,
        s.user_created,
        s.filter_definition,
        sc.column_id,
        c.name AS ColumnName
FROM    sys.stats AS s
        JOIN sys.stats_columns AS sc
        ON sc.stats_id = s.stats_id
           AND sc.object_id = s.object_id
        JOIN sys.columns AS c
        ON c.column_id = sc.column_id
           AND c.object_id = s.object_id
WHERE   s.object_id = OBJECT_ID('dbo.Address');

If I run this, I’ll see statistics on the table, including system generated statistics:

stats

One point, while statistics don’t update automatically, they clearly can still be created automatically. But I can’t run DBCC SHOW_STATISTICS to see what’s in there. So, let’s see what estimates look like in the natively compiled procedure. I’ll take the same query code above and convert it to a proc. Then, when I capture the estimated plan from the procedure (no actual plans allowed), the index seek operator shows these estimates:

estimatescompiled

Now, we have an accurate estimate of the number of rows this query is likely to return. Interesting. So, let’s drop the procedure, update the statistics, and then rerun the query and the procedure. First the query. The estimates don’t change. I’m still seeing an estimated value of 256 while the actual is 434. So, let’s free the procedure cache and try again:

EstimatesUpdated

Ah, there we go. The plan itself came out the same way, but we clearly have more accurate estimates now. On to the procedure. I’ll recreate it and then get the estimated plan. Here are the estimate values from the same index seek operation:

Estimatescompiledupdated

Oops. Still estimated 0 rows.

What’s all this mean? I’m not sure. The documentation from Microsoft in this area is sketchy. During the most recent 24 Hours of PASS, I was able to ask Microsoft about the impact of statistics on natively compiled plans. They suggested that it was not necessarily going to be the same as we see in standard queries. These tests make that fairly evident. Also, it looks like the default values of estimated rows for in-memory tables are different. If I create standard tables, empty, and run the same query against them, the estimated number of rows is what I expect, 1. But in the case of in-memory tables it’s 256 with an index and 65,536 without one (or at least that’s what I’m seeing). However, the estimates for the natively compiled procedure never changed in this test case, always at 0. This is hardly shocking, but it seems that different rules apply for in-memory tables and their statistics as well as natively compiled procedures and their consumption of those statistics. And, as Microsoft has changed the default estimated number of rows for table variables from 1 to 100 in SQL Server 2014, it seems we have another instance where they’re defaulting to an even higher value and one where the values seem to just disappear.

The behavior of statistics within in-memory tables is extremely interesting, because it may shift as your queries get more complex and your data changes. It makes a very strong case for updating your statistics on a regular basis on these tables.
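
If you want to repeat these tests, the statistics update and the cache flush look something like this sketch (the table name comes from the example above; as far as I can tell, SQL Server 2014 requires the FULLSCAN and NORECOMPUTE options for memory-optimized tables):

-- Manually update statistics on the memory-optimized table. Sampling and
-- automatic updates aren't supported here, hence FULLSCAN and NORECOMPUTE.
UPDATE STATISTICS dbo.Address WITH FULLSCAN, NORECOMPUTE;
GO
-- Flush the plan cache so the ad hoc query recompiles with the new statistics.
-- The natively compiled procedure has to be dropped and recreated instead.
DBCC FREEPROCCACHE;
GO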

I’m taking this show on the road. If you want to get an all day class on query tuning, I’ve got a lot of opportunities coming up. I believe that Albany, on July 25th, is not yet sold out. You can register here. I’m teaching an all day session at SQL Connections in September in Las Vegas. Go here to register for this great event. In Belgium in October, I’ll be doing an all day session on execution plans at SQL Server Days. Go here to register for this event. I’m excited, and more than a little humbled, to get the opportunity to present an all day pre-conference seminar at the PASS Summit in Seattle in November. Go here to register.

Jun 30 2014

The Curse of Working With A DBA

I had no more than finished my rant from last week when I started thinking about all the reasons developers want to bypass relational databases, and about how a healthy chunk of the reason is not the horror of the relational database itself, although that’s there. No, a very large part of the reason is the DBA.

We’re on a blog called The Scary DBA. I earned that title, well, sometimes. Sometimes I got it and I wasn’t sure why. However, it’s perfectly in keeping with how many people view their database administrators: grumpy, obstructionist, slow, difficult, control freak, etc. There are even jokes about it: “What’s the DBA’s favorite word? No!” And for those answering “It depends,” that’s two words.

I understand why. In large part it’s that phone in your pocket (used to be a pager on your belt, I’m old). That darned thing can go off any time night or day. It tends to make you very gun-shy. You start doing anything you can to keep that thing from going off. And developers, holy moly, they want to change things. They want to introduce new tables and new queries and they want to do it all really fast, faster than you can possibly review all that code, and all, ALL, AAAAALLLLLL, that code needs to be reviewed before we can let it unsettle our production servers. No.

And developers get crazy ideas in their heads sometimes. Maybe it would be easier to put the queries in the code rather than in stored procedures? What? How the heck can I review all the code too? No.

Developers also start thinking to themselves, you know, most of this T-SQL code could be generated using other code. Wait, that means even more T-SQL generated even faster, and generated by a program, and I can’t review that program, or its code, and you want to put it into my production server? Are you smoking something over there? No.

CLR? Ha! No.

ORM tools? Have you seen that T-SQL? Hell no!

How about other tool sets? Maybe an object database would work here. We may be better off using unstructured storage for data collection in this situation. ID/Value pairs might work well for this application. No, no, no and no again. Just in case you think of something else, no.

Gee, I’m sure if I were a developer I’d be perfectly happy with this approach. I’ve no doubt as developers introduce even harder subjects like agile development, devops, and other things in the future the responses will be just as nuanced. In short, I’d be doing anything I could to bypass the DBAs too.

So, what do we do about it as DBAs? Change. Use the word “Yes.”

We need to recognize that the business is changing, fast. That means that the applications are going to change, faster. We, DBAs, must become enablers. We must create processes and methods that smooth and speed the deployment process and provide lots of opportunities for automated testing, because you’re not going to review this code and you can’t just stand in the way. We need to adopt, and adapt to, the development and deployment paradigms used by the developers. We have to start treating databases, as much as possible, like code. We need to have our code in source control alongside the application code. We need to be at the stand-ups. In short, we need to change what we do and the way we do it. We can’t just say no. We need to say yes. The goal is to get in with the developers and influence through assistance, not just stand in the way.

Is it more work? Yes. Is it going to be hard? Yes. Will we have to go quite a long way to convince them that we’re not just going to say “No” again? Yes. Are there damned good reasons for us to make these efforts, so that someone who loves and protects the data stays involved and can provide special skills most programmers never develop? Yes again.

See, it’s easy. Try it.

Jun 26 2014

Passion

I know I tend to be overly passionate. It’s something that has gotten me into trouble in the past. It’s also probably a huge factor in the things I’ve been able to accomplish in life. I’m bringing it up at this time because I think passion is causing some conflict within the community around the Professional Association for SQL Server (PASS).

On the 25th of June just past, the announcements went out for the sessions accepted at the PASS Summit 2014. I found this stressful and exciting in two ways. First, and for me personally most importantly, because I had submitted several sessions and I desperately wanted to speak at the PASS Summit (I’ve spoken there every year since 2008 and I’ve made the Top 10 sessions two years in a row, for which I’m truly grateful; back to our story). Second, because this year I wanted to help make a difference, so I volunteered on the selection committee (and I was on a committee other than the one I submitted to, so I didn’t influence selection there at all). I wanted to get my sessions accepted, and I wanted to see the work I put in on display. Happily, both occurred. But the day was marred.

Let’s sidetrack (again) for a moment. I consider myself to be just a guy, a DBA, a developer, an IT pro. It’s what I’ve been doing for 20+ years (yeah, I’m old) and I’ve been relatively successful at it. But, I’m also a Microsoft MVP, a published author, frequent blogger, and an international speaker. I attribute most of that stuff, not to any great ability I have, but to a lot of luck, a lot of hard work, and, here’s the kicker, to my involvement with PASS. Go back ten years, I went to my first Summit down in Dallas, TX. I attended sessions and went back to my hotel room, except one night. During that day I had spent a little time chatting with a company and they invited me to a party they were throwing that night. I went. And I met some people. They were just DBAs and developers, just like me, but, they were also involved in the organization that put on the event, PASS. I liked these people. So, I started volunteering which led to another Summit and another and writing and speaking and… well, let’s just say, getting involved was a good thing. Being passionate about it all paid off, literally and figuratively. I really do owe PASS and the people that make it up a lot.

So, there are a lot of passionate people in this little gang of ours. And some of those passionate people didn’t like the outcome of the selection process. Being passionate, they voiced their opinions. LOUDLY. At length. Some of what they said had merit. Some of what they said was just hurt feelings. Some of what they said was a complete misunderstanding of how things worked within the committees and the selection process. But a lot of passionate people, who care about PASS, argued for a little while about the Summit selections. And, being a passionate guy, I took part. A lot of the work I did for the committee wasn’t seeing the light of day (more on that later, maybe, depending on how some internal communications turn out) and I was quite passionate about that. I don’t know this for a fact, but I suspect pretty strongly that my passion, what’s more, my public passion, around this topic made some people angry. I’m positive that others’ passion for the topic, regardless of the rightness or wrongness of their causes, definitely made people angry. Here’s where I get in trouble.

Get over it.

If we didn’t care about PASS and what the organization has done for us, and how we’d like to help it, and help others, and grow it, and reward ourselves (because I do believe everyone is fundamentally greedy, might as well acknowledge it), and just plain replicate the experience for others that I’ve had (because it’s been an overwhelmingly positive experience, I can’t say enough good things about PASS), then there wouldn’t be any passion. And if there was no passion, there would be no brouhaha and hurt feelings and the developing cliques (oh yeah, people are drawing lines like this was a war in the Balkans, apropos on the 100th Anniversary of World War I). But you know what, if there wasn’t any passion for, in, and around this organization, then it wouldn’t be the organization that it is.

It’s a great organization and people are going to be passionate about it. Cope. Passion is going to lead people to saying negative as well as positive things. Deal. People just might say negative things about you. Develop an epidermis.

Look, we should be able to disagree without being disagreeable, but passion leads us down dark roads sometimes. Let’s try to be understanding of that fact and recognize that the passion that makes this organization great is also the one that’s going to lead to conflict sometimes. Let’s just try to remember that and maybe we’ll be able to work towards sharing the great things this organization does with others and fight with each other less. Maybe.

NOTE: I made an edit about the work I did on the selection committee. It was on a track that I didn’t submit for. There was no way my work there could influence my selection. Plus the fact that the abstract evals and speaker evals were done by two different teams of people. Just want to be clear about that.

Jun 25 2014

The Utility of Execution Plans in Natively Compiled Procedures

I’m actually having problems identifying the utility of execution plans when working with natively compiled procedures. Or, put another way, why bother? I’ve posted a couple of times on natively compiled procedures and SQL Server execution plans. I’ve found the differences interesting and enlightening, but I’m seriously questioning why I should bother, at least currently. I’m sure there will be many changes to the behaviors of the natively compiled procedures and their relationship with execution plans. But right now, well, let’s look at an example. I have three simple tables stored in-memory. Here’s the definition of one:

CREATE TABLE dbo.Address
    (
     AddressID INT IDENTITY(1, 1)
                   NOT NULL
                   PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 50000),
     AddressLine1 NVARCHAR(60) NOT NULL,
     AddressLine2 NVARCHAR(60) NULL,
     City NVARCHAR(30) COLLATE Latin1_General_100_BIN2 NOT NULL,
     StateProvinceID INT NOT NULL,
     PostalCode NVARCHAR(15) NOT NULL,
     ModifiedDate DATETIME
        NOT NULL
        CONSTRAINT DF_Address_ModifiedDate DEFAULT (GETDATE())
    )
    WITH (MEMORY_OPTIMIZED = ON);

I can then create the following code as a natively compiled procedure:

CREATE PROC [dbo].[AddressDetails] @City NVARCHAR(30)
    WITH NATIVE_COMPILATION,
         SCHEMABINDING,
         EXECUTE AS OWNER
AS
    BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        SELECT  a.AddressLine1,
                a.City,
                a.PostalCode,
                sp.Name AS StateProvinceName,
                cr.Name AS CountryName
        FROM    dbo.Address AS a
                JOIN dbo.StateProvince AS sp
                ON sp.StateProvinceID = a.StateProvinceID
                JOIN dbo.CountryRegion AS cr
                ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE   a.City = @City;
    END
GO
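
As an aside, and purely as a sketch, the estimated plan doesn’t have to come from the SSMS button; SHOWPLAN_XML returns it as well:

-- SET SHOWPLAN_XML has to be the only statement in its batch.
SET SHOWPLAN_XML ON;
GO
EXEC dbo.AddressDetails @City = N'London';
GO
SET SHOWPLAN_XML OFF;
GO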

When I call for an estimated plan (remember, no actual plans) I’ll get this:

Scan

If you click on it, you’ll note that there’s an index scan. But the costs are all zero. Everything is FREE! Or not. The execution time is 93ms. If I put an index on the City column, the execution plan changes to the one I showed previously, an index seek, and the execution time goes to 42ms. Clearly, the scans are costing something. Scans aren’t necessarily bad and seeks aren’t necessarily good, but it’s hard to spot issues with execution plans with no costing involved at all. Which makes me wonder: should we bother with execution plans for the natively compiled procs? I’m honestly unsure.
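
If you want to try that index change yourself, here’s a rough sketch. On SQL Server 2014 you can’t alter a memory-optimized table after it’s created, so the index has to be declared inline, which means dropping and recreating the table, and dropping the schema-bound procedure first. The index name and the bucket count are my guesses, not values from the original test.

-- The natively compiled procedure is schema-bound, so it goes first.
DROP PROCEDURE dbo.AddressDetails;
DROP TABLE dbo.Address;
GO
CREATE TABLE dbo.Address
    (
     AddressID INT IDENTITY(1, 1)
                   NOT NULL
                   PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 50000),
     AddressLine1 NVARCHAR(60) NOT NULL,
     AddressLine2 NVARCHAR(60) NULL,
     City NVARCHAR(30) COLLATE Latin1_General_100_BIN2 NOT NULL,
     StateProvinceID INT NOT NULL,
     PostalCode NVARCHAR(15) NOT NULL,
     ModifiedDate DATETIME NOT NULL
        CONSTRAINT DF_Address_ModifiedDate DEFAULT (GETDATE()),
     -- The added index on City; a range (plain NONCLUSTERED) index works too.
     INDEX IX_Address_City NONCLUSTERED HASH (City) WITH (BUCKET_COUNT = 50000)
    )
    WITH (MEMORY_OPTIMIZED = ON);
GO
-- ...then recreate dbo.AddressDetails from the definition above and reload the data.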

For most query tuning, statistics matter a lot. I understand we still have room in Albany on July 25th. You can register here. I’m doing an all day session at SQL Connections in September in Las Vegas. Go here to register for this great event. In Belgium in October, I’ll be doing an all day session on execution plans at SQL Server Days. Go here to register for this event. I’d love to talk query tuning with you all day long.