The Utility of Execution Plans in Natively Compiled Procedures


I’m actually having problems identifying the utility of execution plans when working with natively compiled procedures. Or, put another way, why bother? I’ve posted a couple of times on natively compiled procedures and SQL Server execution plans. I’ve found the differences interesting and enlightening, but I’m seriously questioning why I should bother, at least currently. I’m sure there will be many changes to the behaviors of the natively compiled procedures and their relationship with execution plans. But right now, well, let’s look at an example. I have three simple tables stored in-memory. Here’s the definition of one:
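The original table definition didn't survive here, but a minimal sketch of a SQL Server 2014 memory-optimized table, with a City column consistent with the test below, might look like this (all names are assumptions):

```sql
-- Hypothetical sketch; the original table definition was not preserved.
-- All names are assumptions. The BIN2 collation on City reflects a SQL Server
-- 2014 requirement for string comparisons inside natively compiled procedures.
CREATE TABLE dbo.Address
(
    AddressID INT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    AddressLine1 NVARCHAR(60) NOT NULL,
    City NVARCHAR(30) COLLATE Latin1_General_100_BIN2 NOT NULL,
    StateProvinceID INT NOT NULL,
    PostalCode NVARCHAR(15) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```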

I can then create the following code as a natively compiled procedure:
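The procedure body also wasn't preserved; a hedged sketch of what a SQL Server 2014 natively compiled procedure over such a table looks like (the names and the City filter are assumptions, but NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS, and the ATOMIC block are required syntax):

```sql
-- Hedged sketch; the original procedure was not preserved. The City filter
-- is an assumption, consistent with the discussion below.
CREATE PROCEDURE dbo.AddressDetails
    @City NVARCHAR(30)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT a.AddressLine1,
           a.City,
           a.PostalCode
    FROM dbo.Address AS a
    WHERE a.City = @City;
END;
```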

When I call for an estimated plan (remember, no actual plans) I’ll get this:

[Figure: estimated execution plan showing an index scan]
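For reference, the estimated plan can be captured in T-SQL as well as through the Management Studio button; a sketch, using a hypothetical procedure name:

```sql
-- SHOWPLAN_XML returns the estimated plan without executing the batch
-- (actual, post-execution plans are not available for natively compiled
-- procedures). dbo.AddressDetails is a hypothetical name.
SET SHOWPLAN_XML ON;
GO
EXEC dbo.AddressDetails @City = N'London';
GO
SET SHOWPLAN_XML OFF;
GO
```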

If you click on it, you’ll note that there’s an index scan. But the costs are all zero. Everything is FREE! Or not. The execution time is 93ms. If I put an index on the City column, the execution plan changes to the one I showed previously, an index seek, and the execution time drops to 42ms. Clearly, the scans are costing something. Scans aren’t necessarily bad and seeks aren’t necessarily good, but it’s hard to spot issues in execution plans with no costing involved at all. Which makes me wonder: should we even bother with execution plans for natively compiled procs? I’m honestly unsure.
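One wrinkle worth noting if you try to reproduce this: in SQL Server 2014, indexes on memory-optimized tables can't be added after the fact, so "putting an index on the City column" means recreating the table with the index declared inline. A sketch, with hypothetical names:

```sql
-- Hypothetical sketch. In SQL Server 2014 an index cannot be added to an
-- existing memory-optimized table; the table (and any natively compiled
-- procedures bound to it via SCHEMABINDING) must be dropped and recreated
-- with the index declared inline.
DROP TABLE dbo.Address;
CREATE TABLE dbo.Address
(
    AddressID INT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    AddressLine1 NVARCHAR(60) NOT NULL,
    City NVARCHAR(30) COLLATE Latin1_General_100_BIN2 NOT NULL
        INDEX IX_Address_City NONCLUSTERED, -- range index to support the seek on City
    StateProvinceID INT NOT NULL,
    PostalCode NVARCHAR(15) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```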

For most query tuning, statistics matter a lot.

On the events front: I understand we still have room in Albany on July 25th. You can register here. I’m doing an all-day session at SQL Connections in September in Las Vegas. Go here to register for this great event. In Belgium in October, I’ll be doing an all-day session on execution plans at SQL Server Days. Go here to register for that event. I’d love to talk query tuning with you all day long.


2 Comments

  • alzdba

    As always: it depends !
    If your environment urges you to squeeze the last ms (millisecond, in this case 😉) out of it, your plan may still show the objects actually used and guide you to the things you didn’t expect.
    We’re talking In-Memory here: (hearsay) currently 20–70 GB/sec scan speed.

    I would indeed love to see some statistics info in the SQLPlan too, but the dev team may have chosen to not show it all, due to the expected volatility of it all.

    Great read!

  • Funny how this one was running in milliseconds. The last several times I tested it, it ran in microseconds. But yeah, I see the point. Honestly, I think we’re seeing an artifact of their speed to market. It’s also reflected in the small footprint of T-SQL they support. They shipped it as is rather than add bells & whistles. That’s fine, but I’d like a bell or two to get added.

OK, fine, but what do you think?