Execution Plan Estimated Operator Cost


I’ve said it over and over again: the costs on operators in execution plans, even in actual execution plans, are estimates. You need to understand that when looking at your execution plans. It’s vital, because you need to be able to distinguish between the truly costly parts of a plan and the less costly parts. Don’t believe me? Take a look at this picture and see if you can spot the discrepancy:


Spot it yet?

Add up the costs for the operators visible in this part of the plan…

Yep, 125%.  And there’s more to the plan that I’m not showing; I think this one must total near 200%. The statistics are up to date and there are no consistency errors in the database. These estimates are just off sometimes.

This is a rather costly query being run against Microsoft Dynamics CRM. There were two missing indexes identified by the optimizer and about four key lookup operations, which I fixed with two index changes. This jumped out and I wanted to share it. The fact is, right now, the query is working well. That particular table has one scan and some reads, identified by looking at STATISTICS IO, and it’s not even close to 100% of the cost of the query, but I would sure be in a panic if I believed the estimated operator cost.


  • Erland (one of the SQL MVPs) often mentions a query he once had where the estimated costs totalled over 10,000%. I think in his case it had to do with a recursive CTE.

    It was not a once-off occurrence.

  • Nice post. I think that most people starting out with performance and query plans tend to trust the figures 100%. I hope lots of people read this and realise you need to know which operators are “bad” vs. “good”.

  • scarydba

    This one is very consistent too. It’s absolutely not the first time I’ve seen the estimated costs be completely off. It is the first time I’ve seen the estimated costs add up to more than 100%. 10,000% would be cool.

  • scarydba

    Thanks, Christopher. The thing is, the information is out there from lots of sources, but I don’t think people believe it. But surely, a picture is worth a thousand words, seeing is believing, are you going to believe me or your lying eyes… you know the drill.

  • The problem with ‘bad operators’ (and it’s a topic I have on my blog to-do list) is that an operator’s classification of bad or good is very dependent on the conditions and the query.

    A table scan is not a problem on a table with 5 rows.
    A clustered index scan is the best way to evaluate a query that returns 95% of the table.
    A hash join is the ideal operator to join two 5 million row, unsorted result sets.
    A nested loop join is terrible when the two result sets that it’s joining are very large.

    Yet people often consider table scans, clustered index scans and hash joins to be bad and nested loop joins to be good.
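To make that point concrete, here is a rough Python sketch of the asymmetry (illustrative only; this is not how SQL Server actually implements its join operators). A naive nested loop join does work proportional to the product of the two input sizes, while a hash join does work roughly proportional to their sum, which is why the “good” operator flips as the inputs grow:

```python
# Illustrative sketch: why join-operator "goodness" depends on input size.
# This is simplified Python, not SQL Server's actual join implementation.

def nested_loop_join(outer, inner, key):
    """For each outer row, scan the inner set: ~len(outer) * len(inner) work."""
    comparisons = 0
    result = []
    for o in outer:
        for i in inner:
            comparisons += 1
            if o[key] == i[key]:
                result.append((o, i))
    return result, comparisons

def hash_join(build, probe, key):
    """Build a hash table on one input, probe with the other:
    ~len(build) + len(probe) work."""
    table = {}
    for b in build:
        table.setdefault(b[key], []).append(b)
    operations = len(build)
    result = []
    for p in probe:
        operations += 1
        for b in table.get(p[key], []):
            result.append((b, p))
    return result, operations

# Two unsorted 1,000-row inputs: hash join wins by orders of magnitude.
outer = [{"id": n} for n in range(1000)]
inner = [{"id": n} for n in range(1000)]

_, nl_work = nested_loop_join(outer, inner, "id")
_, hash_work = hash_join(outer, inner, "id")
print(nl_work)    # 1,000,000 comparisons
print(hash_work)  # 2,000 operations
```

Run the same functions on two 5-row inputs and the nested loop does only 25 comparisons, cheaper than setting up a hash table; neither operator is inherently bad, the inputs decide.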

  • Patrick

    Grant – Love your articles. A quick question related to execution plans but not specific to this post…

    I’m reading ‘Dissecting SQL Server Execution Plans’ and there’s some confusion relating to Nested Loop Joins (page 55). In figure 16, the description of the nested loop states that the top input is the outer and the bottom input is the inner. However, in your description you say that, “… the outer data set [is] the bottom operator in a graphical execution plan.”

    Could you clarify which is correct?


  • scarydba


    Thank you for the compliments. I usually write on stuff that I’m interested in or excited about. It’s good to know I’m being useful to others as well.

    Both say the same thing, sort of; it’s just which of the two is “outer” and “inner” that varies. The first data stream, on top, is scanned a row at a time against the set of data on the bottom. But my text should have matched the picture, since that’s straight out of SQL Server. I would always take what’s there as correct (unless completely and utterly proved otherwise, and even then… question it).

    Thanks for pointing it out.

  • If I recall, the bottom table (in the exec plan) is the ‘inner table’ of the nested loop. This can be seen from the number of executions of the seek/scan on the bottom table.

    Since a nested loop is just that, the inner table will get read once per row of the outer resultset. Knowing that, a check of the execution count of the seeks/scans will show which table is inner or outer.

    for (i = 0; i < table1.rowcount; i++) {
        // read table2 and return matching rows
    }
    Execution count is shown in the tooltip when viewing exec plans in 2008’s management studio. 2005 doesn’t show the exec count, but it is in the XML for anyone feeling adventurous.

  • I just ran across a query on a server I acquired a few weeks ago that shows the cost % is 427095307%. I wish I could post the screenshot here!

    It’s a huge table and the statistics sampled ratio is 0.11%. Time to update stats with FULLSCAN.

  • scarydba

    That’s a big number. It’s funny because I’d never seen this until last month; now I’m seeing it all over the place and the numbers are getting huge.

  • Sam Trenchard

    In your book on understanding the optimizer, you mention in relation to indexed views that the output may not be synchronous with the query against the underlying tables. How can this be possible? I understood that the schemabinding and unique clustered index ensured that the aggregated data in the view was in step with the base table data.
    Hope you have time to respond.
    Sam Trenchard
