Extended Events: Live Data Explorer, Grouping

Of all the things that Extended Events does, I’ve found the ability to quickly and easily gather a little bit of data and then use the Data Explorer window Live Data grouping to aggregate it to be one of the greatest. Sure, if we’re talking about using Extended Events on a busy production server, this method probably isn’t going to work well. There, you are going to be better off querying the XML (I know, I know, but I have ways to help there too).

But in development, when doing testing and query tuning, the Live Data window is a gift of the gods on par with fire or beer (it’s not as good as whiskey).

Live Data Grouping

Let’s imagine a scenario like this. You’re working on some query tuning. You have multiple sessions running from different people. Yet, you care only about one procedure. You’re capturing data using this Extended Events session:

CREATE EVENT SESSION QueryPerformance
ON SERVER
    ADD EVENT sqlserver.rpc_completed
    (WHERE (sqlserver.database_name = N'AdventureWorks')),
    ADD EVENT sqlserver.sql_batch_completed
    (WHERE (sqlserver.database_name = N'AdventureWorks'))
    ADD TARGET package0.event_file
    (SET filename = N'/bu/QueryPerformance');
GO
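One thing worth noting: CREATE EVENT SESSION only defines the session; it starts out stopped. A quick sketch of starting it before a test run and stopping it after:

```sql
-- Start the session so events begin landing in the event file
ALTER EVENT SESSION QueryPerformance ON SERVER STATE = START;

-- ...run the workload you want to capture...

-- Stop the session when you're done capturing
ALTER EVENT SESSION QueryPerformance ON SERVER STATE = STOP;
```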

All your procedures are mixed in with everyone else's:

Procedures and batches all mixed together. All you want to see are your procedures. Now, we could use the Filter to eliminate everything but what we want. However, I also want to see average duration, CPU, and reads.

Click the “Grouping…” button on the toolbar (this has to be a paused Live Data stream, or an open file), select “Grouping…” from the menu, or use the keystrokes ALT-SHIFT-G. Any of these methods will open the Grouping window:

The available columns for this window are from the columns that you’ve selected to add to the table view (see my last blog post). Since I’m focused on the stored procedure, I’m going to use the object_name, which is included as part of the event’s data package (unlike in Trace).

I’ll select the column and click on the right arrow in the center. This moves it from the left to the right part of the screen. If I had additional things I wanted to group by, I could add them here as well. If you add more than one column, you can also rearrange the order using the arrows on the right. When I’m ready, the view looks like this:

Clicking OK completely changes the screen:

You can now see that I’ve quickly and easily grouped my data by the selected column, object_name. The NULL values are events that do not have data in this column. In the case of the session above, this would be the sql_batch_completed events.

Better still, I can look at the information I’ve captured within my grouping by clicking on the plus sign:

Now we can see each individual call, including all the details. Also worth noting, despite a myth I’ve debunked before, you can see the parameter values passed to make each call.

However, we’re not done here. I want to see averages.

Live Data Aggregation

Once you’ve picked columns to group by, you can also put Aggregation to work. There’s an “Aggregation…” button and a menu choice. Selecting either will open the following window:

The operation is simple. Pick the columns you would like, and depending on the data type, you’re given different kinds of aggregation. Here’s the list from the duration column:

I’ve selected average, AVG, for the three columns I’m interested in: duration, logical_reads, cpu_time. I’m also going to choose to sort by duration in a descending manner:

With all that I’m going to click on OK. The screen will change once more:

You can now see that I’ve got averages for the columns I’m interested in, and the data is sorted by the duration column, descending, just as I defined it.

I can still see all the detail. All I have to do is click the plus sign like before.

You Can’t Do That In Profiler

I mean, sure, you can. After you import the data from the Profiler output file, clean the parameter values off using a script, and then write some queries to pull the data together. What you can’t do in Profiler is throw this data together in a few clicks as I’ve done here.

Don’t get me wrong. Extended Events grouping isn’t the be-all, end-all. You’re not going to be able to use this method with really large data sets. For that you will still have to import the data and then use queries, just like with Profiler.
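For those larger data sets, here’s a sketch of reading the event_file target directly with T-SQL and reproducing the grouping and averages from above. The wildcarded file path assumes the session definition earlier in this post, and the specific columns shredded out are the ones I grouped and aggregated in the Live Data window:

```sql
-- Read the .xel files and shred the event XML, then aggregate by
-- object_name just like the Live Data grouping did. Durations and
-- cpu_time are in microseconds.
SELECT  ed.object_name,
        AVG(ed.duration) AS avg_duration,
        AVG(ed.cpu_time) AS avg_cpu_time,
        AVG(ed.logical_reads) AS avg_logical_reads
FROM
(
    SELECT  x.value('(event/data[@name="object_name"]/value)[1]', 'nvarchar(128)') AS object_name,
            x.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS duration,
            x.value('(event/data[@name="cpu_time"]/value)[1]', 'bigint') AS cpu_time,
            x.value('(event/data[@name="logical_reads"]/value)[1]', 'bigint') AS logical_reads
    FROM
    (
        SELECT CAST(event_data AS xml) AS x
        FROM sys.fn_xe_file_target_read_file('/bu/QueryPerformance*.xel', NULL, NULL, NULL)
    ) AS f
) AS ed
WHERE ed.object_name IS NOT NULL  -- drop the batch events, which have no object_name
GROUP BY ed.object_name
ORDER BY avg_duration DESC;
```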

One example of how I use this is when tuning a query. You can just turn on STATISTICS IO and STATISTICS TIME (or use one of the other mechanisms) and get the basic info. But, if you run a given query two or three times, you’ll see different times. Which is most accurate? I don’t know. So, instead, I’ll turn on my Extended Events session, run the query 50 times, and then average it out. Quick. Simple. Accurate enough that even Hugo Kornelis was happy with that approach when tech editing my book.
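As a sketch of that repetition trick (the procedure name and parameter here are hypothetical), SSMS and sqlcmd will repeat a batch when you give GO a count:

```sql
-- Hypothetical procedure under test; with the session running, these
-- 50 executions land in the event file and can be averaged in the
-- Live Data grouping.
EXEC dbo.MyTestProcedure @Parameter = 42;
GO 50  -- GO with a count repeats the preceding batch 50 times
```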

These are the types of functionality you get, with little to no effort, using Extended Events.


If you liked this information and you’d like to see a lot more about how to effectively use Extended Events, I’m giving a live session on April 1, no fooling, as part of Redgate Streamed. Please, follow the link, register, and let’s get together and talk Extended Events.

Please let me know what you think about this article, along with any questions, in the comments below.
