SQL Saturday #60, Cleveland Speaker Evals


I just received 35 speaker evaluations from SQL Saturday #60 in Cleveland. It was a great event (although I had a hard time getting there) and I really enjoyed giving my presentation on “Gathering and Interpreting Performance Metrics” (a warm-up presentation of part of my SQL Rally pre-con).

Feedback is a wonderful gift. Thanks to everyone who filled out the eval and especially to those who commented.

The evals have six questions and an area for comments. The questions are rated from Very Poor to Excellent, and I've assigned those ratings numeric values from 1 to 5. The overall average is 4.82. Here are the breakdowns per question:

How would you rate the usefulness of the session information in your day-to-day environment: 4.79
How would you rate the Speaker’s presentation skills: 4.85
How would you rate the Speaker’s knowledge of the subject: 5
How would you rate the accuracy of the session title, description and experience level to the actual session: 4.76
How would you rate the amount of time allocated to cover the topic/session: 4.67
How would you rate the quality of the presentation materials: 4.85
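For the curious, the arithmetic checks out. Here's a quick sketch of the averaging (the score labels are just my shorthand for the six questions above):

```python
# Per-question averages as reported above (labels are shorthand, not the full question text).
scores = {
    "Usefulness": 4.79,
    "Presentation skills": 4.85,
    "Speaker knowledge": 5.00,
    "Accuracy of title/description": 4.76,
    "Time allocated": 4.67,
    "Quality of materials": 4.85,
}

# Overall average across all six questions.
overall = sum(scores.values()) / len(scores)
print(f"Overall average: {overall:.2f}")  # 4.82

# Same calculation with the "time allocated" question dropped.
without_time = {k: v for k, v in scores.items() if k != "Time allocated"}
adjusted = sum(without_time.values()) / len(without_time)
print(f"Without the time question: {adjusted:.2f}")  # 4.85
```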

Here are samples of some of the comments:

  • Great enthusiasm, demos are valuable
  • Not enough on “interpretation” focused on “how” I was hoping to hear more on what they mean, and which ones to focus on. Too general & broad for the title
  • Lots of info to digest. I hope I can remember enough to use it.
  • Fun guy
  • Great finish to the day
  • Clean up the order some, but good stuff
  • Good energy and style. Get your order sorted & you’re good

My take on the evals? Overall, I'm very happy. I'm especially happy with that 5 on Speaker Knowledge (not that I believe it). I'm a little disappointed in the lower scores on Usefulness and Accuracy. I'm flatly surprised that people aren't finding this useful in their environments (although, I realize a 4.79 is really good, but I need to be able to comment on something). Maybe everyone already has a great monitoring tool installed.

Accuracy of the session, I'm not sure where to go with that one. I tried to be very clear in the description that we would be talking about data collection and interpretation, and if you look at the slides, it's about 50/50. Yes, I cover how to collect the data, but I also spend time giving actual numbers for evaluating what's what.

And the time-on-the-topic rating? Who cares. I hate this question. Did people rate it low because they thought more time was needed, or did they rate it low because I spent way too much of their time? Yeah, I don't know either. Take that question away and my overall average goes up to 4.85.

As to the comments on order, I had rearranged some of the slides and kind of lost track of which slide went where. My mistake. I think I'll keep the order they're in, but just remember where I put them.

Thanks again for the feedback everyone. It was a wonderful gift.

OK, fine, but what do you think?