Apr 12 2016

A View Is Not A Table

Blog post #4 in support of Tim Ford’s (b|t) #iwanttohelp, #entrylevel

In SQL Server, in the T-SQL you use to query it, a view looks just like a table (I’m using the AdventureWorks2014 database for all these examples):

SELECT  *
FROM    Production.vProductAndDescription AS vpad;

 

SELECT  vpad.Name,
        vpad.Description,
        vpmi.Instructions
FROM    Production.vProductAndDescription AS vpad
JOIN    Production.Product AS p
        ON p.ProductID = vpad.ProductID
JOIN    Production.vProductModelInstructions AS vpmi
        ON vpmi.ProductModelID = p.ProductModelID
WHERE   vpad.ProductID = 891
        AND vpad.CultureID = 'fr';

The above query actually combines two views and a table. This is what is commonly referred to as a “code smell”. A code smell is a coding practice that works, but that can lead to problems; in this case, we’re talking about performance problems. The performance problems that arise when you join views to tables and to other views as if they were real tables come about because a standard view is not a table. It’s a query. For example, the second view introduced, vProductModelInstructions, looks like this:

ALTER VIEW [Production].[vProductModelInstructions] 
AS 
SELECT 
    [ProductModelID] 
    ,[Name] 
    ,[Instructions].value(N'declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"; 
        (/root/text())[1]', 'nvarchar(max)') AS [Instructions] 
    ,[MfgInstructions].ref.value('@LocationID[1]', 'int') AS [LocationID] 
    ,[MfgInstructions].ref.value('@SetupHours[1]', 'decimal(9, 4)') AS [SetupHours] 
    ,[MfgInstructions].ref.value('@MachineHours[1]', 'decimal(9, 4)') AS [MachineHours] 
    ,[MfgInstructions].ref.value('@LaborHours[1]', 'decimal(9, 4)') AS [LaborHours] 
    ,[MfgInstructions].ref.value('@LotSize[1]', 'int') AS [LotSize] 
    ,[Steps].ref.value('string(.)[1]', 'nvarchar(1024)') AS [Step] 
    ,[rowguid] 
    ,[ModifiedDate]
FROM [Production].[ProductModel] 
CROSS APPLY [Instructions].nodes(N'declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"; 
    /root/Location') MfgInstructions(ref)
CROSS APPLY [MfgInstructions].ref.nodes('declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"; 
    step') Steps(ref);

GO

That’s a query against the XML stored in the ProductModel table. The view was created to mask the complexity of the necessary XPath code, while providing a mechanism for retrieving the data from the XML. This is a common use of views. However, when we then treat the view as a table, and join it to other tables and views, we present a problem for the optimizer. Because a view is not a table, but is instead a query, the optimizer has to resolve that query in combination with any other views or tables to arrive at an execution plan for the whole combined mess. While the optimizer is very good at what it does, the additional, unnecessary processing needed to figure out which parts of the view are not required to satisfy the query adds complexity, and that complexity can lead it to make poor choices. That can result in poor performance.

If I were to rewrite the query, it would look something like this:

SELECT  p.Name,
        pd.Description,
        pm.Instructions.value(N'declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"; 
        (/root/text())[1]', 'nvarchar(max)') AS Instructions
FROM    Production.Product AS p
JOIN    Production.ProductModelProductDescriptionCulture AS pmpdc
        ON pmpdc.ProductModelID = p.ProductModelID
JOIN    Production.ProductDescription AS pd
        ON pd.ProductDescriptionID = pmpdc.ProductDescriptionID
JOIN    Production.ProductModel AS pm
        ON pm.ProductModelID = p.ProductModelID
CROSS APPLY Instructions.nodes(N'declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"; 
    /root/Location') MfgInstructions (ref)
CROSS APPLY MfgInstructions.ref.nodes('declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions"; 
    step') Steps (ref)
WHERE   p.ProductID = 891
        AND pmpdc.CultureID = 'fr';

That’s a lot more complex than the query we had above that only referenced three objects and had only two JOIN operations. However, if you capture the I/O and the execution time for these queries, you’ll see a different story.
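Capturing that story is simple enough. Here’s a minimal sketch of the STATISTICS-based capture (the Extended Events session I also used isn’t shown):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- run the view-based query and the rewritten query here,
-- then compare the logical reads and elapsed times in the Messages output

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;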

I used two methods for measuring performance. For one set of tests I set STATISTICS IO and STATISTICS TIME to ON for the queries. For another set I used Extended Events. Consistently, the execution time for the query with the views was around 110ms. The query that didn’t reference any views was around 37ms. The reads were 155 for the query with views, but only 109 for the query without. If you look at the individual table I/O, you can start to see where the differences come from. These are the results from the query with the views:

Table ‘ProductDescription’. Scan count 0, logical reads 56
Table ‘ProductModelProductDescriptionCulture’. Scan count 28, logical reads 56
Table ‘xml_index_nodes_418100530_256001’. Scan count 13, logical reads 37
Table ‘ProductModel’. Scan count 0, logical reads 2
Table ‘Product’. Scan count 0, logical reads 4

These are the results for the query without the view:

Table ‘ProductDescription’. Scan count 0, logical reads 56
Table ‘xml_index_nodes_418100530_256001’. Scan count 13, logical reads 37
Table ‘ProductModelProductDescriptionCulture’. Scan count 6, logical reads 12
Table ‘ProductModel’. Scan count 0, logical reads 2
Table ‘Product’. Scan count 0, logical reads 2

You can see the differences in both ProductModelProductDescriptionCulture and Product. They come from the execution plans being different, which in turn comes from the optimizer making different choices for the two queries.

A standard view is not a table. There is such a thing as a materialized, or indexed, view, which is a table, but that’s not what we’re talking about here. While you can use a view as if it were a table, don’t mistake it for one. A view is just a mask in front of a query. Treating it as an object you can join to, simply so that you avoid rewriting the same JOINs, will lead to issues for the optimizer, as this simple set of examples showed. Don’t shy away from using views, just understand what their real behavior is. A view is a query, not a table.

Apr 05 2016

Views and Simplification

I’ve been getting lots of questions on views lately. Must be something in the water.

Because SQL Server allows you to treat a view as if it were a table, lots of people pretty much assume that it is a table, since they get to treat it that way. The thing is, a view is not a table. It’s a query. Let’s explore this just a little bit. Here’s a relatively straightforward view:

CREATE VIEW dbo.PersonInfo
AS
SELECT  a.AddressLine1,
        a.City,
        a.PostalCode,
        a.SpatialLocation,
        p.FirstName,
        p.LastName,
        be.BusinessEntityID,
        bea.AddressID,
        bea.AddressTypeID
FROM    Person.Address AS a
JOIN    Person.BusinessEntityAddress AS bea
        ON a.AddressID = bea.AddressID
JOIN    Person.BusinessEntity AS be
        ON bea.BusinessEntityID = be.BusinessEntityID
JOIN    Person.Person AS p
        ON be.BusinessEntityID = p.BusinessEntityID;
GO

I can query this view like this:

SELECT  *
FROM    dbo.PersonInfo AS pni
WHERE   pni.LastName LIKE 'Ran%';

The resulting execution plan looks like this:

[Execution plan: viewSimple1]

You don’t even need to expand it for what I’m about to show. Now let’s modify the query against our view so that it only references a couple of the view’s columns.
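For example, a query along these lines, pulling back just the name columns, does the trick (the exact column list here is my own choice for illustration; what matters is that fewer of the underlying tables are needed to answer the query):

SELECT  pni.FirstName,
        pni.LastName
FROM    dbo.PersonInfo AS pni
WHERE   pni.LastName LIKE 'Ran%';

The resulting execution plan looks like this: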

[Execution plan: viewSimple2]

Again, you can expand these, but you don’t need to. Notice, the first plan had four tables being referenced, which represent the four tables from the view. The second query only has two tables. This is because the optimizer looked at the query that the view represents, not simply the query that I used to call the view. It then recognized that simplification could be used to eliminate unnecessary JOIN operations from the execution plan and still get the same data because of foreign key constraints on the tables.

The important point to note is that the optimizer is absolutely not treating the view like a table. The optimizer is treating the view like a query, which is all it is. This has both positive and negative impacts when it comes to tuning queries that use this view. You could spend all sorts of time “tuning” the view, only to find all that work tossed out the window when a query doesn’t reference a column in the view and that causes the optimizer to rearrange the plan. I don’t want to convey that this is an issue. It’s not. I’m just trying to emphasize the point that a view is just a query.

Now, when we get into treating a view exactly like a table in JOINs, or calling a view from a view (known as nesting), then we’re talking about issues. I’ll put up another post on JOINs and views.


For lots more information on query tuning, I’m presenting an all day pre-con at SQL Day in Wroclaw Poland on May 16.

Apr 01 2016

Speaker of the Month: April 2016

THIS IS NOT AN APRIL FOOL POST!

Seriously.

My Speaker of the Month for April 2016 is Keith Tate (b|t) and his session at SQL Saturday Chicago called Profiler is Dead, Long Live Extended Events.

I actually suspected very strongly from the start of the session that it was going to be good. The reason for this: Keith was having issues with his machine, but he started the session anyway, and it was an excellent beginning. Then, he started to talk about Extended Events and use his slide deck to emphasize the points he was making, and it was wonderful. For example, as he talked about the way the number of events has grown in each version of SQL Server since 2008, he used larger and larger fonts with the bigger and bigger numbers. It really hammered the point home. He continued the entire talk that way. His volume was excellent for the size of the room. He handled questions really well. He had a series of takeaways that he wanted to ensure that people understood, and as he made each point, he went back to the takeaways so that you remembered what everything was all about. I really liked a couple of his demos and I learned some stuff about how to better use the Data Explorer window with ExEvents. Wonderful.

I’ve already shared my criticisms with Keith. He needs to make sure he repeats the question, even in a small room. And yeah, that’s something every presenter gets wrong occasionally. He had a couple of slides that were very difficult to read. He spent too much time on a demo of the Profiler to show how bad it was (although, emphasizing why you need to stop using the Profiler is not a bad thing). He could have used that time to show off a little more in ExEvents.

Overall though, wonderful presentation, packed with information, presented in an interesting and engaging manner. I was impressed. I also just loved the topic.

And no, no jokes. It’s not an April Fools post. I’m really being serious about this. I post these things on the first Friday of the month, this one just fell on an unfortunate date.

Mar 29 2016

Do You Teach Azure Data Platform?

I offer instruction on the Azure Data Platform, and have for about six years, since shortly after it came out. I started using Azure SQL Database (although it had a different name then) on Day 1.

I know a few other people who don’t work for Microsoft, but have been actively pursuing Azure SQL Database, SQL Server on Azure VMs, and pretty much all the Microsoft Data Platform. I’m not counting the BI people who have dived into PowerBI and related tech. The BI people, who are generally pretty smart, jumped on Azure with both feet. I’m talking about the data platform aspect of Azure. The people that I know who regularly teach classes are (in no particular order, sheesh, you people):

Karen Lopez(b|t)
Denny Cherry(b|t)
Jes Borland (b|t)
Thomas LaRock (b|t)
Joe D’Antoni (b|t)
Ron Dameron (b|t)
Aaron Bertrand (b|t)
Tim Radney (b|t)

I’m sure I overlooked someone who is active in this space. Please help me out. Let’s create a good list of active educators on the Azure Data Platform. I believe this is needed so that people know where to go, besides the excellent Microsoft resources (and they are excellent), to get more information. Please, no Microsoft employees. Yeah, many of them are great educators and I’m sure going to go and sit in their classes, as you should. I’m just trying to get the fundamental list of non-Microsoft speakers together and share it out.

Azure interest is growing, fast. Independent voices are valued and needed. Let’s get this list together, published, and maintained. Send me your input through the comments or my email (grant – at – scarydba -dot- com). I’ll get things published ASAP.

Oh, and if I missed you from the initial list and you were an obvious inclusion, my apologies. I’m old.

And, thinking about it, let’s get the BI people in Azure listed too. I was being lazy, not exclusive. Lazy & old. The intent is still good.

If you, or someone you know, is actively teaching Azure Data Platform, I want to know about it so I can add them to the list that I’ll maintain.

Mar 28 2016

Query Store and Optimize For Ad Hoc

I love presenting sessions because you get so many interesting questions. For example, what happens with Optimize for Ad Hoc when Query Store is enabled? Great question. I didn’t have the answer, so, on to testing.

For those who don’t know, Optimize for Ad Hoc is a mechanism for dealing with lots and lots of ad hoc queries. When it’s enabled, instead of storing an execution plan the first time a query is called, a plan stub, basically just the identifying information for the plan, is stored in cache. This reduces the amount of space wasted in your cache. The second time the query is called, the full plan is stored in cache.
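If you want to see that behavior for yourself, a quick check against the plan cache, something like this, will show it (cacheobjtype reports ‘Compiled Plan Stub’ after the first execution and ‘Compiled Plan’ once the query has been called again):

-- cacheobjtype distinguishes a 'Compiled Plan Stub' from a full 'Compiled Plan'
SELECT  decp.cacheobjtype,
        decp.objtype,
        decp.usecounts,
        decp.size_in_bytes,
        dest.text
FROM    sys.dm_exec_cached_plans AS decp
CROSS APPLY sys.dm_exec_sql_text(decp.plan_handle) AS dest
WHERE   decp.objtype = 'Adhoc';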

I’m going to set up Optimize for Ad Hoc and Query Store and, to clean the slate, I’ll remove everything from cache and clear out the Query Store, just in case:

EXEC sys.sp_configure
    N'show advanced options',
    N'1';
RECONFIGURE WITH OVERRIDE;
GO
EXEC sys.sp_configure
    N'optimize for ad hoc workloads',
    N'1';
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sys.sp_configure
    N'show advanced options',
    N'0';
RECONFIGURE WITH OVERRIDE;
GO
DBCC FREEPROCCACHE();
GO
USE AdventureWorks2014;
GO
ALTER DATABASE AdventureWorks2014 SET QUERY_STORE = ON;
GO
ALTER DATABASE AdventureWorks2014 SET QUERY_STORE CLEAR;
GO

Then, we just need an ad hoc query:

SELECT  p.Name,
        soh.OrderDate,
        sod.OrderQty
FROM    Sales.SalesOrderHeader AS soh
JOIN    Sales.SalesOrderDetail AS sod
        ON sod.SalesOrderID = soh.SalesOrderID
JOIN    Production.Product AS p
        ON p.ProductID = sod.ProductID
WHERE   p.Name = 'Road-750 Black, 48'
        AND sod.OrderQty > 10;

If I run this query one time in my cleaned up environment, I can check to see if Optimize For Ad Hoc is working by querying the cache:

SELECT  dest.text,
        deqs.execution_count,
        deqp.query_plan
FROM    sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
WHERE   dest.text LIKE 'SELECT  p.Name,%';

The results look like this:

[Query results: adhoc]

So, what’s in the Query Store? We’ll use this query:

SELECT  qsqt.query_sql_text,
        qsq.count_compiles,
        CAST(qsp.query_plan AS XML)
FROM    sys.query_store_query AS qsq
JOIN    sys.query_store_query_text AS qsqt
        ON qsqt.query_text_id = qsq.query_text_id
JOIN    sys.query_store_plan AS qsp
        ON qsp.query_id = qsq.query_id
WHERE   qsqt.query_sql_text LIKE 'SELECT  p.Name,%';

The results look like this:

[Query results: adhocquerystore]

In short, the plan is stored in the Query Store, even though the plan isn’t stored in cache. Now, this has implications. I’m not saying they’re good and I’m not saying they’re bad, but there are implications. If you’re in a situation where you need to use Optimize For Ad Hoc to help manage your cache, you may see negative impacts on your Query Store, since it’s going to capture all the plans that you avoided caching. There are mechanisms for managing Query Store behavior, though.

I’m going to modify my own Query Store to change the capture behavior from “All” to “Automatic.” This enables an internal filtering mechanism, defined by Microsoft, to eliminate some captures. When I reset everything and run the example ad hoc query one time, I get the plan stub in cache, but nothing in the Query Store (that I can see). I run the ad hoc query again and now I get a plan in the cache, but still nothing in the Query Store. When I run the ad hoc query a third time, there must be a counter somewhere (I haven’t found it yet), because I suddenly get a query in the Query Store.
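For reference, that capture-mode change in T-SQL looks something like this (AUTO is the equivalent of the “Automatic” setting shown in the database properties):

ALTER DATABASE AdventureWorks2014 SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO);
GO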

For a bit more information, let’s modify the Query Store query to include some runtime stats:

SELECT  qsqt.query_sql_text,
        qsq.count_compiles,
        CAST(qsp.query_plan AS XML),
        qsrs.count_executions
FROM    sys.query_store_query AS qsq
JOIN    sys.query_store_query_text AS qsqt
        ON qsqt.query_text_id = qsq.query_text_id
JOIN    sys.query_store_plan AS qsp
        ON qsp.query_id = qsq.query_id
JOIN    sys.query_store_runtime_stats AS qsrs
        ON qsrs.plan_id = qsp.plan_id
WHERE   qsqt.query_sql_text LIKE 'SELECT  p.Name,%';

Now, when I run this query and the one from above against the cache, I get the following information:

[Query results: adhocboth]

Here’s the interesting bit. The execution_count from cache, the top set of results, is 2, even though I ran the query three times. What happens in cache is that the plan stub is removed and the count is reset. The bottom set of results, from Query Store, shows only a single execution.

What does all this mean? Just that as we add additional behaviors to our systems, we have additional management worries. With the ability to modify the Query Store behavior, you won’t necessarily need to worry that you’re going to get hurt by your need to use Optimize for Ad Hoc.

Mar 25 2016

Happy Dance!

I’m all like:

[Snoopy dance gif]

Because I saw this on an eval:

I’ve been trying to ramp up to take advantage of my MSDN subscription and haven’t known where to start. I don’t have that excuse now.

And then I was all like:

[Happy dance gif]

Because:

We are moving a lot of stuff to Azure. I had some experience using SQL Azure but felt blind when doing it. Grant made me feel better about my experience as it is very much like he explained.

and:

Azure is becoming a REAL THING. It’s nice to get such a great primer of it.

<calming down>

I’m quite pleased to see that Azure sessions are getting such an improved reception.

<SQUEEE>

Mar 24 2016

PASS Board 2016: Update #2

Time flies. I didn’t notice that I hadn’t posted an update in February.

There’s been a lot going on since I last posted! I’ve attended the executive committee meetings. I’ve also hosted my first board meetings and I took part in my first Town Hall. I’ve been working with PASS HQ to set the agenda for upcoming meetings and we’re starting the budgeting process for FY2017. I’ve got a couple of blog posts I’ve put together on the Board Elections (for my blog) and on the goals and plans for the EVP (on the PASS blog) that are going through an editing process. I should be able to share those with you soon.

Today, I’m going to discuss a couple of things that I’ve been mulling over. They’re things that I think ought to help drive our organization forward, but I’d like to hear back from others. First up (and remember, this is just me thinking about things by writing them down; this isn’t a commitment, promise, goal, or solemn oath), I’m trying to come up with a good list of why people should become involved with PASS. I know my involvement has led to amazing things. I know lots of others who can say the same. However, I also know a lot of people who aren’t involved at all, or are involved but don’t see benefits because of it. I want to see people actively seeking out PASS: Chapters, events, knowledge. Connect, Share, Learn. I think we have a positive and unique story here.

Connect

  • We support Chapters around the world through the website, regional emails promoting local meetings, and the management tools.
  • We support the infrastructure that makes SQLSaturday events possible.
  • PASS Summit!
  • Business Analytics Conference!
  • Women in Technology.
  • Promotion of all these aims to connect people.

Share

  • Chapters again — offering people the opportunity to organize and run a local Chapter, but also a venue for sharing your knowledge.
  • SQLSaturday again, if you don’t start presenting at the local Chapter, you probably start here.
  • Summit and Business Analytics Conference.
  • Virtual Chapters.
  • 24 Hours of PASS (Incarnation X).

Learn

  • All of the above.
  • Recordings of much of the above.
  • A very productive relationship with Microsoft.

This is a unique and rich community that we have built. Frankly, I want more. I’m greedy. I don’t just want to add to this list, or improve on the stuff already on it, but make people actively want to get involved. Help me out here. Let’s get others to become as passionate about this stuff as we are!

Mar 21 2016

Cross Database Query in Azure SQL Database

You can’t query across databases in Azure SQL Database… or can you?

Let’s check. I’ve created two new databases on an existing server:

[Screenshot: database list showing the two new databases]
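If you want to create a similar pair yourself, something like this, run against the master database on the logical server, will do it (the names match the rest of this example; the service objective is just an assumption for illustration):

-- CREATE DATABASE must be the only statement in its batch in Azure SQL Database
CREATE DATABASE DB1 (SERVICE_OBJECTIVE = 'S0');
GO
CREATE DATABASE DB2 (SERVICE_OBJECTIVE = 'S0');
GO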

I’ve created two tables on each respective database:

CREATE TABLE dbo.DB1Table (
     ID INT IDENTITY(1, 1)
            NOT NULL
            PRIMARY KEY,
     Val VARCHAR(50)
    );


CREATE TABLE dbo.DB2Table (
     ID INT IDENTITY(1, 1)
            NOT NULL
            PRIMARY KEY,
     Val VARCHAR(50)
    );

Now, let’s query the DB2 table from the DB1 database:

SELECT  *
FROM    DB2.dbo.DB2Table AS dt;

And here’s the lovely error message:

Msg 40515, Level 15, State 1, Line 35
Reference to database and/or server name in ‘DB2.dbo.DB2Table’ is not supported in this version of SQL Server.

So, like I said, you can’t do three-part-name cross-database queries in Azure SQL Database… oh wait, that’s not quite what I said, is it? Let’s do this. Let’s create a new security credential within DB1 for a login that can get us into DB2:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'securitymatters';

CREATE DATABASE SCOPED CREDENTIAL DB2Security 
WITH IDENTITY = 'Grant',
SECRET = 'securitymatters';

Then, we’ll use that to define an external data source:

CREATE EXTERNAL DATA SOURCE DB2Access
WITH (
	TYPE=RDBMS,
	LOCATION='myservernotyours.database.secure.windows.net',
	DATABASE_NAME='DB2',
	CREDENTIAL= DB2Security);

With this, we can put Elastic Query (corrected from Polybase; see note below) to work and create an external table:

CREATE EXTERNAL TABLE dbo.DB2Table (
	ID int,
	Val varchar(50))
WITH
(
	DATA_SOURCE = DB2Access);
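
With that in place, here’s what the cross-database query looks like when run from DB1 (note there’s no three-part name; the external table behaves like a local one):

-- run from DB1; the rows come back from DB2 through the external data source
SELECT  dt.ID,
        dt.Val
FROM    dbo.DB2Table AS dt;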

And that’s it. If I query dbo.DB2Table from DB1, I get to see data in DB2. In short, you can do a cross-database query within Azure SQL Database. Yeah, it’s going to require some setup and possibly some code modifications, since you can’t use the old three-part naming for the query, but you can do it. Further, note that these are Standard, not Premium, databases. Further further, they’re not part of an elastic pool. It’s just using the external data source and external table to connect the two databases. However, if the one thing keeping you from moving into Azure SQL Database is the inability to query across databases, that obstacle is gone.

 

Mar 17 2016

Opportunities To Talk

It’s weird being an introvert who likes to talk to people, but what can I do. I like talking to people. I have a number of upcoming trips, quite literally all over the world, that provide us with the opportunities to get together and have a chat.

First, I’ll be at SQL Saturday Boston (the 500th SQL Saturday event, HUZZAH!), this weekend, March 19th 2016. I’ll be talking about the Query Store and I’ll be doing a presentation for PASS since this is a milestone event. The first SQL Saturday event in Boston was #34, six years ago, which I helped organize. It’s been quite the journey.

I’m going to SQL Saturday Madison on April 9th. I’ll be talking about the Query Store and how to automate your database deployments. I haven’t been in Wisconsin for years.

Also in April, on the 19th, I’ll be heading down to Orlando. I’m pleased to be able to say I have the honor (and I really do consider it that way) to be able to take part in SQL Intersection. Check out the speakers there. Amazing. I’m doing a couple of new sessions on improving your T-SQL and on hybrid Azure environments.

Then things get busy. First, on May 2nd and 3rd, I’ll be at the PASS Business Analytics Conference. I’m going there to learn as well as support the event in my role as the PASS EVP. I’m very excited about it. Last year the BAC was great. This year looks even better.

On May 4th, yes, leaving one to get to the next, I fly out to merry old England where I’m presenting at the SQLBits conference. Bits is hands down one of the great events each year. I truly look forward to it and to getting to talk with all my friends from over the pond.

I get to come home for a few days, and then, something completely new. I’m off to Wroclaw Poland for the SQL Day Poland conference, May 16-18. This will be the furthest from home I’ve ever travelled to present. It will be my first time ever in Poland. I’m excited like a puppy dog about this event. I’m doing a pre-conference seminar and a couple of sessions, all about query tuning and execution plans. I don’t know when, or if, I’ll be back over there again, so please, take advantage of this special opportunity.

Back in the states, in June, I’m doing a road trip (still unnamed, I need help with that) through the state of Ohio hitting multiple SQL Server user groups. I’ll do another couple of posts on this event as we get it slightly more nailed down (I still haven’t picked a topic).

The last thing I have scheduled currently this year is another new trip. Remember that record I’m going to set by flying off to Poland in May? Yeah, well, it’s only going to stand for three months. In August, I’m travelling to India for the SQL Server Geeks Conference. There I’ll be presenting a pre-conference, all-day, seminar as well as a couple of sessions. And yeah, puppy dog time again.

I’m going to try to get to a SQL Saturday event in July and maybe another in August. Nothing picked yet. I’m open to suggestions.

Please, if you come to one of these events, introduce yourself. I do want to talk to you. That’s why I’m there.

Mar 14 2016

Leadership Lessons

Not for you, for me.

I’m sure you’ve heard the statement: Praise in public. Criticize in private.

I agree with this approach. However, I find it extremely difficult to do. It’s one of the fundamental proofs that all leadership, all life for that matter, is about constant practice and discipline. It’s not enough to know something. It’s not enough to practice something occasionally. To get good at this stuff, you need to practice a lot.

Let me tell you about a recent failure on my part. My 17-year-old daughter had friends over for a sleepover (yeah, they still do that). She makes her own breakfast and starts eating. I remind her to ask her friends what they want. She does so in this really irritated manner. Of course, the friends don’t want anything because she’s so clearly put out. I proceed to lecture her on the mistake and how she should have done it. She’s embarrassed, and I realize I screwed up.

Now, you can say that’s just parenting, and you wouldn’t be completely wrong. However, the same lessons apply in the business world. It’s so easy to see people doing stuff that is wrong and openly correct them. The hard way, the right way, is to get the correction in, but do so without being critical, in public. You can, you should, be critical of people. You just need to be cautious about how, when, and most importantly, where.

I’m typing this up because, in addition to my screw up as a parent, I’ve been a little too openly critical of others of late and I need to remind myself of the right way to get things done. There. I’ve been warned. I hope you enjoyed this little chat.