In the October 2008 issue of Inc. Magazine, Joel Spolsky discusses Robert Austin’s 1996 book, “Measuring and Managing Performance in Organizations.” As Spolsky explains, 

The book’s central thesis is fairly simple: When you try to measure people’s performance, you have to take into account how they are going to react. Inevitably, people will figure out how to get the number you want at the expense of what you are not measuring, including things you can’t measure, such as morale and customer goodwill.

I like the summary in the book’s Foreword by DeMarco and Lister (of Peopleware fame):

When you try to measure performance, particularly the performance of knowledge workers, you’re positively courting dysfunction.

Even the heading in Spolsky’s column gives a good summary:

Employees will always game incentive plans

Anyway, Spolsky gives several charming anecdotes of incentive plans that backfired. In the last two paragraphs he brings it home to his own company, Fog Creek, and explains how he avoids the problems:

We simply established as a rule the idea that gaming the incentive plan was wrong and unacceptable.

Now Spolsky is a bright guy, and I’m generally quite impressed with his comments (my subscription to Inc. Magazine started when they added him as a columnist), but either I misunderstood him or he missed the point here.

Austin did a Ph.D. dissertation on why employee measurements (like commission-based incentives) are essentially impossible to get right because of the conflicting signals they send to the person being measured, and Spolsky’s response is, “It’s not so hard, just tell the employees not to cheat”!

[This reminds me of a case I reviewed at the Federal Trade Commission. An attorney wanted to bring a case, and he claimed that a company was making millions of dollars from certain allegedly deceptive advertisements. He proposed a fine of $300,000 and explained that the fine could be just one-tenth of the illegal profits because “people will obey the law if you tell them to do so.” Although my political views tend more toward laissez-faire, I was not so optimistic about the inherent goodness of the businessman; would he really walk away from millions in profits because of a few hundred thousand in fines? I don’t think so!]

The problem is that not all distortions are as flagrant as those cited by Spolsky. As long as you attempt to measure individual performance, including through an incentive compensation plan (paying salespeople on commission), you will fall into the trap described by Austin. The problem Austin describes is not that people are going to cheat unless we tell them not to cheat. The problem is that we cannot set up explicit measures of all the factors that are important, and measuring any subset will always result in sending the wrong message to employees.

In most jobs, we expect employees to exercise some discretion (unless, of course, you follow the views of Frederick Taylor: “We do not want any initiative. All we want of them is to obey the orders we give them, do what we say, and do it quick.”). Which customer should I contact? Which phone call should I return first? Should I help a colleague complete a sale (where I don’t get a commission) or should I work on my own sale (where I do get a commission)? In most situations faced by such employees, there is a good argument to be made for either choice (and neither would be “cheating”).

The question is, what factors do we want employees to consider as they weigh two legitimate alternatives?

Sometimes it would be better for the company if an employee acted contrary to the measurement incentive (e.g., helping a colleague with a difficult sale). But this is not always clear-cut, and choosing among legitimate options can hardly be identified as “cheating.” The difficult cases are ones where the correct decision is not obvious (so accusations of cheating are not realistic). Austin’s point is that we introduce a distortion when we place an artificial incentive that favors one result over another. 

Spolsky’s simplistic conclusion, “tell them not to cheat,” completely misses the point of Austin’s book. If Fog Creek pays salespeople a commission, then, even without cheating, the salespeople are being sent the message that certain uses of their time and effort are worth more than others. This message is not always correct and will, in time, become counter-productive.

Spolsky concludes:

The problem with most incentive systems is not that they are too complicated–it’s that they don’t explicitly forbid the kind of shenanigans that will inevitably make them unsuccessful.

I respectfully disagree. What I get from Austin is the following: The problem with all incentive systems is that they cannot possibly measure all the factors that should be considered, and any subset of measurements creates distortions.
