Volume 6, Issue 3
Reprinted from Product Development Best Practices Report

UNTANGLING THE MEASUREMENT MESS

Fast Cycle Time author Christopher Meyer thinks most organizations are in the middle of a measurement mess. Meyer (who introduced the measurement "dashboard" concept a few years ago in his Harvard Business Review article, "How the Right Measures Help Teams Excel") says that when most organizations, like Harley-Davidson, went through their reengineering periods in the 1980s, moving from traditional command-and-control to distributed power and cross-functional teamwork, they often left their measurement systems largely untouched.

As a result, they find themselves saddled with multiple measurement systems--financial, functional, and personnel--that don't talk to each other, leaving people unable to quickly see a straightforward link between measurement and action. (When it comes time to evaluate individual performance, for example, Meyer says the measures are typically linked only haphazardly to functional measures and, at best, tangentially to financial measures, unless the individual is senior enough to have P&L goals.)

Noting the folly of trying to drive fast-paced contemporary business strategies and integrated organizational structures with antiquated measures from a pre-reengineered era, Meyer identifies several contributors to this measurement mess:

Too many measures: "We're driving Ford Escorts using dashboards from a Boeing 747. People stare at the knobs and dials when they should be looking out the window."

Measuring what's easy to measure. Says Meyer, "Engineers and finance people would rather measure precisely an unimportant variable than measure an important variable imprecisely." But an imprecise measurement--with common-sense judgment added--is often exactly what's needed to move significant action forward.

Measuring mostly results when predictive measures would have the most value. Too often, says Meyer, it's as though the aim of our measurement systems is to inform us that our car just drove off a cliff, and who was driving, rather than alerting us--in time to take corrective action--that a cliff is on the horizon!

Measures are cost-centered and use dollars as the "default denominator": what were the actual development costs of that product? But what about other critical factors, asks Meyer, like learning, speed of market penetration, and insight gained?

Measures are often functionally based. With product development increasingly a cross-functional game, functional measures have two big shortcomings. First, the different functions often speak different languages; one function's measures may mean little to another. Second, traditional functional measures don't look at things like team effectiveness, the capabilities needed to work on teams, or whether a team started fully staffed, up to speed, and on time.

Most measures are based on history; few are real time. What did a product cost to develop? How long did it take? How much revenue did it generate? Says Meyer, "We rarely look at what revenue was passed up. There was a famous McKinsey study that showed that if a project was 50 percent over budget but on time in a rapidly moving market, you'd lose only two to three percent of the profits over the product's life, but if it was six months late and on budget, you could lose 28 to 35 percent of profits." (A back-of-the-envelope illustration of this trade-off appears after this list of contributors.)

Measures are often seriously inaccurate and misleading. Meyer says this is the worst contributor. Unless your organization works rigorously with activity-based cost accounting (relatively few do), the odds are slim that you know the true cost of developing a product (or any other significant activity).
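
The budget-versus-schedule trade-off in the McKinsey figures Meyer quotes is easier to feel with a little arithmetic. The sketch below is not from Meyer or from the study; it is a toy profit model in Python with invented parameters (a fixed 12-month market window, revenue that ramps up to a peak and back down, a development budget that is small next to lifetime revenue), meant only to illustrate why, when the market window is short, shipping late tends to cost far more profit than overspending on development.

    # Toy "cost of delay" model -- illustrative only. The market window, peak
    # revenue, margin, and development budget are invented assumptions, not
    # Meyer's or McKinsey's figures, so the exact percentages will differ.

    def lifetime_profit(delay=0.0, overrun=0.0,
                        window=12.0, peak=10.0, margin=0.4, dev_budget=1.0):
        """Profit over a fixed market window with a triangular revenue profile.

        Entering `delay` months late forfeits the early ramp, because the window
        closes on a fixed date regardless of when you ship; a budget `overrun`
        only adds a one-time development cost. Assumes delay <= window / 2.
        """
        assert 0 <= delay <= window / 2
        total_revenue = peak * window / 2            # area of the full triangle
        lost_revenue = peak * delay ** 2 / window    # ramp revenue missed by late entry
        return margin * (total_revenue - lost_revenue) - dev_budget * (1 + overrun)

    base = lifetime_profit()
    over = lifetime_profit(overrun=0.5)   # on time, 50% over development budget
    late = lifetime_profit(delay=6.0)     # on budget, six months late

    print(f"50% over budget: {100 * (base - over) / base:.0f}% of lifetime profit lost")
    print(f"6 months late:   {100 * (base - late) / base:.0f}% of lifetime profit lost")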

Why Measure?

Meyer asserts that a healthy measurement system has a simple purpose: to provide goal-centered feedback to allow you to detect problems and correct course. Good measures guide (Where are we going? Are we on the best path to get there?), forewarn (Is there an unanticipated problem or opportunity on the horizon?), inform (quickly and concisely, says Meyer: the few essentials needed to keep moving), and enable action.

Says Meyer, "We have too many measures in our organizations. If you have any measure that does not lead to action, trash it. Think of it like a gauge in your car whose needle can swing left or right while you ignore it--you don't even think about it: toss it in the circular file, it adds no value."

Creating good predictive measures is relatively simple, says Meyer:

  • Map the complete process. You have to understand what the work is. Map it with all the stakeholders involved, including the people who actually do the work. It's best to map a recently developed product or service.
  • Define critical sub-processes (technical and social), tasks, and capabilities. Says Meyer, "We found years ago that most of the problems are in the white space between groups, in the handoffs."
  • Identify what drives results. Meyer illustrates with an example from Quantum Corp., which found a few years ago that it could take 22 months to define a new product in a marketplace where the product life is 12 months. Quantum came to see that one of its drivers was having defined decision points and a clear product definition early, even if it later changed, so it set an arbitrary law: all contract books (the specs, the business plan, the schedule, the capabilities and numbers required) would be done in 45 days.
  • Define measures for result drivers and cycle-time start/stop for sub-processes. In product development, says Meyer, this is where show stoppers often turn up: he cites examples like the number of spec and requirements changes, your percentage staffing against plan, turnover, how many parts you're reusing versus new or unique parts, and the percentage of time lost to other projects. (A simple illustrative sketch of such a measure set follows this list.)

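To make that last step concrete, here is a minimal sketch, not from Meyer, of what a small set of predictive measures might look like when tracked as a simple project dashboard. The measure names echo the examples he cites, but the values, targets, and the idea of flagging off-track items for a detect-and-correct conversation are illustrative assumptions.

    # Minimal sketch of a project "dashboard" of predictive measures.
    # The current values and alert targets are invented for illustration;
    # a real team would define its own from its mapped process.

    from dataclasses import dataclass

    @dataclass
    class Measure:
        name: str
        value: float
        target: float
        higher_is_worse: bool = True     # e.g., turnover: more is bad

        def off_track(self) -> bool:
            if self.higher_is_worse:
                return self.value > self.target
            return self.value < self.target

    dashboard = [
        Measure("spec/requirements changes this month", value=9, target=5),
        Measure("staffing vs. plan (%)", value=70, target=90, higher_is_worse=False),
        Measure("team turnover (people)", value=2, target=1),
        Measure("parts reused vs. new/unique (%)", value=35, target=60, higher_is_worse=False),
        Measure("time lost to other projects (%)", value=25, target=10),
    ]

    for m in dashboard:
        flag = "ACT" if m.off_track() else "ok"
        print(f"[{flag:>3}] {m.name}: {m.value} (target {m.target})")
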
New Measurement Model

Meyer outlines a new seven-point measurement model:

  • Focus on the critical few. He thinks you should have no more than fifteen measures at the top level for any project.
  • Measures should be designed and used locally, especially for projects. "Those who are doing the work can create good predictive measures because they know where the problems come in." (And what if the local measures don't give corporate the information it wants? Keep two sets of books, advises Meyer).
  • Measures should be real-time...imprecise, perhaps, coupled with good judgment. Waiting for precision can be costly.
  • Retain traditional results measures--augment and balance them with process measures.
  • Develop, use, and align measures with strategy. Asks Meyer, "If you develop measures locally, how do you make sure they align with strategy? I think that senior management's role is to make sure that local measures nest within corporate strategy, that their own executive performance measures reinforce what happens locally, and that the measures align to have the right critical conversations. That doesn't mean they have to be the same measures. Asking someone in a project team to look at earnings per share is ridiculous, pushing EVA to that level may be noble, but it isn't very informative."
  • Support knowledge work. When he began his cycle-time work, Meyer says he couldn't figure out why it took so long for people to get with it. It gradually dawned on him that the way people experienced measures was a problem: it wasn't really cycles that were being measured, they felt, it was them. Create a mindset where measures are used for knowledge, not to beat people. Use the measures for learning, to detect and correct, full stop.

Have project teams create their own dashboards, advises Meyer. But in today's post-command-and-control, networked organization, with widely distributed power, information, too, must be widely distributed. He advocates extending the dashboard model to Web-based measurement: let everyone know what all the parts are doing, what they see as important enough to measure, and how the parts integrate into a whole enterprise. It lets everyone know the score in real time. But remember, concludes Meyer, "It's not just the measures...it's how the measures are used."

Copyright 1999 The Management Roundtable, Inc.