A recognized expert on measurement and evaluation, Jack Phillips is the author or editor of more than thirty books, including The Consultant’s Scorecard: Tracking Results and Bottom-Line Impact of Consulting Projects, The Human Resources Scorecard, Return on Investment in Training and Performance Improvement Programs, and How to Measure Training Results.
Phillips is the Chairman of the ROI Institute™. He also provides consulting services for businesses around the world. His expertise is based on more than thirty years of corporate and academic experience.
We talked to Phillips about The Consultant’s Scorecard and the process he developed for measuring return on investment for consulting projects.
McLaughlin: What is the Consultant’s Scorecard, and why is it an important tool for consultants and clients?
Phillips: Let’s first look at it from the client’s perspective. There is a tremendous interest in return on investment (ROI) these days, and many clients want to know the payoff of a consulting project. The Consultant’s Scorecard is a systematic way to develop a balanced perspective on the success of consulting projects. Clients can use the Scorecard to see the monetary payoff of a project and to examine cost versus benefit.
Consultants should welcome the Scorecard because it generates solid data on the success of their projects. It's also a great way to demonstrate your value, and it can serve as a strategic marketing tool.
McLaughlin: What, specifically, does the Scorecard measure?
Phillips: ROI is only one element the Scorecard measures. Other measures are important for creating a balanced perspective on the impact of the consulting process. Building on Robert Kaplan and David Norton's concepts in The Balanced Scorecard, we developed six measures for the Consultant's Scorecard:
We capture satisfaction with the consulting intervention, the learning that has taken place, and the success of application as the new process or system is implemented. We track what people are doing differently and how well it is working.
Then we assess the business impact of the project and evaluate intangibles, such as employee and customer satisfaction. Finally, using the business impact assessment and the cost of the project, we calculate the ROI for the project.
These six quantitative and qualitative measures track the chain of impact for a consulting project and provide a balanced profile of success up to and including ROI, isolating the effects of the project at each step. From the client's perspective, the Scorecard adds up to ultimate accountability for a project.
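The ROI calculation Phillips mentions, setting the monetary benefits from the business impact assessment against the cost of the project, can be sketched in a few lines. The sketch below uses the widely cited "net benefits divided by costs" definition from ROI analysis; the dollar figures are illustrative, not from the interview.

```python
def roi_percent(monetary_benefits: float, project_costs: float) -> float:
    """ROI expressed as a percentage: net benefits divided by costs."""
    net_benefits = monetary_benefits - project_costs
    return net_benefits / project_costs * 100

def benefit_cost_ratio(monetary_benefits: float, project_costs: float) -> float:
    """Companion benefit-cost ratio (BCR): total benefits over costs."""
    return monetary_benefits / project_costs

# Illustrative figures: a project costing $200,000 whose measured
# monetary benefits come to $500,000.
print(roi_percent(500_000, 200_000))         # 150.0 (i.e., 150% ROI)
print(benefit_cost_ratio(500_000, 200_000))  # 2.5
```

Under this definition, a 150% ROI means each dollar invested came back and brought another $1.50 in net benefits with it.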
From the consultant’s viewpoint, it’s not always a welcome tool. In reality, no one wants to be measured, let alone be measured in six different ways.
McLaughlin: Is there more demand for measuring ROI and the other data in the Scorecard for consulting today than in the past?
Phillips: Yes, both because of the high cost of consulting projects and because consulting has a tarnished image. The consulting process sometimes engenders criticism, and even animosity, toward consultants.
It doesn’t help that criticism of consultants has been brought to public attention by books such as Consulting Demons and Dangerous Company, as well as by the comic strip Dilbert. Some criticism may be fair, some not, but there has been a lot of publicity about consultants.
And so there is more demand for greater accountability for consulting results. Because ROI is already used in so many contexts, it's a natural measure for clients to request. Clients may not know whether it's feasible to determine ROI specifically for consulting interventions, but they are asking for it more and more. This is not surprising, given that expenditures for consulting projects can be huge.
McLaughlin: Do you think that integrating ROI and the other measures into a consulting project would potentially create a stronger relationship between client and consultant?
Phillips: Yes, definitely. I think the Scorecard is a crucial tool for building future business. There may be some fear that the client might discover your project is not adding the value it should, but I would suggest that the client would eventually learn that anyway.
If you collect data, you can make changes and adjustments to deliver the intended results. That strengthens the relationship and argues for the client to continue working with you; failure to do these measurements can undermine the consultant’s credibility, even if the consultant is adding value. Telling a client that you don’t know what effect you had or will have is not a good response these days.
McLaughlin: Is it really possible to isolate the impact of a consulting project from other factors that affect business performance?
Phillips: Yes. In the book, we describe eight ways to do that. I’ll outline three of them:
The classic way is to implement the consulting project in one division or group and withhold it from a similar control group. Then you compare the performance of the two groups on the business measures that are driving the project. Of the roughly 500 impact studies we have done, about a third used a control-group arrangement.
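As a toy illustration of the control-group arrangement, the project's impact is read as the difference in the tracked business measure between the group that got the project and the comparable group that did not. The numbers below are invented for the example, not from Phillips's studies.

```python
# Toy illustration of the control-group technique. The measure here is
# assumed to be sales growth over the project period; both figures are
# invented for illustration.

pilot_sales_growth = 0.12    # division where the consulting project ran
control_sales_growth = 0.05  # comparable division without the project

# Impact attributed to the project: the gap between the two groups.
impact_attributable_to_project = pilot_sales_growth - control_sales_growth
print(f"{impact_attributable_to_project:.0%}")  # 7%
```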
Another approach is to examine the trending of data. Using the available pre-project data, you project where that data would have been without the consulting project. Then you compare the trend data to the actual data.
There are a couple of conditions for this to work. First, the owners of the process have to be able to predict whether the pre-project trend would have continued without the project. Second, they have to know whether any other new influences entered the process after the consulting project was implemented; the technique won't work if new influences did enter. About fifteen percent of our studies meet those two conditions, and in those cases trending is a useful tool.
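The trend-line technique can be sketched as a simple least-squares projection: fit a line to the pre-project data, project it forward, and credit the project with the gap between actual and projected values. Everything below (the monthly figures and the choice of a linear model) is an illustrative assumption, not data from Phillips's studies.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    xs, ys = list(xs), list(ys)
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

# Invented monthly values: months 1-6 before the project, 7-12 after.
pre = [100, 102, 104, 106, 108, 110]
actual_post = [118, 121, 124, 127, 130, 133]

slope, intercept = linear_fit(range(1, 7), pre)
projected_post = [slope * m + intercept for m in range(7, 13)]

# Improvement credited to the project: actual minus the projected trend.
attributable = [a - p for a, p in zip(actual_post, projected_post)]
print([round(v, 1) for v in attributable])  # [6.0, 7.0, 8.0, 9.0, 10.0, 11.0]
```

Note that this only attributes the gap correctly under the two conditions above: the pre-project trend would have continued, and no other new influence entered the process.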
An approach that is more likely to work is to use expert estimation from the people who are driving performance. The group who understands the business measures better than anyone meets in focus groups to analyze the impact of the consulting work and the other factors related to shifts in the business.
We make sure that we have considered all the factors that have driven the changes, and then ask the groups to determine which factors, besides the consulting project, really caused those changes. It may be something external in the market, or it may be some other internal process that was adjusted. We consider the factors one at a time, and allocate a percentage of the change to each factor. We examine what percentage of the change was due to the consulting project, what percentage was due to external market factors, and so on.
The team may be uncomfortable with these estimates, so we also ask them to indicate their level of confidence with those allocations, using 100% for certainty and zero for no confidence at all. We then establish an error range around the estimates. When it comes to a final answer for the impact of the consulting intervention, we always use the low side of the error range. One of our guiding principles is to understate the impact of the consulting intervention if there is any doubt. If you overstate your impact, you won’t be invited back. Understate, and you can at least stay.
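One common way to implement the conservative adjustment Phillips describes is to multiply each allocation by the group's stated confidence in it, which amounts to taking the low side of the error range. A minimal sketch with invented numbers:

```python
def conservative_contribution(total_change: float,
                              allocation_pct: float,
                              confidence_pct: float) -> float:
    """Discount an expert's allocation by their stated confidence.

    Multiplying the estimate by the confidence level is one standard
    way to take the low side of the error range around the estimate.
    """
    return total_change * (allocation_pct / 100) * (confidence_pct / 100)

# Invented example: a $100,000 improvement in a business measure, with
# the focus group attributing 40% of it to the consulting project and
# stating 80% confidence in that allocation.
print(round(conservative_contribution(100_000, 40, 80)))  # 32000
```

So only $32,000 of the $100,000 change, rather than the raw $40,000 allocation, would be credited to the project, understating the impact as Phillips's guiding principle requires.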
About half the time when I first discuss expert estimation with a group, I find resistance. They say, "You can't use that approach; it's too subjective." But it may be the only technique that will work when multiple influences are present.
McLaughlin: When you present findings based on expert estimation, what is the reaction from clients?
Phillips: We describe the difficulty of separating the various factors, show how much our project contributed to success, and explain the conservative estimation process we used. With a few exceptions, clients buy into the findings. Most senior executives live in a vague, ambiguous world anyway, and they are used to estimations and subjective input; many of them run their businesses on best guesses.
McLaughlin: Are clients becoming more willing to make the extra investment in time and money to study the ROI of projects?
Phillips: Yes, but it's slow. One of the most important issues is that the evaluation needs to be as objective as possible. Ideally, no one on either the consulting team or the client team should facilitate the study; an external person or group is better for objectivity.
Cost is another issue. It could cost up to five percent of the project cost to evaluate results. I suggest you build the evaluation costs right into the project. If the amount is significant, see if the client will share that cost with you.
McLaughlin: Any tips on how a consultant could get started using the Scorecard?
Phillips: Well, consultants should look at the book, of course! We also offer workshops and a certification process to teach people how to coordinate the method. That’s probably the best way to build internal capability and keep costs down.
McLaughlin: Last question: what’s on your research agenda?
Phillips: We continue to expand the Scorecard and ROI process into different areas. We started building this approach in the training area, then moved to human resources, consulting, and organizational change. Our most recent work is with technology groups, and we plan to expand that to public relations, supply chain management, and procurement.
We are working more in the public sector these days. Last year, sixty-two percent of our revenue was in government. We have also worked with educational institutions. We work in thirty-six countries.
Also, we are building more and more case studies and, tied in with that, we are developing software that is both a database and a tool for conducting studies. We hope to grow that database to somewhere between 50,000 and 100,000 studies.
McLaughlin: Thanks for your time.
Find out more at the ROI Institute.