Managers in all businesses are driven to measure key elements of their performance, because we all know that old maxim:

‘If you can’t measure it, you can’t manage it.’

In blogs and white papers you will find a raft of must-have KPIs that promise the control you crave, but is it really that easy?

  Key Messages
  • Always have your people and users in mind when designing a performance management process.
  • Wherever possible, focus measures on the outcomes experienced by the customer and the factors that influence those outcomes.
  • Using operational data that already exists in most businesses, it is possible to create leading measures that drive action.
  • Combining measures to create forward-looking lead indicators of customer experience is possible and can be very effective in producing a simple measure that drives action. However, the drivers of that measure must be transparent to all stakeholders.
  • The balance between simplicity for clarity and detail for action is an ongoing challenge. Constantly evaluate your performance management process against your customers and business priorities.
  • With the IoT changing our perspective on data, and with easy-to-use business intelligence solutions readily available, technology is not the barrier to better measures. The real challenge is having the mindset to try different approaches.
This saying on measuring performance is often attributed to the management guru Peter Drucker. In fact, it is a misquote: according to the Drucker Institute’s blog post on ‘Measurement Myopia’, his view was far more nuanced. He believed that measuring results is crucial to performance, but he also believed that the relationship between a manager and their people is just as important.

“Work implies not only that somebody is supposed to do the job, but also accountability, a deadline and, finally, the measurement of results —that is, feedback from results on the work and on the planning process itself,” Drucker wrote in Management: Tasks, Responsibilities, Practices.

But for all that, Drucker also knew that not everything could be held to this standard.

“Your first role . . . is the personal one,” Drucker told Bob Buford, a consulting client then running a cable TV business, in 1990. “It is the relationship with people, the development of mutual confidence, the identification of people, the creation of a community. This is something only you can do.” Drucker went on: “It cannot be measured or easily defined. But it is not only a key function. It is one only you can perform.”

Experienced managers will naturally relate to this balance, and I know many (including myself) will have imagined the possibility of having one Key Performance Indicator (KPI) that could predict how customers would experience your service provision. One simple measure that your teams could use as a focus for their primary mission: to keep customers satisfied, loyal and profitable. The limitation of most measures of customer satisfaction and loyalty is that they look in the rear-view mirror: they ask questions after the fact. Far better to create a leading indicator, but how?

To get a better feel for customer satisfaction, many managers spend time in the field talking to customers and their teams. Some will create rafts of measures to monitor and improve their operations, their logic being that a well-performing team is more likely to have loyal customers. However, there is a temptation to measure everything, which can confuse team members. To overcome this, managers bring focus by introducing key KPIs and dashboards that make it easier to see the issues and take action. More sophisticated businesses look towards the Balanced Scorecard methodology, in which a more holistic view is taken of the operation: they examine not only financial and process efficiency, but also organisational capacity and customers in relation to their strategic goals. This balanced approach is sensible, but it can be too ‘management speak’ for the people at the sharp end of the business. The key challenge is to create measures that drive the right behaviours and culture, not ones that people find ways to work around. So it is not quite as simple as many make out. From my own experience, I always felt it would be extremely beneficial to develop a simple measure that was:

  1. Easily understood by everyone.
  2. Able to give us a forward view that a particular piece of equipment was likely to cause severe customer irritation and dissatisfaction.

Our business was injection moulding systems, and we knew that something was going wrong at a customer when a machine’s spare-parts spend increased, fault reporting was high and the same problem recurred over a 12-month period. We created a ratio of these three indicators and found that, at machine level, we could rank problem systems and identify those likely to turn into an irate customer. Our thinking was that not only could this be used by local teams to bring focus to a specific customer issue, it also gave an indication of how well teams were managing their installed base. Unfortunately, for a number of reasons, we were unable to operationalise this strategy, and I often wondered how effective it would have been.
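To make the idea concrete, here is a minimal sketch of that ratio approach: combine the three operational signals per machine into one score and rank the fleet. The field names, the formula and the normalisation against the fleet average are illustrative assumptions, not the original measure.

```python
# Sketch: rank machines by a composite score built from spare-parts spend,
# fault reports and recurring faults. Weights and fields are assumptions.

from dataclasses import dataclass

@dataclass
class MachineStats:
    machine_id: str
    parts_spend: float   # spare-parts spend over the last 12 months
    fault_reports: int   # fault reports in the period
    repeat_faults: int   # faults that recurred within 12 months

def risk_score(m: MachineStats, fleet_avg_spend: float) -> float:
    """Higher score = more likely to produce an irate customer."""
    spend_ratio = m.parts_spend / fleet_avg_spend if fleet_avg_spend else 0.0
    return spend_ratio * (1 + m.fault_reports) * (1 + m.repeat_faults)

def rank_fleet(machines: list[MachineStats]) -> list[tuple[str, float]]:
    """Return (machine_id, score) pairs, worst machine first."""
    avg = sum(m.parts_spend for m in machines) / len(machines)
    scored = [(m.machine_id, risk_score(m, avg)) for m in machines]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

fleet = [
    MachineStats("IM-101", parts_spend=1200, fault_reports=2, repeat_faults=0),
    MachineStats("IM-102", parts_spend=5400, fault_reports=9, repeat_faults=3),
    MachineStats("IM-103", parts_spend=800,  fault_reports=1, repeat_faults=0),
]
for machine_id, score in rank_fleet(fleet):
    print(machine_id, round(score, 2))
```

The exact weighting matters less than the ranking it produces: the point is to surface the handful of machines most likely to generate an irate customer.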

Fortunately, I have had the opportunity to witness these ideas in action. For example, Inca, who design and manufacture digital printers, gave themselves the goal of improving equipment productivity, and hence the satisfaction of their customer base. For their technology, the performance of the print head, which controls up to 256 ink-delivery nozzles, is critical to uptime. By combining three key performance parameters of the machine (alarms, nozzle deviations and productivity), Inca could rank their equipment by likelihood of causing customer dissatisfaction. They created simple dashboards that clearly identified the priority machines to work on. This allowed them to identify faults before they became critical, thereby reducing costs, enabling their customer support centre to identify issues more effectively, maintaining better print quality and improving the planning of engineer visits around customers’ production runs. As they developed their management process, Mark Noble, Customer Support Director, described important lessons learned:

  1. The temptation to over-complicate the dashboard: The ability to easily bring different measures together into an ‘easy to read’ dashboard came from an investment in ‘Business Intelligence’ software. However, the temptation to add lots of interesting, but not necessarily essential, measures proved too great. Before long they found that a great idea had become too complicated to use effectively. The ‘Keep It Simple, Stupid’ (KISS) principle is very important when thinking about how to make KPIs effective.
  2. A focus on customer outcomes improves loyalty: Because the programme had an intense focus on the most important customer outcome, print quality, Inca found that they had important information that could help improve their customer’s business. When they detected a problem, they started to call customers to inform them of the issue and resulting actions that needed to be taken. Rather than provide this feedback at the operator/supervisor level, the key account manager would contact the senior operations director and inform them of the situation and action plan. They saw two unexpected benefits:
    1. Customers loved this personal service, which added value to the bottom line of their business. Because it was a senior-management business discussion, recommendations were quickly actioned.
    2. Inca’s own sales force became enthusiastic about the power of service in helping them drive revenues and loyalty.

A second example of this approach comes from Alec Pinto of Peak-Service, part of the Qiagen corporation, a €1bn technical services supplier for medical, analytical and industrial equipment. As part of their transformation journey, they created a customer experience indicator that aggregated measures of machine utilisation, revisits, call response time and call completion time. They made it very transparent how this indicator was driven by their four KPIs, and beneath this, their teams could easily drill down to the next level of measures required for more detailed problem solving.
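One way such an aggregate can be built, while keeping the drivers transparent, is to normalise each KPI onto a common 0–100 scale and average them, returning the per-KPI breakdown alongside the headline number. The KPI names below match the text; the targets, scaling and equal weighting are illustrative assumptions, not Peak-Service’s actual method.

```python
# Sketch: a composite customer experience indicator that keeps its
# per-KPI breakdown visible. Targets and weights are assumptions.

KPI_TARGETS = {
    # kpi: (worst acceptable value, target value, higher_is_better)
    "machine_utilisation": (0.50, 0.95, True),   # fraction of time in use
    "revisit_rate":        (0.30, 0.05, False),  # fraction of jobs revisited
    "call_response_hrs":   (24.0, 2.0, False),   # hours to respond
    "call_completion_hrs": (72.0, 8.0, False),   # hours to complete
}

def kpi_score(kpi: str, value: float) -> float:
    """Map a raw KPI value onto 0-100, where 100 means on-target."""
    worst, target, higher_better = KPI_TARGETS[kpi]
    if higher_better:
        frac = (value - worst) / (target - worst)
    else:
        frac = (worst - value) / (worst - target)
    return max(0.0, min(100.0, frac * 100.0))

def experience_indicator(readings: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the overall indicator plus the per-KPI breakdown."""
    breakdown = {k: kpi_score(k, v) for k, v in readings.items()}
    overall = sum(breakdown.values()) / len(breakdown)
    return overall, breakdown

overall, drivers = experience_indicator({
    "machine_utilisation": 0.88,
    "revisit_rate": 0.10,
    "call_response_hrs": 6.0,
    "call_completion_hrs": 30.0,
})
print(round(overall, 1), drivers)
```

Returning the breakdown together with the headline figure is what keeps the indicator transparent: a team seeing a dip can immediately tell which of the four drivers caused it and drill down from there.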

The customer experience indicator helped focus teams and people on the drivers of customer experience as they moved through a significant change. It gave them one measure that teams could use to highlight potential customer experience issues, together with the ability to drill down to the detailed drivers in order to develop solutions. It enabled the management team to move through the internal changes in their service organisation whilst keeping an eye on the impact of those changes on the customer.

These are both examples of moving away from a rear-view-mirror perspective towards focusing on the outputs that drive future customer loyalty. Inca are using machine performance to manage customer experience proactively for each piece of equipment. Peak-Service are using existing service measures to highlight when their organisation might not be functioning as well as it should.

A third example comes from one of the world’s leading IT hardware and solutions companies. In the early 2000s, this business started to combine operational metrics to predict potential customer dissatisfaction, so that they could get ‘in front of the customer satisfaction scores’ for customer support. They mapped out the key stages in the lifecycle of the service support process, such as the time taken to answer a call, the time to diagnose a problem and the first-time fix rate. From their operational experience they knew when customers would become dissatisfied, and began to develop algorithms to monitor and analyse key measures through the lifecycle. If a combination of measures went outside thresholds, alerts were raised. Thresholds could be set depending on the context of the customer, such as the SLAs in place or the mission-critical nature of the application. At first they required additional analysts to monitor what was a clunky process, but they persevered, because service had become critical to their survival. Over time, with improvements in analytics solutions, the business process became much smoother and they experienced a number of benefits:

  1. Improved customer satisfaction: when the business was later acquired, the acquirer found very high satisfaction rates in comparison to other parts of the business.
  2. Upgraded skills: as potential customer experience issues were identified in real time within the support chain, engineers received immediate support. Tricky problems would often be ‘swarmed’ by experts, helping to raise the technical competence of all the support staff.
  3. Potential for automating the support process: as equipment health data was fed into the process, the potential for automating support started to become a reality. Service requests would automatically be raised, standard actions identified, and the customer informed that action was required and asked when they would like the field visit. Obviously there is a limit to the complexity of problem that can be solved in this way, but the areas where automated support was most prevalent also proved to have the best customer experience scores. When you consider the potential time saving for organisations with literally thousands of pieces of equipment, this is not a major surprise.
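The lifecycle-monitoring mechanism described above can be sketched very simply: each support case carries its stage timings, thresholds depend on the customer’s context (such as SLA tier), and an alert is raised for each measure that drifts outside its limit. The stage names, tiers and limits below are assumptions for illustration, not the company’s actual rules.

```python
# Sketch: context-dependent thresholds on support-lifecycle measures,
# raising alerts when a case breaches its tier's limits. All values
# are illustrative assumptions.

# Per-SLA-tier limits for each lifecycle stage.
THRESHOLDS = {
    "mission_critical": {"answer_mins": 5,  "diagnose_hrs": 2,
                         "min_first_time_fix": 0.90},
    "standard":         {"answer_mins": 30, "diagnose_hrs": 12,
                         "min_first_time_fix": 0.75},
}

def check_case(tier: str, answer_mins: float, diagnose_hrs: float,
               first_time_fix_rate: float) -> list[str]:
    """Return an alert for each measure outside the tier's thresholds."""
    t = THRESHOLDS[tier]
    alerts = []
    if answer_mins > t["answer_mins"]:
        alerts.append(f"slow answer: {answer_mins} min")
    if diagnose_hrs > t["diagnose_hrs"]:
        alerts.append(f"slow diagnosis: {diagnose_hrs} h")
    if first_time_fix_rate < t["min_first_time_fix"]:
        alerts.append(f"low first-time fix rate: {first_time_fix_rate:.0%}")
    return alerts

# A mission-critical customer breaching two of the three thresholds.
alerts = check_case("mission_critical", answer_mins=12,
                    diagnose_hrs=1.5, first_time_fix_rate=0.8)
print(alerts)
```

The same case would raise no alerts under the ‘standard’ tier, which is the point of context-dependent thresholds: the measure is judged against the customer’s expectations, not a single global limit.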

The problem with combining metrics in this way is the danger that the process becomes too complex to understand. In this situation, algorithms and specialists were used to turn knowledge into forward-looking action. As with nearly all global businesses, they deployed a standard set of cross-business KPIs. To manage the complexity challenge, they broke their KPIs down into levels:

Level 1: High-level business measures such as Customer Satisfaction

Level 2: Operational Metrics such as utilisation

Level 3: Detailed Operational metrics

With the advent of sophisticated business intelligence tools, this approach allows managers to drill down to the detail required to solve problems and improve. How they share this data and motivate their staff to take action: these are the management skills that Peter Drucker alluded to.

In trying to summarise this exploration of the impact of metrics on performance, there are perhaps six key messages that we can take away:

  1. Always have your people and users in mind when designing a performance management process, because:
    1. It needs to be at the level that people can action
    2. Not everything is measurable, and some elements of performance need old-fashioned personal feedback
  2. Wherever possible, focus measures on the outcomes experienced by the customer and the factors that influence those outcomes.
  3. Using operational data that already exists in most businesses, it is possible to create leading measures that drive action.
  4. Combining measures to create forward-looking lead indicators of customer experience is possible and can be very effective in producing a simple measure that drives action. However, the drivers of that measure must be transparent to all stakeholders.
  5. The balance between simplicity for clarity and detail for action is an ongoing challenge. Constantly evaluate your performance management process against your customers and business priorities.
  6. With the IoT changing our perspective on data, and with easy-to-use business intelligence solutions readily available, technology is not the barrier to better measures. The real challenge is having the mindset to try different approaches.

All these case studies are examples of companies applying their deep know-how of their equipment and customers to identify problems before they happen.

Forewarned is forearmed!