When IT service desk metrics are chosen incorrectly, the effects are not always immediately apparent, but they do eventually come to light. The wrong service desk metrics can lead organizations to make ill-informed decisions, and the repercussions are often evident in decreased customer satisfaction and an increased, often overwhelming, volume of service requests.
It is critical that the service desk not only know the potential mistakes when choosing metrics but also understand the effects of those decisions. In this article, we discuss some of the most common mistakes organizations make when deciding which service desk metrics to use. We'll look at the pitfalls of these mistakes, as well as solutions to minimize the fallout.
The “following the trends” approach
Metrics are often chosen not for their relevance to business decisions but for their popularity in the industry. This may be due to a lack of direction or to excitement about the possibilities. Your motivations for selecting a limited set of popular metrics may include:
- The availability of benchmarking data
- Convenient design and implementation
- Past experiences of IT service decision makers, in your industry and others
- Recommendations from external vendors, consultants, and thought leaders
While it is certainly appropriate to take advantage of a proven track record, accumulated knowledge, and established best practices, be careful that you're not following of-the-moment trends or adopting "cutting edge" but unproven metrics.
Solution: Design metrics specifically to meet your company's unique requirements, which likely differ from those of the wider industry.
The “metrics as targets” approach
Data from the right set of metrics captures valuable insight that can drive business excellence. Of course, raw data requires thorough analysis before it transforms into useful information and, subsequently, actionable insight.
Unfortunately, many organizations treat metrics merely as targets and take steps to improve business processes only when those targets are missed. This causes at least two immediate business problems:
- The business responds only reactively to the service desk's changing performance. The delay between poor service desk performance, as reported by the metrics, and the organization's response to the underlying issues means lost business.
- The service desk does not follow a strategy of continuous improvement when corrective actions are taken only after metric targets are missed.
Solution: Establish baseline targets for metrics based on the true performance potential of your service desk (not another organization's). That performance may not be consistent enough to be captured in a single target number.
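As a rough illustration, here is a minimal sketch in Python, assuming a hypothetical export of last quarter's per-ticket resolution times, that derives a target band from your own history rather than adopting a single industry number:

```python
import statistics

def baseline_target_band(resolution_hours):
    """Derive a target band from the desk's own history, not a single
    industry number. Returns (median, 90th percentile)."""
    ordered = sorted(resolution_hours)
    p90_index = int(0.9 * (len(ordered) - 1))
    return statistics.median(ordered), ordered[p90_index]

# Hypothetical export: last quarter's per-ticket resolution times, in hours
history = [2.1, 3.4, 1.8, 6.0, 2.7, 4.2, 12.5, 3.1, 2.9, 5.5]
median, p90 = baseline_target_band(history)
print(f"Target band: median {median:.1f}h, 90th percentile {p90:.1f}h")
```

A band (the median plus a high percentile) acknowledges the variability in real performance that a single target number would hide.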
The “service desk over customer” approach
Service desk metrics are often designed to evaluate internal operational efficiency on the assumption that it maps directly to end-user satisfaction, which in turn correlates with business performance. In practice, these metrics don't always evaluate the overall experience offered to the end user; instead, they focus only on how well service desk operations were performed.
For example, the Average Time to Resolve Tickets metric reflects the speed of the service desk's response to end-user issues, but it does not account for recurrent issues that take little time to resolve yet affect a wide user base frequently. Resolving a high number of recurring issues quickly demonstrates only that the IT service desk is fast. Viewed in isolation, the metric misrepresents the service desk: operational performance looks strong even while the customer experience suffers.
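To make that concrete, here is a hedged sketch over a hypothetical ticket log (all names invented): the headline average looks fast even while one quick-to-fix issue recurs and touches many users.

```python
from collections import Counter

# Hypothetical ticket log: (issue_category, resolution_minutes, users_affected)
tickets = [
    ("auth-timeout", 6, 50), ("auth-timeout", 5, 50),
    ("auth-timeout", 7, 50), ("auth-timeout", 4, 50),
    ("laptop-rebuild", 180, 1),
]

# The headline metric looks healthy...
avg_minutes = sum(t[1] for t in tickets) / len(tickets)
print(f"Average time to resolve: {avg_minutes:.0f} min")

# ...but recurrence and reach tell a different story
recurrence = Counter(t[0] for t in tickets)
reach = Counter()
for category, _, users in tickets:
    reach[category] += users
print("Most recurrent issue:", recurrence.most_common(1))
print("Widest user impact:", reach.most_common(1))
```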
This means that you must evaluate a variety of metrics to correlate service desk performance with customer experience quality. Some metrics are, of course, more appropriate than others. Individual surveys asking end users to describe their service desk experience may not be accurate or insightful, because only a small proportion of users respond to such surveys, and those who do are most motivated to share when their experience was very good or very bad. Likewise, evaluating the true customer experience through metrics that describe only the quality of service desk operations may give an inaccurate and misleading picture of customer sentiment.
Solution: Implement customer satisfaction metrics and evaluate them alongside your service desk metrics.
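One lightweight way to test that pairing, sketched below with hypothetical paired per-ticket data (Python 3.10+ for statistics.correlation), is to check whether operational speed actually tracks satisfaction at all:

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical paired samples: per-ticket resolution hours and CSAT score (1-5)
resolution_hours = [1.0, 2.5, 0.5, 8.0, 3.0, 0.8, 6.5, 1.2]
csat_scores = [5, 4, 3, 2, 4, 5, 1, 4]

# If speed and satisfaction barely correlate, speed metrics alone
# cannot stand in for the customer experience.
r = statistics.correlation(resolution_hours, csat_scores)
print(f"Correlation between resolution time and CSAT: {r:.2f}")
```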
The “too much tech” approach
Metric log data typically describes the physical, logical, or operational state of a technology node. That state can be mapped to a business process, which in turn represents business performance. (This desirable state is known as IT-business alignment.) The disconnect between IT and the business becomes apparent when metrics are evaluated solely from a technology or operational standpoint.
For example, the IT Services with Most Incidents metric describes the performance of the technology underlying a specific IT service. A high number of incidents relative to other IT services does not necessarily mean that service is the worst performing or deserves the most attention from a business perspective; the organization may not be immediately interested in reducing that service's incident frequency. Incident counts must instead be weighed against business importance, success factors, and business impact.
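A minimal sketch of that weighing, using hypothetical incident counts and business-assigned criticality weights (both invented for illustration):

```python
# Hypothetical incident counts and business-assigned criticality (0..1)
incidents = {"email": 120, "payroll": 15, "wiki": 300}
criticality = {"email": 0.8, "payroll": 1.0, "wiki": 0.2}

# Weight raw counts by how much each service matters to the business
weighted = {svc: n * criticality[svc] for svc, n in incidents.items()}
for svc, score in sorted(weighted.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{svc}: raw incidents={incidents[svc]}, business-weighted={score:.0f}")
```

Here the service with the most raw incidents drops in priority once business criticality is applied.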
Solution: Identify these hierarchical differences before evaluating the metrics, or the entities covered by the metric, in isolation.
The “aggressive benchmarking” approach
Metric information must itself be evaluated against a benchmark reference before it influences a decision. Comparing your stats against a valid benchmark can provide an intuitive description of service desk performance. But how do you ensure that the benchmarking data is both accurate and valid for the metric under consideration?
Many organizations rely on industry benchmarks or past performance as the reference for evaluating current metric performance. The issue is that your metric may not be designed, captured, or applied to decision-making in the same way. Given the changing technology and business landscape, operational practices, and end-user expectations, past benchmarking information may serve only as an inaccurate target reference.
Solution: Collect the benchmark data that most closely resembles the operating environment of your IT service desk. Incorporate all metrics and decision factors that have been applied to the available benchmarking information.
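As a sketch of that validity check, assuming hypothetical benchmark records (the metric name and fields below are invented), compare numbers only when the definition and operating context match:

```python
from dataclasses import dataclass

@dataclass
class Benchmark:
    metric: str
    value: float
    definition: str   # how the number was computed
    context: str      # e.g. industry, desk size, tooling

def comparable(ours: Benchmark, ref: Benchmark) -> bool:
    """Compare only benchmarks captured the same way in a similar environment."""
    return (ours.metric == ref.metric
            and ours.definition == ref.definition
            and ours.context == ref.context)

ours = Benchmark("first_contact_resolution", 0.71,
                 "resolved on first touch / all tickets", "retail, 20-agent desk")
ref = Benchmark("first_contact_resolution", 0.80,
                "resolved within 1 day / all tickets", "finance, 200-agent desk")

if comparable(ours, ref):
    print(f"Gap to benchmark: {ref.value - ours.value:+.2f}")
else:
    print("Benchmark not comparable: different definition or environment")
```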
Choosing the right metrics
So how can you make well-informed choices for your IT service desk metrics? How do you choose IT metrics that matter? The best metrics aim to capture end-user sentiment and accurately represent the state of IT service desk operations.
For more information on choosing the best and most informative metrics, check out these BMC Blogs: