Chris Conway
Chief Architect, Quantiv
The proliferation of artificial intelligence (AI) tools means it’s now easier than ever to do things like get answers to specific questions, create outlines of work to be done or generate custom images to illustrate a concept or idea.
And the fact that these services are often ‘free’ (at least initially) means there’s no immediate reason not to use them.
But in an age of growing recognition of the importance of sustainability in business operations, understanding what’s involved in delivering these services also matters. In particular, just because something can be done (apparently cheaply or easily) doesn’t mean it should be, or that there’s no hidden cost to doing so.
The brute force of AI
Many services seem to take something of a Victorian, brute-force approach to their implementation, such as:
- AI’s habit of working out answers from first principles instead of having a pre-prepared answer (clever, but not necessarily efficient)
- The use of encryption for data that isn’t sensitive
- The use of untargeted bulk emails for marketing
Sustainability relies not just on using what can be replaced but also on using no more than is necessary.
Interestingly, AI services themselves can be guarded when it comes to admitting these costs. For example, my question to Microsoft Copilot, “How much energy have you used since 1st Jan 2025?”, elicited the answer:
“I don’t have the ability to track my own energy usage directly. However, I can tell you that the energy consumption for running AI models like me depends on the servers and infrastructure used by the data centers. These data centers are designed to be energy-efficient, but they still consume a significant amount of electricity overall.”
If monitoring resource consumption is a struggle for AI organisations, what hope is there for others?
The importance of good information in data-driven decision-making
In the case of AI, and indeed other services, the problem isn’t a lack of data highlighting resource usage. If anything, it’s almost the opposite: there’s so much data that it’s difficult to decide which of it to use.
This highlights one of the fundamental principles of operational metrics design: not all data is created equal. While bigger (more) data might seem like an obviously good thing, it doesn’t necessarily lead to better information. If the data doesn’t relate to the concept being investigated, it really doesn’t matter how much of it there is. Conversely, if information is well targeted, even a small amount can be useful in supporting good, data-driven decision-making (DDDM).
For optimising operations, good information is essential to highlight inefficiencies and suggest optimisations.
Why you need to understand your organisation’s process metrics
A good rule of thumb for identifying your good information is that it usually relates to ‘process’ metrics (e.g. number of orders) rather than ‘structural’ ones (e.g. number of customers). This isn’t to say structural metrics aren’t important (they are), just that process ones are often more useful in this context.
A first-pass analysis of your organisation’s processes will usually identify high-level ‘inter-activity’ metrics:
- These will relate to the handover points between different activities within processes.
- They provide indications of the results of processing, which are often expressed in terms of volumes or values.
Metrics at this level can be useful in showing the efficiency of the process by highlighting how volumes change from step to step. For example:
- A metric here could cover the volume/value of orders processed, or the volume of data encrypted/decrypted. If the volumes/values processed at each stage stay the same or grow, this might indicate redundant processing (see the sketch below).
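To make this concrete, here is a minimal sketch in Python – not Quantiv’s implementation, and with purely invented stage names and volumes – of how volumes at successive handover points might be compared to flag potentially redundant processing:

```python
from typing import List, Tuple

def flag_redundant_stages(stage_volumes: List[Tuple[str, int]]) -> List[str]:
    """Flag handovers where a stage's volume equals or exceeds its predecessor's.

    If a stage receives the same (or a larger) volume than the one before it,
    the earlier stage may not be filtering or consolidating anything useful.
    """
    flagged = []
    for (prev_name, prev_vol), (name, vol) in zip(stage_volumes, stage_volumes[1:]):
        if vol >= prev_vol:
            flagged.append(f"{prev_name} -> {name}: {prev_vol} -> {vol}")
    return flagged

# Illustrative handover volumes for one day of order processing.
volumes = [
    ("orders_received", 1000),
    ("orders_validated", 1000),   # nothing rejected: is validation doing anything?
    ("orders_encrypted", 1200),   # more items encrypted than received: duplication?
    ("orders_fulfilled", 950),
]

for warning in flag_redundant_stages(volumes):
    print("Possible redundant processing:", warning)
```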
Subsequent, more detailed analyses will then identify ‘intra-activity’ metrics:
- These relate to the processing steps within individual activities themselves.
- They provide indications of the methods of processing, again often expressed in terms of volumes, though at this level they can also indicate the type of processing being performed.
Metrics at this level can be useful in showing the efficiency of the activities. For instance, do you need to authenticate (log on) at each stage, or can the first authentication carry through to the following stages?

Identifying metrics in this way helps provide definitions, both of what should be expected and of where data needs to be collected. By comparing the expected values with those actually collected, inefficiencies in the usage of resources – such as energy, materials and time – can be identified, as sketched below.
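As an equally illustrative sketch – the event names and the expectation of a single logon per session are assumptions, not a prescribed model – the same comparison can be applied to an intra-activity metric such as authentications per session:

```python
from collections import Counter

# Assumption for illustration: one logon should carry through a whole session.
EXPECTED_AUTHS_PER_SESSION = 1

# Invented event log of (session_id, event_type) pairs.
events = [
    ("s1", "auth"), ("s1", "order"), ("s1", "auth"), ("s1", "dispatch"),
    ("s2", "auth"), ("s2", "order"), ("s2", "dispatch"),
]

# Count the authentication events observed in each session.
auth_counts = Counter(session for session, event in events if event == "auth")

# Compare observed counts against the expected definition.
for session, count in auth_counts.items():
    if count > EXPECTED_AUTHS_PER_SESSION:
        excess = count - EXPECTED_AUTHS_PER_SESSION
        print(f"Session {session}: {excess} redundant authentication(s)")
```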
The redundancies in these examples may seem small on an individual basis. But aggregated across your organisation, and especially across all the organisations using the services, they can be significant. Eliminating them can therefore increase your organisation’s overall sustainability – not to mention profitability.
What’s more, being able to monitor resource usage in this way will enable you to anticipate and plan for future operations. That means you can dynamically adjust your resource use and avoid unnecessary consumption.
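One hedged sketch of what that anticipation might look like: a simple moving average over recent usage (all figures below are invented) as a baseline against which provisioned capacity can be scaled:

```python
# Invented sample data: daily energy usage in kWh for the past week.
daily_usage_kwh = [120, 130, 118, 125, 140, 122, 128]

def moving_average(values, window):
    """Average of the most recent `window` values."""
    recent = values[-window:]
    return sum(recent) / len(recent)

forecast = moving_average(daily_usage_kwh, window=3)
provisioned_kwh = 200          # invented fixed provision
headroom_factor = 1.25         # 25% headroom, chosen arbitrarily for illustration

if provisioned_kwh > forecast * headroom_factor:
    print(f"Forecast ~{forecast:.0f} kWh/day; provision of {provisioned_kwh} kWh "
          f"looks excessive - consider scaling down.")
```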
How NumberWorks and NumberCloud can help
Quantiv’s NumberWorks method is specifically designed to identify your key operational process metrics. Likewise, our NumberCloud service supports the collection of the metrics identified.
And because these metrics don’t require analysis of vast quantities of data, they can be made available in near real-time, so adjustments can be made quickly and with confidence.
In short, they support sustainability by providing the metrics that matter to your organisation.
Find out more about sustainability in operational metrics design
Talk to our team on 0161 927 4000 or email: info@quantiv.com