Chief Architect, Quantiv
Today, your organisation – whatever its size – is likely to use multiple applications and computer systems. The days of a single, monolithic application being able to support all your required functionality are passing, if not already gone.
Specialist, sometimes micro, applications and services now provide sophisticated functionality aimed at your distinct operations, and these can be used as and when you need them, rather than being permanently installed. And with each of these having its own remit, the volume and variety of data produced can be vast.
But this ‘big’ data can lead to a paradox: more data doesn’t necessarily mean more information.
A needle in a haystack
At first sight, such an abundance of data may seem like a blessing, but it can also be a curse, prompting thoughts of needles and haystacks, and wheat and chaff.
What’s more, the tools used to process data can also affect the quality of the information produced. After all, not all tools are suitable for all purposes.
For example, chisels and screwdrivers may look similar, and while you could use a chisel to turn screws, it makes far more sense to use it for carving. A screwdriver, meanwhile, is designed for turning screws, and it would be imprecise and time-consuming to use it for sculpting.
Data processing is no different: tools and techniques are most effective if used in the situations for which they were originally intended.
Big data vs good data
Business intelligence can help identify unexpected patterns in large, diverse volumes of data. For this purpose, ‘big’ data is needed: high in volume and fine in granularity. But conversely, while the data may need to be big, it doesn’t need to be perfect, i.e. entirely complete or particularly timely. A few gaps won’t hide a significant pattern, and if patterns take days, weeks or even months to emerge, they’re still valuable.
Over time, such data analysis will highlight possible connections between different aspects of your organisation’s operations, showing a need to expose information that hasn’t been used before – in effect, making known what you need to know.
But when you already know what you need to know – perhaps because of an earlier data analysis exercise – big data can be unhelpful. Instead, accurate, comparable aggregate information, i.e. ‘good data’, is essential.
Big data can be processed to create such summaries. But if the data on which the summaries are based is incomplete or delayed, the value of the summaries can be reduced considerably.
Worse, producing such summary information in this way often duplicates or corrupts work already performed by your source applications. If such information is useful for your business operations, it should already be available to you directly from the corresponding operational systems.
Business intelligence or operational metrics?
To create good operational information, your data needs to be described in a standard format – classified, qualified and quantified – and then stored and managed in that format. You can achieve this by using an operational metrics service, which is more effective for this purpose than a business intelligence application.
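To make the idea of a ‘classified, qualified and quantified’ description concrete, here is a minimal sketch of what such a metric record might look like. The field names and structure are purely illustrative assumptions for this article, not Quantiv’s actual NumberWorks schema: the classification says what kind of activity the number describes, the qualifications (period, unit) give it the context that makes it comparable, and the quantification is the measured value itself.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Metric:
    classification: str   # what the number describes, e.g. "orders.fulfilled"
    period: date          # qualification: when the value applies
    unit: str             # qualification: what the number counts
    value: float          # quantification: the measurement itself

def total(metrics, classification, unit):
    """Aggregate only records that are directly comparable:
    same classification and same unit."""
    return sum(m.value for m in metrics
               if m.classification == classification and m.unit == unit)

records = [
    Metric("orders.fulfilled", date(2024, 1, 31), "orders", 120),
    Metric("orders.fulfilled", date(2024, 2, 29), "orders", 135),
    Metric("orders.returned",  date(2024, 1, 31), "orders", 8),
]

print(total(records, "orders.fulfilled", "orders"))  # → 255.0
```

Because every record carries its classification and qualifications explicitly, aggregation can refuse to mix incomparable numbers, which is exactly the ‘good data’ property that ad-hoc summaries of raw big data tend to lose.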
Like chisels and screwdrivers, business intelligence and operational metrics both have their place. But, like carving and securing, it makes much more sense to use these tools for the purposes for which they were intended.
In other words, when you don’t know what you need to know, you need business intelligence.
But when you already know what you need to know, you need operational metrics.
Turn your data into useful information
At Quantiv, we specialise in operational metrics. Our NumberWorks method and NumberCloud platform will help you organise and manage your organisation’s application data and turn it into meaningful information.
To find out more about how our services can help your organisation, call us on 0161 927 4000 or email: email@example.com