Chris Conway
Chief Architect, Quantiv
This year sees the silver anniversary of the 21st century. And in terms of information technology, a lot has changed in those first 25 years, like:
- The maturing and expansion of the World Wide Web
- Mobile telephony becoming universally available
- The explosion of social media
- The transformation of retail, broadcasting and banking – to name a few
- The growth and evolution of handheld gaming
So, what about IT predictions for the next quarter century?
As my list shows, a lot can happen in 25 years, and the next quarter of a century probably won’t be any different. Moreover, it’s likely we haven’t seen even a glimmer of most of it yet. But at this stage, it seems as if the single biggest theme – perhaps echoing the development of the World Wide Web – is artificial intelligence (AI).
The term covers a multitude of sins. There’s plenty of marketing hype: ‘AI-powered’ is plastered on everything from cameras to white goods. More generously, though, that vagueness could also be a sign of the technology’s newness: no-one really knows yet what it can, or can’t, do.
So, although AI’s come a long way, it still feels like it has a lot further to go before it heralds a new industrial revolution.
But how much further will artificial intelligence go?
Famously, Alan Turing (arguably the ‘father of modern computing’) set a test for AI based on whether a human user could tell if they were interacting with another human or an artificial replacement.
Inspired by that approach, our team at Quantiv has carried out assessments showing that a variation on the ‘Turing Test’ is useful in helping non-technical people understand AI.
Quantiv’s version of the ‘Turing Test’
In our expanded test, rather than asking a user simply to differentiate between human and artificial content, we make it known that the content is artificial and then ask for estimates of the number of people, the speed of work and the level of expertise that would be needed to imitate it.
This estimate has three parts:
- How many humans would be needed to replicate the AI content?
- How quickly would they need to work?
- What level of expertise would they need?
Another way to look at those three parts is in terms of human equivalents (see the sketch after this list), i.e.
- How many: a single person, small team, whole army?
- Time: walking, running, Usain Bolt-speed?
- Expertise: primary school, degree, doctorate?
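To make the idea more concrete, the three estimates can be captured as a small structured record. The sketch below is a hypothetical Python illustration only: the class name, field names and example values are assumptions made for this article, not part of the test itself.

```python
# A minimal sketch of recording the three 'human equivalent' estimates
# for a piece of AI-generated content. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class HumanEquivalent:
    how_many: str   # e.g. "single person", "small team", "whole army"
    time: str       # e.g. "walking", "running", "Usain Bolt-speed"
    expertise: str  # e.g. "primary school", "degree", "doctorate"

# Example assessment of an AI-written summary of an instruction manual
estimate = HumanEquivalent(
    how_many="small team",
    time="running",
    expertise="GCSE",
)
print(estimate)
```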
What do our AI assessments reveal?
From these assessments, we’ve discovered that once users see AI-generated content from these different perspectives (e.g. as doing lots of the same thing, very quickly, but not necessarily with great expertise), it becomes much easier for them to judge how much weight that content deserves.
For example, if AI content is seen as something a group of GCSE students might produce by reading and summarising an instruction manual, users begin to understand the importance of:
- The source of reference data
- The capability required of the students to comprehend and summarise the concepts
- The amount of time and effort the students might need to do this
There are clearly more specific parameters that could be added to qualify this approach further, such as energy usage, legal permissions or funding. But we’ve found these three questions are enough to give users a much better understanding of what an AI can and can’t do.
However, you need metrics to be able to quantify (i.e. put values to) these different qualities (illustrated in the sketch after this list):
- Some can be simple numbers, such as pages of content produced per unit of time, to show the capacity available to the AI
- Some have formal units, for example power or currency, to show the resources used
- And some fit more discretely into pre-defined categories, such as the levels or grades of expertise required
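As a rough illustration of those three kinds of metric, the hypothetical Python sketch below records a simple ratio, a value with a formal unit, and a value drawn from a fixed set of categories. The names, units and figures are all assumptions made for the example.

```python
# A hedged sketch of the three kinds of metric described above.
from dataclasses import dataclass
from enum import Enum

class ExpertiseLevel(Enum):  # discrete, pre-defined categories
    PRIMARY = "primary school"
    DEGREE = "degree"
    DOCTORATE = "doctorate"

@dataclass
class Metric:
    name: str
    value: float | str
    unit: str | None = None  # formal unit, if any

metrics = [
    Metric("capacity", 120 / 8, "pages per hour"),               # simple number: pages per time
    Metric("resources used", 0.4, "kWh"),                        # formal unit: power
    Metric("expertise required", ExpertiseLevel.DEGREE.value),   # pre-defined category
]

for m in metrics:
    print(f"{m.name}: {m.value} {m.unit or ''}".strip())
```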
In turn, those metrics rely on good information, both about what’s being done by the AI on behalf of the user and about what’s expected of it. This ‘good’ information isn’t vast quantities of anything and everything, but targeted data with context – i.e. values with units of measure, dates, times and references. For example (see the sketch after this list):
- The sizes of the pages in terms of words/paragraphs/complexity
- The sources of power or types of processing performed
- The subjects or even curriculum items associated with particular grades
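One way to picture ‘targeted data with context’ is as a value that always travels with its unit of measure, timestamp and source reference. The short Python sketch below is an illustrative assumption of that idea, not a description of any particular system.

```python
# A hypothetical example of a value carrying its own context:
# a unit of measure, a timestamp and a reference to its source.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ContextualValue:
    value: float
    unit: str
    recorded_at: datetime
    reference: str  # e.g. the document or curriculum item it relates to

page_size = ContextualValue(
    value=450,
    unit="words per page",
    recorded_at=datetime(2025, 1, 15, 9, 30),
    reference="instruction manual, section 3",
)
print(page_size)
```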
How good data can help
It’s not entirely coincidental that there’s a parallel here with the training of AI systems. Even there, although a lot of data is needed, training is much easier if the system is fed mostly high-quality information in the first place, rather than all the unnecessary material too.
To support that, the ways in which data is classified (sorted into groups), qualified (related to other things) and quantified (measured) need to be established.
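As a loose illustration of those three steps (and not a sketch of NumberWorks itself), the hypothetical Python snippet below classifies some sample items into groups, qualifies them by relating them to their source, and quantifies them with explicit units. Every name and rule in it is an assumption made for the example.

```python
# Hypothetical data-preparation steps: classify (sort into groups),
# qualify (relate to other things) and quantify (measure).

raw_items = [
    {"text": "Summary of safety procedures", "words": 180, "source": "manual v2"},
    {"text": "Summary of installation steps", "words": 320, "source": "manual v2"},
]

def classify(item):
    # sort into groups: here, simply by length
    item["group"] = "short" if item["words"] < 250 else "long"
    return item

def qualify(item):
    # relate to other things: here, record the reference it was derived from
    item["derived_from"] = item["source"]
    return item

def quantify(item):
    # measure: here, express size as a value with an explicit unit
    item["size"] = {"value": item["words"], "unit": "words"}
    return item

prepared = [quantify(qualify(classify(item))) for item in raw_items]
print(prepared)
```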
Those steps still need a well-defined method, and that’s where Quantiv’s NumberWorks comes in.
With this context available, data can be used both to support the creation of good artificial intelligence and to help explain the qualities of that intelligence to human users.
So, while AI is likely to feature in any article about IT predictions for the next 25 years, it’s just as important to discuss methods that help people understand that intelligence.
Learn more about NumberWorks
To discover how our approach could benefit your organisation, contact our team on 0161 927 4000 or email: info@quantiv.com