Chris Conway
Chief Architect, Quantiv
Manual jobs have had a tough time in the last 50 years or so, especially in traditional manufacturing industries and locations. Robotics, automation and globalisation have relocated – and massively reduced – the number of roles available, significantly impacting local communities.
But if some predictions are to be believed, those changes will be nothing in comparison with the effects of the use of artificial intelligence (AI).
However, while the hype around AI might support those predictions, it’s harder to form an objective judgement about the likely results.
One way to do this might be to consider AI in the same way as any other job applicant. After all, it’s effectively ‘interviewing’ for a position. So, it seems fair to ask the same questions that might be asked of a ‘natural intelligence’ candidate:
- What do they know?
- What skills do they have?
- What have they achieved?
Here, I’ll look at each of those questions in turn.
1. What do they know?
Popular chatbots, such as ChatGPT and Gemini, are mostly considered ‘generative AI’. If they’re given a question or prompt, they generate an answer. The answer they give is based on the data on which they’ve been trained. Think of them as having swallowed the encyclopaedia and then able to produce on-demand responses based on what they’ve read.
This might seem like magic, but at its heart, it’s not actually that different from the way a search engine already works:
- Analyse the input sentence looking for keywords and patterns
- Transform those patterns into forms compatible with those used to analyse the source data
- Compare those transformed keywords and patterns to the patterns created when the source data was read
- Create a response based on ‘reverse transforming’ the data patterns that match best
In effect, it’s a lot of identifying and comparing, both for the question and the possible answers.
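The match-and-respond loop above can be sketched in a few lines of Python. This is a toy illustration only, assuming a simple bag-of-words comparison; real systems use far richer transformations (embeddings, attention), but the overall shape – transform the question, compare it to transformed sources, answer from the best match – is similar. All names and the sample sources here are hypothetical.

```python
def tokenize(text: str) -> set[str]:
    """The 'transform' step: lower-case the text and split it into keyword tokens."""
    return {word.strip(".,?!").lower() for word in text.split()}

def best_answer(question: str, sources: list[str]) -> str:
    """The 'compare' step: score each source by keyword overlap, answer from the best."""
    query = tokenize(question)
    scored = [(len(query & tokenize(source)), source) for source in sources]
    score, answer = max(scored)
    return answer if score > 0 else "No match found"

sources = [
    "The capital of France is Paris.",
    "Python is a programming language.",
]
print(best_answer("What is the capital of France?", sources))
# Matches the first source on the keywords 'capital', 'of', 'france', etc.
```

Notice that nothing in the loop understands the question; it only counts matching tokens – which is exactly why confident-sounding mismatches can slip through.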
Both generative AI and search engines use long-standing techniques, including transformations and dictionary/thesaurus lookups. And while the natural language processing is indeed new (bigger, more complex dictionaries and thesauruses), even that uses language analysis techniques (parsing, lexical analysis) that those involved in the early days of IT would recognise immediately.
But are the AI answers any good?
If decisions are based only on matching something in the question with something in the source – rather than truly understanding the meaning of the question and source – then it’s all too easy for misunderstandings to happen. This is made worse if there’s no possibility of a feedback loop to help identify when those confusions occur.
Human intelligence, on the other hand, understands meaning almost instinctively and has a natural tendency not to repeat techniques that have failed. And while artificial intelligence can be quick and comprehensive, it can make mistakes and be slow to react. In fact, it’s not unusual for AI to present incorrect statements and data as fact, which makes you wonder how much you can trust any of its answers.
So here, humans still have their value.
2. What skills do they have?
The other main variety of AI, ‘agentic’, doesn’t just provide answers but instead supplies a virtual ‘agent’ to help perform tasks.
This can also seem clever, but here too the ability to handle natural language instructions hides the use of existing technologies.
Automation and repetition are both long-standing IT techniques. Automation is based on:
- Providing a series of steps to take
- Identifying the expected outcomes of each step and the ways to handle them
- Providing reports to show the processing performed
Repetition just repeats this process based on a pre-defined schedule.
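The three-part automation pattern above – steps, expected outcomes, reports – is the kind of thing IT has scripted for decades. A minimal sketch, with hypothetical step names and checks (repetition would simply run the same function again on a schedule):

```python
def run_pipeline(steps):
    """Execute (name, action, expected) steps, checking each outcome
    against its expectation and collecting a report of what was done."""
    report = []
    for name, action, expected in steps:
        result = action()
        status = "ok" if result == expected else f"unexpected: {result!r}"
        report.append((name, status))
        if status != "ok":
            break  # handle an unexpected outcome by stopping and reporting
    return report

steps = [
    ("fetch", lambda: "data", "data"),      # step 1 and its expected outcome
    ("validate", lambda: True, True),       # step 2 and its expected outcome
]
for name, status in run_pipeline(steps):
    print(f"{name}: {status}")
```

An agentic layer mostly changes how the `steps` list gets written – from natural language rather than by hand – not what the loop itself does.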
Agentic AI may make those processes more approachable by using the same techniques as generative AI, especially being able to define the steps, outcomes and schedules in natural language. But it doesn’t fundamentally result in something new being done.
And, again, how clever and different is the agent, really?
Most agentic AI doesn’t work out what to do on its own. Instead, it helps provide evidence of what’s been done before and the results that followed. So, in this way, it can help make suggestions of what could be done. But it won’t – thankfully – unilaterally decide to do something just to find out whether it works.
So, if AI can’t act independently, perhaps human input is still required for that, too.
3. What have they achieved?
Both the main forms of AI have had a relatively short period of ‘work experience’ – three or four years at most (even if the techniques they use go back further).
In that time, they’ve been used in many different situations. Some of those uses have enabled processing that would have been considered too time-consuming or expensive to do with existing approaches. Equally, some have supported identification of variants that would never have been considered had they not been suggested by a ‘naive reader’.
Both have considerable value.
But they’ve also been used in ways that have less value and could even be considered counterproductive. Aside from any mistakes, generative AI’s ability to transform a description from one form to another means it’s capable of creating vast quantities of content that is at best just a duplicate of what already exists, and at worst a pale imitation of the original work.
And agentic AI makes it possible to perform tasks repeatedly that really didn’t need doing even once.
That duplication and redundancy come at a cost. The quantities of source data to be consumed, the wide variety of possible answers that must be filtered and the range of tasks that must be executed all mean that significant quantities of resources are wasted.
Humans are ultimately to blame for deciding to use those resources for AI. But when performing tasks more directly, humans are much more conscious of when resources are being used wastefully and so operate more selectively.
Conclusion: will AI take my job?
Artificial intelligence does indeed introduce some new techniques, in particular natural language processing (both understanding and explaining). And these undoubtedly add new capabilities and make life easier.
However, AI is also extremely adept at hiding the use of older technologies:
- Machine learning
- Automation
- Repetition
- Search
- Data transformation
- Compilation
And even:
- Spelling/grammar correction
- Auto-completion (words, searches, sentences)
- Code completion/templating/generation
And these technologies have existed for many years. So, while AI might make them more approachable, it doesn’t introduce anything radically new. That suggests that if jobs were susceptible to those technologies, they should already have disappeared.
Overall, while artificial intelligence might mean there aren’t quite as many jobs as there were before in certain sectors or disciplines, it’s unlikely to result in the wholesale disappearance of entire areas of employment.
There’s one other inescapable conclusion from AI’s use of those techniques: to work effectively, they all need good information. And ‘good’ information doesn’t just mean lots of data. It means data that is appropriate and consistently classified and quantified. Quantiv’s NumberWorks method provides a way to identify those definitions, while our NumberCloud platform allows for collection based on those principles.
But those definitions and collections don’t just happen – they require intervention. Moreover, as those definitions are inputs to, rather than outputs from, the techniques used by AI, it’s unlikely AI is going to be able to help, as it would be feeding itself.
So, even if artificial intelligence does result in fewer jobs in some areas, it’s almost certain that more will be created in the areas feeding the AI.
In effect, if AI does take your job, it’s also likely to provide a replacement role.
To find out more, contact the Quantiv team on 0161 927 4000 or email: info@quantiv.com