The AI debate has been sped up by the emergence of ChatGPT. And I’d been a little muddled about it, until my friend and colleague Nick Silver took my name in vain at the Radix Friday Forum last week – then I suddenly remembered.
I have written two books (notably Tickbox) which, among other things, try to explain what AI is – because the word ‘intelligence’ in its name is an analogy, not an accurate description. It is poetic licence, not the real thing.
It isn’t the mysterious mixture of skills that makes up human intelligence, though its stock of facts may be more impressive and its mathematical speed more brilliant. As Joe Zammit-Lucia implied in the same forum – and as Bryan Appleyard put it some years ago – we have for years been watching people talk down human intelligence, so that the virtual version would look clever in comparison.
As Barry James said in a parallel article, you could fill a book with a history of human intelligence – and there is in fact an excellent one by Stephen Jay Gould, The Mismeasure of Man, about its measurement and the history of that measurement. Barry says that ‘AI’ ought really to be replaced by the term KP, or Knowledge Processor.
The truth is that there are many different kinds of human intelligence, maybe an infinite number. When I was 16, I was found to have excellent abstract intelligence but dismal mechanical intelligence.
Still, I have managed to get by.
The final lines of the film It’s a Wonderful Life see the characters giving a toast to ‘George Bailey, the richest man in town!’ Now, we know that ‘rich’ in this case doesn’t refer to money. But there may come a time when the term is recognised as referring only to data about money.
If we stop using this informal knowledge completely, we start to forget it is there, along with all the poetic ambiguity that goes with it. At that point, It’s a Wonderful Life becomes completely incoherent.
Or imagine we selectively breed human beings who are skilled at maths, because that is how we measure intelligence. The meaning of the word ‘intelligence’ would then have shrunk to the boundaries of the data, and reality would follow suit. Or imagine that only the emotions denoted by Facebook emoticons counted as real ones. How quickly would we forget about ambiguity or contradictory feelings once the language had shrunk?
For me, this particular problem is by far the most important. If we allow this peculiarity to change the way we understand words, it will represent the final victory of tickbox over life – the reduction of the breadth of the human mind to the limitations of data.
It will be a victory for those who believe we can diminish human beings to blips on silicon – not because our skills at artificial intelligence have increased, but because we have started to regard human emotions as increasingly limited.
“On the one hand, they seduce us, we want them to contain, include and involve us,” wrote Bryan Appleyard about smart machines in his book The Brain is Wider than the Sky. “On the other hand, they demand that we become more ‘machine readable’. We pay for inclusion and involvement by becoming more like machines.”
So let’s not forget that, however complex they have become, chatbots are not intelligent in a human way.
They are not conscious – they simply run on silicon, programmed by clever people to ‘know’ what to say in virtually any circumstance.