“The computer is not a smart machine that helps stupid people; on the contrary, it is a stupid machine that only works in the hands of smart people.” It was 1986 when Umberto Eco wrote these words, and much water has passed under the bridge since then. Who, back then, could even have imagined the developments the digital revolution would bring? Today we carry computers in our pockets, and more often in our hands. Sixty-five per cent of the world’s inhabitants own a smartphone, a tool whose computing power puts the computers of the 1980s to shame. The speed and volume of data exchanged are impressive and constantly growing (it has been calculated that more data was produced in 2012 alone than in the previous five thousand years). Machine learning and deep learning systems have enabled AlphaGo, Google DeepMind’s artificial intelligence, to beat the world’s best players of Go, the ancient Chinese game considered more complex than chess.
The question is therefore as topical as ever: will Artificial Intelligence (AI) be able to replicate, and perhaps one day surpass, human intelligence? Opinions among experts diverge widely: some say never; some, like Rodney Brooks, director until 2007 of MIT’s Computer Science and Artificial Intelligence Laboratory, think it will take hundreds of years; others, like Demis Hassabis of Google DeepMind, are convinced that the goal can be reached within a few decades. The results achieved by artificial intelligence, extraordinary as they are, do not qualify as ‘AGI’ (Artificial General Intelligence), an AI capable of fully replicating human intelligence. They belong instead to what is called weak or narrow AI, which aims to build systems that successfully perform a specific complex task, such as text translation or image recognition.
However, the progress of AI is such that it has alarmed many people about the future. A certain stir was caused, for example, by the appeal against the dangers of uncontrolled AI development signed, shortly before his death, by a figure of the calibre of Stephen Hawking. But, without looking that far ahead, it is the subject of work that generates the most serious concerns. Indeed, it is difficult to talk about ‘new professions’ without evoking an increasingly widespread fear of a hostile and uncertain future in which ever more intelligent machines will steal jobs from humans. “The Future Without Jobs” is the title of a book published in Italy in 2017 (the Italian edition of Rise of the Robots) that deals with exactly this. The author, Martin Ford, a Silicon Valley entrepreneur who has worked in the field of AI for 25 years and has a reputation as a futurologist, asks a serious question: is this time different? Data in hand, he shows that, unlike in past industrial revolutions, when the jobs that were cancelled were replaced by new ones, this is no longer the case. Ford argues that one only has to read the numbers, and the available figures show that the jobs being created are fewer than those taken over by machines. He adds that the impact of technology is also to blame for phenomena such as stagnating wages, long-term unemployment, underemployment of new graduates, and a sharp rise in inequality.
But even on the question of labour, the experts’ forecasts differ widely. Consider, by way of example, two different studies. The first, by two Oxford academics, Carl Benedikt Frey and Michael A. Osborne, calculated that over the next two decades 47% of jobs in the United States could be wiped out by robots and intelligent machines (in one of his last speeches, US President Barack Obama stated that the day autonomous driving systems become operational, four million American truck drivers will lose their jobs). The second is a recent OECD survey by Melanie Arntz, Terry Gregory and Ulrich Zierahn, which estimates that just 9 per cent of jobs in the most industrialised countries are at risk.
What is certain is that we are facing an epochal change. Proof of this is that, contrary to what many people think, the impact of intelligent machines does not only concern the production sector: it is also expanding into services, where the number of robots and intelligent software systems interfacing with users is already double that of industry. It is a revolution reaching areas of the world of work, unimaginable until recently, that revolve around relational activities: lawyers, journalists, military personnel, nurses, doctors, babysitters, waiters, and so on. No profession, in short, seems entirely sheltered any more.
The fear of machines taking over human work is an old story, going back to the origins of the industrial revolution and the Luddites who destroyed the mechanical looms. But today’s fear is no longer directed at the machine as a possible substitute for ‘labour power’: today the machine challenges human intelligence, our noblest trait. It is for this reason that we must be able to define and describe precisely what distinguishes human intelligence and in which areas it cannot be replaced.
It may seem straightforward to say what distinguishes human intelligence, and therefore what its true and different value is, but when one begins to investigate, one discovers that the matter, in light of the findings in the field of AI, is decidedly complex. The American philosopher John Searle, known for his studies in the philosophy of mind, argues that a machine may be able to ‘simulate’ intelligent behaviour, but this does not make it truly intelligent. Thinking and simulating are two completely different activities. The machine merely applies instructions or rules, however complex, without understanding anything of what it is doing. It has only ‘syntactic’ competence in combining symbols; it does not possess the ‘semantic’ competence that is indispensable for attributing meaning to the symbols it is operating on. Thinking, according to Searle, is a conscious experience lived by a subject, and therefore an activity irreducible to any form that is not tied to the conscious experience of a human being.
In contrast to John Searle, there are researchers in the field of deep learning who, as we saw at the beginning, are convinced that the creation of a conscious machine, built from artificial neurons, is an attainable goal. From this point of view, those who are sceptical about the intelligence of machines will have felt more than a shiver in front of AIVA, the computer presented in Vancouver by Pierre Barreau, which composes music autonomously (after a period of learning) inspired by Beethoven, or in front of similar systems in the field of painting, as in the case of ‘The Next Rembrandt’ project; both are capable of extraordinary achievements. Results of this level, even though the machine is ‘ordered to create’, seem to threaten a domain of intelligence so far considered exclusively human: creativity. There remains, as a last bastion in defence of human intelligence, the consideration that the machine is not aware of what it is doing. This is well explained by a pioneer of the field such as Judea Pearl, who assesses these results as the product of super-powerful machines that nevertheless merely “find hidden regularities in a large dataset”. His view is shared by other AI experts who are convinced that we have reached a limit that will be difficult to surpass. Despite the ever-increasing amount of data these machines can process, software that operates on the basis of statistical calculation is still unable to perform processes typical of human intelligence, such as generalising and reasoning abstractly, let alone handling meaning and common sense. I personally remember a lecture, many years ago, by Massimo Piattelli-Palmarini, who explained how easy it is for a child to understand that if two people can each lift 30 kilos, together they will probably lift 60; but if the same people can each jump one metre, together they will not jump two metres. Well, he concluded, try to explain this to a machine and you will understand a great deal about mental processes and intelligence. What happens, and what seems difficult to overcome, is that small variations in context, easily handled by any person (for example within a dialogue whose boundaries have not been defined in advance), can send the best AI available today into a tailspin.
The picture we have just outlined suggests a future in which we can sleep soundly: no super-intelligence looms around the corner. If we look at the human brain, with its roughly 100 billion neurons and trillions of synapses, we realise that nothing remotely comparable has ever been built (leaving aside the fact that, according to engineers, such a machine would have to be a billion times more energy-efficient than today’s best computers, a feat unachievable with current technology). The challenge to human intelligence thus appears to have been postponed for many years or (according to some) forever. What remains entirely present, instead, is the problem of the impact that narrow Artificial Intelligence (systems capable of very high efficiency in solving specific problems) may have in terms of job losses, as Martin Ford has argued.
In the face of this real threat, the issue we should truly be concerned with is our attitude towards technology: asking ourselves whether we are still capable of choosing and steering events. The risk of a comfortable and passive dependence on technology, which on the one hand makes our lives easier but on the other leads us to abandon apparently obsolete skills and competences, is very strong. What is the use of knowing how to find our way if we have a GPS that can take us to within a few metres of our destination? What is the use of knowing the rules of spelling and syntax if we have an automatic corrector at our disposal? Is it still important to struggle to learn languages if, before long, we can hold a powerful little universal translator in our hands? Is there any point in developing specific skills in searching for information and comparing sources, if all we have to do is click on Google to have ‘all the information in the world’ at our disposal? (It is a pity that hardly anyone goes beyond the first page, or asks what criteria generated it.) These are all wonderful opportunities, how can we deny it? But what happens to our heads? And everything gets worse when these opportunities are given to us without our having asked for them. Harmless suggestions, or avalanches of information constantly distracting our minds? (Powerful AI systems, backed by large investments, are at work on precisely this.)
Ethologists have taught us that animals that find food too easily develop lower intelligence. For millions of years we have evolved by solving problems; now we risk delegating everything to a few clicks. The idea that technological tools are essentially neutral, and that it is simply up to us to use them well or badly, is naive thinking, as McLuhan taught us: the changes they produce in our representation of reality are decisive, because they affect the way we react and think. This is dealt with extensively by Manfred Spitzer, head of the Centre for Neuroscience and Learning at the University of Ulm, in his book Digital Dementia. The subtitle, “how the new technology is making us stupid”, is quite explicit in describing a scenario in which our capacity for criticism, reflection and concentration is under threat, bombarded as we are by stimuli that trigger the release of dopamine, a neurotransmitter linked to the mechanisms of reward and pleasure, and that generate addiction.
This is not pointless alarmism. The discouraging data on our country concerning functional illiteracy, which stands at 28%, should give us pause: almost one Italian in three can read and write but cannot grasp the meaning of even a moderately complex text, such as a newspaper article. This sad ranking, which sees us in last place in Europe, is the sign of a serious loss of competence occurring in a historical phase in which it would instead be essential to draw on our best cognitive resources, in order to face the challenges of the future by thinking lucidly, and also with a touch of pride, about the value of human intelligence.
There is a prophetic passage, reminiscent of Orwell’s and Huxley’s dystopian novels, in the foreword to Neil Postman’s 1985 book “Amusing Ourselves to Death”, written when the internet was still the privilege of a few specialists:
“We were all waiting for 1984. It came, but the prophecy did not come true… we were spared Orwell’s nightmares. We had forgotten another vision, less hellish and less notorious than Orwell’s but just as chilling: the one contained in Aldous Huxley’s “Brave New World”. Orwell had imagined Big Brother; in Huxley’s vision it is not a supreme dictator who will take away our autonomy and culture. People will be happy to be oppressed and will worship the technology that frees them from the drudgery of thinking. Orwell feared that books would be banned; Huxley feared not that books would be banned, but that there would be no desire to read them. Orwell feared those who would deprive us of information; Huxley, those who would give us so much of it as to reduce us to passivity and selfishness. Orwell feared that ours would become a civilisation of slaves; Huxley, that it would become a trivial culture, full only of sensations and childishness. Libertarians and rationalists – always ready to oppose the tyrant – did not take into account that humans have an almost insatiable appetite for distractions. In ‘1984’ people are kept in check with punishments; in ‘Brave New World’, with pleasures… What afflicted the people of ‘Brave New World’ was not that they laughed instead of thinking, but that they did not know what they were laughing about and why they had stopped thinking.”
There is one final lesson we can learn from research into Artificial Intelligence. Those working in this field pragmatically define intelligence as the ability to tackle and successfully solve situations and problems, and we have seen this brilliantly achieved by machines. So perhaps it is time to ask ourselves, in keeping with the search for the deeper value of human experience, whether intelligence or wisdom matters more. Let us leave this reflection to the words, also extraordinarily prophetic, written by T. S. Eliot in 1934:
“Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?”