Section “PSYCHOLOGY OF EMOTIONS” by Diego Ingrassia – “EMOTIONAL-BEHAVIOURAL ANALYSIS AND NEW TECHNOLOGIES”
for PSICOLOGIA CONTEMPORANEA – Futuro – no. 267, May-June 2018 – GIUNTI EDITORE
During the 1980s, American cinema was particularly prolific in the science fiction genre, and the future it portrayed followed two very different strands. In Robert Zemeckis’s Back to the Future trilogy we find a world populated by time machines, flying cars and other strange devices, in which progress is viewed with optimism and hope. In the same years, however, films of a very different kind, closer in spirit to Anglo-Saxon dystopian literature, of which Orwell and Huxley are the most famous examples, also met with success. Films such as Ridley Scott’s Blade Runner or James Cameron’s Terminator portrayed androids rebelling against their creators, laying the foundations for an ominous future in which human beings risked losing their primacy on the evolutionary scale, supplanted by a superior, invulnerable and more intelligent ‘artificial species’.
Thirty years later, which of these scenarios is coming true? As far as the three films mentioned above are concerned, none. Yet when people are asked about the future, opinions remain divided: on one side, optimists convinced of the goodness of progress; on the other, worried and pessimistic critics. Significant in this respect was the media reaction to the news about Alice and Bob, two chatbots developed within a Facebook AI (Artificial Intelligence) experiment. They became famous because, out of nowhere, exploiting their speed in processing abstract representations, they invented a language incomprehensible to the team of researchers, who, between amazement and panic, decided to stop the experiment. The technology we have today carries both risks and benefits.
The revolution brought about by digital means of communication allows us to do things that were unimaginable just a few years ago, with extraordinary speed, ease and precision. At the same time, it exposes us to risks we were previously unaware of, concerning privacy, potential economic harm and the protection of our rights. It is clear, then, that an epochal change has been under way for some years, one that requires new knowledge, and above all a renewed awareness, if it is to be governed.
Two examples, by their very nature close to the topics we have dealt with on these pages, can help us better understand the change taking place.
AVATAR (Automated Virtual Agent for Truth Assessment in Real time) is an experimental project conceived by Aaron Elkins, professor at the Fowler College of Business Administration at San Diego State University. Its function is to support airport security services in assessing passengers: a true digital lie detector, according to its creators.
Its operation is sophisticated: once the identity document has been scanned, the ‘kiosk’ (a booth in which the passenger interacts with an artificial intelligence system equipped with microphones and very high-resolution cameras) subjects the passenger to a long series of questions. During this phase, the system analyses and evaluates tone of voice, pupil dilation, eye movement, facial expressions and posture.
By analysing these involuntary behavioural signals, the system formulates its own assessment based on the stress levels detected in the subject under examination. The system, as noted, is still at an experimental stage, but it represents a big step forward compared with traditional polygraphs, which are impractical for rapid, large-scale screening in crowded settings such as airports.
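To make the idea concrete, here is a minimal sketch, in Python, of how a system of this kind might fuse separate behavioural channels into a single stress score. The indicator names, weights and decision threshold are illustrative assumptions of ours, not the actual model used by the AVATAR project.

```python
# Hypothetical sketch of multimodal score fusion. All names, weights and
# the threshold are illustrative assumptions, not AVATAR's real model.

# Normalised scores in [0, 1], assumed to come from upstream voice,
# eye-tracking and facial-expression analysers.
indicators = {
    "voice_pitch_variation": 0.72,
    "pupil_dilation": 0.55,
    "gaze_aversion": 0.40,
    "facial_micro_tension": 0.63,
    "posture_shifts": 0.30,
}

# Illustrative relative weight of each behavioural channel.
weights = {
    "voice_pitch_variation": 0.25,
    "pupil_dilation": 0.25,
    "gaze_aversion": 0.15,
    "facial_micro_tension": 0.25,
    "posture_shifts": 0.10,
}

def stress_score(indicators, weights):
    """Weighted average of the normalised behavioural indicators."""
    return sum(indicators[k] * weights[k] for k in indicators)

score = stress_score(indicators, weights)
# Purely illustrative decision rule: refer the passenger to a human
# officer only when the combined score exceeds the threshold.
print(f"stress score = {score:.2f}:",
      "refer to officer" if score > 0.60 else "clear")
```

Whatever the real model looks like, the design principle is the same: no single channel decides; the machine merely aggregates signals and flags borderline cases for human attention.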
The second example concerns a specialised field in which the psychology profession has historically played a leading role. Since 2016, Unilever has adopted a staff selection process based on Artificial Intelligence and gamification. In the first step, the company posts open positions on LinkedIn or Facebook. The candidate applies without having to send a classic CV: an algorithm makes an initial assessment of skills based on the LinkedIn profile. The next step, for those deemed suitable, consists of a series of games that measure concentration, short-term memory, general knowledge and problem solving; this stage, too, is completed from the comfort of home, on a smartphone. Those who pass must then submit a video message, which sophisticated software analyses on the basis of voice, facial expressions, verbal style and content. Only those who pass this last step are invited to the company for a classic selection interview, conducted by experienced psychologists.
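The logic of such a process is a staged funnel, in which each automated filter cheaply narrows the pool before any human time is spent. The following Python sketch illustrates that structure; the stage names, scores and cut-offs are our own illustrative assumptions, since the criteria of the real system are not public.

```python
# Hypothetical sketch of a staged screening funnel of the kind described
# above. Thresholds and scores are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    profile_score: float  # stage 1: algorithmic profile assessment
    game_score: float     # stage 2: gamified cognitive tests
    video_score: float    # stage 3: automated video analysis

# Illustrative pass thresholds for each automated stage.
CUTOFFS = {"profile": 0.5, "games": 0.6, "video": 0.7}

def shortlist(pool):
    """Apply the three automated filters in sequence; the survivors go on
    to the final interview with human psychologists."""
    stages = [
        ("profile", lambda c: c.profile_score),
        ("games", lambda c: c.game_score),
        ("video", lambda c: c.video_score),
    ]
    for stage, score in stages:
        pool = [c for c in pool if score(c) >= CUTOFFS[stage]]
    return pool

applicants = [Candidate("A", 0.8, 0.7, 0.9), Candidate("B", 0.4, 0.9, 0.9)]
print([c.name for c in shortlist(applicants)])  # only "A" clears every stage
```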
We are dealing with innovative technologies that drastically reduce the cost and time of information processing, and it is precisely here that their value lies. We know that we cannot compete with machines in the ability to process huge amounts of data and information quickly. The goal of ‘efficiency’ is certainly achieved, but what price are we paying in terms of quality?
Returning, then, to the initial dilemma between enthusiastic adherence to the technologies of the future and the fear of a progress dangerous for the human condition, one thing seems clear: the changes under way, however massive, cannot be stopped. It is for this very reason that we must fully understand the distinctive elements and the true value of human intelligence.
The machine is not aware of what it does: it has no consciousness and no emotions. The ability to contextualise elements of meaning, to bring emotional balance and reasonableness to decision-making, to understand responsibly the consequences of a given choice, and to provide answers even in ambiguous situations: these are aspects peculiar to human intelligence that, for the time being, software is unable to replicate. This view is shared by the psychologist Paul Ekman, who reminds us that, for now, human judgment remains the most reliable tool for coding and interpreting behaviour (particularly when analysing its emotional aspects), as you can read in more detail in the article dedicated to the FACS method in this same issue of the magazine.