“At the IRI, we apply artificial intelligence to benefit people, from robot assistants that help people to get dressed to the recycling of electronic products”
“In thirty years, Barcelona will have robots everywhere, even in schools”
“France wants to promote artificial intelligence through a third path between Chinese hyper-surveillance and the ultra-liberal model of the United States”
Carme Torras, research professor at the Institut de Robòtica i Informàtica Industrial, IRI (CSIC – UPC), is one of the best-known authorities on ethics applied to robotic systems. Her opinion has been sought at sector meetings of UNESCO and at the recent Global Forum on AI for Humanity. Her professional career, divided between the United States and Barcelona, has produced numerous publications and technology developments. In addition to all this, she is deeply involved in science dissemination, with excellent results: one of her books, ‘La mutación sentimental’ (The Sentimental Mutation), has been translated into English and included in study materials on ethics and robotics at various US and European universities.
Would it be an exaggeration to say that the ethical use of robotics is as important as technological development itself?
No, this statement is not an exaggeration; I firmly believe in it. It would be good if the general public understood basic concepts of artificial intelligence (AI) and robotics, knew how these systems work and what benefits they can provide. And, of course, if they also knew the risks.
This applies to the users. But what about those who design, develop and sell this type of technology?
It’s true for them as well. Those who programme and develop this technology must be aware of the ethical risks they face. Bias in the learning processes of these systems is one such risk, and we are already seeing it in real applications of artificial intelligence. Attributes such as sex, race or place of residence can be taken into account by algorithms that, for example, predict the probability of a criminal reoffending or of a person defaulting on a loan. In these cases, the historical data can introduce considerable bias.
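The point about historical data can be made concrete with a minimal sketch. The data, group names and "model" below are entirely hypothetical, invented for illustration only: a naive predictor trained on skewed loan-default records simply reproduces the skew, which is the kind of bias an ethical evaluation should look for.

```python
# Hypothetical illustration: a model trained on biased historical data
# inherits that bias. All records and group names are made up.
from collections import defaultdict

# Toy historical records: (group, label), where label=1 means "defaulted".
# The history over-represents defaults for group "B".
history = [("A", 0)] * 80 + [("A", 1)] * 20 + [("B", 0)] * 50 + [("B", 1)] * 50

# A naive "model" that predicts each group's historical default rate
# reproduces whatever imbalance the data contains.
totals, defaults = defaultdict(int), defaultdict(int)
for group, label in history:
    totals[group] += 1
    defaults[group] += label

predicted_rate = {g: defaults[g] / totals[g] for g in totals}
print(predicted_rate)  # {'A': 0.2, 'B': 0.5}

# A simple fairness check: the gap between the groups' predicted rates
# (a demographic-parity difference). A gap this large signals bias.
gap = abs(predicted_rate["A"] - predicted_rate["B"])
print(f"demographic parity gap: {gap:.2f}")  # 0.30
```

Real auditing tools measure many such gaps across protected attributes, but the underlying idea is the same: the model never "decides" to discriminate; it faithfully learns a skewed history.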
What system is used to ensure that these ethical criteria are met in phases prior to launching a robot technology?
There are mechanisms for this. For example, in all research projects funded by the European Union on robotics and AI, an ethical evaluation is required before funding is awarded. Researchers must provide protocols indicating how machine learning will occur, how robots will interrelate with people or how users’ rights will be safeguarded. The CSIC has an ethical committee that establishes work criteria in these areas.
What nuances differentiate robot ethics from ethics that apply to all technological development?
Until now, ethical models for technology have focused only on safety, ensuring that autonomous machines do not create hazardous situations. However, as we move towards social robotics, with a closer relationship between robots and people, new needs arise, such as ensuring that the machines do not deceive. Vulnerable users, for example elderly people or children, may come to believe that the robots they interact with are genuinely interested in and concerned about them, and this is very dangerous.
So the problem arises when robots relate directly with humans, when they carry out their activity in an environment where emotions are involved?
That’s right. Some time ago, the European Union drew up a directive derived from human rights, with basic principles to determine how the interaction between robots and humans should be undertaken. Other international bodies, such as the Institute of Electrical and Electronics Engineers (IEEE), are developing these principles, which are being updated. It is important to have a common foundation to guide governments in the establishment of a legal framework. In the application of robotics to activities such as teaching or health, there is a basic rule that must be respected: preserve people’s dignity and do not objectify them.
In the case of care robots, is the red line emotions?
I think it’s very good if robots can detect the user’s emotions, for example, to encourage them or attract their attention, but I’m against the machine simulating that it has emotions, because this is a deception and could lead to defencelessness and social isolation of the person.
The use of algorithms on a large scale forms part of the robotisation process. Is their mass use in tools like search engines and information systems contributing to the intellectual impoverishment of users?
The danger is falling into an information bubble, in other words, users ending up talking and relating only with those who share the same views. Part of the digital divide is between those who can use the internet to learn and improve and those who settle for whatever the system offers them as the first option.
What are the alternatives to these systems, which operate almost like a monopoly?
There are many search tools, and users must know how to find reliable information among so much fake news and self-serving content. They must also check information; this, too, should be learnt.
One problem derived from globalisation and mass access to information systems is the increase in fake news. Can algorithms help us to expose it? How?
Some people are already working on using AI to expose fake news, by trying to identify reliable sources and eliminate biases. Artificial intelligence can cause the problem and at the same time provide the solution. I like to stress that many of its applications are positive, with systems that learn to improve through users’ experience, such as online translators.
You have shown support for directing technology towards ethical positions that benefit citizens. Does this mean that governments and the public sector should lead the way in robotics research?
I think that there must be monitoring by public authorities, with clear regulations and some controls, but I don’t think it’s feasible for research to be led by the public sector. Private entities play a very important role.
What is the trend in the development of artificial intelligence and robotics globally?
In October, I participated in the Global Forum on AI for Humanity in Paris. The French government aspires to lead the development of artificial intelligence through a third path, between the hyper-surveillance of China and the ultra-liberal model of the United States, which I find very interesting. Domestically, France is investing over a billion in attracting talent and expanding its infrastructures and research teams. I certainly think that Spain should take this path, which seeks a more ethical development of these technologies.
You have published two novels related to science and technology. One of them, La mutación sentimental (The Sentimental Mutation), has been translated into English and is used in university courses on ethics and robotics in several countries. How did you achieve this?
The novel was published in Catalan in 2008 and won two awards. It was then translated into Spanish, and MIT Press became interested in using it as educational material in a course on ethics in engineering and computing. They asked me to prepare complementary material with 24 questions that could be used to guide the debate, and they translated it into English. Since then it has been used in US universities, and in Sweden, Great Britain and, coming full circle, Catalonia. It’s very satisfying.
Let’s talk about the project that you are carrying out at the IRI right now.
We’re working on several areas of AI application. The biggest project we have is CLOTHILDE (Cloth manipulation learning from demonstrations), which combines topology and machine learning to apply them in three areas: assisting in the process of dressing people with reduced mobility; manipulating professional textiles such as bedding and covers; and improving automatic processes in reverse logistics (returning fashion products to the sales chain after purchase).
In an associated area, we are developing BURG, which is focused on creating benchmarking systems for handling rigid and deformable objects, specifically clothing. The aim is to standardize robotic processes in the handling of textile garments.
In relation to people who suffer from dementia, we are collaborating with the Fundación ACE to use robots in diagnostic and treatment processes. The project is called SOCRATES, and is focused on five areas: emotion, intention, adaptability, design and acceptance.
And in another line of research we use artificial intelligence to automate the recycling of electronic products through the IMAGINE project. We apply it to the processes of identifying and handling equipment for recycling, from hard discs to mobile phones. These are complex operations due to the wide range of models, which makes a robot’s work difficult: it has to learn to recognize the equipment, plan the disassembly sequences and execute them.
Let’s imagine the future. What role will robots have in a city like Barcelona in 2050?
In thirty years, Barcelona will have robots everywhere, from robotised assistants in residential centres, hospitals and civic centres to airports and train stations. We will see robot gardeners tending flower beds and maintaining roadside verges. And of course, robots will also be carrying out surveillance tasks. We will become familiar with many types of robots in these activities, even in schools. Of course! Not as replacements for teachers, but in support tasks, for example in learning music and languages. And this is not science fiction: in South Korea they are already using robots to teach English, remotely operated by native teachers in England.