AI and ethics: the impossible couple?

Artificial intelligence is gradually pervading our personal and professional lives, and more and more of us are using solutions based on the various branches of this technology (machine learning, deep learning, etc.). But while AI holds great promise, the opacity surrounding its algorithms raises questions and even concerns, with its risks of bias, discrimination and exclusion.

Can artificial intelligence and ethics cohabit?

The topic of artificial intelligence raises so many questions that it has become a buzzword. But what really lies behind the term? And what about the decision-making power given to AI, automation and the place of humans? The technology also raises problems of discrimination, along with issues of governance and security, as well as ethical, legal and moral questions. These were the subjects of this ReadyForIT round table.

Which principles should guide AI?

Experiments are currently under way to determine how artificial intelligence should react in the event of mortal danger: who should it prioritise, the youngest, the oldest, pregnant women? The answer depends on the country: in Asia, for example, the young tend to be sacrificed, whereas in European countries it is the elderly.

Questioning notions of life and death will become essential, especially in transportation, with autonomous vehicles, and in healthcare, with vital diagnoses. So which ethical principles must AI integrate, and how can it integrate them?

A consensus has to be reached at every level, in particular as regards the notion of consent for passengers boarding a vehicle that could decide to sacrifice them in the event of danger. Of the principles around which this consensus must be found, the notion of transparency is central.

Algorithms will have to offer a level of transparency that enables experts to perform audits and thus give an opinion on their safety and interoperability. However, GAFAM players such as Facebook currently appear unwilling to provide this level of transparency. Yet for political and democratic reasons, this status quo of opacity cannot continue.

In addition to transparency, six other requirements are put forward in the European Commission's latest guidelines on ethical AI: human agency and oversight; technical robustness and safety; privacy and data governance; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.


To what extent should the law intervene in the field of artificial intelligence?

A law on ethical artificial intelligence, comparable to bioethics legislation, does not appear necessary. As of 2019, three legal approaches to AI can be identified:

  • Normative power: sets of rules and obligations designed to govern relations between individuals within society.
  • Distinction between morality and ethics: morality is normative and imperative, while ethics appeals to discernment. With ethics there is a dilemma, the answer is not obvious, and we have to seek the best solution here and now.
  • Creation of the algorithm: with artificial intelligence in autonomous vehicles, a death can occur not only because of how the vehicle was programmed, but also because its sensors misinterpret signals, which poses a problem. The AI has to decide who to kill in a “lose-lose” situation. This is a huge responsibility for programmers, who have to handle cases of indecision and may resort to introducing random variables, as the sketch after this list illustrates.
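To make the role of those random variables concrete, here is a minimal, purely illustrative Python sketch: when several trajectories are estimated to be equally harmful, the program falls back on a random draw rather than hard-coding a preference. The function name, trajectory labels and harm scores are all hypothetical and do not come from any real vehicle system.

```python
import random

def choose_trajectory(options, tolerance=0.05):
    """Pick the trajectory with the lowest estimated harm score.

    When several options are indistinguishable within `tolerance`
    (a "lose-lose" situation), fall back to a random draw instead of
    encoding a fixed preference about who to sacrifice.
    """
    best = min(options, key=lambda o: o["harm"])
    # Options whose estimated harm is indistinguishable from the best
    # one form the indecision set.
    tied = [o for o in options if o["harm"] - best["harm"] <= tolerance]
    return random.choice(tied)

# Hypothetical example: two trajectories with near-identical harm estimates.
options = [
    {"name": "swerve_left", "harm": 0.81},
    {"name": "brake_straight", "harm": 0.79},
    {"name": "swerve_right", "harm": 0.95},
]
print(choose_trajectory(options)["name"])  # randomly one of the two tied options
```

The point of the sketch is not the numbers but the design choice: someone has to decide where deliberate ranking ends and randomness begins, which is exactly the responsibility the panel attributed to programmers.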

The law is not necessarily effective here because it is not yet mature. But artificial intelligence remains an excellent subject for ethical reflection. Indeed, the transparency demanded by the European Commission will not come from the goodwill of a player like Facebook: companies will never hand over the keys to their algorithms of their own volition. The obligation of transparency will have to be framed by law.

How can ethics be integrated in this technology?

There needs to be a political debate on artificial intelligence in democratic societies in order to establish a consensus on what is and is not acceptable, especially in the area of health. Europe today finds itself between the American laissez-faire approach, exemplified by Facebook, and the Chinese social credit system, with its three-digit citizen score planned for 2020-2021.

Moreover, the issue of transparency is all the more complex in that it is not always easy to explain an algorithm, especially one based on neural networks. It is sometimes impossible for a human to anticipate the results of artificial intelligence. Indeed, in certain medical specialties, AI now delivers better results than humans. In the future, we will have to ask whether to rely on humans alone or to prefer a dual human-machine perspective.
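As one illustration of how the audits mentioned above might probe an opaque model from the outside, here is a minimal sketch using scikit-learn's permutation importance; the dataset and model are generic stand-ins, not anything presented at the round table. The idea is model-agnostic: shuffle one input feature at a time and measure how much the trained network's accuracy drops, which gives auditors a partial view of what the model relies on even when its inner workings are inscrutable.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A stand-in medical dataset; the point is the auditing technique, not the task.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network: reasonably accurate, but opaque to direct inspection.
model = make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

Such post-hoc techniques do not fully explain a neural network, but they suggest that "impossible to explain" is a matter of degree, which matters when regulators demand auditability.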

In the case of autonomous cars, if using them reduces the number of accidents by more than half, we will have to wonder whether using traditional vehicles is an acceptable risk.

In any case, we can no longer allow the GAFAM to continue as they have, acquiring such intimate knowledge of individuals that it can give rise to abuse.

A form of roadblock has to be put in place, as in Germany, where aggregating data from across the Facebook ecosystem (Facebook, Instagram and WhatsApp) is now prohibited. Nevertheless, according to Paul-Olivier Gibert, "we have to be careful not to go too far, i.e. we have the problem of the GAFAM who have a global monopoly on […] internet data, […] except that we are presumably at a time where they will go no further".

Speakers: Paul-Olivier Gibert, AFCDP; Bernard Benhamou, Institut de la Souveraineté Numérique; Frédéric Charles, Suez Smart Solutions; and Ouafaa El Moumouhi, IBM France