Ethics and Artificial Intelligence

(by Michele Gorga) Looking at the path of humanity from its origins to the present day, we can conclude that technological development is not linear but exponential, by virtue of the fundamental principle known as the law of accelerating returns. This law can be visualized by plotting points on a Cartesian plane: on the horizontal axis (the abscissa, x) we place time; on the vertical axis (the ordinate, y) the level of technological progress reached at a given date. The result is not a more or less inclined straight line but an exponential curve, and from its trend we may expect that within a few years it will approach a line parallel to the ordinate axis. At that point we would no longer be able to fix the moment on the time axis at which a given level of progress is reached and we would thus leap into the condition that, borrowing a term from physics, is called the singularity.
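The contrast the author draws can be illustrated with a short numerical sketch (purely illustrative, not from the article; the doubling time is a hypothetical parameter): progress modeled as exponential quickly dwarfs any linear trend, which is why the curve appears to "go vertical" on any fixed scale.

```python
# Illustrative sketch: linear vs. exponential technological progress.
# The rate and doubling_time parameters are hypothetical assumptions.

def linear_progress(t, rate=1.0):
    """Progress that grows by a fixed amount per unit of time."""
    return rate * t

def exponential_progress(t, doubling_time=2.0):
    """Progress that doubles every `doubling_time` units of time."""
    return 2 ** (t / doubling_time)

if __name__ == "__main__":
    for t in [0, 4, 8, 16, 32]:
        print(f"t={t:2d}  linear={linear_progress(t):6.1f}  "
              f"exponential={exponential_progress(t):10.1f}")
```

With a doubling time of 2, by t = 32 the exponential curve has reached 65536 against a linear value of 32: on a plot scaled to show both, the exponential branch is visually indistinguishable from a vertical line.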

By "singularity" we mean that condition in which the four fundamental forces of nature operate, namely: gravity; electromagnetism; the strong force, which binds the protons in the atomic nucleus; and the weak force, which governs radioactive decay. We can identify these forces with four corresponding letters: G for genetics; R for robotics; N for nanotechnology; I for artificial intelligence.

Proceeding in order: G for "genetics" refers to the ability to read the genetic code of all living beings, which will make it possible to modify the species according to the normotype that the morality and ideology in force at a given historical moment will permit, enabling man to be what he wishes to be regardless of what he is; genetically modified organisms will thus reflect what morality and politics allow in each historical period. Turning to R for "robotics", we must understand by it not what robotics is today, but advanced machines with knowledge of space and time, capable of taking on human features: androids endowed with emotional consciousness [7]. Robotics, then, not merely as intelligence-driven machines or work assistants, but as entities capable of relating to humans in professional activities. The consequence for society and politics will be social movements aimed at securing civil rights for androids and, more generally, for all of creation, the animal and plant world, the environment, and so on; here too, morality and politics will be forced to take a stand.

Passing to N for "nanotechnology", we must understand by it infinitely small technology: liquid elements able to instantly change the outside and the inside of reality, replacing and modifying the very space of human existence. Finally, by I for "artificial intelligence" we mean what can historically be compared to the Holy Grail, in the sense of AI as an algorithm of life and eternal knowledge, or to a deus ex machina [8]: the most dangerous of these technologies and, in future scenarios, one so invasive as to undermine the value system and the very foundations of human civilization and of man as they have historically come to be determined over time.

Artificial intelligence, in fact, rests on algorithms that may serve domination, control, careerism, consumerism, and competition: technologies that could prove ruinous for humanity and undermine its very existence. And it is precisely artificial intelligence that, more than any other technology, is developing rapidly in our society and consequently changing our lives: improving health care services, increasing the efficiency of agriculture, helping to mitigate climate change, and improving the efficiency of production systems through predictive analysis, with gains in safety. At the same time, however, artificial intelligence brings with it new risks, connected to opaque decision-making mechanisms, possible gender-based discrimination, intrusion into private life, or the use of devices for criminal purposes. AI technology is powerful precisely because it has become transparent, and we all rely on devices that we can by now call our technological and cognitive prostheses. For this reason we have become more vulnerable to those who use technology for unethical or criminal ends.

These are the reasons that led to the need for European coordination on the regulation of AI, given the human and ethical implications inherent in artificial intelligence and the need for deep reflection on improving the use of big data for innovation. Hence the need for a binding regulatory framework and for investments aimed at a double objective: promoting, on the one hand, the widespread adoption of AI and, on the other, mitigating all possible risks associated with the new digital technology.

These are the deeper reasons behind the communications of 25 April 2018 and 7 December 2018, with which the European Commission presented its vision of artificial intelligence (AI), baptized "AI 'made in Europe': ethical, secure and cutting-edge".

The Commission's vision rests on three pillars: 1) increasing public and private investment in AI to promote its uptake; 2) preparing for socio-economic change; and 3) ensuring an appropriate ethical and legal framework to strengthen European values. To support the implementation of this vision, the Commission set up the High-Level Expert Group on Artificial Intelligence, an independent group tasked with drafting two documents: 1) ethical guidelines for AI and 2) policy and investment recommendations.

As for the document on the ethical guidelines for AI, the group notes that AI systems, while offering concrete benefits to individuals and society, still involve risks and can have negative effects that are difficult to predict, for example on democracy, the rule of law, distributive justice, or the human mind itself. That is why, according to the experts, appropriate measures must be taken to mitigate these risks, in a way proportionate to their extent, and methods must be considered to ensure their implementation. First of all, it is necessary to foster research and innovation to support the evaluation of AI systems and their compliance with the requirements, to disseminate the results, and to train a new generation of experts in AI ethics. Stakeholders must be clearly informed about the capabilities and limitations of a system. Being transparent and facilitating the traceability and verifiability of AI systems, particularly in critical contexts or situations, becomes essential, as does involving stakeholders throughout the entire life cycle of the AI system and promoting training and education so that all stakeholders are informed about trustworthy AI. Starting from this last point, the need for trustworthy AI, it should be emphasized that for people and companies trustworthiness is a prerequisite for the development, deployment, and use of the systems themselves: otherwise the adoption of the related technologies could be hindered, and the possible social and economic benefits slowed down. Confidence in the development, deployment, and use of AI systems therefore concerns not only the intrinsic properties of the technology but also the qualities of the technical and social systems around it. In its first document the working group accordingly laid down guidelines for trustworthy artificial intelligence, providing that it should rest on three components.

These components have been identified as: a) legality, an artificial intelligence subject to all the laws and regulations applicable in the specific matter; b) ethics, AI understood as adherence to a code of ethics and to the related ethical principles and values; c) robustness, from both a technical and a social point of view, since even with the best intentions AI systems can cause unintended harm. Each of these components is necessary in itself but not sufficient for trustworthy artificial intelligence. The three components are aimed primarily at stakeholders and indicate how the aforementioned ethical principles can be made operational in social and technical systems.

Starting from an approach based on fundamental rights, the document first identifies the ethical principles and related values that must be respected in the development, deployment, and use of AI systems, and then moves on to guidance on how to create trustworthy AI through seven requirements that AI systems should satisfy, for whose implementation both technical and non-technical methods can be used. These are: 1) human agency and oversight; 2) technical robustness and safety; 3) privacy and data governance; 4) transparency; 5) diversity, non-discrimination and fairness; 6) societal and environmental well-being; 7) accountability. The expert group also proposed an assessment checklist for trustworthy AI that can help make the seven requirements operational. Finally, the closing section of the document offers examples of advantageous opportunities and of serious concerns raised by AI systems, in the awareness that Europe enjoys a distinctive advantage deriving from its commitment to placing the citizen at the center of its own activities.
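As a purely hypothetical illustration (the data structure and scoring convention below are assumptions, not part of the guidelines), the seven requirements lend themselves to being tracked as a simple self-assessment checklist:

```python
# Hypothetical sketch: the seven requirements for trustworthy AI listed in
# the EU High-Level Expert Group guidelines, tracked as a self-assessment
# checklist. The convention (True = requirement addressed) is an assumption.

REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

def open_items(assessment):
    """Return the requirements not yet marked as addressed."""
    return [r for r in REQUIREMENTS if not assessment.get(r, False)]
```

For example, an assessment marking only "transparency" as addressed would report the remaining six requirements as open items, making the gaps explicit for each stage of the system's life cycle.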

Michele Gorga, lawyer and member of the AIDR Observatory for the coordination of DPOs
