(by Michele Gorga, lawyer and member of the Aidr observatory for the coordination of DPOs, RTDs and Reputation Managers) With regard to automated procedures for the processing of personal data for the purposes of an administrative decision, the reference provision, as is well known, is Article 22 of the GDPR (EU Regulation 2016/679), which provides that: "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." This principle can be summarized in the maxim that no one may suffer consequences on his or her rights as a result of a decision adopted entirely by a machine, an algorithm or another non-human decision-making process. It finds its counterpart in Recital 15 of the GDPR, which provides that "In order to prevent creating a serious risk of circumvention, the protection of natural persons should be technologically neutral and should not depend on the techniques used". Both provisions of Regulation (EU) 2016/679 are therefore fully consistent with the principles established by domestic administrative law, which, in coordinating the rules on transparency with the need to comply with data protection legislation, engages on the one hand administrative discretion and on the other the category of the bound act to which all public administrations are subject. This raises further critical issues concerning the algorithm underlying a decision adopted by an artificial intelligence system.

The first profile concerns the need to respect the principle of transparency, which is now central to the activity of the Public Administration (PA) and must underpin new public services based on algorithms and automated decisions. In this sense, it will be necessary to ensure transparency not only of the data, but also of the algorithms, of the logic by which the databases are built, and of the process by which the service operates. The second profile relates to the legal responsibility of the Public Administration even when it resorts to artificial intelligence solutions in the provision of services or decides to adopt fully automated procedures. Administrative case law has dealt with cases in which the administrative procedure is entirely governed by a machine, and it has done so while upholding the fundamental assumption that judicial protection cannot be excluded or limited for certain categories of acts. In some very recent judgments, the administrative courts have addressed the problem of automation in the administrative procedure and identified three basic principles governing the matter: knowability; non-exclusivity of the algorithmic decision; and algorithmic non-discrimination.

Dwelling on the first principle, namely "the right not to be subject to a decision based solely on automated processing, including profiling", this implies that an algorithmic decision taken by the PA can never be without the supervision of an official, a natural person in charge. This principle was already affirmed by the US District Court of Wisconsin and accepted by Italian administrative case law, crystallized in a 2019 ruling, which a recent opinion issued by the Italian Data Protection Authority (the Privacy Guarantor) seems not to acknowledge.

And in fact, in response to a request for an opinion from the Autonomous Province of Trento concerning a draft provision designed to permit processing involving fully automated decisions, with a view to the possible approval of a future bill on economic and financial support measures, i.e. the granting of grants, subsidies, contributions and other forms of economic advantage to residents, the Province expressed the possibility of using in these procedures systems based on algorithmic logic, even fully automated ones, provided they are periodically verified in order to minimize the risk of errors, distortions or discrimination of any kind.

In the request, although it was envisaged that the algorithmic formula would be made fully known to the recipients concerned by the administrative procedure (even though the same provision did not specify how the formula was to be made known to them), it was only generically provided that the interested parties would be able to "challenge the decisions made on the basis of the same formula and request an effective human intervention". This amounts to an ex post rather than ex ante intervention, with the consequent violation of the additional criterion, set by European legislation, of not aggravating the administrative procedure to the detriment of the users of the PA. Despite these evident critical issues, the Guarantor expressed a favorable opinion on the proposed provision and laid down prescriptions and conditions which, however, appear marginal and residual.

Privacy and automated decision-making in the PA