The automation of public social protection services: an analysis of the risks for their beneficiaries
DOI: https://doi.org/10.14295/juris.v33i1.16361

Keywords: social protection; state administration; automation; social risks; fundamental rights

Abstract
The exercise of rights in the virtual environment raises constitutional concerns, given the inability of analog-era constitutions to regulate the myriad relationships established in cybersociety, which is permeated by new transnational actors. With regard to social rights, public authorities are responsible for providing services to society, especially those aimed at the social protection of individuals, such as social assistance and social security, which are the objects of this article's analysis. As the relationship between the state and its citizens increasingly develops in digital environments, we seek to answer the following question: once artificial intelligence is used in the decision-making process regarding a social security or welfare right, what are the risks to which the holders of these rights are subject? The hypothetical-deductive approach was chosen, starting from the premise that the use of artificial intelligence by the state must be made compatible with the principles of public law in order to protect fundamental rights. In terms of objectives, the research is exploratory and theoretical in nature. The main result was the identification of five current risks to the realization of social assistance and welfare rights: 1) the economic interest of private companies in profiting from users' browsing; 2) the progressively withdrawn position of the State in relation to fundamental rights; 3) the surveillance, labeling and segregation of citizens based on the integration of public bodies' databases; 4) the oversimplification of legal norms when they are translated into algorithmic language; and, finally, 5) the extreme trust placed in automated systems. With regard to the first, third and fifth risks, the approval of a legal framework for artificial intelligence, similar to the one adopted in Europe, appears to be an appropriate measure to address these problems. Further research is needed, however, to explore approaches to mitigating the negative consequences related to the second and fourth risks.
License
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
By submitting the originals, the author(s) assign publication rights to JURIS.