OBSERVARE
Universidade Autónoma de Lisboa
e-ISSN: 1647-7251
VOL. 16, Nº. 2, TD3
Thematic Dossier
Rule of Law, Human Rights, and Institutional
Transformation in Times of Global and National Challenges
March 2026
THE ALGORITHMIC RULE OF LAW: INSTITUTIONALIZING ACCOUNTABILITY
AND HUMAN OVERSIGHT IN AI-DRIVEN LEGAL SYSTEMS
KOSTIANTYN KLYMOV
tauren.05@ukr.net
PhD Student, Department of Private Law and Social Security, Faculty of Law, Sumy National
Agrarian University, Sumy (Ukraine). https://orcid.org/0009-0007-5668-0569
LESIA PRON
lesja.pron@gmail.com
PhD, Chief Specialist of the Department for Receipt, Registration, Execution, and Issuance of
Court Documents (Clerk's Office) of the District Court of Horodenka, Ivano-Frankivsk Region
(Ukraine). https://orcid.org/0000-0001-8701-4779
KOSTIANTYN OROBETS
k.m.orobec@nlu.edu
PhD (Law Sci.), Associate Professor, Department of Criminal Law Policy, Yaroslav Mudryi
National Law University, Kharkiv (Ukraine). https://orcid.org/0000-0001-8783-3950
RUSLANA LIASHENKO
ruslyashenko13@gmail.com
PhD (Law Sci.), Associate Professor, Department of Law, Faculty of Law, Public
Administration and National Security, Polissia National University, Zhytomyr (Ukraine).
https://orcid.org/0000-0002-6129-7907
LESIA VASYLENKO
lesyavasilenko@ukr.net
PhD (Law Sci.), Associate Professor, Department of Law, Faculty of Law, Public
Administration and National Security, Polissia National University, Zhytomyr (Ukraine).
https://orcid.org/0000-0001-8333-8573
Abstract
The article examines the integration of artificial intelligence (AI) technologies into justice,
public administration, and private law, highlighting the need to rethink traditional notions of
legal personality, liability, and procedural guarantees. The study employs an integrative
review of literature, comparative legal analysis of supranational and national regulations,
formal-dogmatic analysis of AI legal personality and delictual capacity, content analysis of
ethical codes, case studies on algorithmic systems in judicial and administrative processes,
and scenario modeling of the “human–algorithm–state” partnership. The dual nature of AI in
the legal system is identified: while its interpretation as a tool dominates, increasing
autonomy is opening space for functional legal personality within delegated responsibility.
Key interaction points are highlighted, including algorithmic rule of law, the right to non-
automated decisions, audit and impact assessment, and explainability, alongside a lack of
operational mechanisms for appeals and causal reasoning in AI-related cases. A three-level
partnership framework is proposed, covering normative, ethical, and institutional dimensions,
with a phased recognition model ranging from functional to limited civil and conditional
subjectivity. The study demonstrates that effective AI integration requires simultaneous
reinforcement of procedural guarantees and adaptation of liability regimes. Optimal
implementation involves a cooperative model in which algorithms remain accountable,
explainable, and human-controllable. Recommendations include adopting a national charter
on AI and law, establishing a register of high-risk systems, and creating independent centers
for assessing AI’s impact on national legal systems.
Keywords
Artificial intelligence, cybersecurity, law enforcement, legal regulation, legal relations.
Resumo
O artigo examina a integração das tecnologias de inteligência artificial (IA) na justiça, na
administração pública e no direito privado, destacando a necessidade de repensar as noções
tradicionais de personalidade jurídica, responsabilidade e garantias processuais. O estudo
emprega uma revisão integrativa da literatura, análise jurídica comparativa de regulamentos
supranacionais e nacionais, análise formal-dogmática da personalidade jurídica e capacidade
delitual da IA, análise de conteúdo de códigos éticos, estudos de caso sobre sistemas
algorítmicos em processos judiciais e administrativos e modelagem de cenários da parceria
“humano-algoritmo-Estado”. A natureza dual da IA no sistema jurídico é identificada:
enquanto a interpretação domina como uma ferramenta com maior autonomia, está a surgir
espaço para a personalidade jurídica funcional dentro da responsabilidade delegada. São
destacados pontos-chave de interação, incluindo o Estado de direito algorítmico, o direito a
decisões não automatizadas, auditoria e avaliação de impacto e explicabilidade, juntamente
com a falta de mecanismos operacionais para recursos e raciocínio causal em casos
relacionados com IA. É proposta uma estrutura de parceria de três níveis, abrangendo
dimensões normativas, éticas e institucionais, com um modelo de reconhecimento faseado
que vai da subjetividade funcional à subjetividade civil limitada e condicional. O estudo
demonstra que a integração eficaz da IA requer o reforço simultâneo das garantias
processuais e a adaptação dos regimes de responsabilidade. A implementação ideal envolve
um modelo cooperativo no qual os algoritmos permanecem responsáveis, explicáveis e
controláveis pelo ser humano. As recomendações incluem a adoção de uma carta nacional
sobre IA e direito, o estabelecimento de um registo de sistemas de alto risco e a criação de
centros independentes para avaliar o impacto da IA nos sistemas jurídicos nacionais.
Palavras-chave
Inteligência artificial, cibersegurança, aplicação da lei, regulamentação jurídica, relações
jurídicas.
How to cite this article
Klymov, Kostiantyn, Pron, Lesia, Orobets, Kostiantyn, Liashenko, Ruslana & Vasylenko, Lesia
(2026). The Algorithmic Rule of Law: Institutionalizing Accountability and Human Oversight in AI-
Driven Legal Systems. Janus.net, e-journal of international relations. Thematic Dossier - Rule of
Law, Human Rights, and Institutional Transformation in Times of Global and National Challenges,
VOL. 16, Nº. 2, TD3, March 2026, pp. 191-209. https://doi.org/10.26619/1647-7251.DT0226.10
Article submitted on 01 December 2025 and accepted for publication on 14 January
2026.
THE ALGORITHMIC RULE OF LAW: INSTITUTIONALIZING
ACCOUNTABILITY AND HUMAN OVERSIGHT IN AI-DRIVEN LEGAL
SYSTEMS
KOSTIANTYN KLYMOV
LESIA PRON
KOSTIANTYN OROBETS
RUSLANA LIASHENKO
LESIA VASYLENKO
Introduction
The relevance of the study is due to the exponential growth of the significance of artificial
intelligence (AI) in the functioning of the legal system, which determines the
transformation of established concepts of subjectivity, legal responsibility and
mechanisms for the implementation of law. Machine learning technologies, automated
decision-making and processing of large data sets are increasingly being implemented in
the field of justice, public administration, forensics and contractual legal relations, which
actualizes the need for conceptual rethinking of the fundamental categories of legal
science (Rafanelli, 2022). At the same time, the legal doctrine of the vast majority of
states demonstrates insufficient readiness for the systemic incorporation of such
technologies: current regulatory legal acts do not regulate the legal responsibility of
autonomous systems, and the concept of electronic legal personality continues to remain
a subject of scientific discussion. The research problem consists in determining the
optimal balance between interpreting AI as a tool that assists a person in law-making and
law-enforcement activities and the potential possibility of granting it a limited status
of a subject of legal relations within the framework of delegated responsibility.
Literature Review
In the scientific space, the issue of the legal nature of AI occupies a priority place in the
context of the digital transformation of legal systems. Çami and Skënderi (2023), Getman
et al. (2022), and Orobets et al. (2025) argue for the need for proactive
legal regulation of innovative technologies in relation to the pace of technological
progress, since it is the legal system that acts as a guarantor of preserving the
fundamental principles of justice, protection of human rights and the rule of law.
Scientists emphasize that at the present stage, the dominant part of the world's legal
systems treats AI as a tool, that is, an object of legal relations used by a person to
achieve certain goals, and not as a subject of law. At the same time, the issue of
distributing legal consequences for the damage caused by the actions of the algorithm
remains uncertain.
Scientific research notes the fragmentation and lack of systematicity of regulatory and
legal support in the field of AI. In a number of works, Rafanelli (2022) and Tavolzhanskyi
et al. (2025) discuss a risk-based regulatory methodology, classifying AI systems by
the level of potential danger to the legal sector. At the same time, the question of the
sufficiency of such a risk-based approach to resolve conflicts between algorithmic
decisions and fundamental human rights remains debatable. In particular, in cases where
an autonomous system makes decisions with legal consequences (judicial,
administrative, financial), control and appeal mechanisms remain insufficiently
developed.
Researchers Beruashvili (2025), Ghannadi (2025), Petrovskyi et al. (2025) draw
attention to the problem of legislation lagging behind technological development, which
is typical for most countries. They state that the legal system traditionally operates
according to a reactive model, responding to already formed phenomena, while the
development of AI requires preventive, adaptive and dynamic regulation. Scientists
justify the need to create a flexible legal architecture that would ensure the updating of
norms without a radical change in legislation, but through subordinate regulatory acts,
standards and ethical codes. However, the lack of a coordinated approach to the
application of these regulatory documents in the context of ethical standards and legal
practice is problematic.
A great deal of attention in the studies of Masoudi and Yarahmadi (2024), Poorhashemi
(2024), Sarra (2025) is paid to the issue of legal personality of AI, which acquires both
theoretical and practical significance. Scientists are discussing the possibility of
recognizing an autonomous system as a bearer of rights and obligations, that is, granting
it the status of an “electronic person”. However, the fundamental question of the criterion
of legal personality remains unresolved: the presence of will, consciousness and the
ability to act with intention. Algorithms, even the most autonomous, do not have an
internal intention, therefore, their subjectivity can only be fictitious, that is, constitute a
legal construct necessary for the distribution of responsibility, and not for the recognition
of an independent legal status.
Moretti and Zuffo (2025) and Zhaltyrbayeva et al. (2025) indicate the possibility
of considering AI as a new form of delegated responsibility, which involves expanding the
concept of agency, i.e. treating AI as a legal instrument acting on behalf of the subject.
The question of the legal distinction between “algorithmic error” and “offense” remains
unclear, as well as the possibility of applying the norms of tort or criminal law to the
actions of autonomous systems.
The IBA report (2024) and Gilani et al. (2023) point to a trend towards an
interdisciplinary analysis of the legal status of AI. Research indicates that it is impossible
to isolate legal systems from ethical and technological contexts. A model of “shared
responsibility” is proposed, according to which the state, developers, users and
independent supervisory bodies jointly ensure that algorithms comply with legal and
moral standards. However, the model itself remains at the conceptual level, since there
are no mechanisms for its implementation, especially within national legal systems,
where there is a lack of independent structures for verifying algorithmic decisions.
To summarize, the reviewed research confirms the relevance of the issue and recognizes the
integration of AI into the legal system as an objective process, but it also exposes the
dichotomy in the status of AI in its interaction with law and national legal regulation.
The aim of the article is to substantiate the formats of interaction between AI and the
legal system in order to determine the principles of its transition to the status of a
potential subject of legal relations.
Research Methodology
The methodological foundation of the study is based on an integrative approach that
synthesizes general scientific, special legal and comparative methods for a
comprehensive analysis of the phenomenon of AI integration into the legal system. The
leading methodological principle is a systemic approach, based on the interpretation of
AI as a component of a digital legal ecosystem in which a person, an algorithm, the state
and legal institutions function and interact.
The dialectical method was applied, which made it possible to identify the evolutionary
dynamics of the legal status of AI from the object of technical regulation to the potential
subject of delegated legal relations. The comparative legal method was used to study
international regulatory acts: European Parliament & Council of the European Union
(2024), OECD (2019), UNESCO (2021), European Commission (2022). This made it possible to
highlight the differences between regulatory models.
The formal-dogmatic method provided an analysis of the categorical apparatus of “legal
personality”, “legal responsibility”, “autonomy”, which function in scientific discourse and
legislative practice to determine the legal status of AI. The content analysis method was
implemented during the processing of scientific publications and international documents,
in particular reports of the European Parliament (Mayer & Boni, 2017) and the Council of
Europe (Committee of Ministers, 2020). Additionally, the study applied a predictive
method to model scenarios of the evolution of legal regulation of AI in Ukraine, taking
into account global trends in the field of digital law. The overall methodological
configuration made it possible not only to analyze the state of the regulatory framework
but also to formulate the authors' scientific and practical recommendations.
Results
Part 1. Algorithms and practices of combining and interacting AI, law
and legal relations
On the threshold of the third decade of the 21st century, humanity found itself in a state
of profound transformation of legal thinking, caused by the expansion of AI into the
sphere of public administration, communications, economic interaction and judicial
process. Algorithmization, which was initially considered only as a technological tool for
optimizing routine operations, has turned into a complex system that forms new types
of legal relations, modifies the traditional categories of the subject and object of law,
changes the structure of legal responsibility and the principles of the rule of law. Modern
law, reacting to the emergence of intellectual systems, is forced to expand its ontology,
recognizing that AI algorithms not only implement human intentions, but also
independently produce decisions that affect legal reality and social justice (Getman et al.,
2023).
The problem of combining artificial intelligence and law is inevitably related to the
renewal of fundamental legal principles. Thus, the principle of the rule of law in its
classical sense, as enshrined in CM/Rec(2020)1, states that any decision that affects
human rights should be taken by a legitimate and accountable authority. In the context of
AI, this principle takes on a new form, the algorithmic rule of law: the requirement that
algorithms operate under supervision, comply with the principle of transparency, and can
be checked for bias or discrimination (Committee of Ministers, 2020).
According to the European Parliament & Council of the European Union (2016), citizens
of the European Union have the right not to be subject to solely automated decision-
making that has legal consequences (Article 22). This regulatory provision is one of the
first regulatory recognitions of algorithmic autonomy as a potential source of human
rights violations, and therefore requires the creation of human-in-the-loop mechanisms,
namely human control over each critical stage of the functioning of AI. The authors also
point to the Recommendation on the Ethics of Artificial Intelligence (UNESCO, 2021),
which establishes four key principles for the ethical interaction between law and AI:
1) promoting human well-being;
2) ensuring transparency of algorithms;
3) guarantee of justice;
4) developer accountability.
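The human-in-the-loop safeguard described above (decisions with legal consequences may not be finalized by an algorithm alone) can be illustrated with a minimal sketch. All names and the routing logic here are the authors' illustrative assumptions, not provisions of the GDPR or any other instrument:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    has_legal_effect: bool  # triggers an Art. 22-style safeguard
    model_confidence: float

def route_decision(decision: Decision, human_review_queue: list) -> str:
    """Route an automated decision.

    A decision with legal consequences is never finalized automatically:
    it is queued for a human reviewer (the human-in-the-loop control).
    Purely advisory outputs may be released without review.
    """
    if decision.has_legal_effect:
        human_review_queue.append(decision)
        return "pending_human_review"
    return "auto_finalized"

queue: list = []
status = route_decision(
    Decision("case-001", "benefit_denied",
             has_legal_effect=True, model_confidence=0.97),
    queue)
print(status, len(queue))  # pending_human_review 1
```

The point of the sketch is only that the legal effect of the output, not the confidence of the model, determines whether human control is mandatory.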
Thus, AI ceases to be just a technical phenomenon and becomes a legal one that creates
obligations, rights and legal consequences. Algorithmic norms do not replace legal ones,
but form a new level of legal practice through operational normativity, in which the legal
requirement is implemented not through a declaration, but through the
structuring of data and behavioral models in the digital space. Practical models of
interaction between law and AI are already enshrined in a number of international
documents. For example, the OECD Principles on Artificial Intelligence set out five basic
guidelines: inclusive growth, safety and fairness of systems, transparency and
explainability, accountability, and sustainability. These principles serve as a global
ethical framework for countries developing their own AI legislation and outline a clear
approach according to which algorithmic activity should be not only effective but also
socially acceptable (OECD, 2019).
Table 1. Critical points of interaction between AI and law

1. Algorithmic rule of law. Characteristics: algorithms must operate under oversight,
transparency, and bias checks. Legal document: CM/Rec(2020)1. Implications: ensures
protection of human rights from automated decisions. Recommendation: implement regular AI
audits in government agencies.

2. Protection against automated decisions. Characteristics: the right not to be subject
solely to algorithmic decisions with legal consequences. Legal document: GDPR (Art. 22).
Implications: human-in-the-loop control to avoid discrimination. Recommendation: develop
appeal mechanisms against AI decisions.

3. Ethical principles of AI. Characteristics: promoting well-being, transparency, fairness
and accountability. Legal document: UNESCO. Implications: forms operational norms where AI
becomes part of legal practice. Recommendation: integrate ethical codes into the
development of AI systems.

4. Assessment of high-risk systems. Characteristics: ethical risk screening, discrimination
testing and impact analysis. Legal document: AI Act (EU). Implications: shifts the paradigm
to the interaction of law and technology, with a focus on responsibility. Recommendation:
create national AI impact assessment centers.

Source: compiled by the authors based on Committee of Ministers (2020), European Parliament
& Council of the European Union (2016; 2024), UNESCO (2021)
In the format of legal relations, this means that the traditional paradigm of “law
regulates technology” is changing to one of “law interacts with technology”. AI-based
technologies are emerging in the law enforcement system: legal analytics systems and
algorithms that can predict court decisions, classify precedents, and analyze legal risks.
These processes have formed a completely new class of legal relations, algorithmic trust
relations, in which the state delegates part of its legal powers to the AI system but
retains responsibility for the consequences of its actions. Special legal mechanisms are
being developed for such situations: algorithmic audit, impact assessment, and
compliance-by-design. Within the
framework of the AI Act, each high-risk system must undergo a compliance assessment,
which includes ethical risk assessment, discrimination testing, and social impact analysis
(Popa & Pascariu, 2024). This means that the algorithm is included in the legal cycle as
a “regulated sub-process”, not as an independent actor, but as a structural part of the
legal decision (Table 1).
However, the integration of algorithms into the legal system cannot take place without
updating the institution of legal liability. International practice is gradually moving from
the principle of “fault” to the principle of foreseeable risk, which implies that the operator
or developer of an AI system is obliged to foresee the potential consequences of its
activities and is liable even for indirect errors. This approach is reflected in the EU's
proposed Artificial Intelligence Liability Directive, which would introduce a simplified
presumption of liability for suppliers of high-risk systems (European Commission, 2022;
Bertolini, 2025). In theoretical terms, this means that law becomes a self-learning
system, and a legal norm becomes a separate dynamic code that is constantly updated
under the influence of information flows.
Part 2. Artificial Intelligence as a Subject of Law: Comparative Analysis
and Prospects
The issue of the legal personality of AI is one of the most relevant in the legal theory of
the 21st century. It reveals the limits of the anthropocentrism of law and raises the
question of whether a human-created intellectual entity can be not only a tool, but also
an independent participant in legal relations. In the article by Guitton et al. (2025), it is
argued that modern discussions about the legal personality of AI go beyond the
theoretical plane and are included in the political agenda of many states, primarily the
EU and North America. These facts indicate a tendency towards a global legal revision of
the concept of a legal subject, which was traditionally limited to humans and the legal
entities they create.
Legal personality in the classical sense includes legal capacity, capacity to act and tortious
capacity. It is based on the ability of a subject to be aware of their own actions, to have
will, interests and moral responsibility. Traditional law is based on anthropocentric logic,
which was formed on the basis of human characteristics: emotionality, intentions, ability
to understand consequences. That is why the main argument against recognizing AI as
a subject of law is the “absence of something”: consciousness, intentions, feelings.
However, this does not exclude the possibility of legal recognition of AI based on analogy
with the legal personality of legal entities or animals. The idea of a “fictitious subject”
has long been used in law: a company or a state are considered persons in the legal
sense, although they have neither a body nor consciousness (Barichella, 2023). This kind
of formal mechanism has created the possibility of assigning rights and obligations to
collective or non-physical entities, which opens the prospect of its application to
autonomous artificial intelligence systems (Makedon et al., 2024; Bernaziuk, 2025).
The report of the European Parliament's Committee on Legal Affairs, the “Draft Report on
Civil Law Rules on Robotics”, states that the development of autonomous robots raises the
question of creating a “new category of electronic persons” to grant legal status to the
most complex systems. Although this document is not binding, it has set the
framework for further discussions in the EU, in particular within the framework of
directives on the ethics of artificial intelligence and training (Mayer & Boni, 2017).
Hallevy (2010) developed a model of criminal liability for AI similar to the liability of
legal entities. The EU Regulation on Artificial Intelligence (AI Act), approved by the
European Parliament in 2024, enshrines the principle of “human accountability” and
establishes a clear distinction: AI is an object of regulation, but not a subject of law.
However, the law recognizes a high level of autonomy in decision-making for certain
autonomous systems, which potentially creates the basis for a gradual conceptual
evolution towards legal subjectivity (European Parliament & Council of the European
Union, 2024).
The US Department of Defense's Law of War Manual emphasizes that the law of war applies
only to individuals, not to weapons, even if they have the ability to
make “legally significant decisions,” such as selecting a target. This suggests that at the
level of international humanitarian law, recognizing AI as a subject is currently
impossible. At the same time, the document does not exclude the future evolution of
interpretation if technologies reach the level of autonomous moral judgment (Office of
General Counsel, 2015).
In 2017, Saudi Arabia granted symbolic citizenship to the robotic system Sophia, marking
the first time a machine had been legally recognized as a subject. While the move was
largely a publicity stunt, it demonstrates the potential of soft law to legitimize new forms
of subjectivity. In China, where the Artificial Intelligence Industry Development Plan
(2017) and a set of norms within the Cybersecurity Law and the Personal Information
Protection Law apply, the state recognizes AI as an object of the developer's administrative
responsibility, but not as an autonomous participant in law (Webster et al., 2017). At the
same time,
the concept of a “responsible algorithm” in Chinese doctrine is increasingly seen as a
potential form of limited legal personality (Hakimi et al., 2025).
International law does not contain a universal category of “electronic person”; however,
discussions are underway within the UN to create a Global Digital Compact, a separate
document that could regulate the interaction of humans and artificial intelligence in the
legal field. Thus, as soon as society recognizes the feasibility of granting AI rights and
responsibilities, this will become legally possible. An important precedent is also
CM/Rec(2020)1 on the human rights impacts of algorithmic systems, which defines the
principle of “shared accountability”: the practice of collective responsibility of
developers, users and the state for the behavior of algorithms (Committee of Ministers,
2020).
The authors propose to give legal recognition to AI as an entity through a series of the
following levels:
1) Functional subjectivity: consolidation of limited legal capacity in the field of civil
rights (conclusion of contracts, copyright in the results of AI's creative activity).
Example: in 2023, the US Copyright Office confirmed that works created by AI without human
input are not protected, although the law allows for “co-authorship”, which in effect
recognizes the agent role of the algorithm;
2) Autonomous legal personality: a hypothetical model in which AI can be a party to a
contract or bear civil liability (similar to corporations). This would require the creation
of a new category in national codes, possibly in the form of an electronic person;
3) Moral and legal subjectivity, as the highest level, is possible only if AI is endowed
with elements of self-awareness or social ethics. At this stage, issues of “digital rights”
of AI may arise, for example, a prohibition on its unjustified destruction or modification.
However, there are serious risks: first, the problem of liability, i.e. who will bear
punishment in case of unlawful actions of AI; second, the threat of undermining the
principle of human control, enshrined in many international norms; third, the question of
moral equality between human and machine, which could change the value foundations of
law. Comparative analysis has shown that modern world law is gradually moving from
categorical denial to cautious functional recognition of artificial intelligence as a
participant in legal relations. Mechanisms similar to the legal personality of legal entities
already create a legal basis for granting AI a limited status (Makedon et al., 2025).
However, the lack of moral awareness and autonomous will makes AI an object of
regulation rather than a subject in the classical sense. The future of the legal personality
of AI depends on the development of cognitive technologies, public consent and political
will of states.
Part 3. Ways of interaction, ensuring partnership and tolerance between
AI and the country's legal system
AI is not the enemy of law, but rather its test, its tool and its co-creator. Today there is
a shift from the “law against technology” model to the “law in cooperation with
technology” paradigm, in which tolerance does not mean passive consent to the existence
of digital autonomy, but a conscious recognition of its usefulness within a clearly defined
regulatory framework (Sasko et al., 2025).
The question of partnership between artificial intelligence and law is inextricably linked
to understanding the very nature of law as a living, self-regulating system. Misch et al.
(2025) indicate that the digital environment transforms law from a set of textual norms
into a system of dynamic codes, in which algorithms become tools of legal practice and
not merely objects of regulation. In this sense, law and AI enter into a relationship of
functional symmetry: the former sets ethical boundaries, the latter ensures the
effectiveness of their implementation. According to the AI Act (Regulation (EU)
2024/1689), algorithmic systems may be integrated into judicial proceedings,
management or the provision of administrative services, provided that the principles of
transparency, accountability, explainability and human control (human-in-the-loop) are
observed. A deep legal idea is embedded here: AI should not replace the person as a
moral subject, but should support human rationality, ensuring equality, impartiality and
speed of legal processes (European Parliament & Council of the European Union, 2024).
Tolerance between AI and the legal system lies in the mutual recognition of boundaries:
law recognizes the technical autonomy of the algorithm as a source of efficiency, and AI
recognizes the rule of law as the basis for the legitimacy of its actions. On this basis the
concept of cognitive partnership arises: a model of cooperation between the human
mind, which determines values, and machine intelligence, which ensures their
implementation through analytical procedures (Organization for Economic Co-operation
and Development, 2025).
International legal documents adopted over the past five years demonstrate that leading
states and organizations are not limiting themselves to declarations on the safety of AI,
but are creating the basis for its institutional inclusion in the legal ecosystem. Thus, the
Recommendation on the Ethics of Artificial Intelligence (UNESCO, 2021) declares that
the purpose of regulation is to ensure the human-centered development of technologies,
in which the autonomy of systems does not contradict human dignity and freedom. The
document lays the foundations for tolerant interaction not through control or restriction,
but through trust, co-responsibility and adherence to ethical norms.
The OECD (2019) identifies five basic directions of development: (1) inclusive growth;
(2) sustainable development; (3) transparency; (4) accountability; (5) orientation
toward human well-being. This vision forms the model of a “partnership algorithm”: a
system that does not replace the law, but strengthens its ability to ensure justice (OECD,
2019). At the same time, Recommendation CM/Rec(2020)1 introduces the concept of
“algorithmic accountability”, according to which the state must provide mechanisms for
verifying and appealing decisions made on the basis of artificial intelligence (Committee
of Ministers, 2020). A dual system of protection is thus created: on the one hand,
technological barriers that prevent abuse; on the other, legal instruments that guarantee
access to justice. From these documents a general trend emerges: the interaction of AI
and law should evolve from strict regulation to a partnership based on trust,
predictability, and humanistic control.
In the context of the interaction of law and artificial intelligence, the concept of
“tolerance” takes on a meaning different from the traditional understanding of
interpersonal tolerance. The authors here mean institutional tolerance: the ability of the
legal system to adapt to new forms of rationality and to recognize the existence of
another, non-human subjectivity that operates according to the rules of calculation
rather than intuition (Meyers, 2025).
Tolerance does not mean compliance, but rather the intellectual maturity of the law,
which allows it to accept algorithms as partners while maintaining moral guidelines. For
example, when introducing predictive-justice systems in some EU countries, the state
does not abandon the principle of judicial independence, but uses analytical models to
reduce subjectivity in the interpretation of norms. This is a manifestation not of the
subordination of law to the machine, but of tolerant integration, in which the human
factor corrects algorithmic logic.
In such a context it is appropriate to mention Article 22 of the General Data Protection
Regulation (GDPR), which guarantees a person the right not to be subject to a decision
based solely on automated processing that produces legal effects (European Parliament
& Council of the European Union, 2016). Such a norm is a manifestation of institutional
tolerance: it recognizes the potential of AI, but at the same time ensures the possibility
of appeal, supervision and human judgment. The law does not reject the algorithm; it
coexists with it on the basis of control and respect for dignity (Adams Bhatti, 2025). The
authors believe that the interaction between law and artificial intelligence should occur
not in the plane of subordination, but in the form of cooperative regulation, in which
technology and law perform complementary functions. Such a model can be defined as
integrative-tolerant and comprises three levels:
1. Regulatory level: building a legal framework that recognizes the autonomy of AI but
sets limits on its use. This means adapting legislation (for example, within the AI Act or
the GDPR) to new forms of decision-making based on machine learning, with guaranteed
human oversight.
2. Ethical level: development of codes of algorithmic conduct that establish moral
principles for developers and users. This approach proposes to expand the concept of
the professional responsibility of lawyers and programmers, turning it into a form of
shared ethical accountability.
3. Institutional level: creation of state and supranational structures that ensure the
monitoring and auditing of algorithms. This includes the formation of centers for
assessing the impact of AI on human rights (AI Impact Assessment Centers), which carry
out independent examination of the social risks of technologies (Figure 1).
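The human-in-the-loop safeguard required at the regulatory level can be sketched in code. The class and function names below are our own illustration, not terms drawn from the AI Act or the GDPR; the sketch merely mirrors the Article 22 logic that a solely automated decision with legal effects must be routed to a human.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmicDecision:
    subject_id: str
    outcome: str
    solely_automated: bool   # no meaningful human involvement in the decision
    legal_effect: bool       # produces legal or similarly significant effects

def requires_human_review(d: AlgorithmicDecision) -> bool:
    """Article 22-style gate: escalate when a decision is both solely
    automated and legally significant for the person concerned."""
    return d.solely_automated and d.legal_effect

# A benefit denial reached without human involvement must be escalated
decision = AlgorithmicDecision("case-001", "benefit denied", True, True)
print(requires_human_review(decision))  # True -> route to a human officer
```

The point of the sketch is that the oversight rule is a property of the decision, not of the algorithm: any system, however accurate, triggers review once its output crosses the legal-effect threshold.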
The authors propose that every country integrating AI into its legal system develop a
National Charter on Artificial Intelligence and the Law, which would include: partnership
principles (human, algorithm, state); standards of ethical interaction; criteria for
assessing algorithmic fairness; and procedures for legal liability for the actions of
autonomous systems.
To truly establish a partnership between AI and the legal system, it is necessary to
implement a number of practical steps that will ensure a balance between innovation and
legitimacy. The authors identify several key areas:
1. Algorithmic rule of law. Mechanisms should be established to verify algorithms that
perform legal actions (for example, in the field of e-justice, customs control or public
services). This would include independent certification of software, the creation of public
registers of high-risk algorithms and the introduction of a mandatory “ethical license” for
systems that affect citizens’ rights.
2. Human-centric justice. The justice of the future must combine the analytical
capabilities of AI with human moral judgment. The use of algorithms to analyze evidence
or predict decisions is possible only if the model is fully transparent. The judge must see
not only the algorithm’s conclusion but also its logic, which realizes the “right to
explanation” enshrined in the GDPR.
3. Education and legal culture in the algorithmic era. The training of lawyers should
include disciplines dedicated to digital law, the ethics of technologies and the regulation
of AI. This will make it possible to form a generation of lawyers able not only to interpret
the law, but also to evaluate algorithms as social norms.
4. International cooperation. No country is capable of independently creating a universal
model of partnership with AI; participation in intergovernmental structures is therefore
necessary, for example in the Global Partnership on Artificial Intelligence (GPAI), hosted
by the OECD. Such cooperation will make it possible to harmonize standards, share audit
results, and create a single legal infrastructure for accountability (Office of General
Counsel, 2015; OECD, 2025).
Figure 1. Stages of convergence and interaction between artificial intelligence and the
legal system of the state
Source: developed by the authors
Tolerance and partnership between law and artificial intelligence are not a short-term
response to a technological challenge, but a new vector in the evolution of legal
civilization. In this process, legislators would do well to stop treating technology as a
threat and start perceiving it as a co-creator of the rule of law, capable of ensuring the
accuracy, efficiency, and objectivity of law enforcement (Table 2).
The stages shown in Figure 1 comprise:
II. Selection of areas and institutions for the analysis of AI practices in lawmaking,
judicial proceedings, public administration, and adherence to the principle of the rule of
law.
III. Collection and accumulation of information on legal acts, regulatory models, codes
of ethics, and international documents (AI Act, GDPR, OECD Principles, UNESCO
Recommendation) to assess the level of legal tolerance towards AI.
IV. Assessment of the legal, ethical, and social aspects of the partnership between AI
and the country's legal system:
A) selection of indicators characterizing the legal, ethical, and institutional dimensions
of AI integration (legality, accountability, transparency, safety, ethical responsibility);
B) calculation of qualitative and quantitative indicators reflecting compliance with the
principles of algorithmic transparency, human oversight, fairness, and non-discrimination
in the law enforcement process;
C) determination of effective values of the indicators of legal and ethical interaction:
indicators of trust, risks of human rights violations, degree of human control, and
effectiveness of algorithmic management;
D) comprehensive assessment of the levels of partnership between AI and law
(normative, ethical, and institutional) as the basis for the formation of a national model
of cognitive legal order.
V. Overall assessment of the level of partnership and tolerance between AI and the legal
system and the formation of a generalized index of legal tolerance towards AI, which
reflects the balance between technological efficiency, legal legitimacy, and humanistic
values of the digital state.
Table 2. Directions for synchronization and development of interaction between AI and
the country's legal system

1. Algorithmic rule of law. Working principle: verification and certification of algorithms
performing legal actions, including registers of high-risk systems. Decisive advantage:
ensures the transparency and legitimacy of AI decisions. Challenge: the difficulty of
auditing complex models. Recommendation: implement a mandatory ethical license for
AI in public services.
2. Human-centered justice. Working principle: integration of AI into judicial proceedings
with mandatory human control and the right to explanation. Decisive advantage:
increased efficiency and reduced subjectivity. Challenge: risk of bias in the data.
Recommendation: develop standards for the “right to explanation” at the national level.
3. Education and legal culture. Working principle: training lawyers in digital law and AI
ethics. Decisive advantage: formation of competent specialists for the algorithmic era.
Challenge: insufficient educational infrastructure. Recommendation: integrate AI
disciplines into legal higher education institutions.
4. International cooperation. Working principle: participation in global initiatives such as
the Global Partnership on AI. Decisive advantage: harmonization of standards and
exchange of experience. Challenge: divergent national regulations. Recommendation:
join international AI ethics councils.
5. Ethical integration. Working principle: developing codes of conduct for AI developers
and users. Decisive advantage: strengthening moral principles in technology. Challenge:
defining universal ethical norms. Recommendation: create a national charter on AI and
rights.

Source: developed by the authors
The authors conclude that legal tolerance toward AI is not a relaxation of norms, but
their flexible expansion, taking into account the multiplicity of forms of mind and
intelligence. The partnership between a person and an algorithm will foster a legal
culture of co-responsibility, in which technology operates within the ethical code of law,
and law develops in the rhythm of technological progress.
Discussion
The results of the study are generally consistent with the modern scientific concepts of
Beruashvili (2025), Popa and Pascariu (2024), and Sarra (2025), which focus on the need
to implement AI in the legal sphere while adhering to the principles of human control,
accountability, transparency, and the ethical responsibility of developers. Our study
develops this concept, demonstrating that the incorporation of AI into the legal system
is not only a technological challenge, but also an ontological process within which law is
transformed from a textual institution into a cognitive system capable of self-learning
and predicting socio-legal risks.
At the same time, the concept of “institutional tolerance” proposed here partially
diverges from the methodological approaches of Moretti and Zuffo (2025) and Rafanelli
(2022), who interpret AI regulation mainly through the prism of ethical principles and
risk-based supervision. Our position is that ethical principles should be operationalized
through legal mechanisms: algorithmic audit, impact assessment, and a certification
system for high-risk systems, which concretizes the provisions of the OECD (2019) and
the Recommendation on the Ethics of Artificial Intelligence (UNESCO, 2021). In contrast
to most scholars, who limit themselves to declarative approaches to ethical regulation,
our model enables the formalization of responsibility through mechanisms of algorithmic
accountability and compliance-by-design, ensuring a transition from declarative
principles to practical implementation.
Regarding the issue of AI legal personality, the results obtained partially correlate with
the conclusions of Hallevy (2010), Barichella (2023) and Guitton et al. (2025), who
assume the existence of intermediate forms of subjectivity with a variable degree of
autonomy. In contrast to radical concepts of granting AI the status of an “electronic
person”, the authors propose the concept of functional legal personality, or limited legal
status, which makes it possible to determine the scope of legal powers and delegated
responsibilities without violating the principle of human accountability. Our position
corresponds to the norms of the European Parliament & Council of the European Union
(2024), which stipulate that only a human being can act as the ultimate bearer of legal
responsibility, while AI is treated as a regulated sub-process within the legal system.
Thus, our study not only agrees with the leading trends of scientific discourse, but also
expands the theoretical field, moving the interaction of AI and law from the plane of
declarative principles to the plane of institutionally established procedures and legal
responsibility.
Conclusion
The results of the research show that the algorithmization of legal processes has brought
about a new paradigm of legal regulation, in which artificial intelligence functions as an
integral component of the legal system without eliminating its anthropocentric nature.
It has been established that modern law is undergoing a gradual transformation from a
normative to an operational form of “algorithmic governance”, within which legal norms
are implemented through information arrays, models and digital procedures.
International regulatory acts were found to form a legal field in which algorithmic
decisions are considered a component of a regulated cycle of law-making activity. At the
same time, the problems of determining the boundaries of responsibility for the
autonomous functioning of AI, the procedural audit of algorithmic systems, and the legal
legitimacy of operational decisions made by systems without human participation remain
insufficiently resolved.
The results of the comparative analysis show that modern law demonstrates an
evolutionary trajectory from categorical denial to limited functional recognition of
artificial intelligence as a participant in legal relations. The vast majority of states were
found to adhere to the concept of “human accountability” while creating space for legal
experiments, in particular concerning the institution of the “electronic person”. Analysis
of national legislative initiatives confirmed that the functional legal personality of AI is
implemented within the delegated responsibility of developers and users. The moral and
legal status of AI, the parameters of its autonomy in decision-making, and the lack of a
unified international model of legal personality capable of determining the optimal ratio
between technical independence and legal accountability remain debatable.
The study demonstrated that effective interaction between the legal system and artificial
intelligence is possible only through a partnership grounded in the principles of trust,
transparency, explainability and human control. An integrative-tolerant model is
proposed, covering three levels: normative (adaptation of legislation to algorithmic
processes), ethical (codes of responsible development) and institutional (audit and
monitoring of AI decisions). The case is made for creating a National Charter of Artificial
Intelligence and Law that would regulate the standards of ethical interaction and legal
liability. The conclusion is that legal tolerance toward AI does not imply a weakening of
control mechanisms, but a flexible expansion of the legal space that recognizes
technological rationality as an element of a new humanistic digital legal order.
References
Adams Bhatti, S. (2025). AI in our justice system: A rights-based framework. JUSTICE.
https://www.justice.org.uk/reports/ai-in-our-justice-system
Barichella, A. (2023). Regulating artificial intelligence at the EU level: Obstacles and
prospects. Jacques Delors Institute.
https://institutdelors.eu/content/uploads/2025/04/PP294_Regulation_IA_Barichella_EN
.pdf
Bernaziuk, I. (2025). Artificial Intelligence and the judicial system of Ukraine: Results of
cooperation in the past year. Supreme Court of Ukraine.
https://court.gov.ua/eng/supreme/pres-centr/news/1891488
Bertolini, A. (2025). Artificial Intelligence and Civil Liability. European Parliament.
https://www.europarl.europa.eu/thinktank/en/document/IUST_STU(2025)776426
Beruashvili, M. (2025). The capabilities and challenges of artificial intelligence in the
justice system. Law and World, 35, 148–159. https://doi.org/10.36475/11.3.10
Çami, L., & Skënderi, X. (2023). The impact of AI on determining the applicable law in
cross-border disputes under the Rome II Regulation. Global Journal of Politics and Law
Research, 11(3), 1–10. https://doi.org/10.37745/gjplr.2013/vol11n3110
Committee of Ministers. (2020). Recommendation of the Committee of Ministers to
member States on the human rights impacts of algorithmic systems.
https://search.coe.int/cm?i=09000016809e1154
European Commission. (2022). Proposal for a Directive of the European Parliament and
of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI
Liability Directive). European Union. https://eur-lex.europa.eu/legal-
content/EN/TXT/?uri=CELEX%3A52022PC0496
European Parliament & Council of the European Union. (2016). Regulation (EU) 2016/679
of the European Parliament and of the Council of 27 April 2016 on the protection of
natural persons with regard to the processing of personal data and on the free movement
of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
Official Journal of the European Union, L119. http://data.europa.eu/eli/reg/2016/679/oj
European Parliament & Council of the European Union. (2024). Regulation (EU)
2024/1689 of the European Parliament and of the Council laying down harmonised rules
on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013,
(EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives
2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Official
Journal of the European Union, L289, 184. https://eur-
lex.europa.eu/eli/reg/2024/1689/oj
Getman, A. P., Danilyan, O. G., Dzeban, A. P., & Kalynovskyi, Y. Yu. (2022). Modern
ontology: Reflection on the continuity of cyberspace and virtual reality. Revista de
Filosofía (Venezuela), 39(102), 78–94. https://doi.org/10.5281/zenodo.7017946
Getman, A. P., Yaroshenko, O. M., Shapoval, R. V., Prokopiev, R. Ye., & Demura, M. I.
(2023). The impact of artificial intelligence on legal decision-making. International
Comparative Jurisprudence, 9(2), 155–169. https://doi.org/10.13165/j.icj.2023.12.001
Ghannadi, A. R. (2025). Artificial intelligence and international law: Challenges and
opportunities. Legal Studies in Digital Age, 5(1), 1–15.
https://doi.org/10.61838/kman.lsda.207
Gilani, S. H., Rauf, N., & Zahoor, S. (2023). Artificial intelligence and the rule of law: A
critical appraisal of a developing sector. Pakistan Journal of Social Research, 5(2),
743–750. https://doi.org/10.52567/pjsr.v5i02.1156
Guitton, C., Druta, V., Hinterleitner, M., Tamò-Larrieux, A., & Mayer, S. (2025). Adoption
of artificial intelligence in the judiciary: A comparison of 28 advanced democracies.
Discover Artificial Intelligence, 5, 169. https://doi.org/10.1007/s44163-025-00311-y
Hakimi, M., Zarinkhail, S., Aslamzai, S., & Sahnosh, F. A. (2025). Artificial intelligence
and legal reform in developing countries: Advancing ethical, rights-based, and
accountable digital governance. Jurnal Ilmiah Telsinas Elektro, Sipil Dan Teknik
Informasi, 8(2), 127–144. https://doi.org/10.38043/telsinas.v8i2.6934
Hallevy, G. (2010). The criminal liability of artificial intelligence entities: From science
fiction to legal social control. SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.1564096
IBA. (2024). The future is now: Artificial Intelligence and the legal profession.
International Bar Association. https://www.ibanet.org/The-future-is-now-artificial-
intelligence-and-the-legal-profession
JuLIA Project. (2025). Artificial Intelligence, judicial decision-making and fundamental
rights. School for the Judiciary. https://ssm-italia.eu/wp-
content/uploads/2025/02/JuLIA_handbook-Justice_final.pdf
Makedon, V., Myachin, V., Kuriacha, N., Chaika, Yu., & Koptilyi, D. (2025). Development
of strategic management of a corporation through the implementation of scenario
analysis. Scientific Bulletin of Mukachevo State University. Series "Economics", 12(2),
135–146. https://doi.org/10.52566/msu-econ2.2025.135
Makedon, V., Trachova, D., Myronchuk, V., Opalchuk, R., & Davydenko, O. (2024). The
development and characteristics of sustainable finance. In A. Hamdan (Ed.), Achieving
sustainable business through AI, technology education and computer science (Studies in
Big Data, Vol. 163, pp. 373–382). Springer.
https://doi.org/10.1007/978-3-031-73632-2_31
Masoudi, R., & Yarahmadi, H. (2024). The role of artificial intelligence in the judicial
process. Legal Studies in Digital Age, 3(4), 195–206.
https://doi.org/10.61838/kman.lsda.3.4.18
Mayer, G., & Boni, M. (2017). Report with recommendations to the Commission on civil
law rules on robotics. European Parliament.
https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
Meyers, Z. (2025). Better regulation and the EU’s Artificial Intelligence Act.
Intereconomics: Review of European Economic Policy, 60(3), 149–153.
https://doi.org/10.2478/ie-2025-0029
Misch, F., Park, B., Pizzinelli, C., & Sher, G. (2025). Artificial Intelligence and productivity
in Europe. IMF Working Paper, (67). https://doi.org/10.5089/9798229006057.001
Moretti, J. L., & Zuffo, M. M. (2025). Artificial Intelligence in Law: Utilisation by Brazilian
legal practitioners and regulatory challenges. Beijing Law Review, 16(1), 331–352.
https://doi.org/10.4236/blr.2025.161016
OECD. (2019). Recommendation of the Council on Artificial Intelligence
(OECD/LEGAL/0449). Organisation for Economic Co-operation and Development.
https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
OECD. (2025). Governing with Artificial Intelligence: The State of Play and Way Forward
in Core Government Functions. Paris: OECD Publishing.
https://doi.org/10.1787/795de142-en.
Office of General Counsel. (2015). Department of Defense Law of War Manual. Office of
General Counsel, Department of War.
https://media.defense.gov/2023/Jul/31/2003271432/-1/-1/0/DOD-LAW-OF-WAR-
MANUAL-JUNE-2015-UPDATED-JULY-2023.PDF
Orobets, K., Shkolnikov, V., Batrachenko, T., Baranovska, T., & Sereda, V. (2025).
Legislative categorization of crimes committed with the help of cryptocurrencies.
Management (Montevideo), 3, 253. https://doi.org/10.62486/agma2025253
Petrovskyi, A., Kyrdan, B., & Kutsyk, K. (2025). Implementation of artificial intelligence
in civil proceedings: Experience of EU countries. Scientific Journal of the National
Academy of Internal Affairs, 30(1), 45–59.
https://doi.org/10.63341/naia-herald/1.2025.45
Poorhashemi, A. (Ed.). (2024). Artificial Intelligence and the future of International Law:
Bridging rights, trade, and arbitration. Cham: Springer. https://doi.org/10.1007/978-3-
031-73334-5
Popa, A., & Pascariu, L. (2024). Impact of the EU’s Artificial Intelligence Regulation on
workers. European Journal of Law and Public Administration, 11(2), 92–101.
https://doi.org/10.18662/eljpa/11.2/234
Rafanelli, L. M. (2022). Justice, injustice, and artificial intelligence: Lessons from political
theory and philosophy. Big Data & Society, 9(1).
https://doi.org/10.1177/20539517221080676
Sarra, C. (2025). Artificial Intelligence in decision-making: A test of consistency between
the “EU AI Act” and the “General Data Protection Regulation”. Athens Journal of Law,
11(1), 45–62. https://doi.org/10.30958/ajl.11-1-3
Sasko, O., Shvedova, H., Orobets, K., Ovcharenko, R., & Ostapenko, O. (2025). Criminal
offence during martial law in Ukraine: Peculiarities of qualification. Bangladesh Journal of
Multidisciplinary Scientific Research, 11(1), 13–22.
https://doi.org/10.46281/bjmsr.v11i1.2658
Tavolzhanskyi, O., Shumeiko, O., Burda, O., Orobets, K., & Struchaiev, M. (2025). Using
big data in criminal investigations: Between privacy and efficiency. Khazanah Hukum,
7(3), 312–324. https://doi.org/10.15575/kh.v7i3.45201
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO.
https://unesdoc.unesco.org/ark:/48223/pf0000380455
Webster, G., Creemers, R., Kania, E., & Triolo, P. (2017). China’s New Generation
Artificial Intelligence Development Plan. DigiChina.
https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-
intelligence-development-plan-2017/
Zhaltyrbayeva, R., Jangabulova, A., Suleimenova, S., Saimova, Sh., & Tlembayeva, Zh.
(2025). Legal challenges of regulating artificial intelligence in law enforcement, taking
into account the interdisciplinary approach to socio-legal transformations. Social & Legal
Studios, 8(2), 118–130. https://doi.org/10.32518/sals2.2025.118