It is a privilege to address you today within the framework of this important initiative organized by the Global Academy for Future Governance. I would like to thank the organizers of this Global Advisory platform for bringing together distinguished participants from government, academia, international organizations, and industry – notably Pascal Lamy, Fernando Messias, Julian Kassum, and others – to reflect on the relationship between new technology and the judiciary, one of the most important questions facing contemporary legal systems.
We live in an era of profound technological transformation. Artificial intelligence, advanced digital tools, and data-driven systems are reshaping the way societies function, economies operate, and institutions govern. Inevitably, these developments are also reaching courts, arbitration mechanisms, and the broader legal order. The judiciary, as a central pillar of democratic governance and the rule of law, cannot remain untouched by these changes.
Allow me to add a brief personal reflection from my years serving on, and later presiding over, the Constitutional Court of Slovenia. Judicial work is ultimately about responsibility. Judges are entrusted with interpreting the constitution, protecting rights, and resolving conflicts in ways that affect individuals and society as a whole. Over time, I came to appreciate that legal systems must evolve with society—but they must do so carefully, preserving the principles that safeguard justice.
From the perspective of my own country, Slovenia—a relatively small but well-organized European state—we have learned that the strength of a judiciary does not depend on the size of a nation, but on transparency, professionalism, and institutional integrity. Slovenia has worked to maintain a judiciary that is independent and accessible, increasingly supported by modern digital administration. Yet even as technology assists legal work, the legitimacy of justice still depends on public trust and on the accountability of those who make decisions.
Artificial intelligence undoubtedly offers important opportunities. In litigation and arbitration, AI can assist with legal research, document review, case analysis, and even predictive assessments of legal outcomes. These tools may reduce costs, accelerate proceedings, and help courts and legal practitioners manage the growing complexity of modern legal disputes. In this sense, technology can strengthen efficiency and accessibility in the justice system.
However, alongside these benefits arise deeper questions that go beyond efficiency. One of the most important concerns relates to social responsibility. Technology, including artificial intelligence, is not neutral in its societal impact. The development and deployment of AI systems are shaped by economic interests, institutional priorities, and political choices. If left entirely to market forces or technological enthusiasm, there is a risk that AI may serve primarily the interests of those who already possess technological or financial power.
Therefore, we must ask a fundamental question: Does artificial intelligence serve the common good, or does it reinforce exclusivity?
The judiciary, by its very nature, exists to protect equality before the law. If AI systems used in legal practice become accessible only to powerful actors—large corporations, highly resourced law firms, or technologically advanced jurisdictions—then the balance of justice may be altered. Equal access to justice requires that technological innovation not create new forms of legal inequality.
Closely related to this issue is the societal perception of artificial intelligence. For the legal system to maintain legitimacy, society must believe that technological tools support fairness rather than undermine it. If citizens begin to perceive that algorithms influence legal outcomes in opaque or unaccountable ways, trust in judicial institutions could erode. Justice must not only be done; it must also be seen and understood to be done.
Another important dimension concerns authenticity and credibility in legal proceedings. The emergence of advanced AI systems—including generative technologies capable of producing realistic texts, documents, or audiovisual materials—creates new challenges for courts and litigators. Questions arise regarding the authenticity of evidence, the reliability of digital materials, and the potential misuse of synthetic or manipulated information within legal disputes.
These developments place new responsibilities on judges, lawyers, and legal institutions. Courts must develop clear standards regarding the admissibility, verification, and evaluation of AI-generated or AI-assisted evidence. Without such safeguards, the credibility of legal processes may be undermined.
Finally, there is the complex issue of liability. When artificial intelligence is used in legal practice—whether for research, analysis, or decision-support—who ultimately bears responsibility for errors or misuse? Is it the developer of the technology, the legal professional who relies upon it, or the institution that adopts it? These are not merely technical questions; they are fundamental legal questions about accountability and responsibility in a digital age.
The answer, in my view, must remain consistent with the basic principles of law: responsibility cannot be delegated to machines. Artificial intelligence may assist human decision-makers, but it cannot replace human accountability. The ultimate responsibility for legal decisions must remain with identifiable human actors—judges, lawyers, and institutions—who are accountable under the law.
Events such as this – convened under the auspices of the Global Advisory of the Global Academy for Future Governance (GAFG) by Dr. Philipe Reinisch and Prof. Anis H. Bajrektarevic, together with their consortium of international partners, including IFIMES – play an important role in addressing these questions. By bringing together policymakers, legal professionals, scholars, and representatives of international institutions, we create the dialogue necessary to ensure that technological progress strengthens rather than weakens the rule of law.
In the end, artificial intelligence will undoubtedly transform many aspects of litigation and international arbitration. But the essence of justice must remain human-centered. Courts derive their authority not from technology, but from the trust that society places in fair, transparent, and accountable institutions.
Our responsibility, therefore, is not only to innovate, but also to ensure that innovation serves justice, equality, and the common good.
(The author, Justice Ernest Petric, served as President of the Constitutional Court of Slovenia (2010-13) and is a diplomat, author, former IAEA Governor, and serving member of the European Academy of Sciences and Arts.)