Gonzalo Sanz Segovia | 22/01/2026
Spain leads the way in the ethical supervision of AI in Europe with the creation of AESIA, the Spanish Agency for the Supervision of Artificial Intelligence (Agencia Española de Supervisión de la Inteligencia Artificial). This body is charged with implementing EU regulations and ensuring a responsible, safe, and people-centered use of AI, promoting technological advancement and reinforcing social trust in the digital era.
On August 1, 2024, Regulation (EU) 2024/1689 on Artificial Intelligence (the AI Act) came into force, applicable throughout the European Union. This regulation ensures that AI systems are developed and used in a responsible, ethical, sustainable, and reliable manner, while protecting health, safety, and the rights enshrined in the EU Charter of Fundamental Rights.
In this context, Spain has created Europe’s first agency for AI monitoring, AESIA, headed by Ignasi Belda, a computer science engineer with doctorates in AI applied to life sciences and in science and technology law, and a specialist in managing AI-focused scientific organizations and companies.
What are AESIA’s functions?
Our main lines of work are awareness-raising and training in AI, so that nobody is left behind. We supervise compliance with European-level AI regulations in Spain and run the regulatory sandbox that helps SMEs, start-ups, and large companies meet the obligations established in the regulation, work we do in close collaboration with the General Directorate of AI. We’re also involved in creating an ideas laboratory, representative of society as a whole, where we anticipate the impacts of AI.
What rights does AESIA protect?
The European AI regulation states that its objective is to “improve the functioning of the internal market and promote the uptake of human-centric and trustworthy AI, while ensuring a high level of protection of health, safety, and the fundamental rights enshrined in the Charter, including democracy, the rule of law, and environmental protection, against the harmful effects of AI systems in the Union, as well as supporting innovation.”
AESIA’s statutes also include “the supervision of the implementation, use, and marketing of systems that include AI, particularly those that may pose significant risks to health, safety, equality of treatment and non-discrimination —particularly between women and men— and to other fundamental rights.”
In any case, in line with the European AI regulation, AESIA always advocates for human supervision of AI.
How does AESIA supervise compliance with AI regulations?
Our approach is simultaneously proactive and reactive. We conduct ongoing monitoring and surveillance of the market, using random checks to help us gauge overall compliance with the law. We also respond to third-party claims that arrive through the consultation mailbox, the European alert system, or other public entities.
The exception is general-purpose AI models, that is, LLMs such as Copilot, ChatGPT, and Gemini, which are supervised at the European level.
What are the main risks we’re facing?
The AI Act defines four levels of risk for AI systems: unacceptable risk (prohibited practices), high risk, limited risk, and minimal or no risk, in addition to the general-purpose AI models.
Most AI systems present limited or no risk and can help address many global challenges, such as the climate emergency, disinformation that threatens democracies, or the rapid increase in inequalities.