Huderia is a tool developed by the Committee on Artificial Intelligence (CAI) of the Council of Europe to provide guidance and a structured approach for conducting risk and impact assessments of AI systems.
It is a methodology designed to support the implementation of the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the first legally binding international treaty on the subject, adopted in May 2024 and opened for signature on 5 September 2024 in Vilnius.
According to a note from the Council of Europe, the methodology provides, among other things, for the creation of a risk mitigation plan to minimize or eliminate the identified risks, protecting the public from potential harm.
For example, if an AI system used for hiring is found to be
biased against certain demographic groups, the mitigation plan
could involve adjusting the algorithm, implementing human
oversight, and/or applying other appropriate and sufficient
governance measures.
The methodology, which can be used by both public and private actors, requires periodic reassessments to ensure that the AI system continues to operate safely and in a manner compatible with human rights obligations as the context and technology evolve.
This approach, officials in Strasbourg note, ensures that the public is protected from emerging risks throughout the lifecycle of the AI system.
The CAI adopted the Huderia methodology during its twelfth plenary meeting, held in Strasbourg from 26 to 28 November 2024.
In 2025, it will be complemented by the Huderia model, which
will provide a knowledge library containing supporting materials
and resources.