MIT classifies 700 risks linked to AI, first database
Greater concerns for security and socio-economic damage
(ANSA) - ROME, AUG 28 - More than 700 risks linked to
Artificial Intelligence (AI) have been classified by a group of
researchers at the Massachusetts Institute of Technology (MIT).
The database, the AI Risk Repository, provides a publicly
accessible overview, open to updates, of the risks arising from
the development and use of the new technology. The project was
created because, the experts observe, "the lack of a shared
understanding of the risks of AI may hamper our ability to
discuss, research and respond to them in a comprehensive way".
This is the first attempt to gather and analyse the risks linked
to AI by extracting them from reports, newspapers and other
documents and turning the database into a common frame of
reference. The risks are classified according to their cause,
their intentionality and the moment at which they manifest.
According to the researchers, the risks are attributable in the
majority of cases (51%) to AI systems themselves rather than to
human decisions or actions (34%).
Furthermore, risks are more likely to emerge once the AI has
been trained and deployed (65%) than during its development
(10%).
The identified risks have been grouped into seven categories,
with the greatest concerns regarding security, socio-economic
and environmental harm, and discrimination and toxicity. Other
risks cited in MIT's database concern privacy and security;
malicious actors and misuse; human-machine interaction; and
disinformation. (ANSA).