
Draft code on AI models biased in favor of Big Tech - EP

AI Act rapporteurs 'concerned, legislator's intent overturned'


ANSA Editorial Staff

(ANSA) - ROME, MAR 26 - A group of MEPs involved in the negotiations on the European law on artificial intelligence, including AI Act rapporteur Brando Benifei, have written to European Commission Vice President Henna Virkkunen to express "grave concern" about the code of practice on general-purpose AI (such as OpenAI's GPT-4), which is expected to be finalized in May.
    The code is meant to detail how the AI Act's rules apply to providers of general-purpose AI models, in particular those posing systemic risks.
    The first drafts of the code, the MEPs say, water down the AI Act's rules on general-purpose AI models with systemic risk to the point of "completely reinterpreting and narrowing a legal text that the co-legislators agreed, through a code of practice". They call this "a dangerous, undemocratic move, creating legal uncertainty" and criticise a text they see as too heavily tilted in favour of Big Tech.
    Under the AI Act, providers of models with systemic risks are required to assess and mitigate risks, report serious incidents, conduct state-of-the-art testing and model evaluations, and ensure the cybersecurity of their models. In the letter, the MEPs denounce the fact that "the assessment and mitigation of various risks to fundamental rights and democracy is now suddenly entirely voluntary" and stress that "this was never the intention of the trilogue agreement". "Risks to fundamental rights and democracy", they write, "are systemic risks that the providers of the most impactful AI models must assess and mitigate".
    "If the providers of the most impactful generic AI models were to adopt more extreme political positions, implement policies that undermine the model's trustworthiness, facilitate foreign interference or electoral manipulation, contribute to discrimination, restrict freedom of information or spread illegal content, the consequences could profoundly disrupt the European economy and democracy", warn the MEPs, calling on Virkkunen "if necessary" to reject any code that does not protect society from systemic risks to health, safety, fundamental rights and democracy.
    The rules on general purpose AI will apply from 2 August 2025.
    The AI Office, set up within the Commission, is facilitating the drafting of the Code of Practice, with four working groups chaired by independent experts and involving almost a thousand stakeholders, representatives of EU Member States and European and international observers. (ANSA).
   
