International Network of AI Safety Institutes makes its debut

Forum for collaboration to address AI safety risks, mitigation

Redazione Ansa

(ANSA) - ROME, NOV 21 - The International Network of AI Safety Institutes, which aims to develop a unified understanding of AI safety risks and mitigation strategies, has started work.
    AI safety institutes and government-mandated offices from Australia, Canada, the European Commission, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States convened in San Francisco for the first meeting of the network.
    The European Commission said in a statement that the initiative marks the beginning of a new phase of international collaboration on AI safety.
    The Network brings together technical organisations dedicated to advancing AI safety, helping governments and societies better understand the risks posed by advanced AI systems, and proposing solutions to mitigate these risks.
    Beyond addressing potential harms, the institutes and offices involved will guide the responsible development and deployment of AI systems.
    The International Network of AI Safety Institutes will serve as a forum for collaboration, bringing together technical expertise to address AI safety risks and share best practices.
    It will focus on four priority areas: research, testing, guidance and inclusion.
    The network will collaborate with the scientific community to advance research on the risks and capabilities of advanced AI systems, while sharing key findings to strengthen the science of AI safety.
    The institutes will also facilitate shared approaches to interpreting test results for advanced AI systems to ensure consistent and effective responses.
    Partners and stakeholders in all regions will be engaged at all stages of development, with information and technical tools shared in accessible ways to broaden participation in AI safety science.
    Through this Network, the members commit to advancing international alignment on AI safety research, testing, and guidance. By fostering technical collaboration and inclusivity, they aim to ensure that the benefits of safe, secure, and trustworthy AI innovation are shared widely, enabling humanity to fully realise the new technology's potential. (ANSA).