(By Alessio Jacona *)
(ANSA) - ROME, OCT 18 - Is artificial intelligence here to help humanity or to destroy it? With the growing success and spread of this technology, the debate over its use seems increasingly polarized between those in favour and those against, between enthusiasts and catastrophists. But is the question really that simple?
«Contrary to the apocalyptic warnings of tech moguls, AI is not a sci-fi enemy bent on destroying its creator», explains Andrew McIntyre, postdoctoral researcher at the University of Amsterdam (UvA) and a member of the SOLARIS project. «Rather, AI reflects and amplifies the fragile state of our society, and these exaggerated fears only distract from the real and complex problems the technology poses: from bias and discrimination to misinformation and democratic disengagement».
SOLARIS is a European research project, in which ANSA is a participant, that over the next three years aims to define methods and strategies for managing the risks, threats and opportunities that generative artificial intelligence brings to democracy, political engagement and digital citizenship. As a researcher, McIntyre studies the cultural impact of AI-generated art and media; within SOLARIS, he works alongside Federica Russo, focusing on the democratic challenges and opportunities that AI-generated media can present, particularly deepfakes.
«Artificial intelligence is not just a technological problem», he explains. «It is also a nuanced social problem, and to solve it we need a more nuanced public debate and discussion. Among many research projects, SOLARIS is trying to contribute to this debate by developing a more detailed understanding of generative AI, one that examines the various social, cultural and political factors at play».
The project will last three years and has just started: what are you focusing on at this stage?
As SOLARIS is still in its early days, we are currently working to understand the factors that may influence how deepfakes are produced, distributed and received. These cover a variety of disparate elements, including technical qualities, legal frameworks, and social, political and psychological influences. At UvA, we are working to bring all of these different factors together in an innovative systems approach, so that we can understand exactly why people are more or less likely to trust a given deepfake.
There is often an assumption that, if a deepfake is technically accurate enough, it will easily trick people. However, we are finding that there are many different factors at play in a person's social environment, including their personal or political beliefs, their social media use, and the culture in which they live.
Do you have an example?
What particularly interests me is the role of media narratives in general. For example, given what I have read and seen about the actor Tom Cruise in the media, I am likely to believe a deepfake showing him dancing on TikTok is real. However, I am unlikely to believe a similar deepfake of Vladimir Putin dancing, based on what I know of the man, regardless of how accurate or convincing it may appear. This is something that is quite difficult to discuss, let alone quantify and measure, as are many of the other social factors we are analysing.
Once we have identified the factors at play, we need to find a way of testing and analysing them in order to understand which are important; from there, we can start to form impactful recommendations. But that is much further down the line.
Is it true, as some tech moguls are asserting, that AI is a menace and a threat to human existence? If not, why not?
There are undeniable dangers with AI, and greater regulation and cooperation between governments are certainly necessary to ensure safety. However, many of these dangers arise from overreliance on AI, inappropriate use of it, and overestimation of its capabilities. In short, they come from misunderstanding what AI is.
Unfortunately, the popular narrative that so many in the tech industry, in the media and in politics are advancing right now would seem to misrepresent AI as some kind of Skynet-style movie villain that will someday get out of hand, rise up against its masters and destroy humanity. It's a nice, simple story that unites all of humankind against a common evil, but it belongs to the pages of science fiction rather than reality. This common conception of AI is completely different to the various real-world technologies named "AI" that we use every day and that are negatively impacting people's lives right now, particularly the lives of the most vulnerable and underrepresented in society.
What kind of negative impacts are we talking about?
While these apocalyptic warnings may be well-intentioned, they overgeneralize the problems of AI as an existential and solely technological threat to all humanity and, in doing so, overshadow the more complex, more specific social injustices that AI technologies exacerbate, such as those regarding surveillance, discrimination and manipulation. Research and policy-making resources are finite, and so a focus on the existential threat of AI side-lines many of these immediate problems. Furthermore, this apocalyptic narrative risks inspiring fear in the public and a reluctance to accept the positive uses of AI technologies. Over the years, prominent AI ethicists like Timnit Gebru, Emily Bender and Meredith Whittaker have repeatedly called for these issues to be addressed, but their calls are too often ignored in favour of the existential-threat narrative.
So yes, AI could be a menace, but in a very different way than we might think. And the only way to ensure that it is not a menace to human beings is to have a more balanced and nuanced public debate. Unfortunately, these apocalyptic warnings from tech moguls and others are making that far more difficult.
What are the real and immediate problems that AI technologies are posing? Could you provide some examples?
With SOLARIS, we believe that the impact of generative AI on democracy is a real and immediate problem. Not in the sense that some authoritarian AI overlord will arise to oppress humanity but, rather, that existing AI technologies are contributing to a gradual erosion of democratic values and institutions. Of course, this erosion is not directly observable; it is the cumulative result of a variety of other problems.
Chief among these is the well-documented fact that many AI technologies perpetuate and amplify bias and discrimination. For example, facial recognition programs have become notorious for misidentifying black men as criminals, while language programs like ChatGPT continue to display explicit and implicit biases in their outputs, despite safeguards and filters. Bias and discrimination are immediate problems that are incredibly damaging for individuals and communities, but they further contribute to this erosion of democracy by spreading divisive falsehoods and discouraging people from underrepresented communities from participating in politics.
And then there's disinformation…
Beyond bias, generative AI may also exacerbate issues of disinformation. We have likely all seen deepfakes online, such as Putin apparently announcing the evacuation of Russian regions bordering Ukraine or Volodymyr Zelenskyy announcing the surrender of Ukrainian forces, but these high-profile cases are often easily debunked. However, as this technology becomes more accessible, there may be more disinformation on a smaller scale that goes undetected, such as interference in local elections.
Furthermore, with the increasing sophistication of language models like ChatGPT, the Internet may be flooded with artificial news articles, photographs and academic papers arguing false concepts and promoting extremist political narratives.
More concerning, however, is the climate of uncertainty we now live in. The arrival of AI-generated media is making it more difficult for the average person to decide what is real, and it is encouraging people to question previously trusted information sources and institutions, with potentially disastrous consequences. This already happened in 2018 when, following a prolonged absence from public life, Gabonese president Ali Bongo Ondimba released a video address that many suspected of being a deepfake made to hide the president's ill health or death. Citing this as proof of deception, the Gabonese military launched a failed coup against the government.
However, violent incidents like this are an extreme case. What is more likely is a slow decline of democratic participation, as citizens begin to feel that they cannot believe their own eyes and are incapable of making political decisions, leaving them more susceptible to the influence of authoritarian and populist figures.
How is the SOLARIS project going to address these problems, and how will it help to solve them? What's your roadmap in this first year?
As mentioned, the democratic impact of generative AI is a very broad and very complex problem that is actually composed of various distinct problems. To begin tackling it, we need to understand why people trust, or do not trust, deepfakes, and what the important factors are in their production, distribution and reception. Once we understand that, we can begin to experiment, learn about these various factors in more detail, and develop regulatory innovations to mitigate political risks and promote digital democracy.
In this first year, we are focused on learning what we can about generative AI and the dissemination of AI-generated media online, while other colleagues work on the technical modelling for our experiments next year. As we do so, we are keeping a close eye on the wider debate about AI, which is moving quickly: who knows where it will be in a few months' time?
The AI Act is almost ready: how do you think it is going to impact the AI market? Will it affect the SOLARIS project? If so, how?
I am not entirely sure how it will impact the AI market, but I think any new regulation that specifically addresses the unique issues of AI technologies is a good thing. It may make things more difficult for companies in the short term but, ultimately, I think stricter regulation will help combat the risks I have mentioned and encourage people to trust that this technology can be managed and used appropriately, for good.
As for how it will affect SOLARIS, this is something we are very eager to find out in the coming months, as regulation is the best way we can begin to address these broad social problems.
Beyond our research, we are starting to reach out to other organizations within the EU to see how our work may align with the AI Act and its goals, and how the regulatory innovations we will propose might build on it.
(*Journalist, innovation expert and editor of the ANSA.it Artificial Intelligence Observatory). (ANSA).