By akademiotoelektronik, 28/04/2022

Should we be afraid of artificial intelligence?

Nearly 30 international experts in artificial intelligence, cybersecurity and robotics are calling on governments and the other actors concerned to put countermeasures in place to limit the potential threats linked to the considerable technical progress in this field.

Will artificial intelligence get out of control? Over the next ten years, the growing effectiveness of AI risks "strengthening cybercrime" but also "leading to the use of drones or robots" for terrorist purposes, warn 26 international experts who have signed a report on the risks of malicious use of AI.

This 100-page report was written by 26 experts specializing in artificial intelligence (AI), cybersecurity and robotics, who belong to universities (Cambridge, Oxford, Yale, Stanford) and non-governmental organizations (OpenAI, Center for a New American Security, Electronic Frontier Foundation).

AI could enable particularly effective terrorist attacks

The authors of the report call on governments and the various actors concerned to put in place countermeasures to limit the potential threats linked to artificial intelligence: "We believe that the attacks that will be enabled by the increasing use of AI will be particularly effective, finely targeted and difficult to attribute."

To illustrate their fears, these specialists describe several "hypothetical scenarios" of malicious use of AI. They point out that terrorists could modify commercially available AI systems (drones, autonomous vehicles) to cause crashes, collisions or explosions.

Should we be afraid of artificial intelligence?

These are risks of which the public authorities in France seem aware, at least. In Le 1 on January 31, the mathematician and member of parliament for Essonne Cédric Villani stressed the need to support the development of artificial intelligence so that the robots of tomorrow are not seen as a threat by the population. The Fields medalist added that it would be necessary, in particular, to strengthen the protection of our personal data, because no one is immune to the hacking of an AI (the familiar shorthand for artificial intelligence), in the spirit of the GDPR, the General Data Protection Regulation due to come into force in Europe next May.

Political risks

In addition, "cybercrime, already strongly on the rise, risks being reinforced by the tools provided by AI," Seán Ó hÉigeartaigh, director of the Centre for the Study of Existential Risk at the University of Cambridge and one of the authors of the report, told AFP. Targeted phishing attacks (spear phishing) could thus become much easier to carry out on a large scale.

But for him, "the most serious risk, even if it is less probable, is the political risk". Alluding to suspicions of Russian interference in the US presidential election, the expert recalls that "we have already seen how people used technology to try to interfere in elections and democracy".

Between election rigging, propaganda operations (with AI, the report warns, it should be possible to make very realistic fake videos that could be used to discredit politicians) and terrorist attacks that are hard to attribute, "this could pose big problems of political stability and perhaps contribute to triggering wars," said Seán Ó hÉigeartaigh.

Another risk pointed out by the experts, reminiscent of Big Brother or Minority Report: authoritarian states will also be able to rely on AI to strengthen the surveillance of their citizens.

This is not the first time that concerns have been raised about AI. As early as 2014, astrophysicist Stephen Hawking warned of the risks it could pose to humanity by surpassing human intelligence.

Entrepreneur Elon Musk and others have also sounded the alarm, even though their businesses consist of offering services and devices based on augmented reality and artificial intelligence technologies, which raises suspicions of a conflict of interest.

Finally, specific reports have also been published on the use of killer drones and on how AI could affect the security of the United States.
