Artificial Intelligence: opportunity or catastrophic weapon? An infallible weapon, even in the hands of terrorists

By Massimiliano D'Elia

Every evening, on Antonio Ricci's television program "Striscia la Notizia", we watch, amused, as leading politicians and institutional figures perform improbable dances while singing bizarre songs. The protagonists are illustrious figures such as the sitting Italian Prime Minister, the President of the Republic and the various opposition leaders. It is precisely they, alive and well, who cheerfully spout nonsense on TV.

The credit for this artifice goes to software that uses the rudiments of Artificial Intelligence (AI) to perform a true television miracle. Similar applications are also available, for a fee, on social networks (TikTok). We are only at the beginning of this innovative and exciting technology, which nevertheless needs to be "handled with care" because of the effects it could have in the not-too-distant future. For this reason, there is growing talk of the need to adopt a code of ethics for the research, development and use of applications in which AI could be called upon to perform a physical action capable of interfering with the real world.

For example, everyone agrees that devices equipped with AI must still be authorized by a human before carrying out a lethal act (in the strictly military field). This axiom, however, becomes reductive in the face of a technology that, in theory, will be able to learn, evolve and, above all, confuse an adversary who could one day coincide with the very human tasked with controlling it.

Some simulated AI capabilities

In China, at a prestigious university (the Nanjing University of Aeronautics and Astronautics), researchers have developed a program that simulates an air battle between two latest-generation fighters: one piloted by a human being, with all his physical and psychological limitations, the other by AI. In less than eight seconds, at hypersonic speed, the AI fighter easily shot down its opponent, firing a missile "backwards" in apparent defiance of the known laws of physics.

In the computer simulation, the hypersonic aircraft battled an enemy jet flying at Mach 1.3, close to the top speed of an F-35. The pilot of the hypersonic aircraft was ordered to shoot down the enemy. Instinct would have sent a human pilot straight towards the target; instead the aircraft, guided by Artificial Intelligence, headed to an unexpected position far ahead of the enemy plane and fired a missile backwards. The missile hit the opponent, 30 km away, ending the battle in a matter of seconds.
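The "backwards" shot is less paradoxical than it sounds: a missile released rearward from a hypersonic platform retains most of the platform's forward velocity relative to the ground. The following is a back-of-the-envelope sketch in Python; apart from the Mach 1.3 target speed and the 30 km range quoted above, every figure is an assumption chosen for illustration, not data from the Nanjing simulation.

```python
# Illustrative, order-of-magnitude sketch only. All speeds below are
# assumptions, except the Mach 1.3 target speed and the 30 km range
# taken from the article.

MACH = 343.0  # approximate speed of sound at sea level, m/s

platform_speed = 5.0 * MACH   # assumed hypersonic launcher speed (Mach 5)
ejection_speed = 300.0        # assumed rearward ejection speed, m/s
target_speed = 1.3 * MACH     # enemy jet at Mach 1.3, per the article

# Ground-relative speed of the rearward-fired missile before its own
# motor ignites: it still travels forward, towards the pursuing target.
missile_ground_speed = platform_speed - ejection_speed
print(f"Missile still moves forward at {missile_ground_speed:.0f} m/s")

# Head-on closure over the 30 km quoted in the article. The missile's
# own motor would shorten this time considerably once it lights.
closing_speed = missile_ground_speed + target_speed
time_to_impact = 30_000.0 / closing_speed
print(f"30 km gap closes in about {time_to_impact:.1f} s")
```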

Setting aside such considerations and fears, states have already launched futuristic programs, at least as regards their own military deterrence, such as sixth-generation aircraft that will be supported in their missions by devices guided by AI-equipped entities. From this perspective, various governments are trying to regulate, as far as possible, the cultural, social and productive approach to this new and unexplored technology.

China

Basic security requirements for generative artificial intelligence services is the title of a document recently released in draft form by the Chinese national body for the standardization of information security, which dictates the specifications that suppliers of generative AI-based services must comply with. The document, although still in a non-final version, highlights the Chinese government's approach to managing the impact of AI on social and political relations. It includes not only legal rules but also technical ones, capable of a greater degree of concrete effectiveness, since compliance must be demonstrated before a service can be made available.

United States

The American President, Joe Biden, is expected to sign a document next week addressing the rapid advances of artificial intelligence, setting the stage for Washington's adoption of AI as a tool in its national security arsenal. An essential condition, the document specifies, is that companies develop the technology in a safe way. It therefore seeks to establish guidelines for the use of AI by federal agencies, while harnessing the government's purchasing power to steer companies towards what it considers best practices. The White House has also organized an event entitled "Safe, secure and trustworthy AI".

The American administration therefore needs to regulate the matter in order to develop new and innovative military capabilities that guarantee credible global deterrence over time. It plans to use the National Institute of Standards and Technology to identify industry benchmarks for testing and evaluating AI systems. Another condition is government oversight of the AI commitments recently made by major American companies.

Among the sectors identified by the US government is cybersecurity, where the advent of artificial intelligence brings new threats.

Artificial intelligence is expected to become an important resource for identifying threats, analyzing vulnerabilities and spotting phishing attempts across social media, email and chat. It could also be useful for finding patterns within intelligence reports that humans might miss. Security researchers and intelligence officials have said that AI can be a necessary tool to dramatically improve the cyber defenses of businesses, critical infrastructure, government agencies and individuals. But it could also be used as a weapon by foreign governments and criminal hackers to write malware, automate intrusion attempts more easily, and respond quickly to evolving security defenses.
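As a minimal sketch of the defensive use case described above, the snippet below trains a toy phishing-message classifier in Python with scikit-learn. The corpus, labels and model choice are illustrative assumptions, not part of any system named by the US government.

```python
# Toy phishing detector: TF-IDF text features feeding a logistic
# regression. Dataset and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical corpus: 1 = phishing, 0 = legitimate
messages = [
    "Your account is locked, verify your password here immediately",
    "Reset your banking credentials via this secure link now",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft ready for your review",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Score an unseen message. A real deployment would train on far more
# data and combine this signal with sender and link analysis.
suspect = ["Urgent: confirm your password to avoid account suspension"]
print(model.predict_proba(suspect))  # [P(legitimate), P(phishing)]
```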

United Kingdom

Artificial intelligence could help terrorists develop weapons of mass destruction: the warning is contained in a document drawn up for Prime Minister Rishi Sunak on the occasion of his speech to Parliament. The report's analysis explains how artificial intelligence, while offering significant opportunities, "also involves new dangers and fears". The government document, written with the support of 50 experts, states that by 2025 AI could be used to commit fraud and cyber attacks. It explains that AI will have the potential to "enhance terrorist capabilities" in developing weapons, planning attacks and producing propaganda. The range of most serious risks that could emerge by 2030 includes "mass disinformation" and a "crisis" in employment, following the massive use of AI in manual jobs currently performed by humans.

At the same time, however, AI will lead to "new knowledge, new opportunities for economic growth, new advances in human capabilities and the possibility of solving problems that were once considered unsolvable". 

The document also states that there is currently insufficient evidence to exclude a threat to humanity from AI. 

By 2030, the report highlights, technology companies could develop an artificial "superintelligence" capable of carrying out complex tasks and deceiving human beings, which would represent a "catastrophic" risk. Another example explains how a single company could gain technological dominance at the expense of global competitors. The report also warns that AI could "substantially exacerbate existing cyber risks" if used improperly, for example by launching cyber attacks autonomously.

"These systems are capable of completing complex tasks that require planning and reasoning and can interact with each other flexibly“, the report reads. “Once they set a task, they develop a strategy with sub-goals, learning new skills.”

However, "unintended consequences" can occur, including the "diversion" of information and resources towards the completion of those objectives.

Future innovation in software and hardware will allow AI to “interact with the physical world” through robots and self-driving cars. “These autonomous systems will collect large amounts of data, which will be channeled to refine their capabilities,” the report reads.
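The report's description of systems that "develop a strategy with sub-goals" can be pictured with a toy planner. In the hedged Python sketch below, the task names and the decomposition table are invented for illustration; real agentic systems derive sub-goals from learned models rather than hard-coded rules.

```python
# Toy planner: expand a task into sub-goals until only primitive
# steps remain, then "execute" them. Purely illustrative.

# Hypothetical decomposition table: task -> ordered sub-goals
SUBGOALS = {
    "write report": ["gather sources", "draft outline", "write sections"],
    "gather sources": ["search archive", "rank results"],
}

def run_agent(task: str) -> None:
    """Depth-first expansion of a task into executable steps."""
    stack = [task]
    while stack:
        goal = stack.pop()
        children = SUBGOALS.get(goal)
        if children:
            # Push in reverse so sub-goals run in their listed order
            stack.extend(reversed(children))
        else:
            print(f"executing primitive step: {goal}")

run_agent("write report")
```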
