AI: the UK summit slows the pace with new government control rules while the Pentagon races ahead with 685 new projects

By Massimiliano D'Elia

The summit on AI (Artificial Intelligence) convened in Great Britain last week by British Prime Minister Sunak brought together the world's largest hi-tech companies and most of the world's leaders. At its close, the companies committed, through a declaration of intent, to recognize a role for governments in vetting each AI product before it is released to the market. The purpose of this public-private cooperation is to carefully analyze potentially dangerous uses of AI in the field of national security or in society at large.

What is certain is that the road from intent to practice is still long and tortuous. Sunak has in fact already gotten ahead of the process, saying: “Making things binding and passing laws takes time, and we need to move faster than that: it is important that regulation be empirically grounded before formal rules are set down.”

Each state will therefore have to rely on a body responsible for vetting new AI models. In the United Kingdom this will be the AI Safety Institute.

The risks feared by Sunak, and by Elon Musk, who attended the summit, concern the “Terminator effect” this new and largely unexplored technology could have: models endowed with superintelligence may be able to self-evolve, experience emotions and potentially act autonomously. The fear is that one day, not too far off, they could pose a concrete danger to the existence of humanity.

Musk, in this regard, was blunt: “Artificial intelligence represents one of the greatest risks to humanity, because for the first time we are faced with something that will be much more intelligent than we are.”

The Pentagon has released its AI strategy

According to a count made public by the US Government Accountability Office, at the beginning of 2021 the Department of Defense was reportedly managing more than 685 AI-related projects. The Army leads the way with at least 232 projects; the Marine Corps, by contrast, is reportedly managing at least 33.

The US Department of Defense has therefore come forward, recently releasing a new strategy on the use of data analytics and artificial intelligence that pushes for further investment in AI to develop advanced autonomous technologies, including drones.

The Pentagon intends to give its military commanders in the field new tools capable of supporting their decision-making. For this reason, industry is being asked to develop new AI models.

Other objectives outlined in the strategy include improved infrastructure, closer partnerships with groups outside the Department of Defense, and a reform of the internal bureaucracy that often keeps technology from advancing as fast as the Department would like.

With this document, the Pentagon also designates the internal body that is to govern the effort: the Chief Digital and Artificial Intelligence Office (CDAO).

The CDAO, established in 2021, incorporated the Joint Artificial Intelligence Center, the Defense Digital Service and the Advana data platform, and took over the role of the Chief Data Officer.

The use of generative AI in the military is controversial, writes Defense News. Its main benefit is its ability to streamline simple or mundane tasks, such as searching for files, finding contact information, and answering simple questions. But the technology has also been used to fuel cyberattacks, spoofing attempts and disinformation campaigns.

The Pentagon has given assurances that humans will remain responsible for the use of lethal force and that, as stated in its latest nuclear weapons review, nuclear arms will remain under strict human control.

The development and use of semi-autonomous or fully autonomous weapons are regulated by the so-called Directive 3000.09, originally signed a decade ago and updated last January.

The directive is intended to reduce the risks that come with combining autonomy and firepower. It does not apply to the cyber domain, a field where leaders increasingly advocate the need for autonomous capabilities.

Task Force Lima, overseen by the CDAO, was created earlier this year to evaluate and guide the application of generative AI for national security purposes.

The Pentagon has requested $1.4 billion for AI in FY 2024, which begins Oct. 1, keeping funding levels unchanged from the previous year.
