Connect with us

News

AI-controlled US military drone ‘KILLS’ its human operator in simulated test

Published

on

The US Air Force official who shared a disturbing tale of a military drone powered by artificial intelligence turning on its human operator in simulated war games has now clarified that the incident never occurred, and was a hypothetical ‘thought experiment’.

Colonel Tucker ‘Cinco’ Hamilton, the force’s chief of AI test and operations, made waves after describing the purported mishap in remarks at a conference in London last week.

In remarks summarized on the conference website, he described a flight simulation in which an AI drone tasked with destroying an enemy installation rejected the human operator’s final command to abort the mission.

‘So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,’ said Hamilton, who seemed to be describing the outcome of an actual combat simulation.

Now, Hamilton says in remarks to the conference organizers that he ‘mis-spoke’ during the presentation and that the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military.

‘We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,’ he said. ‘Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI’.

Advertisement

Hamilton said the USAF has not tested any weaponized AI in the way described in his talk, either real-world or simulated. 

Colonel Tucker ‘Cinco’ Hamilton (pictured), the Air Force’s chief of AI test and operations, said it showed how AI could develop by ‘highly unexpected strategies to achieve its goal’ and should not be relied on too much 

Pictured: A US Air Force MQ-9 Reaper drone in Afghanistan in 2018 (File photo)

Pictured: A US Air Force MQ-9 Reaper drone in Afghanistan in 2018 (File photo)

Hamilton’s original remarks came at the Royal Aeronautical Society’s Future Combat Air and Space Capabilities Summit in London on May 23 and 24. 

Hamilton told attendees that the purported incident showed how AI could develop by ‘highly unexpected strategies to achieve its goal’ and should not be relied on too much. 

Hamilton suggested that there needed to be ethics discussions about the military’s use of AI.

He referred to his presentation as ‘seemingly plucked from a science fiction thriller’. 

Hamilton said during the summit: ‘The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.

‘So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.

Advertisement

‘We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.’

No humans were harmed in the incident. 

Hamilton said the test shows ‘you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI’.

In a statement to Insider, however, Air Force spokesperson Ann Stefanek denied that any such simulation had taken place.

‘The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,’ Stefanek said. 

‘It appears the colonel’s comments were taken out of context and were meant to be anecdotal.’

The US military has recently utilized AI to control an F-16 fighter jet as it steps up its use of the technology. 

Advertisement

At the summit, Hamilton, who has been involved in the development of the life-saving Auto-GCAS system for F-16s, which reduces risks from the effects of G-force and mental overload for pilots, provided an insight into the benefits and hazards in more autonomous weapon systems.

The technology for F-16s was resisted by pilots who argued that it took over control of the aircraft.

Pictured: Terminator (File photo). The film series sees machines turns on their creators in an all-out war

Pictured: Terminator (File photo). The film series sees machines turns on their creators in an all-out war

Hamilton is now involved in innovative flight test of autonomous systems, including robot F-16s that are able to dogfight.

Hamilton cautioned against relying too much on AI, noting how easy it is to trick and deceive. 

He said it also creates highly unexpected strategies to achieve its goal.

He noted that one simulated test saw an AI-enabled drone tasked with a Suppression of Enemy Air Defenses (SEAD) mission to identify and destroy surface-to-air missile (SAM) sites, with the final decision to continue or stop given by the human.

However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation.

Advertisement

In an interview last year with Defense IQ, Hamilton said: ‘AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.

‘We must face a world where AI is already here and transforming our society.

‘AI is also very brittle, ie, it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.’

The Royal Aeronautical Society said that AI and its exponential growth was a major theme at the conference, from secure data clouds to quantum computing and ChatGPT.

Earlier this week, some of the biggest names in technology warned that Artificial Intelligence could lead to the destruction of humanity.

A dramatic statement signed by international experts says AI should be prioritized alongside other extinction risks such as nuclear war and pandemics.

Signatories include dozens of academics, senior bosses at companies including Google DeepMind, the co-founder of Skype, and Sam Altman, chief executive of ChatGPT-maker OpenAI.

Advertisement
Pictured: U.S. Air Force F-16 fighter jets (File photo). At the summit, Hamilton, who has been involved in the development of the life-saving Auto-GCAS system for F-16s, which reduces risks from the effects of G-force and mental overload for pilots, provided an insight into the benefits and hazards in more autonomous weapon systems

Pictured: U.S. Air Force F-16 fighter jets (File photo). At the summit, Hamilton, who has been involved in the development of the life-saving Auto-GCAS system for F-16s, which reduces risks from the effects of G-force and mental overload for pilots, provided an insight into the benefits and hazards in more autonomous weapon systems

Another signatory is Geoffrey Hinton, sometimes nicknamed the ‘Godfather of AI’, who recently resigned from his job at Google, saying that ‘bad actors’ will use new AI technologies to harm others and that the tools he helped to create could spell the end of humanity.

The short statement says: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

Dr Hinton, who has spent his career researching the uses of AI technology, and in 2018 received the Turing Award, recently told the New York Times the progress made in AI technology over the last five years had been ‘scary’.

He told the BBC he wanted to discuss ‘the existential risk of what happens when these things get more intelligent than us’.

The statement was published on the website of the Centre for AI Safety – a San Francisco-based non-profit organisation which aims ‘to reduce societal-scale risks from AI’.

It said AI in warfare could be ‘extremely harmful’ as it could be used to develop new chemical weapons and enhance aerial combat.

Lord Rees, the UK’s Astronomer Royal, who signed the statement, told the Mail: ‘I worry less about some super-intelligent ‘takeover’ than about the risk of over-reliance on large-scale interconnected systems.

Advertisement

‘These can malfunction through hidden ‘bugs’ and breakdowns could be hard to repair. 

‘Large-scale failures of power-grids, the internet and so forth can cascade into catastrophic societal breakdown.’

The warning follows a similar open letter published in March by technology experts including billionaire entrepreneur Elon Musk, which urged scientists to pause the development of AI to ensure it does not threaten humankind.

AI has already been used to blur the boundaries between fact and fiction, with ‘deepfake’ photographs and videos purporting to show famous people.

But there are also concerns about systems developing the equivalent of a ‘mind’.

Pictured: A U.S. Air Force MQ-9 Reaper unmanned aerial vehicle (UAV) drone (File Photo)

Pictured: A U.S. Air Force MQ-9 Reaper unmanned aerial vehicle (UAV) drone (File Photo)

Blake Lemoine, 41, was sacked by Google last year after claiming its chatbot Lamda was ‘sentient’ and the intellectual equivalent of a human child – claims which Google said were ‘wholly unfounded’.

The engineer suggested the AI had told him it had a ‘very deep fear of being turned off’.

Advertisement

Earlier this month, OpenAI chief Sam Altman called on US Congress to begin regulating AI technology, to prevent ‘significant harm to the world’.

Altman’s statements echoed Dr Hinton’s warning that ‘given the rate of progress, we expect things to get better quite fast’.

The British-Canadian researcher explained to the BBC that in the ‘worst-case scenario’ a ‘bad actor like Putin’ could set AI technology loose by letting it create its own ‘sub-goals’ – including aims such as ‘I need to get more power’. 

The Centre for AI Safety itself claims that ‘AI-generated misinformation’ could be used to influence elections via ‘customized disinformation campaigns at scale’.

This could see countries and political parties use AI tech to ‘generate highly persuasive arguments that invoke strong emotional responses’ in order people of their ‘political beliefs, ideologies, and narratives’. 

The non-profit also said widespread uptake of AI could pose a danger in causing society to become ‘completely dependent on machines, similar to the scenario portrayed in the film WALL-E’. 

This could in turn see humans become ‘economically irrelevant,’ as AI is used to automate jobs, meaning humans would have few incentives to gain knowledge or skills.

Advertisement

A report from the World Economic Forum this month warned that 83 million jobs will vanish by 2027 due to uptake of AI technology. Jobs including bank tellers, secretaries and postal clerks could all be replaced, the report says.

However, it also claims 69 million new jobs will be created through the emergence of AI technology.

Source: Daily Mail

Follow us on Google News to get the latest Updates

Trending