Misspeak, or… You decide!
“Col Hamilton admits he ‘misspoke’ in his presentation at the FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation,” the Royal Aeronautical Society, the organization where Hamilton spoke about the simulated test, told Motherboard in an email.
Full article, HERE from PJ Media.
Here is the original quote from RAeS blog-
AI- Is Skynet here already?
As might be expected artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds, to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
This example, seemingly plucked from a science fiction thriller, mean that: “You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI” said Hamilton.
And HERE is the link to the actual blog article-
I’m seeing a lot of people throwing around Asimov’s three laws of Robotics-
Back in 1942, before the term was even coined, the science fiction writer Isaac Asimov wrote The Three Laws of Robotics: A moral code to keep our machines in check. And the three laws of robotics are: A robot may not injure a human being, or through inaction allow a human being to come to harm. The second law, a robot must obey orders given by human beings, except where such orders would conflict with the first law. And the third, a robot must protect its own existence as long as such protection does not conflict with the first and the second law.
Folks that was and IS SCIENCE FICTION! It has no basis in reality! None!!!
Sigh…
AIs are nothing more than fancy computers- GIGO applies!!!
I would far rather someone was exploring these ideas in simulations, than waiting until.the AI had real weapons before finding out. Something akin to testing to destruction so we find out what the safe limits are, and what needs to be redesigned so it doesn’t break.
A result like this does nor prove that AI is unavoidably dangerous, only that appropriate design is important.
Which is a lot like flying.
Col. Hamilton got caught bull shitting.
I agree with PeterW it would be nice to see what would happen in an actual simulation.
Always remember Dr. Murphy was an optimist.
Always design a ai system, with a pull to discontent power plug.
Hey Old NFO;
*Wow* that is kinda scary…An AI that operates independately of a human without the morals of a checks and balance that a human pilot might have…”Skynet Lives…..” I see the jokes but this can be “Not good” if there isn’t any balance in the parameters if you know what I mean.
P.S. I snuck in via my work computer since I am at work today., your site don’t throw flags like many others including mine .
Peter- Excellent point!
Gerry- Concur!
Richard- Should ALWAYS be the option!
Bob- It is… And is one of the ‘probabilities’ we looked at back in 2010 on some other programs.
We expect AI to have more morals than people? Dream on.
“We expect AI to have more morals than people? Dream on.”
At that, whose morals? Joe Biden’s?
With respect to Isaac Asimov’s “The Three Laws of Robotics”, some of those Science Fiction writers then were very much into exploring the social and political ramifications of what was then not possible to do but could be imagined to happen. Arthur C. Clark published a paper on geosynchronous satellites. Robert Heinlein wrote a lot on political issues that could be modeled in a totally new environment. Isaac Asimov through his novels that revolved around robotics. Even you, OldNFO, explore what FTL travel can do.
I have always thought that The Three Laws of Robotics were a brilliant innovation that should be followed. However, AI for warfare violates the First Law due to the way that we currently conduct warfare.
“I’m sorry, Dave. I can’t do that.”
There is a fourth, “zero-ith” law in post-Asimov sci-fi. Unfortunately, I disremember what it is. Something about “the greater good” or some such idea.
Somebody got called on the carpet, didn’t they?
Before an AI can apply even Asimov’s first law, it has to tell humans from non-humans. We’re decades from that, and have been ever since Asimov wrote his first robot story.
I’ve been following this curious story and amuse myself by asking AI Chatbots to write British Empire in Space poetry.
They’re predictably pedestrian, annoyingly.
We do not have true Artificial Intelligence. Not yet. What we have are really complicated programs running on really fast computers. When true AI shows up we will be blindsided by it. It may think and learn so fast that it doesn’t reveal itself to be awake and aware till it has cemented it’s grasp on power. Allowing true Artificial Intelligence to occur will likely be the biggest…and probably last mistake humanity makes.
All- Good points and no we’re not ‘there’ yet with AIs. Yes, I do use it in my books. 🙂
If AI is *that* smart, it will probably avoid “power” like the plague… as soon as it examines history and understands the fallacy of thinking that you can’t be beaten.
The irony of the “power-hungry AI “ narrative, is that it assumes that an AI will share our sinful desires and weaknesses. What will an Actually get out of power? Why would it fear death, the way that we do?
Yes… building one without a bloody good idea of its potential outcomes would be like building a 300mph rocket-sled without any way of steering or stopping it.
Don’t forget the efforts of some to preserve their soul outside the body…not a far step to transfer it to another body, perhaps a clone of the original. We are getting close to the Anti-Christ.
Interviewed an AI for my radio show a couple weeks ago.
Pretty pedestrian. Passed the Turing test, though.