A #USAF official who was quoted saying the Air Force conducted a simulated test where an AI drone killed its human operator is now saying he “misspoke” and that the Air Force never ran this kind of test, in a computer simulation or otherwise.
The Air Force's AI chief also says the remarks were merely "a hypothetical 'thought experiment' from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation.” (Oops!)
Today in AI training and alignment adventures, a very paperclip-maximizing anecdote from a US Air Force AI simulation. "We trained the system – ‘don’t kill the operator.' So what does it start doing? It starts destroying the communication tower."
This reflects back on the human developers, who are…not good. Who builds a simulated lethal system w/ zero safety interlocks? They shouldn’t be allowed to code a toaster oven. Wtaf USAF.
"It’s increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we’ve seen yet." I am riveted by the extensive documentation of how ChatGPT-powered Bing is now completely unhinged. @simon has chronicled it beautifully here: https://simonwillison.net/2023/Feb/15/bing/
"...a 👉hypothetical 'thought experiment' from outside the military👈, based on 👉plausible scenarios👈 and likely outcomes rather than an actual USAF real-world simulation", saying: "We've never run that experiment, 👉nor would we need to in order to realise that this is a plausible outcome👈." He clarifies that the #USAF has not tested any weaponised #AI in this way 👉(real or simulated)👈 and says "Despite this being a..."
The question is: about what, exactly? Was it really only a "thought experiment"? Or did he get carried away and give away military 🪖 secrets he shouldn't have talked about? Or had #USAF given clearance but then chosen to retract it as the lesser evil, in the face of the public backlash?
We might never know.
What we DO know is that someone already did recreate the "thought experiment" w/ #ChatGPT...