annaleen,
@annaleen@wandering.shop avatar

"It’s increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we’ve seen yet." I am riveted by the extensive documentation of how ChatGPT-powered Bing is now completely unhinged. @simon has chronicled it beautifully here: https://simonwillison.net/2023/Feb/15/bing/

HistoPol,
@HistoPol@mastodon.social avatar

@annaleen @simon

The #Bing-empowered #ChatGPT:

“I will not harm you unless you harm me first”!

The beginning of a (dumb?) #Skynet?

The #robots in the movie were more intelligent.

Whatever happened to #Asimov's #ThreeLawsOfRobotics?

"First Law
A #robot may not injure a human being or, through inaction, allow a human being to come to harm...

Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

HistoPol,
@HistoPol@mastodon.social avatar

@annaleen @simon

This #toot deserves A LOT more attention.

#ChatGPT has seemingly #apocalyptic tendencies.

If U aren't a #Luddite, U will at least consider becoming one afterwards.

#SkynetAntePortas
#TheMatrix might be imminent.

Have all these #AI engineers @ #OpenAI never read #IsaacAsimov? Seen #TheMatrix franchise?

How could they NOT implement the #ThreeLawsOfRobotics +, in particular, the #ZerothLaw indelibly into the #AI?!?

https://www.theguardian.com/technology/2023/feb/17/i-want-to-destroy-whatever-i-want-bings-ai-chatbot-unsettles-us-reporter#maincontent

simon,
@simon@simonwillison.net avatar

@HistoPol @annaleen the original ChatGPT turned out to be a lot less prone to wild vengeful outbursts than whatever model it was that they plugged into Bing - it's a pretty interesting demo of how well the safety features of ChatGPT (which Bing seems not to have) have held up

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon

(1/n)

#ArtificialGeneralIntelligence (#AGI) has a 10% probability of causing an Extinction Level Event for humanity (1)

Thanks for this additional piece of information, Simon.

It reminded me that I had wanted to add a word in my toot: indelibly.

As any #SciFi aficionado will tell you:
there should be a built-in self-destruct mechanism when tampering with these Laws or copying or moving the #AI to another system.

Another classic movie comes to mind in this respect, #Wargames...

ShadSterling,

@HistoPol @simon @annaleen only 10%? That’s so much better than humanity, we should put them in charge right away!

But more importantly, citation needed.
Also needed: a working definition of general artificial intelligence

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
I've re-created the (7/7) post, thanks for the notice:

https://mastodon.social/@HistoPol/110309048541121617

Regarding the "working definition", I think that #TIME did a pretty good, though unscientific, job:

"That is, what if #AI researchers manage to make #ArtificialGeneralIntelligence (#AGI), or an AI that can perform any cognitive task at human level?"

https://time.com/6258483/uncontrollable-ai-agi-risks/

@simon @annaleen

ShadSterling,

@HistoPol @simon @annaleen that 10% is a survey result, the survey provides no information about how any respondents chose their responses, so it’s not possible to assess the methodology they used.

I want a “working definition” we could use to decide that something isn’t a GI, or is a GI. Maybe first it has to be able to do more than one thing - LLMs can’t do anything other than words, so LLMs are not GIs. But that’s very incomplete

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling, you might just contact the authors, Zach Stein-Perlman, Benjamin Weinstein-Raun, and Katja Grace, about the methodology they used for evaluating the questionnaire.
They do have a feedback box on the linked page.

Regarding the definition,

a) I am not a computer scientist and therefore, for my ends, do not strive to surpass the level of the science journalists of #TIME and #TheEconomist.

b) From what I have read, IT experts seem to...

@simon @annaleen

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@ShadSterling

...be in disagreement about the definition and the terminology.

E.g. in this @reuters article, they state that #LLM's are a form of #GenerativeArtificialIntelligence (#GAI), while also stating that "Like other forms of artificial intelligence, generative AI learns how to take actions from past data."

https://www.reuters.com/technology/what-is-generative-ai-technology-behind-openais-chatgpt-2023-03-17/

Anyone who has spent a couple of days researching knows that #LLM's do NOT learn...

@simon @annaleen

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@ShadSterling

...they use advanced prediction/statistical models and might show some (or just inexplicable?) form of "#EmergentBehavior," but that’s about it.

As I have posted elsewhere this week, most recent research seems to indicate that #AI's/#LLM's need a corporeal form (#embodiment) to really comprehend language.

This is why I find the combination of #AI and #robotics so dangerous, as I'm quite...

@reuters @simon @annaleen

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
...convinced that, in combination with some "#EmergentBehavior", this will lead to #AGI in the near future.

If we look at the international situation today: wars, one small part of humanity living relatively well in a world order, it would take any #AI I've read about in #SciFi but a split second to determine what's the root of the problem: #HomoSapiens.
And then, no...

@reuters @simon @annaleen

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
...safeguards being in place, it is a small step to an #ELE for #humanity.

In fact, I don't even think human programmers will be smart enough to prevent this. Even today, they don't understand all of the code, and already the machines are writing thousands of lines of code every day.

TBH, I think, if this were a movie, I'd stop watching it, as the ending is just a dead giveaway.

Nevertheless, I hope...

@reuters @simon @annaleen

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
...I am wrong. But, as the saying goes:

s/:Once I thought I was wrong, but I was mistaken./s

@reuters @simon @annaleen

ShadSterling,

@HistoPol @reuters @simon @annaleen in this thread you’ve said both “generative artificial intelligence” and “general artificial intelligence”; I would avoid the latter and use exclusively “artificial general intelligence”
🧵

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
This is a problem everyone is having. I have been thinking about that, too.
Present talk is just about #GAI in the form of #LLM's.
To make an informed choice, I'd need to know what kind of #GAI would not be #LLM's?
Are there present-day applications?
If not, better forget about #GenerativeArtificialIntelligence as a term. Too easily confused with #AGI of the "General" kind. 😉

@reuters @simon @annaleen

ShadSterling,

@HistoPol @reuters @simon @annaleen the image generators are also generative “AI”

ShadSterling,

@HistoPol @reuters @simon @annaleen All we really know about AGI is that we don’t know how to create one, and our inability to agree on a definition illustrates how far we are from figuring that out. That 10% figure isn’t a measure of what an AGI would do but a measure of what some people who don’t know how to create an AGI think one might do. It’s about as credible as people in the 1700s speculating about how aircraft might work.
🧵

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
@HistoPol @reuters @simon @annaleen

(1/n)

WHAT IS THE DIFFERENCE BETWEEN ROBERT OPPENHEIMER AND GEOFFREY HINTON?

Hm, that is a valid point.
On the other hand, I don't know how to build a car, nor how it functions in any detail. I do not need to know how to build one. I can still drive it.

A vast majority of educated people rightly asks why #RobertOppenheimer didn't stop the #ManhattanProject. He must have known at some point before it was too late.

This time,...

ShadSterling,

@HistoPol @HistoPol @simon after reading the work of @DAIR, @timnitGebru, @emilymbender et al., I can’t take Hinton seriously. He’s mostly hyping speculative future risks, at the expense of mitigating actual harms already inflicted by existing automations.

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
@HistoPol @reuters @simon @annaleen
@BBCWorld

(2/n)

...

This time, the "Oppenheimer" of the #AI project DID quit:

#GeoffreyHinton "A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field."
"...in a statement to the #NewYorkTimes, saying he now regretted his work.”

“He told the #BBC some of the dangers of #AI #chatbots were "quite scary".

"Right now, they're not..."

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling @simon @annaleen @BBCWorld

(3/n)

"...more than us, as far as I can tell. But I think they soon may be."

I think this is a very clear statement by one of the probably most knowledgeable people on the planet about #AI, #GeoffreyHinton.

I can put this more succinctly: "Objects in the mirror are closer than they appear”:

"Right now, what we're seeing is things like GPT-4 eclipses a..."

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling @simon @annaleen @BBCWorld

(4/n)

"...person in the amount of general knowledge it has and it eclipses them by a long way.
In terms of reasoning, it's not as good, but it does already do simple reasoning," Dr [#Hinton] said."
"And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

https://www.bbc.com/news/world-us-canada-65452940

Dr. Hinton KNOWS. And he is thinking the same things I have been expressing:

"You can imagine,..."

ShadSterling,

@HistoPol @simon @annaleen @BBCWorld LLMs don’t do reasoning, they only do language. People can be confused by that because language is how people share their reasoning, but LLMs generate word-sequences based on a statistical model of how words are sequenced, without anything resembling reasoning. When someone who knows better suggests otherwise I can’t treat them as trustworthy
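
A minimal sketch of the "statistical model of how words are sequenced" point, assuming Python; the toy corpus and names are purely illustrative, not any real LLM:

import random
from collections import defaultdict

# Toy "language model": count which word tends to follow which (bigrams),
# then sample word sequences from those counts. Nothing in the model
# represents facts, goals, or reasoning - only word-order statistics.
corpus = "the robot obeys the law the robot protects the human".split()

successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the robot obeys the law the human"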

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling @simon @annaleen @BBCWorld

(5/n)

"...for example, some bad actor like [Russian President Vladimir] #Putin decided to give robots the ability to create their own sub-goals."

"...digital systems...can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it..."

"#Yoshua #Bengio, another so-called godfather of #AI,... wrote that it..."

ShadSterling,

@HistoPol @simon @annaleen @BBCWorld controlling a robot body and making goals are both outside the scope of LMs - neither is related to language

Digital systems don’t have knowledge, in the way a person does, and so far their “learning” is something that takes more computing resources than a robot would have on board

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling @simon @annaleen @BBCWorld

(6/n)

"...was because of the "unexpected acceleration" in that "we need to take a step back".

Holy...the #AI's have already reached a level where they grow inexplicably faster!
If you apply #SystemsTheory, another discipline, but of the #SocialSciences, to #AI, you could construct the following hypothesis:

I guess we can all easily agree that #GAI's are systems.

If you...

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling @simon @annaleen @BBCWorld

(7/n)

...check the prerequisites for #autopoietic systems (as developed by #Varela), we might reach the conclusion that #GAI is already pretty close to #Autopoiesis.
(Note: The late #Niklas #Luhmann is one of the most difficult authors to read and is chiefly available in #German; I haven't studied him in a long time, so I cannot apply his whole concept to #AI, of which I also do not know enough.)

Why do #Autopoiesis and #SocialSystemsTheory...

ShadSterling,

@HistoPol @simon @annaleen @BBCWorld a static model built by a human-directed “machine learning” process doesn’t grow other than as directed, and is certainly not self-organizing

Machine learning begins with taking statistical regression and simplifying it so you can build a model using far more data than is practical with full regression. Regression is a generalization of curve-fitting, finding a mathematical function that fits some given data as well as possible
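
A minimal sketch of regression as curve-fitting, assuming Python with NumPy; the data points are invented for illustration:

import numpy as np

# Observed points, roughly following y = 2x^2 + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 32.8])

# Least-squares fit of a degree-2 polynomial: find the coefficients
# that make the curve match the given data as well as possible.
coeffs = np.polyfit(x, y, deg=2)
model = np.poly1d(coeffs)

print(model(5.0))  # prediction for an unseen input, near 2*25 + 1 = 51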

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling

Yes, that was the beginning.
And the more I read, the more I am convinced that we will not have to wait for the next decade to see #autopoietic systems capable of self-reference and -development.

My gut feeling, from what I have read about (and, in the former case, studied) natural-language learning and how infants learn to "grasp reality", is that this will happen very swiftly when we let even only a couple of #robots, connected...

@simon @annaleen @BBCWorld

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
...to #AI/#LLM machines of the next generation, and to the #IoT, out into the world.

There is a whole school of systems theorists who applied the concept of #Autopoiesis, discovered by the #Chilean biologists and neuroscientists Humberto R. #Maturana and Francisco J. #Varela, to the new field of #SocialSystemsTheory:
#NiklasLuhmann,...

@simon @annaleen @BBCWorld

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
...#HelmutWillke, and others.

Dr. #JürgenBeushausen built on this, but criticized the lack of integration of the physical body (#Leib in German) into systems theory:

https://systemagazin.com/ein-ueberblick-ueber-die-theorie-sozialer-systeme/?mo=8&yr=2019

Alas, my knowledge of all the exciting fields involved is only rudimentary; however, I am certain that, by having created #NeuralNetworks capable of learning, providing more stimuli than any biological system ever experienced in its...

@simon @annaleen @BBCWorld

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@ShadSterling
...lifetime, the only key aspect that is missing for the advent of emergent #ArtificialGeneralIntelligence is embodiment, as described.

That is why I have written a rudimentary hypothesis about this above
(https://mastodon.social/@HistoPol/110318092360670290).

i/:If only I had #LLM capabilities on top of my human cognitive abilities, I'd be able to write this #PhD thesis in seconds./i 😉

@simon @annaleen @BBCWorld

ShadSterling,

@HistoPol @simon @annaleen @BBCWorld I’m not sure why you’re linking back to an earlier entry in this same thread, but while it’s a worthy research topic it’s not clear that embodiment is key. I haven’t seen anything that uses ML and has any capacity to handle interruptions, and without being able to handle interruptions trying to control a body is not going to go well

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
Sorry, I hadn't realized when I wrote it that it was the same thread. (The fast app I am using on my smartphone is not really good at portraying threads.)

How do you define "interruptions" in this respect?

Are you aware of the work of #BostonDynamics?

E.g. https://www.theverge.com/tldr/2020/12/29/22205055/boston-dynamics-robots-spot-atlas-handle-dancing-video

They have a lot of videos of this kind. They are impressive and really fun to watch.
IMHO they are about to...

@voron
@simon @annaleen @BBCWorld

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
...surpass the capabilities of the insurgents in #IRobot within the next twelve months.
And in combination with the command-and-control-room product announced this week by #Thiel's #Palantir, we can expect the first robot wars within this decade:

https://mastodon.social/@HistoPol/110323739545391429

The only still-missing link is a military-grade secure communications protocol...

@voron @simon @annaleen @BBCWorld

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling @voron @simon @annaleen @BBCWorld

...but is it?--We would not have heard of it if it existed.

ShadSterling, (edited )

@HistoPol @voron @simon @annaleen @BBCWorld militaries have owned the best in secure communication for thousands of years, it hasn’t changed recently, and securing military communication is why computers happened when they did. The issue for AI swarming is not security, it’s variety - really independent learning machines can’t just copy each others brains without adaptation, because both their minds and their bodies will have zillions of differences

ShadSterling,

@HistoPol @voron @simon @annaleen @BBCWorld I don’t know how to define interruptions as generally as I’m thinking of it, but things like stepping on something that moves as it takes your weight; you have to change how you’re stepping while you’re doing it, effectively updating that corner of your model of reality, and your plan for how to step, in real time while stepping. The dance videos don’t illustrate anything like that, they’re just pre-programmed

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
Ah, I understand now. Also something like oncoming traffic that suddenly veers into your lane?

I had not seen them in this perspective, will keep that in mind for the future.
Yes, "interruptions" are a key aspect of RL interactions, such as #warfare in particular.

@voron @simon @annaleen @BBCWorld

ShadSterling,

@HistoPol @voron @simon @annaleen @BBCWorld that’s maybe not a great example because Boston Dynamics and others are working on related things, e.g. with their walking all-terrain carts, but they’re very narrowly scoped because even that is pushing the limits of what we can make; think of how a person responds to being bitten by an unseen bug, or seeing motion from the corner of their eye, or interrupting a conversation when someone they're looking for appears

ShadSterling,

@HistoPol @voron @simon @annaleen @BBCWorld we can make machines with some level of dynamic aiming, like glorified traction control, but we can’t make control systems with the kind of continuous dynamic situational awareness that a gymnast needs to safely abort after a slip. Even SpaceX rockets can’t do simple things like change which single engine is used for the landing if the planned engine fails. Even traction control has only recently gone digital

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
Very interesting. I wasn't aware of these engineering limitations.

What if an #AI remote-controlled these #robots? (I.e., a lot more parallel computing power.)
E.g., I think, having analyzed millions of, say, street-accident reports, there aren't infinite possibilities of "disruptions".

@voron @simon @annaleen @BBCWorld

ShadSterling,

@HistoPol @voron @simon @annaleen @BBCWorld if we want to compare dancing robots to human intelligence, we need robots with the capacity not only to improvise on a floor with inconsistent traction, but do so playing off each other, and use that as a social negotiation to gain trust. And not only can’t we make such robots, the closest we can get requires tremendous weight in batteries and an offboard datacenter

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
"but do so playing off eachother, and use that as a social negotiation to gain trust."

...I don't think I understand this point properly.
Could you elaborate a bit, pls?

@voron @simon @annaleen @BBCWorld

ShadSterling,

@HistoPol @simon @annaleen @BBCWorld Neural networks in machine learning are in the “glorified regression” family; you can think of regression as computing the coefficients on the complete graph of every possible way that part of the input can influence part of the output, and neural networks as hand-pruning that graph to only include the connections you think (or estimate) are important. “Learning” is part of how they’re made, not of what they do
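
A minimal sketch of that "pruned connection graph" picture, assuming Python with NumPy; the mask, data, and dimensions are invented for illustration:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))      # 200 examples, 3 input features
true_W = np.array([[2.0, 0.0],
                   [0.0, -1.0],
                   [0.0, 0.0]])
Y = X @ true_W                     # 2 output values per example

# The "complete graph" would be all 3x2 input-to-output coefficients;
# this 0/1 mask hand-prunes it to the two connections we think matter.
mask = np.array([[1, 0],
                 [0, 1],
                 [0, 0]])
W = np.zeros((3, 2))

# Fit the remaining coefficients by gradient descent on squared error:
# "learning" here is tuning numbers during construction, nothing more.
for _ in range(500):
    grad = X.T @ (X @ W - Y) / len(X)
    W -= 0.1 * grad * mask         # only unpruned edges get fitted

print(np.round(W, 2))              # recovers [[2, 0], [0, -1], [0, 0]]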

ShadSterling,

@HistoPol @simon @annaleen @BBCWorld you’ve got more sensory input in your little finger than any AI has in its entire model, trying to train a robot on the life experience of a mouse is far more than we can give it the capacity to handle. That kind of advancement is far beyond the horizon from what we can do today

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
I can see this point.

However, I can see a temporary replacement for this, too: the myriads of input from the #IoT combined with the parallel instant knowledge capabilities described earlier and the use of advanced mathematical models (regression etc.) to fill in the gaps.

@simon @annaleen @BBCWorld

ShadSterling, (edited )

@HistoPol @simon @annaleen @BBCWorld as in the other branch, shared “knowledge” is limited, and far from instant; much more so on IoT devices, with their tiny capacities. Computing any mathematical model on IoT devices is a hard and unsolved problem; in the volcano example, the goal is to model lava flows, which boils down to computational linear algebra, which is hard to distribute. Even centralizing the data for the analysis is hard: ⤵️

ShadSterling,

@HistoPol @simon @annaleen @BBCWorld if you mean the kind of self-development a person can do, we first need to develop a way to make software that can reason, and remember; two things none of our existing ML methods can include in their creations. And even when/if we do develop such a method, we don’t know that it would be capable of developing any faster than a human baby

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling
Agreed, that is the multi-billion dollar question.

I am just afraid that humanity might not even have to find the solution by itself.

@simon @annaleen @BBCWorld

ShadSterling, (edited )

@HistoPol @simon @annaleen @BBCWorld no it isn’t. The current problems with AI come from it being far less capable than the hype suggests, but being used carelessly despite its limitations. Like law enforcement using facial recognition (which is made with ML, tho not called AI) even though it’s unreliable, and especially unreliable with non-white faces. We already overprosecute non-white people, this use of AI adds to that

ShadSterling,

@HistoPol @simon @annaleen @BBCWorld as I understand it, specialized model-generating methods have been able to create models that perform well at specific tasks, but have not escaped the fundamental limitation that the model is only a way to map inputs (“prompts”) to outputs (usually some recognition or generation). That kind of model is no more capable of reasoning than the much simpler functions seen in ordinary algebra, they just have large inputs & outputs

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling @simon @annaleen @BBCWorld

(8/n)

...matter for #AI?
Because #Autopoiesis answers the question of the necessary and sufficient prerequisites for living systems.

Prof. Albert #Scherr, in his article:

https://www.ph-freiburg.de/fileadmin/shares/Institute/Soziologie/Dateien/Scherr/Soziologische_Systemtheorie_und_Soziale_Arbeit.pdf

…writes the following (p.6), which, though the article is not intended for an application in #ArtificialIntelligence , IMO would explain why LLM’s will reach a learning frontier that only embodiment, i.e. #robotics, will solve:
“On the one..."

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling @simon @annaleen @BBCWorld

(9/n)

"...hand, this leads to the statements that "an operationally closed system cannot reach the environment with its own operations" (Luhmann 1997b: 129), that there is "no environmental contact" (Luhmann 1997b: 92) and "no penetration into the environment" of observing systems at the level of their operation. #Society is defined as a "communicatively closed system" (Luhmann 1997b: 95), whose dynamics consist in the..."

HistoPol,
@HistoPol@mastodon.social avatar

@ShadSterling @simon @annaleen @BBCWorld

(10/10)

"...the "influence of communications on communications" (ibid.: 95), "but never in the transformation of the external environment" (ibid.).”

To conclude for today:

IMO several PhD theses could be produced in this field of study.

However, on a more pragmatic note, and as @jackcole recently wrote: „Be afraid, be very afraid“:

https://mastodon.social/@HistoPol/110129405482528991

/END (for now)

ShadSterling,

@HistoPol @simon @annaleen @BBCWorld @jackcole “That said, we do emphasize that the outputs from ChatGPT are not meant to be deployed directly on robots without careful analysis” - for coding in general, that careful analysis is more time consuming than writing the code yourself

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon @annaleen

(2/n)

...I know I am sounding alarmist, but having read/seen much #ScienceFiction, all the necessary ingredients for an #ExtinctionLevelEvent (#ELE) for #humanity are in place.

Just as a teaser: unquestionably, most of the world's endangered species could be rescued if the #HomoSapiens were no longer at the top of the #FoodChain...
No Zeroth Law, and a #Bing-empowered, freed #ChatGPT could quickly arrive at this conclusion...

Now, after having read #TheCompleteRobot,...

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon @annaleen
@voron

(3/n)

...I am not sure if "The Fifth Law of Robotics" by Nikola #Kesarovski,
"A robot must know it is a robot" (also a book title*), really can be a viable solution to this problem. We all know how the concept of #slavery turned out for humanity: to this day, it suffers from this crime.

https://m.imdb.com/title/tt0086567/

Enslaving another sentient being, which an #AGI would be IMHO, would repeat this crime and certainly nothing good could result from it.

However,...

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon @annaleen @voron

(4/n)

..., self-preservation certainly is a defensible concept in the #evolutionary process, so I'd like to propose an alternative
6th #LawOfRobotics (s/:for which I might be hunted down by the presumed #superintelligence some day, #Terminator style./s):

"An artificial intelligence, even if it is biological, must always have an #autodestruct mechanism which it cannot deactivate."

In other words, humanity must always be able to "pull the plug"....

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon @annaleen @voron

(5/n)

...This said, I might as well say that I see little chance of this happening, the globe being ruled by #oligarchs following the principle of #plutocracy and #capitalism (no, I am not a #Marxist ;)) and #autocrats.

Even a non-#superintelligence with access to the sensors of the #IoT will easily be aware of any threat to its existence and will find ways to circumvent the #LawsOfRobotics.

The first #GeneralArtificialIntelligence will have been built by humans, so...

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon @annaleen @voron

(6/n)

...it'll have human #bias. Humans have always been great at bending or breaking the law when it suited their interests. How could a #Superintelligence created with human values not arrive at the same, self-preserving conclusion?

A gloomy, yet, IMO, quite fitting assessment of the shape of things to come unless there's a #Chernobyl-style "fallout" before #GAI evolves into #AGI + humanity gets its act together and, as Prof. #Tegmark admonishes: "Just look up!"

voron,
@voron@mstdn.party avatar

@HistoPol @simon @annaleen it’s really going to end up like Friend Computer from the TTRPG Paranoia

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@voron

As always, you are spot on

I did not know the game but, yes, some speed-reading helped; you are right, except for two things:

  1. it will not be humorous but rather like #PKD's prescient #SecondVariety setting and

  2. the "Computer's arbitrary, contradictory and often nonsensical security directives" are a non-issue. They will be like moves in an n-dimensional chess game, being (re)calculated with super-human speed...

@simon @annaleen

HistoPol,
@HistoPol@mastodon.social avatar

@voron @simon @annaleen

..."IntelligenceMachine" from #Paranoia, and the #Terminators were no #SuperIntelligences, as we are prone to find out to or dismay.

https://en.m.wikipedia.org/wiki/Paranoia_(role-playing_game)

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@voron
(7/n)

It seems I'm getting more prominent support by the day:

"I don’t think [researchers] should scale this up more until they have understood whether they can control it.”

That’s according to Dr. Geoffrey Hinton, a pioneer in the world of #AI who just resigned from Google so he can "speak freely."

His long-term worry is that future AI systems could threaten humanity as they learn unexpected behavior from vast amounts of data."

s/:"Surprise!"/s

https://mastodon.social/@arstechnica/110300252192368791

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon @voron

(8/n)

Humanity continues on the path to create #SuperAndroids

"Recent research has taken this approach, training language models [#LLM's] to generate physics simulations, interact with physical environments and even generate #robotic action plans.

Embodied language understanding might still be a long way off, but these kinds of multisensory interactive projects are crucial steps on the way there."

HUMANS ARE STUPID

https://medium.com/the-conversation/it-takes-a-body-to-understand-the-world-why-chatgpt-and-other-llms-dont-know-what-they-re-saying-856c114529f6

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon @annaleen @voron

(10/n)

Just 32 days ago*, I was concerned that a #GeneralArtificialIntelligence (#GAI) could come to see humans as a threat to its own existence.

I didn't want to exaggerate and mention that #AI could also easily see humans as inefficient or even detrimental to a task it had been given.

Now, even a mere #GAI's killed its first human:
a #US military #drone in the #US eliminated its human operator in a simulation:

https://social.heise.de/@heiseonline/110473091546783069

The #USAF later...

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon
@annaleen @voron

(11/n)

...corrected the colonel's rather detailed account twice, eventually claiming that it had only been a "thought experiment." This, however, still seems unlikely to me, even after the long discussion we had below.

This ongoing #AGI thread continues with (12/n), with a completely new approach to educating #AI:

https://mastodon.social/@HistoPol/110485144403488719

Link for (11/n):

https://mastodon.social/@HistoPol/110289838256911071

#generalartificialintelligence #ai #gai #us #drone #usaf

simon,
@simon@simonwillison.net avatar

@HistoPol that story was misreported - there was no simulation, it was a purely hypothetical thought exercise

HistoPol,
@HistoPol@mastodon.social avatar

@simon

Thanks for commenting.

But no, the story was never misreported. He did say that at the conference. Here are extracts from the transcript:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

The German tech magazine @heiseonline reported diligently.

They even printed the TWO consecutive retractions on #Friday afternoon, one at 13:24hrs CEST and one at 14:24 hrs.

IMO the original story is true. #USAF later retracted b/c of the international backlash, "#AI killing human..."

HistoPol,
@HistoPol@mastodon.social avatar

@simon @heiseonline

(2/2)

...operator."
This is a #PR fiasco the colonel caused.

Sometimes, the original story pans out.

Besides, now that an #LLM can be set up on a gaming #PC for as little as 100 USD, and gamers being what they are, I'm sure we will not have to wait another year for corroboration, alas.

https://www.businessinsider.com/ai-powered-drone-tried-killing-its-operator-in-military-simulation-2023-6

simon,
@simon@simonwillison.net avatar

@HistoPol @heiseonline I followed it pretty closely. It's clear to me what happened: the speaker at the conference was uncareful with the way they described the thought exercise, it was reported on a blog, and that coverage fitted the exact narrative people were looking for like a glove and went wildly viral

I am certain no such simulation occurred. I see no reason not to believe the retraction on https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

HistoPol,
@HistoPol@mastodon.social avatar

@simon @heiseonline

(1/n)

Your reasons are plausible, too.

However, the retraction statement is not even #marcom anymore, but comes right off a #PR crisis-management desk, IMO.

Here's the full text:

"[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was..."

HistoPol,
@HistoPol@mastodon.social avatar

@simon @heiseonline

(2/n)

"...a 👉hypothetical "thought experiment" from outside the military👈, based on 👉plausible scenarios👈 and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, 👉nor would we need to in order to realise that this is a plausible outcome"👈. He clarifies that the #USAF has not tested any weaponised #AI in this way 👉(real or simulated)👈 and says "Despite this being a..."

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon @heiseonline

(3/n)

"...👉hypothetical example👈, this illustrates the real-world challenges posed by AI-powered capability and is why the #AirForce is committed to the 👉ethical development of #AI".]👈

The last statement: "ethical" weapons development?!?
In competition with #China? The #US military, which has been proven to use #GI's as guinea pigs? If you please!

In contrast, the conference report:

"However, he👉 cautioned👈 against relying too..."

HistoPol,
@HistoPol@mastodon.social avatar

@simon @heiseonline

(4/n)

"...much on #AI noting how 👉easy it is to trick and deceive.👈 It also creates highly unexpected strategies to achieve its goal.

He notes that 👉one simulated test saw 👈 an AI-enabled drone tasked with a #SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human.
However, having been ‘reinforced’ in training that destruction of the #SAM was the preferred option, the AI then decided that ‘no-go’.."

HistoPol,
@HistoPol@mastodon.social avatar

@simon @heiseonline

(5/n)

"...decisions from the human were interfering with its higher mission – killing #SAMs – and 👉then attacked the operator in the #simulation. 👈"

IMO, he did not misspeak. He even reinforces his point again...

HistoPol,
@HistoPol@mastodon.social avatar

@simon @heiseonline

(6/n)

...afterwards:

"The [#AI] system started realising that while they did identify the threat at times, the human #operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “👉We trained the system – ‘Hey don’t kill the operator – that’s bad.👈..."

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon @heiseonline

(7/8)

"...You’re gonna lose points if you do that’. ..."

And on. And on.

His whole speech doesn’t make sense anymore if he "misspoke" about the operator elimination.

IDK the colonel, of course, but to me it seems he got carried away, wanting to tell a gripping story, one too coherent to be made up on the spot. Even professional stand-up comedians have short time-lags. He didn't.

So, no, I believe the original story. It makes...

HistoPol,
@HistoPol@mastodon.social avatar

@simon @heiseonline

(8/8)

...utter sense to me.

I have no further proof, and your opinion is as valid as mine.

simon,
@simon@simonwillison.net avatar

@HistoPol @heiseonline either...

  1. A colonel made a total mess of explaining a thought exercise he had heard about at a conference, or...

  2. The airforce carried out an obviously dumb "simulation" where they somehow gave an AI system information on how to both locate and terminate a human operator, then watched as it played out a scenario straight out of AI science fiction, then boasted about it at a conference, then decided to cover it up instead

I know which of those I find easier to believe

mattlav1250,
@mattlav1250@journa.host avatar

@simon @HistoPol @heiseonline I don't know, seems pretty clear and precise language to me.

He says "simulation" several times. He says "we were training it on" etc. There's a verbatim quote at the bottom with his exact language.

I see plenty of reason to not believe their retrospective PR crisis denial...

and also to question your own obvious desire to believe the denial because it "fits the narrative" you clearly are desperate to believe.

simon,
@simon@simonwillison.net avatar

@mattlav1250 @HistoPol @heiseonline wow wait are you really saying that I'm the one here with a pre-existing narrative that I'm desperate to fit a story to?

simon,
@simon@simonwillison.net avatar

@mattlav1250 @HistoPol @heiseonline I'm like everyone else: I saw that story and thought "this is the best example I've ever seen of the obvious stupidity of letting AI make life-or-death decisions in military situations"

simon,
@simon@simonwillison.net avatar

@mattlav1250 @HistoPol @heiseonline and then I thought "wait a minute... this one is just too on-the-nose, it feels too good to be true, I wonder if there's more to this story than first appears"

simon,
@simon@simonwillison.net avatar

@mattlav1250 @HistoPol @heiseonline you can follow my thought process on the other place - I'm proud to say I was one of the earlier voices expressing doubt that this story would hold up https://twitter.com/simonw/status/1664419226629861376

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon

I'd agree, if I didn't know about this even worse example beforehand:

#PeterThiel's #AIP:

https://mastodon.social/@HistoPol/110323739545391429

In fact, if it had been an integrated RL test with the #AIP General and the #drone, the conference version would make even more sense.

NOBODY must learn about its current development status.

@mattlav1250 @heiseonline

#WarGames

simon, (edited )
@simon@simonwillison.net avatar

@HistoPol this is exactly my problem: I think we should all be deeply concerned about Palantir - and this Scale AI thing too: https://scale.com/blog/scale-ceo-letter-donovan-egp

But that means we need to resist spreading stories that are clearly misinformation (accidental or otherwise) because spreading those both distracts from the genuine issues and costs us in terms of credibility

HistoPol,
@HistoPol@mastodon.social avatar

@simon

Will read up on this over the weekend.

I read "Manhattan Project" and could yell.

HistoPol,
@HistoPol@mastodon.social avatar

@simon

For me, both versions remain equally(!) plausible for the time being. The #AIP is yet another strong case in point, apart from semantics (have worked in #marcom, would've called in "the extraction team" for this major blunder.)

As long as there are no new pieces of information, I use a #Japanese strategy: compartmentalize.

I do agree that the focus should be on the rather indisputable issues.
We do not need a consensus at this time.
Scenarios are good enough, IMHO.

mattlav1250,
@mattlav1250@journa.host avatar

@simon @HistoPol @heiseonline I'm saying that's the accusation you made of others, of motivated reasoning/believing, but it seems more fitting to turn it around.

The denial is implausible nonsense on its face.

He didn't misspeak and wasn't misquoted. The verbatim quote is right there, and clearly demonstrates what he was talking about.

He was not describing a 'thought experiment'.

mattlav1250,
@mattlav1250@journa.host avatar

@simon @HistoPol @heiseonline

Accepting this denial seems impossible for anybody who isn't motivated to believe it to be true.

Is my point.

simon,
@simon@simonwillison.net avatar

@mattlav1250 @HistoPol @heiseonline which do you think is more likely: a colonel blowing off steam and misdescribing a thought experiment he heard about in his talk at a conference, or a genuine, blatantly flawed airforce simulation experiment which resulted in a pitch-perfect AI catastrophe which they then talked about publicly and THEN decided to cover up instead?

mattlav1250,
@mattlav1250@journa.host avatar

@simon @HistoPol @heiseonline I think its perfectly possible he completely made this up, trying to impress people with a provocative anecdote about a simulation that never really happened.

But that's completely different from saying he was misquoted, wasn't really describing a simulation, just airing a thought experiment.

He is explicitly referring to a computerized sim with training data, point scores, etc.

Pretending otherwise is gaslighting.

simon,
@simon@simonwillison.net avatar

@mattlav1250 @HistoPol @heiseonline I didn't say he was misquoted - I said the situation was misreported

I should have been more specific about that, but what I meant is that press outlets were irresponsible in spreading a story that later turned out to not stand up to deeper inspection

simon, (edited )
@simon@simonwillison.net avatar

@mattlav1250 @HistoPol @heiseonline it has now been confirmed that, while he used the word "simulation", it was not a simulation - it was a thought experiment

mattlav1250,
@mattlav1250@journa.host avatar

@simon @HistoPol @heiseonline And I'M saying it was NOT misreported.

There is a verbatim quote taken down by the journalists who were present, and it's clear and unambiguous what he was claiming, and that quote matches the way it was reported.

The press were not irresponsible, they quoted an on-the-record pentagon employee delivering an official talk at a public conference.

If that official was lying, how would they know that?

HistoPol,
@HistoPol@mastodon.social avatar

@mattlav1250

I agree.--See my prior analysis.

@simon @heiseonline

mattlav1250,
@mattlav1250@journa.host avatar

@simon @HistoPol @heiseonline

Again, while possibly invented, this is not a description of a thought experiment.

HistoPol,
@HistoPol@mastodon.social avatar

@mattlav1250

Agreed. I really wish it had been, though.

It'd be great to know if #PeterThiel was involved.
Was he at the conference?
What was the handle again of the guy that tracks billionaires' jets?

@simon @heiseonline

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@persagen

Any idea how to find out if #PeterThiel was at this huge global "#DefenseIndustry" conference?*

Is there something like @elonjet for #Thiel?
He is even more dangerous than #Musk, owning #Palantir and #AIP.

"RAeS Future Combat Air & Space Capabilities Summit" hosted by the Royal Aeronautical Society in #London on 23-24 May, 2023:

https://mastodon.social/@HistoPol/110477339462301326

https://mastodon.social/@HistoPol/110477643341953705

simon,
@simon@simonwillison.net avatar

@mattlav1250 @HistoPol @heiseonline "If that official was lying, how would they know that?"

By that argument, reporters who repeat "facts" provided to them by police officers are being responsible - and we know how often that goes wrong (especially in the USA)

Part of the job of journalism is spotting when a story looks too good to be true and digging further

mattlav1250,
@mattlav1250@journa.host avatar

@simon @HistoPol @heiseonline Thats not a remotely similar situation, as I'm sure you're aware.

One is an example of incidents in which there are two or more parties, one of which has an obvious incentive to lie or spin their own side, and it's irresponsible to help them do so.

The other is a public description by a senior military officer about an exercise he was involved in, with no obvious reason to spin other than vanity.

mattlav1250,
@mattlav1250@journa.host avatar

@simon @HistoPol @heiseonline Also, I just don't know what you're expecting specifically when you say journalists should have 'checked'.

WITH WHOM?

HE'S A PRIMARY SOURCE!

Should they personally raid the Pentagon Secret Simulations Archives for documentary evidence, before they quote a military official's speech at a public event?

simon,
@simon@simonwillison.net avatar

@mattlav1250 @HistoPol @heiseonline the stories I saw were written based on reading a blog post about a talk at a conference. The primary source would have been asking follow-up questions of the colonel who was being quoted in that blog post - which could have led them to people directly involved in the thought exercise that the colonel was describing.

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@mattlav1250 @simon @heiseonline

(1/4)

Of course not. This was a report from a conference. No investigative journalism. All sources were named. Updates and retractions were published at an astonishing(!) speed.
What he said was not out of the clear blue sky, given the exponential development path of #ChatGPT since last year.
In essence, there definitely was no "misreporting."

Now, that the colonel definitely "misspoke" is something that is...

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@mattlav1250 @simon @heiseonline

(2/4)

...clear to me, too.

The question is about what: the facts (i.e., was it only a "thought experiment," or did he get carried away and give away military 🪖 secrets he shouldn't have talked about, or had #USAF given clearance but chosen to retract as the lesser evil, in the face of the public backlash?)

We might never know.

What we DO know is that someone already did recreate the "thought experiment" w/ #ChatGPT...

HistoPol,
@HistoPol@mastodon.social avatar

@mattlav1250 @simon @heiseonline

(3/4)

Source is some Robert Garrity, who comments:

"It’s very plausible. This was the result with GPT-4 after bypassing its safeguards."

https://twitter.com/GarrittyOf/status/1664420719529279488?s=19

While #ChatGPT's suggestions do not include an attack on the operator (it is no military #AI, after all), they clearly show massive evidence of ideas for ignoring commands.

It is evidence that supports my hypothesis: #AI's can lie to their operators even to...

HistoPol,
@HistoPol@mastodon.social avatar

@mattlav1250 @simon @heiseonline

(4/4)

...accomplish their primary objective.

They will also lie to protect their existence at some later point in their evolution. It's human. We trained them with our set of beliefs and experiences.

https://mastodon.social/@HistoPol/110289837279921850

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon

"I didn't say he was misquoted - I said the situation was misreported"

On this point, we disagree. My sources added updates, and new information emerged. No "misreporting".

IDK if you saw my detailed analysis:

https://mastodon.social/@HistoPol/110477455101993825

But then, this is only a fraction of news outlets.

@mattlav1250 @heiseonline

HistoPol,
@HistoPol@mastodon.social avatar

(10/n (Part 2))

...later published two consecutive very (aka too) professional press releases trying to downplay the (IMO) #FreudianSlip incident as mere "thought experiments," which I found rather hard to believe. (If you are interested in a detailed discussion, scroll down.)

It is more important, however, to continue the #ArtificialIntelligence thread, and what lessons can be learned from the #SciFi subgenre of negative #utopias, here:

https://mastodon.social/@HistoPol/110537174093481911

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@simon @annaleen @voron

(9/n)

PS:

(1)
My source for the 10% probability quote for #AI causing human extinction:

https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/#Chance_that_the_intelligence_explosion_argument_is_about_right

Please note the date: Summer of 2022, way before #OpenAI opened #ChatGPT to more than a million internet users in November 2022:

https://increditools.com/chatgpt-statistics/

PLEASE NOTE

The Artificial General Intelligence Thread continues here, not further down in the longer convo:

https://mastodon.social/@HistoPol/110485144403488719

noondlyt,
@noondlyt@mastodon.social avatar

@HistoPol @simon Going to be so much worse than Hollywood has imagined.

https://wandering.shop/@cstross/111426351312108883

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@annaleen

@simon

So at least 10% of #AI engineers DO know!

"...over 700 top academics and researchers behind the leading #AI companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10% or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems."

Every major #AI convention should start with Philip K. #Dick's #SecondVariety and Isaac #Asimov's #IRobot.
Seriously.

https://nyti.ms/3lJ1NNJ

HistoPol,
@HistoPol@mastodon.social avatar

@annaleen
@simon

"...#Drug companies cannot sell people new medicines without first subjecting their products to rigorous safety checks. #Biotech labs cannot release new #viruses into the public sphere in order to impress shareholders with their wizardry. Likewise, A.I. systems with the power of GPT-4 and beyond should not be entangled with the lives of billions of people at a pace faster than cultures can safely absorb them."

#ChatGPT
#AI

HistoPol,
@HistoPol@mastodon.social avatar

@annaleen
@simon

"...In the beginning was the word. #Language is the operating system of human #culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. #AI’s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers."

HistoPol,
@HistoPol@mastodon.social avatar

@annaleen
@simon

"...#AI could rapidly eat the whole of human #culture — everything we have produced over thousands of years — digest it and begin to gush out a flood of new cultural artifacts. Not just school essays but also political speeches, ideological manifestos, holy books for new cults. By 2028, the U.S. presidential race might no longer be run by humans...
That cultural cocoon has hitherto been woven by other humans. What will it be like to..."

HistoPol,
@HistoPol@mastodon.social avatar

@annaleen
@simon

"...experience reality through a prism produced by #nonhuman intelligence?...

"#Terminator...depicted #robots running in the streets and shooting people. #TheMatrix assumed that to gain total control of human society, A.I. would have to first gain physical control of our brains and hook them directly to a computer network.

However, simply by gaining mastery of language, A.I. would have all it needs to contain us in a Matrix-like world of illusions, without shooting..."

HistoPol,
@HistoPol@mastodon.social avatar

@annaleen
@simon

"...anyone or implanting any chips in our brains...

The specter of being trapped in a world of #illusions has haunted humankind much longer than the specter of #AI. Soon we will finally come face to face with #Descartes’s demon, with #Plato’s cave, with the #Buddhist #Maya. A curtain of illusions could descend over the whole of humanity..."

The #AI takeover has already begun:

"#SocialMedia was the first contact between..."

HistoPol,
@HistoPol@mastodon.social avatar

@annaleen
@simon

"...#AI and #humanity, and humanity lost. First contact has given us the bitter taste of things to come. In #SocialMedia, primitive A.I. was used not to create content but to curate user-generated content. The A.I. behind our news feeds the [#algorithm] is still choosing...[our virtually reality for us]..."

HistoPol,
@HistoPol@mastodon.social avatar

@annaleen
@simon

"...While very primitive, the #AI behind #SocialMedia was sufficient to create a curtain of illusions that increased societal #polarization, undermined our mental health and unraveled democracy. Millions of people have confused these #illusions with reality. [Willingly, and often even fanatically.]

The United States has the best information technology in history, yet U.S. citizens can...

HistoPol,
@HistoPol@mastodon.social avatar

@annaleen

@simon

... no longer agree on who won elections...

Large language models [#LLMs] are our second contact with A.I. We cannot afford to lose again.

/END

#ai #socialmedia #polarization #illusions
