HistoPol,
@HistoPol@mastodon.social

The threat of #AGI

(1/n)

Almost every week now, and despite statements to the contrary by many, the utopias of science fiction are making a leap forward.
Due to all the white noise and the hype, this largely goes unnoticed by most of the general public.

Since I posted my warning in February (https://mastodon.social/@HistoPol/109877181962607380), much has happened.

I see the recent enabling of #AI (https://mastodon.social/@HistoPol/110129405482528991) as a particular threat because it...

HistoPol,
@HistoPol@mastodon.social

(2/n)

...will make the potential learning curve of #AGIs a lot steeper. Why? Well, for three reasons:

  1. Because, after digesting most of the world's information from the #internet and other online sources, and for lack of similar databases from #alien species, there is not much more knowledge to be accumulated ("only" to be processed differently, which is not as "steep").

  2. The #InternetOfThings will provide "almost" infinite (and increasing) data points to be processed.

  3. However,...

HistoPol,
@HistoPol@mastodon.social

(3/n)

...more importantly, #AI can finally learn to differentiate between fiction and reality.

This said, I am "finally" being joined in my skepticism by a renowned #academic from #ComputerScience, whose view coincides with my own (though I do not agree with his choice of fiction; see the boost of an older thread in the follow-up):

"#MIT professor and AI researcher #Max #Tegmark is pretty stressed out about the potential impact of #ArtificialGeneralIntelligence (#AGI)..."

https://time.com/6273743/thinking-that-could-doom-us-with-ai/

HistoPol,
@HistoPol@mastodon.social

(4/n)

...on human society. In a new essay for #Time, he rings the alarm bells, painting a pretty dire picture of a future determined by an #AI that can outsmart us.

"Sadly, I now feel that we're living the movie 'Don't Look Up' for another existential threat: unaligned #superintelligence," #Tegmark wrote, comparing what he perceives to be a lackadaisical response to a growing #AGI threat to director Adam #McKay's popular climate change satire...
A recent survey*..."

*https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

HistoPol,
@HistoPol@mastodon.social

(5/n)

...showed that half of #AI researchers give AI at least ten percent chance of causing #HumanExtinction," the researcher continued.
"Since we have such a long #history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer #AI in a safer direction than out-of-control #superintelligence."

"Think again," he added, "instead, the most influential..."

HistoPol,
@HistoPol@mastodon.social

(6/n)

...responses have been a combination of denial, mockery, and resignation so darkly comical that it's deserving of an Oscar."

In short, according to #Tegmark, #AGI is a very real threat, and human society isn't doing nearly enough to stop it — or, at the very least, isn't ensuring that #AGI will be properly aligned with human values and safety."

I.e. [#Asimov's #LawsOfRobotics at an absolute minimum]

I agree 100% with this analysis, as posted in earlier threads:

"#Tegmark..."

HistoPol,
@HistoPol@mastodon.social

(7/n)

"...goes as far as to claim that "isn't a long-term issue," but is even "more short-term than e.g. climate change and most people's retirement planning."

To support his theory, the researcher pointed to a recent study arguing that #OpenAI's large language model GPT-4 is already showing "sparks" of AGI, and a recent talk given by researcher Yoshua Bengio*."

https://www.youtube.com/watch?v=w92y0YiJA4M&t=164s

[And as I am living proof, you do not even...

HistoPol,
@HistoPol@mastodon.social

(8/n)

...have to be a #ComputerScientist to ascertain that.]

Even without #AGI, "...the current crop of less sophisticated #AIs already poses a threat, from #misinformation-spreading synthetic content to the threat of AI-powered #weaponry...

Although #humanity is racing toward a cliff, we're not there yet, and there's still time for us to slow down, change course and avoid falling off – and instead enjoying the amazing benefits that safe, aligned #AI has to offer,"..."

HistoPol,
@HistoPol@mastodon.social

(9/n)

...Professor #Tegmark writes. "This requires agreeing that the cliff actually exists and falling off of it benefits nobody."

"Just look up!" he added."

Source:
https://futurism.com/mit-professor-agi-dont-look-up





NoctisEqui,

@HistoPol

Maybe #2012 will interrupt all that. Or maybe it will just be #StrangeDays.

Those are 2 DVDs I watched recently, pretty dystopian.
