DrALJONES, to random
@DrALJONES@mastodon.social avatar

Inside #TESCREAL - the new techno-religion for billionaires

Humans merging consciousness with machines & claiming to know what's best for trillions of hypothetical people living in the future...

https://yewtu.be/watch?v=_ZF8FTdnfEQ

Elon Musk & Peter Thiel are obsessed with this tech version of neo-fascism.

"Even if billions perish in the coming climate catastrophe... we shouldn’t be too concerned".

Only the rich deserve to survive; we the people are just "useless feeders".

https://www.salon.com/2022/08/20/understanding-longtermism-why-this-suddenly-influential-philosophy-is-so/



selzero, to random
@selzero@syzito.xyz avatar

It should be no surprise to anyone that criminals roll in gangs.

NatureMC,
@NatureMC@mastodon.online avatar

@liskin Such statements make life difficult for genuinely mentally ill people.

No, this guy is fully responsible for his actions; there is no treatable illness behind it, but a fascist ideology named #TESCREAL. https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/ @selzero

vitriolix, to ai
@vitriolix@mastodon.social avatar

I just heard Sal Khan on KQED talking about Khan Academy's new AI feature. They have it mentoring students and doing Socratic-method exercises with them.

It will guide you, to lead yourself, to find the answer that there are 2 r's in "strawberry"

#ai #llm

vitriolix,
@vitriolix@mastodon.social avatar

That said, Khan actually spoke about the risks and harms of AI in a way that seemed pretty convincing and not at all #TESCREAL

sudelsurium, to Instagram German

🤖 I don't want Meta training its AI on my Instagram content. That's why I successfully objected.

👉 Here's how: on your own account, open the burger menu at the top right and select "Info" (at the very bottom), go to the "Privacy Policy" there and tap the words "right to object", fill out the form, send it off, and verify your email address.

⏳ My objection was acknowledged by email within a few minutes.

https://www.verbraucherzentrale.de/aktuelle-meldungen/digitale-welt/ihre-daten-bei-facebook-und-instagram-fuer-ki-so-widersprechen-sie-95646

#ki #verbraucherschutz #instagram

NatureMC,
@NatureMC@mastodon.online avatar

@sudelsurium I completely understand the first part, and I wish decentralized networks could finally become an alternative, because that's where you find your target audience. Unfortunately, that doesn't work for everyone yet. 😭
I don't trust the tech bros an inch; keyword: #TESCREAL.

jeffjarvis, to random
@jeffjarvis@mastodon.social avatar

Meta’s new AI council is composed entirely of white men https://techcrunch.com/2024/05/22/metas-new-ai-council-is-comprised-entirely-of-white-men/

NatureMC, (edited )
@NatureMC@mastodon.online avatar

@jeffjarvis Does that surprise anyone? It's perfect #TESCREAL.

https://www.scientificamerican.com/article/tech-billionaires-need-to-stop-trying-to-make-the-science-fiction-they-grew-up-on-real/ by @cstross

For more information, @timnitGebru on https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599 (about Zuckerberg's AGI work: section 5, "From transhumanism to AGI", especially 5.3, last paragraph)

jeffjarvis, to random
@jeffjarvis@mastodon.social avatar

The Guardian laps up the nutball doomsterism of Max Tegmark, arguing the opposite of what is true: his talk of extinction from AI is macho distraction from real and present issues. God, I wish reporters would just Google before reporting on the AI boys.
https://www.theguardian.com/technology/article/2024/may/25/big-tech-existential-risk-ai-scientist-max-tegmark-regulations

alper, to random
@alper@rls.social avatar

Who ever could have predicted that AI was going to be the technology of white supremacy?

With an advisory group like this it seems like bad outcomes are more or less guaranteed.

susankayequinn, to random
@susankayequinn@wandering.shop avatar

I'm truly, deeply alarmed at how the tech industry is trying to insert itself in every human interaction, getting between humans in every possible relationship, and they think that's "better" while absolutely destroying everything that makes society work.

The answer is MORE human-to-human interaction not LESS. FFS.

(screenshot from a substack that landed in my inbox, but you can see this same ethos everywhere, including strained attempts to portray chatbots with "theories of the mind")

NatureMC,
@NatureMC@mastodon.online avatar

@susankayequinn I co-sign that! (Some time ago I read about the disaster of a caregiving AI robot: the results were dismal.)

It's interesting to look at the "philosophy", or rather non-ethics, behind that. Often shortened as #TESCREAL: it's deeply fascist, eugenicist thinking, and therefore always anti-life, anti-humanity. https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/

jeffjarvis, (edited ) to random
@jeffjarvis@mastodon.social avatar

Damnit. I wish journalists would do their homework on #TESCREAL and AI. OpenAI people are all believers in the BS of AGI & so-called x-risk; the "safety" people are just more fervent believers. They're all full of it. They are the danger.
A Safety Check for OpenAI https://www.nytimes.com/2024/05/20/business/dealbook/openai-leike-safety-superalignment.html?smid=tw-share

paninid, to random
@paninid@mastodon.world avatar

There is a lot of alignment between the Dominionists and the #TESCREAL crowd.

jeffjarvis, to random
@jeffjarvis@mastodon.social avatar

It wasn't the safety team. It was the doom team. AI is a hall of mirrors....
OpenAI Reportedly Dissolves Its Existential AI Risk Team
https://gizmodo.com/openai-reportedly-dissolves-its-existential-ai-risk-tea-1851484827

jeffjarvis,
@jeffjarvis@mastodon.social avatar

The "safety" team were the more fanatical doomsters, but the rest of OpenAI is still a cult building their BS god, AGI. Reporters aren't reading up on #TESCREAL, and so they are missing the real story here. At least Axios links to AGI skeptic Gary Marcus.

OpenAI's safety dance

https://www.axios.com/2024/05/20/openai-safety-jan-leike-sam-altman

grumpybozo, to DoctorWho
@grumpybozo@toad.social avatar

Is #TESCREAL selling to the masses?

(No, of course not. OMG I hope not.) https://mas.to/@gavinwinters/112463488755833958

urlyman, to random
@urlyman@mastodon.social avatar

Cloud seeding:

The rise of storage on the World Wide Web.

To fuel the training of AI.

To make it rain technofeudalism,
with the growing likelihood of hailstorms of eugenics

https://overcast.fm/+nh1Av4Rz4

urlyman,
@urlyman@mastodon.social avatar

@NatureMC It is the same as that video.

Wild covers quite a few angles, but the ones that really struck me were the affinities those pursuing AGI (Artificial General Intelligence) apparently have with the ideas of the #TESCREAL bundle, which @timnitGebru and @xriskology have written about: https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #TESCREAL #SiliconValley #BigTech: "So there's this long tradition of consulting people who use technologies to find out what they need, and to find out why technology does or doesn't work for them. And the big message there was that technologists are probably more ill-equipped to understand that than average people, and to see the industry swing back towards tech authority and tech expertise as making decisions about everything, from how technology is built to what future is the best for all of us, is alarming in that sense.

So we can draw from things like user-centered research. This is how I concluded the paper, is just pointing to all the processes and practices we could start using. There's user-centered research, there's participatory processes, there's... Policy gets made often through consulting with groups that are affected by systems, by policies. There are ways of designing technology so that people can feed back straight into it, or we can just set in some regulations that say, in certain cases, it's not acceptable for technology to make a decision.

I think some of what we have to do is get outside of the United States, because some of the more human rights oriented or user-centered policymaking is happening elsewhere, especially in Europe."

https://www.techpolicy.press/podcast-resisting-ai-and-the-consolidation-of-power/

knittingknots2, to random
@knittingknots2@mstdn.social avatar

Who believes the most "taboo" conspiracy theories? It might not be who you think | Salon.com

https://www.salon.com/2024/05/05/believes-the-most-taboo-conspiracy-theories-it-might-not-be-you-think/

NatureMC,
@NatureMC@mastodon.online avatar

@knittingknots2 When these people call themselves "extremely liberal", it's interesting to look into the research on #TESCREAL: https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/

And one point I find missing here: how much of this do these 'cheerleader' types in the photo really believe, and how much are they just faking to push their ideology to the masses?

I'll read the study to find out more. Thanks for the link!

shawnmjones, to Humanism
@shawnmjones@hachyderm.io avatar

I'm working through my thoughts on #TESCREAL, very concerned about the loss of #Humanism.

I've read "What We Owe the Future" by William MacAskill.

I'm a subscriber of the following Podcasts:

  • “The Tech Won’t Save Us”
  • “Mystery AI Hype Theater 3000”

I follow the (sometimes disturbing) subreddits:

  • r/artificial
  • r/ArtificialIntelligence
  • r/Futurology
  • r/singularity

I'm reading “God, Human, Animal, Machine” by Meghan O’Gieblyn.

Does anyone have any other reading/listening suggestions?

jbzfn, to ai
@jbzfn@mastodon.social avatar

🧠 The Babelian Tower Of AI Alignment
➥ NOEMA

「 A more imminent threat, he told the Times, is the one posed by American AI giants to cultures around the globe. “These models are producing content and shaping our cultural understanding of the world,” Mensch said. “And as it turns out, the values of France and the values of the United States differ in subtle but important ways.” 」

https://www.noemamag.com/the-babelian-tower-of-ai-alignment/

#ai #agi #tescreal

emilymbender, to random
@emilymbender@dair-community.social avatar
NatureMC,
@NatureMC@mastodon.online avatar

@emilymbender I can read only the second headline, and what I read about this decision feels dystopian. It feels so wrong on so many levels.

xynthia, to technologie French

Transhumanism, longtermism… how the "TESCREAL" currents are influencing the development of AI

@technologie
https://piaille.fr/@mart1oeil/112336068030361562
mart1oeil@piaille.fr -

https://next.ink/135681/transhumanisme-long-termisme-comment-les-courants-tescreal-influent-le-developpement-de-lia/

a second article on the subject by @mathildesaliou


parismarx, to tech
@parismarx@mastodon.online avatar

Transhumanism is all the rage with tech billionaires pushing mind uploading, AGI, and more. But where do those ideas come from?

On #TechWontSaveUs, I spoke with Meghan O’Gieblyn to discuss the religious roots of transhumanist visions of the future.

https://techwontsave.us/episode/218_the_religious_foundations_of_transhumanism_w_meghan_ogieblyn

#tech #ai #artificialintelligence #neuralink #tescreal

mathildesaliou, to random French
@mathildesaliou@piaille.fr avatar
CultureDesk, (edited ) to books
@CultureDesk@flipboard.social avatar

Does sci-fi shape the future? Tech billionaires from Bill Gates to Elon Musk have often talked about the impact of novels they read as teens, from Neal Stephenson's "Snow Crash" to Iain M. Banks' "Culture" series. Big Think's Namir Khaliq spoke to authors including Andy Weir, Lois McMaster Bujold, @cstross and @pluralistic about how much impact they think science fiction has had, or can have.

https://flip.it/DmHzd2

@bookstodon

NatureMC,
@NatureMC@mastodon.online avatar

@CultureDesk It's a very topical question regarding the sub-genres #CliFi or #ClimateFiction, #solarPunk and #hopepunk. The latter experiments with positive changes.

But it becomes extremely creepy when you take a closer look at how #SiliconValley #billionaires mix sci-fi with radical right-wing eugenics ideas and knit an anti-democratic ideology out of it. Named #TESCREAL, a sort of tech fascism: https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/ and here: https://www.theatlantic.com/magazine/archive/2024/03/facebook-meta-silicon-valley-politics/677168/?gift=pNhm6V1nG5ZO8R8GWle1H01Kw4OvqWH8-6RE146aONg&utm_source=copy-link&utm_medium=social&utm_campaign=share

@pluralistic @bookstodon #bookstodon

yoginho, to Futurology
@yoginho@spore.social avatar

Wow. Mike Levin has finally come out as a full #transhumanist: https://noemamag.com/ai-could-be-a-bridge-toward-diverse-intelligence #TESCREAL

I'm not sure "most of us think this way about the world we want for our kids"... at least I don't. Not at all. I find this toxic optimist "vision" utterly naive & disgusting. /1

remixtures, to Futurology Portuguese
@remixtures@tldr.nettime.org avatar

#AGI #LongTermism #EffectiveAltruism #TESCREAL #Eugenics: "The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI." https://firstmonday.org/ojs/index.php/fm/article/view/13636

drahardja, to random
@drahardja@sfba.social avatar

Any day when a #TESCREAL proponent loses their funding and platform is a good day.

“Oxford shuts down institute run by Elon Musk-backed philosopher”

https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes
