🤖 I don't want Meta training its AI on my Instagram content. So I filed an objection, successfully.
👉 Here's how: On your own account, open the burger menu at the top right, select "About" (at the very bottom), go to the "Privacy Policy," tap the phrase "right to object," fill out the form, submit it, and verify your email address.
⏳ My objection was confirmed by email within a few minutes.
@sudelsurium I completely understand the former, and I wish decentralized networks could finally become a real alternative, so that you'd find your target audience there. Unfortunately, that doesn't work for everyone yet. 😭
I don't trust the tech bros an inch — see #TESCREAL.
I'm truly, deeply alarmed at how the tech industry is trying to insert itself into every human interaction, getting between people in every possible relationship, and thinks that's "better" while absolutely destroying everything that makes society work.
The answer is MORE human-to-human interaction, not LESS. FFS.
(screenshot from a substack that landed in my inbox, but you can see this same ethos everywhere, including strained attempts to portray chatbots with "theories of the mind")
The "safety" team were the more fanatical doomers, but the rest of OpenAI is still a cult building their BS god, AGI. Reporters aren't reading up on #TESCREAL, so they're missing the real story here. At least Axios links to AGI skeptic Gary Marcus.
Wild covers quite a few angles, but the ones that really struck me were the affinities those pursuing AGI (Artificial General Intelligence) apparently have with the ideas of:
#AI #TESCREAL #SiliconValley #BigTech: "So there's this long tradition of consulting people who use technologies to find out what they need, and to find out why technology does or doesn't work for them. And the big message there was that technologists are probably more ill-equipped to understand that than average people, and to see the industry swing back towards tech authority and tech expertise as making decisions about everything, from how technology is built to what future is the best for all of us, is alarming in that sense.
So we can draw from things like user-centered research. This is how I concluded the paper, is just pointing to all the processes and practices we could start using. There's user-centered research, there's participatory processes, there's... Policy gets made often through consulting with groups that are affected by systems, by policies. There are ways of designing technology so that people can feed back straight into it, or we can just set in some regulations that say, in certain cases, it's not acceptable for technology to make a decision.
I think some of what we have to do is get outside of the United States, because some of the more human rights oriented or user-centered policymaking is happening elsewhere, especially in Europe."
And I'm missing one point here: how much of this do these 'cheerleader' types in the photo really believe, and how much are they just faking to push their ideology to the masses?
I'll read the study to find out more. Thanks for the link!
「 A more imminent threat, he told the Times, is the one posed by American AI giants to cultures around the globe. “These models are producing content and shaping our cultural understanding of the world,” Mensch said. “And as it turns out, the values of France and the values of the United States differ in subtle but important ways.” 」
Does sci-fi shape the future? Tech billionaires from Bill Gates to Elon Musk have often talked about the impact of novels they read as teens, from Neal Stephenson's "Snow Crash" to Iain M. Banks' "Culture" series. Big Think's Namir Khaliq spoke to authors including Andy Weir, Lois McMaster Bujold, @cstross and @pluralistic about how much impact they think science fiction has had, or can have.
I'm not sure "most of us think this way about the world we want for our kids"... at least I don't. Not at all. I find this toxic optimist "vision" utterly naive & disgusting. /1
#AGI #LongTermism #EffectiveAltruism #TESCREAL #Eugenics: "The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI." https://firstmonday.org/ojs/index.php/fm/article/view/13636