mwichary,
@mwichary@mastodon.online avatar

Has anyone written about how textual generative AI feels strangely close to toxic masculinity in some respects? The absolute confidence in everything stated, the lack of understanding of the consequences of getting that confidence wrong for important questions, the semi-gaslighty feeling when it “corrects” itself when you call it out on something. It so often feels like talking to someone one would despise and avoid in “real life.” I’m curious if anyone did some writing on this.

jwz,
@jwz@mastodon.social avatar

@mwichary HAL confidently incorrecting an increasingly-agitated Scarlett Johansson.

mwichary,
@mwichary@mastodon.online avatar

Thank you to everyone for responding. Some people suggested these two pieces:

https://futurism.com/artificial-intelligence-automated-mansplaining-machine

https://www.wired.com/story/generative-ai-totally-shameless/

I’ve seen speculation that “this is because the models were trained by white men on data from Reddit and Facebook,” but I wonder whether this is an issue of the training corpus or the intrinsic nature of LLMs. Is it possible to create a “hesitant, honest” LLM if you wanted to? I feel naïve asking this way, but that’s the part I don’t fully understand yet.

samueljohnson,
@samueljohnson@mstdn.social avatar

@mwichary FWIW in my experience ChatGPT (haven't used anything else as much) differs from your typical mansplainer in one respect: instead of doubling down when contradicted it apologises and says "you're absolutely right" -- then, of course, tells you what you already know, which is kinda mansplainey.

mwichary,
@mwichary@mastodon.online avatar

@samueljohnson Yeah, that’s why I didn’t mention mansplaining – that part feels more gaslighty to me. A relationship with truth that’s so loose it doesn’t allow for trust to blossom.

samueljohnson,
@samueljohnson@mstdn.social avatar

@mwichary I like it. I will think of it as "gasjective".

Someone I knew long ago: "let's be subjective about this"
Friend interjects: "surely you mean objective"
SIKLA: "yeah, whatever"

jef,
@jef@mastodon.social avatar

@mwichary I've heard it called mansplaining as a service.

Faintdreams,
@Faintdreams@dice.camp avatar

@mwichary when a large set of the conversational corpus of LLMs is taken from places like Twitter and Reddit, then their ... flavour of discourse is the output

NatureMC,
@NatureMC@mastodon.online avatar

@mwichary Yes, there are studies about the social and ethical impact of biased AI, especially in questions of masculinism, racism, or homophobia. It's a fact that the popular models are trained mainly by men (with the "philosophy"***) on men-dominated content. The latest is this study: https://cepis.org/unesco-study-exposes-gender-and-other-bias-in-ai-language-models/
This test became quite well-known in 2023: https://rio.websummit.com/blog/society/chatgpt-gpt4-midjourney-dalle-ai-ethics-bias-women-tech/

NatureMC,
@NatureMC@mastodon.online avatar

@mwichary *** And it starts with the profession itself: how many women work in AI tech and how are they treated? The newest information: https://www.salon.com/2024/05/21/coercive-climate-of-silicon-valleys-ai-boom-fuels-troubling-parties-researcher-says/

You can search for more studies.

mwichary,
@mwichary@mastodon.online avatar

@NatureMC Thank you for this. I’m aware of a lot of this, but I am not sure whether this is a question of what the models are trained on, or of the intrinsic nature of the method itself.

I have not seen a clear answer to that yet, but I’m sure I just need to keep looking. This feels different than the “why is every person imagined by midjourney someone conventionally attractive” problem.

NanoRaptor,
@NanoRaptor@bitbang.social avatar

@mwichary Gods yes! I was telling a friend only in the last month that it sounds like a sales guy I used to know who would overconfidently state a wrong thing, and I'd just have to bring in that one fact that showed it up: "oh! of course, of course, yes, sorry about that, anyway <another wrong thing>"

adorfer,
@adorfer@chaos.social avatar

@Binder @mwichary "Mansplaining as a service":
AI-generated texts sound like a schoolboy caught red-handed on an unfamiliar topic, rambling out everything that comes to his mind and making up facts, in order to get away with having no real understanding of the topic, hoping that the person listening isn't paying attention or is even more clueless.
