Has anyone written about how textual generative AI feels strangely close to toxic masculinity in some respects? The absolute confidence in everything stated, the lack of understanding of the consequences of getting that confidence wrong for important questions, the semi-gaslighty feeling when it “corrects” itself after you call it out on something. It so often feels like talking to someone one would despise and avoid in “real life.” I’m curious if anyone has done some writing on this.
I’ve seen speculation that “this is because the models were trained by white men on data from Reddit and Facebook,” but I wonder whether this is an issue of the training corpus, or just the intrinsic nature of LLMs. Is it possible to create a “hesitant, honest” LLM if you wanted to? I feel naïve asking this way, but that’s the part I don’t fully understand yet.
@mwichary FWIW in my experience ChatGPT (haven't used anything else as much) differs from your typical mansplainer in one respect: instead of doubling down when contradicted it apologises and says "you're absolutely right", then, of course, tells you what you already know, which is kinda mansplainey.
@samueljohnson Yeah, that’s why I didn’t mention mansplaining – that part feels more gaslighty to me. A relationship with truth that’s so loose it doesn’t allow for trust to blossom.
@mwichary when a large part of the conversational corpus of LLMs is taken from places like Twitter and Reddit, then their... flavour of discourse is the output
@NatureMC Thank you for this. I’m aware of a lot of it, but I’m not sure whether this is a question of what the models are trained on, or of the intrinsic nature of the method itself.
I haven’t seen a clear answer to that yet, but I’m sure I just need to keep looking. This feels different from the “why is every person imagined by Midjourney conventionally attractive” problem.
@mwichary Gods yes! I was telling a friend just in the last month that it sounds like a sales guy I used to know, who would overconfidently state a wrong thing, and I’d only have to bring in the one fact that showed it up: "oh! of course, of course, yes, sorry about that, anyway <another wrong thing>"
@Binder @mwichary "Mansplaining as a service":
AI-generated texts sound like a schoolboy caught out on an unfamiliar topic, just rambling everything that comes to his mind, making up facts, in order to get away with having no real understanding of the topic, hoping that the person listening does not pay attention or is even more clueless.