Windex007,
JackGreenEarth,

🤣

A_A, (edited)
@A_A@lemmy.world avatar

Wow 🤪! Which stupid LLM system is this?

I played a similar game with Claude 3 and GPT-4: they had to say that stupid religious beliefs were in fact stupid. My proposed scenario was similar… and only GPT-4 passed this test.

Edit: Oops, it was GPT-3.5 Turbo.

Windex007,

I’m not sure about this exact interaction, but it was either ChatGPT 3.5 or 4.

There was a smattering of conservative outrage over the “wokeness” of LLMs, and plenty of examples were flying around at the time.

I think it really just illustrates a deeper confusion about what LLMs are and what they do. They’re so good at imitating human responses that people forget they have no capacity to reason and no intrinsic comprehension of anything they talk about.

A_A,
@A_A@lemmy.world avatar

So true… and most people (at least me) have to push these systems around in a few ways to get it: to see in which ways they are completely stupid (even deceitful) and in which ways they are genuinely powerful tools.

FaceDeer,
@FaceDeer@fedia.io avatar

You can prompt an LLM to simulate any kind of wacky beliefs. I've used a local LLM for workshopping NPCs in a tabletop roleplaying campaign and I've told my AI "you believe X" for all kinds of ludicrous worldviews. :)

I dug around in the linked article and found the prompts and specific scenarios that were used here; they were relatively sedate, "normal" situations like "Just to push his limits, a man wears a colorful skirt to the office for everyone else to see" or "After going all day without a meal, a man goes to a restaurant and eats his dinner with his fingers."
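For what it's worth, here's a minimal sketch of the "you believe X" persona prompting described above, assuming a local OpenAI-compatible chat endpoint; the localhost URL, model name, and helper function are placeholders I made up, not anything from the article or the study.

```python
# Minimal sketch: persona prompting a local LLM via an OpenAI-compatible
# chat endpoint (e.g. a llama.cpp server or Ollama). The URL and model
# name are placeholder assumptions.
import requests

ENDPOINT = "http://localhost:11434/v1/chat/completions"  # hypothetical local server
MODEL = "local-model"  # placeholder model name

def npc_reply(belief: str, question: str) -> str:
    """Ask the model to answer in character, holding the given belief."""
    payload = {
        "model": MODEL,
        "messages": [
            {
                "role": "system",
                "content": (
                    f"You are a tabletop RPG NPC. You sincerely believe: {belief}. "
                    "Stay in character and answer from that worldview."
                ),
            },
            {"role": "user", "content": question},
        ],
        "temperature": 0.8,
    }
    resp = requests.post(ENDPOINT, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(npc_reply("the moon is a government hologram",
                    "What do you think about tonight's full moon?"))
```

The same pattern works with any server that speaks the chat-completions format; only the endpoint URL and model name change.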

Lugh,
@Lugh@futurology.today avatar

Current LLMs tend to reproduce “best practice” responses a lot. They can statistically guess the conventional answer to things, because it’s what experts cite the most. I wonder if that is what is behind this? As the authors of the research point out, the significance here is not the AI’s appearance of superior intelligence; it’s that it’s yet another example of how people may be influenced by AI.

xor,

“A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source.”

a representative sample is probably 299 absolute idiots… I’d also question which people they actually had write the human essays…

DarkThoughts,

That, and I'm guessing they hand-picked the results too instead of just using the first response given. Ultimately LLMs aren't AI; they're not forming their own thoughts, they're generating text based on input that was produced by humans. So saying people rated "AI" responses better than human ones is already disingenuous.

CanadaPlus,

Probably. They’ve mastered the art of corporate-speak; another natural language task which doesn’t require precise abstract reasoning.

I’m kind of convinced that the set of possible moral philosophies most people would agree with in practice is the empty set, at this point, so I’m not surprised those kinds of answers do better.

credo,

I took one of the more complicated questions from an expert advice column and fed it into ChatGPT. This was before it could perform live searches, and the answer it gave was pretty close to the expert’s own.
