happyborg,
@happyborg@fosstodon.org

By being tuned to aim for plausibility rather than correctness, LLMs have in effect been tuned for deceit.

They produce the most plausible-sounding response regardless of whether it is correct, which makes incorrect and misleading output hard to spot.

#LLMs are inherently dangerous in the hands of humans because they are designed to bypass our critical faculties.

What could possibly go wrong?!

#LLM #AI
