ErikJonker,
@ErikJonker@mastodon.social

Nice paper on the inevitability of hallucinations in LLMs, backed by some simple empirical experiments. TL;DR for those who want it:
"All LLMs will hallucinate."
"Without guardrails and fences, LLMs cannot be used for critical decision making."
"Without human control, LLMs cannot be used automatically in any safety-critical decision-making."
The authors make the important point that this does not make LLMs worthless.
https://arxiv.org/pdf/2401.11817.pdf
#AI #LLM #hallucinations #generativeAI
