gimulnautti, @gimulnautti@mastodon.green

One of the most common #machinelearning #hallucinations is #recommendation engines relentlessly pushing political content at you from influencers you wouldn’t touch with a long stick.

And it’s not just a one-off, it’s constant. When the training data can’t support the inference being asked for, the system hallucinates it.

With recommendations, the only training signal is engagement. With generic #LLMs the parameters are found by the system independently, but the same problem generalises across both.
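To make that concrete, here is a minimal sketch, entirely hypothetical and with made-up features, of a recommender whose only training signal is engagement. Nothing in the objective ever asks whether the user wanted the item, only whether it was clicked:

```python
# Hypothetical sketch: a recommender whose ONLY training signal is engagement
# (clicked / not clicked). Features and data are invented for illustration.
import numpy as np

# Toy item features: [is_political, is_outrage_bait, matches_user_interest]
items = np.array([
    [1.0, 1.0, 0.0],  # political outrage content the user actively dislikes
    [0.0, 0.0, 1.0],  # the on-topic content the user actually came for
    [1.0, 0.0, 0.0],  # bland political content
    [0.0, 1.0, 1.0],  # on-topic content with an outrage framing
])

# Observed engagement: outrage bait gets clicked even by users who hate it.
clicks = np.array([1.0, 1.0, 0.0, 1.0])

# Logistic-regression scorer trained purely on predicted click probability;
# the loss never sees anything except engagement.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(items @ w)))            # predicted click prob
    w -= 0.1 * items.T @ (p - clicks) / len(clicks)   # gradient step, log loss

scores = items @ w
# Ranking is by predicted engagement alone, so outrage-correlated items land
# at or near the top regardless of the user's stated interests.
print("ranking (best first):", np.argsort(-scores))
```

The point of the sketch: because the loss only ever sees clicks, "matches_user_interest" carries no weight beyond its accidental correlation with engagement. That is the recommendation-side version of the hallucination described above.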
