drahardja,
@drahardja@sfba.social

LLMs perpetuate biases that exist in their training data. https://mastodon.green/@pvonhellermannn/112455826772366983

loko,

@drahardja Wow. Just thinking out loud here, because your post gave me the idea.

It would be really cool to be able to train an LLM that identifies biases in the data. Then train it on everyone's data so it can recognize our own biases, and then try to correct them.
Of course, this would be done by the people themselves, the ones interested in recognizing their own biases, not as some megacorporation/government dystopia using those biases to control public opinion.
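Something like this toy sketch of the first step, assuming scikit-learn; the example sentences and labels are invented purely for illustration, and a real detector would need a large, carefully audited dataset:

```python
# Toy sketch: train a classifier to flag "biased" phrasing.
# The data below is hypothetical; in practice the labels themselves
# encode whatever the annotators considered biased.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Nurses should be gentle because women are naturally caring.",  # biased
    "Nurses need strong clinical judgment under pressure.",         # neutral
    "Engineers from that country are all poorly trained.",          # biased
    "Engineers are certified through an accreditation process.",    # neutral
]
labels = [1, 0, 1, 0]  # 1 = biased phrasing, 0 = neutral phrasing

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score new text: probability that the model considers it biased.
print(model.predict_proba(["Women are too emotional for management."])[:, 1])
```

Of course the scores only reflect whatever the training labels rewarded, which is probably where the problem starts.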

Now I'm not sure whether this is such a great idea, whether it could ever really happen, or whether it already happened to some degree with the Cambridge Analytica scandal 😅

drahardja,
@drahardja@sfba.social

@loko I think you’ll end up substituting one form of bias for another. Bias is one of those things that is very contextual, and it requires thinking through meaning, knowledge, and intent: all things that LLMs are incapable of processing.
