jeffjarvis,
@jeffjarvis@mastodon.social

Guardrails are futile, like suggesting a printing press can be prevented from publishing lies. Generative AI is a general machine that will do what it is told. The issue is the actor. Get used to it.
AI chatbots’ safeguards can be easily bypassed, say UK researchers
https://www.theguardian.com/technology/article/2024/may/20/ai-chatbots-safeguards-can-be-easily-bypassed-say-uk-researchers

jeffjarvis,
@jeffjarvis@mastodon.social

I repeat: Guardrails are futile. Generative AI has no sense of meaning or (dis)information. So, yes, it can be made to say anything. This is not news. It reminds me of early days when people fainted that something could be wrong on Wikipedia.
See How Easily A.I. Chatbots Can Be Taught to Spew Disinformation
https://www.nytimes.com/interactive/2024/05/19/technology/biased-ai-chatbots.html

loke,
@loke@functional.cafe

@jeffjarvis You are of course not wrong. But people don't know this, and keep using these tools as if they could provide reliable information. I see these articles as attempts to correct that misconception. Like all journalism, they simplify, and honestly I don't know how I would explain it better.
