rysiek,
@rysiek@mstdn.social

@sehe no, it only requires the LLM agent to be able to perform any kind of action at all. And without that, the agent is basically useless.

Thing is, LLM chatbots have no way of doing "parametrized prompts", so to speak. Prompt injection is very much a thing, but unlike good old SQL injection, there's no way to actually properly fix it.

Because, again, no way to do parametrized prompts.
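The SQL contrast being drawn here can be sketched in a few lines. This is an illustrative example (using Python's stdlib `sqlite3`, with made-up table and input names), not anything from the thread: parametrized queries give the database driver a channel to bind user input strictly as data, while an LLM prompt has no such channel, so instructions and untrusted input share one token stream.

```python
import sqlite3

# In-memory database with one illustrative row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "alice' OR '1'='1"  # classic injection payload

# Unsafe: string concatenation lets the input rewrite the query itself.
unsafe = f"SELECT * FROM users WHERE name = '{malicious}'"
unsafe_rows = conn.execute(unsafe).fetchall()   # injection succeeds, matches the row

# Safe: the '?' placeholder binds the payload as a plain string literal.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()                                    # payload matches nothing

# With an LLM there is no equivalent of '?': "parametrizing" a prompt
# is still just concatenating untrusted text into the instructions.
prompt = "Summarize the following user message:\n" + malicious
```

The point of the sketch: the fix for SQL injection lives in the driver's binding mechanism, and LLMs simply don't have one to offer.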

You seem to think writing software "spectacularly badly" doesn't happen often… :blobcatcoffee:
