baldur,
@baldur@toot.cafe avatar

Your reminder that I've written a book on the business risks of generative AI. "The Intelligence Illusion"

https://illusion.baldurbjarnason.com/

Stuff I cover (not exhaustive):

  • AGI is not happening any time soon.
  • AGI and anthropomorphism will cripple your ability to think clearly about AI.
  • The AI industry has a long history of snake oil and fraud.
  • These models copy more than you think.
  • Hallucinations are still a thing and aren't going away.
  • AI "reasoning" is quite broken.
  • Security is a shit show.
f4grx,
@f4grx@chaos.social avatar

@baldur Thank you for this. I will pass it on.

baldur,
@baldur@toot.cafe avatar

@f4grx 👍🏻

singletona,

@baldur As someone who is deeply interested in AI and can see massive potential for a net good in the development of a more intelligent system as an aid to people?

AI, especially in the space of art generation and ChatGPT-style language modeling, is a mirage that carnival barkers are trying to sell as the real thing.

asbestos,
@asbestos@toot.community avatar

@baldur
Isn't "The intelligence illusion" when someone like Bill Barr wears glasses to try and look intellectual?

dpwiz,
@dpwiz@qoto.org avatar

@baldur I got interested in the “no AGIs” thesis, but there’s nothing like that in the ToC. Am I missing something?

baldur,
@baldur@toot.cafe avatar

@dpwiz I don’t really spend any time debunking AGI except insofar as I cover it in an early chapter, which happens to be available online as well: https://softwarecrisis.dev/letters/ai-bird-brains-silicon-valley/

AGI doesn’t exist yet and probably won’t ever, so I didn’t want to spend much time on it in a practical book about business risks.

dpwiz,
@dpwiz@qoto.org avatar

@baldur I’ve only skimmed and searched for a few keywords, but the overall point of the article is “LLMs are not AGIs” (on which I totally agree).

Unfortunately I didn’t find the “AGIs are impossible” parts. Well, given the overall focus on the status quo, that’s fair. Thanks, anyway!

specwill,

@baldur
also:
You can always suppress a human's wage growth potential, but publicly-traded corporations exist solely to grow profits. If they manage to displace enough human labor to corner certain jobs, they can keep charging more and more. If AI persists, I'd guess that in less than ten years it'll cost more than the equivalent human labor.

resuna,
@resuna@ohai.social avatar

@baldur I can't think of AI "reasoning" as anything more than the ability of humans to see patterns that aren't actually there.

gregeganSF,
@gregeganSF@mathstodon.xyz avatar
resuna,
@resuna@ohai.social avatar

@gregeganSF @baldur And it's not "accidental". Since Eliza, chatbots have been trying to "pass the Turing Test" by tricking humans into falling for the con. Weizenbaum found people's tendency to anthropomorphize Eliza disturbing, but that didn't stop other researchers from enthusiastically adopting deception by design.

The result: chatbots that are designed to fool humans.

johncormier,

@baldur looks very interesting… is it available as a regular, i.e. printed, book? I don’t read books any other way

baldur,
@baldur@toot.cafe avatar

@johncormier No, not yet. It’s something I plan on doing but have no firm timeline on yet.

johncormier,

@baldur cool, I’ll be first in line when it comes out in print!

robertpi,
@robertpi@functional.cafe avatar

@baldur
Thank you, looks really interesting.

stemurray,

@baldur
Does conferring the ability to hallucinate not add to the anthropomorphism issue? Would it be better just to use the language of software, e.g. bugs and errors?

baldur,
@baldur@toot.cafe avatar

@stemurray It definitely does, and it's an issue with almost every term used in AI.

The dilemma is that avoiding their terms makes it harder for you to refer to research from the AI field to back your argument, but using the terms implicitly supports the AI field's worst tendencies.

It's tricky, but I try to address it in the book by pointing out that the terms are inaccurate wishful mnemonics and replacing them with more accurate terms when I can.

baldur,
@baldur@toot.cafe avatar

@stemurray The term I try to use in the book is "fabrications" because software terms are just as inaccurate. Everything generative AI outputs is a fabrication. It just so happens that the most probable fabricated output sometimes contains facts.

To call a hallucination a bug is a fundamental misrepresentation of how these systems work that's in many ways just as bad as the hallucination term itself.

baldur,
@baldur@toot.cafe avatar

@stemurray Mostly in that bugs are usually fixable. The only way to fix the "hallucination" issue is to invent a completely new kind of model.

strong_sue,

@baldur definitely going to get your book as I have concerns about AI. The only thing I might try is Midjourney, to see how creative it is at generating images. I would not trust it for facts or writing; I prefer experts in their field for facts and human creativity for writing. I read about one AI that had issues (Bing, I think), so no, I won't interact with it.

simonzerafa,

@baldur

@riskydotbiz @thegrugq

Well at least I don't have to write this now 🙂🤷‍♂️

thegrugq,

@simonzerafa @baldur @riskydotbiz sweet

rticks,

@baldur

I look forward to the copyright lawsuits crippling the tech lords

dauwhe,

@baldur I just bought the book this morning and it's exactly what I needed. I have not seen a clearer description of how generative AI works, what it might be good for, and what the risks are. The references alone are worth the price of the book.

baldur,
@baldur@toot.cafe avatar

@dauwhe So glad to hear that! Thank you 👍

alexthurow,

@baldur Done deal ✅. Thanks for the book - looking forward to its contents… 🤓

alexthurow,

@baldur … oh, and there is more: https://store.baldurbjarnason.com/
So many books, so little time 🫠!

blabberlicious,

@baldur “AGI and anthropomorphism will cripple your ability to think clearly about AI”

💯 This.
A Confederacy of Tech Dunces seems to be writing about their ‘encounters’ with AI.
