ErikJonker,
@ErikJonker@mastodon.social

"If it does turn out to be anything like human understanding, it will probably not be based on LLMs.
After all, LLMs learn in the opposite direction from humans. LLMs start out learning language and attempt to abstract concepts. Human babies learn concepts first, and only later acquire the language to describe them."
https://www.sciencenews.org/article/ai-large-language-model-understanding
#AI #LLM #AGI

MisuseCase,
@MisuseCase@twit.social

@ErikJonker Arguably LLMs don’t even “learn” language. They have kind of a probabilistic, entropy-reducing model of language and can arrange words that way. They don’t really “know” what the words “mean.”
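
To make the "probabilistic model of language" point concrete, here is a minimal sketch of next-token sampling. The vocabulary and probabilities are made up for illustration; they are not any particular model's actual distribution.

```python
import random

# Toy next-token distribution, purely illustrative: an LLM assigns a
# probability to every candidate token given the preceding context.
context = "The cat sat on the"
next_token_probs = {
    "mat": 0.62,      # high probability: common continuation
    "floor": 0.21,
    "roof": 0.09,
    "piano": 0.05,
    "justice": 0.03,  # low probability: grammatical but unlikely
}

# Sampling picks a continuation in proportion to those probabilities.
# Nothing here requires the model to "know" what a mat is, only which
# word sequences are statistically likely given the context.
tokens, weights = zip(*next_token_probs.items())
choice = random.choices(tokens, weights=weights, k=1)[0]
print(f"{context} {choice}")
```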

ErikJonker,
@ErikJonker@mastodon.social

@MisuseCase True, but in practice that is not a problem; LLMs are, not surprisingly, quite good at language-related tasks. Let's just not call it intelligence, but very, very good tooling.
