ct_bergstrom,

One of the decisive moments in my understanding of #LLMs and their limitations was when, last autumn, @emilymbender walked me through her Thai Library thought experiment.

She's now written it up as a Medium post, and you can read it here. The value comes from really pondering the question she poses, so take the time to think about it. What would YOU do in the situation she outlines?

https://medium.com/@emilymenonbender/thought-experiment-in-the-national-library-of-thailand-f2bf761a8a83

arildsen,
@arildsen@fosstodon.org

@ct_bergstrom @emilymbender I am not convinced. I think the thought experiment relies too much on analogy to what a person would be capable of in that situation. An LLM is not a human brain and so does not process information in exactly the same way.
A person would surely be maxed out trying to maintain an overview of so many bodies of information at once.

radiotime,

@ct_bergstrom @emilymbender spoiler: "But also: it’s not “intelligent”. Our only evidence for its “intelligence” is the apparent coherence of its output."

mousey,
@mousey@seattlematrix.org

@ct_bergstrom @emilymbender

TIL the difference between, and relationship of, meaning and form. And now I finally get how words can truly be the mother of ten thousand things, dang.

gerbrandvd,

@ct_bergstrom @emilymbender This article presents a great thought experiment, a next-level Chinese Room.
It clarifies how LLMs really don't have any concept of meaning.

steveediger,

@ct_bergstrom @emilymbender My brother’s opening sentence of his dissertation “A Phenomenology of the Listening Body” states, “Listening begins with the turn of the self’s attentive being in an attitude of receptivity toward the communicative gestures of the other.” 1/x

corbin,

@ct_bergstrom @emilymbender By basic extension, a baby with total hearing loss would also not be able to understand that written language. By the sensor-fusion argument, no baby can acquire understanding of any language with an audiovisual homomorphism. Thus, no human understands any natural language.

A reasonable conclusion, then, is that humans don't understand anything. This matches both my experience and the typical Chinese Room analysis. Y'all want to remind us that LLMs aren't people, but you need to not glorify meatbags either. We are walking collections of memes, not necessarily acting according to any particular justification.

militant_dilettante,

@ct_bergstrom @emilymbender Wow, that's actually a very good and useful analogy.

sj,
@sj@social.scriptjunkie.us

@ct_bergstrom @emilymbender @thedarktangent that's not really true though. While an AI might never taste an apple, the whole field of mathematics is entirely deducible from symbol patterns. There is no external thing that is a "two"; it's a pattern, whether of apples or bits or Thai words. Programs that learn and can apply the rules can be considered to "understand", since the rules and patterns are the essence of math. Some generic pattern-finding programs have even discovered new useful math.

Homomorphiesatz,

@ct_bergstrom @emilymbender The only strategies I can come up with still rely on external knowledge (e.g. looking for maths textbooks). I find the argument very convincing, though I was pretty strongly in the parrots camp to begin with. Also, an LLM not understanding meaning and not being intelligent doesn't preclude it from still being a very useful tool in many situations; it can, however, serve as a guideline as to which situations this tool should and should not be used in.

no1lion99,

@ct_bergstrom oh I love this as I have struggled to explain to non-techy friends what these models do! Thanks Carl!

dryak,
@dryak@mstdn.science

@ct_bergstrom (BTW, a bit off topic, but kudos to the archeologists who, for some languages, had to deal with settings looking like that, mostly relying on gotchas 3, 5 & 6, i.e. relying on whatever external knowledge they could grasp and doing tons of statistics on it.)

claudius,

@ct_bergstrom
@emilymbender @alexglow
A similar thought experiment is Searle's "Chinese Room" experiment.

randulo,

@ct_bergstrom @emilymbender Brilliantly conceived and well stated.

michaelsmith2nd,

@ct_bergstrom @emilymbender The Thai library thought experiment was excellent. Thank you Dr Bender for writing this essay and Dr Bergstrom for sharing.

djdesign,

@ct_bergstrom @emilymbender @timnitGebru I wonder what the output of ChatGPT looks like when you give it a prompt that is "logical nonsense" - correct structure but no coherent meaning. 🤔

dgill,

@ct_bergstrom @emilymbender Love the Thai Library thought experiment. How does it hold up with multimodal LLMs that train on images and video in addition to text? Does "knowing" what a horse looks like give meaning to the word "horse"? What if the model has "seen" people ride them?

isaaccp,

@ct_bergstrom @emilymbender I love how 90% of people responding didn't seem to think for too long before doing so.

jbigham,
@jbigham@hci.social

@ct_bergstrom @emilymbender one thing that is hard for me to wrap my head around -- ok, imagine you're in the Thai library, but you're not human, you're a very powerful statistical likelihood machine… then, at some point, you get introduced back out into the world, where you are presented with contexts very much like the text you were able to observe in the library. I feel like all of our intuitions break down about what happens next.
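(For concreteness, here is a minimal, purely illustrative sketch of such a "statistical likelihood machine" in miniature: a toy bigram model that only ever observes form, never meaning, and extends a prompt by sampling the statistically likeliest next word. The corpus and function names are invented for this sketch; real LLMs are vastly larger and richer, but the point about learning from form alone is the same.)

```python
import random
from collections import defaultdict, Counter

# Toy "library": text the machine can observe, with no access to what any word means.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count which word follows which (a bigram model: the crudest statistical likelihood machine).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(start: str, length: int = 8) -> str:
    """Extend a prompt by repeatedly sampling a likely next word from the observed counts."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        options, counts = zip(*candidates.items())
        words.append(random.choices(options, weights=counts)[0])
    return " ".join(words)

print(continue_text("the"))  # e.g. "the dog sat on the mat . the cat"
```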

redezem,
@redezem@aus.social

@ct_bergstrom @emilymbender This is my new favorite thought experiment, displacing my previous favorite: The Chinese Room Experiment.

phenidone,

@ct_bergstrom @emilymbender this is a re-statement of the Chinese Room thought experiment, no?

https://en.m.wikipedia.org/wiki/Chinese_room

jeffreyearly,

@ct_bergstrom @emilymbender Fantastic post. I was thinking there’s an even more general problem that goes beyond LLMs…the lack of contact with the physical world. Even if an AI has access to illustrations and other info, it would never be able to discern truth without the ability to interact with the physical world.

fanf42,

@ct_bergstrom @dpp @emilymbender this is a very interesting thought experiment, but it will become very blurry with multimodal models which mix text, sound, and images. It is likely possible to derive a complex model of meaning with that (still missing the first-hand causal feedback loop, but hey, we can at least get rich philosophy if not science).
Plus, ChatGPT is already past the Thai library: it is fed with labeled data giving meaning/concepts (in the language of the creators) to strings of words.
Thanks for the link!

steve,
@steve@s.yelvington.com

@ct_bergstrom @emilymbender

My hovercraft is full of eels, khrap.

davva23,

@ct_bergstrom @emilymbender
This reminds me of how ancient Egyptian was only deciphered once the Rosetta Stone was found.

fishidwardrobe,
@fishidwardrobe@mastodon.me.uk

@ct_bergstrom @emilymbender Does anybody ever do that? Isn't all understanding of language ultimately by association with something else you understand? Even our first language – baby learns what "bottle" means while you're feeding them with one?
