@creachadair@mastodon.social avatar

creachadair

@creachadair@mastodon.social

A lover of language, a writer of words, a spinner of yarns, and a mangler of bits. A page, torn from a book.


ct_bergstrom, to random
@ct_bergstrom@fediscience.org avatar

For the entire span of my life, people have tried to develop AI systems and anticipated a day when those systems can pass the Turing Test.

Now that day has arrived, and no one seems to care about Turing Tests anymore. Why not?

Is it that

  1. We're not actually there? It would take more than a few simple patches (googling the answers to arithmetic questions!) on top of ChatGPT to pass a Turing Test?

  2. Arriving there makes it clear that the Turing Test never was the right metric?

creachadair,
@creachadair@mastodon.social avatar

@ct_bergstrom I think (2) is closest to the right answer.

danderson, to random
@danderson@hachyderm.io avatar

I'm enjoying how the NTP RFC is just wrong in places. Like, the formula for peer synchronization distance does not incorporate server offset or dispersion, or phi drift, or jitter. If you implemented NTP based on the spec, it would be completely wrong - unless you look at the non-normative reference implementation in the appendix, which includes all this and kinda does a better job of explaining the algorithm in the first place :/
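The gap danderson describes can be sketched in a few lines. This is a hedged illustration, not a working NTP client: the first function is the spec-body style formula (half the round-trip delay plus dispersion), while the second mirrors the shape of the reference implementation's `root_dist()` from the RFC's appendix, which additionally folds in the server's root delay and root dispersion, the phi-drift term for time since the last update, and jitter. The constants and variable names are illustrative approximations of RFC 5905's.

```python
# Illustrative sketch only (names and constants approximate RFC 5905).
PHI = 15e-6      # frequency tolerance (s/s)
MINDISP = 0.005  # minimum dispersion floor (s)

def spec_sync_distance(delay, dispersion):
    """Spec-body style formula: lambda = delta/2 + epsilon.
    Note what is missing: no root delay/dispersion, no phi drift, no jitter."""
    return delay / 2 + dispersion

def impl_root_dist(rootdelay, delay, rootdisp, disp, jitter, age):
    """Shape of the reference implementation's root_dist():
    it also accumulates the server's root statistics, dispersion growth
    (phi * seconds since last update), and measured jitter."""
    return (max(MINDISP, rootdelay + delay) / 2
            + rootdisp + disp + PHI * age + jitter)
```

For the same peer, the two formulas can disagree substantially, which is the point: an implementation built only from the normative text would compute a smaller, wrong distance.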

creachadair, (edited)
@creachadair@mastodon.social avatar

It's been my general experience that specs are either rewritten after the first real implementation, or they're wrong in vital ways. The only way, I think, for a spec to reflect the real problem is to feed back from building it.

ct_bergstrom, (edited) to random
@ct_bergstrom@fediscience.org avatar

After hearing Sebastian Bubeck talk about the paper today, I decided to give it another chance.

If it can really reason, it should be able to solve very simple logic puzzles. So I made one up. Sebastian stressed the importance of asking the question right, so I stressed that this is a logic puzzle and didn't add anything confusing about knights and knaves.

Still, it gets the solution wrong.

creachadair,
@creachadair@mastodon.social avatar

@ct_bergstrom The quest to find general intelligence among language models reminds me of nothing so much as the old story about the prisoner who begs the king to give him a year to teach the king's horse to talk.

creachadair,
@creachadair@mastodon.social avatar

@ct_bergstrom Maybe the language models would do better at puzzles involving Bullshit Artists instead. After all, they have demonstrably no compulsion either way about the truth, so perhaps they'd feel more at home among their kinfolk.

creachadair,
@creachadair@mastodon.social avatar

@ct_bergstrom This sounds very much like the test taking strategy I've observed among some students.

phire, to random
@phire@phire.place avatar

deleted_by_author

  • creachadair,
    @creachadair@mastodon.social avatar

    @phire I would try to encourage you to leave yourself space, but as a pot I have no right to comment on the hue of the kettle.

    creachadair,
    @creachadair@mastodon.social avatar

    @phire It has been known to happen. In my defense I do not feel compelled to it; I merely use productivity as a substitute for a social life 😂
