sotneStatue,
@sotneStatue@fosstodon.org

A colleague had some problems in their experiments, and I found out the issue was that they were using #chatGPT for unit conversions. I wonder how many people use it for doing math and trust the results
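
For reference, a deterministic tool gets this right every single time. A minimal sketch in Python with the pint library (the values here are hypothetical, not my colleague's actual data):

import pint

ureg = pint.UnitRegistry()

# Hypothetical measurement, just for illustration.
pressure = 2.5 * ureg.atmosphere
print(pressure.to(ureg.pascal))  # 253312.5 pascal

# Incompatible units raise an error instead of a confident wrong answer.
try:
    pressure.to(ureg.meter)
except pint.DimensionalityError as err:
    print(err)

The conversion factors live in a fixed registry, so the result is reproducible; a language model generates plausible text and can quietly get the number wrong.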

sotneStatue,
@sotneStatue@fosstodon.org

I corrected the problem and explained to them why they shouldn't use it like this, but honestly I don't think they clearly understood what the problem was.

I'm not a programmer and don't understand the details of how #llms work, but I can already see what the lack of even a surface-level understanding can do

sotneStatue,
@sotneStatue@fosstodon.org

Maybe new versions will get better at this and it will become less and less of a problem in the future, but as of now I am deeply skeptical of any "fact" these models give me.

To me, the only use case is summarizing or rearranging text; the information in the text itself must come from me, or at least be on a topic I understand

sotneStatue,
@sotneStatue@fosstodon.org

In related news, look at this new research paper that came out. And one of the reviewers was totally fine with this. https://pubpeer.com/publications/8026AE8D42C97065E13C577DA4F4C7
