happyborg,
@happyborg@fosstodon.org

I mean, wut?

User: make a list of things I might find you useful for

Llama: Sure, I'd be happy to help you with that. Please provide me with a list of things or tasks you would like assistance with and we can work together on them.

It took nearly a minute of hammering my eight CPU cores to come up with that. 🤷‍♂️

I think we're safe from #AGI for the foreseeable. Don't listen to twerps like #SamAltman who is just shilling.

#Llama #LLM #AI
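
For reference, a local run like the one happyborg describes is typically done through llama.cpp or its Python bindings. A minimal sketch, assuming the llama-cpp-python package and a hypothetical quantised GGUF model file, pinned to eight CPU threads:

# Minimal sketch of a CPU-only local Llama run. Assumes the llama-cpp-python
# bindings are installed and that the model path below (hypothetical) exists.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-7b.Q4_K_M.gguf",  # hypothetical quantised model file
    n_ctx=2048,     # context window
    n_threads=8,    # pin inference to eight CPU cores, as in the post above
)

# Generation is entirely CPU-bound here, which is why even a short reply
# can take on the order of a minute on commodity hardware.
out = llm(
    "User: make a list of things I might find you useful for\nAssistant:",
    max_tokens=128,
    stop=["User:"],
)
print(out["choices"][0]["text"])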

eludom,

@happyborg re: "a minute of hammering eight CPU cores" ... from what I understand, these things /really/ want to have GPU(s). Different programming model.

My prediction is that we'll start seeing motherboards/chipsets/etc. with integrated GPUs, just as, for instance, we went from systems without ethernet or wifi built in to having them as standard. Not every system will have it, but some will.
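
To illustrate eludom's GPU point: the same local runtime can offload model layers to a GPU when one is available, which typically speeds up generation dramatically compared with a CPU-only run. A rough sketch, again assuming llama-cpp-python (built with GPU support, e.g. CUDA or Metal) and the same hypothetical model file:

# Rough sketch of CPU-only vs GPU-offloaded loading with the same bindings.
from llama_cpp import Llama

# CPU-only: every layer runs on the host cores.
cpu_llm = Llama(model_path="./models/llama-7b.Q4_K_M.gguf", n_threads=8)

# GPU offload: push as many layers as will fit onto the GPU;
# anything left over stays on the CPU.
gpu_llm = Llama(
    model_path="./models/llama-7b.Q4_K_M.gguf",
    n_gpu_layers=-1,  # -1 = offload all layers that fit
)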

happyborg,
@happyborg@fosstodon.org

@eludom that's a good point, although they will have to become universally useful first, and so far they seem to have lots of weaknesses and faults and few practical uses. I expect they will have uses, but I don't know what yet.

eludom,

@happyborg Yeah, well, maybe if a Raspberry Pi comes out with a GPU soon I'll buy one and see if they are good for anything 🙂

eludom,

@happyborg Dunno about Llama, but I'm constantly asking it (ChatGPT) how to do bash and emacs org mode stuff (and other programming stuff, offlineimap config weirdness) and getting answers that are in the ballpark.

There was a day when I could quote chapter and verse of K&R (C). I don't have to do that anymore.

happyborg,
@happyborg@fosstodon.org

@eludom I find it also leads me on wild goose chases trying exactly that. Traditional search works faster atm.

The problem is that it doesn't understand, can't check its own work, and makes things up to fill any gaps.

eludom,

@happyborg my offlineimap is working now. It wasn't yesterday.

Enough there that I'm going to keep giving it a chance.

You have to be skeptical, and it helps to have strong domain knowledge.

eloquence,
@eloquence@social.coop

@happyborg I mean, local LLMs are still of very limited utility. GPT-4 will give an actually useful answer for that kind of Q.

happyborg,
@happyborg@fosstodon.org

@eloquence I realise I'm working with a limited version, but that's what's available locally, and the idea of pushing all manner of queries and dialogue into one of these remote systems is very high risk IMO, so not for me.
