LLMs

markhughes,
@markhughes@mastodon.social avatar

Using for technical design is like asking Picasso to design aircraft.

Engineer: more wings please. No, not that many, and make them symmetrical this time. No, no, no...

josemurilo,
@josemurilo@mato.social avatar

"the worst part is that when they [] can’t complete a task confidently, they don’t give you an error or tell you they’re unable to finish. They make something up and serve you incorrect information.
…companies like are pretending this isn’t a problem and pushing these systems toward taking over as our phones’ virtual assistants and the brains behind our online searches."

https://www.computerworld.com/article/2117752/google-gemini-ai.html

angusm,
@angusm@mastodon.social avatar

It's fashionable to criticize , but can you think of another human invention that allows us to spend the energy budget of Tanzania to lift shitposts out of context and present them as if they were authoritative knowledge?

prefec2,
@prefec2@norden.social avatar

@LouisIngenthron @angusm very good argument.

prefec2,
@prefec2@norden.social avatar

@failedLyndonLaRouchite @angusm I checked the article. It refers for example.to cancer detection, but it does not state that it uses LLMs. The described setup rather hints for some neural network processing a few measurements as inputs.

kellogh,
@kellogh@hachyderm.io avatar

i’m very excited about the interpretability work that #anthropic has been doing with #LLMs.

in this paper, they used classical machine learning algorithms to discover concepts. if a concept like “golden gate bridge” is present in the text, then they discover the associated pattern of neuron activations.

this means that you can monitor LLM responses for concepts and behaviors, like “illicit behavior” or “fart jokes”

https://www.anthropic.com/research/mapping-mind-language-model

kellogh,
@kellogh@hachyderm.io avatar

this is great work. i’m excited to see where this goes next

i hope #anthropic exposes this via their API. at this point in time, most of the promising interpretability work is only available on open source models that you can run yourself. it would be great to also have them available from #AI vendors

Lobrien,

@kellogh This does, of course, imply vastly easier subversion of guardrails. Bad actors will have an easier time manipulating bias.

openwebsearcheu,
@openwebsearcheu@suma-ev.social avatar

💭 Dreaming of in Europe
👉 The German Science Journal „Spektrum.de“ writes about the OWS.EU project & the challenge of creating a European as a foundation for , & special interest applications.

„So far, 1.3 billion URLs in 185 languages, totaling 60 terabytes, have been crawled and indexed“ says project lead Michael Granitzer in the article.

Find out more about potential future applications & OWS.EU´s unique approach:

https://openwebsearch.eu/the-dream-of-an-open-search-engine-i-spektrum-de/

kellogh,
@kellogh@hachyderm.io avatar

if i had more time, i'd love to investigate PII coming from . i've seen it generate phone numbers and secrets, but i wonder if these are real or not. i imagine you could look at the logits to figure out if phone number digits were randomly chosen or if the sequence is meaningful to the LLM. anyone aware of researchers who have already done this?

kellogh,
@kellogh@hachyderm.io avatar

i would guess that phone numbers are probably mostly random, since so many phone numbers are found online, whereas AWS keys are less common, so you're probably more likely to get partial or even full real keys

Lobrien,

@kellogh Someone claimed that a long magic number used in their highly-optimized (FFT?) code was spit out by Copilot. (This was soon after release.) The constant was arrived at by long fine-tuning, not conceptual in any way.

scottjenson,
@scottjenson@social.coop avatar

Saying "LLMs will eventually do every job" is a bit like:

  1. Seeing Wifi wireless data
  2. Then predicting "Wireless" Power saws (no electrical cord or battery) are just around the corner

It's a misapplication of the tech. You need to understand how work and extrapolate that capability. It's all text people. Summarizing, collating, template matching. All fair game. But stray outside of that box and things get much harder.

mattwilcox,
@mattwilcox@mstdn.social avatar

@scottjenson Tbh I’m not convinced on any of those either. Again; because it’s a bias machine to expose the average. It’s useless at anything else. It won’t give you modern css. It won’t look at your own code base to avoid duplication. If you’re less than average skill it may give you beneficial things; maybe. There are no “smarts” at all, just stats. In the medical field it’s diagnosed scans of plastic toys as having cancer. I wouldn’t want a dr to be leaning on this stuff.

miki,
@miki@dragonscave.space avatar

@scottjenson Saying "we will once be able to fly from New York to Paris" is like seeing the contraption that the Wright brothers have just designed and extrapolating a jet engine.

fizise,
@fizise@sigmoid.social avatar

Nice example of how important emphasis can be for language understanding. Depending on which word in the sentence below is emphasized, it completely changes its meaning.
For #LLMs (and for our #ise2024 lecture) this means that learning to understand language purely from written text is probably not an "easy" task....

Picture from Brian Sacash, via LinkedIn, cf. https://www.linkedin.com/feed/update/urn:li:activity:7195767258848067584/

#nlp #languagemodel #computationallinguistics @sourisnumerique @enorouzi @shufan @lysander07

tayarndt,
@tayarndt@techopolis.social avatar
CatherineFlick,
@CatherineFlick@mastodon.me.uk avatar

Just FYI, if you have older parents or other family members, set up some sort of shibboleth with them so they know what to ask you if you ever call them asking for something. These new generative models are going to be extremely convincing, and the idiots in charge of these companies think they can use guardrails to stop it being used inappropriately. They can't.

vicki,
@vicki@jawns.club avatar

The most interesting stuff in #LLMs right now (to me) is:

  • figuring out how to do it small
  • figuring out how to do it on CPU
  • figuring out how to do it well for specific tasks
webology,
@webology@mastodon.social avatar

@vicki I think this is why Ollama has appealed to me. I can run it on my Macs and when paired with Tailscale, I can access it from anywhere.

faassen,
@faassen@fosstodon.org avatar

@janriemer

@vicki

That's funny!

Nonetheless LLMs can do things with language that are interesting that other algorithms struggle with. And getting that behavior smaller and more reliable is useful - even though the small & reliable of classic algorithms may never be equalled

smach,
@smach@masto.machlis.com avatar

“The general problem of mixing data with commands is at the root of many of our computer security vulnerabilities.” Great explainer by security researcher Bruce Schneier on why large language models may not be a great choice for tasks like processing your emails.
https://cacm.acm.org/opinion/llms-data-control-path-insecurity/

#GenAI #LLMs #InfoSec

kellogh,
@kellogh@hachyderm.io avatar

@smach yay! i had the same thought a while ago. if you can separate the data & control, you can make it safe

https://timkellogg.me/blog/2024/01/11/application-phishing

kellogh,
@kellogh@hachyderm.io avatar

@smach after writing that, i found out about control vectors, which is sort of close, but the control still goes through the same channel as data https://vgel.me/posts/representation-engineering/#Control_Vectors_v.s._Prompt_Engineering

ai6yr,
kellogh,
@kellogh@hachyderm.io avatar

i used an analogy yesterday, that #LLMs are basically system 1 (from Thinking Fast and Slow), and system 2 doesn’t exist but we can kinda fake it by forcing the LLM to have an internal dialog.

my understanding is that system 1 was more tuned to pattern matching and “gut reactions”, while system 2 is more analytical

i think it probably works pretty well, but curious what others think

Lobrien,

@kellogh I use that exact analogy. And emphasize that we certainly do use and need System 2 at least occasionally. At some point, human-like reasoning must use symbols with definite, not probabilistic, outcomes. Can that arise implicitly within attention heads? Similar to embeddings being kinda-sorta knowledge representation? I mean, maybe? But it still seems hugely wasteful to me. I still tend towards neuro-symbolic being the way.

kellogh,
@kellogh@hachyderm.io avatar

@Lobrien i would have written the same thing but you beat me to it

kellogh,
@kellogh@hachyderm.io avatar

has anyone made a successor to fuckit.js that uses #LLMs?

(fuckit.js ran the script in a loop, randomly deleting lines until it runs successfully)

kellogh,
@kellogh@hachyderm.io avatar

@wagesj45 right, but it’s gotta be haphazard

wagesj45,
@wagesj45@mastodon.jordanwages.com avatar

@kellogh I wonder how it could be done... 🤔

Just randomly zero out a vector component maybe. We should ask ChatGPT lol.

  • All
  • Subscribed
  • Moderated
  • Favorites
  • LLMs
  • khanakhh
  • magazineikmin
  • osvaldo12
  • GTA5RPClips
  • mdbf
  • Youngstown
  • tacticalgear
  • slotface
  • rosin
  • kavyap
  • ethstaker
  • everett
  • thenastyranch
  • DreamBathrooms
  • megavids
  • InstantRegret
  • cubers
  • normalnudes
  • Leos
  • ngwrru68w68
  • cisconetworking
  • modclub
  • Durango
  • provamag3
  • anitta
  • tester
  • JUstTest
  • lostlight
  • All magazines