cassidy, to ChatGPT
@cassidy@blaede.family avatar

I was curious if a niche blog post of mine had been slurped up by ChatGPT, so I asked it a leading question—what I discovered is much worse. So far, it has told me to:

• use apt-get on Endless OS
• preview a Jekyll site locally by opening files w/a web browser (w/o building)
• install several non-existent “packages” & extensions

It feels exactly like chatting w/someone talking out of their ass but trying to sound authoritative. It needs to learn to say, “I don’t know.”

ids1024,
@ids1024@fosstodon.org avatar

@cassidy "It needs to learn to say, 'I don’t know.'"

Doing that properly might require... something that isn't an LLM. I'd say the LLM generates something that (statistically) looks like an answer, because that's what it's trained to do.

Actually modeling some understanding of truth and knowledge might be a different and more difficult task than modeling language.
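The "statistically looks like an answer" point can be made concrete with a toy sketch. The example below is purely illustrative (real LLMs are neural networks over subword tokens, not word bigrams, and the corpus here is made up): a model that always emits the most frequent next word produces fluent-looking output with no notion of whether the claim is true.

```python
from collections import Counter, defaultdict

# Toy bigram "language model" (illustrative only, made-up corpus):
# count which word follows which, then greedily emit the most
# frequent follower. The output is fluent but truth-free.
corpus = (
    "the package is installed with apt "
    "the package is built from source "
    "the site is installed with apt"
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def complete(word, length=5):
    out = [word]
    for _ in range(length):
        if word not in counts:
            break
        # Greedy decoding: pick the statistically most likely next word.
        word = counts[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))  # a plausible-sounding sentence, regardless of truth
```

Nothing in the model represents whether a package actually exists or a command actually works; it only knows what words tend to follow other words.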

cassidy,
@cassidy@blaede.family avatar

@ids1024 yeah, fair point. Which is why I try to constantly use “LLM” instead of “AI,” because people seem to miss the “artificial” part of artificial intelligence. It’s artificial in that it is not intelligent!

This race to use LLMs for everything is so misguided; LLMs can be super cool for very specific things like summarizing a long text, typing suggestions, describing images, etc. but I genuinely think that chat model is just a terrible idea that needs to die.

fizise, to LLMs
@fizise@sigmoid.social avatar

Nice example of how important emphasis can be for language understanding. Depending on which word in the sentence below is emphasized, it completely changes its meaning.
For #LLMs (and for our #ise2024 lecture) this means that learning to understand language purely from written text is probably not an "easy" task....

Picture from Brian Sacash, via LinkedIn, cf. https://www.linkedin.com/feed/update/urn:li:activity:7195767258848067584/

#nlp #languagemodel #computationallinguistics @sourisnumerique @enorouzi @shufan @lysander07

scottjenson, to LLMs
@scottjenson@social.coop avatar

Saying "LLMs will eventually do every job" is a bit like:

  1. Seeing WiFi deliver wireless data
  2. Then predicting "wireless" power saws (no electrical cord or battery) are just around the corner

It's a misapplication of the tech. You need to understand how LLMs work and extrapolate from that capability. It's all text, people. Summarizing, collating, template matching. All fair game. But stray outside of that box and things get much harder.

scottjenson,
@scottjenson@social.coop avatar

@mattwilcox Oh, I agree. But there will be domains where it's transformative, e.g. programming. This has completely transformed how I code. I'm still driving, but DAMN is it handy.

I expect it'll transform online help significantly (I hope for the better!) It will likely transform Law in many ways. I don't expect it'll replace lawyers, just allow them to review and output boilerplate much faster.

It's the Gartner hype cycle all over again. People are freaking out and it'll pull back.

mattwilcox,
@mattwilcox@mstdn.social avatar

@scottjenson Tbh I’m not convinced on any of those either. Again; because it’s a bias machine to expose the average. It’s useless at anything else. It won’t give you modern css. It won’t look at your own code base to avoid duplication. If you’re less than average skill it may give you beneficial things; maybe. There are no “smarts” at all, just stats. In the medical field it’s diagnosed scans of plastic toys as having cancer. I wouldn’t want a dr to be leaning on this stuff.

scottjenson, to Figma
@scottjenson@social.coop avatar

I just tried a few AI plugins for Figma and they were all bad. This domain might be a great test for LLMs. I predict these failings are unlikely to be fixed any time soon:

  • Layout was poor
  • They can't create components
  • Laughably complex object hierarchies (everything was enclosed in a frame)

Of course things will improve, but I expect fixing these deep structural problems is a function of many new constraints, likely beyond what today's LLMs are actually capable of. @simon ?

jaseg,
@jaseg@chaos.social avatar

@scottjenson This problem reminds me of AI-generated printed circuit board layouts. A bunch of companies have been trying to get that one right for years now, but the 2D nature and the topological constraints of the interconnections make it really hard to do with AI.

simon,
@simon@simonwillison.net avatar

@scottjenson were these plugins producing visual component output? I'd be surprised to see that work well, current LLMs deal with text and have very weak "spatial reasoning" - if you could call it that

Most promising demo around that kind of thing I've seen so far is this one: https://github.com/tldraw/make-real

kubikpixel, to gentoo
@kubikpixel@chaos.social avatar

Gentoo and NetBSD ban 'AI' code, but Debian doesn't – yet

The problem isn't just that LLM-bot generated code is bad – it's where it came from.

🐧 https://www.theregister.com/2024/05/18/distros_ai_code/


#gentoo #netbsd #debian #ai #llm #LLMs #bsd #linux #opensource #oss #bot #it

kubikpixel,
@kubikpixel@chaos.social avatar

🧵 …although I tend to favour OpenBSD and Linux for personal reasons, I find this decision OK. Certain open source projects lack clear, reasoned positions and decisions.

»NetBSD’s New Policy – No Place for AI-Created Code:
NetBSD bans AI-generated code to preserve clear copyright and meet licensing goals.«

🚩 https://linuxiac.com/netbsd-new-policy-prohibits-usage-of-ai-code/


#netbsd #bsd #ai #code #copyright #os #license #policy #AIgenerated #oss #linux #openbsd #OpenSourceProjekt

Lazarou, to stackoverflow
@Lazarou@mastodon.social avatar

This just makes me want to delete everything of mine on corporate social media, and I pretty much have tbh

paezha,
@paezha@mastodon.online avatar

@Lazarou

Why are they still volunteering?

LChoshen, to llm
@LChoshen@sigmoid.social avatar

Do LLMs learn foundational concepts required to build world models? (less than expected)

We address this question with 🌐🐨EWoK (Elements of World Knowledge)🐨🌐

a flexible cognition-inspired framework to test knowledge across physical and social domains

https://ewok-core.github.io

luis_in_brief,
@luis_in_brief@social.coop avatar

@LChoshen I was just talking about this problem with a friend the other day. Really interesting data, thank you for sharing!

ai6yr, to ai

Giant sucking sounds from over there on Reddit https://www.bbc.com/news/articles/cxe92v47850o #AI #LLMs #reddit #openai

Viss,
@Viss@mastodon.social avatar

@ai6yr it went from 54 to 64 after hours, which I guess, yeah, constitutes a 'jump', but it smells of meme-stockery to me

Crispius,
@Crispius@mstdn.crispius.ca avatar

@ai6yr I haven’t outright banned the Reddit domain yet, but I feel like it’s coming.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #LLMs #ParetoCurves: "Which is the most accurate AI system for generating code? Surprisingly, there isn’t currently a good way to answer questions like these.

Based on HumanEval, a widely used benchmark for code generation, the most accurate publicly available system is LDB (short for LLM debugger). But there’s a catch. The most accurate generative AI systems, including LDB, tend to be agents, which repeatedly invoke language models like GPT-4. That means they can be orders of magnitude more costly to run than the models themselves (which are already pretty costly). If we eke out a 2% accuracy improvement for 100x the cost, is that really better?

In this post, we argue that:

  • AI agent accuracy measurements that don’t control for cost aren’t useful.

  • Pareto curves can help visualize the accuracy-cost tradeoff.

  • Current state-of-the-art agent architectures are complex and costly but no more accurate than extremely simple baseline agents that cost 50x less in some cases.

  • Proxies for cost such as parameter count are misleading if the goal is to identify the best system for a given task. We should directly measure dollar costs instead.

  • Published agent evaluations are difficult to reproduce because of a lack of standardization and questionable, undocumented evaluation methods in some cases."

https://www.aisnakeoil.com/p/ai-leaderboards-are-no-longer-useful
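The Pareto-curve argument from the post is straightforward to operationalize. The sketch below is a minimal illustration with entirely made-up system names, costs, and accuracies: a system belongs on the accuracy-cost Pareto frontier only if no other system is at least as accurate *and* at least as cheap.

```python
# Hypothetical data, for illustration only: which agent evaluations
# sit on the accuracy-cost Pareto frontier?

def pareto_frontier(systems):
    """Return (name, cost, accuracy) tuples not dominated by any other system.

    A system is dominated if some other system is no more expensive and
    no less accurate, and strictly better on at least one axis.
    """
    frontier = []
    for name, cost, acc in systems:
        dominated = any(
            (c <= cost and a >= acc) and (c < cost or a > acc)
            for _, c, a in systems
        )
        if not dominated:
            frontier.append((name, cost, acc))
    return sorted(frontier, key=lambda s: s[1])  # cheapest first

# Made-up numbers (cost in dollars per task, accuracy as a fraction):
systems = [
    ("simple-baseline", 0.02, 0.80),
    ("complex-agent",   2.00, 0.82),  # 100x the cost for +2% accuracy
    ("mid-agent",       0.50, 0.78),  # dominated: costlier AND less accurate
]
print(pareto_frontier(systems))
```

Note that both the cheap baseline and the expensive agent survive here: the frontier doesn't pick a winner, it just makes the "+2% for 100x the cost" trade-off visible instead of hiding it behind a single leaderboard number.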

leanpub, to ai
@leanpub@mastodon.social avatar

AI for Efficient Programming: Harnessing the Power of Large Language Models http://leanpub.com/courses/fredhutch/ai_for_software is the featured online course on the Leanpub homepage! https://leanpub.com

doctorambient, to ai
@doctorambient@mastodon.social avatar

"The biggest question raised by a future populated by unexceptional A.I., however, is existential. Should we as a society be investing tens of billions of dollars, our precious electricity that could be used toward moving away from fossil fuels, and a generation of the brightest math and science minds on incremental improvements in mediocre email writing?" (From an NYT article. See original thread.)

@peter https://thepit.social/@peter/112445916259675495

doctorambient,
@doctorambient@mastodon.social avatar

@gimulnautti I don't disagree with your general point, that AI will be (is) used for making a lot of porn. But is there any evidence that the company OpenAI is specifically moving in that direction right now? Seems to me they're spending an awful lot of effort on moderation specifically to stop that use case. (But I haven't been following this closely.)

AccordionGuy, to ai
@AccordionGuy@mastodon.cloud avatar

Do you REALLY want to get a feel for how GPT-4o does what it does? Just complete this poem — by doing so, you’ll have performed a computation similar to the one it does when you feed it a text-plus-image prompt.

https://www.globalnerdy.com/2024/05/15/the-simplest-way-to-illustrate-how-gpt-4o-works/

AccordionGuy,
@AccordionGuy@mastodon.cloud avatar

@gimulnautti Every analogy falls apart at some point — as you inferred, I’m just trying to describe the process simply.

As for responsibility and societal consequences, there are days when I worry that the actual human brains at some of the big LLM vendors aren’t taking them into consideration.

gimulnautti,
@gimulnautti@mastodon.green avatar

@AccordionGuy Yes, I worry about the same.

Some days, as I listen to techno-optimists, long-termists and libertarians, I wonder if underneath it all they really are trying to build a god for themselves to worship..

But then I’m quickly pulled back to the industrial revolution, when automation permanently changed the livelihoods of generations of people, and it took almost a hundred years for living standards to recover.

And, to the level of sociopathy needed to pull that off..

iammannyj, to opensource
@iammannyj@fosstodon.org avatar

IBM open-sources its Granite AI models - and they mean business

Many companies claim to have open-sourced their LLMs, but IBM actually did it.

https://www.zdnet.com/article/ibm-open-sources-its-granite-ai-models-and-they-mean-business/
