br00t4c, to llm
@br00t4c@mastodon.social

Here's what's really going on inside an LLM's neural network

#anthropic #llm

https://arstechnica.com/?p=2026236

ramikrispin, to llm
@ramikrispin@mstdn.social

Fine Tuning LLM Models – Generative AI Course 👇🏼

freeCodeCamp released a new course today on fine-tuning LLMs. The course, by Krish Naik, covers tuning methods such as QLoRA, LoRA, and quantization, using models such as Llama 2, Gradient, and Google's Gemma.

📽️: https://www.youtube.com/watch?v=iOdFUJiB0Zc
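For readers new to those terms: LoRA freezes the base model and trains small low-rank update matrices instead, and QLoRA applies the same recipe over a quantized base. A minimal sketch of the idea, assuming the Hugging Face transformers and peft packages; the model name and hyperparameters are illustrative, not taken from the course:

```python
# Minimal LoRA fine-tuning setup with Hugging Face PEFT.
# Model name and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA freezes the base weights and learns small low-rank update
# matrices (rank r) on selected projection layers instead.
lora = LoraConfig(
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # scaling applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% trainable

# QLoRA is the same idea with the frozen base loaded in 4-bit
# (quantized via bitsandbytes) so it fits on smaller GPUs.
```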

Taffer, to ai
@Taffer@mastodon.gamedev.place

In my mind, the people most likely to use "AI" for things are the ones who sort of know what they want, but don't know how to get it.

So you ask for code to do something, and the LLM spits out something glommed together from Stack Overflow posts or Reddit. How do you know it does what you wanted? How do you debug it if it doesn't work?

Taffer,
@Taffer@mastodon.gamedev.place

If these actually worked, I'd love to select a hunk of code, and have something spit out basic unit tests, or a reasonable documentation outline. Or even check for logic or security errors. How about figuring out how to upgrade my code to eliminate out-of-date libraries?
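For what it's worth, the unit-test half of that wish is roughly what people attempt today by prompting a model directly. A sketch of that workflow, assuming the anthropic Python package; the model name and the sample function are placeholders, and the output still needs exactly the human review Taffer is asking about:

```python
# Ask an LLM for unit tests for a selected hunk of code.
# Assumes the `anthropic` package; the model name and sample
# function are placeholders. Generated tests must be read and
# run before being trusted.
import anthropic

SNIPPET = '''
def slugify(title: str) -> str:
    return "-".join(title.lower().split())
'''

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Write pytest unit tests, including edge cases, "
                   "for this function:\n" + SNIPPET,
    }],
)
print(response.content[0].text)  # proposed tests, not verified ones
```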

Taffer,
@Taffer@mastodon.gamedev.place

My fantasy LLMs that actually do something useful are also not trained on data stolen from the Internet. And they don't use enough electricity to power a country, or evaporate a big city's water supply.

ianRobinson, to llm
@ianRobinson@mastodon.social

Research paper from Anthropic.

“Today we report a significant advance in understanding the inner workings of AI models. We have identified how millions of concepts are represented inside Claude Sonnet, one of our deployed large language models. This is the first ever detailed look inside a modern, production-grade large language model. This interpretability discovery could, in future, help us make AI models safer.”

#LLM #Anthropic https://www.anthropic.com/research/mapping-mind-language-model
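The technique behind the paper is dictionary learning with sparse autoencoders: train an overcomplete autoencoder on a model's internal activations so that each learned feature fires sparsely and, ideally, corresponds to one concept. A toy sketch of that core loop; the shapes, the L1 penalty, and the random stand-in data are all illustrative, not Anthropic's actual setup:

```python
# Toy sparse autoencoder of the kind the paper scales up: learn an
# overcomplete dictionary of "features" whose sparse combinations
# reconstruct a model's internal activations.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_act=512, d_feat=4096):
        super().__init__()
        self.encoder = nn.Linear(d_act, d_feat)  # activations -> feature codes
        self.decoder = nn.Linear(d_feat, d_act)  # feature codes -> reconstruction

    def forward(self, acts):
        feats = torch.relu(self.encoder(acts))   # sparse, non-negative codes
        return self.decoder(feats), feats

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

acts = torch.randn(64, 512)  # stand-in for real residual-stream activations
recon, feats = sae(acts)
# Reconstruction error plus an L1 penalty that pushes codes toward sparsity.
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
opt.zero_grad()
loss.backward()
opt.step()
# Each decoder column is a candidate "concept" direction; the paper finds
# millions of such features in Claude 3 Sonnet and inspects what they fire on.
```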

ianRobinson, to llm
@ianRobinson@mastodon.social

My use case for LLMs is to see whether one turns up a subtopic of interest that I haven't included in an article I'm writing on a topic.

If it does, I can research that subtopic and decide whether to include it in the article, which I then write myself. The LLM is a search assistant.

I can also see value in them as research assistants and guides for learning about new topics, with the proviso that nothing an LLM produces should be taken at face value.

Claude is my fav.

#LLM #Claude

ianRobinson, to llm
@ianRobinson@mastodon.social

The new book by Salman Khan, of Khan Academy fame, will interest anyone curious about how chatbots will influence education. There is definitely a place for them as personalised learning tutors, especially for learners who would otherwise have zero chance of one-to-one tutoring from a human.
https://www.penguin.co.uk/books/460644/brave-new-words-by-khan-salman/9780241680964

WanderingInDigitalWorlds, to ubuntu
@WanderingInDigitalWorlds@mstdn.games

Reading about Ubuntu and NVIDIA's LLM development collaboration, it seems none of the features will be forced on end users via software updates. It looks like an opt-in situation, for which I'm thankful. Microsoft and other companies are going about LLM integration wrong: forcing users to test unsafe software is a horrible strategy.

https://ubuntu.com/nvidia

pixelate, to accessibility
@pixelate@tweesecake.social

So, I know generative AI is supposed to be just the most incorrect thing ever, but I want you to compare two descriptions. "A rock on a beach under a dark sky." And: "The image shows a close-up view of a rocky, cratered surface, likely a planet or moon, with a small, irregularly shaped moon or asteroid in the foreground. The larger surface appears to be Mars, given its reddish-brown color and texture. The smaller object, which is gray and heavily cratered, is likely one of Mars' moons, possibly Phobos or Deimos. The background fades into the darkness of space."

The first one is supposed to be the pure best thing that isn't AI, right? It's what we've been using for the past five years or so, and yes, it's probably improved over those years. This is Apple's image description. It's, in my opinion, the best and most clear, and it sounds like the alt text it was made from, which people wrote, BTW; the images it was trained on, which had to come from somewhere, were of very high quality, unlike Facebook's and Google's, which just plopped anything and everything into theirs. The second was from Be My Eyes.

Now, which one was more correct? Obviously, Be My Eyes. Granted, it's not always going to be, but goodness, just because some image classification tech is old doesn't mean it's better. And just because Google and Facebook call their image description bullshit AI doesn't mean it's a large language model. At this point in time, Google TalkBack does not use Gemini; it uses the same thing VoiceOver has. And Facebook uses that too, just a classifier.

Now, should sighted people be describing their pictures? Of course. Always. With care. And their stupid bots should use something better than "picture of cats," because even a dumb image classifier can tell me that and probably a bit more, lol. Cats sleeping on a blanket. Cats drinking water from a bowl. Stuff like that. But for something quick, easy, and that doesn't rely on other people, shoot yeah, I'll put it through Be My Eyes.
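The second description in that comparison is the kind of output a vision-capable LLM produces (Be My Eyes' AI feature is built on OpenAI's GPT-4 vision models). A sketch of such a request, assuming the openai Python package; the model name, prompt, and image URL are illustrative, not necessarily what Be My Eyes calls under the hood:

```python
# Request a detailed image description from a vision-capable LLM.
# Assumes the `openai` package; model, prompt, and URL are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe this image in detail for a blind user."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/mars.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```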

glaroc,

@pixelate 100 percent. We wouldn't have to use AI for shit if people made their stuff accessible in the first place. It's literally humans who make things less accessible, and then they treat us like we're evil for using something that betters our lives by making inaccessible shit accessible to us, all because of their paranoia that AI is destroying the world or some shit like that.

janriemer, to ai

In the age of AI there will be no more room for nuance or detail.

Everything will be coarse and average.

chikim, to llm
@chikim@mastodon.social
kellogh,
@kellogh@hachyderm.io

@chikim I love what they've been doing with Phi!

frankel, to llm
@frankel@mastodon.top

Large Language Models – A Street Full of Wrong-Way Drivers? #LLM #AI

https://javahippie.net/artificial-intelligence/llm/2024/05/05/llm-wrong-way-drivers.html

ai6yr, to ai

Here's the crux of why a sultry-sounding LLM is so appealing to so many…

Viss,
@Viss@mastodon.social

@ai6yr There's a Futurama episode about this, where Sigourney Weaver takes over the voice of the ship and tries to over-obsessed-girlfriend Bender.

metin, (edited) to ai
@metin@graphics.social

So… Big Tech is allowed to blatantly steal the work, the styles, and with them the job opportunities of thousands of artists and writers without being reprimanded, but it takes similarity to the voice of a famous actor to spark public outrage about AI. 🤔

https://www.theregister.com/2024/05/21/scarlett_johansson_openai_accusation/

rubinjoni,
@rubinjoni@mastodon.social

@metin Better late than never.

metin,
@metin@graphics.social

@rubinjoni Definitely. 👍

ianRobinson, to llm
@ianRobinson@mastodon.social

I hope Anthropic doesn’t go off the rails the way OpenAI has.

I like Claude 3 Opus's output more than any of the ChatGPT models'. Anthropic I'll pay for going forward; ChatGPT I'll occasionally use for free.

If the rumours are correct, we'll get some form of OpenAI GPT or Google Gemini in iOS later this year as well.

#LLM
