jemoka, to llm
@jemoka@maly.io avatar

🎉 new preprint day

Wrote some multi-hop reasoning work recently, formalizing inference as a

achieved results on the Game of 24 problem from Tree of Thoughts

https://arxiv.org/abs/2404.19055

SomeGadgetGuy, (edited ) to tech
@SomeGadgetGuy@techhub.social avatar

Replay Crew! We had a fun romp through tech headlines this week! https://somegadgetguy.com/b/44j
Jack Dorsey is no longer on the board of BlueSky. We're wrapping up the closing arguments in Google's antitrust case. The Rabbit R1 is an app. Sony's marketing materials for the next XPERIA leak.

And we should probably chat about this next iPad thing-y...

ErikJonker, to OpenAI Dutch
@ErikJonker@mastodon.social avatar

Personally I thought all their data was already harvested by #OpenAI but apparently not...
#stackoverflow #ai #data #llm #coding

snoopy, (edited ) to forumlibre in I work 4/5 of my time on language models (LLMs, sometimes called AIs) and 2/5 on open-hardware robotics, AMA
@snoopy@mastodon.zaclys.com avatar

Hi fediverse,

@keepthepace_ is doing an Ask Me Anything on @forumlibre

The theme: language models and open-hardware robotics. If you'd like to discover a side of them other than Skynet and the money printer,

I invite you to read this post where he talks about his background:
https://jlai.lu/post/6554057

Then ask your questions. Happy reading!

Feel free to share :3

lars, to ai
@lars@mastodon.social avatar

AI art

I just came across this (h/t to Peter Krupa), and it blew my mind. It highlights the problem with LLMs in general with pinpoint accuracy, and wraps it in a well known metaphorical idiom that everyone understands — which instantly becomes a meta reference. …

https://lars-christian.com/notes/4d8c59cee5/
#ai #llm

sanjaymenon, to ai
@sanjaymenon@mastodon.social avatar

Intro to LLM Security - OWASP Top 10 for Large Language Models (LLMs)

https://www.youtube.com/watch?v=dj1H4g4YSlU

#ai #cybersecurity #infosec #llm #security

nyergler, to llm
@nyergler@mastodon.social avatar

Guys! My Rabbit R0 arrived! Can’t wait to see how useful it is!

chikim, to llm
@chikim@mastodon.social avatar

I created a multi-needle-in-a-haystack test: a randomly selected secret sentence was split into pieces and scattered in random places throughout a 7.5k-token document. The task was to find these pieces and reconstruct the complete sentence with exact words, punctuation, capitalization, and sequence. After running 100 tests, llama3:8b-instruct-q8 achieved a 44% success rate, while llama3:70b-instruct-q8 achieved 100%! https://github.com/chigkim/haystack-test
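A minimal sketch of how such a test can be built (the piece count, marker phrasing, and filler text are my assumptions for illustration, not the actual scheme from the linked repo):

```python
import random

def build_haystack(secret: str, filler: list[str], n_filler: int, k: int = 3):
    """Split `secret` into k word-runs and scatter them, order preserved,
    through a document of n_filler filler sentences.
    Returns (document, pieces)."""
    words = secret.split()
    size = (len(words) + k - 1) // k
    pieces = [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

    doc = [random.choice(filler) for _ in range(n_filler)]
    # Insert at sorted positions, shifting by prior insertions so the
    # pieces appear in their original order.
    positions = sorted(random.sample(range(len(doc)), len(pieces)))
    for i, (pos, piece) in enumerate(zip(positions, pieces)):
        doc.insert(pos + i, f"A secret piece: '{piece}'.")
    return " ".join(doc), pieces

def grade(answer: str, secret: str) -> bool:
    """Exact match: words, punctuation, capitalization, and order must agree."""
    return answer.strip() == secret
```

Scoring by exact string match is what makes the test strict: a model that finds all the pieces but normalizes capitalization still fails.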

cerisara, to llm
@cerisara@mastodon.online avatar

#LLM on CPU only.

For inference, the best option right now is llama.cpp with a quantized LLM in GGUF format. There are several high-level wrappers around llama.cpp that make it easy to use: ollama, vllama...

For inference with a very large LLM and very little RAM, the only option is AirLLM: it's slow, but you can run llama3-70b.

For finetuning quantized LLM with LoRA, the only option afaik is also llama.cpp (look for "finetune"). It's a work in progress but usable and promising!
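For the CPU-only inference path, a minimal ollama session looks like this (the model tag is illustrative; run `ollama list` to see what you have locally):

```shell
# Pull a quantized GGUF build of Llama 3 and chat with it on CPU.
ollama pull llama3:8b
ollama run llama3:8b "Summarize the GGUF format in one sentence."

# Or serve a local HTTP API and query it:
ollama serve &
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3:8b", "prompt": "Hello", "stream": false}'
```

The wrapper handles model download and quantized-weight loading, so no GPU or manual llama.cpp build is required.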

KathyReid, to ai
@KathyReid@aus.social avatar

You know the Rabbit R1 device was created by a bunch of male software nerds, because they named an electronic device "rabbit", non-ironically.

🐰

amalgam_, to Sleeping
@amalgam_@mastodon.social avatar

This kind of thing gets me going. These sorts of reversals of agency. Also, the idea that things get caused by dreams. There is something in me that wants to explore all these things that don't fit into modernity.

From the other side of the bridge (Milan, April 2024) https://interconnected.org/home/2024/05/03/dreaming

LukaszD, to ai Polish
@LukaszD@pol.social avatar

"This happens because an LLM, no matter how well trained, can neither abstract nor reason like a human. (...) LLMs can only imitate language and reasoning by extracting correlations and concepts from data. They can often imitate human communication correctly, but without the ability to internalize it, and given the enormous size of the model, there is no guarantee that their choices will be safe or ethical."

https://wiadomosci.wp.pl/niepokojace-badanie-ai-w-wojsku-moze-wywolac-wojne-atomowa-7022758668512192a

br00t4c, to ai
@br00t4c@mastodon.social avatar

AI in space: Karpathy suggests AI chatbots as interstellar messengers to alien civilizations

https://arstechnica.com/?p=2021482

ramikrispin, to llm
@ramikrispin@mstdn.social avatar

Overview of Large Language Models 👇🏼

Here is a great summary and glossary doc about LLMs by Aman Chadha. This long doc summarizes some of the main concepts related to LLMs, including topics such as:
✅ Embeddings
✅ Vector database
✅ Prompt engineering
✅ Token
✅ RAG
✅ LLM performance evaluation
✅ Review of the main LLMs

🔗 https://aman.ai/primers/ai/LLM

#llm #DataScience #deeplearning #MachineLearning

wildebees, to llm
@wildebees@mastodon.social avatar

Beyond the brain: our intelligence leverages the power of culture and language. Channeling Ted Underwood and François Chollet, I argue that language models, despite their biases and lack of understanding, are important tools for thinking. 🗣️🌍💡 cc @TedUnderwood
https://leviathan.substack.com/p/beyond-the-brain

clarinette, to ai
@clarinette@mastodon.online avatar

“I don’t care if we burn $50 billion a year, we’re building AGI,” says Sam Altman. He doesn’t care about burning the planet either. Typical irresponsible megalomania. https://analyticsindiamag.com/i-dont-care-if-we-burn-50-billion-a-year-were-building-agi-says-sam-altman/

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

Of course, the results need to be verified and confirmed in practice, but after reading the MedGemini paper from Google there is no doubt in my mind that AI will change the world of medicine. Not replacing people, but augmenting them during diagnosis, operations, and treatment of patients.
https://arxiv.org/abs/2404.18416

hgrsd, to ai
@hgrsd@hachyderm.io avatar

If you are using LLMs through API tokens, or running locally, which UI do you use? I'm in the market for recommendations. Have tried llm and LibreChat but neither really stuck for me.

jrefior, to ai
@jrefior@hachyderm.io avatar

No one is more excited for AI than wealthy corporate leaders and investors.

cohomologyisFUN, to llm
@cohomologyisFUN@mastodon.sdf.org avatar

A Catholic organization’s AI chatbot “hallucinated” that it was a real priest and took a user’s confession.

It also said it was OK to baptize a baby in Gatorade.

https://www.techdirt.com/2024/05/01/catholic-ai-priest-stripped-of-priesthood-after-some-unfortunate-interactions/

cassidy, (edited ) to ai
@cassidy@blaede.family avatar

I really like the convention of using ✨ sparkle iconography as an “automagic” motif, e.g. to smart-adjust a photo or to automatically handle some setting. I hate that it has become the de facto iconography for generative AI. 🙁

cassidy,
@cassidy@blaede.family avatar

Aha! A week later @davidimel has an excellent video about this: https://youtu.be/g-pG79LOtMw?si=9B2KCLRC5H4on5Wq

piefedadmin, to kbin
@piefedadmin@join.piefed.social avatar

Google provides a tool called PageSpeed Insights which gives a website some metrics to assess how well it is put together and how fast it loads. There are a lot of technical details but in general green scores are good, orange not great and red is bad.

I tried to ensure the tests were similar for each platform by choosing a page that shows a list of posts, like https://mastodon.social/explore.

  • Mastodon: https://join.piefed.social/?attachment_id=308
  • Peertube: https://join.piefed.social/?attachment_id=307
  • Misskey: https://join.piefed.social/?attachment_id=311
  • Lemmy: https://join.piefed.social/?attachment_id=309
  • kbin: https://join.piefed.social/?attachment_id=313
  • Akkoma: https://join.piefed.social/?attachment_id=315
  • PieFed: https://join.piefed.social/?attachment_id=310
  • pixelfed: https://join.piefed.social/?attachment_id=314
  • Pleroma: https://join.piefed.social/?attachment_id=312

PieFed and kbin do very well. pixelfed is pretty good, especially considering the image-heavy nature of the content.

The rest don’t seem to have prioritized performance, or they chose a software architecture that cannot be made to perform well on these metrics. It will be very interesting to see how that affects the cost of running large instances and the longevity of the platforms. Time will tell.

https://join.piefed.social/2024/02/13/technical-performance-of-each-fediverse-platform/
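For anyone who wants to reproduce these measurements, the same scores are exposed by the public PageSpeed Insights v5 API. A minimal query (the target URL is just the Mastodon example above; unauthenticated requests are rate-limited):

```shell
# Fetch the Lighthouse performance score (0.0-1.0) for a page.
curl -s "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://mastodon.social/explore&strategy=mobile" \
  | python3 -c "import json,sys; d=json.load(sys.stdin); \
print(d['lighthouseResult']['categories']['performance']['score'])"
```

This makes it easy to script the comparison across all the platforms' example pages instead of pasting each one into the web UI.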
