aral, to ai
@aral@mastodon.ar.al avatar

We call it AI because no one would take us seriously if we called it matrix multiplication seeded with a bunch of initial values we pulled out of our asses and run on as much shitty data as we can get our grubby little paws on.
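Flippant as that is, it's a fair gloss on a neural network's forward pass. A minimal sketch in Python with NumPy (all shapes and values invented for illustration): random initial weights, then matrix multiplication:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Initial values pulled out of our asses": random weight matrices
# for a tiny two-layer network.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def forward(x):
    """One forward pass: matrix multiplication plus a nonlinearity."""
    h = np.maximum(0, x @ W1)  # ReLU
    return h @ W2

x = rng.normal(size=(1, 8))   # stand-in for the input data
logits = forward(x)
print(logits.shape)  # (1, 4)
```

Training then amounts to nudging those random matrices until the multiplications reproduce patterns in whatever data was scraped.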

#AI #ArtificialIntelligence #MachineLearning #LLM #LargeLanguageModels

hosford42, to llm
@hosford42@techhub.social avatar

I am really, really, REALLY irritated by what I just saw. The image-description function of Microsoft's Bing is outright lying to people with vision impairments about what appears in images it receives. It's bad enough when an AI is allowed to tell lies that a person can easily check for veracity themselves. But how the hell are you going to offer this so-called service to someone who can't check the claims being made and NEEDS those claims to be correct?

How long till someone gets poisoned because Bing lied and told them food hasn't expired when it has, or that it's safe to drink when it's cleaning solution, or God knows what? This is downright irresponsible and dangerous. Microsoft either needs to put VERY CLEAR disclaimers on its service, or just take it down until it can actually be trusted.

lightweight, to LLMs
@lightweight@social.fossdle.org avatar

If your educational institution is still using , especially in light of their policy change to use/sell your content to train (), it's doing the wrong thing. Digitally literate institutions (a rare & precious thing) already use () which is & substantially better for educational applications. If you want to trial it, talk to us - we've been making our instances available for institutions to use since Covid: https://oer4covid.oeru.org

sleepytako, to random
@sleepytako@famichiki.jp avatar

"Very broadly speaking: the Effective Altruists are doomers, who believe that AI (AKA "spicy autocomplete") will someday become so advanced that it could wake up and annihilate or enslave the human race."
Quote @pluralistic

"Spicy autocomplete" is sooooo the best way to describe what "AI" actually is. Nothing about it is intelligent, just artificial.

Ruth_Mottram, to ai
@Ruth_Mottram@fediscience.org avatar

This 👇
https://fediscience.org/@steve/111286741688934024
@steve : It’s important to keep reminding ourselves that so-called #AI #LargeLanguageModels are nothing more than very fancy bullshit generators. They know nothing about truth... just the patterns of how humans use language.
So every time you hear about them being used (successfully) for class assignments, grant applications, legal briefs, newspaper articles, writing emails, etc, it really just means all these venues are places where we’ve come to expect bullshit.
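The "patterns of language, not truth" point can be made concrete with a toy next-word predictor. This sketch (corpus and names invented for illustration) counts which word follows which, then always continues with the most frequent successor — it has no concept of truth at all, only of what tends to come next:

```python
from collections import defaultdict, Counter

# A toy "autocomplete": count which word follows which, then always
# continue with the most frequent successor. It models word-order
# patterns only -- nothing it emits is checked against reality.
corpus = ("the cat sat on the mat . "
          "the dog sat on the log . "
          "the cat ate the fish .").split()

successors = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    successors[a][b] += 1

def autocomplete(word, n=5):
    """Greedily extend `word` by n tokens using bigram counts."""
    out = [word]
    for _ in range(n):
        nxt = successors[out[-1]].most_common(1)
        if not nxt:
            break
        out.append(nxt[0][0])
    return " ".join(out)

print(autocomplete("the"))
```

An LLM is this idea scaled up enormously, with learned weights instead of raw counts — but the objective is the same: plausible continuation, not truth.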

ajsadauskas, to ai
@ajsadauskas@aus.social avatar

In five years' time, some CTO will review a mysterious outage or the technical debt in their organisation.

They will unearth a mess of poorly written, poorly documented, barely functioning code their staff don't understand.

They will conclude that they did not actually save money by replacing human developers with LLMs.

#AI #LLM #LargeLanguageModels #WebDev #Coding #Tech #Technology @technology

transponderings, to ChatGPT

It’s increasingly clear that Alan Turing’s ‘imitation game’ – usually known as ‘the Turing test’ – tells us nothing about whether machines can think

Instead it demonstrates how readily people can be taken in by complete and utter nonsense if it has the superficial form of an authoritative text

#ImitationGame #TuringTest #LargeLanguageModels #ChatGPT #ArtificialIntelligence

simon_brooke, to Futurology
@simon_brooke@mastodon.scot avatar

"This is undesirable for scientific tasks which value truth" (Meta 'scientists' writing about their LLM).

So presumably it's fine in scientific tasks that DON'T value truth?

I wonder where you'd find people who work on scientific tasks but don't value truth? Oh, of course: in Meta's AI team.

H/t @emilymbender

https://pca.st/episode/306b9fc4-aa02-4af1-8193-80a8abb1c268

TheWildHuntNews, to paganism
@TheWildHuntNews@witches.live avatar

Pagan authors impacted by AI services like ChatGPT ~ Pagan authors have been impacted by the rise of AI generated Pagan content. TWH speaks with several authors whose works were used to train AI systems without their knowledge or consent.

Thank you to Deborah Blake, Markus Ironwood, Ivo Dominguez Jr., Michael M. Hughes, and Michelle Belanger who spoke with us for this story.

https://wildhunt.org/2023/10/pagan-authors-impacted-by-ai-services-like-chatgpt.html

haraldkliems, to ChatGPT
@haraldkliems@fosstodon.org avatar

Great lecture with @emilymbender coming up at #UWMadison this Thursday: "ChatGP-Why: When, if Ever, is Synthetic Text Safe, Appropriate, and Desirable?" #chatGPT #largeLanguageModels https://languageinstitute.wisc.edu/chatgp-why-when-if-ever-is-synthetic-text-safe-appropriate-and-desirable/

aral, to ArtificialIntelligence
@aral@mastodon.ar.al avatar

Fake Intelligence is where we try to simulate intelligence by feeding huge amounts of dubious information to algorithms we don’t fully understand to create approximations of human behaviour where the safeguards that moderate the real thing provided by family, community, culture, personal responsibility, reputation, and ethics are replaced by norms that satisfy the profit motive of corporate entities.

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

Elon Musk’s new AI model doesn’t shy from questions about cocaine and orgies - Enlarge (credit: Getty Images | Benj Edwards)

On Saturday, Elo... - https://arstechnica.com/?p=1981276 #x.ai

contributopia, to ChatGPT Italian
@contributopia@vivaldi.net avatar

How AI chatbots like ChatGPT or Bard work – visual explainer: https://www.theguardian.com/technology/ng-interactive/2023/nov/01/how-ai-chatbots-like-chatgpt-or-bard-work-visual-explainer

In the last year, chatbots powered by Large Language Models have become ubiquitous and even … useful. But how do they work? In the Guardian, an interesting article that tries to explain visually how the LLMs behind chatbots like ChatGPT or Bard work. @macfranc @maupao @scuola @quinta

thoughtworks, to llm
@thoughtworks@toot.thoughtworks.com avatar

Want to get the desired outputs from your prompts on an LLM-based application?

Here's how you can make the most of the LLM: https://thght.works/3Zo0nrl
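One widely used prompting technique is few-shot prompting: showing the model a couple of worked examples before the real query so it picks up the desired format. A minimal, hypothetical sketch in Python (the task, wording, and examples are all invented for illustration):

```python
# Build a few-shot prompt: a role line, an explicit output constraint,
# worked examples, and finally the real query in the same format.
def build_prompt(task, examples, query):
    lines = [f"You are an assistant. Task: {task}",
             "Answer in one word."]
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n".join(lines)

prompt = build_prompt(
    "classify sentiment",
    [("I loved it", "positive"), ("Terrible.", "negative")],
    "Not bad at all",
)
print(prompt)
```

Ending the prompt mid-pattern ("Output:") nudges an autocomplete-style model to fill in the blank in the same shape as the examples.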

itnewsbot, to ChatGPT
@itnewsbot@schleuss.online avatar

AI poisoning could turn open models into destructive “sleeper agents,” says Anthropic - Enlarge (credit: Benj Edwards | Getty Images)

Imagine download... - https://arstechnica.com/?p=1995975 #largelanguagemodels #promptinjections #sleeperagents #llmsecurity #aisecurity #anthropic #chatgpt #claude2 #biz #claude #llm #ai

ajsadauskas, to ArtificialIntelligence
@ajsadauskas@aus.social avatar

Here's an observation that should be bleeding obvious, but often gets overlooked amidst all the AI hype.

Especially in the enterprise IT space, many of the tools and platforms now being hyped up as "AI" were around a decade ago.

Back then, the buzzwords used to sell them were big data, machine learning, and predictive data analytics.

With all the hype around large language models and ChatGPT, the vendors have basically repackaged them as AI.

But essentially, there's a whole bunch of old (or at least not new) tech now being shilled with new buzzwords.

Everybody’s talking about Mistral, an upstart French challenger to OpenAI (arstechnica.com)

On Monday, Mistral AI announced a new AI language model called Mixtral 8x7B, a "mixture of experts" (MoE) model with open weights that reportedly matches OpenAI's GPT-3.5 in performance—an achievement that has been claimed by others in the past but is being taken seriously by AI heavyweights such as OpenAI's Andrej...
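"Mixture of experts" means the model routes each token through only a few of its expert sub-networks, chosen by a learned router (Mixtral reportedly uses top-2 routing over 8 experts). A toy sketch in Python with NumPy, at nothing like real scale and with all shapes invented:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_experts, top_k = 16, 8, 2   # toy sizes; only the 8/top-2 split echoes Mixtral

# Each "expert" is a tiny feed-forward layer; the router scores experts per token.
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
router = rng.normal(size=(d, n_experts))

def moe(x):
    """Route token x to its top-k experts and mix their outputs."""
    scores = x @ router
    top = np.argsort(scores)[-top_k:]          # indices of the best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                   # softmax over the chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d)
out = moe(token)
print(out.shape)  # (16,)
```

The appeal is that only a fraction of the parameters run per token, so a large total parameter count comes at a much smaller inference cost.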

felwert, to llm
@felwert@mstdn.social avatar

I’m really fascinated by @mozilla’s llamafile: It allows you to download an LLM and run it locally as a single executable. This opens up a lot of use cases where privacy was an issue. https://github.com/Mozilla-Ocho/llamafile /ht @simon

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

Reddit sells training data to unnamed AI company ahead of IPO - Enlarge (credit: Reddit)

On Friday, Bloomberg reported that Re... - https://arstechnica.com/?p=2004431

natadimou, to random

Our poster at the #SemTab challenge corner is ready & Duo Yang will explain all spicy details of our solution which combines #heuristics & #LargeLanguageModels to understand entities, types & relationships of tables with very promising results! #iswc2023

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

IBM, Meta form “AI Alliance” with 50 organizations to promote open source AI - Enlarge (credit: Getty Images | Benj Edwards)

On Tuesday, IBM ... - https://arstechnica.com/?p=1988592

steve, to cryptocurrency
@steve@fediscience.org avatar

New rule. If you've worked on cryptocurrency and/or blockchain, your email requesting to do a PhD under my supervision goes straight in the trash.

steve,
@steve@fediscience.org avatar

Also, no, I will not join your research project looking at how blockchain and AI can help solve climate change. There is no possible world in which that makes any sense.

Guinnessy, to ArtificialIntelligence
@Guinnessy@mastodon.world avatar

Many researchers are feeling the pressure to jump on the bandwagon of #largelanguagemodels (LLMs), says cognitive scientist Abeba Birhane — and many are not considering whether the technology is an appropriate ‘solution’ to complex, multifaceted challenges. Too right. #artificialintelligence #AI https://www.nature.com/articles/d41586-023-03798-6?utm_source=Live+Audience&utm_campaign=ce9ecde073-briefing-dy-20231201&utm_medium=email&utm_term=0_b27a691814-ce9ecde073-50179668
