ppatel, to ai
@ppatel@mstdn.social avatar

Google touting that its latest models and services can be grounded through its search results isn't the boast it thinks it is, especially considering the quality of those results lately. Has anybody considered the feedback loop of AI-generated results being ranked higher and then being used to ground Gemini Pro?

jwildeboer, to random
@jwildeboer@social.wildeboer.net avatar

I don’t want an internet where 90% of traffic and electricity is wasted to make generative „AI“ and their investors happy while their energy hunger destroys our planet. I want an internet that shares knowledge for free for everyone, so we can build a better world.

[Image: a meme captioned "I don’t want a internet …" (drumroll) "… I want a internet", with a picture of a nuclear plant]

jwildeboer, (edited)
@jwildeboer@social.wildeboer.net avatar

I am not saying that generative AI in general is wrong. Quite the opposite. Just like Machine Learning, Large Language Models can be a net positive when they are focused and domain-specific. But the #GIGO (Garbage In, Garbage Out) approach of the big players is not helping at all.

noondlyt, to random
@noondlyt@mastodon.social avatar

This is going swimmingly. Our society is the last one that should be training AI.

AI models found to show language bias by recommending Black defendants be 'sentenced to death' | Euronews https://www.euronews.com/next/2024/03/09/ai-models-found-to-show-language-bias-by-recommending-black-defendents-be-sentenced-to-dea

Eka_FOOF_A,
@Eka_FOOF_A@spacey.space avatar

@noondlyt
Bias In, Bias Out. Just another form of #GIGO.

quixoticgeek, to random
@quixoticgeek@v.st avatar

We need to talk about data centres.

For the 2nd or 3rd time this week I've seen someone comment on a new data centre build with a stat about how 80% of data is never accessed. Then they talk about the energy and cooling used in modern DCs.

The reality is that data storage is actually incredibly efficient, and uses fuck all power. A hard disk draws less than 10 W and stores multiple users' data.

Storing data, our photos, our memories, our history, is not the problem.

What is? 1/n
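The power claim above is easy to sanity-check. A back-of-the-envelope sketch in Python, where the 10 W draw comes from the post and the users-per-disk count is purely an illustrative assumption:

```python
# Back-of-the-envelope: annual energy of one ~10 W hard disk,
# shared across many users. Numbers are illustrative assumptions.
HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts: float) -> float:
    """Energy in kWh for a device drawing `watts` continuously for a year."""
    return watts * HOURS_PER_YEAR / 1000

disk_kwh = annual_kwh(10)      # one disk, spinning all year
users_per_disk = 1000          # assumption: a multi-TB disk holds many accounts
per_user_kwh = disk_kwh / users_per_disk

print(f"One 10 W disk: {disk_kwh:.1f} kWh/year")
print(f"Per-user share: {per_user_kwh:.4f} kWh/year")
```

Even spinning continuously, one disk is under 90 kWh a year; split across its users, storage is a rounding error next to compute.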

HN414,
@HN414@chaos.social avatar

@quixoticgeek That is the term I've been looking for, to summarize LLMs and generative systems to non-tech people. GIGO machines.

#ai #llm #gigo

ppatel, to LLMs
@ppatel@mstdn.social avatar

One wonders how effective translations are when done by LLMs, since the corpus of material used to train these languages is this crap. Do we have a #GIGO problem?

Research Suggests A Large Proportion Of Web Material In Languages Other Than English Is Machine Translations Of Poor Quality Texts.

https://www.techdirt.com/2024/01/29/research-suggests-a-large-proportion-of-web-material-in-languages-other-than-english-is-machine-translations-of-poor-quality-texts/

baldur, to random
@baldur@toot.cafe avatar

Working on a bit of sqlite for a thing and it’s kind of shocking how much of the search engine results for technical/dev info on SQLite are just blatantly incorrect.

Some of it’s LLM output (the same content repeated with light paraphrasing across a dozen different sites) but some of it’s just Medium or dev.to influencers repeating vague hearsay

(Which then gets into the LLM training set as accurate, I guess.)

Eka_FOOF_A,
@Eka_FOOF_A@spacey.space avatar

@baldur I've also seen the increase in garbage for tech stuff. Quite frustrating. More #LLM #GIGO.

Eka_FOOF_A, to random
@Eka_FOOF_A@spacey.space avatar

#GIGO #GarbageInGarbageOut
You can have the highest IQ in the world, but if you feed your brain on garbage, only garbage will come out.

gerrymcgovern, to random
@gerrymcgovern@mastodon.green avatar

"OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy."
https://theintercept.com/2024/01/12/open-ai-military-ban-chatgpt/

It took years for Google to change their vision from "Don't Be Evil" to "Be Evil."

As with everything tech, the pace of moral innovation is relentless.

OpenAI took only months to openly embrace evil. Well done, tech bros, we always knew we could count on you to do the wrong thing.

martinvermeer,
@martinvermeer@fediscience.org avatar

@Frieke72 @gerrymcgovern This is probably old-fashioned Bayesian inference based targeting, which has its own, different but just as bad, problems. Doctorow wrote on this, and the use by the police of similar software. This is how a corpus based on cops who 'randomly' select Black people for stop and search will produce a racist algorithm, using pure math 😬

https://pluralistic.net/2021/08/02/autoquack/#gigo
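Doctorow's point that biased sampling produces a racist algorithm "using pure math" can be shown with a toy simulation. In the hypothetical below, both groups have an identical underlying offense rate, but one group is stopped five times as often; a naive model trained only on arrest counts then labels that group riskier. All numbers are invented for illustration:

```python
import random

random.seed(42)

# Ground truth: both groups offend at exactly the same rate.
OFFENSE_RATE = 0.05
# Biased policing: group B is stopped five times as often as group A.
STOPS = {"A": 1_000, "B": 5_000}

# Each stop finds an offense with the same probability for both groups.
arrests = {g: sum(random.random() < OFFENSE_RATE for _ in range(n))
           for g, n in STOPS.items()}

# The "predictive" model sees only arrest counts, not stop counts,
# so it concludes group B is far riskier -- pure math, garbage in.
predicted_risk = {g: arrests[g] / sum(arrests.values()) for g in arrests}

print("arrests:", arrests)
print("learned 'risk':", predicted_risk)
```

The bias never enters the model explicitly; it rides in on the training data, which is the whole point of the #gigo tag on that post.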

spamless, to random
@spamless@mastodon.social avatar

Here's a silly "performance" chart at a dormant brokerage account I have. Hello? (Look at the axis numbers on the right.)

i0null, to random

An IBM slide from 1979

richardrathe,

@i0null

And most (neural network style) "AI" or ML systems cannot even tell you WHY they produced the result they give. It's all in the training data. Huge "garbage in, garbage out" risks/biases!

noondlyt, to random
@noondlyt@mastodon.social avatar

Cool. Totally worth it, "owning" Musk via the anti-woke transphobic crowd.

Elon Musk's Grok AI Is Pushing Misinformation and Legitimizing Conspiracies https://www.vice.com/en/article/7kxqp9/elon-musks-grok-ai-is-pushing-misinformation-and-legitimizing-conspiracies

HistoPol,
@HistoPol@mastodon.social avatar

@noondlyt

(1/2)

#GrokAI, #Elmo's add-on for X's gullible paid subscribers..."unsurprisingly, the chatbot is just as reliable at giving accurate information as the once-cherished platform formerly known as Twitter and its right-wing billionaire owner—which is to say, not at all. The chatbot produced fake timelines for news events and misinformation when tested by Motherboard, and lent credence to conspiracy theories such as Pizzagate."

Old principle: #GIGO--garbage...

https://mastodon.social/@noondlyt/111551971659917151

davidaugust, to ai
@davidaugust@mastodon.online avatar

So Meta, you’ve got some biases happening and might wanna not do that.

emilymbender, to random
@emilymbender@dair-community.social avatar
SheamusPatt,
@SheamusPatt@mstdn.ca avatar

@emilymbender Computer Scientists have a name for this - #GIGO (Garbage in-Garbage out).
I believe that particular error was just from some random tweet. Imagine the garbage facts it would pull from apparently credible sources like this article on #Swifties in the #NationalPost (reproduced in other #Postmedia papers).
Obviously #Satire but they printed it under #Opinion
(I'm pretty sure #Canada does not have a "Minister of Concert Affordability") #MSM #cdnpoli
https://ottawacitizen.com/opinion/liberals-taylor-swift-ticket-fiasco-is-the-greatest-crisis-of-our-times/wcm/4b7acf76-3175-4732-8b26-f8bcbf72f39f

heiseonline, to random German

This AI recipe generator is not exactly appetizing! 🤢

🤖🍴 One click on the 'Savey Meal-Bot' reveals wild concoctions straight from the AI. A daring culinary excursion with risks and side effects:

Read the article: https://heise.de/-9242991?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege

schmidt_fu,
@schmidt_fu@mstdn.social avatar

@heiseonline
Do we really have to explain the #GIGO principle (Garbage In, Garbage Out) anew for every piece of software?

Ruth_Mottram, to machinelearning
@Ruth_Mottram@fediscience.org avatar

Habsburg AI: #MachineLearning models fed on input that has been produced by other ML models...

A beautiful term I learned from @pluralistic, whose post shows how #GIGO (garbage in, garbage out) explains why dictators can't rely on #AI to detect stirrings of revolt.

As always, worth a read.

https://mamot.fr/@pluralistic/110781199294582000

pluralistic, to random
@pluralistic@mamot.fr avatar

Here's the #DictatorsDilemma: they want to block their country's frustrated elites from mobilizing against them, so they censor public communications; but they also want to know what their people truly believe, so they can head off simmering resentments before they boil over into regime-toppling revolutions.

--

If you'd like an essay-formatted version of this to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2023/07/26/dictators-dilemma/#garbage-in-garbage-out-garbage-back-in

1/

pluralistic,
@pluralistic@mamot.fr avatar

But adding more unreliable data to an unreliable dataset doesn't improve its reliability. #GIGO is the iron law of computing, and you can't repeal it by shoveling more garbage into the top of the training funnel:

https://memex.craphound.com/2018/05/29/garbage-in-garbage-out-machine-learning-has-not-repealed-the-iron-law-of-computer-science/

When it comes to "AI" that's used for decision support - that is, when an algorithm tells humans what to do and they do it - then you get something worse than Garbage In, Garbage Out - you get Garbage In, Garbage Out, Garbage Back In Again.

7/
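The "Garbage Back In Again" loop has a minimal statistical analogue: fit a distribution to data, sample from the fit, refit on those samples, and repeat. A hedged sketch (all parameters are illustrative); because each generation sees only a finite sample of the previous generation's output, the fitted parameters drift away from the original data rather than staying anchored to it:

```python
import random
import statistics

random.seed(0)

# Toy "model trained on its own output": generation 0 is the real
# distribution; every later generation is fit only to samples drawn
# from the previous generation's fit.
mu, sigma = 0.0, 1.0   # the original data distribution
N = 50                 # small training sample per generation

history = []
for generation in range(20):
    sample = [random.gauss(mu, sigma) for _ in range(N)]
    # Refit on the model's own output -- garbage back in again.
    mu, sigma = statistics.fmean(sample), statistics.stdev(sample)
    history.append(sigma)

print(f"fitted sigma per generation: {[round(s, 3) for s in history]}")
```

Adding more generations of self-generated data never restores information about the original distribution; the estimate just accumulates sampling noise, which is the iron law in miniature.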

nixCraft, to random
@nixCraft@mastodon.social avatar

Actually getting stupider over time is the most human trait AI can have 😂👌

ai6yr, to random

GIGO = Garbage In, Garbage Out #GIGO

alberto_cottica, to ai
@alberto_cottica@mastodon.green avatar

"Training a model on its own output is not recommended." #GIGO #AI

https://www.theregister.com/2023/06/16/crowd_workers_bots_ai_training/

SonOfSunTzu, to random
@SonOfSunTzu@mastodon.social avatar

DALL-E has no imagination.

nf3xn,
@nf3xn@mastodon.social avatar

@SonOfSunTzu One thing that quite surprised me is all the erstwhile open source advocates complaining about it being trained on public data. Like wtf? How do you reconcile those positions?

"Open source but just for me"?

(My main complaint with using that as training data is that most public code is hot garbage).

#GIGO

FeralRobots, to ai
@FeralRobots@mastodon.social avatar

That story about AI hiring a human to solve a CAPTCHA for it? 100% fearmongering.

Also the outlook for actual AI might be worse than we feared, because it's not clear the people doing AI know how to use the specification tools that have been developed for the task.

https://aiguide.substack.com/p/did-gpt-4-hire-and-then-lie-to-a

@ct_bergstrom / https://fediscience.org/

FeralRobots,
@FeralRobots@mastodon.social avatar

As my CS 101 prof* put it (paraphrased from memory), "if you don't know the input is garbage, you won't know the output is, either."

Edit:
_
*Ed D. Reilly, Jr, co-author of Weighting for Baudot & editor of the 1st ed of the Concise Encyclopedia of Computer Science. Yes, this was bugging me so I had to look it up.

pitrh, to ChatGPT
@pitrh@mastodon.social avatar

"We've seen ChatGPT generate URLs, references and even code libraries and functions that do not actually exist." https://www.infosecurity-magazine.com/news/chatgpt-spreads-malicious-packages/

Count me unsurprised.
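One cheap defense against hallucinated package names is to verify that a suggested module actually resolves before installing anything, since a plausible-but-fake name may later be registered by an attacker. A minimal Python sketch; the suggested names below are made up for illustration:

```python
import importlib.util

def module_exists(name: str) -> bool:
    """True if `name` resolves to an importable module in this environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except (ModuleNotFoundError, ValueError):
        # find_spec raises if a parent package is missing or the name is invalid.
        return False

# Names an LLM might suggest -- check before blindly `pip install`-ing them.
suggested = ["json", "requests", "totally_made_up_helper_lib"]
for name in suggested:
    status = "found" if module_exists(name) else "NOT found - verify before installing"
    print(f"{name}: {status}")
```

This only confirms what is already installed; for anything missing, the safe move is checking the package's real registry page and maintainers by hand, not trusting the chatbot's citation.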

jiejie, to ai

I love when people model good behavior instead of complaining about bad behavior.

An example of good de-anthropomorphized output, talking in the third person. “This model does not” and “the data used to develop this model suggests”.

Garbage in, garbage out is applicable to humans, too.

Naomi Klein wrote in The Guardian, "Why not algorithmic junk? Or glitches?" instead of “hallucinate”. Those are pretty descriptive and not anthropomorphic.

https://arxiv.org/abs/2305.09800

#ai #anthro #gigo
