ErikJonker, to ai Dutch
@ErikJonker@mastodon.social avatar

Played around with GPT-4o analysing pictures, then analysed the same picture with Google Gemini (I have to admit, the free version). The differences are enormous: the amount of hallucination in Google Gemini is insane, making things up about the picture provided... how can Google be so far behind?
#AI #ChatGPT #GPT4 #GoogleGemini #hallucinations

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

Companies are trying to reduce the amount of hallucinations in generative AI.
https://thenextweb.com/news/iris-reducing-ai-hallucinations-in-scientific-research
#AI #GenerativeAI #Hallucinations #Iris

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLM), which is what drives AI Overviews, and this feature "is still an unsolved problem."

So expect more of these weird and incredibly wrong snafus from AI Overviews despite efforts by Google engineers to fix them, such as this big whopper: 13 American presidents graduated from University of Wisconsin-Madison. (Hint: this is so not true.)

But Pichai seems to downplay the errors.

"There are still times it’s going to get it wrong, but I don’t think I would look at that and underestimate how useful it can be at the same time," he said. "I think that would be the wrong way to think about it.""
https://futurism.com/the-byte/ceo-google-ai-hallucinations

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Hallucinations: "I want to be very clear: I am a cis woman and do not have a beard. But if I type “show me a picture of Alex Cranz” into the prompt window, Meta AI inevitably returns images of very pretty dark-haired men with beards. I am only some of those things!

Meta AI isn’t the only one to struggle with the minutiae of The Verge’s masthead. ChatGPT told me yesterday I don’t work at The Verge. Google’s Gemini didn’t know who I was (fair), but after telling me Nilay Patel was a founder of The Verge, it then apologized and corrected itself, saying he was not. (I assure you he was.)

When you ask these bots about things that actually matter they mess up, too. Meta’s 2022 launch of Galactica was so bad the company took the AI down after three days. Earlier this year, ChatGPT had a spell and started spouting absolute nonsense, but it also regularly makes up case law, leading to multiple lawyers getting into hot water with the courts.

The AI keeps screwing up because these computers are stupid. Extraordinary in their abilities and astonishing in their dimwittedness. I cannot get excited about the next turn in the AI revolution because that turn is into a place where computers cannot consistently maintain accuracy about even minor things."

https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Models like ChatGPT and Claude are deeply dependent on training data to improve their outputs, and their very existence is actively impeding the creation of the very thing they need to survive. While publishers like Axel Springer have cut deals to license their companies' data to ChatGPT for training purposes, this money isn't flowing to the writers that create the content that OpenAI and Anthropic need to grow their models much further. It's also worth considering that these AI companies may already have trained on this data. The Times sued OpenAI late last year for training itself on "millions" of articles, and I'd bet money that ChatGPT was trained on multiple Axel Springer publications along with anything else it could find publicly available on the web.

This is one of many near-impossible challenges for an AI industry that's yet to prove its necessity. While one could theoretically make bigger, more powerful chips (I'll get to that later), AI companies face a Kafkaesque bind where they can't improve a tool for automating the creation of content without human beings creating more content than they've ever created before. Paying publishers to license their content doesn't actually fix the problem, because it doesn't increase the amount of content that they create, but rather helps line the pockets of executives and shareholders. Ironically, OpenAI's best hope for survival would be to fund as many news outlets as possible and directly incentivize them to do in-depth reporting, rather than proliferating a tech that unquestionably harms the media industry." https://www.wheresyoured.at/bubble-trouble/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The only reason bosses want to buy robots is to fire humans and lower their costs. That's why "AI art" is such a pisser. There are plenty of harmless ways to automate art production with software – everything from a "healing brush" in Photoshop to deepfake tools that let a video-editor alter the eye-lines of all the extras in a scene to shift the focus. A graphic novelist who models a room in The Sims and then moves the camera around to get traceable geometry for different angles is a centaur – they are genuinely offloading some finicky drudgework onto a robot that is perfectly attentive and vigilant.

But the pitch from "AI art" companies is "fire your graphic artists and replace them with botshit." They're pitching a world where the robots get to do all the creative stuff (badly) and humans have to work at a robotic pace, with robotic vigilance, in order to catch the mistakes that the robots make at superhuman speed.

Reverse centaurism is brutal. That's not news: Charlie Chaplin documented the problems of reverse centaurs nearly 100 years ago:" https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Search #SearchEngines #LLMs #Hallucinations: "A couple of days ago, Wharton professor Ethan Mollick, who studies the effects of AI and often writes about his own uses of it, summarized (on X) something that has become clear over the past year: “To most users, it isn't clear that LLMs don't work like search engines. This can lead to real issues when using them for vital, changing information. Frontier models make less mistakes, but they still make them. Companies need to do more to address users being misled by LLMs.”

It's certainly, painfully obvious by now that this is true." https://www.scu.edu/ethics/internet-ethics-blog/certainly-here-is-a-blog-post/

raymondpert, to ai

AI hallucination mitigation: two brains are better than one.

> As generative #AI (genAI) continues to move into broad use by the public and various enterprises, its adoption is sometimes plagued by errors, copyright infringement issues and outright #hallucinations, undermining trust in its accuracy. https://www.computerworld.com/article/3714290/ai-hallucination-mitigation-two-brains-are-better-than-one.html#tk.rss_all

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Hallucinations #Disinformation #Misinformation #Politics #Elections: "From tech companies, we need more than just pledges to keep chatbot hallucinations away from our elections. Companies should be more transparent by publicly disclosing information about vulnerabilities in their products and sharing evidence of how they are doing so by performing regular testing.

Until then, our limited review suggests that voters should probably steer clear of AI models for voting information. Voters should instead turn to local and state elections offices for reliable information about how and where they can cast their ballots. Elections officials should follow the model of Michigan Secretary of State Jocelyn Benson who, ahead of that state’s Democratic primary election, warned that “misinformation and the ability for voters to be confused or lied to or fooled,” was the paramount threat this year.

With hundreds of AI companies sprouting up, let’s make them compete on the accuracy of their products, rather than just on hype. Our democracy depends on it."

https://www.latimes.com/opinion/story/2024-03-08/primaries-voting-elections-ai-misinformation-plaforms-chatgpt

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

Nice paper about the inevitability of hallucinations in LLMs, with some nice and simple empirical experiments. For the TL;DR crowd:
"All LLMs will hallucinate."
"Without guardrails and fences, LLMs cannot be used for critical decision making."
"Without human control, LLMs cannot be used automatically in any safety-critical decision-making."
The authors make the relevant point that this does not make LLMs worthless.
https://arxiv.org/pdf/2401.11817.pdf
#AI #LLM #hallucinations #generativeAI

hiisikoloart, to Horror Finnish
@hiisikoloart@writing.exchange avatar

I suffer from hallucinations when I am going to sleep, and when I wake up. They can be auditory, visual, tactile, or even smells.

So imagine the horror of waking from a dream and hearing this...eerie humming/singing echoing all around me. Straight from a horror movie.

And. It. Doesn't. Stop.

Waited about 20 minutes, thinking I had lost it, before getting up and asking my partner if they could hear it too.

Thank Elders they could because FUCK ME it sounds haunting.

#horror #hallucinations #sleep

hiisikoloart,
@hiisikoloart@writing.exchange avatar

It was just our neighbour singing btw. Nothing supernatural, and I haven't lost my marbles yet.

Also hallucinations upon falling asleep and waking up are normal with both #IdiopathicHypersomnia and #Narcolepsy (type 1 and 2). It is not in IH criteria, but many of us have them anyway.

I have strong ones when I am extra stressed, and before my IH diagnosis I often thought I was losing my mind. People without sleep disorders should have no hallucinations, or only very rarely.

Jigsaw_You, to generativeAI
@Jigsaw_You@mastodon.nl avatar

Spot-on…

“If #hallucinations aren’t fixable, #generativeAI probably isn’t going to make a trillion dollars a year. And if it probably isn’t going to make a trillion dollars a year, it probably isn’t going to have the impact people seem to be expecting. And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is” @garymarcus

#AI

https://garymarcus.substack.com/p/what-if-generative-ai-turned-out

itnewsbot, to machinelearning

Rhyming AI-powered clock sometimes lies about the time, makes up words - A CAD render of the Poem/1 sitting on a bookshelf. (credit: M... - https://arstechnica.com/?p=1999895 #machinelearning #hallucinations #confabulation #hallucination #mattwebb #aiclock #chatgpt #chatgtp #biz #ai
