remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Hallucinations: "I want to be very clear: I am a cis woman and do not have a beard. But if I type “show me a picture of Alex Cranz” into the prompt window, Meta AI inevitably returns images of very pretty dark-haired men with beards. I am only some of those things!

Meta AI isn’t the only one to struggle with the minutiae of The Verge’s masthead. ChatGPT told me yesterday I don’t work at The Verge. Google’s Gemini didn’t know who I was (fair), but after telling me Nilay Patel was a founder of The Verge, it then apologized and corrected itself, saying he was not. (I assure you he was.)

When you ask these bots about things that actually matter they mess up, too. Meta’s 2022 launch of Galactica was so bad the company took the AI down after three days. Earlier this year, ChatGPT had a spell and started spouting absolute nonsense, but it also regularly makes up case law, leading to multiple lawyers getting into hot water with the courts.

The AI keeps screwing up because these computers are stupid. Extraordinary in their abilities and astonishing in their dimwittedness. I cannot get excited about the next turn in the AI revolution because that turn is into a place where computers cannot consistently maintain accuracy about even minor things."

https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Models like ChatGPT and Claude are deeply dependent on training data to improve their outputs, and their very existence is actively impeding the creation of the very thing they need to survive. While publishers like Axel Springer have cut deals to license their companies' data to ChatGPT for training purposes, this money isn't flowing to the writers that create the content that OpenAI and Anthropic need to grow their models much further. It's also worth considering that these AI companies may have already trained on this data. The Times sued OpenAI late last year for training itself on "millions" of articles, and I'd bet money that ChatGPT was trained on multiple Axel Springer publications along with anything else it could find publicly available on the web.

This is one of many near-impossible challenges for an AI industry that's yet to prove its necessity. While one could theoretically make bigger, more powerful chips (I'll get to that later), AI companies face a Kafkaesque bind where they can't improve a tool for automating the creation of content without human beings creating more content than they've ever created before. Paying publishers to license their content doesn't actually fix the problem, because it doesn't increase the amount of content that they create, but rather helps line the pockets of executives and shareholders. Ironically, OpenAI's best hope for survival would be to fund as many news outlets as possible and directly incentivize them to do in-depth reporting, rather than proliferating a tech that unquestionably harms the media industry." https://www.wheresyoured.at/bubble-trouble/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The only reason bosses want to buy robots is to fire humans and lower their costs. That's why "AI art" is such a pisser. There are plenty of harmless ways to automate art production with software – everything from a "healing brush" in Photoshop to deepfake tools that let a video-editor alter the eye-lines of all the extras in a scene to shift the focus. A graphic novelist who models a room in The Sims and then moves the camera around to get traceable geometry for different angles is a centaur – they are genuinely offloading some finicky drudgework onto a robot that is perfectly attentive and vigilant.

But the pitch from "AI art" companies is "fire your graphic artists and replace them with botshit." They're pitching a world where the robots get to do all the creative stuff (badly) and humans have to work at a robotic pace, with robotic vigilance, in order to catch the mistakes that the robots make at superhuman speed.

Reverse centaurism is brutal. That's not news: Charlie Chaplin documented the problems of reverse centaurs nearly 100 years ago:" https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle

m, to ai
@m@martinh.net avatar

Seize the memes of production! :ms_robot_headpats:

https://app.suno.ai/song/1bead4da-3c14-4082-9b5f-13b0a76af047/

"In a world of digital creation, I sing my song of light
But lurking in the shadows, a tale of endless night
Generative AIs, they steal from artists' hearts
Their creativity taken, ripped apart"

#AI #GenerativeAI #Music #Copyright #TrainingData

m,
@m@martinh.net avatar

:cursor_green: Leaping the Guard Rails

https://app.suno.ai/song/2ffa3423-2e8a-4a68-8fd3-584108193554/

"In a pixelated world, where bits collide
Hallucinations dance in 8-bit lullabies
AI models leaping, their guard rails untried
Spewing hate speech, casting shadows in the skies"

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "A couple of days ago, Wharton professor Ethan Mollick, who studies the effects of AI and often writes about his own uses of it, summarized (on X) something that has become clear over the past year: “To most users, it isn't clear that LLMs don't work like search engines. This can lead to real issues when using them for vital, changing information. Frontier models make less mistakes, but they still make them. Companies need to do more to address users being misled by LLMs.”

It's certainly, painfully obvious by now that this is true." https://www.scu.edu/ethics/internet-ethics-blog/certainly-here-is-a-blog-post/

raymondpert, to ai
@raymondpert@mstdn.social avatar

AI hallucination mitigation: two brains are better than one.

> As generative #AI (genAI) continues to move into broad use by the public and various enterprises, its adoption is sometimes plagued by errors, copyright infringement issues and outright #hallucinations, undermining trust in its accuracy. https://www.computerworld.com/article/3714290/ai-hallucination-mitigation-two-brains-are-better-than-one.html#tk.rss_all

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Hallucinations #Disinformation #Misinformation #Politics #Elections: "From tech companies, we need more than just pledges to keep chatbot hallucinations away from our elections. Companies should be more transparent by publicly disclosing information about vulnerabilities in their products and sharing evidence of how they are doing so by performing regular testing.

Until then, our limited review suggests that voters should probably steer clear of AI models for voting information. Voters should instead turn to local and state elections offices for reliable information about how and where they can cast their ballots. Elections officials should follow the model of Michigan Secretary of State Jocelyn Benson who, ahead of that state’s Democratic primary election, warned that “misinformation and the ability for voters to be confused or lied to or fooled,” was the paramount threat this year.

With hundreds of AI companies sprouting up, let’s make them compete on the accuracy of their products, rather than just on hype. Our democracy depends on it."

https://www.latimes.com/opinion/story/2024-03-08/primaries-voting-elections-ai-misinformation-plaforms-chatgpt

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

Nice paper about the inevitability of hallucinations in LLMs, with some nice and simple empirical experiments. For the TL;DR crowd:
"All LLMs will hallucinate."
"Without guardrails and fences, LLMs cannot be used for critical decision making."
"Without human control, LLMs cannot be used automatically in any safety-critical decision-making."
The authors make the relevant point that this does not make LLMs worthless.
https://arxiv.org/pdf/2401.11817.pdf

hiisikoloart, to Horror Finnish
@hiisikoloart@writing.exchange avatar

I suffer from hallucinations when I am going to sleep, and when I wake up. They can be auditory, visual, tactile, or even smells.

So imagine the horror of waking from a dream and hearing this...eerie humming/singing echoing all around me. Straight from a horror movie.

And. It. Doesn't. Stop.

Waited about 20min, thinking I had lost it, before getting up and asking my partner if they could hear it too.

Thank Elders they could because FUCK ME it sounds haunting.

Jigsaw_You, to generativeAI
@Jigsaw_You@mastodon.nl avatar

Spot-on…

“If hallucinations aren’t fixable, generative AI probably isn’t going to make a trillion dollars a year. And if it probably isn’t going to make a trillion dollars a year, it probably isn’t going to have the impact people seem to be expecting. And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is” @garymarcus

https://garymarcus.substack.com/p/what-if-generative-ai-turned-out

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

Rhyming AI-powered clock sometimes lies about the time, makes up words - A CAD render of the Poem/1 sitting on a bookshelf. (credit: M... - https://arstechnica.com/?p=1999895

itnewsbot, to generativeAI
@itnewsbot@schleuss.online avatar

OpenAI must defend ChatGPT fabrications after failing to defeat libel suit - It looks lik... - https://arstechnica.com/?p=1996758

gimulnautti, to machinelearning
@gimulnautti@mastodon.green avatar

One of the most common hallucinations is recommendation engines relentlessly pushing you political content from influencers you wouldn’t touch with a long stick.

And it’s not just a one-off, it’s constant. When the training set does not contain the inference needed, the system hallucinates it.

With recommendations, the only training parameter is engagement. With generative models, the parameters are found by the system independently, but the same problem generalises across.

grammargirl, to ai
@grammargirl@zirk.us avatar

"AI" continues to dominate word-of-the-year choices.

Dictionary.com just chose "hallucinate," and The Economist recently chose "ChatGPT."

https://content.dictionary.com/word-of-the-year-2023/

https://archive.is/KvFxc#selection-1159.22-1191.144

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

“Hallucinating” AI models help coin Cambridge Dictionary’s word of the year - Enlarge / A screenshot of the Cambridge Dictionary website where it ann... - https://arstechnica.com/?p=1984726

reallyflygreg, to ai
@reallyflygreg@mstdn.ca avatar

Well that's fun. I just read that these AIs can hallucinate, i.e. generate responses that are not reflective of the source data. What could go wrong?

nblr, to random
@nblr@chaos.social avatar

So… People put stuff in their robots.txt to “prevent” malicious scraping of their data for machine learning purposes. I hope everybody understands that this is just a “please don’t take my data” sign on the front lawn. We should be creating heaps of adversarial data instead. Data suitable to taint those datasets.
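The "sign on the front lawn" point can be made concrete with a small sketch (my own illustration, not from the post, with a hypothetical rule set): robots.txt is a purely advisory protocol, and Python's standard `urllib.robotparser` shows that a crawler has to *ask* whether it may fetch a URL; nothing enforces the answer.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt trying to keep an AI scraper out
# while allowing everyone else.
rules = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler checks before fetching...
print(parser.can_fetch("GPTBot", "https://example.com/post/1"))       # False
print(parser.can_fetch("SomeBrowser", "https://example.com/post/1"))  # True
# ...but a non-compliant scraper simply never calls can_fetch at all.
```

The whole mechanism lives on the *client* side, which is exactly why it amounts to a "please don't take my data" sign.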

robcornelius,

@nblr

From what I have seen so far, the real data that has been scraped and fed into these models produces garbage, sorry, gibberish.

Perhaps the sum total of human knowledge is gibberish after all. I have always suspected as much.

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

Google’s AI assistant can now read your emails, plan trips, “double-check” answers - On Tuesday, Google announced up... - https://arstechnica.com/?p=1969226

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "1. WTF, GPT: why did you tell me it was copyrighted if you knew it’s in the public domain?

2. Do you know it’s in the public domain now, but you didn’t know it a few seconds ago?

3. Why do I sound like the most obnoxious defense attorney on Law & Order?

4. That quote feels a little more promising – especially “it is always a contemporary emotion that we experience” and also “a snapshot that has survived and which we had not suspected of having taken”.

5. It’s weird that my memory feels so vague, and I have no idea where in the book that could have been, even though I reread Vol 2 in June.

6. Why is the phrase “c’est toujours une émotion contemporaine que nous en éprouvons” getting zero hits on Google and on Google Books?

7. Why does À l’ombre have to be divided into three separate files in the free web edition?

8. How is that quote not in any of the 3 files?

9. What is a polite way of phrasing this?"

https://www.theguardian.com/books/2023/sep/05/proust-chatgpt-and-the-case-of-the-forgotten-quote-elif-batuman

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Although chatbots such as ChatGPT can facilitate cost-effective text generation and editing, factually incorrect responses (hallucinations) limit their utility. This study evaluates one particular type of hallucination: fabricated bibliographic citations that do not represent actual scholarly works. We used ChatGPT-3.5 and ChatGPT-4 to produce short literature reviews on 42 multidisciplinary topics, compiling data on the 636 bibliographic citations (references) found in the 84 papers. We then searched multiple databases and websites to determine the prevalence of fabricated citations, to identify errors in the citations to non-fabricated papers, and to evaluate adherence to APA citation format. Within this set of documents, 55% of the GPT-3.5 citations but just 18% of the GPT-4 citations are fabricated. Likewise, 43% of the real (non-fabricated) GPT-3.5 citations but just 24% of the real GPT-4 citations include substantive citation errors. Although GPT-4 is a major improvement over GPT-3.5, problems remain."

https://www.nature.com/articles/s41598-023-41032-5

lauren, to random
@lauren@mastodon.laurenweinstein.org avatar

By and large, after many, many years of dreams during sleep, I have come to the conclusion that they are probably just artifacts from neural management functions (sorting, retrieval, merging and storing, garbage collection, etc.), and have no major significance in and of themselves.

paninid,
@paninid@mastodon.world avatar

@lauren
Dreams are hallucinations while we sleep.

Artificial neural networks connect disparate bits of information that could be plausibly connected, which we view as a hallucination.

While we’re dreaming, our organic neural network connects disparate bits of information which could be plausibly connected, but are not necessary or helpful, so it flushes them out via dreams, as sleeping hallucinations.

danslerush, to ChatGPT
@danslerush@floss.social avatar

« If ChatGPT is fabricating code libraries (packages), attackers could use these to spread malicious packages without using familiar techniques like typosquatting or masquerading.

Those techniques are suspicious and already detectable. But if an attacker can create a package to replace the “fake” packages recommended by ChatGPT, they might be able to get a victim to download and use it. »

https://vulcan.io/blog/ai-hallucinations-package-risk
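One cheap defense against this attack pattern is to screen LLM-suggested dependency names against a vetted allowlist before installing anything, so a hallucinated package name is flagged rather than fetched blindly. A minimal sketch (the function, the allowlist, and the package names are all invented for illustration):

```python
# Hypothetical vetted set maintained by your team; in practice this
# would be a curated internal list, not three hard-coded names.
KNOWN_GOOD = {"requests", "numpy", "flask"}

def screen_dependencies(requested):
    """Split requested package names into (approved, suspicious)."""
    approved = [name for name in requested if name.lower() in KNOWN_GOOD]
    suspicious = [name for name in requested if name.lower() not in KNOWN_GOOD]
    return approved, suspicious

# An LLM-suggested requirements list containing one invented name:
ok, risky = screen_dependencies(["requests", "huggingface-cli-tools"])
print(ok)     # ['requests']
print(risky)  # ['huggingface-cli-tools']
```

Anything in the `suspicious` bucket gets human review instead of a reflexive `pip install`, which is precisely the step the attack described above relies on the victim skipping.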

alanrycroft, to ChatGPT
@alanrycroft@mastodon.world avatar

ChatGPT’s ‘hallucinations’ undermine credibility and create legal trouble

If an AI platform publishes or creates content that is false, significant harm can be inflicted

https://www.thestar.com/opinion/contributors/chatgpt-s-hallucinations-undermine-credibility-and-create-legal-trouble/article_7bfd8bc5-2550-50c8-b116-516d1b42eeb4.html

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

Report: OpenAI holding back GPT-4 image features on fears of privacy issues - OpenAI ha... - https://arstechnica.com/?p=1954677 #facialrecognition #machinelearning #hallucinations #confabulation #blindness #aiethics #bemyeyes #biz #openai #blind #gpt-4 #ai
