ErikJonker, to ai
@ErikJonker@mastodon.social avatar

Companies are trying to reduce the number of hallucinations in generative AI.
https://thenextweb.com/news/iris-reducing-ai-hallucinations-in-scientific-research

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "On March 27, a large group of artists and creators from across the web noticed the frightening extent to which a once-beloved, highly influential community platform of theirs had, like so many others, fallen prey to the artificial intelligence juggernauts plundering the internet.

As VFX animator Romain Revert (Minions, The Lorax) pointed out on X, the bots had come for his old home base of DeviantArt. Its social accounts were promoting “top sellers” on the platform, with usernames like “Isaris-AI” and “Mikonotai,” who reportedly made tens of thousands of dollars through bulk sales of autogenerated, dead-eyed 3D avatars. The sales weren’t exactly legit—an online artist known as WyerframeZ looked at those users’ followers and found pages of profiles with repeated names, overlapping biographies and account-creation dates, and zero creations of their own, making it apparent that various bots were involved in these “purchases.”

It’s not unlikely, as WyerframeZ surmised, that someone constructed a low-effort bot network that could hold up a self-perpetuating money-embezzlement scheme: Generate a bunch of free images and accounts, have them buy and boost one another in perpetuity, inflate metrics so that the “art” gets boosted by DeviantArt and reaches real humans, then watch the money pile up from DeviantArt revenue-sharing programs. Rinse, repeat.

After Revert declared this bot-on-bot fest to be “the downfall of DeviantArt,” myriad other artists and longtime users of the platform chimed in to share in the outrage that these artificial accounts were monopolizing DeviantArt’s promotional and revenue apparatuses. Several mentioned that they’d abandoned their DeviantArt accounts—all appearing to prove his dramatic point."

https://slate.com/technology/2024/05/deviantart-what-happened-ai-decline-lawsuit-stability.html

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Media #Journalism #News: "The promise of working alongside AI companies is easy to grasp. Publishers will get some money—Thompson would not disclose the financial elements of the partnership—and perhaps even contribute to AI models that are higher-quality or more accurate. Moreover, The Atlantic’s Product team will develop its own AI tools using OpenAI’s technology through a new experimental website called Atlantic Labs. Visitors will have to opt in to using any applications developed there. (Vox is doing something similar through a separate partnership with the company.)

But it’s just as easy to see the potential problems. So far, generative AI has not resulted in a healthier internet. Arguably quite the opposite. Consider that in recent days, Google has aggressively pushed an “AI Overview” tool in its Search product, presenting answers written by generative AI atop the usual list of links. The bot has suggested that users eat rocks or put glue in their pizza sauce when prompted in certain ways. ChatGPT and other OpenAI products may perform better than Google’s, but relying on them is still a gamble. Generative-AI programs are known to “hallucinate.” They operate according to directions in black-box algorithms. And they work by making inferences based on huge data sets containing a mix of high-quality material and utter junk. Imagine a situation in which a chatbot falsely attributes made-up ideas to journalists. Will readers make the effort to check? Who could be harmed?"

https://www.theatlantic.com/technology/archive/2024/05/a-devils-bargain-with-openai/678537/

AlexJimenez, to ai
@AlexJimenez@mas.to avatar

Inside Anthropic, the #AI Company Betting That Safety Can Be a Winning #Strategy

https://time.com/6980000/anthropic/

#DigitalTransformation #LLMs #GenerativeAI

adamsnotes, to generativeAI
@adamsnotes@me.dm avatar

AI clones of people are becoming a lot more common and a lot more worrying.

This one was taken down after Ali jumped through a bunch of hoops to prove her identity, but CivitAI currently only removes models if there is a complaint; they have no policy against creating models that impersonate real people.

--
What It’s Like Finding Your Nonconsensual AI Clone Online
https://www.404media.co/what-its-like-finding-your-nonconsensual-ai-clone-online/

#Deepfakes #GenerativeAI #CivitAI #404media

Nonilex, to OpenAI
@Nonilex@masto.ai avatar

#OpenAI finds its #tech being used for #propaganda & #US 2024 #ElectionInterference

#ChatGPT maker OpenAI found #Russia, #China, #Iran & #Israel groups using its #technology to #influence global political discourse, highlighting concerns #generative #ArtificialIntelligence is making it easier for state actors to run covert #propaganda campaigns as the presidential election nears.
#ForeignDisinformationCampaigns #disinformation #generativeAI
https://www.washingtonpost.com/technology/2024/05/30/openai-disinfo-influence-operations-china-russia/

Nonilex,
@Nonilex@masto.ai avatar

#OpenAI removed accounts associated w/well-known #propaganda ops in #Russia, #China & #Iran; an #Israeli political campaign firm; & a previously unknown group originating in Russia that the company’s researchers dubbed “#BadGrammar.” The groups used OpenAI’s #tech to write posts, translate them into various languages & build software that helped them automatically post to #SocialMedia.

#ForeignDisinformationCampaigns #ElectionInterference #disinformation #generativeAI

dalfen, to ai
@dalfen@mstdn.social avatar

Imagine— It might all be just a fad.


https://www.bbc.com/news/articles/c511x4g7x7jo

denis, to generativeAI
@denis@ruby.social avatar

Literally every single thing ChatGPT tells me is provably wrong.

Generative AI is a fucking train wreck.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #ContentModeration #LLMs #AIRegulation: "Drawing on the extensive history of study of the terms and conditions (T&C) and privacy policies of social media companies, this paper reports the results of pilot empirical work conducted in January-March 2023, in which T&C were mapped across a representative sample of generative AI providers as well as some downstream deployers. Our study looked at providers of multiple modes of output (text, image, etc), small and large sizes, and varying countries of origin. Although the study looked at terms relating to a wide range of issues including content restrictions and moderation, dispute resolution and consumer liability, the focus here is on copyright and data protection. Our early findings indicate the emergence of a “platformisation paradigm”, in which providers of generative AI attempt to position themselves as neutral intermediaries similarly to search and social media platforms, but without the governance increasingly imposed on these actors, and in contradistinction to their function as content generators rather than mere hosts for third party content. This study concludes that in light of these findings, new laws being drafted to rein in the power of “big tech” must be reconsidered carefully, if the imbalance of power between users and platforms in the social media era, only now being combatted, is not to be repeated via the private ordering of the providers of generative AI."

https://www.create.ac.uk/blog/2024/05/29/new-working-paper-private-ordering-and-generative-ai-what-can-we-learn-from-model-terms-and-conditions/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #LLMs #DataCenters #BigTech #Energy #WaterScarcity #FossilFuels #ClimateChange: "Large language models such as ChatGPT are some of the most energy-guzzling technologies of all. Research suggests, for instance, that about 700,000 litres of water could have been used to cool the machines that trained ChatGPT-3 at Microsoft’s data facilities. It is hardly news that the tech bubble’s self-glorification has obscured the uglier sides of this industry, from its proclivity for tax avoidance to its invasion of privacy and exploitation of our attention span. The industry’s environmental impact is a key issue, yet the companies that produce such models have stayed remarkably quiet about the amount of energy they consume – probably because they don’t want to spark our concern.

Google’s global datacentre and Meta’s ambitious plans for a new AI Research SuperCluster (RSC) further underscore the industry’s energy-intensive nature, raising concerns that these facilities could significantly increase energy consumption. Additionally, as these companies aim to reduce their reliance on fossil fuels, they may opt to base their datacentres in regions with cheaper electricity, such as the southern US, potentially exacerbating water consumption issues in drier parts of the world. Before making big announcements, tech companies should be transparent about the resource use required for their expansion plans."

https://www.theguardian.com/commentisfree/article/2024/may/30/ugly-truth-ai-chatgpt-guzzling-resources-environment?CMP=fb_a-technology_b-gdntech

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "A popular view of generative AI is that it’s unjustifiably expensive, chronically wasteful, rarely useful, and is being foisted on the general public for ideological reasons even though it makes the services they rely on worse. Governments are sure to be all over it."

https://www.ft.com/content/a60c3c7b-1c48-485d-adb7-5bc2b7b1b650

futurebird, (edited ) to random
@futurebird@sauropods.win avatar

These AI SEO spam operations have used lists of common searches to ensure that their pages come up first in searches in the “long fat tail” the kind of search where it used to be about 50/50 if you’d find a page addressing your needs. But, it used to be if you found something like “The top 15 smallest ants in the world” it wouldn’t be nonsense. It’d either exist and be the work of another person who cared OR you found nothing. Not so now! I can’t possibly over-stress how bad this is! 1/

NatureMC,
@NatureMC@mastodon.online avatar

@futurebird I don't think that you overstress. In social media (the amplifier of the whole thing), one can already recognise tendencies where knowledge loss as a cultural phenomenon is reminiscent of biodiversity loss. Experts still recognise it. But what if the baseline shift is no longer noticeable?

bornach,
@bornach@fosstodon.org avatar

@jeruyyap @MyWoolyMastadon @futurebird @seawall
As Adam Conover points out in his latest video,
https://youtu.be/P7NHABs76mg
the preponderance of low-quality content on the first page of search results affects all the major search engines. Switching to an alternative search provider has been made nearly impossible by the tyranny of the default, which squeezes out smaller players even when they have superior web-indexing methods.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Large publishers are forging ahead with voluntary agreements in the absence of legal regulatory clarity. But this leaves out smaller and local publishers and could undermine efforts to develop business model alternatives as opposed to one-off licensing opportunities.

Ad hoc approaches, however, risk worsening the compounding crises caused by the decline of local news and the scourge of disinformation. We are already seeing the proliferation of election related disinformation in the U.S. and around the world, from AI robocalls impersonating President Joe Biden to deepfakes of Moldovan candidates making false claims about alignment with Russia.

Renegotiating the relationship between tech platforms and the news industry must be a fundamental part of the efforts to support journalism and help news organizations adapt to the generative AI era."

https://niemanreports.org/articles/the-battle-over-using-journalism-to-build-ai-models-is-just-starting/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #OpenAI: "ChatGPT creator OpenAI is training a powerful new model to fuel its chatbot and image generation tools, the company said Tuesday. It is also launching a new committee focused on safety, following scrutiny over its safety efforts and several high-profile resignations.

The moves follow a controversy earlier this month in which OpenAI suspended a voice chatbot after actress Scarlett Johansson accused the company of copying her AI voice character from the movie Her.

OpenAI said the next model will “bring us to the next level of capabilities on our path to AGI,” which refers to artificial general intelligence — AI systems that could eventually match or surpass human capabilities."

https://www.semafor.com/article/05/28/2024/openai-forms-safety-council-led-by-sam-altman-and-trains-gpt-4-successor

crafty_crow, to ai
@crafty_crow@mastodon.sdf.org avatar

If AI tech bros are going to steal content for their generative AI, perhaps poisoning the well with mislabeled images, incorrect responses, and injecting instructions in content is well within our rights to fight back.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Search #SearchEngines #AISearch #Hallucinations #LLMs: "You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLM), which is what drives AI Overviews, and this feature "is still an unsolved problem."

So expect more of these weird and incredibly wrong snafus from AI Overviews despite efforts by Google engineers to fix them, such as this big whopper: 13 American presidents graduated from University of Wisconsin-Madison. (Hint: this is so not true.)

But Pichai seems to downplay the errors.

"There are still times it’s going to get it wrong, but I don’t think I would look at that and underestimate how useful it can be at the same time," he said. "I think that would be the wrong way to think about it.""
https://futurism.com/the-byte/ceo-google-ai-hallucinations

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #ChatGPT #Media #News #Journalism: "ChatGPT is by far the most widely recognised generative AI product – around 50% of the online population in the six countries surveyed have heard of it. It is also by far the most widely used generative AI tool in the six countries surveyed. That being said, frequent use of ChatGPT is rare, with just 1% using it on a daily basis in Japan, rising to 2% in France and the UK, and 7% in the USA. Many of those who say they have used generative AI have used it just once or twice, and it is yet to become part of people’s routine internet use.
In more detail, we find:

  • While there is widespread awareness of generative AI overall, a sizable minority of the public – between 20% and 30% of the online population in the six countries surveyed – have not heard of any of the most popular AI tools.
  • In terms of use, ChatGPT is by far the most widely used generative AI tool in the six countries surveyed, two or three times more widespread than the next most widely used products, Google Gemini and Microsoft Copilot.
  • Younger people are much more likely to use generative AI products on a regular basis. Averaging across all six countries, 56% of 18–24s say they have used ChatGPT at least once, compared to 16% of those aged 55 and over.
  • Roughly equal proportions across six countries say that they have used generative AI for getting information (24%) as creating various kinds of media, including text but also audio, code, images, and video (28%).
  • Just 5% across the six countries covered say that they have used generative AI to get the latest news."

https://reutersinstitute.politics.ox.ac.uk/what-does-public-six-countries-think-generative-ai-news

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "• Merely relying on the disclosure of a GenAI model’s statistical accuracy is insufficient, since it could lead to an “Accuracy Paradox”: the unintended consequence of users developing a misleading sense of reliability from accuracy disclosures alone. As accuracy metrics improve, users may overly trust the AI outputs without sufficient verification, increasing the risk of accepting erroneous information.

  • Increasing the accuracy of inputs, models, and outputs often comes with the cost of privacy, especially in GenAI context. This involves not only technical identifiability of the individuals involved, but also societal risks such as more accurate and precise targeting for commercial purposes, social sorting, and group privacy implications.
  • Overreliance on developers’ and deployers’ accuracy legal compliance is not pragmatic and is overoptimistic, which could ultimately become a burden for users with the tendency of using dark pattern. In this context, GenAI developers and deployers could use such manipulative design to shift the responsibility for data accuracy onto users.
  • We argue for content moderation as a tool to mitigate inaccuracy and untrustworthiness. Playing a critical role in ensuring the accuracy, reliability, and trustworthiness of GenAI, content moderation could filter flawed or harmful content, which involves refining detection methods to distinguish and exclude incorrect or misleading information from training data and model outputs.
  • Accuracy of training data cannot directly translate to the accuracy of output, especially in the context of hallucination. Even though most training data is reliable and trustworthy, the essential issue remains that the recombination of trustworthy data into new answers in a new context may lead to untrustworthiness..."

https://www.create.ac.uk/blog/2024/05/28/accuracy-of-training-data-and-model-outputs-in-generative-ai-create-response-to-the-information-commissioners-office-ico-consultation/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

Surprise, surprise: News publishers only care about Money!!

: "Publishers are deep in negotiations with tech firms such as OpenAI to sell their journalism as training for the companies’ models. It turns out that accurate, well-written news is one of the most valuable sources for these models, which have been hoovering up humans’ intellectual output without permission. These AI platforms need timely news and facts to get consumers to trust them. And now, facing the threat of lawsuits, they are pursuing business deals to absolve them of the theft. These deals amount to settling without litigation. The publishers willing to roll over this way aren’t just failing to defend their own intellectual property—they are also trading their own hard-earned credibility for a little cash from the companies that are simultaneously undervaluing them and building products quite clearly intended to replace them."

https://www.theatlantic.com/technology/archive/2024/05/fatal-flaw-publishers-making-openai-deals/678477/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "In this conversation, we discuss how Herndon collaborated with a human chorus and her “A.I. baby,” Spawn, on “PROTO”; how A.I. voice imitators grew out of electronic music and other musical genres; why Herndon prefers the term “collective intelligence” to “artificial intelligence”; why an “opt-in” model could help us retain more control of our work as A.I. trawls the internet for data; and much more."

https://www.youtube.com/watch?v=4MJ2D9uCLLA
