remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #AIHype #Media #News #Journalism: "More broadly, across news media coverage of AI in general, reviewing 30 published studies, Saba Rebecca Brause and her coauthors find that, while there are of course exceptions, most research so far find not just a strong increase in the volume of reporting on AI, but also “largely positive evaluations and economic framing” of these technologies.

So, perhaps, as Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), has written on X: “The same news orgs hype stuff up during ‘AI summers’ without even looking into their archives to see what they wrote decades ago?”

There are some really good reporters doing important work to help people understand AI—as well as plenty of sensationalist coverage focused on killer robots and wild claims about possible future existential risks.

But, more than anything, research on how news media cover AI overall suggests that Gebru is largely right – the coverage tends to be led by industry sources, and often takes claims about what the technology can and can’t do, and might be able to do in the future, at face value in ways that contribute to the hype cycle."

https://reutersinstitute.politics.ox.ac.uk/news/how-news-coverage-often-uncritical-helps-build-ai-hype

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "This contradiction is at the heart of what makes OpenAI profoundly frustrating for those of us who care deeply about ensuring that AI really does go well and benefits humanity. Is OpenAI a buzzy, if midsize tech company that makes a chatty personal assistant, or a trillion-dollar effort to create an AI god?

The company’s leadership says they want to transform the world, that they want to be accountable when they do so, and that they welcome the world’s input into how to do it justly and wisely.

But when there’s real money at stake — and there are astounding sums of real money at stake in the race to dominate AI — it becomes clear that they probably never intended for the world to get all that much input. Their process ensures former employees — those who know the most about what’s happening inside OpenAI — can’t tell the rest of the world what’s going on.

The website may have high-minded ideals, but their termination agreements are full of hard-nosed legalese. It’s hard to exercise accountability over a company whose former employees are restricted to saying “I resigned.”" https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release

1br0wn, to generativeAI
@1br0wn@eupolicy.social avatar

🇬🇧 minister wants to create a “framework or policy” around #GenerativeAI model training transparency but noted “very complex international problems that are fast moving”. She said the UK needed to ensure it had “a very dynamic regulatory environment”. https://on.ft.com/3ULCFn1

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "What I love, more than anything, is the quality that makes AI such a disaster: If it sees a space, it will fill it—with nonsense, with imagined fact, with links to fake websites. It possesses an absolute willingness to spout foolishness, balanced only by its carefree attitude toward plagiarism. AI is, very simply, a totally shameless technology.
(...)
I must assume that eventually an army of shame engineers will rise up, writing guilt-inducing code in order to make their robots more convincingly human. But it doesn’t mean I love the idea. Because right now you can see the house of cards clearly: By aggregating the world’s knowledge, chomping it into bits with GPUs, and emitting it as multi-gigabyte software that somehow knows what to say next, we've made the funniest parody of humanity ever. These models have all of our qualities, bad and good. Helpful, smart, know-it-alls with tendencies to prejudice, spewing statistics and bragging like salesmen at the bar. They mirror the arrogant, repetitive ramblings of our betters, the horrific confidence that keeps driving us over the same cliffs. That arrogance will be sculpted down and smoothed over, but it will have been the most accurate representation of who we truly are to exist so far, a real mirror of our folly, and I will miss it when it goes."

https://www.wired.com/story/generative-ai-totally-shameless/

attacus, to ai
@attacus@aus.social avatar

Most products and business problems require deterministic solutions.
Generative models are not deterministic.
Every single company that has ended up in the news for an AI gaffe has failed to grasp this distinction.

There’s no hammering “hallucination” out of a generative model; it’s baked into how the models work. You just end up spending so much time papering over the cracks in the façade that you end up with a beautiful découpage.

#AI #generativeAI

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Slack #AITraining #Copyright: "It all kicked off last night, when a note on Hacker News raised the issue of how Slack trains its AI services, by way of a straight link to its privacy principles — no additional comment was needed. That post kicked off a longer conversation — and what seemed like news to current Slack users — that Slack opts users in by default to its AI training, and that you need to email a specific address to opt out.

That Hacker News thread then spurred multiple conversations and questions on other platforms: There is a newish, generically named product called “Slack AI” that lets users search for answers and summarize conversation threads, among other things, but why is that not once mentioned by name on that privacy principles page in any way, even to make clear if the privacy policy applies to it? And why does Slack reference both “global models” and “AI models?”

Between people being confused about where Slack is applying its AI privacy principles, and people being surprised and annoyed at the idea of emailing to opt out — at a company that makes a big deal of touting that “You control your data” — Slack does not come off well."

https://techcrunch.com/2024/05/17/slack-under-attack-over-sneaky-ai-training-policy/?guccounter=1

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #AIHype #AGI: "The reality is that no matter how much OpenAI, Google, and the rest of the heavy hitters in Silicon Valley might want to continue the illusion that generative AI represents a transformative moment in the history of digital technology, the truth is that their fantasy is getting increasingly difficult to maintain. The valuations of AI companies are coming down from their highs and major cloud providers are tamping down the expectations of their clients for what AI tools will actually deliver. That’s in part because the chatbots are still making a ton of mistakes in the answers they give to users, including during Google’s I/O keynote. Companies also still haven’t figured out how they’re going to make money off all this expensive tech, even as the resource demands are escalating so much their climate commitments are getting thrown out the window."

https://disconnect.blog/ai-hype-is-over-ai-exhaustion-is-setting-in/

br00t4c, to generativeAI
@br00t4c@mastodon.social avatar

STAT+: Venture capitalist Bob Kocher on generative AI startups, Change Healthcare cyberattack

https://www.statnews.com/2024/05/17/bob-kocher-venture-capitalist-health-tech-stat-summit/?utm_campaign=rss

FatherEnoch, to ai
@FatherEnoch@mastodon.online avatar

AI might be cool, but it’s also a big fat liar, and we should probably be talking about that more.

https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong
remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Hallucinations: "I want to be very clear: I am a cis woman and do not have a beard. But if I type “show me a picture of Alex Cranz” into the prompt window, Meta AI inevitably returns images of very pretty dark-haired men with beards. I am only some of those things!

Meta AI isn’t the only one to struggle with the minutiae of The Verge’s masthead. ChatGPT told me yesterday I don’t work at The Verge. Google’s Gemini didn’t know who I was (fair), but after telling me Nilay Patel was a founder of The Verge, it then apologized and corrected itself, saying he was not. (I assure you he was.)

When you ask these bots about things that actually matter they mess up, too. Meta’s 2022 launch of Galactica was so bad the company took the AI down after three days. Earlier this year, ChatGPT had a spell and started spouting absolute nonsense, but it also regularly makes up case law, leading to multiple lawyers getting into hot water with the courts.

The AI keeps screwing up because these computers are stupid. Extraordinary in their abilities and astonishing in their dimwittedness. I cannot get excited about the next turn in the AI revolution because that turn is into a place where computers cannot consistently maintain accuracy about even minor things."

https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #SocialSciences #Humanities: "With Senate Majority Leader Chuck Schumer releasing a sweeping “roadmap” for AI legislation today and major product announcements from OpenAI and Google, it’s been a big week for AI… and it’s only Wednesday.

But amid the ever-quickening pace of action, some observers wonder if government is looking at the tech industry with the right perspective. A report shared first with DFD from the nonprofit Data & Society argues that in order for powerful AI to integrate successfully with humanity, it must actually feature… the humanities.

Data & Society’s Serena Oduro and Tamara Kneese write that social scientists and other researchers should be directly involved in federally funded efforts to regulate and analyze AI. They say that given the unpredictable impact it might have on how people live, work and interact with institutions, AI development should involve non-STEM experts at every step.

“Especially with a general purpose technology, it is very hard to anticipate what exactly this technology will be used for,” said Kneese, a Data & Society senior researcher."

https://www.politico.com/newsletters/digital-future-daily/2024/05/15/ai-data-society-report-humanities-00158195

remixtures, to Sony Portuguese
@remixtures@tldr.nettime.org avatar

Sony Music is the prototype of the company that uses artists as mere puppets to get the only thing it really wants: free money extracted through IP rents. It's a parasite that contributes nothing to the promotion of arts and science.

: "Sony Music is sending warning letters to more than 700 artificial intelligence developers and music streaming services globally in the latest salvo in the music industry’s battle against tech groups ripping off artists.

The Sony Music letter, which has been seen by the Financial Times, expressly prohibits AI developers from using its music — which includes artists such as Harry Styles, Adele and Beyoncé — and opts out of any text and data mining of any of its content for any purposes such as training, developing or commercialising any AI system.

Sony Music is sending the letter to companies developing AI systems including OpenAI, Microsoft, Google, Suno and Udio, according to those close to the group.

The world’s second-largest music group is also sending separate letters to streaming platforms, including Spotify and Apple, asking them to adopt “best practice” measures to protect artists and songwriters and their music from scraping, mining and training by AI developers without consent or compensation. It has asked them to update their terms of service, making it clear that mining and training on its content is not permitted.

Sony Music declined to comment further."

https://www.ft.com/content/c5b93b23-9f26-4e6b-9780-a5d3e5e7a409

drahardja, to ai
@drahardja@sfba.social avatar

On the other hand, successful lawsuits against companies for the output of lousy chatbots will put a dollar amount on the liability of using chatbots to talk to customers, and may actually reduce their usage. https://mastodon.social/@arstechnica/112452961167345476

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #AIHype: "The reality is that A.I. models can often prepare a decent first draft. But I find that when I use A.I., I have to spend almost as much time correcting and revising its output as it would have taken me to do the work myself.

And consider for a moment the possibility that perhaps A.I. isn’t going to get that much better anytime soon. After all, the A.I. companies are running out of new data on which to train their models, and they are running out of energy to fuel their power-hungry A.I. machines. Meanwhile, authors and news organizations (including The New York Times) are contesting the legality of having their data ingested into the A.I. models without their consent, which could end up forcing quality data to be withdrawn from the models.

Given these constraints, it seems just as likely to me that generative A.I. could end up like the Roomba, the mediocre vacuum robot that does a passable job when you are home alone but not if you are expecting guests."

https://www.nytimes.com/2024/05/15/opinion/artificial-intelligence-ai-openai-chatgpt-overrated-hype.html

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #LLMs #ParetoCurves: "Which is the most accurate AI system for generating code? Surprisingly, there isn’t currently a good way to answer questions like these.

Based on HumanEval, a widely used benchmark for code generation, the most accurate publicly available system is LDB (short for LLM debugger).1 But there’s a catch. The most accurate generative AI systems, including LDB, tend to be agents,2 which repeatedly invoke language models like GPT-4. That means they can be orders of magnitude more costly to run than the models themselves (which are already pretty costly). If we eke out a 2% accuracy improvement for 100x the cost, is that really better?

In this post, we argue that:

  • AI agent accuracy measurements that don’t control for cost aren’t useful.

  • Pareto curves can help visualize the accuracy-cost tradeoff.

  • Current state-of-the-art agent architectures are complex and costly but no more accurate than extremely simple baseline agents that cost 50x less in some cases.

  • Proxies for cost such as parameter count are misleading if the goal is to identify the best system for a given task. We should directly measure dollar costs instead.

  • Published agent evaluations are difficult to reproduce because of a lack of standardization and questionable, undocumented evaluation methods in some cases."

https://www.aisnakeoil.com/p/ai-leaderboards-are-no-longer-useful
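The Pareto-curve idea from the post can be sketched in a few lines of Python. The systems and numbers below are hypothetical placeholders, not the post's actual benchmark results: the point is only that a system which is both costlier and less accurate than another (here, "complex-agent-A") drops off the frontier.

```python
# Hypothetical (cost in dollars per task, accuracy) results for four systems.
# Invented for illustration — not real HumanEval numbers.
systems = {
    "simple-baseline": (0.02, 0.71),
    "retry-baseline":  (0.05, 0.74),
    "complex-agent-A": (1.80, 0.73),  # dominated: pricier and less accurate
    "complex-agent-B": (2.50, 0.75),
}

def pareto_frontier(results):
    """Keep only systems not dominated by a cheaper, at-least-as-accurate one."""
    frontier = []
    # Sweep from cheapest to most expensive; keep a system only if it
    # improves on the best accuracy seen so far.
    for name, (cost, acc) in sorted(results.items(), key=lambda kv: kv[1][0]):
        if not frontier or acc > frontier[-1][2]:
            frontier.append((name, cost, acc))
    return frontier

for name, cost, acc in pareto_frontier(systems):
    print(f"{name}: ${cost:.2f}/task at {acc:.0%} accuracy")
```

Reporting the frontier rather than a single accuracy column is what makes the "2% better for 100x the cost" trade-off visible at a glance.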

drahardja, to generativeAI
@drahardja@sfba.social avatar

Much as I dislike the theft of human labor that feeds many of the #generativeAI products we see today, I have to agree with @pluralistic that #copyright law is the wrong way to address the problem.

To frame the issue concretely: think of whom copyright law has benefited in the past, and then explain how it would benefit the individual creator when it is applied to #AI. (Hint: it won’t.)

Copyright law is already abused and extended to an absurd degree today. It already overreaches. It impoverishes society by putting up barriers to creation and allowing toll-collectors to exist between citizen artists and their audience.

Labor law is likely what we need to lean on. #unions and #guilds protect creators in a way that copyright cannot. Inequality and unequal bargaining power that lead to exploitation of artists and workers is what we need to address head-on.

Copyright will not save us.

“AI "art" and uncanniness”

https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The rapid growth of the technology industry and the increasing reliance on cloud computing and artificial intelligence have led to a boom in the construction of data centers across the United States. Electric vehicles, wind and solar energy, and the smart grid are particularly reliant on data centers to optimize energy utilization. These facilities house thousands of servers that require constant cooling to prevent overheating and ensure optimal performance.

Unfortunately, many data centers rely on water-intensive cooling systems that consume millions of gallons of potable (“drinking”) water annually. A single data center can consume up to 3 million to 5 million gallons of drinking water per day, enough to supply thousands of households or farms.

The increasing use and training of AI models has further exacerbated the water consumption challenges faced by data centers."

https://archive.ph/7bunV
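The "enough to supply thousands of households" claim is easy to sanity-check with back-of-the-envelope arithmetic, assuming an average US household uses roughly 300 gallons of water per day (an assumed figure, not from the article):

```python
# Rough check of the article's claim that 3–5 million gallons/day is
# enough for thousands of households.
GALLONS_PER_HOUSEHOLD_PER_DAY = 300  # assumption, not from the article

for data_center_gallons in (3_000_000, 5_000_000):
    households = data_center_gallons // GALLONS_PER_HOUSEHOLD_PER_DAY
    print(f"{data_center_gallons:,} gal/day ≈ {households:,} households")
```

Under that assumption, a single data center's 3–5 million gallons per day works out to roughly 10,000–17,000 households, consistent with the article's "thousands."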

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The smooth interactivity that OpenAI has laboured hard to enable does well to paper over the cracks of the underlying technology. When ChatGPT first elbowed its way noisily into our lives in November 2022, those who had been following the technology for decades pointed out that AI in its current form was little more than snazzy pattern-matching technology – but they were drowned out by the excited masses. The next step towards human-like interaction is only going to amplify the din.

That’s great news for OpenAI, a company already valued at more than $80bn, and with investment from the likes of Microsoft. Its CEO, Sam Altman, tweeted last week that GPT-4o “feels like magic to me”. It’s also good news for others in the AI space, who are capitalising on the ubiquity of the technology and layering it into every aspect of our lives. Microsoft Word and PowerPoint now come with generative AI tools folded into them. Meta, the parent company of Facebook and Instagram, is putting its AI chatbot assistant into its apps in many countries, much to some users’ chagrin.

But it’s less good for ordinary users. Less friction between asking an AI system to do something and it actually completing the task is good for ease of use, but it also helps us forget that we’re not interacting with sentient beings. We need to remember that, because AI is not infallible; it comes with biases and environmental issues, and reflects the interests of its makers. These pressing issues are explored in my book, and the experts I spoke to tell me they represent significant concerns for the future."
https://www.theguardian.com/commentisfree/article/2024/may/14/chat-gtp-40-ai-human-corporate-product

bespacific, to generativeAI
@bespacific@newsie.social avatar

Fake studies have flooded publishers of top academic journals, leading to thousands of retractions and millions of dollars in lost revenue. The biggest hit has come to the 217-year-old Wiley, based in Hoboken, NJ, which announced it is closing 19 journals, some of which were infected by large-scale research fraud. Wiley has reportedly had to retract more than 11,300 papers recently “that appeared compromised,” as AI makes it easier for paper mills to peddle fake research. https://www.wsj.com/science/academic-studies-research-paper-mills-journals-publishing-f5a3d4bc

Andbaker, to academia
@Andbaker@aus.social avatar

Uncited generative AI use by students in coursework

I teach a course where I allow the use of generative AI. My university rules allow this, and the students are instructed that they must cite any use of generative AI. I have set the same Laboratory Report coursework for the last two years. Students submit their work through Turnitin, so I can see what the Turnitin AI checker is reporting.

http://andy-baker.org/2024/05/15/uncited-generative-ai-use-by-students-in-coursework/

#academia #AI #generativeAI #teaching #education

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #AITraining #Copyright #GenerativeAI #IP #Creativity #Art: "Creating an individual bargainable copyright over training will not improve the material conditions of artists' lives – all it will do is change the relative shares of the value we create, shifting some of that value from tech companies that hate us and want us to starve to entertainment companies that hate us and want us to starve.

As an artist, I'm foursquare against anything that stands in the way of making art. As an artistic worker, I'm entirely committed to things that help workers get a fair share of the money their work creates, feed their families and pay their rent.

I think today's AI art is bad, and I think tomorrow's AI art will probably be bad, but even if you disagree (with either proposition), I hope you'll agree that we should be focused on making sure art is legal to make and that artists get paid for it.

Just because copyright won't fix the creative labor market, it doesn't follow that nothing will. If we're worried about labor issues, we can look to labor law to improve our conditions."

https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand
