remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #AIHype #Media #News #Journalism: "More broadly, across news media coverage of AI in general, reviewing 30 published studies, Saba Rebecca Brause and her coauthors find that, while there are of course exceptions, most research so far finds not just a strong increase in the volume of reporting on AI, but also “largely positive evaluations and economic framing” of these technologies.

So, perhaps, as Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), has written on X: “The same news orgs hype stuff up during ‘AI summers’ without even looking into their archives to see what they wrote decades ago?”

There are some really good reporters doing important work to help people understand AI—as well as plenty of sensationalist coverage focused on killer robots and wild claims about possible future existential risks.

But, more than anything, research on how news media cover AI overall suggests that Gebru is largely right – the coverage tends to be led by industry sources, and often takes claims about what the technology can and can’t do, and might be able to do in the future, at face value in ways that contribute to the hype cycle."

https://reutersinstitute.politics.ox.ac.uk/news/how-news-coverage-often-uncritical-helps-build-ai-hype

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"This contradiction is at the heart of what makes OpenAI profoundly frustrating for those of us who care deeply about ensuring that AI really does go well and benefits humanity. Is OpenAI a buzzy, if midsize, tech company that makes a chatty personal assistant, or a trillion-dollar effort to create an AI god?

The company’s leadership says they want to transform the world, that they want to be accountable when they do so, and that they welcome the world’s input into how to do it justly and wisely.

But when there’s real money at stake — and there are astounding sums of real money at stake in the race to dominate AI — it becomes clear that they probably never intended for the world to get all that much input. Their process ensures former employees — those who know the most about what’s happening inside OpenAI — can’t tell the rest of the world what’s going on.

The website may have high-minded ideals, but their termination agreements are full of hard-nosed legalese. It’s hard to exercise accountability over a company whose former employees are restricted to saying “I resigned.”" https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release

1br0wn, to generativeAI
@1br0wn@eupolicy.social avatar

🇬🇧 minister wants to create a “framework or policy” around #GenerativeAI model training transparency but noted “very complex international problems that are fast moving”. She said the UK needed to ensure it had “a very dynamic regulatory environment”. https://on.ft.com/3ULCFn1

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #ChatBots: "What I love, more than anything, is the quality that makes AI such a disaster: If it sees a space, it will fill it—with nonsense, with imagined fact, with links to fake websites. It possesses an absolute willingness to spout foolishness, balanced only by its carefree attitude toward plagiarism. AI is, very simply, a totally shameless technology.
(...)
I must assume that eventually an army of shame engineers will rise up, writing guilt-inducing code in order to make their robots more convincingly human. But it doesn’t mean I love the idea. Because right now you can see the house of cards clearly: By aggregating the world’s knowledge, chomping it into bits with GPUs, and emitting it as multi-gigabyte software that somehow knows what to say next, we've made the funniest parody of humanity ever. These models have all of our qualities, bad and good. Helpful, smart, know-it-alls with tendencies to prejudice, spewing statistics and bragging like salesmen at the bar. They mirror the arrogant, repetitive ramblings of our betters, the horrific confidence that keeps driving us over the same cliffs. That arrogance will be sculpted down and smoothed over, but it will have been the most accurate representation of who we truly are to exist so far, a real mirror of our folly, and I will miss it when it goes."

https://www.wired.com/story/generative-ai-totally-shameless/

attacus, to ai
@attacus@aus.social avatar

Most products and business problems require deterministic solutions.
Generative models are not deterministic.
Every single company who has ended up in the news for an AI gaffe has failed to grasp this distinction.

There’s no hammering “hallucination” out of a generative model; it’s baked into how the models work. You just end up spending so much time papering over the cracks in the façade that you end up with a beautiful découpage.
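The determinism point above can be made concrete with a toy sketch (a hypothetical three-token distribution, standard-library Python only): greedy decoding always returns the same token, while the sampling that generative models typically use can return a different answer on every call — which is why their output can’t simply be pinned down after the fact.

```python
import random

# Hypothetical next-token distribution, for illustration only.
probs = {"yes": 0.5, "no": 0.3, "maybe": 0.2}

def sample_token(rng):
    """Stochastic decoding: draws a token according to the distribution,
    so repeated calls can yield different outputs."""
    return rng.choices(list(probs), weights=probs.values())[0]

def greedy_token():
    """Greedy decoding: always picks the most probable token.
    Deterministic, but not how generative chatbots usually decode."""
    return max(probs, key=probs.get)

rng = random.Random()  # unseeded: results vary from run to run
samples = {sample_token(rng) for _ in range(1000)}
print(samples)         # usually all three tokens show up
print(greedy_token())  # always "yes"
```

The nondeterminism isn’t a bug to be patched out; sampling is what gives these models their fluency, which is exactly the point the post is making.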

#AI #generativeAI

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Slack #AITraining #Copyright: "It all kicked off last night, when a note on Hacker News raised the issue of how Slack trains its AI services, by way of a straight link to its privacy principles — no additional comment was needed. That post kicked off a longer conversation — and what seemed like news to current Slack users — that Slack opts users in by default to its AI training, and that you need to email a specific address to opt out.

That Hacker News thread then spurred multiple conversations and questions on other platforms: There is a newish, generically named product called “Slack AI” that lets users search for answers and summarize conversation threads, among other things, but why is that not once mentioned by name on that privacy principles page in any way, even to make clear if the privacy policy applies to it? And why does Slack reference both “global models” and “AI models?”

Between people being confused about where Slack is applying its AI privacy principles, and people being surprised and annoyed at the idea of emailing to opt out — at a company that makes a big deal of touting that “You control your data” — Slack does not come off well."

https://techcrunch.com/2024/05/17/slack-under-attack-over-sneaky-ai-training-policy/?guccounter=1

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

remixtures,
@remixtures@tldr.nettime.org avatar

"Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other colead. The group’s work will be absorbed into OpenAI’s other research efforts.

Sutskever’s departure made headlines because although he’d helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board." https://www.wired.com/story/openai-superalignment-team-disbanded/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #AIHype #AGI: "The reality is that no matter how much OpenAI, Google, and the rest of the heavy hitters in Silicon Valley might want to continue the illusion that generative AI represents a transformative moment in the history of digital technology, the truth is that their fantasy is getting increasingly difficult to maintain. The valuations of AI companies are coming down from their highs and major cloud providers are tamping down the expectations of their clients for what AI tools will actually deliver. That’s in part because the chatbots are still making a ton of mistakes in the answers they give to users, including during Google’s I/O keynote. Companies also still haven’t figured out how they’re going to make money off all this expensive tech, even as the resource demands are escalating so much their climate commitments are getting thrown out the window."

https://disconnect.blog/ai-hype-is-over-ai-exhaustion-is-setting-in/

remixtures,
@remixtures@tldr.nettime.org avatar

The opposite view: "There’s universal agreement in the tech world that AI is the biggest thing since the internet, and maybe bigger. And when non-techies see the products for themselves, they most often become believers too. (Including Joe Biden, after a March 2023 demo of ChatGPT.) That’s why Microsoft is well along on a total AI reinvention, why Mark Zuckerberg is now refocusing Meta to create artificial general intelligence, why Amazon and Apple are desperately trying to keep up, and why countless startups are focusing on AI. And because all of these companies are trying to get an edge, the competitive fervor is ramping up new innovations at a frantic pace. Do you think it was a coincidence that OpenAI made its announcement a day before Google I/O?

Skeptics might try to claim that this is an industry-wide delusion, fueled by the prospect of massive profits. But the demos aren’t lying. We will eventually become acclimated to the AI marvels unveiled this week. The smartphone once seemed exotic; now it’s an appendage no less critical to our daily life than an arm or a leg. At a certain point AI’s feats, too, may not seem magical any more. But the AI revolution will change our lives, and change us, for better or worse. And we haven’t even seen GPT-5 yet." https://link.wired.com/view/5fda497df526221fe830f4d4l2x75.27v/4a13cfeb

br00t4c, to generativeAI
@br00t4c@mastodon.social avatar

STAT+: Venture capitalist Bob Kocher on generative AI startups, Change Healthcare cyberattack

#capital #generativeai

https://www.statnews.com/2024/05/17/bob-kocher-venture-capitalist-health-tech-stat-summit/?utm_campaign=rss

FatherEnoch, to ai
@FatherEnoch@mastodon.online avatar

AI might be cool, but it’s also a big fat liar, and we should probably be talking about that more.

https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong

kyonshi,
@kyonshi@dice.camp avatar

@FatherEnoch yeah, we are months away from the next newscycle of "this company used AI and now they have to pay big bucks"

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Hallucinations: "I want to be very clear: I am a cis woman and do not have a beard. But if I type “show me a picture of Alex Cranz” into the prompt window, Meta AI inevitably returns images of very pretty dark-haired men with beards. I am only some of those things!

Meta AI isn’t the only one to struggle with the minutiae of The Verge’s masthead. ChatGPT told me yesterday I don’t work at The Verge. Google’s Gemini didn’t know who I was (fair), but after telling me Nilay Patel was a founder of The Verge, it then apologized and corrected itself, saying he was not. (I assure you he was.)

When you ask these bots about things that actually matter they mess up, too. Meta’s 2022 launch of Galactica was so bad the company took the AI down after three days. Earlier this year, ChatGPT had a spell and started spouting absolute nonsense, but it also regularly makes up case law, leading to multiple lawyers getting into hot water with the courts.

The AI keeps screwing up because these computers are stupid. Extraordinary in their abilities and astonishing in their dimwittedness. I cannot get excited about the next turn in the AI revolution because that turn is into a place where computers cannot consistently maintain accuracy about even minor things."

https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #SocialSciences #Humanities: "With Senate Majority Leader Chuck Schumer releasing a sweeping “roadmap” for AI legislation today and major product announcements from OpenAI and Google, it’s been a big week for AI… and it’s only Wednesday.

But amid the ever-quickening pace of action, some observers wonder if government is looking at the tech industry with the right perspective. A report shared first with DFD from the nonprofit Data & Society argues that in order for powerful AI to integrate successfully with humanity, it must actually feature… the humanities.

Data & Society’s Serena Oduro and Tamara Kneese write that social scientists and other researchers should be directly involved in federally funded efforts to regulate and analyze AI. They say that given the unpredictable impact it might have on how people live, work and interact with institutions, AI development should involve non-STEM experts at every step.

“Especially with a general purpose technology, it is very hard to anticipate what exactly this technology will be used for,” said Kneese, a Data & Society senior researcher."

https://www.politico.com/newsletters/digital-future-daily/2024/05/15/ai-data-society-report-humanities-00158195

remixtures,
@remixtures@tldr.nettime.org avatar

"This policy brief explores the importance of integrating humanities and social science expertise into AI governance, and outlines some of the ways that doing so can help us to assess the performance and mitigate the harms of AI systems. It concludes with a set of recommendations for incorporating humanities and social science methods and expertise into government efforts, including in hiring and procurement processes." https://datasociety.net/library/ai-governance-needs-sociotechnical-expertise/

remixtures,
@remixtures@tldr.nettime.org avatar

"AI is deeply intertwined with social systems, organizations, institutions, and culture. Sociotechnical approaches to AI system development and deployment are important to contend with the socially-embedded nature of AI to ensure that these systems are safe and effective and that their risks have been appropriately managed. People with expertise in sociology, anthropology, political science, law, economics, and psychology already exist in a wide range of technical and non-technical roles in AI companies but tend to be underused in AI system development efforts. Instead, they are often relegated to siloed roles in AI ethics or governance, compliance, or pre-deployment user interface testing where they have limited input to early design and prototyping, with limited authority to substantively modify product roadmaps."

https://cdt.org/insights/applying-sociotechnical-approaches-to-ai-governance-in-practice/
