OmaymaS, to ai
@OmaymaS@dair-community.social avatar

I feel dizzy, sick, and bored with the AI discourse.

We keep hearing the same bullshit.

We keep seeing new variations of the same flawed products.

We keep reading papers that state the obvious.

We keep pushing back the nonsense.

We keep seeing people cheering for the same nonsense.

We keep being pushed to embrace that nonsense.

🤕

jonippolito, to journalism
@jonippolito@digipres.club avatar

What jobs are we preparing students for by boosting their writing productivity with AI? After shedding 40% of its workforce, the gaming site Gamurs posted an ad last June for an editor to write 250 articles per week. That’s a new article every 10 minutes, at $4.25 per article.

As @novomancy has noted, AI is only the accomplice here. This clickbait nightmare is the logical conclusion of the ad-supported web.

https://www.sciencetimes.com/articles/44308/20230614/gaming-media-company-looks-hire-ai-editor-write-250-articles.htm

#Journalism #Writing #AIethics #AIEdu #AIinEducation #Gaming

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"Technology is built by humans and controlled by humans, and we cannot talk about technology as an independent agent acting outside of human decisions and accountability–this is true for AI as much as anything else. The integrity that Mann rightly envisions for AI cannot be understood as a property of a model, or of a software system into which a model is integrated. Such integrity can only come via the human choices made, and guardrails adhered to, by those developing and using these systems. This will require changed incentive structures, a massive shift toward democratic governance and decision making, and an understanding that those most likely to be harmed by AI systems are often not ‘users’ of the systems, but subjects of AI’s application ‘on them’ by those who have power over them–from employers, to governments to law enforcement. To truly ensure that AI systems are deployed in ways that have integrity, and uphold a dignified and equitable social order, those subject to AI’s use by powerful actors must have the information, power, and ability to determine what AI systems with ‘integrity’ mean, and the ability to reject or contest their use."

https://theinnovator.news/interview-of-the-week-meredith-whittaker-ai-ethics-expert/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #OpenAI #AISafety #AIEthics: "For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

ErrantCanadian, to philosophy
@ErrantCanadian@zirk.us avatar

Now that it's been accepted to ACM FAccT'24, I've updated the preprint of my paper on why artists are right that AI art is a kind of theft. I hope this promotes more serious thought about the visions of generative AI developers and the impacts of these technologies.

https://philpapers.org/rec/GOEAAI-2

@philosophy @facct

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

"What do you mean by progress when you talk about AI?" and progress for whom?

I asked the techno-optimist guy at an AI Hype Manel!

  • Does progress mean getting bigger or better models?

  • What about the impact on environment, water resources, destruction of communities, mining raw materials in Africa?

At first he didn't get my question. Then he said he believed in the "utilitarian view" & that developing intelligence is very important.

Just parroting the AI hype people!

#AI #AIethics #GenAI #AIhype

SteveThompson, to ai
@SteveThompson@mastodon.social avatar

Disturbing in so many ways.

"AI Outperforms Humans in Moral Judgments"

https://neurosciencenews.com/ai-llm-morality-26041/

"In the study, participants rated responses from AI and humans without knowing the source, and overwhelmingly favored the AI’s responses in terms of virtuousness, intelligence, and trustworthiness.

This modified moral Turing test, inspired by ChatGPT and similar technologies, indicates that AI might convincingly pass a moral Turing test by exhibiting complex moral reasoning."

SteveThompson, to ai
@SteveThompson@mastodon.social avatar

There you have it. AI scofflaws.

"Former Amazon exec alleges she was told to ignore the law while developing an AI model — 'everyone else is doing it'"

https://www.businessinsider.com/ex-amazon-ghaderi-exec-suing-ai-race-copyright-allegations-2024

"A former Amazon exec alleges that the company instructed her to ignore copyright rules to stay afloat in the race for AI innovation."

#AI #AIethics #AIpolicy #AIregs #Amazon

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #AIEthics #ResponsibleAI #Hype: "We have been here before. Other overhyped new technologies have been accompanied by parables of doom. In 2000, Bill Joy warned in a Wired cover article that “the future doesn’t need us” and that nanotechnology would inevitably lead to “knowledge-enabled mass destruction”. John Seely Brown and Paul Duguid’s criticism at the time was that “Joy can see the juggernaut clearly. What he can’t see—which is precisely what makes his vision so scary—are any controls.” Existential risks tell us more about their purveyors’ lack of faith in human institutions than about the actual hazards we face. As Divya Siddarth explained to me, a belief that “the technology is smart, people are terrible, and no one’s going to save us” will tend towards catastrophizing.

Geoffrey Hinton is hopeful that, at a time of political polarization, existential risks offer a way of building consensus. He told me, “It’s something we should be able to collaborate on because we all have the same payoff”. But it is a counsel of despair. Real policy collaboration is impossible if a technology and its problems are imagined in ways that disempower policymakers. The risk is that, if we build regulations around a future fantasy, we lose sight of where the real power lies and give up on the hard work of governing the technology in front of us."

https://www.science.org/doi/10.1126/science.adp1175

OmaymaS, to tech
@OmaymaS@dair-community.social avatar
  • "Business" is NOT neutral.
  • Tech is NOT apolitical.
  • Industry is NOT detached from the wider societal and political issues.

Executives & investors who promote opposite facts are either naïve or benefiting from isolating & silencing their employees.

jonippolito, to generativeAI
@jonippolito@digipres.club avatar

Google's Education VP wants us to believe AI is the classroom's new calculator, but this is a terrible analogy:

  1. We know how calculators produce their results.
  2. You can check a calculator's answer using pretty much the same algorithm it uses.
  3. Rare floating point errors aside, calculators do not invent false answers.
  4. Calculators are based on math principles; LLMs are based on no principles.

https://news.slashdot.org/story/24/04/06/0541216/ais-impact-on-cs-education-likened-to-calculators-impact-on-math-education #AIethics #AIEdu #AIinEducation #AIliteracy #GenerativeAI #GenAI #LLM

eric, to IsraelPalestine

#Lavender is traditionally used in France to reduce the moth population.

Only a small proportion of French Jews emigrate to #IsraelPalestine. These Binationals are subject to compulsory military service.

This army prepared and launched the first #AIWar in 2021: https://techhub.social/@estelle/111510965384428730

A development team has designed a more efficient product, which a Frenchman has suggested calling Lavender: https://techhub.social/@estelle/112220409975979758 @palestine

#humour #innovation #techBros #ethics #Gospel #tech #AIEthics #AI

ErrantCanadian, to philosophy
@ErrantCanadian@zirk.us avatar

Happy to share that my paper on why AI art is theft has been accepted to the 2024 ACM Conference on Fairness, Accountability, and Transparency! See you in Rio in June 😃

Preprint here (revisions soon):
philpapers.org/rec/GOEAAI-2
arxiv.org/abs/2401.06178

@facct @philosophy

jonippolito, to Cybersecurity
@jonippolito@digipres.club avatar

A cybersecurity researcher finds that 20% of software packages recommended by GPT-4 don't exist, so he registers one of the hallucinated names himself (a harmless package that 15,000 code bases soon depended on) to keep a hacker from publishing a malware version first.

Disaster averted in this case, but there aren't enough fingers to plug all the AI-generated holes 😬

https://it.slashdot.org/story/24/03/30/1744209/ai-hallucinated-a-dependency-so-a-cybersecurity-researcher-built-it-as-proof-of-concept-malware
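The practical defense against this kind of hallucinated-dependency attack is mundane: never install a package just because a model suggested it. A minimal sketch of an allowlist check, where the package names and the vetted set are hypothetical (not taken from the linked story):

```python
# Sketch: guard against AI-suggested dependencies that may not exist on the
# package index, where an attacker could register the name with malware.
# VETTED_PACKAGES stands in for a team's reviewed list or lockfile contents.
VETTED_PACKAGES = {"requests", "numpy", "flask"}

def audit_ai_suggestions(suggested):
    """Split AI-suggested dependency names into vetted and unvetted lists."""
    vetted = [name for name in suggested if name.lower() in VETTED_PACKAGES]
    unvetted = [name for name in suggested if name.lower() not in VETTED_PACKAGES]
    return vetted, unvetted

ok, suspicious = audit_ai_suggestions(["requests", "made-up-helper", "numpy"])
# Anything in `suspicious` should be verified against the index by hand
# before it goes anywhere near a requirements file.
```

In a real pipeline the allowlist check could be supplemented by querying the package index directly, but an offline vetted list is the simplest guardrail.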

underdarkGIS, to random
@underdarkGIS@fosstodon.org avatar

Excited about our upcoming @emeraldseu #Webinar: Navigating AI's Ethical Aspects

It's a big topic to tackle.

When? 28 March 11:00 CET
Where? https://emeralds-horizon.eu/events/emeralds-webinar-navigating-ais-ethical-aspects

#emeraldseu #aiethics #xai #mobilitydatascience #geoai

CorinnaBalkow, to random

"Our results suggest that between 6.5% and 16.9% of text submitted as peer reviews to these conferences could have been substantially modified by LLMs, i.e. beyond spell-checking or minor writing updates. The circumstances in which generated text occurs offer insight into user behavior: the estimated fraction of LLM-generated text is higher in reviews which report lower confidence, were submitted close to the deadline, and from reviewers who are less likely to respond to author rebuttals. We also observe corpus-level trends in generated text which may be too subtle to detect at the individual level, and discuss the implications of such trends on peer review. We call for future interdisciplinary work to examine how LLM use is changing our information and knowledge practices."

https://arxiv.org/abs/2403.07183

#AIEthics

axbom, to random
@axbom@axbom.me avatar

Generative AI can not generate its way out of prejudice

The concept of "generative" suggests that the tool can produce what it is asked to produce. In a study uncovering how stereotypical global health tropes are embedded in AI image generators, researchers found it challenging to generate images of Black doctors treating white children. They used Midjourney, a tool that after hundreds of attempts would not generate an output matching the prompt. I tried their experiment with Stable Diffusion's free web version and found it every bit as concerning as you might imagine.

https://axbom.com/generative-prejudice/

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

GitHub Copilot suggests real user names!

It seems that they not only include copyrighted data, but also keep user names in the training data (which is irrelevant to the task!)

What about private repos?

#AI #generativeAI #aiethics #AIhype #copilot

jonippolito, to generativeAI
@jonippolito@digipres.club avatar

AI companies to universities: Personalized tutors will make you obsolete

Also AI companies: Thanks for recording your lectures so we can sell them on the open market to train personalized tutors

https://annettevee.substack.com/p/when-student-data-is-the-new-oil

SuVergnolle, to ArtificialIntelligence French
@SuVergnolle@eupolicy.social avatar

You want to take a step back and reflect on the regulation of Artificial Intelligence? I have a report for you! 🚀

❓What's in it?
As we navigate the evolving landscape of AI, it is crucial to put democratic principles at the forefront. The report does just that and outlines key recommendations on four different topics: Design, Liability, Ethics, and Governance.

📃 Full report: https://informationdemocracy.org/2024/02/28/new-report-of-the-forum-more-than-200-policy-recommendations-to-ensure-democratic-control-of-ai/

jonippolito, to llm
@jonippolito@digipres.club avatar

"Aftermarket" fixes applied after training, like injecting diversity terms into prompts, don't fix the underlying model and can even exacerbate harmful fabrications. If the training set is biased—and the Internet is—it's really hard to correct that after the fact.

https://www.nytimes.com/2024/02/22/technology/google-gemini-german-uniforms.html

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

✍️ New Blog Post

On The Enshittification of Everything: Melting Down in The AI Summer!

https://www.onceupondata.com/post/2024-02-18-enshittification-of-everything/

#AI #GenAI #AIethics #AIhype

rocketdyke, to reddit
@rocketdyke@yellowmustard.club avatar

well, looks like I'll be going in and editing all of my old reddit posts and comments to be gibberish so I can poison a machine learning dataset.

https://9to5mac.com/2024/02/19/reddit-user-content-being-sold/

#reddit #ai #aiethics #poisonTheDataset
