poppastring, to ai
@poppastring@dotnet.social avatar

"Black British anti-knife crime activist Shaun Thompson, 38, has launched a legal challenge against the Metropolitan Police. The police detained the 38-year-old after live facial recognition technology wrongly identified him as a suspect."

The tragic irony is that these models are trained on datasets that do not represent the entire population, and then, of course, get weaponized against marginalized communities.
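A minimal sketch of the mechanism, with entirely hypothetical numbers (not from the article): when a group is underrepresented in training, a matcher's embeddings for that group are noisier, so at the same decision threshold more innocent people from that group get flagged as "matches."

    # Toy face-matcher simulation (hypothetical numbers, not from the article).
    # Noisier embeddings for an underrepresented group push more non-match
    # scores over the same threshold, i.e. more false positives.
    import numpy as np

    rng = np.random.default_rng(0)

    def false_positive_rate(noise_scale, n=100_000, threshold=0.7):
        # Simulated similarity scores between *different* people.
        scores = rng.normal(loc=0.3, scale=noise_scale, size=n)
        return float((scores > threshold).mean())

    print(f"well-represented group FPR:  {false_positive_rate(0.12):.4%}")
    print(f"underrepresented group FPR:  {false_positive_rate(0.20):.4%}")

Same threshold, roughly fifty times more false matches for the group the training data covered poorly; in a policing deployment, each of those false matches is a real person detained.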

https://peopleofcolorintech.com/articles/anti-knife-crime-activist-brings-legal-challenge-to-police-after-false-facial-recognition-arrest/

#ai #aiethics #blackmastodon #BlackFedi

poppastring, to ai
@poppastring@dotnet.social avatar

The Zoom team has either run out of ideas or is simply trolling people like me who cannot let an idea as strange as this go unanswered. đŸ€Ș

#ai #clones #aiethics

https://www.theverge.com/2024/6/3/24168733/zoom-ceo-ai-clones-digital-twins-videoconferencing-decoder-interview

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

I feel dizzy, sick, and bored by the AI discourse.

We keep hearing the same bullshit.

We keep seeing new variations of the same flawed products.

We keep reading papers that state the obvious.

We keep pushing back against the nonsense.

We keep seeing people cheering for the same nonsense.

We keep being pushed to embrace that nonsense.

đŸ€•

#AI #aiethics #GenAI #tech

codingconduct, to random
@codingconduct@hci.social avatar

From afar, congratulations 🎆 đŸ€© to the amazing newly-minted Dr @kaeru for defending his thesis on contestable AI. If you work in any way on related topics, you owe it to yourself to read it:

https://contestable.ai/

jonippolito, to journalism
@jonippolito@digipres.club avatar

What jobs are we preparing students for by boosting their writing productivity with AI? After shedding 40% of its workforce, the gaming site Gamurs posted an ad last June for an editor to write 250 articles per week. That’s a new article every 10 minutes, at $4.25 per article.
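The arithmetic, for anyone who wants to check it (assuming a standard 40-hour week, which the ad itself doesn't spell out):

    # Sanity-check on the Gamurs ad's numbers (40-hour week assumed).
    articles = 250              # per week
    minutes = 40 * 60           # working minutes per week
    rate = 4.25                 # dollars per article

    print(f"{minutes / articles:.1f} minutes per article")   # 9.6
    print(f"${articles * rate:,.2f} per week")               # $1,062.50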

As @novomancy has noted, AI is only the accomplice here. This clickbait nightmare is the logical conclusion of the ad-supported web.

https://www.sciencetimes.com/articles/44308/20230614/gaming-media-company-looks-hire-ai-editor-write-250-articles.htm

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Technology is built by humans and controlled by humans, and we cannot talk about technology as an independent agent acting outside of human decisions and accountability–this is true for AI as much as anything else. The integrity that Mann rightly envisions for AI cannot be understood as a property of a model, or of a software system into which a model is integrated. Such integrity can only come via the human choices made, and guardrails adhered to, by those developing and using these systems. This will require changed incentive structures, a massive shift toward democratic governance and decision making, and an understanding that those most likely to be harmed by AI systems are often not ‘users’ of the systems, but subjects of AI’s application ‘on them’ by those who have power over them–from employers, to governments to law enforcement. To truly ensure that AI systems are deployed in ways that have integrity, and uphold a dignified and equitable social order, those subject to AI’s use by powerful actors must have the information, power, and ability to determine what AI systems with ‘integrity’ mean, and the ability to reject or contest their use."

https://theinnovator.news/interview-of-the-week-meredith-whittaker-ai-ethics-expert/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

remixtures,
@remixtures@tldr.nettime.org avatar

"Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other colead. The group’s work will be absorbed into OpenAI’s other research efforts.

Sutskever’s departure made headlines because although he’d helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board." https://www.wired.com/story/openai-superalignment-team-disbanded/

ErrantCanadian, to philosophy
@ErrantCanadian@zirk.us avatar

Now that it's been accepted to ACM FAccT'24, I've updated the preprint of my paper on why artists are right that AI art is a kind of theft. I hope this promotes more serious thought about the visions of generative AI developers and the impacts of these technologies.

https://philpapers.org/rec/GOEAAI-2

@philosophy @facct #AIEthics #philosophy #sts

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

"What do you mean by progress when you talk about AI?" and progress for whom?

I asked the techno-optimist guy at an AI Hype Manel!

  • Does progress mean getting bigger or better models?

  • What about the impact on environment, water resources, destruction of communities, mining raw materials in Africa?

At first he didn't get my Q. Then he said he believed in the "utilitarian view" & that developing intelligence is very important.

Just parroting the AI hype people!

OmaymaS,
@OmaymaS@dair-community.social avatar

@albertcardona Yeah! People keep hyping up things in different ways!

But even if the performance were great and the use cases justified, I keep wondering how many people care about the harm inflicted on other people or the environment to achieve the "progress" they cheer for!

albertcardona,
@albertcardona@mathstodon.xyz avatar

@OmaymaS

Indeed. And on that note, any gains in the efficiency of ANN implementations or GPU tech are squandered by ever-growing neural networks. The current scale is horrendous; the electricity and water usage is outlandish. One wonders: if the choice were between clean water and "AI", what would the fanboys choose? At the moment they choose "AI" for themselves and water shortages for others.
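The rebound effect is easy to put numbers on (illustrative magnitudes only, not measured figures): a 2x efficiency gain is erased many times over if compute scales 10x.

    # Back-of-the-envelope rebound arithmetic (illustrative numbers only).
    efficiency_gain = 2.0   # hardware/implementation gets 2x more efficient
    compute_growth = 10.0   # model scale / training compute grows 10x

    print(f"net resource use: {compute_growth / efficiency_gain:.0f}x baseline")  # 5x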

SteveThompson, to ai
@SteveThompson@mastodon.social avatar

Disturbing in so many ways.

"AI Outperforms Humans in Moral Judgments"

https://neurosciencenews.com/ai-llm-morality-26041/

"In the study, participants rated responses from AI and humans without knowing the source, and overwhelmingly favored the AI’s responses in terms of virtuousness, intelligence, and trustworthiness.

This modified moral Turing test, inspired by ChatGPT and similar technologies, indicates that AI might convincingly pass a moral Turing test by exhibiting complex moral reasoning."

jeffc,
@jeffc@mastodon.online avatar

@SteveThompson
AI trained on widely available data is going to do a good job of emulating the moral decisions in that data.

If the participants rating it are ordinary folks rather than, say, ethicists, a high rating likely means it's good at emulating the responses they like, not necessarily that its responses are more moral, ethical, or just.

In other words, it could be that they like the training data, which might or might not be biased, and it's good at reproducing the training data.
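A crude simulation of that confound (an entirely made-up setup, not the study's method): score each answer by the share of raters who agree with it, and a model that imitates the majority view outscores a dissenting expert regardless of who is actually right.

    # Hypothetical simulation: ratings measure agreement with raters'
    # own leanings, not moral correctness.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000
    # Per question: fraction of ordinary raters preferring answer "A".
    rater_pref = rng.beta(8, 2, size=n)        # raters mostly favor A

    ai_says_A = rater_pref > 0.5               # "AI" imitates the majority
    expert_says_A = rng.random(n) < 0.5        # expert diverges half the time

    def mean_rating(says_A):
        # Rating = share of raters who agree with the given answer.
        return np.where(says_A, rater_pref, 1 - rater_pref).mean()

    print(f"majority-imitating AI: {mean_rating(ai_says_A):.2f}")     # ~0.80
    print(f"dissenting expert:     {mean_rating(expert_says_A):.2f}") # ~0.50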

cyberlyra, to random
@cyberlyra@hachyderm.io avatar

I am really very excited about this open access special issue on AI, power and domination, with papers from friends and colleagues, that says the quiet part loud.

https://firstmonday.org/ojs/index.php/fm

SteveThompson, to ai
@SteveThompson@mastodon.social avatar

There you have it. AI scofflaws.

"Former Amazon exec alleges she was told to ignore the law while developing an AI model — 'everyone else is doing it'"

https://www.businessinsider.com/ex-amazon-ghaderi-exec-suing-ai-race-copyright-allegations-2024

"A former Amazon exec alleges that the company instructed her to ignore copyright rules to stay afloat in the race for AI innovation."

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "In predictive optimisation systems, machine learning is used to predict future outcomes of interest about individuals, and these predictions are used to make decisions about them. Despite being based on pseudoscience (on the belief that the future of the individual is already written and, therefore, readable), not working and unfixably harmful, predictive optimisation systems are still used by private companies and by governments. As they are based on the assimilation of people to things, predictive optimisation systems have inherent political properties that cannot be altered by any technical design choice: the initial choice about whether or not to adopt them is therefore decisive, as Langdon Winner wrote about inherently political technologies.

The adoption of predictive optimisation systems is incompatible with liberalism and the rule of law because it results in people not being recognised as self-determining subjects, not being equal before the law, not being able to predict which law will be applied to them, all being under surveillance as 'suspects' and being able or unable to exercise their rights in ways that depend not on their status as citizens, but on their contingent economic, social, emotional, health or religious status. Under the rule of law, these systems should simply be banned.

Requiring only a risk impact assessment – as in the European Artificial Intelligence Act – is like being satisfied with asking whether a despot is benevolent or malevolent: freedom, understood as the absence of domination, is lost whatever the answer. Under the AI Act's harm approach to fundamental rights impact assessments (perhaps a result of the "lobbying ghost in the machine of regulation"), fundamental rights can be violated with impunity as long as there is no foreseeable harm."

https://zenodo.org/records/10866778

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #AIEthics #ResponsibleAI #Hype: "We have been here before. Other overhyped new technologies have been accompanied by parables of doom. In 2000, Bill Joy warned in a Wired cover article that “the future doesn’t need us” and that nanotechnology would inevitably lead to “knowledge-enabled mass destruction”. John Seely Brown and Paul Duguid’s criticism at the time was that “Joy can see the juggernaut clearly. What he can’t see—which is precisely what makes his vision so scary—are any controls.” Existential risks tell us more about their purveyors’ lack of faith in human institutions than about the actual hazards we face. As Divya Siddarth explained to me, a belief that “the technology is smart, people are terrible, and no one’s going to save us” will tend towards catastrophizing.

Geoffrey Hinton is hopeful that, at a time of political polarization, existential risks offer a way of building consensus. He told me, “It’s something we should be able to collaborate on because we all have the same payoff”. But it is a counsel of despair. Real policy collaboration is impossible if a technology and its problems are imagined in ways that disempower policymakers. The risk is that, if we build regulations around a future fantasy, we lose sight of where the real power lies and give up on the hard work of governing the technology in front of us."

https://www.science.org/doi/10.1126/science.adp1175
