"Black British anti-knife crime activist Shaun Thompson, 38, has launched a legal challenge against the Metropolitan Police. The police detained the 38-year-old after live facial recognition technology wrongly identified him as a suspect."
The tragic irony is that these models get trained on datasets that do not represent the entire population, and are then, of course, weaponized against marginalized communities.
What jobs are we preparing students for by boosting their writing productivity with AI? After shedding 40% of its workforce, the gaming site Gamurs posted an ad last June for an editor to write 250 articles per week. That's a new article every 10 minutes, at $4.25 per article.
As @novomancy has noted, AI is only the accomplice here. This clickbait nightmare is the logical conclusion of the ad-supported web.
#AI #AIEthics: "Technology is built by humans and controlled by humans, and we cannot talk about technology as an independent agent acting outside of human decisions and accountability; this is true for AI as much as anything else. The integrity that Mann rightly envisions for AI cannot be understood as a property of a model, or of a software system into which a model is integrated. Such integrity can only come via the human choices made, and guardrails adhered to, by those developing and using these systems. This will require changed incentive structures, a massive shift toward democratic governance and decision making, and an understanding that those most likely to be harmed by AI systems are often not 'users' of the systems, but subjects of AI's application 'on them' by those who have power over them: from employers, to governments, to law enforcement. To truly ensure that AI systems are deployed in ways that have integrity, and uphold a dignified and equitable social order, those subject to AI's use by powerful actors must have the information, power, and ability to determine what AI systems with 'integrity' mean, and the ability to reject or contest their use."
#AI #GenerativeAI #OpenAI #AISafety #AIEthics: "For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.
Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company's superalignment team, the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.
They're not the only ones who've left. Since last November, when OpenAI's board tried to fire CEO Sam Altman only to see him quickly claw his way back to power, at least five more of the company's most safety-conscious employees have either quit or been pushed out."
"Now OpenAIâs âsuperalignment teamâ is no more, the company confirms. That comes after the departures of several researchers involved, Tuesdayâs news that Sutskever was leaving the company, and the resignation of the teamâs other colead. The groupâs work will be absorbed into OpenAIâs other research efforts.
Sutskever's departure made headlines because although he'd helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board." https://www.wired.com/story/openai-superalignment-team-disbanded/
Now that it's been accepted to ACM FAccT'24, I've updated the preprint of my paper on why artists are right that AI art is a kind of theft. I hope this promotes more serious thought about the visions of generative AI developers and the impacts of these technologies.
@albertcardona Yeah! People keep hyping up things in different ways!
But even if the performance were great and the use cases justified, I keep wondering how many people care about the harm inflicted on other people or the environment to achieve the "progress" they cheer for!
Indeed. And on that, any efficiency gains in ANN implementations or GPU tech are squandered by ever-growing neural networks. The scale currently is horrendous; the electricity and water usage is outlandish. One wonders: if the choice were between clean water and "AI", what would the fanboys choose then? At the moment they choose "AI" for themselves and water shortages for others.
"In the study, participants rated responses from AI and humans without knowing the source, and overwhelmingly favored the AIâs responses in terms of virtuousness, intelligence, and trustworthiness.
This modified moral Turing test, inspired by ChatGPT and similar technologies, indicates that AI might convincingly pass a moral Turing test by exhibiting complex moral reasoning."
@SteveThompson
AI trained on widely available data is going to do a good job of emulating the moral decisions in that data.
If the participants rating it are ordinary folks rather than, say, ethicists, a high rating likely means it's good at emulating the responses they like, not necessarily that its responses are more moral, ethical, or just.
In other words, it could be that they like the training data, which might or might not be biased, and it's good at reproducing the training data.
I am really excited about this open access special issue on AI, power and domination, with papers from friends and colleagues, that says the quiet part out loud. #aiethics
#AI #PredictiveAlgorithms #PredictiveOptimization #AIEthics: "In predictive optimisation systems, machine learning is used to predict future outcomes of interest about individuals, and these predictions are used to make decisions about them. Despite being based on pseudoscience (on the belief that the future of the individual is already written and, therefore, readable), not working and unfixably harmful, predictive optimisation systems are still used by private companies and by governments. As they are based on the assimilation of people to things, predictive optimisation systems have inherent political properties that cannot be altered by any technical design choice: the initial choice about whether or not to adopt them is therefore decisive, as Langdon Winner wrote about inherently political technologies.
The adoption of predictive optimisation systems is incompatible with liberalism and the rule of law because it results in people not being recognised as self-determining subjects, not being equal before the law, not being able to predict which law will be applied to them, all being under surveillance as 'suspects' and being able or unable to exercise their rights in ways that depend not on their status as citizens, but on their contingent economic, social, emotional, health or religious status. Under the rule of law, these systems should simply be banned.
Requiring only a risk impact assessment (as in the European Artificial Intelligence Act) is like being satisfied with asking whether a despot is benevolent or malevolent: freedom, understood as the absence of domination, is lost whatever the answer. Under the AI Act's harm approach to fundamental rights impact assessments (perhaps a result of the 'lobbying ghost in the machine of regulation'), fundamental rights can be violated with impunity as long as there is no foreseeable harm."
#AI #GenerativeAI #AIEthics #ResponsibleAI #Hype: "We have been here before. Other overhyped new technologies have been accompanied by parables of doom. In 2000, Bill Joy warned in a Wired cover article that 'the future doesn't need us' and that nanotechnology would inevitably lead to 'knowledge-enabled mass destruction'. John Seely Brown and Paul Duguid's criticism at the time was that 'Joy can see the juggernaut clearly. What he can't see, which is precisely what makes his vision so scary, are any controls.' Existential risks tell us more about their purveyors' lack of faith in human institutions than about the actual hazards we face. As Divya Siddarth explained to me, a belief that 'the technology is smart, people are terrible, and no one's going to save us' will tend towards catastrophizing.
Geoffrey Hinton is hopeful that, at a time of political polarization, existential risks offer a way of building consensus. He told me, 'It's something we should be able to collaborate on because we all have the same payoff'. But it is a counsel of despair. Real policy collaboration is impossible if a technology and its problems are imagined in ways that disempower policymakers. The risk is that, if we build regulations around a future fantasy, we lose sight of where the real power lies and give up on the hard work of governing the technology in front of us."