Risk assessment tools are supposed to improve our chances. One study shows:
➤ Risk assessment reduces the likelihood of incarceration for relatively affluent defendants,
➤ Risk assessment increases the likelihood of incarceration for relatively poor defendants.
@eric @juliemoynat @craigabbott That seems to come from standard VPAT conventions, which tie this partial/"with exceptions" language to an evaluation of support across a whole product. Of course pass/fail is binary, but in an ACR you then give a statement of how much of the site/app is broken ... #a11y #accessibility [edit: edited for tone, which was unnecessarily confrontational]
"Là où on pense qu’il n’y a que machines et procédures automatiques, il y a en fait des humains qui chopent des tendinites et doivent faire des choix compliqués.[…] Là où on pense que les catégories sont claires, qu’il y a des données et des résultats, on s’aperçoit que les données ne sont pas données, qu’il faut les construire et que dans cette construction, entrent de la subjectivité, de la morale et même de la politique." https://www.radiofrance.fr/franceinter/podcasts/le-code-a-change/le-code-a-change-6-5342040@dataGovernance
"One ‘moral technology’ plays a central role in contemporary Western wars: ‘rules of engagement’. These rules take the form of (written) texts which state the circumstances under which the soldiers/airmen are authorized to open fire. Their claim to ‘morality’ stems from the fact that they present themselves as invitations to ‘master’ violence. [They] provide a concrete and operational translation of the ‘proportionality’ principle by stating how many ‘non-combatants’ airmen are allowed to kill"…
#AI systems accelerate the pace of war. "AI can contribute to mis- or disinformation, creating and amplifying dangerous misunderstandings in times of war. AI systems may increase the human tendency to trust suggestions from machines (this is highlighted by the Habsora system, named after the infallible word of God)".
"If a person has enough similarities to other people labelled as an enemy combatant, they too may be labelled a combatant themselves."
By design, #generativeAI delivers whatever. How it works: the engineer takes whatever #data can be obtained, derives some probabilities from it, and the software then outputs answers at scale. Whether those answers could be "true" is out of scope.
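A toy sketch of that pipeline (my own illustration, not any real system): a tiny bigram model counts transition probabilities from whatever text it is given, then samples fluent-looking output. Note that nothing anywhere in this loop checks whether the output is true.

```python
import random
from collections import defaultdict, Counter

# Hypothetical toy corpus; in practice this would be whatever data was obtained.
corpus = "the model predicts the next word the model outputs text".split()

# "Grab some probabilities": count which word tends to follow which.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Sample a word sequence from the learned transition probabilities."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        counts = transitions.get(out[-1])
        if not counts:  # dead end: no observed successor
            break
        words = list(counts)
        weights = [counts[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

The output is statistically plausible given the corpus, and that is the only property the machinery guarantees.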
"It is important to view this chapter as a starting point for discussion about what kind of society Creative AI techniques will be creating, and, more importantly, what kind of society Creative AI practitioners want to create through their artistic practice and use of #AI tools."
I TAed for "Ethics and Policy Issues In Computing", which is on her list.
I would also read some research articles. My favorite one to start with would be "Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics", Metcalf et al., which paints "practicing" ethics in tech companies as a fraught endeavor.