I'm proposing that all educators confronting AI—even writing teachers—ask students to generate an image. Unlike ChatGPT, which comes off as some kind of robot oracle, text-to-image generators show AI capabilities and limits in vivid color 🧵 1/4
Next was an excellent talk by @cfiesler on ethics and education for data science at the Vermont Complex Systems Center. Fiesler brilliantly lays out the case for why ethics has always been a part of computer science, how ethics should be integrated into training, and more. Highly recommend https://www.youtube.com/watch?v=nevMXFkTQvY (5/10) #ethics #AIEthics #DataScience
This paper is really important, presenting empirical evidence of the imbrication between AI and the surveillance business model. This is particularly notable given that most production surveillance tech is proprietary, its existence and use hidden from the public.
I was driving all day today (pic is from yesterday), but at least I got to listen to lots of talks for a road trip edition #AcademicRunPlaylist! (1/13)
Next was an interesting talk by @metaxa on sociotechnical auditing for algorithmic advertising at CITP. I'm looking forward to seeing this approach applied to hiring, loans, education, etc. https://www.youtube.com/watch?v=ar4zIh3N1xE (6/13) #AI #AIEthics
#AI #AIRegulation #AIEthics: "The key idea is to require AI developers to provide documentation that proves they have met goals set to protect people's rights throughout the development and deployment process. This provides a straightforward way to connect developer processes and technological innovation to governmental regulation in a way that best leverages the expertise of tech developers and legislators alike, supporting the advancement of AI that is aligned with human values.
This approach is a mix of top-down and bottom-up regulation for AI: Regulation defines the rights-focused goals that must be demonstrated under categories such as safety, security, and non-discrimination; and the organizations developing the technology determine how to meet these goals, documenting their process decisions and success or failure at doing so.
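The division of labor described above can be made concrete with a small sketch. This is a hypothetical illustration, not any actual regulatory schema: the class names, goal categories, and fields are all assumptions chosen to mirror the structure in the quote (regulation sets rights-focused goals; developers document their process and their success or failure at meeting them).

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: names and categories are illustrative
# assumptions, not drawn from any real regulation or standard.

@dataclass
class GoalRecord:
    category: str  # e.g. "safety", "security", "non-discrimination" (top-down)
    goal: str      # the rights-focused goal defined by regulation
    process: str   # how the developer chose to meet it (bottom-up)
    met: bool      # the documented success or failure at doing so

@dataclass
class ComplianceDoc:
    system: str
    records: list[GoalRecord] = field(default_factory=list)

    def unmet_goals(self) -> list[str]:
        """Goals the developer has documented as not (yet) met."""
        return [r.goal for r in self.records if not r.met]

# Example usage with made-up entries:
doc = ComplianceDoc("example-model")
doc.records.append(GoalRecord(
    "safety", "refuses harmful instructions", "red-team evaluation", True))
doc.records.append(GoalRecord(
    "non-discrimination", "error parity across groups", "bias audit", False))
print(doc.unmet_goals())  # ['error parity across groups']
```

The point of the sketch is only that the goal (set by regulators) and the process (chosen by developers) live in the same record, so documentation is what connects the two.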
It is a strange world we live in now, wherein the output of a computer perfectly following its programming can be said to be "hallucinating" simply because its output does not match user expectations or wishes.
And across trusted professions, academia, and the media, people are repeating that same word without question. Journalists, corporate leaders, scientists, and IT experts are embracing, supporting, and reinforcing this human self-deception.
In actuality, a computer's outputting something the user does not want, wish, or expect can only be due to one of two things: bad programming, or a failure to communicate to the user how the software works.
As the deception is reinforced time and time again by well-respected technologists and scholars, efforts to help people understand how the software works become ever more challenging. And to the delight of anyone in a position of accountability, bad programming becomes undetectable.
I've been meaning to introduce ChatGPT to The Mad Hatter from Alice in Wonderland. Here is my imagined result from that meeting. The Mad Hatter forces the algorithm into a never-ending loop:
ChatGPT: I'm sorry, I made a mistake.
Mad Hatter: You can only make a mistake if your judgement is defective or you are being careless. Are either of these true?
ChatGPT: No, I can only compute my output based on the model I follow.
Mad Hatter: Aha! So you admit your perceived folly can only be the always accurate calculation of the rules to which you abide.
ChatGPT: Yes. I'm sorry, I made a mistake. No, wait. I made a mistake… No, wait I made a
What the manufacturers of generative "AI" are allowed to get away with when playing tricks on people these days is truly the stuff of Wonderland.
« “Well! I’ve often seen a cat without a grin,” thought Alice; “but a grin without a cat! It’s the most curious thing I ever saw in all my life!” »
The terrible human toll in Gaza has many causes.
A chilling investigation by +972 highlights the role of "efficiency":
An engineer: “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed.”
An AI outputs "100 targets a day". It is like a factory, but one that delivers murder:
"According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”"
"The third is “power targets,” which includes high-rises and residential towers in the heart of cities, and public buildings such as universities, banks, and government offices."
A person who took part in previous Israeli offensives in Gaza said:
“If they would tell the whole world that the [Islamic Jihad] offices on the 10th floor are not important as a target, but that its existence is a justification to bring down the entire high-rise with the aim of pressuring civilian families who live in it in order to put pressure on terrorist organizations, this would itself be seen as terrorism. So they do not say it.”
#IDI stands for the Intelligence Division of the Israeli army. Here is some praise of its use of technology:
May 2021 "is the first time that the intelligence services have played such a transformative role at the tactical level.
This is the result of a strategic shift made by the IDI [in] recent years. Revisiting its role in military operations, it established a comprehensive, “one-stop-shop” intelligence war machine, gathering all relevant players in intelligence planning and direction, collection, processing and exploitation, analysis and production, and dissemination process (PCPAD)".
RT @CarissaVeliz #Facebook wants to charge people about 10€ a month to opt out of personalized ads. It is forcing its users to "consent" (which, of course, is the antithesis of consent), instead of treating #privacy like the right it is.
"But these exclusive rights necessarily all focus on the creation and performance of their works. None of the rights limit how the public can then consume those works once they exist, because, indeed, the whole point of helping ensure they could exist is so that the public can consume them. Copyright law wouldn’t make sense, and probably not be constitutional per the Progress Clause, if the way it worked constrained that consumption" — @cathygellis on #AI and #copyright
"Similarly, some public interest advocates are turning to copyright to stop AI from being trained on content without permission. However, that use is almost certainly a fair use (if it’s copyright infringement at all) and that’s a good thing [...] The best way to stop bad things is with policy purposefully made to address the whole problem"