#AI models were perplexed by a baby giraffe without spots. They're perplexed by me, too.
This article on #disability and #ableism within #GenerativeAI is more personal than I usually write. It would mean a lot to me if you read and shared it.
> "Just as GitHub was founded on Git, today we are re-founded on Copilot."
Look, I respect the heck out of the technical implementation of LLMs, but let's be honest: statistically they produce average code at best, and misunderstood or invalid code most often. They re-implement old bugs and obfuscate programmer intent, and anyone leaning on them for more than a pair-programming assist is making software harder for the rest of us.
After a year of hype, the reality is emerging. Cloud clients aren’t buying generative AI tools because they’re expensive, inaccurate, and of unclear value. Some analysts are already warning of a coming “trough of disillusionment.”
Hats off to the author; you don't see that kind of, uh, skillful rhetorical chicanery every day. Like "generative AI doesn't compete with artists because artists are not in the data market". 😬
This is an interesting read. There are some scary ideas behind the push for AI in everything, propagated by the nightmare crew that brought you most of the other terrible stuff to come out of Silicon Valley...
Since their arrival, generative AI models and their trainers have demonstrated their ability to download any online content for model training. For content owners and creators, few tools can prevent their content from being fed into a generative AI model against their will. Opt-out lists have been disregarded by model trainers in the past, and can be easily ignored with zero consequences. They are unverifiable and unenforceable, and those who violate opt-out lists and do-not-scrape directives cannot be identified with high confidence.
In an effort to address this power asymmetry, we have designed and implemented Nightshade, a tool that turns any image into a data sample that is unsuitable for model training. More precisely, Nightshade transforms images into "poison" samples, so that models trained on them without consent learn unpredictable behaviors that deviate from expected norms, e.g. a prompt asking for an image of a cow flying in space might instead produce an image of a handbag floating in space.
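For intuition only, here is a minimal sketch of the general idea behind poisoning-style perturbations: nudge an image's features, under a small pixel budget a human barely notices, toward the features of an unrelated target image, so a model training on it associates the wrong concept. This is not Nightshade's published algorithm; the CLIP backbone, the MSE feature loss, the `poison` helper, and the perturbation budget below are all illustrative assumptions.

```python
# Illustrative sketch of a feature-space poisoning perturbation.
# NOT Nightshade's actual method; model, loss, and budget are assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
for p in model.parameters():
    p.requires_grad_(False)  # we only optimize the perturbation, never the model

def poison(src_path: str, target_path: str, steps: int = 200, eps: float = 0.03) -> torch.Tensor:
    """Nudge the source image's features toward a target image's features under a small budget."""
    src = processor(images=Image.open(src_path), return_tensors="pt")["pixel_values"]
    tgt = processor(images=Image.open(target_path), return_tensors="pt")["pixel_values"]
    with torch.no_grad():
        tgt_feat = model.get_image_features(pixel_values=tgt)

    delta = torch.zeros_like(src, requires_grad=True)  # the perturbation being optimized
    opt = torch.optim.Adam([delta], lr=0.01)
    for _ in range(steps):
        feat = model.get_image_features(pixel_values=src + delta)
        loss = torch.nn.functional.mse_loss(feat, tgt_feat)  # pull features toward the target
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # crude visibility budget, in normalized pixel space
    return (src + delta).detach()  # un-normalize before saving back out as an image file
```

The real tool reportedly does much more (prompt-specific targeting and robustness to common image transformations, per its authors), but the sketch shows why a poisoned image can look nearly unchanged to a person while reading very differently to a model.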
Generative AI bias can be substantially worse than in society at large. One example: “Women made up a tiny fraction of the images generated for the keyword ‘judge’ — about 3% — when in reality 34% of US judges are women. ... In the Stable Diffusion results, women were not only underrepresented in high-paying occupations, they were also overrepresented in low-paying ones.” #AI #GenAI #GenerativeAI #LLM #LLMs https://www.bloomberg.com/graphics/2023-generative-ai-bias/
This is why you don't use ChatGPT for research. ESPECIALLY if you're inputting qualitative data from participants without having obtained their informed consent. Their data ends up on strangers' screens!!!
"Ars reader reports ChatGPT is sending him conversations from unrelated AI users
Names of unpublished research papers, presentations, and PHP scripts also leaked."
This is a beautifully-written, haunting, ambiguous and resonant exploration of one of the founding fathers of #AI - Joseph Weizenbaum - and the demons that drove his work in early #ConversationalAI with #Eliza. As Weizenbaum rightly asserts, our context, our history, our experience, shapes our relationship with, and toward, technology.
"Yet, as Eliza illustrated, it was surprisingly easy to trick people into feeling that a computer did know them – and into seeing that computer as human. Even in his original 1966 article, Weizenbaum had worried about the consequences of this phenomenon, warning that it might lead people to regard computers as possessing powers of “judgment” that are “deserving of credibility”. “A certain danger lurks there,” he wrote."
A certain danger lurks there. As applicable now in an age of #GenerativeAI as it was in the 1960s.
Thank you @bentarnoff for such an incisive piece. h/t to @CriticalAI for bringing it to my attention.
"If #OpenAI is found to have violated any #copyrights in this process, #FederalLaw allows for the infringing articles to be destroyed at the end of the case.
In other words, if a federal judge finds that OpenAI illegally copied The Times' articles to train its #AI model, the court could order the company to destroy #ChatGPT's dataset." #AILaw #GenerativeAI
Much as I dislike the theft of human labor that feeds many of the #generativeAI products we see today, I have to agree with @pluralistic that #copyright law is the wrong way to address the problem.
To frame the issue concretely: think of whom copyright law has benefited in the past, and then explain how it would benefit the individual creator when it is applied to #AI. (Hint: it won’t.)
Copyright law is already abused and extended to an absurd degree; it overreaches. It impoverishes society by putting up barriers to creation and allowing toll-collectors to insert themselves between citizen artists and their audiences.
Labor law is likely what we need to lean on. #unions and #guilds protect creators in a way that copyright cannot. The inequality and unequal bargaining power that lead to the exploitation of artists and workers are what we need to address head-on.