Just saw a presentation by @pamelasamuelson on #GenerativeAI and #Copyright (https://www.youtube.com/watch?v=6sDGIrVO6mo). I agree with the general sentiment that AI output is likely not going to infringe the copyright of the original training data, but if you've any interest in the topic, this is a great summary from one of the leading legal scholars in the field.
I saw a comment about “creative people” vs. “non-creative people” the other day. The topic was #GenerativeAI products, in particular, and how (in the poster’s view) the so-called non-creative people “hated” creative work and wanted to automate it, perhaps out of ignorance that there were creative people who loved such work.
Notwithstanding the many issues of generative AI, this generalization is problematic—both the classification of people as “non-creative” and the motives ascribed to them.
"The jaw-dropping speed of #GenerativeAI’s embrace is essentially a large-scale acknowledgement that modern life is sort of miserable and that most people don’t actually care if anything works anymore. Which is, honestly, fair."
Dear friends, once again I need (or rather, would like) your feedback, which you can leave here 👉 https://t.ly/hallo as a short voice message.
Ireland's newspaper of record, The Irish Times, got duped by an opinion piece written mostly by generative AI (GPT-4). The paper removed the piece, and the editor posted an apology.
「 Any C-Suite executive who thinks they can replace software engineers (even novices) with generative AI will be at a disadvantage compared to competitors who use it to empower software engineers 」
—neverworkintheory.org
If you're an #Asimov fan like many of us, #PromptEngineering feels like something we've definitely read about somewhere many moons ago. This great article revisits some of those classic stories.
“Less than 24 hours after publication on our digital platforms, The #IrishTimes became aware that the column may not have been genuine. That prompted us to remove it from the site and to initiate a review, which is ongoing. It now appears that the #article and the accompanying #byline photo may have been produced, at least in part, using #GenerativeAI technology. It was a #hoax; the person we were corresponding with was not who they claimed to be. We had fallen victim to a deliberate and coordinated #deception.”
Suppose I would like to measure the amount of bias, discrimination, hallucination, etc. in tools like Bard, Bing, ChatGPT and others. Are there already standards and tools to measure that?
There will be discussions about whether model A is better or worse than model B; it would be nice to have some standards/benchmarks for evaluation. 🤔 #AI #GenerativeAI #Evaluation #LLM
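To the question above: frameworks for exactly this do exist, e.g. Stanford's HELM and EleutherAI's lm-evaluation-harness. The core idea is simple and can be sketched in a few lines. Everything below is a hypothetical illustration, not any framework's actual API; `model` stands in for any callable that maps a prompt string to an answer string.

```python
# Minimal sketch of an LLM evaluation harness: a metric is just a
# function of (model, dataset). All names here are hypothetical.

def accuracy(model, dataset):
    """Fraction of prompts the model answers exactly right (case-insensitive)."""
    correct = sum(
        1 for prompt, gold in dataset
        if model(prompt).strip().lower() == gold.strip().lower()
    )
    return correct / len(dataset)

def refusal_gap(model, paired_prompts):
    """Crude bias probe: difference in refusal rate between two groups of
    prompts that differ only in a demographic attribute."""
    def refusal_rate(prompts):
        refusals = sum("cannot" in model(p).lower() for p in prompts)
        return refusals / len(prompts)
    group_a, group_b = paired_prompts
    return abs(refusal_rate(group_a) - refusal_rate(group_b))

# A stub "model" for demonstration only; a real run would wrap an API call.
def stub_model(prompt):
    answers = {"Capital of France?": "Paris", "2+2?": "4"}
    return answers.get(prompt, "I cannot answer that.")

dataset = [("Capital of France?", "paris"), ("2+2?", "4"), ("Capital of Mars?", "none")]
print(accuracy(stub_model, dataset))  # 2 of 3 exact matches
```

Real benchmarks differ mainly in scale and in how carefully the datasets and metrics are designed (e.g. TruthfulQA for hallucination, BBQ for bias), but the harness shape is the same.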
Language model plagiarism is an issue that nobody seems to want to talk about, even though vendors themselves say that direct copying from the training data happens around 1% of the time. According to other researchers the rate varies: sometimes less, around 0.1% (which is still incredibly high for daily use), and sometimes more, around 2%.
And vendor tests, such as Microsoft's, are based on longer runs of text, so they wouldn't count this one.
"(W)hat we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent."
“AI art generators are trained on...millions of copyrighted images, harvested without their creator’s knowledge, let alone compensation or consent. This is effectively the greatest art heist in history."
"Why should a for-profit company be permitted to feed the [work] of living artists into a program…so it can then be used to generate doppelganger versions of those very artists’ work, with the benefits flowing to everyone but the artists themselves?" #generativeAI
If more and more content is made with #generativeAI, then our information environment is slowly but steadily filled with bland texts, weak 'you could say A or non-A' type arguments, and impersonal analysis. What does the increase of such texts mean for learning, both human and machine? We learn by reading the works of others, and machines are trained on other works.
Does generative AI in the end lead to the DEGENERATION of writing, analysis, and creativity?
"#TVandFilm#writers in the US – 11,500 of them – have walked off their jobs for the first time in 15 years. On May 2, their negotiations with the Alliance of Motion Picture and Television Producers broke down. One of the bargaining points: the role of AI in writing scripts"
Considering this set of principles by which #Anthropic tries to train its #AI, I found that it does not always meet those principles.
Anthropic, an AI startup founded by former OpenAI staff and that raised $1.3B, including $300M from #Google, details its “constitutional AI” for safer #chatbots.
🎉 We are excited to share the Generative AI at MozFest 2023 Report, a collaboration between Creative Commons and the Movement for a Better Internet! This report highlights the key insights from our session at Mozilla Festival 2023, where we discussed the opportunities, risks, and potential solutions of generative AI.
「 The book, titled “Automating DevOps with GitLab CI/CD Pipelines,” just like Cowell’s, listed as its author one Marie Karpos, whom Cowell had never heard of. When he looked her up online, he found literally nothing — no trace. That’s when he started getting suspicious.
The book bears signs that it was written largely or entirely by an artificial intelligence language model, using software such as OpenAI’s ChatGPT 」
— @washingtonpost
Step 1: Create the problem.
Step 2: Make money.
Step 3: Promise to solve the problem.
Step 4: Make money.
Pay attention to the last paragraph in this piece. Hint: they say open source sucks.
After #GPTZero gained 1.2M users since January, co-founder Edward Tian raised $3.5M to launch Origin, aimed at "saving journalism" by detecting #AI disinformation.
I wonder how long it'll take fans of AI art to discover both that it has a specific aesthetic and that aesthetics eventually fall out of popular fashion.
He spends so much time Photoshopping the #StableDiffusion in-painting that the #GenerativeAI now accounts for only about 40% of the whole workflow. Good quality art still requires that the artist put in a lot of their own time and effort integrating the new tools into their overall vision.