AI clones of people are becoming a lot more common and a lot more worrying.
This one was taken down after Ali jumped through a bunch of hoops to prove her identity, but CivitAI currently removes models only in response to complaints - it has no policy against creating models that impersonate real people.
Although companies have created detectors to help spot #deepfakes, studies have found that biases in the data used to train these tools can lead to certain demographic groups being unfairly targeted.
A team of researchers discovered new methods that improve both the fairness and the accuracy of these detection algorithms by teaching them about human diversity.
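One common way to "teach a detector about human diversity" is to reweight the training loss so that every demographic group contributes equally, rather than letting overrepresented groups dominate. This is a minimal sketch of that idea, not the researchers' actual method; the function names and toy data are invented for illustration.

```python
# Sketch of group-balanced loss reweighting for a fairness-aware detector.
# All names and data here are hypothetical illustrations.

from collections import Counter

def group_balanced_weights(groups):
    """Per-sample weights so each demographic group gets equal total weight."""
    counts = Counter(groups)
    n_groups = len(counts)
    # Each group receives 1/n_groups of the total weight,
    # split evenly among its members.
    return [1.0 / (n_groups * counts[g]) for g in groups]

def weighted_loss(per_sample_losses, weights):
    """Weighted sum of per-sample losses (weights sum to 1)."""
    return sum(l * w for l, w in zip(per_sample_losses, weights))

# Toy batch: three samples from group "A", one from group "B".
groups = ["A", "A", "A", "B"]
losses = [0.2, 0.4, 0.6, 1.0]

w = group_balanced_weights(groups)
# Group B's single sample now counts as much as all of group A combined,
# so errors on the underrepresented group are not drowned out.
print(weighted_loss(losses, w))
```

The design point is that an unweighted average would let the majority group's low losses mask poor performance on the minority group; balancing the weights forces the model to pay equal attention to both.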
Many people canceled their OpenAI subscriptions, or it's tough to monetize content created with generative AI, I guess, so Sama has a new plan to use all those GPUs: they are now going after OF models. WTF, OpenAI? Are they going to allow deepfakes? This company is beyond evil 👿
#AI #GenerativeAI #Propaganda #DeepFakes #Disinformation: "AI propaganda is here. But is it persuasive? Recent research published in PNAS Nexus and conducted by Tomz, Josh Goldstein from the Center for Security and Emerging Technology at Georgetown University, and three Stanford colleagues—master’s student Jason Chao, research scholar Shelby Grossman, and lecturer Alex Stamos—examined the effectiveness of AI-generated propaganda.
#AI #GenerativeAI #SyntheticMedia #DeepFakes: "Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they’re so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley.
Thanks to rapid advancements in generative AI and a glut of training data created by human actors that has been fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job matching facial expressions—the tiny movements that can speak for us without words.
But this technological progress also signals a much larger social and cultural shift. Increasingly, so much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences." https://www.technologyreview.com/2024/04/25/1091772/new-generative-ai-avatar-deepfake-synthesia/
How to ID an #AI imposter in video, audio & text.
"If there's any doubt about the veracity of a person's video, ask them to turn their head to the right or left, or to look backward. If the person complies but their head disappears from the video screen, end the call immediately.
"#OpenAI’s voice cloning AI model only needs a 15-second sample to work
Called Voice Engine, the model has been in development since late 2022 and powers the Read Aloud feature in #ChatGPT."
"The AI-generated voice can read out text prompts on command in the same language as the speaker or in a number of other languages. 👈"
"OpenAI told the publication the model will only be available to about 10 developers."