Are there any studies that actually show the efficacy of #Nightshade in preventing models from being trained on your artwork? The authors of Nightshade claim that it works, of course, but have there been independent studies to verify this?
I’ve only found some reddit posts that talk about it. #GLAZE and Nightshade don’t seem to be effective at fooling CLIP (which extracts a text description from an image), only at fooling models during training into misjudging an image’s style (GLAZE) or its label-to-subject correlation (Nightshade). While these sound pretty good, they don’t seem to be silver bullets. How many of us have to consistently poison training data before models are actually fooled? How effective are these techniques in practice? #ai
I understand the importance of alt text for images. But as I was drafting my image description just now a question occurred to me – in adding alt text to images am I also inadvertently making it easier for the AI bots to scrape my artwork?
For a hackathon this weekend, we built a small application in #Kotlin, #Javalin and #Lit that uses #Tensorflow to detect if you're about to upload something you might want to reconsider, and then allows stripping Exif metadata for privacy.
We also looked at distorting the image to make it unusable for training an #AI. In one day we could only garble the image beyond human recognition; a better option would be to integrate #Glaze, which distorts the image for AI but not for the human eye.
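The hackathon app itself isn’t shown here, so as an illustration only, here’s a minimal pure-stdlib sketch of the EXIF-stripping step in Python rather than Kotlin. It works directly on JPEG marker segments (EXIF lives in APP1 segments near the start of the file); a production app would use a proper image library instead.

```python
import struct

def strip_exif_jpeg(data: bytes) -> bytes:
    """Remove APP1 (EXIF/XMP) segments from a JPEG byte stream.

    A JPEG is a sequence of marker segments: 0xFF, a marker byte, then
    (for most markers) a big-endian 16-bit length that includes itself.
    We copy every segment except APP1 (0xFFE1), where EXIF data lives.
    """
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            out += data[i:]          # entropy-coded data: copy verbatim
            break
        marker = data[i + 1]
        if marker == 0xD9:           # EOI: end of image
            out += data[i:i + 2]
            break
        if marker == 0xDA:           # SOS: rest is scan data until EOI
            out += data[i:]
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker != 0xE1:           # keep everything except APP1
            out += data[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

This drops metadata without touching pixel data, so the visible image is unchanged.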
Does anyone know of an existing open source project working on AI model poisoning or style cloaking, in the vein of #glaze and #nightshade?
I'm interested in this tech but they both seem to be proprietary, and I'd like to see if there is any work being done on the open source side of things.
@aenderlara Looks great! Also, I never heard of Cara .. great! Is that an alternative to ArtStation? I see they have #Glaze to help artists protect their work from AI :)
I've been following the development of a neat little tool called Nightshade, which supposedly alters the image in imperceptible ways that affect how generative models learn from your art, effectively "poisoning" their training data. There is a second tool called Glaze, by the same developers who made Nightshade.
In short, Nightshade distorts what the AI learns about a given picture, and Glaze protects against style mimicry, a quite awful practice I've already come across often, where some insensitive individual trains an AI on a specific artist's work.
Using both of these tools we can push back against companies and individuals that scrape our art without our consent.
ALL of us should use #Glaze on artwork we post online. It’s a defense against “style mimicry attacks.”
#Nightshade is offensive. It “turns any image into a data sample that is unsuitable for model training. [It] transforms images into ‘poison’ samples, so that models training on them without consent will see their models learn unpredictable behaviors …”
Since their arrival, generative AI models and their trainers have demonstrated their ability to download any online content for model training. For content owners and creators, few tools can prevent their content from being fed into a generative AI model against their will. Opt-out lists have been disregarded by model trainers in the past, and can be easily ignored with zero consequences. They are unverifiable and unenforceable, and those who violate opt-out lists and do-not-scrape directives cannot be identified with high confidence.
In an effort to address this power asymmetry, we have designed and implemented Nightshade, a tool that turns any image into a data sample that is unsuitable for model training. More precisely, Nightshade transforms images into "poison" samples, so that models training on them without consent will see their models learn unpredictable behaviors that deviate from expected norms, e.g. a prompt that asks for an image of a cow flying in space might instead get an image of a handbag floating in space.
#Nightshade is an offensive #DataPoisoning tool, a companion to a defensive style protection tool called #Glaze, which The Register covered in February last year.
Nightshade poisons #ImageFiles to give indigestion to models that ingest data without permission. It's intended to make those training image-oriented models respect content creators' wishes about the use of their work. #LLM #AI
Hmm, here’s an idea: what if tools like Glaze and Nightshade were integrated into fediverse servers and/or clients and automatically applied whenever any images are posted… 🤔
Anti-AI image-scraping tool that serves up a random-noise BMP with #Glaze applied to it. When an AI web bot asks for images, the server just gives it junk instead of the images it thinks it's getting.
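Glaze itself is a separate tool, but the serve-junk-to-bots half of this idea can be sketched in plain Python. The User-Agent list below is an illustrative assumption (real scrapers can spoof their UA, so this is best-effort only), and the noise image is built by hand as a valid 24-bit BMP:

```python
import os
import struct

# Hypothetical, incomplete list of known AI-scraper User-Agent tokens.
BOT_AGENTS = ("GPTBot", "CCBot", "Google-Extended", "Bytespider")

def is_ai_scraper(user_agent: str) -> bool:
    """Crude User-Agent check; honest bots identify themselves, liars won't."""
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in BOT_AGENTS)

def random_noise_bmp(width: int, height: int) -> bytes:
    """Build a valid 24-bit uncompressed BMP filled with random pixel noise."""
    row = width * 3
    pad = (4 - row % 4) % 4                    # BMP rows pad to 4 bytes
    pixels = b"".join(os.urandom(row) + b"\x00" * pad for _ in range(height))
    # BITMAPINFOHEADER: size, width, height, planes, bpp, compression,
    # image size, x/y pixels-per-metre, palette counts.
    info = struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24,
                       0, len(pixels), 2835, 2835, 0, 0)
    # File header: magic, total file size, reserved, pixel-data offset.
    header = struct.pack("<2sIHHI", b"BM", 14 + 40 + len(pixels), 0, 0, 54)
    return header + info + pixels
```

A server would call `is_ai_scraper()` on each image request and respond with `random_noise_bmp()` instead of the real file when it matches; applying Glaze on top of the noise, as the post suggests, would be an extra step.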
Throwing this out on the #fediverse: with Meta scraping #artworks from billions of accounts between Facebook and Instagram to train their AI, I'm seeing a strong feeling of helplessness, of "where do I go from here and how do I protect myself?"
There's a growing move toward image-sharing platforms with #Glaze and #Nightshade options.
How is the @pixelfed crowd dealing with this? Is #pixelfed safe from scraping and crawling while showcasing artists?
@jmcrookston Tech described here will not address that flaw (of either people or LLMs being trained on human knowledge/dialogue), & I'd prefer a phrase carrying fewer negative connotations than "data poisoning." If LLMs and #AI are inevitable — and they are, in #healthcare and elsewhere — it behooves owners/operators to make sure they "behave" ethically:
Friend in #Bangalore has been taking these #pottery classes, thought I’d share. So satisfying to watch. She’s happy with :instagram: , so I don’t bother pulling her to join the Fediverse. 🎧 On.