The AI Horde is about to become 1 year old, on Sep 3rd. I plan to make it a whole day event, where I'll be on voice comms all day, just hanging out. You can ask me anything you want.
I hope people old and new join us to celebrate one year of crowdsourced Generative AI compute for everyone!
SDAI FOSS
Generate Stable Diffusion AI assets on your own Automatic1111 instance.
Stable Diffusion AI is an easy-to-use app that lets you quickly generate images from text or other images with just a few clicks. With Stable Diffusion AI, you can communicate with your own server and generate high-quality images in seconds.
I have tried to write a perspective more appropriate for the private discussion, but I keep failing because I get angry, so I am going to start here.
First, as a pet peeve, let me say:
I am autistic. I have ADHD. I have NVLD.
I strongly dislike it when people say things like "As a neurodivergent, I will readily admit that I do not react well to such comments and for that I’d like to apologize and try again."
So here it says that they "match our shared guidelines."
Which is great. Do those guidelines include not using models trained on data that you don't have permission to access?
Do those shared guidelines ban the use of stable diffusion?
If they don't currently, could they be updated accordingly?
This is a slip-up in the shell game: it freely admits that the public #AiHorde (1) has a set of restrictions on who can join, and (2) that set of restrictions does not care about the issues discussed.
Sorry, more correctly: a set of restrictions around what models are appropriate to use on the #AiHorde.
Which again underscores the point: they are choosing and curating the list of models.
That doesn't make them neutral middleware. Even if the project were neutral middleware (which again, it isn't), we must accept that they are deeply connected to a project that is not neutral: Stable Diffusion.
That they curate models gives them responsibility, even if nothing else does.
Putting some of my thoughts here with respect to #haidra and #nivenly, which I may formalize later into questions for the discussion:
Would Haidra be willing to commit to zero use or advertising of models/workers that are trained on data sourced from copyrighted material that does not include the holder's permission, irrespective of legal fair use qualifiers? (ACM 1.6, 2.8).
Has an analysis been done on the environmental impact of #AiHorde? What would this look like? (ACM 1.1, 1.2)
Relatedly: What would the members of #nivenly think of codifying a series of ethical principles around the use of generative AI? Would #Haidra and #AiHorde be willing to abide by them? (ACM 1.2, 3.4, 4.1).
Currently, Haidra appears to be taking an "I'm a sign, not a cop" approach to the problem of kudos being exchanged for money. Would Haidra and Nivenly be open to reexamining this strategy and determining whether other mechanisms might be more robust, or whether the current one can be secured? (ACM 1.2, 2.5)
While #Haidra asserts that individuals are not identifiable ( https://github.com/Haidra-Org/AI-Horde/blob/main/FAQ.md#can-workers-spy-on-my-prompts-or-generations ), there do not appear to be strong safeguards in place around this, as far as I can ascertain. Would the parties be open to an audit, and to treating any privacy risks identified as P0 priorities to fix, even if doing so degrades #AiHorde as a service or renders it infeasible? Is this something that #nivenly could invest in? (ACM 1.2, 1.6, 1.7, 2.4, 2.9)
A follow-on to (1): Given the structure of #Haidra, there are significant environmental concerns in both the training and execution of models under the #AiHorde. Can benchmarks be set to reduce this environmental impact over time? Would the Haidra project be amenable to treating this as a high priority and to holding themselves accountable to a reasonable schedule here? (ACM 1.1, 1.2, 3.2)
More later as I think of them, time to grab dinner.