I think I'm starting to learn how to use this. Instead of trying to prompt everything at once, I first prompted the background, then added a picture of a turtle and inpainted it to blend the style (not perfect, though). Then I opened it in GIMP and deleted every shell part, added a city and, again, inpainted to match the style. Then another batch of img2img to blend the overall image a bit more, and finally the upscaling.
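The img2img blending pass above is driven mostly by the denoising strength. A minimal sketch of the usual convention (as in pipelines like diffusers; `img2img_steps` is a hypothetical helper name, not part of any library):

```python
def img2img_steps(num_inference_steps: int, strength: float) -> int:
    """Approximate img2img behavior: the init image is noised to a point
    `strength` of the way through the schedule, and only the remaining
    steps are denoised. Low strength = subtle blend, high = big changes."""
    return max(1, int(num_inference_steps * strength))
```

At low strength only a handful of steps actually run, which is why img2img can blend styles without destroying the underlying composition.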
Can AI invent anything that is not based on anything in nature or previous work?
I posted this a while back.
I asked bing chat: "When and how was the wheel invented?"
"The wheel was invented sometime between 3500 and 3000 BC in Mesopotamia. The first wheel was not used for transportation, but for pottery. The first wheeled vehicle was a bullock cart, followed by war chariots and four-wheeled carts of the gods. The spoke wheel was invented in around 2000 BC, which considerably reduced the weight of the wheel. The wheel may have also been invented independently in China, around 2800 BC."
Me: "Could an AI have invented the wheel?"
"It is unlikely that an AI could have invented the wheel as it is not based on anything in nature. However, AI can be used to optimize the design of wheels."
@randulo @ct_bergstrom @emilymbender From my experience experimenting with Stable Diffusion, completely original concepts seem to be mostly happy accidents.
Instead of classifying it that way, I tend to come back to thinking of it in copyright terms, as being either:
derivative or
transformative
The truth is that it can be both; just don't expect any deeper understanding from it. When creating images it can be completely transformative, unlike anything before, but it rarely is.
It will mostly fail to replicate any complicated mechanism that has to be built a certain way to function properly.
Everyone knows by now how poorly these models work with hands, but the same goes for engineering parts. Even things as simple as a wrench or a hammer will most likely need further processing, for example with additional tools like inpainting. #ai #stablediffusion
The majority of AI art I see is terrible: bad-looking and deformed. But with the right instructions it doesn't have to be.
For accurately creating fictional characters, this is probably the best method I've seen so far. There's work involved in training a model like this; it's not something where you can just throw in a bunch of prompts and expect good results.
I started by gathering 64 screenshots of my 3D VRChat model from Blender, in various positions and angles, in different lighting, wearing selected clothing. Then I added proper tags describing each image in a corresponding text file.
Based on the training data and the keywords I specified, you can input various clothing alternatives, including:
armor
jacket
shirt
barechest
Training took about 30 minutes on an RTX 2080 Ti GPU.
With this technology gaining traction, I certainly sympathize with artists concerned about their profession being in danger. It's a topic worth discussing, along with what its societal effects will be. I can certainly see it ending up badly and requiring proper regulation.
One thought on my mind is that these are "tools" for us to use, and as with any tool, whether they're good or bad ends up being determined by how they're used. It's not the technology itself that is the danger, but rather how corporations and bad interests may exploit it to the detriment of everyone else.
Curious to hear others' thoughts about this and how we can approach it in a way that is beneficial for everyone.
These guides were really useful for explaining complex concepts without requiring an understanding of the mathematics involved. It gets really complicated the deeper you delve into it. https://rentry.org/lora_train https://rentry.org/59xed3
Since I really liked the results, I ended up retouching some parts manually: things like eye color, fingers, and random clutter wherever details looked weird.
In 1923, a cartoonist imagined that in 2023, an electric machine would generate ideas and draw them as cartoons automatically. With your Idea Dynamo linked up to your Cartoon Dynamo and an adequate supply of ink, this machine would create "hilarious" (?) cartoons like 'How to Torture Your Wife'. 🙄
I wonder how long it’ll take for fans of AI art to discover that it both has a specific aesthetic and that aesthetics eventually fall out of popular fashion?
He spends so much time Photoshopping the #StableDiffusion in-painting results that the #GenerativeAI now represents only about 40% of the whole workflow. Good-quality art still requires that the artist put in a lot of their own time and effort integrating the new tools into their overall vision.
Been playing with #stablediffusion and #controlnet lately. It's surprisingly addictive... I've been leaving the seed on -1 and getting random new images in the same pose. Very entertaining.
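For context, "seed -1" in most Stable Diffusion front-ends isn't a special seed; it just means "draw a fresh random one each run". A minimal sketch of that convention (`resolve_seed` is a hypothetical helper, not a library function):

```python
import random

def resolve_seed(seed: int) -> int:
    """-1 draws a fresh 32-bit seed (a new image every run); any other
    value is used as-is, so the same prompt + seed reproduces an image."""
    if seed == -1:
        return random.randint(0, 2**32 - 1)
    return seed
```

With ControlNet pinning the pose, varying only the seed is exactly what yields "random new images in the same pose".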
Does anybody have an idea of when playing with #stablediffusion gets boring? Actually, I'm asking whether I should upgrade from a GTX 1660 Ti to an RTX 4070 Ti... #ai #gpu
@msprout Could you tell the difference? I made this in 60 seconds with #stablediffusion on an M2 Pro. It's impressive to be able to run that locally. I also saw Vicuna made a 3B-parameter model that would be suitable for more embedded devices if quantized. I'm thinking about using this instead for my bot, to have something non-proprietary without all the BS from "OpenAI".