If there were an open sign up for colonists for Mars, Titan, etc. I'd put my name on there without hesitation.
And so it begins...
I did that with RealCartoon-XL, and soooometimes it gave good results for the style, but it was inconsistent. I guess I just got too used to SD1.5 models specializing in a certain aesthetic.
Let's assume that it is indeed a burlap skort
Finally a solution! Now what to do about the turbines murdering whales?
How about Three Point One Four?
And how many girls do you know that can play the harmonica with their pussies?
I think you've got a setting wrong. I've got mine set to download only, so it just downloads the update in the background and notifies me. I've even left that notification sitting there for months without it forcing the update or nagging me.
I'm curious how stable diffusion plays into taking over the world.
Oh my, I found it. I think that guy needs a divorce...
You may be able to find an artist with a similar style here https://www.midlibrary.io/categories/illustrators
Looks like the data from the-eye.eu/redarcs only goes up to the start of March. But luckily the folks at ArchiveTeam processed the post at a time before the comment was edited. It can be seen here.
Okay soooooo, that took a lot longer than I anticipated, but I think I got it. It turns out to be a problem with the VAE encoding process, and it can be handled with the ImageCompositeMasked node, which combines the padded image with the new outpainted area so that the pre-outpainted area isn't affected by the VAE. I learned this here https://youtu.be/ufzN6dSEfrw?si=4w4vjQTfbSozFC6F&t=498. The whole video is quite useful, but the part I linked to is where he talks about that problem.
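In case it helps, the idea behind that node can be sketched as plain pixel math (this is just an illustrative toy with NumPy arrays, not what ComfyUI literally runs internally):

```python
import numpy as np

def composite_masked(original, outpainted, mask):
    """Keep original pixels where mask == 0, take outpainted pixels
    where mask == 1. This is the principle behind ImageCompositeMasked:
    only the newly outpainted region comes from the VAE round-trip,
    so the already-finished area stays byte-identical."""
    return original * (1.0 - mask) + outpainted * mask

# Toy 1x4 "image": left half is the finished gen, right half is the new outpaint.
original   = np.array([1.0, 1.0, 0.0, 0.0])
outpainted = np.array([0.9, 0.9, 0.5, 0.5])  # VAE encode/decode subtly altered everything
mask       = np.array([0.0, 0.0, 1.0, 1.0])  # 1 marks the outpainted area

result = composite_masked(original, outpainted, mask)
# The original pixels survive untouched; only the masked area is replaced.
```

Without this step, every outpainting pass re-encodes the whole canvas through the VAE, so the finished areas degrade a little more each time.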
The next problem I ran into is that at around the fourth-from-last outpainting, ComfyUI would just stop and refuse to go any further. The system I'm using has 24GB of VRAM and 42GB of RAM, so I didn't think that was the problem, but just in case I tried it on a beastly RunPod machine with 48GB of VRAM and 58GB of RAM. It had the exact same problem.
To work around this, I first bypassed everything except the original gen and the first outpaint, then enabled each outpaint one by one until I got to the fourth from the last. At that point I saved the output image, bypassed everything except the original gen and the first outpaint, enabled the last four outpaints, and loaded the saved image manually.
I used DreamShaper XL Lightning because there was no way I was going to wait for 60 steps each time with FenrisXL. I tried two different ways of using the same model for inpainting. The first was using the Fooocus Inpaint node and the Differential Diffusion node. This worked well, but when comfy stopped working I thought maybe that was the problem, so I switched all of those out for some model merging. Basically, it subtracts the base SDXL model from the SDXL inpainting model and adds DreamShaper XL Lightning to that. This creates a "DreamShaper XL Lightning inpainting model". The SDXL inpainting model can be found here.
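As plain tensor math, that add-difference merge looks roughly like this (a toy sketch with Python dicts standing in for checkpoint state dicts; inside ComfyUI it's wired up with the model merge nodes instead):

```python
def add_difference_merge(inpaint_model, base_model, style_model):
    """Add-difference merge: (inpaint - base) isolates the 'inpainting delta'
    that was trained on top of base SDXL, then grafts that delta onto the
    style checkpoint, key by key."""
    return {k: style_model[k] + (inpaint_model[k] - base_model[k])
            for k in style_model}

# Toy single-weight example.
base    = {"w": 1.0}   # base SDXL
inpaint = {"w": 1.5}   # SDXL inpainting = base + a delta of 0.5
style   = {"w": 2.0}   # DreamShaper-XL-Lightning-like checkpoint

merged = add_difference_merge(inpaint, base, style)
# merged carries the style weights plus the inpainting delta
```

The point of subtracting the base first is that you transfer only what the inpainting fine-tune changed, rather than averaging two whole checkpoints together.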
You should be able to use this workflow with FenrisXL the whole time if you want. You'll just need to change the steps, CFG, and maybe sampler at each ksampler.
Image with ImageCompositeMasked: https://files.catbox.moe/my4u7r.png
Image without ImageCompositeMasked: https://files.catbox.moe/h8yiut.png