Exterior design using stable-diffusion 💡

General install instructions:

- CompVis Stable Diffusion: https://github.com/CompVis/stable-diffusion ("High-Resolution Image Synthesis with Latent Diffusion Models")
- Basujindal fork, optimized for lower VRAM: https://github.com/basujindal/stable-diffusion ("Optimized Stable Diffusion (Sort of)")
- ControlNet: https://github.com/lllyasviel/ControlNet
pstring = "An fantasy english family home, dog in the foreground, fantasy, illustration, trending on artstation"
input_img = "../inputs/halle_at_home_2021_s.JPG"
strength = range(30, 75, 5)
for s in strength:
!python optimizedSD/optimized_img2img.py --prompt "{pstring}" --init-img {input_img} --strength {s*0.01} --seed 200 --outdir {outdir}
- Exterior design with Stable Diffusion + ControlNet, using canny-fp16 edge detection (a diffusers-based sketch follows below).

```python
prompt = "modern english front garden, with traditional lush green lawn and striking architectural design"
```
- Alternative edge control, using hed-fp16. Generation parameters:

```
Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 3669285758, Size: 512x512,
Model hash: bb6e6362d8, Model: chikmix_V1,
ControlNet: "preprocessor: softedge_hed, model: control_hed-fp16 [13fee50b]"
```
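For the HED soft-edge variant in diffusers, the controlnet_aux detector can produce the control image; a short sketch, with the model IDs assumed rather than taken from the run above:

```python
import torch
from PIL import Image
from controlnet_aux import HEDdetector
from diffusers import ControlNetModel

# HED soft-edge detector and its matching ControlNet (assumed IDs).
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
control = hed(Image.open("../inputs/house_front.jpg"))  # soft-edge control image
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-hed", torch_dtype=torch.float16)
# Build the pipeline as in the canny sketch above, passing this controlnet instead.
```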
- Midjourney generation from the prompt:

```
line art drawing of top down landscape architectural plan of a classic english garden --s 1 --v 4 --q 2 --s 5000
```
- Stable Diffusion + ControlNet with canny-fp16. Example from argaman123.

```
landscape garden with flowers, professional photograph, accurate, intricate
```
- Using the output of one generation as the input for the next: this iterative process can build increasingly complex and customizable images (see the loop sketch below).

```python
pstring = "A distant futuristic city full of tall buildings inside a huge transparent glass dome, In the middle of a barren desert full of large dunes, Sun rays, Artstation, Dark sky full of stars with a shiny sun, Massive scale, Fog, Highly detailed, Cinematic, Colorful"

!python optimizedSD/optimized_img2img.py --prompt "{pstring}" --init-img {input_img} --strength 0.8 --n_iter 2 --n_samples 3 --H 512 --W 512 --seed 12 --outdir {outdir} --ddim_steps 200
```
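A minimal notebook-style sketch of that iterative loop, feeding each round's newest output back in as the next init image. The output-file layout of optimized_img2img.py is an assumption here, hence the glob; the starting image path is hypothetical.

```python
import glob
import os

input_img = "../inputs/dome_city_seed.png"  # hypothetical starting image

for i in range(3):
    outdir = f"../outputs/iterate_{i}"
    !python optimizedSD/optimized_img2img.py --prompt "{pstring}" --init-img {input_img} --strength 0.8 --n_iter 2 --n_samples 3 --H 512 --W 512 --seed 12 --outdir {outdir} --ddim_steps 200
    # Assumes the script drops PNGs somewhere under outdir; take the newest
    # one as the init image for the next round.
    input_img = max(glob.glob(f"{outdir}/**/*.png", recursive=True), key=os.path.getmtime)
```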
- Using an input image to create unlimited variations (seed-sweep sketch below). Image from jansteffen on r/stablediffusion.
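One simple way to get those variations with the same optimizedSD script: hold the prompt, init image, and strength fixed (reusing the pstring and input_img variables from the cells above) and sweep only the seed.

```python
# Same prompt, image, and strength each time; only the seed changes,
# so every run produces a fresh variation on the same input.
for seed in range(100, 110):
    !python optimizedSD/optimized_img2img.py --prompt "{pstring}" --init-img {input_img} --strength 0.5 --seed {seed} --outdir ../outputs/variations
```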
Inpainting applies a layer mask to an area of interest, then runs img2img with a text prompt to generate new content inside just that region.

- 📹 Tutorial from 1littlecoder on YouTube, with an accompanying Colab notebook.
- 🤗 Uses the Hugging Face diffusers library.
Example: adding a dragon to the castle (1), then adding flaming rubble to the gate (2).

```python
from torch import autocast
from diffusers import StableDiffusionInpaintPipeline

# Legacy diffusers inpainting API from the tutorial era: the call takes
# init_image/mask_image and returns a dict with a "sample" key.
pipe = StableDiffusionInpaintPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")
prompt = "A fantasy castle with a dragon defending. Trending on artstation, precise lineart, award winning, divine"
with autocast("cuda"):
    images = pipe(prompt=prompt, init_image=init_image, mask_image=mask_image, strength=0.7)["sample"]
```
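The pipeline call above expects init_image and mask_image as same-size PIL images. A minimal sketch of building a rectangular mask; the file name and mask region are hypothetical, and white marks the area to repaint:

```python
from PIL import Image, ImageDraw

init_image = Image.open("castle.png").convert("RGB").resize((512, 512))  # hypothetical file
# Black = keep, white = repaint: paint the region where the dragon should appear.
mask_image = Image.new("L", init_image.size, 0)
ImageDraw.Draw(mask_image).rectangle((260, 40, 480, 220), fill=255)
```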
- Gradio webui by hlky: https://github.com/sd-webui/stable-diffusion-webui
  - Clone the repo
  - Run `webui.bat` from Windows Explorer
LAION-Aesthetics v2 6+ on Datasette, from this blog post and the accompanying Hacker News discussion:

- Top artists: https://laion-aesthetic.datasette.io/laion-aesthetic-6pls/artists?_sort_desc=image_counts
- Search by artist: https://laion-aesthetic.datasette.io/laion-aesthetic-6pls/images?_search=%22Thomas+Kinkade%22&_sort=rowid
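Datasette also exposes these tables as JSON by appending `.json` to the URL, so the same queries are scriptable. A sketch of the "Search by artist" query above; the `url` and `text` column names are assumptions about the table schema:

```python
import requests

# Same search as the URL above, but via Datasette's JSON API.
base = "https://laion-aesthetic.datasette.io/laion-aesthetic-6pls/images.json"
params = {"_search": '"Thomas Kinkade"', "_sort": "rowid", "_shape": "array", "_size": 5}
for row in requests.get(base, params=params).json():
    print(row.get("url"), (row.get("text") or "")[:60])
```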