# machine vivid dreams

## Setup

```shell
pip install -r requirements.txt --upgrade
```

If Python fails to resolve the taming-transformers imports, `git clone https://github.com/CompVis/taming-transformers` into the same folder as `txt2dream.py`.
## txt2dream.py / txt2dream.ipynb

```shell
python txt2dream.py
```
"""
total iterations = video_length * target_fps
key_frames = True, allows setup such as 10: (Apple: 1| Orange: 0), 20: (Apple: 0| Orange: 1| Peach: 1)
from frame 0 to frame 10 show Apple, from frame 10 to 20 show Orange & Peach
"""
from txt2dream import Text2Image
settings = {
'key_frames': True,
'width': 256,
'height': 256,
'prompt': '10: (Apple: 1| Orange: 0), 20: (Apple: 0| Orange: 1| Peach: 1)',
'angle': '10: (0), 30: (10), 50: (0)',
'zoom': '10: (1), 30: (1.2), 50: (1)',
'translation_x': '0: (0)',
'translation_y': '0: (0)',
'iterations_per_frame': '0: (1)'
'generate_video': True,
'video_length': 6, # seconds
'target_fps': 30,
'upscale_dream': True,
'upscale_strength': 2, # available [2, 4] -> 2x or 4x the generated output
}
Text2Image(settings)
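All of the schedule strings above follow the same `frame: (value)` pattern. As a rough illustration of how such strings can be parsed and interpolated per frame, here is a minimal sketch; the helper names and the linear interpolation between keyframes are assumptions for illustration, not the actual implementation inside `txt2dream.py`:

```python
import re

def parse_key_frames(schedule):
    # Turn '10: (Apple: 1| Orange: 0), 20: (Apple: 0| Orange: 1| Peach: 1)'
    # into {frame: {prompt: weight}}. Hypothetical helper, for illustration.
    frames = {}
    for match in re.finditer(r'(\d+)\s*:\s*\(([^)]+)\)', schedule):
        weights = {}
        for part in match.group(2).split('|'):
            name, _, value = part.rpartition(':')
            weights[name.strip()] = float(value)
        frames[int(match.group(1))] = weights
    return frames

def parse_value_schedule(schedule):
    # Numeric schedules like '10: (0), 30: (10), 50: (0)' -> {frame: value}
    return {int(f): float(v)
            for f, v in re.findall(r'(\d+)\s*:\s*\(([^)]+)\)', schedule)}

def value_at(frame, keyframes):
    # Linear interpolation between the surrounding keyframes (assumption);
    # values are clamped outside the first/last keyframe.
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    for lo, hi in zip(frames, frames[1:]):
        if lo <= frame <= hi:
            t = (frame - lo) / (hi - lo)
            return keyframes[lo] + t * (keyframes[hi] - keyframes[lo])
```

For example, with the `'angle': '10: (0), 30: (10), 50: (0)'` schedule above, frame 20 would sit halfway between the keyframes at 10 and 30, giving an angle of 5.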
## upscale_dream.py

```shell
python upscale_dream.py
```
```python
from upscale_dream import ScaleImage

settings = {
    'input': './vqgan-steps',  # or a single image, e.g. './vqgan-steps/0001.png'
    'output': './vqgan-steps-upscaled'
}
ScaleImage(settings)
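Since `input` may be either a directory of frames or a single image, and only 2x and 4x upscaling are available, a small pre-flight check before calling `ScaleImage` can catch mistakes early. The helper below is a hypothetical sketch, not part of `upscale_dream.py`:

```python
import os

def check_scale_settings(settings):
    # Hypothetical validation helper; names and defaults are assumptions.
    strength = settings.get('upscale_strength', 2)
    if strength not in (2, 4):  # only 2x and 4x are available
        raise ValueError('upscale_strength must be 2 or 4, got %r' % strength)
    source = settings['input']
    # 'input' may point at a directory of frames or at a single image
    if not (os.path.isdir(source) or os.path.isfile(source)):
        raise FileNotFoundError(source)
    os.makedirs(settings['output'], exist_ok=True)  # create output dir up front
    return settings
```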
## Coming soon

This kind of work is possible because of cool people such as:

- https://github.com/crowsonkb
- https://github.com/justin-bennington/S2ML-Art-Generator
- https://github.com/xinntao/Real-ESRGAN
- https://github.com/CompVis/taming-transformers
- https://github.com/lucidrains/big-sleep
- https://github.com/openai/CLIP
- https://github.com/hojonathanho/diffusion