A research script based on Artificial Intelligence for Blender[^1]. This is a multi-step development project, designed with a local LLM system[^2] in mind. The script/addon is intended to generate unique images and video sequences within the Blender node editor, based on public checkpoint models[^3] and/or private custom models and LoRAs[^4]. It includes an integrated machine learning process as well as a workflow exporter script.
!!! This documentation is in progress !!!
The code release will be uploaded as soon as the complete package is ready. Stay tuned...
NOTE: This project is derived from the work of KarryCharon & Yorha4D, called "ComfyUI-BlenderAI-node".
Link: https://github.com/AIGODLIKE/ComfyUI-BlenderAI-node
Users: https://github.com/Yorha4D & https://github.com/KarryCharon
Release: 1.2.9 - Licensed under GNU General Public License v3.0
"I decided to re-write this addon for several reasons. The first was that the original translation was in Chinese and that some errors appeared due to non-standard characters. The second was that the interface deserved to be redesigned to integrate into my workflow. The third was that I wanted to add functions for animation and the video sequencer. And the last because I haven't found a way to get in touch with the developers of the project and offer my collaboration." 🙋
The objective of this project is to offer various digital creation tools, using Blender as the main platform and running only in a local environment. With Blender 4.x in the process of being officially released, the main goal is to deliver this addon as soon as that new version of our favorite software ships. Here is a non-exhaustive list of planned features:
- Create images from prompts (background image, textures, hdri, titles)
- Create images from images (styles, tones, themes)
- Create 3D Models from images (DMTet integration)
- Create text from prompts (subtitles)
- Create image sequences from prompts/images (video strips, animations)
- Create audio strips from prompts/images (voices, music themes)
- NEW Blender Node Editor window
- Server connection for local ComfyUI sessions (ComfyUI via Stability Matrix), PC only (see the connectivity sketch after this list)
- CPU or GPU processing with CUDA support
- Preferences Panel (Server Settings, Image Rendering Settings, Training Options)
- NodeTree Editor Tools Panel (N) with Presets collection and Store tools
- Automatic saving of generated content in the .blend file
- Stable Diffusion model custom nodes, supporting SD 1.5, 2.0 and XL 1.0 (see the tested models list)
- Processing text-2-image for single images and image sequences, with rendering/storing
- Processing Script-2-Prompt GPT-2 based custom nodes
- Processing 3D scene-2-rendered image custom nodes
- Processing image-2-Images-Sequence_Strip option
- Custom checkpoint and LoRA loading/merging nodes
- No LoRA image training workflow included
- Background Mask with Alpha channel custom node
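Since the addon talks to a locally running ComfyUI server, a quick connectivity check is useful before queueing any workflow. The snippet below is a minimal sketch, assuming ComfyUI's default address (127.0.0.1:8188) and its /system_stats status endpoint; adjust the host and port to whatever you set in the Server Settings panel.

```python
# Minimal sketch: check that a local ComfyUI server is reachable before the
# addon tries to queue a workflow. Assumes ComfyUI's default address
# (127.0.0.1:8188); adjust host/port to match your Server Settings panel.
import json
import urllib.request


def comfyui_is_running(host: str = "127.0.0.1", port: int = 8188, timeout: float = 2.0) -> bool:
    """Return True if a ComfyUI server answers on host:port."""
    url = f"http://{host}:{port}/system_stats"  # ComfyUI status endpoint
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            stats = json.load(response)
            print("ComfyUI reachable:", stats.get("system", {}).get("os", "unknown OS"))
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print("Server online?", comfyui_is_running())
```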
Todo list: see the next development phases. Updated: 26/10/2023
After installing the ComfyUI services with your preferred platform (I suggest Stability Matrix, as it is easy to install for beginners), make sure you install the additional modules. You also need to install Git on your computer (if it is not installed already). To install these modules, open a CMD window in the \ComfyUI\custom_nodes folder and run "git clone" followed by the repository link for each of them (a scripted alternative is sketched after this list):
- ComfyUI-Manager
- ComfyUI-Impact
- ComfyUI-Inspire
- WAS Nodes Suite
- AnimateDiff
- Prompt-Expansion
- Derfuu-ComfyUI_ModdedNodes
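If you prefer scripting the step above, here is a small convenience sketch that clones each custom-node repository into ComfyUI's custom_nodes folder. The folder path and the repository list are placeholders to fill in from each project's GitHub page; running "git clone <url>" manually in a CMD window achieves the same result.

```python
# Convenience sketch for cloning the custom-node repositories listed above
# into ComfyUI's custom_nodes folder. The folder path and the repository URLs
# are placeholders: take the exact links from each project's GitHub page.
import subprocess
from pathlib import Path

CUSTOM_NODES_DIR = Path(r"C:\path\to\ComfyUI\custom_nodes")  # adjust to your install

REPOSITORIES = [
    # One URL per custom-node pack; verify each link on the project page.
    "https://github.com/ltdrdata/ComfyUI-Manager",  # example entry
    # "...ComfyUI-Impact-Pack", "...ComfyUI-Inspire-Pack", "...was-node-suite-comfyui", ...
]

for url in REPOSITORIES:
    # Equivalent to opening a terminal in custom_nodes and running "git clone <url>".
    subprocess.run(["git", "clone", url], cwd=CUSTOM_NODES_DIR, check=True)
```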
- Go to the user preferences screen (Edit -> User Preferences).
- Select the “Addons” tab.
- Click “Install from File…” and select the downloaded zip file.
- Click the checkbox on the left to enable the add-on.
- Click “Save User Settings” to make sure the addon is enabled when you restart Blender.
- Set the parameters to fit your GPU system and paths.
- Open the new node editor and press N to open the addon panel.
- Start generating content :) Read the tutorials in this repo (a scripted alternative to the steps above is sketched below)
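For users who prefer the Python console, the same install/enable steps can be scripted with Blender's standard operators. This is only a sketch: the zip path and the addon module name below are placeholders for whatever the release package will use.

```python
# Scripted equivalent of the installation steps above (Blender 2.8+ operators).
# The zip path and the addon module name are placeholders.
import bpy

ADDON_ZIP = r"C:\Downloads\blender_ai_addon.zip"   # path to the downloaded zip (placeholder)
ADDON_MODULE = "blender_ai_addon"                  # module name inside the zip (placeholder)

bpy.ops.preferences.addon_install(filepath=ADDON_ZIP)   # same as "Install from File..."
bpy.ops.preferences.addon_enable(module=ADDON_MODULE)   # same as ticking the checkbox
bpy.ops.wm.save_userpref()                               # same as "Save User Settings"
```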
The addon offers two types of presets: full nodetrees and node groups. When the user saves a preset, it is automatically kept in a specific folder. This allows, among other things, sharing your “workflows” or nodetrees with other users (see the sketch after this list). Here is the list of presets provided in this addon and their usage:
- Basic setups
- Simple text-2-image
- Advanced text-2-image
- Simple image-2-image
- Advanced image-2-image
- Advanced setups
- LoRas Merging
- VAEs Merging
- Complex text-2-image
- Complex image-2-image
- Animation setups
- Basic image sequence
- Advanced image sequence
- Film making setups
- VFX movie (Inpaint)
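Because presets are plain files kept in per-category folders, sharing a workflow is just a matter of copying the file. The sketch below only illustrates that idea; the folder layout and file format shown are assumptions, not the addon's actual paths.

```python
# Illustrative only: presets saved by the addon end up as files in per-category
# folders, which makes sharing nodetrees as simple as copying files.
# The folder layout below is an assumption made for demonstration.
from pathlib import Path

PRESETS_ROOT = Path.home() / "blender_ai_presets"  # hypothetical location


def list_presets(category: str) -> list[str]:
    """Return preset file names for one category (e.g. 'basic', 'animation')."""
    folder = PRESETS_ROOT / category
    return sorted(p.name for p in folder.glob("*.json")) if folder.exists() else []


for category in ("basic", "advanced", "animation", "film"):
    print(category, "->", list_presets(category))
```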
Objectives: Collecting user data and processing it locally allows you to create personalized artistic and technical models, ready to be used in a new nodetree.
- Many options: structural, linear, hierarchical...
- JSON files storage path
Data Types |
---|
- Actions History
- What is the last action performed by a specific user?
- What are the most frequently repeated actions?
- Space Orbits
- What is the mean distance from the object origin in Edit Mode?
- What is the mean distance of an action from the world origin?
- Time Sequences
- What is the mean time between identical actions?
- What is the time between edits of the same files?
- User Types
- What is the skill level of a specific user?
- What is the most needed skill for each user type?
- User Project Types
- What are the project types? Architecture, industrial design, etc.
- What graphical style is associated with the project? Realistic scene, cartoon, etc.
The addon has 3 modes (Analyse / Prepare / Write), working on an external file from session start (see the sketch after this list).
- ANALYSE: writes an external file
- PREPARE: prepares image data
- WRITE: writes .ckpt/tensor files
- NOTE: ...
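As a rough illustration of the ANALYSE mode, the sketch below dumps the operator history of the current Blender session to a local JSON file, which could later feed the actions-history statistics described above. The output path is an assumption; the real addon exposes its storage path in the preferences.

```python
# Minimal sketch of the ANALYSE idea: dump the recent operator history of the
# current Blender session to a local JSON file. The output path is hypothetical.
import json
from pathlib import Path

import bpy

OUTPUT_FILE = Path.home() / "bai_session_actions.json"  # hypothetical storage path


def dump_action_history() -> None:
    """Write the operators executed so far in this session to a JSON file."""
    actions = [op.bl_idname for op in bpy.context.window_manager.operators]
    OUTPUT_FILE.write_text(json.dumps({"actions": actions}, indent=2))
    print(f"Saved {len(actions)} actions to {OUTPUT_FILE}")


dump_action_history()
```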
Data Processing |
---|
- Real-time Support
- Suggest a series of optional processes and combos
- Suggest a series of optional shortcuts
- Suggest a series of optional parametric objects
- Suggest a series of optional texturing processes
- Real-time Auto-Correct
- Show errors based on 3 main error types*
Data Backend |
---|
- Local Files
- Write data as text files (txt, JSON, XML)
- Write data as a new file type (.bai), a CSV alternative
- Temporary Files
- Realize 3 reality states A-X-B (data morphing)
- Compare 2 states A/B (see the sketch below)
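To illustrate the A/B state comparison, here is a minimal sketch that diffs two JSON snapshots and reports which keys changed. The snapshot format is an assumption made for this example only.

```python
# Illustrative sketch of the "compare 2 states A/B" idea: given two JSON
# snapshots of session data, report which keys changed between state A and B.
# The snapshot format is an assumption made for this example.
import json
from pathlib import Path


def compare_states(path_a: Path, path_b: Path) -> dict:
    """Return the keys whose values differ between two JSON snapshots."""
    state_a = json.loads(path_a.read_text())
    state_b = json.loads(path_b.read_text())
    keys = set(state_a) | set(state_b)
    return {k: (state_a.get(k), state_b.get(k)) for k in keys if state_a.get(k) != state_b.get(k)}


# Example usage (with hypothetical temporary files written by the addon):
# diff = compare_states(Path("state_A.json"), Path("state_B.json"))
# print(json.dumps(diff, indent=2))
```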
- Author: Uriel Deveaud - Kore Teknology
- License: This project is released under the GPL License.
- This work is dedicated to all Blender users around the world ;)
Footnotes

[^1]: Blender is the free and open source 3D creation suite. It supports the entirety of the 3D pipeline: modeling, rigging, animation, simulation, rendering, compositing and motion tracking, even video editing and game creation. Please visit the Blender official website.

[^2]: Large language models (LLMs) are very large deep learning models that are pre-trained on vast amounts of data. The underlying transformer is a set of neural networks consisting of an encoder and a decoder with self-attention capabilities.

[^3]: Checkpoints are snapshots of a working model taken during the training process and stored in non-volatile memory. In machine learning and deep learning experiments, they are used to save the current state of the model so that training can resume from where it left off.

[^4]: LoRA (Low-Rank Adaptation) is a technique for fine-tuning deep learning models that works by reducing the number of trainable parameters and enables efficient task switching.