- The AI model generates avatar faces using a custom Stable Diffusion model
- Deploy the model to Hugging Face
- Test the output using Postman
- def generate_avatar_face(image_path):
  - Parameters:
    - image_path (str): The path to the input image.
  - Returns:
    - avatar_face (PIL.Image): The generated avatar face.
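A minimal sketch of how `generate_avatar_face(image_path)` could be implemented, assuming the custom model is served through the diffusers img2img pipeline; the model ID and the default prompt below are placeholders, not the project's actual values.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Placeholder model ID -- swap in the custom avatar model's Hugging Face repo.
MODEL_ID = "runwayml/stable-diffusion-v1-5"

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16
).to("cuda")


def generate_avatar_face(image_path):
    """Generate an avatar face from the image stored at image_path."""
    init_image = Image.open(image_path).convert("RGB").resize((512, 512))
    result = pipe(
        prompt="avatar face portrait",  # assumed default prompt
        image=init_image,
    )
    avatar_face = result.images[0]  # PIL.Image, as documented above
    return avatar_face
```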
- Using Postman:
  - Base URL: https://example.com/api/
  - Request: POST /avatar-face
  - Content-Type: image/png or image/jpeg
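The Postman request above can also be scripted; a rough Python `requests` equivalent, assuming the service accepts a multipart upload under an `image` field and returns the generated PNG bytes directly (both the field name and the response format are assumptions):

```python
import requests

BASE_URL = "https://example.com/api"


def request_avatar_face(image_path, out_path="avatar_face.png"):
    """POST an input image to the /avatar-face endpoint described above."""
    with open(image_path, "rb") as f:
        response = requests.post(
            f"{BASE_URL}/avatar-face",
            files={"image": ("input.png", f, "image/png")},  # field name is an assumption
            timeout=120,
        )
    response.raise_for_status()
    with open(out_path, "wb") as out:
        out.write(response.content)  # assumes raw PNG bytes in the response body
    return out_path
```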
- def generate_avatar_face(prompt, negative_prompt, num_samples, num_inference_steps, guidance_scale, strength, image_url):
  - Parameters:
    - image_url (str): The URL (or local path) of the input image. [Mandatory]
    - prompt (str): Description of the desired image. [Optional]
    - negative_prompt (str): Description of what should not appear in the image. [Optional]
    - num_samples (int): Number of output images to produce. [Optional]
    - num_inference_steps (int): Number of steps used to process the image. [Optional]
    - guidance_scale (float): Controls how closely the model follows the prompt. [Optional]
    - strength (float): The percentage of noise added to the original image. [Optional]
  - Returns:
    - avatar_face (PNG image): The generated avatar face.
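A sketch of the extended signature under the same assumption (a diffusers img2img pipeline); the placeholder model ID and the default values shown are illustrative, not the project's actual settings:

```python
import io

import requests
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Placeholder model ID -- swap in the custom avatar model's Hugging Face repo.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")


def generate_avatar_face(
    prompt="avatar face portrait",      # assumed defaults for the optional parameters
    negative_prompt="blurry, deformed",
    num_samples=1,
    num_inference_steps=50,
    guidance_scale=7.5,
    strength=0.75,
    image_url=None,                     # mandatory; kept last to match the signature above
):
    """Generate avatar faces from the image at image_url."""
    # Download the mandatory input image and normalise it for the pipeline.
    raw = requests.get(image_url, timeout=60).content
    init_image = Image.open(io.BytesIO(raw)).convert("RGB").resize((512, 512))

    result = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        image=init_image,
        num_images_per_prompt=num_samples,
        num_inference_steps=num_inference_steps,
        guidance_scale=guidance_scale,
        strength=strength,  # how much noise is added to the original image
    )
    avatar_face = result.images[0]       # only the first sample is returned here
    avatar_face.save("avatar_face.png")  # saved as a PNG, as documented above
    return avatar_face
```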
- Using Gradio:
  - Gradio takes care of:
    1. Deployment on Hugging Face
    2. API testing
    3. Front-end web UI
  - So there is no need for Postman testing (see the Gradio sketch below).
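A minimal Gradio wrapper around the function documented above could look like the following; the `avatar` module name is an assumption about where that function lives, and hosting this app on a Hugging Face Space is what provides the deployment, web UI, and auto-generated API:

```python
import gradio as gr

# The module name is an assumption -- import generate_avatar_face from wherever
# the project defines the function documented above.
from avatar import generate_avatar_face

demo = gr.Interface(
    fn=generate_avatar_face,
    inputs=[
        gr.Textbox(label="prompt"),
        gr.Textbox(label="negative_prompt"),
        gr.Number(label="num_samples", value=1, precision=0),
        gr.Number(label="num_inference_steps", value=50, precision=0),
        gr.Number(label="guidance_scale", value=7.5),
        gr.Number(label="strength", value=0.75),
        gr.Textbox(label="image_url"),
    ],
    outputs=gr.Image(label="avatar_face"),
    title="Avatar Face Generator",
)

if __name__ == "__main__":
    # On a Hugging Face Space this launch call serves the web UI and the
    # auto-generated API, so no separate Postman testing is needed.
    demo.launch()
```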
- Stable-Diffusion-webUI using [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui.git)
  - A very powerful and efficient Stable Diffusion platform
  - ControlNets, multi-ControlNets, the OpenPose editor, and many other extra features can be used with Stable Diffusion through this UI
  - Requires a large amount of free disk space (about 70 GB) and 8 GB of VRAM (standard) on NVIDIA GPUs.
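If the webUI is started with its `--api` flag, the same img2img flow can be driven from Python; the endpoint and payload fields below are based on the webUI's `/sdapi/v1` API and may differ between versions, so treat this as a sketch:

```python
import base64

import requests

WEBUI_URL = "http://127.0.0.1:7860"  # default local address when launched with --api


def avatar_via_webui(image_path, prompt="avatar face portrait"):
    """Send an img2img request to a locally running stable-diffusion-webui."""
    with open(image_path, "rb") as f:
        init_b64 = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "init_images": [init_b64],   # base64-encoded input image
        "prompt": prompt,
        "denoising_strength": 0.75,
        "steps": 50,
        "cfg_scale": 7.5,
    }
    r = requests.post(f"{WEBUI_URL}/sdapi/v1/img2img", json=payload, timeout=300)
    r.raise_for_status()
    out_b64 = r.json()["images"][0]  # results are returned base64-encoded
    with open("avatar_face.png", "wb") as out:
        out.write(base64.b64decode(out_b64))
    return "avatar_face.png"
```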