Inputs:
- Prompt
- Steps
- Scheduler
- Seed
- Input Image
- Input Mask
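The Input Mask tells the model which part of the Input Image to regenerate. A minimal sketch of preparing these two inputs with Pillow is shown below, assuming the common convention that white pixels mark the region to be inpainted and black pixels are kept unchanged (the placeholder image, file names, and rectangle coordinates are illustrative, not part of this model's API):

```python
from PIL import Image, ImageDraw

WIDTH, HEIGHT = 512, 512

# Hypothetical input image: a solid-gray placeholder standing in for a real photo.
image = Image.new("RGB", (WIDTH, HEIGHT), color=(128, 128, 128))

# Mask: start all black (keep every pixel of the input image) ...
mask = Image.new("L", (WIDTH, HEIGHT), color=0)
draw = ImageDraw.Draw(mask)
# ... then paint a white rectangle over the area the model should fill in.
draw.rectangle([128, 128, 384, 384], fill=255)

# These files would then be passed as Input Image and Input Mask.
image.save("input.png")
mask.save("mask.png")
```

Both images should have the same dimensions; the model only generates new content where the mask is white.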
Stable Diffusion Inpainting is a text-to-image diffusion model that creates realistic images and art from natural-language descriptions. In addition to generating photorealistic images from a text prompt, it can fill in missing or masked parts of an existing image.
These weights are based on Stable Diffusion v1.5.
Stable Diffusion Inpainting can be used in a variety of applications, such as object removal, image restoration, and creative photo editing.
For more detailed instructions, refer to the API documentation and the resources available on GitHub: https://github.com/CompVis/stable-diffusion