Stable Diffusion Inpainting

Stable Diffusion Inpainting is a text-to-image diffusion model that can create realistic images and art from a description in natural language. In addition to generating photorealistic images from any given text input, it can fill in missing parts of an existing image by using a mask.

Weights

These weights were trained on top of Stable Diffusion v1.5.

Features

  • Mask control: use a mask to render something entirely new in any part of an existing image.
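
Conceptually, the mask determines which pixels get repainted: the final image takes newly generated content where the mask is white and keeps the original image everywhere else. A minimal NumPy sketch of this compositing step (the array names and the tiny 2x2 example are illustrative, not part of the model's actual implementation):

```python
import numpy as np

def composite(original: np.ndarray, generated: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blend generated content into the original image.

    mask values lie in [0, 1]: 1 marks pixels to repaint, 0 keeps the original.
    """
    mask = mask.astype(np.float32)
    blended = mask * generated + (1.0 - mask) * original
    return blended.astype(original.dtype)

# Tiny 2x2 grayscale example: repaint only the top-left pixel.
original = np.array([[10, 20], [30, 40]], dtype=np.uint8)
generated = np.full((2, 2), 255, dtype=np.uint8)   # stand-in for model output
mask = np.array([[1, 0], [0, 0]], dtype=np.uint8)  # 1 = repaint this pixel

print(composite(original, generated, mask))
# top-left pixel is replaced; the other three keep their original values
```

In the real model this blending happens in latent space during denoising rather than directly on pixels, but the role of the mask is the same: it restricts generation to the selected region.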

Applications

Stable Diffusion Inpainting can be used in various applications, such as:

  • Art generation
  • Scene creation
  • Animation and game development
  • Character creation
  • Product design

Getting Started

For more detailed instructions, refer to the API documentation and the resources available on GitHub.

GitHub

https://github.com/CompVis/stable-diffusion

License

Apache License 2.0