ControlNet Depth

ControlNet is a neural network architecture that adds support for additional input conditions to pretrained large diffusion models. ControlNet Depth enables conditioning on high-resolution depth maps to control large diffusion models and facilitate related applications.

Weights

These weights were trained on Stable Diffusion 1.5.

Features

  • High-resolution depth maps: This model receives the full 512×512 depth map rather than a 64×64 one, preserving more detail than Stability's SD2 depth model, which uses 64×64 depth maps.
  • Pose estimation: Depth maps can be used to transfer poses, providing precise pose estimation.
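The high-resolution conditioning described above implies a preprocessing step: a raw depth array must be normalized and brought to the 512×512 resolution the model consumes. The sketch below is illustrative only; `prepare_depth_condition` is a hypothetical helper name, not part of the ControlNet codebase, and the 3-channel output layout is an assumption based on how conditioning images are commonly passed.

```python
import numpy as np

def prepare_depth_condition(depth: np.ndarray, size: int = 512) -> np.ndarray:
    """Hypothetical helper: normalize a raw depth array to uint8 and
    upsample it (nearest neighbor) to the size x size resolution that
    the model conditions on (512x512 here)."""
    d = depth.astype(np.float32)
    # Scale depth values to [0, 1]; guard against a constant map.
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-8)
    u8 = (d * 255).astype(np.uint8)
    # Nearest-neighbor upsampling via index mapping.
    rows = np.arange(size) * u8.shape[0] // size
    cols = np.arange(size) * u8.shape[1] // size
    hi = u8[rows][:, cols]
    # Replicate the single depth channel into 3 channels, the usual
    # layout for conditioning images.
    return np.stack([hi] * 3, axis=-1)

# Example: a synthetic 96x128 depth map upsampled to 512x512.
cond = prepare_depth_condition(np.random.rand(96, 128))
print(cond.shape, cond.dtype)  # (512, 512, 3) uint8
```

In practice a depth map would come from a monocular depth estimator (e.g. MiDaS) rather than random data; the key point is that the full 512×512 map is fed to the model, not a downsampled 64×64 version.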

Applications

ControlNet Depth can be utilized in various applications, such as:

  • Art generation
  • Scene creation
  • Human-computer interaction (HCI)
  • Virtual and augmented reality
  • Animation and game development

Getting Started

For more detailed instructions, refer to the API documentation and resources available on GitHub.

GitHub

https://github.com/lllyasviel/ControlNet

License

Apache License 2.0