Models
Here are some popular generative model APIs that you can use in your application.
Text to Image
Image to Image
Utility Functions
ControlNets
New Model

NewReality Lightning SDXL

NewReality Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

New Model

DreamShaper Lightning SDXL

DreamShaper Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

New Model

Colossus Lightning SDXL

Colossus Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

New Model

Samaritan Lightning SDXL

Samaritan Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

New Model

Realism Lightning SDXL

Realism Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

New Model

ProtoVision Lightning SDXL

ProtoVision Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

New Model

NightVis Lightning SDXL

NightVis Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

New Model

WildCard Lightning SDXL

WildCard Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

New Model

Dynavis Lightning SDXL

Dynavis Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

New Model

Juggernaut Lightning SDXL

Juggernaut Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

New Model

Realvis Lightning SDXL

Realvis Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

New Model

Segmind Vega

The Segmind-Vega model is a distilled version of Stable Diffusion XL (SDXL), offering a 70% reduction in size and a 100% speedup while retaining high-quality text-to-image generation capabilities.

New Model

Segmind VegaRT

Segmind-VegaRT is a distilled consistency adapter for Segmind-Vega that reduces the number of inference steps to between 2 and 8.
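
As a rough illustration of how a few-step Vega setup might be run locally, the sketch below uses the Hugging Face diffusers library to load Segmind-Vega and apply the VegaRT adapter. The repository IDs, the LoRA-style loading step, and the step and guidance settings are assumptions based on typical consistency-distilled workflows, not Segmind's documented API; check the model cards before relying on them.

```python
# A minimal sketch, assuming Segmind-Vega and Segmind-VegaRT are available
# as Hugging Face repos and that VegaRT loads as LoRA weights.
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "segmind/Segmind-Vega",          # assumed base-model repo ID
    torch_dtype=torch.float16,
).to("cuda")

# VegaRT is a consistency-distilled adapter, so swap in the LCM scheduler
# and load the adapter as LoRA weights on top of the base pipeline.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("segmind/Segmind-VegaRT")  # assumed adapter repo ID
pipe.fuse_lora()

# Consistency-distilled models typically need very few steps and little
# to no classifier-free guidance.
image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("vega_rt.png")
```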

Background Removal V2

This model removes the background from any image.

InstantID

InstantID aims to generate customized images with various poses or styles from only a single reference ID image while ensuring high fidelity.

Controlnet Inpainting

This model is capable of generating photo-realistic images from any text input, with the extra capability of inpainting and controlling the output using a mask.

Samaritan 3D XL

Samaritan 3D XL leverages the robust capabilities of the SDXL framework, ensuring high-quality, detailed 3D character renderings.

IP Adapter Openpose XL

IP Adapter XL Openpose is built on the SDXL framework. This model integrates the IP Adapter and Openpose preprocessor to offer unparalleled control and guidance in creating context-rich images.

IP Adapter Canny XL

IP Adapter XL Canny is built on the SDXL framework. This model integrates the IP Adapter and Canny edge preprocessor to offer unparalleled control and guidance in creating context-rich images.

IP Adapter Depth XL

IP Adapter Depth XL is built on the SDXL framework. This model integrates the IP Adapter and Depth preprocessor to offer unparalleled control and guidance in creating context-rich images.

SDXL Inpaint

This model is capable of generating photo-realistic images from any text input, with the extra capability of inpainting the image using a mask.

SSD Img2Img

This model uses SSD-1B to generate images by passing a text prompt and an initial image to condition the generation.

SDXL Openpose

This model leverages SDXL to generate images with ControlNet conditioned on Human Pose Estimation.

SSD Depth

This model leverages SSD-1B to generate images with ControlNet conditioned on Depth Estimation.

SSD Canny

This model leverages SSD-1B to generate images with ControlNet conditioned on Canny edges.

SSD 1B

The Segmind Stable Diffusion Model (SSD-1B) is a distilled version of Stable Diffusion XL (SDXL) that is 50% smaller, offering a 60% speedup while maintaining high-quality text-to-image generation capabilities. It has been trained on diverse datasets, including Grit and Midjourney scrape data, to enhance its ability to create a wide range of visual content from textual prompts.
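
As a point of reference, a plain text-to-image call with SSD-1B might look like the sketch below, which assumes the weights load with the standard SDXL pipeline in diffusers; the repository ID and fp16 variant are assumptions, so verify them against the model card.

```python
# Illustrative only: SSD-1B used as a drop-in SDXL-style text-to-image model.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "segmind/SSD-1B",        # assumed repo ID
    torch_dtype=torch.float16,
    variant="fp16",          # assumed fp16 weights are published
).to("cuda")

image = pipe(
    prompt="an astronaut riding a horse, studio lighting, highly detailed",
    negative_prompt="blurry, low quality, extra limbs",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("ssd_1b.png")
```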

Copax Timeless SDXL

The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.

Zavychroma SDXL

The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.

Realvis SDXL

The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.

Dreamshaper SDXL

The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.

Archived

Stable Diffusion 2.1

Stable Diffusion is a type of latent diffusion model that can generate images from text. It was created by a team of researchers and engineers from CompVis, Stability AI, and LAION. Stable Diffusion v2 is a specific version of the model architecture. It utilizes a downsampling-factor 8 autoencoder with an 865M UNet and OpenCLIP ViT-H/14 text encoder for the diffusion model. When using the SD 2-v model, it produces 768x768 px images. It uses the penultimate text embeddings from a CLIP ViT-H/14 text encoder to condition the generation process.
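
For local experimentation, the 768px SD 2.x checkpoint described above is commonly loaded with diffusers roughly as in the sketch below; the repository ID and the step count are assumptions rather than Segmind-specific guidance.

```python
# A short sketch, assuming the SD 2.1 weights live at
# "stabilityai/stable-diffusion-2-1" on the Hugging Face Hub.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",   # assumed repo ID
    torch_dtype=torch.float16,
).to("cuda")

# The SD 2-v checkpoint is trained for 768x768 output, so request that size.
image = pipe(
    "a photograph of an antique globe on a wooden desk",
    height=768,
    width=768,
    num_inference_steps=30,
).images[0]
image.save("sd21.png")
```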

Archived

Stable Diffusion XL 0.9

The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.

Word2img

Create beautifully designed word art with Segmind's word-to-image model for your marketing purposes.

Archived

Segmind Tiny SD

Convert text into images with the latest distilled Stable Diffusion model.

Stable Diffusion Inpainting

Stable Diffusion Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images from any text input, with the extra capability of inpainting the image using a mask.

Stable Diffusion img2img

This model uses the diffusion-denoising mechanism first proposed by SDEdit to perform text-guided image-to-image translation. It uses Stable Diffusion weights to generate new images from an input image via the StableDiffusionImg2ImgPipeline from diffusers.
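
A minimal sketch of that img2img flow with the StableDiffusionImg2ImgPipeline is shown below; the checkpoint ID, input file, and strength value are placeholders chosen for illustration.

```python
# Text-guided image-to-image translation via diffusers, as described above.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed SD 1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("sketch.png").convert("RGB").resize((512, 512))

# strength controls how far the output is allowed to drift from the input.
image = pipe(
    prompt="a detailed oil painting of a mountain village",
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
image.save("img2img.png")
```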

Archived

Segmind Small SD

Create realistic portrait images using the fine-tuned Segmind Tiny SD model. Through its serverless APIs, Segmind offers the fastest deployment for Tiny Stable Diffusion inference.

Stable Diffusion XL 1.0

The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.

Archived

Scifi

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Archived

Samaritan

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Archived

RPG

This model corresponds to the Stable Diffusion RPG checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Reliberate

This model corresponds to the Stable Diffusion Reliberate checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Realistic Vision

This model corresponds to the Stable Diffusion Realistic Vision checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Archived

RCNZ Cartoon

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Archived

Paragon

This model corresponds to the Stable Diffusion Paragon checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

SD Outpainting

Stable Diffusion Outpainting can extend any image in any direction.

Archived

Manmarumix

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Archived

Majicmix

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Juggernaut Final

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Archived

Fruit Fusion

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Archived

Flat 2d

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Archived

Fantassified Icons

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Epic Realism

This model corresponds to the Stable Diffusion Epic Realism checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Edge of Realism

This model corresponds to the Stable Diffusion Edge of Realism checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Archived

DvArch

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Archived

Dream Shaper

Dreamshaper excels in delivering high-quality, detailed images. It is fine-tuned to understand and interpret a diverse range of artistic styles and subjects.

Archived

Cartoon

This model corresponds to the Stable Diffusion Disney checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Archived

Deep Spaced Diffusion

A versatile photorealistic model that blends multiple models to achieve amazingly realistic space-themed images.

Cyber Realistic

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Archived

Cute Rich Style

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Archived

Colorful

This model corresponds to the Stable Diffusion Colorful checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Archived

All in one pixel

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Archived

526mix

A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.

Archived

QR Generator

Create beautiful and creative QR codes for your marketing campaigns.

Archived

Segmind Tiny SD (Portrait)

Convert text to images with Small-SD, the distilled Stable Diffusion model by Segmind. Through its serverless APIs, Segmind offers the fastest deployment for Small Stable Diffusion inference.

Kandinsky 2.2

Kandinsky inherits best practices from DALL-E 2 and latent diffusion, while introducing some new ideas.

Archived

Kandinsky 2.1

Kandinsky inherits best practices from DALL-E 2 and latent diffusion, while introducing some new ideas.

ControlNet Soft Edge

This model corresponds to the ControlNet conditioned on Soft Edge.

ControlNet Scribble

This model corresponds to the ControlNet conditioned on Scribble images.

ControlNet Depth

This model corresponds to the ControlNet conditioned on Depth estimation.

ControlNet Canny

This model corresponds to the ControlNet conditioned on Canny edges.

Codeformer

CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.

Segment Anything Model

The Segment Anything Model (SAM) produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image.
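
Prompting SAM with a point or box looks roughly like the sketch below, which uses the open-source segment-anything package; the checkpoint path and the point coordinates are placeholders.

```python
# A rough sketch of point-prompted segmentation with SAM.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Placeholder checkpoint path; the ViT-H weights are downloaded separately.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground point (x, y); SAM returns several candidate masks.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[400, 300]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]
```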

Faceswap

Take a picture or GIF and replace the face in it with a face of your choice. You only need one image of the desired face; no dataset, no training.

Archived

Revanimated

This model corresponds to the Stable Diffusion Revanimated checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Background Removal

This model removes the background from any image.

ESRGAN

AI-powered image super-resolution, upscaling, and enhancement that produces stunning, high-quality results.

ControlNet Openpose

This model corresponds to the ControlNet conditioned on Human Pose Estimation.

Browse open-source models on Segmind
Use this page to browse open-source generative models and grab their APIs to build them into your own app. Segmind provides simple serverless APIs for generative models, so developers can build apps powered by generative models without the hassle of managing their own infrastructure. Segmind models are accelerated by voltaML, giving you the fastest models for production use cases. Top models on Segmind include Stable Diffusion 2.1, SDXL, and ControlNet Openpose.
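
As a sketch of what calling one of these hosted models over HTTP might look like, the snippet below posts a text prompt to an endpoint and saves the returned image. The endpoint URL, header name, payload fields, and response format are illustrative placeholders, not Segmind's documented contract; consult the API page of the specific model for the real parameters.

```python
# Hypothetical serverless text-to-image call; all endpoint details are assumed.
import os
import requests

API_KEY = os.environ["SEGMIND_API_KEY"]              # assumed auth scheme
URL = "https://api.segmind.com/v1/sdxl1.0-txt2img"   # placeholder endpoint

payload = {
    "prompt": "a cozy cabin in a snowy forest, golden hour",
    "negative_prompt": "blurry, low quality",
    "samples": 1,
    "num_inference_steps": 25,
}

resp = requests.post(URL, json=payload, headers={"x-api-key": API_KEY}, timeout=120)
resp.raise_for_status()

# Assuming the endpoint returns raw image bytes; adjust if it returns JSON.
with open("output.png", "wb") as f:
    f.write(resp.content)
```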