SSD 1B
The Segmind Stable Diffusion Model (SSD-1B) is a distilled, 50% smaller version of Stable Diffusion XL (SDXL), offering a 60% speedup while maintaining high-quality text-to-image generation. It has been trained on diverse datasets, including Grit and Midjourney scrape data, to enhance its ability to create a wide range of visual content from textual prompts.
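For local experimentation, SSD-1B can be loaded with the Hugging Face diffusers library. A minimal sketch follows, assuming the publicly available segmind/SSD-1B weights; the prompt and filenames are illustrative, not part of Segmind's hosted API.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# SSD-1B shares the SDXL architecture, so the standard SDXL pipeline loads it.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "segmind/SSD-1B", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
).to("cuda")

prompt = "an astronaut riding a green horse"
negative_prompt = "ugly, blurry, poor quality"  # optional negative prompt

image = pipe(prompt=prompt, negative_prompt=negative_prompt).images[0]
image.save("ssd_1b.png")
```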
SDXL Inpaint
This model generates photo-realistic images from any text input and can additionally inpaint images using a mask.
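A minimal sketch of mask-based SDXL inpainting with diffusers, assuming one publicly available SDXL inpainting checkpoint; the checkpoint id, filenames, and prompt are placeholders rather than Segmind's own implementation.

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

# One public SDXL inpainting checkpoint; swap in the checkpoint you actually use.
pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

init_image = load_image("input.png").resize((1024, 1024))
mask_image = load_image("mask.png").resize((1024, 1024))  # white = region to repaint

image = pipe(
    prompt="a red vintage car parked on the street",
    image=init_image,
    mask_image=mask_image,
    strength=0.85,  # how strongly the masked region is repainted
).images[0]
image.save("sdxl_inpaint.png")
```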
SSD Img2Img
This model uses SSD-1B to generate images conditioned on both a text prompt and an initial image.
SDXL Openpose
This model leverages SDXL to generate images with ControlNet conditioned on Human Pose Estimation.
SSD Depth
This model leverages SSD-1B to generate images with ControlNet conditioned on Depth Estimation.
SSD Canny
This model leverages SSD-1B to generate images with ControlNet conditioned on Canny edge images.
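The three ControlNet-conditioned entries above share the same workflow: derive a conditioning image (a pose map, depth map, or edge map) and pass it alongside the text prompt. A minimal Canny sketch with diffusers follows, assuming the public SDXL base and SDXL Canny ControlNet weights; an SSD-1B checkpoint could be substituted as the base where a compatible ControlNet exists.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# SDXL Canny ControlNet; the same pattern applies to depth and openpose ControlNets.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # or a compatible SSD-1B checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Build the Canny edge map that conditions the generation.
reference = np.array(load_image("reference.png"))
edges = cv2.Canny(reference, 100, 200)
edges = np.stack([edges] * 3, axis=-1)  # 1-channel edge map -> 3-channel image
canny_image = Image.fromarray(edges)

result = pipe(
    prompt="a futuristic sports car, studio lighting",
    image=canny_image,
    controlnet_conditioning_scale=0.5,  # how strongly the edges constrain the output
).images[0]
result.save("controlnet_canny.png")
```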
lora dog-SSD-1B
LoRA Dog SSD-1B specializes in generating photorealistic images of dogs.
lego minifig-xl
LEGO Minifig XL is designed to generate LEGO images and excels in creating detailed and accurate representations of LEGO minifigures and items.
sdxl kream-model-lora
Fine-tuned on a custom dataset from KREAM, Korea's premier online resell market, this model is a game-changer in fashion design and visualization.
crayon style_lora_sdxl
Crayon Style - SDXL LoRA is a unique model designed to convert any text prompt into a vibrant, crayon-style drawing.
cyborg style_xl
Cyborg Style SDXL specializes in generating cyborg-themed artwork based on science fiction and futuristic aesthetics.
sdxl lora-lower-decks-aesthetic
The SDXL LoRA Lower Decks Aesthetic model, inspired by the unique style of “Star Trek: Lower Decks,” generates artwork in the distinctive animation style of the beloved series.
sdxl wrong-lora
SDXL Wrong LoRA is engineered with a focus on delivering images of higher detail, color saturation and vibrance, bringing images to life with stunning clarity.
dog example-sdxl-lora
Dog Example SDXL LoRA is a specialized model within the Stable Diffusion XL framework, uniquely trained to enhance canine imagery.
SDXL StickerSheet-Lora
SDXL StickerSheet LoRA is expertly fine-tuned on a comprehensive collection of sticker images, enabling it to produce a wide variety of sticker designs.
stained glass-style-sdxl
Stained Glass Style SDXL is trained extensively on diverse stained glass images and can replicate the essence of stained glass in digital artworks.
punk collage
Punk Collage Model offers a unique way to create digital collages that resonate with the punk culture's raw energy and subversive charm.
sdxl ugly-sonic-lora
SDXL Ugly Sonic LoRA excels at generating a quirky, iconic version of one of the most beloved movie characters: Sonic the Hedgehog.
blacklight makeup-sdxl-lora
Blacklight Makeup SDXL LoRA is fine-tuned to generate makeup designs that are not only visually striking but also perfectly suited for blacklight environments.
ClayAnimationRedmond
Clay Animation Redmond, based on SDXL 1.0, excels at creating mesmerizing clay animation images with unparalleled ease and precision.
sdxl lora-index-modern-luxury-1
ikea instructions-lora-sdxl
Ikea Instructions LoRA SDXL model is fine-tuned on IKEA diagrams and specializes in generating clear, concise, and easy-to-follow visual instructions.
Copax Timeless SDXL
Copax Timeless SDXL is a fine-tuned checkpoint built on the Stable Diffusion XL base model, aimed at versatile, high-quality image generation.
Zavychroma SDXL
Zavychroma SDXL is a fine-tuned checkpoint built on the Stable Diffusion XL base model, aimed at versatile, high-quality image generation.
Realvis SDXL
Realvis SDXL is a fine-tuned checkpoint built on the Stable Diffusion XL base model, focused on photorealistic image generation.
Dreamshaper SDXL
Dreamshaper SDXL brings the popular Dreamshaper checkpoint to the Stable Diffusion XL base, delivering high-quality, detailed images across a diverse range of artistic styles and subjects.
Stable Diffusion 2.1
Stable Diffusion is a type of latent diffusion model that can generate images from text. It was created by a team of researchers and engineers from CompVis, Stability AI, and LAION. Stable Diffusion v2 is a specific version of the model architecture. It uses a downsampling-factor-8 autoencoder with an 865M-parameter UNet and an OpenCLIP ViT-H/14 text encoder for the diffusion model. The SD 2-v variant produces 768x768 px images and conditions the generation process on the penultimate text embeddings from the OpenCLIP ViT-H/14 text encoder.
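A minimal sketch of running Stable Diffusion 2.1 locally with diffusers at its native 768x768 resolution; the prompt, step count, and scheduler choice are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# The 768-v checkpoint of Stable Diffusion 2.1.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe(
    "a photograph of an astronaut riding a horse",
    height=768, width=768,     # the 2-v model is trained for 768x768 outputs
    num_inference_steps=25,
).images[0]
image.save("sd21.png")
```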
Stable Diffusion XL 0.9
Stable Diffusion XL 0.9 is the research-preview release that preceded SDXL 1.0, Stability AI's official upgrade to the v1.5 model, producing more detailed, higher-resolution images.
Word2img
Create beautifully designed words for your marketing purposes using Segmind's word-to-image model.
Segmind Tiny SD
Convert text into images with the latest distilled Stable Diffusion model.
Stable Diffusion Inpainting
Stable Diffusion Inpainting is a latent text-to-image diffusion model that generates photo-realistic images from any text input and can additionally inpaint images using a mask.
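A minimal sketch of mask-based inpainting with diffusers, assuming the public stabilityai/stable-diffusion-2-inpainting checkpoint; filenames and the prompt are placeholders.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("photo.png").resize((512, 512))
mask_image = load_image("mask.png").resize((512, 512))  # white pixels are repainted

image = pipe(
    prompt="a wooden bench in a park",
    image=init_image,
    mask_image=mask_image,
).images[0]
image.save("inpainted.png")
```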
Stable Diffusion img2img
This model uses the diffusion-denoising mechanism first proposed by SDEdit to perform text-guided image-to-image translation. It uses the weights from Stable Diffusion to generate new images from an input image via the StableDiffusionImg2ImgPipeline from diffusers.
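Since the description names the diffusers StableDiffusionImg2ImgPipeline, here is a minimal sketch of that flow; the checkpoint id, input image, and strength value are illustrative.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

# Any Stable Diffusion 1.x checkpoint works here.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("sketch.png").resize((768, 512))

image = pipe(
    prompt="a fantasy landscape, detailed matte painting",
    image=init_image,
    strength=0.75,       # 0 keeps the input unchanged, 1 nearly ignores it
    guidance_scale=7.5,
).images[0]
image.save("img2img.png")
```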
Segmind Small SD
Convert text into images with Small-SD, the distilled Stable Diffusion model by Segmind. With its serverless APIs, Segmind offers the fastest deployment for Small-Stable-Diffusion inference.
Stable Diffusion XL 1.0
The SDXL 1.0 model is Stability AI's official upgrade to the v1.5 model, producing more detailed, higher-resolution images. The model is released as open-source software.
Scifi
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
Samaritan
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
RPG
This model corresponds to the Stable Diffusion RPG checkpoint, which produces highly detailed images at the cost of requiring a very detailed prompt.
Reliberate
This model corresponds to the Stable Diffusion Reliberate checkpoint, which produces highly detailed images at the cost of requiring a very detailed prompt.
Realistic Vision
This model corresponds to the Stable Diffusion Realistic Vision checkpoint, which produces highly detailed images at the cost of requiring a very detailed prompt.
RCNZ Cartoon
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
Paragon
This model corresponds to the Stable Diffusion Paragon checkpoint, which produces highly detailed images at the cost of requiring a very detailed prompt.
SD Outpainting
Stable Diffusion Outpainting can extend any image in any direction.
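Segmind's outpainting implementation isn't detailed here; one common open approach is to paste the image onto a larger canvas, mask the new border region, and run an inpainting pipeline over it. A minimal sketch under that assumption, with the checkpoint, sizes, and filenames as placeholders.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

src = Image.open("photo.png").convert("RGB").resize((512, 512))
pad = 128  # extend 128 px to the right

# Larger canvas with the original pasted on the left.
canvas = Image.new("RGB", (512 + pad, 512), "black")
canvas.paste(src, (0, 0))

# Mask: white where new content should be generated (the padded strip).
mask = Image.new("L", canvas.size, 0)
mask.paste(255, (512, 0, 512 + pad, 512))

out = pipe(
    prompt="the scene continues naturally to the right",
    image=canvas,
    mask_image=mask,
    height=512, width=512 + pad,  # keep the pipeline at the canvas resolution
).images[0]
out.save("outpainted.png")
```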
Manmarumix
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
Majicmix
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
Juggernaut Final
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
Icbinp
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
Fruit Fusion
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
Flat 2d
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
Fantassified Icons
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
Epic Realism
This model corresponds to the Stable Diffusion Epic Realism checkpoint, which produces highly detailed images at the cost of requiring a very detailed prompt.
Edge of Realism
This model corresponds to the Stable Diffusion Edge of Realism checkpoint, which produces highly detailed images at the cost of requiring a very detailed prompt.
DvArch
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
Dream Shaper
Dreamshaper excels in delivering high-quality, detailed images. It is fine-tuned to understand and interpret a diverse range of artistic styles and subjects.
Cartoon
This model corresponds to the Stable Diffusion Disney checkpoint, which produces highly detailed images at the cost of requiring a very detailed prompt.
Deep Spaced Diffusion
A versatile photorealistic model that blends multiple models to achieve amazingly realistic, space-themed images.
Cyber Realistic
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
Cute Rich Style
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
Colorful
This model corresponds to the Stable Diffusion Colorful checkpoint, which produces highly detailed images at the cost of requiring a very detailed prompt.
All in one pixel
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
526mix
A versatile photorealistic model that blends multiple models to achieve amazingly realistic images.
QR Generator
Create beautiful and creative QR codes for your marketing campaigns.
Segmind Tiny SD (Portrait)
Create realistic portrait images using the fine-tuned Segmind Tiny SD (Portrait) model. With its serverless APIs, Segmind offers the fastest deployment for Tiny-Stable-Diffusion inference.
Kandinsky 2.2
Kandinsky inherits best practices from DALL-E 2 and latent diffusion while introducing some new ideas.
Kandinsky 2.1
Kandinsky inherits best practices from DALL-E 2 and latent diffusion while introducing some new ideas.
ControlNet Soft Edge
This model corresponds to the ControlNet conditioned on Soft Edge images.
ControlNet Scribble
This model corresponds to the ControlNet conditioned on Scribble images.
ControlNet Depth
This model corresponds to the ControlNet conditioned on Depth estimation.
ControlNet Canny
This model corresponds to the ControlNet conditioned on Canny edges.
Codeformer
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.
Segment Anything Model
The Segment Anything Model (SAM) produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image.
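A minimal sketch of prompting SAM with a single point using the official segment_anything package, run locally and independently of Segmind's hosted API; the checkpoint path, image, and point coordinates are placeholders.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a SAM checkpoint (downloaded separately from the official repository).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt SAM with a single foreground point; boxes work similarly via the `box` argument.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),   # 1 = foreground point
    multimask_output=True,        # return several candidate masks
)
best_mask = masks[np.argmax(scores)]
```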
Faceswap
Take a picture/gif and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.
Revanimated
This model corresponds to the Stable Diffusion Revanimated checkpoint, which produces highly detailed images at the cost of requiring a very detailed prompt.
Background Removal
This model removes the background from any image.
ESRGAN
AI-powered image super-resolution, upscaling, and enhancement, producing stunning, high-quality results.
ControlNet Openpose
This model corresponds to the ControlNet conditioned on Human Pose Estimation.