Latest Models

Motion Control SVD

Motion Control SVD is an innovative deep learning framework that breathes life into static images. By intelligently managing both camera and object motion, it empowers creators to achieve precise animation effects.

Live Portrait video to video

Experience the magic of Live Portrait’s Video-to-Video Model! Seamlessly transfer expressions and motion from a driving video onto your own footage.

Image Superimpose V2

Superimpose V2 elevates image editing! Seamlessly layer images with background removal, precise positioning, and flexible resizing options. Explore 14 blending modes to create stunning effects.

Video Faceswap

Video Faceswap is a powerful tool for creators, filmmakers, and meme enthusiasts. With this innovative technology, you can effortlessly replace faces in videos.

Aura Flow

The largest fully open-source flow-based generation model capable of text-to-image generation.

Live Portrait

Live Portrait animates static images using a reference driving video through an implicit-keypoint-based framework, bringing a portrait to life with realistic expressions and movements. It identifies key points on the face (think eyes, nose, mouth) and manipulates them to create expressions and movements.

Dubbing

ElevenLabs Dubbing uses AI to translate your audio into multiple languages. Easily create multilingual versions of your content without studios or voice actors for each language.

Claude 3 Haiku

Claude 3 Haiku, the fastest and most cost-effective LLM from Anthropic, delivers instant responses and image analysis. Build interactive AI experiences that mimic human conversation. Perfect for various applications, from research to enterprise.
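
A minimal sketch of calling Claude 3 Haiku through the Anthropic Python SDK; the prompt is illustrative, and the client reads the ANTHROPIC_API_KEY environment variable:

```python
import anthropic

# Create a client (assumes ANTHROPIC_API_KEY is set in the environment).
client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-haiku-20240307",  # Claude 3 Haiku model ID
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}],
)

# The response content is a list of blocks; the first block holds the text.
print(message.content[0].text)
```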

Claude 3 Opus

Claude 3 Opus is an LLM pushing the limits of language understanding. It excels at complex tasks, generates human-quality text, and remembers vast amounts of information.

Gemini PRO

Gemini 1.5 Pro represents a significant leap in large language model technology, offering exceptional understanding and performance across different modalities and contexts.

Gemini Flash

Gemini 1.5 Flash is a game-changer for developers and enterprises seeking a speedy and cost-effective large language model with exceptional long-context understanding.

Claude 3.5 Sonnet

Claude 3.5 Sonnet represents a significant advancement in AI language models, combining speed, accuracy, and visual reasoning capabilities. It excels at understanding and completing requests thoughtfully, and does so much faster than previous versions. Additionally, it boasts a stronger vision model, allowing it to analyze visual data like charts and images with exceptional accuracy.

Kolors

Kolors is a cutting-edge text-to-image model that bridges language and visual art. Transform your textual ideas into photorealistic images with semantic precision.

Playground V2.5

Playground V2.5 is a diffusion-based text-to-image generative model, designed to create highly aesthetic images based on textual prompts.

Image Superimpose

The Superimpose model lets you create captivating visuals by seamlessly overlaying one image on top of another. It streamlines your image layering process, allowing you to bring your creative vision to life effortlessly.

SDXL Img2Img

SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image using the StableDiffusionImg2ImgPipeline from diffusers.
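
A minimal sketch of text-guided image-to-image with diffusers; note that the SDXL variant of the pipeline is StableDiffusionXLImg2ImgPipeline, and the checkpoint ID and file names here are assumptions:

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

# Load an SDXL img2img pipeline (checkpoint ID assumed; any SDXL checkpoint works).
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("input.png").resize((1024, 1024))

# `strength` controls how far the output may drift from the input image.
image = pipe(
    prompt="a watercolor painting of a mountain lake",
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
image.save("output.png")
```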

SDXL Controlnet

SDXL ControlNet gives unprecedented control over text-to-image generation. It introduces conditioning inputs, which provide additional information to guide the image generation process.
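
A minimal sketch of conditioning SDXL on a Canny edge map with diffusers; the repo IDs and input file are assumptions:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Load a Canny ControlNet for SDXL; other conditioning types work the same way.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The conditioning input (a Canny edge map) guides composition and structure.
canny_map = load_image("canny_edges.png")
image = pipe(
    prompt="a futuristic city at dusk",
    image=canny_map,
    controlnet_conditioning_scale=0.7,
).images[0]
image.save("controlnet_out.png")
```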

Story Diffusion

Story Diffusion turns your written narratives into stunning image sequences.

Elevenlabs Sound Generation

ElevenLabs' Sound Generation API provides a robust development tool for programmatically generating audio content using artificial intelligence. This API empowers developers and creators to integrate sound generation functionality into their applications and workflows.

Elevenlabs Speech To Speech

ElevenLabs Speech-to-Speech offers AI-powered voice conversion for content creators, media professionals, and anyone seeking to modify or translate audio speech.

Elevenlabs Text To Speech

ElevenLabs Text-to-Speech (TTS) harnesses the power of deep learning to create realistic and engaging synthetic speech from written text.
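
A minimal sketch of a raw HTTP call to the ElevenLabs text-to-speech endpoint; the voice ID, API key, and model ID below are placeholders:

```python
import requests

VOICE_ID = "your-voice-id"  # placeholder; pick a voice from your ElevenLabs account

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": "YOUR_API_KEY", "Content-Type": "application/json"},
    json={"text": "Hello from ElevenLabs!", "model_id": "eleven_multilingual_v2"},
)
resp.raise_for_status()

# The endpoint returns audio bytes (MP3 by default).
with open("speech.mp3", "wb") as f:
    f.write(resp.content)
```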

Omni Zero

Omni-Zero: A diffusion pipeline for zero-shot stylized portrait creation.

LLAVA 1.6 7B

LLaVA translates images into text descriptions and captions.

LLaVA 13B

LLaVA 13B is a vision-language model that accepts both image and text inputs.

Tooncrafter

Create videos from illustrated input images

V Express

V-Express lets you create portrait videos from single images.

SadTalker

Audio-based Lip Synchronization for Talking Head Video

Hallo

Hallo lets you create portrait videos from single images.

Relighting

Prompts to auto-magically relight your images.

Automatic Mask Generator

Automatic Mask Generator is a powerful tool that automates the creation of precise masks for inpainting.

Magic Eraser

LaMa Object Removal: AI Magic Eraser

Inpaint Mask Maker

Real-Time Open-Vocabulary Object Detection

Background Eraser

Background Eraser helps in flawless background removal with exceptional accuracy.

Clarity Upscaler

High-resolution creative image upscaler and enhancer. A free Magnific alternative.

Consistent Character

Create images of a given character in different poses

IDM VTON

Best-in-class virtual clothing try-on in the wild.

Stable Diffusion 3 Medium Text to Image

Stable Diffusion 3 Medium is a Multimodal Diffusion Transformer (MMDiT) text-to-image model from Stability AI. It offers greatly improved image quality, typography, complex-prompt understanding, and resource efficiency compared with earlier Stable Diffusion releases.

Fooocus

Fooocus enables high-quality image generation effortlessly, combining the best of Stable Diffusion and Midjourney.

IPAdapter Style Transfer

Style & Composition Transfer with Stable Diffusion IP Adapter

Profile Photo Style Transfer

Turn any image of a face into artwork using Stable Diffusion Controlnet and IPAdapter

illusion-diffusion-hq

Monster Labs QrCode ControlNet on top of SD Realistic Vision v5.1

PuLID

Novel tuning-free ID customization method for text-to-image generation.

Yamer's Realistic SDXL

Yamer's Realistic SDXL is built on the SDXL model, the official upgrade to v1.5, released as open-source software.

GPT 4 turbo

GPT-4 outperforms both previous large language models and as of 2023, most state-of-the-art systems (which often have benchmark-specific training or hand-engineering). On the MMLU benchmark, an English-language suite of multiple-choice questions covering 57 subjects, GPT-4 not only outperforms existing models by a considerable margin in English, but also demonstrates strong performance in other languages. Currently points to gpt-4-turbo-2024-04-09.

GPT 4o

GPT-4o (“o” for “omni”) is OpenAI's most advanced model. It is multimodal (accepting text or image inputs and outputting text), and it has the same high intelligence as GPT-4 Turbo but is much more efficient: it generates text 2x faster and is 50% cheaper. Additionally, GPT-4o has the best vision and performance across non-English languages of any OpenAI model. GPT-4o is available in the OpenAI API to paying customers.
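
A minimal sketch of a multimodal GPT-4o request with the OpenAI Python SDK; the image URL is a placeholder, and the client reads the OPENAI_API_KEY environment variable:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# GPT-4o accepts mixed text and image content in a single user message.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```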

GPT 4

GPT-4 outperforms both previous large language models and as of 2023, most state-of-the-art systems (which often have benchmark-specific training or hand-engineering). On the MMLU benchmark, an English-language suite of multiple-choice questions covering 57 subjects, GPT-4 not only outperforms existing models by a considerable margin in English, but also demonstrates strong performance in other languages.

Mixtral 8x7b

Mistral MoE 8x7B Instruct v0.1 model with Sparse Mixture of Experts, fine-tuned for instruction following.
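
A minimal sketch of running Mixtral 8x7B Instruct locally with transformers; device_map="auto" shards the model across available GPUs, and the prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format the conversation with the model's chat template.
messages = [{"role": "user", "content": "Explain mixture-of-experts routing in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```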

Mixtral 8x22b

Mistral MoE 8x22B Instruct v0.1 model with Sparse Mixture of Experts, fine-tuned for instruction following.

PuLID Lightning

Faster version of PuLID, a novel tuning-free face customization method for text-to-image generation

Fashion AI

This model is capable of editing clothing in an image using a premier clothing segmentation algorithm.

face-to-many

Turn a face into 3D, emoji, pixel art, video game, claymation or toy

face-to-sticker

Turn a face into a sticker

Llama 3 8b

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks.
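
A minimal sketch of chatting with the 8B instruct checkpoint via the transformers pipeline (recent transformers versions accept chat-style message lists directly; the checkpoint is gated on Hugging Face, so access must be granted first):

```python
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give three tips for writing clear commit messages."},
]
out = chat(messages, max_new_tokens=256)

# The pipeline returns the full conversation; the last message is the reply.
print(out[0]["generated_text"][-1]["content"])
```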

material-transfer

Transfer a material from an image to a subject

Llama 3 70b

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks.

Faceswap V2

Take a picture or GIF and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.

Insta Depth

InstantID aims to generate customized images with various poses or styles from only a single reference ID image while ensuring high fidelity.

Background Removal V2

This model removes the background from any image.

NewReality Lightning SDXL

NewReality Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

DreamShaper Lightning SDXL

DreamShaper Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

Colossus Lightning SDXL

Colossus Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

Samaritan Lightning SDXL

Samaritan Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

Realism Lightning SDXL

Realism Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

ProtoVision Lightning SDXL

ProtoVision Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

NightVis Lightning SDXL

NightVis Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

WildCard Lightning SDXL

WildCard Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

Dynavis Lightning SDXL

Dynavis Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

Juggernaut Lightning SDXL

Juggernaut Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

Realvis Lightning SDXL

Realvis Lightning SDXL is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps.

Try-On Diffusion

Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on

Background Replace

This model is capable of generating photo-realistic images given any text input, with the extra capability of inpainting pictures using a mask.

InstantID

InstantID aims to generate customized images with various poses or styles from only a single reference ID image while ensuring high fidelity.

Samaritan 3D XL

Samaritan 3D XL leverages the robust capabilities of the SDXL framework, ensuring high-quality, detailed 3D character renderings.

Stable Video Diffusion

Takes an image as input and returns a video.

Segmind-Vega

The Segmind-Vega Model is a distilled version of the Stable Diffusion XL (SDXL), offering a remarkable 70% reduction in size and an impressive 100% speedup while retaining high-quality text-to-image generation capabilities.

Segmind-VegaRT

Segmind-VegaRT is a distilled consistency adapter for Segmind-Vega that reduces the number of inference steps to between 2 and 8.
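
A sketch of the few-step recipe with diffusers, assuming the segmind/Segmind-Vega and segmind/Segmind-VegaRT Hugging Face repos: swap in the LCM scheduler, load VegaRT as LoRA weights, and sample with guidance disabled:

```python
import torch
from diffusers import LCMScheduler, StableDiffusionXLPipeline

# Segmind-Vega follows the SDXL architecture, so the SDXL pipeline loads it.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "segmind/Segmind-Vega", torch_dtype=torch.float16
).to("cuda")

# VegaRT is a latent-consistency adapter: use the LCM scheduler and LoRA weights.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("segmind/Segmind-VegaRT")

# 2-8 steps with guidance disabled, per the LCM recipe.
image = pipe(
    "a cozy cabin in a snowy forest",
    num_inference_steps=4,
    guidance_scale=0,
).images[0]
image.save("vegart.png")
```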

IP-adapter Openpose XL

IP Adapter XL Openpose is built on the SDXL framework. This model integrates the IP Adapter and Openpose preprocessor to offer unparalleled control and guidance in creating context-rich images.

IP-adapter Canny XL

IP Adapter XL Canny is built on the SDXL framework. This model integrates the IP Adapter and Canny edge preprocessor to offer unparalleled control and guidance in creating context-rich images.

IP-adapter Depth XL

IP Adapter Depth XL is built on the SDXL framework. This model integrates the IP Adapter and Depth preprocessor to offer unparalleled control and guidance in creating context-rich images.

SDXL Inpaint

This model is capable of generating photo-realistic images given any text input, with the extra capability of inpainting pictures using a mask.

SSD Img2Img

This model uses SSD-1B to generate images by passing a text prompt and an initial image to condition the generation.

SDXL-Openpose

This model leverages SDXL to generate images with ControlNet conditioned on human pose estimation.

SSD-Depth

This model leverages SSD-1B to generate images with ControlNet conditioned on depth estimation.

SSD-Canny

This model leverages SSD-1B to generate images with ControlNet conditioned on Canny edge images.

SSD-1B

The Segmind Stable Diffusion Model (SSD-1B) is a distilled 50% smaller version of the Stable Diffusion XL (SDXL), offering a 60% speedup while maintaining high-quality text-to-image generation capabilities. It has been trained on diverse datasets, including Grit and Midjourney scrape data, to enhance its ability to create a wide range of visual content based on textual prompts.

Copax Timeless SDXL

Copax Timeless SDXL is built on the SDXL model, the official upgrade to v1.5, released as open-source software.

Zavychroma SDXL

Zavychroma SDXL is built on the SDXL model, the official upgrade to v1.5, released as open-source software.

Realvis SDXL

Realvis SDXL is built on the SDXL model, the official upgrade to v1.5, released as open-source software.

Dreamshaper SDXL

Dreamshaper SDXL is built on the SDXL model, the official upgrade to v1.5, released as open-source software.

Archived

Stable Diffusion 2.1

Stable Diffusion is a type of latent diffusion model that can generate images from text. It was created by a team of researchers and engineers from CompVis, Stability AI, and LAION. Stable Diffusion v2 is a specific version of the model architecture. It utilizes a downsampling-factor 8 autoencoder with an 865M UNet and OpenCLIP ViT-H/14 text encoder for the diffusion model. When using the SD 2-v model, it produces 768x768 px images. It uses the penultimate text embeddings from a CLIP ViT-H/14 text encoder to condition the generation process.
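
A minimal text-to-image sketch for the 768px SD 2-v checkpoint described above, assuming the stabilityai/stable-diffusion-2-1 repo ID:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# The SD 2-v model is trained at 768x768, matching the description above.
image = pipe("an astronaut riding a horse on mars", height=768, width=768).images[0]
image.save("sd21.png")
```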

Archived

Stable Diffusion XL 0.9

The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.

Word2img

Create beautifully designed words for your marketing needs using Segmind's word-to-image model.

Archived

Segmind Tiny-SD

Convert text into images with the latest distilled Stable Diffusion model.

Stable Diffusion Inpainting

Stable Diffusion Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input, with the extra capability of inpainting the pictures by using a mask
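
A minimal masked-inpainting sketch with diffusers; the repo ID and input files are assumptions, and white pixels in the mask mark the region to repaint:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

image = load_image("photo.png").resize((512, 512))
mask = load_image("mask.png").resize((512, 512))  # white = repaint, black = keep

result = pipe(
    prompt="a bouquet of sunflowers in a vase",
    image=image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```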

Stable Diffusion img2img

This model uses the diffusion-denoising mechanism first proposed by SDEdit for text-guided image-to-image translation. It uses the weights from Stable Diffusion to generate new images from an input image via the StableDiffusionImg2ImgPipeline from diffusers.

Archived

Segmind Small-SD

Convert text into images with Small-SD, a distilled Stable Diffusion model by Segmind. With Small-SD serverless APIs, Segmind offers the fastest deployment for Small-Stable-Diffusion inference.

Stable Diffusion XL 1.0

The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.

Archived

Scifi

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Archived

Samaritan

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Archived

RPG

This model corresponds to the Stable Diffusion RPG checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Reliberate

This model corresponds to the Stable Diffusion Reliberate checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Realistic Vision

This model corresponds to the Stable Diffusion Realistic Vision checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Archived

RCNZ - Cartoon

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Archived

Paragon

This model corresponds to the Stable Diffusion Paragon checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

SD Outpainting

Stable Diffusion Outpainting can extend any image in any direction

Archived

Manmarumix

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Archived

Majicmix

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Juggernaut Final

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Archived

Fruit Fusion

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Archived

Flat 2d

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Archived

Fantassified Icons

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Epic Realism

This model corresponds to the Stable Diffusion Epic Realism checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Edge of Realism

This model corresponds to the Stable Diffusion Edge of Realism checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Archived

DvArch

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Archived

Dream Shaper

Dreamshaper excels in delivering high-quality, detailed images. It is fine-tuned to understand and interpret a diverse range of artistic styles and subjects.

Archived

Disney

This model corresponds to the Stable Diffusion Disney checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Archived

Deep Spaced Diffusion

The most versatile photorealistic model that blends various models to achieve amazingly realistic space-themed images.

Cyber Realistic

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Archived

Cute Rich Style

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Archived

Colorful

This model corresponds to the Stable Diffusion Colorful checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Archived

All in one pixel

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Archived

526mix

The most versatile photorealistic model that blends various models to achieve amazingly realistic images.

Archived

QR Generator

Create beautiful and creative QR codes for your marketing campaigns.

Archived

Segmind Tiny-SD (Portrait)

Create realistic portrait images using the fine-tuned Segmind Tiny SD (Portrait) model. With Tiny SD (Portrait) serverless APIs, Segmind offers the fastest deployment for Tiny-Stable-Diffusion inference.

Archived

Kandinsky 2.1

Kandinsky inherits best practices from DALL·E 2 and latent diffusion while introducing some new ideas.

ControlNet Soft Edge

This model corresponds to the ControlNet conditioned on Soft Edge.

ControlNet Scribble

This model corresponds to the ControlNet conditioned on Scribble images.

ControlNet Depth

This model corresponds to the ControlNet conditioned on Depth estimation.

ControlNet Canny

This model corresponds to the ControlNet conditioned on Canny edges.

Codeformer

CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.

Segment Anything Model

The Segment Anything Model (SAM) produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image.
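
A minimal sketch of point-prompted mask prediction with Meta's segment-anything package; the checkpoint file is downloaded separately from the SAM repository, and the point coordinates are illustrative:

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load the ViT-H SAM checkpoint (file obtained from the official SAM release).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt with a single foreground point (x, y); label 1 marks foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # boolean HxW array for the top-scoring mask
```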

Faceswap

Take a picture or GIF and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.

Archived

Revanimated

This model corresponds to the Stable Diffusion Revanimated checkpoint, which produces detailed images at the cost of requiring a highly detailed prompt.

Background Removal

This model removes the background from any image.

ESRGAN

AI-powered image super-resolution, upscaling, and enhancement, producing stunning, high-quality results.

ControlNet Openpose

This model corresponds to the ControlNet conditioned on Human Pose Estimation.