If you prefer to work through the API, you can call the model from the programming language of your choice.
```python
import requests

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/face-to-many"

# Prepare the form data and files
data = {}
files = {}

data['seed'] = 1321321
# For the "image" parameter, send either a raw file or a URI:
# files['image'] = open('IMAGE_PATH', 'rb')  # to send a file
# data['image'] = 'IMAGE_URI'                # to send a URI
data['style'] = "3D"
data['prompt'] = "a person"
data['lora_scale'] = 1
data['custom_lora_url'] = None  # requests drops form keys whose value is None
data['negative_prompt'] = ""
data['prompt_strength'] = 4.5
data['denoising_strength'] = 0.65
data['instant_id_strength'] = 1
data['control_depth_strength'] = 0.8

headers = {'x-api-key': api_key}
response = requests.post(url, data=data, files=files, headers=headers)

# The response body is the generated image; save it to disk
with open('output.png', 'wb') as f:
    f.write(response.content)
```
seed: Fix the random seed for reproducibility.
image: An image of a person to be converted.
style: The target style (an enumeration). Allowed values: 3D, Emoji, Pixel art, Video game, Clay, Toy.
lora_scale: How strong the LoRA will be. (min: 0, max: 1)
custom_lora_url: URL to a Replicate custom LoRA. Must be in the format https://replicate.delivery/pbxt/[id]/trained_model.tar or https://pbxt.replicate.delivery/[id]/trained_model.tar
negative_prompt: Things you do not want in the image.
prompt_strength: Strength of the prompt. This is the CFG scale; higher numbers lead to a stronger prompt, lower numbers will keep more of a likeness to the original. (min: 0, max: 20)
denoising_strength: How much of the original image to keep. 1 is the complete destruction of the original image, 0 is the original image. (min: 0, max: 1)
instant_id_strength: How strong the InstantID will be. (min: 0, max: 1)
control_depth_strength: Strength of the depth ControlNet. The bigger this is, the more the ControlNet affects the output. (min: 0, max: 1)
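Since out-of-range values are easy to send by accident, the numeric ranges above can be checked client-side before making the call. `validate_params` below is a hypothetical helper, not part of any official Segmind library; only the parameter names and ranges come from the table above.

```python
# Hypothetical client-side range check for the numeric parameters above.
# The names and (min, max) bounds are taken from the parameter table;
# the helper itself is illustrative, not part of the Segmind API.
RANGES = {
    "lora_scale": (0, 1),
    "prompt_strength": (0, 20),
    "denoising_strength": (0, 1),
    "instant_id_strength": (0, 1),
    "control_depth_strength": (0, 1),
}

def validate_params(data):
    """Return a list of human-readable errors for out-of-range values."""
    errors = []
    for name, (lo, hi) in RANGES.items():
        if name in data and not (lo <= data[name] <= hi):
            errors.append(f"{name}={data[name]} outside [{lo}, {hi}]")
    return errors

print(validate_params({"prompt_strength": 4.5, "lora_scale": 2}))
# ['lora_scale=2 outside [0, 1]']
```

Running this before `requests.post` surfaces mistakes locally instead of spending a credit on a request the API may reject.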
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
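In the Python sample above, `response.headers` already behaves as a case-insensitive mapping, so the header can be read directly. The small helper below is a hypothetical convenience for pulling the credit count out as an integer; it is demonstrated with a stubbed header dict since no live API call is made here.

```python
# Hypothetical helper (not part of any Segmind SDK) that extracts the
# x-remaining-credits header, matching the name case-insensitively.
def remaining_credits(headers):
    """Return the remaining credit count as an int, or None if absent."""
    for key, value in headers.items():
        if key.lower() == "x-remaining-credits":
            return int(value)
    return None

# Stubbed response headers, standing in for response.headers:
print(remaining_credits({"Content-Type": "image/png",
                         "X-Remaining-Credits": "42"}))  # 42
```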
Unlock the full potential of generative AI with Segmind. Create stunning visuals and innovative designs with total creative control. Take advantage of powerful development tools to automate processes and models, elevating your creative workflow.
Gain greater control by dividing the creative process into distinct steps, refining each phase.
Customize at various stages, from initial generation to final adjustments, ensuring tailored creative outputs.
Integrate and utilize multiple models simultaneously, producing complex and polished creative results.
Deploy Pixelflows as APIs quickly, without server setup, ensuring scalability and efficiency.
With Face to Many, you can turn a face into different styles such as 3D, emoji, pixel art, video game, clay, or toy.
3D: Create a three-dimensional representation of the face.
Emoji: Turn the face into a fun, expressive emoji.
Pixel Art: Render the face in a retro, pixelated style reminiscent of early video games.
Video Game: Transform the face to resemble characters from video games.
Clay: Mold the face as if it were made from clay, similar to stop-motion animation characters.
Toy: Convert the face to look like a toy figure.
This model opens up a world of creative possibilities, making it easy to experiment with different artistic styles and representations.
Under the hood, the Face to Many model combines InstantID, IP Adapter, and ControlNet Depth.
Instant ID is responsible for identifying the unique features of the face of the person in the input image.
An image encoder (IP Adapter) helps transfer the chosen style (3D, emoji, pixel art, video game, clay, or toy) onto the face in the input image.
ControlNet Depth estimates the depth of different parts of the face. This helps in creating a 3D representation of the face, which can then be used to apply the style seamlessly.
Input image: Choose an image that you want to transform. A close-up portrait shot is ideal because it allows the model to clearly identify and process the facial features.
Prompt: Provide a text prompt based on the input image, such as a simple description of the person in it, e.g. "a man". The model uses this prompt to guide the style transfer process.
Style: Choose the style you want to see in the output image (3D, Emoji, Toy, Clay, Pixels, Video game).
Custom LoRA: You can incorporate other styles by using custom LoRA models based on SDXL. Simply paste the link to the custom LoRA model.
Parameters: Adjust the parameters below to guide the final image output.
a. Prompt Strength: This parameter is similar to the CFG scale. It determines how closely the image generation follows the text prompt. A higher value results in an output image that more closely matches the prompt.
b. Instant ID Strength: This parameter determines the degree of influence of Instant ID. The higher the value, the closer the face in the output image looks to the input image.
c. ControlNet Depth Strength: This parameter determines the degree of influence of ControlNet Depth conditioning. The higher the value, the stronger its influence.
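Because the custom LoRA step only accepts the two Replicate URL formats quoted in the parameter reference, a quick pre-flight format check can catch a bad link before submitting the request. `is_valid_lora_url` below is an illustrative sketch based solely on those two documented formats, not an official validator.

```python
import re

# Hypothetical pre-flight check for the two documented custom LoRA URL
# formats:
#   https://replicate.delivery/pbxt/[id]/trained_model.tar
#   https://pbxt.replicate.delivery/[id]/trained_model.tar
_LORA_URL = re.compile(
    r"^https://(replicate\.delivery/pbxt|pbxt\.replicate\.delivery)"
    r"/[^/]+/trained_model\.tar$"
)

def is_valid_lora_url(url):
    """Return True if the URL matches either documented format."""
    return bool(_LORA_URL.match(url))

print(is_valid_lora_url("https://replicate.delivery/pbxt/abc123/trained_model.tar"))  # True
print(is_valid_lora_url("https://example.com/model.tar"))                             # False
```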
Take a picture/gif and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training
InstantID aims to generate customized images with various poses or styles from only a single reference ID image while ensuring high fidelity
This model is capable of generating photo-realistic images given any text input, with the extra capability of inpainting the pictures by using a mask