To call the API, choose a code example in your preferred programming language.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/fooocus"

# Request payload
data = {
    "prompt": "cinematic 3D caricature of a woman, smiling, looking happy",
    "negative_prompt": "lowquality, badquality, sketches",
    "steps": 30,
    "samples": 1,
    "styles": "V2,Enhance,Sharp",
    "aspect_ratio": "1024*1024",
    "seed": 1849415,
    "guidance_scale": 4,
    "scheduler": "DPM++ 2M SDE Karras",
    "base_model": "protovisionxl",
    "faceswap_img": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/fooocus-face-image.jpg"),  # Or use image_file_to_base64("IMAGE_PATH")
    "faceswap_cn_stop": 0.9,
    "faceswap_cn_weight": 0.8,
    "imageprompt_img": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/fooocus-image-prompt.png"),  # Or use image_file_to_base64("IMAGE_PATH")
    "imageprompt_cn_stop": 0.5,
    "imageprompt_cn_weight": 0.6,
    "pyracanny_cn_stop": 0.5,
    "pyracanny_cn_weight": 1,
    "cpds_cn_stop": 0.5,
    "cpds_cn_weight": 1,
    "base64": False
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
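Because "base64" is set to False in the payload above, the response body is the raw image bytes. A minimal sketch of persisting them to disk (save_image is a hypothetical helper, not part of the API):

```python
# Hypothetical helper: write the raw image bytes returned by the API to disk.
def save_image(content: bytes, path: str) -> int:
    """Write raw image bytes to a file and return the number of bytes written."""
    with open(path, "wb") as f:
        return f.write(content)
```

Usage: save_image(response.content, "output.jpg").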
prompt: Prompt to render
negative_prompt: Prompts to exclude, e.g. bad anatomy, bad hands, missing fingers
steps: Number of denoising steps. min: 20, max: 100
samples: Number of images to generate. min: 1, max: 4
styles: Style selection
aspect_ratio: Output image aspect ratio. Allowed values:
seed: Seed for image generation. min: -1, max: 999999999999999
guidance_scale: Scale for classifier-free guidance. min: 1, max: 25
scheduler: Type of scheduler. Allowed values:
base_model: Model for inference. Allowed values:
faceswap_img: Face swap input image (base64)
faceswap_cn_stop: min: 0, max: 1.5
faceswap_cn_weight: min: 0, max: 1.5
imageprompt_img: Image prompt input image (base64)
imageprompt_cn_stop: min: 0, max: 1.5
imageprompt_cn_weight: min: 0, max: 1.5
pyracanny_img: PyraCanny input image (base64)
pyracanny_cn_stop: min: 0, max: 1.5
pyracanny_cn_weight: min: 0, max: 1.5
cpds_img: CPDS input image (base64)
cpds_cn_stop: min: 0, max: 1.5
cpds_cn_weight: min: 0, max: 1.5
base64: Base64 encoding of the output image.
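The documented min/max bounds can be checked client-side before sending a request. A sketch, assuming the ranges above; validate_payload is a hypothetical helper, not part of the API:

```python
# Documented numeric ranges, transcribed from the parameter reference above.
RANGES = {
    "steps": (20, 100),
    "samples": (1, 4),
    "seed": (-1, 999999999999999),
    "guidance_scale": (1, 25),
    "faceswap_cn_stop": (0, 1.5),
    "faceswap_cn_weight": (0, 1.5),
    "imageprompt_cn_stop": (0, 1.5),
    "imageprompt_cn_weight": (0, 1.5),
    "pyracanny_cn_stop": (0, 1.5),
    "pyracanny_cn_weight": (0, 1.5),
    "cpds_cn_stop": (0, 1.5),
    "cpds_cn_weight": (0, 1.5),
}

def validate_payload(data: dict) -> list:
    """Return a message for each numeric field outside its documented range."""
    errors = []
    for key, (lo, hi) in RANGES.items():
        if key in data and not (lo <= data[key] <= hi):
            errors.append(f"{key}={data[key]} outside [{lo}, {hi}]")
    return errors
```

Catching an out-of-range value locally avoids spending a round trip (and potentially credits) on a request the API will reject.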
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
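A sketch of reading that header from a response; remaining_credits is a hypothetical helper, and the numeric format of the header value is an assumption:

```python
# Hypothetical helper: extract the x-remaining-credits header value.
# With requests, response.headers is case-insensitive; a plain dict needs the exact key.
def remaining_credits(headers):
    """Return the remaining-credit count as a float, or None if the header is absent."""
    value = headers.get("x-remaining-credits")
    return float(value) if value is not None else None
```

Usage: remaining_credits(response.headers) after any API call.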
Unlock the full potential of generative AI with Segmind. Create stunning visuals and innovative designs with total creative control. Take advantage of powerful development tools to automate processes and models, elevating your creative workflow.
Gain greater control by dividing the creative process into distinct steps, refining each phase.
Customize at various stages, from initial generation to final adjustments, ensuring tailored creative outputs.
Integrate and utilize multiple models simultaneously, producing complex and polished creative results.
Deploy Pixelflows as APIs quickly, without server setup, ensuring scalability and efficiency.
Fooocus is an image-generation framework built on Stable Diffusion. Its goal is to combine the best aspects of Stable Diffusion and Midjourney: it reduces the complexity of other Stable Diffusion interfaces, is easy to use, and generates high-quality images out of the box.
Here are some key features of Fooocus:
Ease of Use: Fooocus is designed to be user-friendly. It only requires a single prompt for the image generation process.
Quality Images: Fooocus has optimized the Stable Diffusion pipeline to deliver excellent images. It uses a set of default settings that are optimized to give the best results when using SDXL models.
Prompt Expansion: Fooocus expands your prompt with a GPT-based prompt engine. This means you don't need to write long and complicated prompts.
These are the images you use as inputs in Fooocus. You can import various types of images, such as an image prompt, face image, PyraCanny, or CPDS input, depending on the use case or desired final output.
This is a general type of input where an image acts as a reference guide for the AI. It guides the image generation process in the direction of the provided image.
This is a specific kind of image prompt where the provided image focuses on a face. It’s useful when you want to specifically modify or generate an image centered around a face.
Pyracanny is a pyramid-based Canny edge control and it excels at detecting edges in high-resolution images. Regular Canny edge detection might miss some details, so PyraCanny tackles this by analyzing the image at various resolutions to create a more comprehensive edge map. This edge map then influences the final image.
Contrast Preserving Decolorization (CPDS) is another custom control method used in Fooocus. It works in a way that's comparable to ControlNet depth conditioning. CPDS helps capture the depth information within the image, which can be useful for creating a more realistic or nuanced final output.
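Building on the Python sample above, the control inputs all follow the same naming pattern (<name>_img, <name>_cn_stop, <name>_cn_weight). A sketch of attaching a control image to the request payload; attach_control_image is a hypothetical helper, and it assumes pyracanny_img and cpds_img take the same base64 string form as the other image inputs:

```python
import base64

def attach_control_image(payload: dict, name: str, image_bytes: bytes,
                         stop: float = 0.5, weight: float = 1.0) -> dict:
    """Add a base64-encoded control image and its stop/weight settings to a
    Fooocus request payload. `name` is e.g. "faceswap", "imageprompt",
    "pyracanny", or "cpds"."""
    payload[f"{name}_img"] = base64.b64encode(image_bytes).decode("utf-8")
    payload[f"{name}_cn_stop"] = stop
    payload[f"{name}_cn_weight"] = weight
    return payload
```

Usage: attach_control_image(data, "pyracanny", open("edges.png", "rb").read()) before posting the request.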
SDXL Img2Img is used for text-guided image-to-image translation. It uses Stable Diffusion weights to generate new images from an input image via the StableDiffusionImg2ImgPipeline from diffusers.
InstantID aims to generate customized images in various poses or styles from only a single reference ID image while ensuring high fidelity.
This model is capable of generating photo-realistic images from any text input, with the extra capability of inpainting pictures using a mask.
Take a picture/GIF and replace the face in it with a face of your choice. You only need one image of the desired face; no dataset, no training.