AI Product Photo Editor

AI Product Photo Editor leverages advanced image-based ML techniques to generate high-quality product visuals using text prompts, product images, and background images.

Playground

Try the model in real time on the model page: upload a product image and a background image (click or drag-and-drop; PNG, JPG, or GIF, up to 2048 x 2048 px) and the generated output image appears below them.


Examples

Check out what others have created with AI Product Photo Editor
Example prompt: photo of a mixer grinder in modern kitchen (seed: 2566965, guidance_scale: 6)

API

If you prefer to integrate via the API, client examples are available in several programming languages; a complete Python request is shown below.

POST https://api.segmind.com/v1/ai-product-photo-editor

```python
import requests
import base64

# Convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/ai-product-photo-editor"

# Request payload
data = {
    "product_image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/ppv3-test/main-ip.jpeg"),  # Or use image_file_to_base64("IMAGE_PATH")
    "background_image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/ppv3-test/bg6.png"),  # Or use image_file_to_base64("IMAGE_PATH")
    "prompt": "photo of a mixer grinder in modern kitchen",
    "negative_prompt": "illustration, bokeh, low resolution, bad anatomy, painting, drawing, cartoon, bad quality, low quality",
    "num_inference_steps": 21,
    "guidance_scale": 6,
    "seed": 2566965,
    "sampler": "dpmpp_3m_sde_gpu",
    "scheduler": "karras",
    "samples": 1,
    "ipa_weight": 0.3,
    "ipa_weight_type": "linear",
    "ipa_start": 0,
    "ipa_end": 0.5,
    "ipa_embeds_scaling": "V only",
    "cn_strenght": 0.85,  # note: the API spells this parameter "cn_strenght"
    "cn_start": 0,
    "cn_end": 0.8,
    "dilation": 10,
    "mask_threshold": 220,
    "gaussblur_radius": 8,
    "base64": False
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
```
RESPONSE
Content type: image/jpeg
HTTP Response Codes
200 - OK: Image generated
401 - Unauthorized: User authentication failed
404 - Not Found: The requested URL does not exist
405 - Method Not Allowed: The requested HTTP method is not allowed
406 - Not Acceptable: Not enough credits
500 - Server Error: The server encountered an issue while processing the request
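A small helper that turns these status codes into client actions can simplify integration. This is a sketch; the retry policy for 500 errors is a suggestion, not part of the API:

```python
# Decide what a client should do for each Segmind HTTP status code.
# 200 means the body holds the generated image; 500 is worth retrying;
# the 4xx codes are caller errors that retrying will not fix.
def next_action(status_code: int) -> str:
    if status_code == 200:
        return "save"   # body contains the generated image
    if status_code == 500:
        return "retry"  # transient server-side failure
    # 401/404/405/406: fix the key, URL, method, or credits first
    return "abort"

print(next_action(200))  # save
print(next_action(500))  # retry
print(next_action(406))  # abort
```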

Attributes


product_image (image, required)

Product image

background_image (image, required)

Background reference image

prompt (str, required)

Prompt for image generation

negative_prompt (str, default: "illustration, bokeh, low resolution, bad anatomy, painting, drawing, cartoon, bad quality, low quality")

Negative prompt

num_inference_steps (int, required)

Number of steps used to generate the image (min: 20, max: 100)

guidance_scale (float, default: 6)

Scale for classifier-free guidance (min: 0, max: 10)

seed (int, default: 2566965)

Seed number for image generation

sampler (enum:str, required)

Sampler. Allowed values:

scheduler (enum:str, default: karras)

Scheduler. Allowed values:

samples (int, default: 1)

Number of samples to generate

ipa_weight (float, default: 0.3)

IP-Adapter weight (min: 0, max: 2)

ipa_weight_type (enum:str, default: linear)

Type of IP-Adapter weight. Allowed values:

ipa_start (float, default: 1)

IP-Adapter start value (min: 0, max: 1)

ipa_end (float, default: 0.5)

IP-Adapter end value (min: 0, max: 1)

ipa_embeds_scaling (enum:str, default: V only)

IP-Adapter embedding scaling. Allowed values:

cn_strenght (float, default: 0.85)

ControlNet strength (min: 0, max: 2)

cn_start (float, default: 1)

ControlNet start value (min: 0, max: 1)

cn_end (float, default: 0.8)

ControlNet end value (min: 0, max: 1)

dilation (int, default: 10)

Dilation value (min: -100, max: 100)

mask_threshold (int, default: 220)

Mask threshold value (min: 0, max: 255)

gaussblur_radius (int, default: 8)

Gaussian blur radius (min: 0, max: 20)

base64 (bool, default: true)

Output the image as base64

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
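For example, after a successful call you can read the credits header and, if you requested base64 output, decode the image body yourself. The snippet below assumes the header value is numeric and operates on plain dicts/bytes so it works with any HTTP client:

```python
import base64

def remaining_credits(headers: dict) -> float:
    """Read the x-remaining-credits response header (case-insensitive)."""
    for key, value in headers.items():
        if key.lower() == "x-remaining-credits":
            return float(value)
    raise KeyError("x-remaining-credits header not present")

def decode_base64_image(body: bytes) -> bytes:
    """When the request sets "base64": True, the body is base64 text
    rather than raw image bytes; decode it before saving to disk."""
    return base64.b64decode(body)

# Usage with a requests response (illustrative):
#   credits = remaining_credits(response.headers)
#   image_bytes = decode_base64_image(response.content)

headers = {"Content-Type": "image/jpeg", "x-remaining-credits": "41.5"}
print(remaining_credits(headers))  # 41.5
```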


Pricing

Serverless Pricing

Buy credits that can be used anywhere on Segmind

$0.0015 per second

Dedicated Cloud Pricing

For enterprise costs and dedicated endpoints

$0.0007 to $0.0031 per second
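Serverless billing is per second of compute. As a rough illustration (the generation time here is hypothetical; actual time varies with steps, samples, and resolution):

```python
# Serverless rate from the pricing table above
RATE_PER_SECOND = 0.0015  # USD

def cost(seconds: float) -> float:
    """Approximate serverless cost for a generation of the given duration."""
    return round(seconds * RATE_PER_SECOND, 6)

print(cost(10))  # 0.015 -> a 10-second generation costs about $0.015
```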
FEATURES

PixelFlow allows you to use all these features

Unlock the full potential of generative AI with Segmind. Create stunning visuals and innovative designs with total creative control. Take advantage of powerful development tools to automate processes and models, elevating your creative workflow.

Segmented Creation Workflow

Gain greater control by dividing the creative process into distinct steps, refining each phase.

Customized Output

Customize at various stages, from initial generation to final adjustments, ensuring tailored creative outputs.

Layering Different Models

Integrate and utilize multiple models simultaneously, producing complex and polished creative results.

Workflow APIs

Deploy Pixelflows as APIs quickly, without server setup, ensuring scalability and efficiency.

AI Product Photo Editor

AI Product Photo Editor leverages advanced image-based ML techniques to generate high-quality product visuals using text prompts, product images, and background images. This method combines inpainting, superimposition, and a dual-pass image generation process, employing Canny edge detection and IP-Adapter for background integration. The output enhances image details, ensuring high fidelity and professional-grade photos.

Capabilities:

  1. Can generate high-quality product images based on a combination of text prompts, product images, and background images.

  2. Employs inpainting with IP-Adapter and superimposition techniques for seamless image creation.

  3. Utilizes Canny edge detection to enhance edge details, ensuring sharp and defined product outlines.

  4. Executes a two-pass image generation process: the first pass integrates the product image with the background, and the second pass refines details like shadows and textures.

  5. Offers flexibility in modifying backgrounds or environments where the product is displayed, enhancing the visual appeal and context.

Technical Architecture: Combines inpainting with IP-Adapter using a reference image for background setting. Implements Canny edge detection to enhance and refine edge details, ensuring high-fidelity product images.

Employs a two-pass image generation process:

  1. First pass: Generates the base image integrating the product with the background.

  2. Second pass: Enhances finer details such as shadows and textures to ensure a photorealistic output. Concludes with a superimposition step to finalize and perfect the overall image composition.
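To give intuition for the edge-detection stage, here is a toy, dependency-free sketch of gradient-magnitude edge detection on a tiny grayscale grid. The real pipeline uses the Canny algorithm on full-resolution images; this simplified forward-difference threshold version is only illustrative:

```python
def edge_mask(img, threshold=100):
    """Mark pixels whose horizontal or vertical intensity jump exceeds
    the threshold - a crude stand-in for Canny edge detection."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][x]  # horizontal gradient
            gy = img[min(y + 1, h - 1)][x] - img[y][x]  # vertical gradient
            if abs(gx) + abs(gy) > threshold:
                edges[y][x] = 1
    return edges

# A 4x4 "product" (bright square) on a dark background:
img = [
    [0,   0,   0,   0],
    [0, 255, 255,   0],
    [0, 255, 255,   0],
    [0,   0,   0,   0],
]
for row in edge_mask(img):
    print(row)  # 1s trace the outline of the bright square
```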

Strengths: Capable of producing highly realistic and visually appealing product images. Flexibility in customizing image backgrounds and detailed enhancements offers wide-ranging applications. The two-pass generation process ensures high attention to detail, resulting in polished final images. Canny edge detection significantly improves the clarity and precision of product outlines.

How to use the model?

Step 1: Enter Prompt

Prompt: Describe the product image you want to create. For example, "Photos of plastic containers in a studio kitchen, minimal studio background."

Step 2: Upload Images

  • Product Image: Click on the upload area to browse and select your product image or drag and drop the image file.

  • Background Image: Click on the upload area to browse and select your background image or drag and drop the image file.

Step 3: Configure Negative Prompt (Optional)

Negative Prompt: Enter descriptions of elements you want to exclude from the generated image, such as "Illustration, broken, low resolution, bad anatomy."

Step 4: Set Inference Steps

Inference Steps: Enter the number of steps for the machine learning model to generate the image, e.g., 21.

Step 5: Set Randomization Seed

Seed: Enter a seed number for randomization to reproduce the same image on subsequent runs.

Step 6: Advanced Parameters

Click the "Advanced Parameters" dropdown to reveal additional settings for fine-tuning the output.

  • Guidance Scale: Adjusts how much the model adheres to the text prompt (higher value = stricter adherence).

  • Sampler: Selects the algorithm used for sampling; for example, "dpmpp_3m_sde_gpu."

  • Scheduler: Algorithmic scheduler for managing the sampling steps.

  • IPA Weight: The weight for the IP-Adapter controlling how much it influences the background image blending.

  • IPA Weight Type: The interpolation type for setting the IPA weight (e.g., linear).

  • IPA Start: Beginning point for the IP-Adapter influence.

  • IPA End: End point for the IP-Adapter influence.

  • IPA Embeds Scaling: Determines how embeddings from the IP-Adapter are scaled.

  • ControlNet Strength: Amount of control the ControlNet model has over the generation.

  • ControlNet Start: Start point for ControlNet influence.

  • ControlNet End: End point for ControlNet influence.

  • Dilation: Amount of dilation applied to the edges.

  • Mask Threshold: Threshold value for the masking process.

  • Gaussian Blur Radius: Radius for applying Gaussian blur to the image.
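The playground settings above map one-to-one onto the API payload fields. A minimal sketch of assembling them (the values mirror the worked example; the helper name build_payload is my own):

```python
def build_payload(prompt, product_image_b64, background_image_b64, **advanced):
    """Assemble the request body from the steps above; any advanced
    parameter not supplied keeps the documented default."""
    payload = {
        "prompt": prompt,                    # Step 1
        "product_image": product_image_b64,  # Step 2
        "background_image": background_image_b64,
        "negative_prompt": "illustration, bokeh, low resolution, bad anatomy",  # Step 3
        "num_inference_steps": 21,           # Step 4
        "seed": 2566965,                     # Step 5
        "guidance_scale": 6,                 # Step 6: advanced parameters
        "sampler": "dpmpp_3m_sde_gpu",
        "scheduler": "karras",
    }
    payload.update(advanced)                 # e.g. ipa_weight=0.3, cn_end=0.8
    return payload

p = build_payload("photo of a mixer grinder in modern kitchen", "<b64>", "<b64>", ipa_weight=0.3)
print(p["seed"])  # 2566965
```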


Take creative control today and thrive.

Start building with a free account or consult an expert for your Pro or Enterprise needs. Segmind's tools empower you to transform your creative visions into reality.
