Qwen Image Edit
Transform images effortlessly through semantic context and pixel-perfect appearance changes.
API
If you're looking to call the API, choose your preferred programming language below.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to convert a list of image URLs to base64
def image_urls_to_base64(image_urls):
    return [image_url_to_base64(url) for url in image_urls]

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/qwen-image-edit"

# Request payload
data = {
    "image": image_url_to_base64("https://segmind-resources.s3.amazonaws.com/input/32e99b1e-d3b6-4a59-a588-9cfae0675b9d-qwen_display.png"),  # Or use image_file_to_base64("IMAGE_PATH")
    "prompt": "replace the text on sign board with 'STOP Qwen Image Edit is on Segmind'",
    "negative_prompt": "low quality, noise, extra elements",
    "steps": 30,
    "guidance": 3.5,
    "seed": 760941192,
    "image_format": "png",
    "quality": 90,
    "base64": False
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
Attributes
image: Input image as a base64-encoded string, from a URL or local file.
prompt: Describes what to edit in the image.
negative_prompt: Traits to avoid; e.g. including 'noise' suppresses unwanted noise.
steps: Number of denoising steps used to generate the image.
min : 1,
max : 100
guidance: Influence of the prompt; try 4 for stronger control.
min : 1,
max : 25
seed: Stabilizes output for reproducibility; use -1 for randomness.
min : -1,
max : 999999999999999
image_format: Format for image storage; 'jpeg' for compact files, 'png' for clarity.
Allowed values: jpeg, png
quality: Sets the image detail level; use 80 for web content, 100 for high-quality prints.
min : 10,
max : 100
base64: Set to true to receive the image as a base64 string instead of binary.
To keep track of your credit usage, inspect the response headers of each API call. The x-remaining-credits header indicates the number of credits remaining in your account; monitor this value to avoid disruptions in your API usage.
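As a minimal sketch of reading that header (assuming a requests-style response object with a `headers` mapping; `remaining_credits` is an illustrative helper, not part of the API):

```python
def remaining_credits(response):
    """Read Segmind's x-remaining-credits header from an API response.

    Works with any requests-style response exposing a `headers` mapping
    (requests uses a case-insensitive dict); returns None if absent.
    """
    value = response.headers.get("x-remaining-credits")
    return int(value) if value is not None else None
```

Call it on the `response` object from the request example above, e.g. `remaining_credits(response)`.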
Resources to get you started
Everything you need to know to get the most out of Qwen Image Edit
Qwen-Image-Edit: Quickstart & Best Practices
Qwen-Image-Edit is a 20B-parameter vision-language model tailored for pixel-perfect, bilingual (English/Chinese) image editing. Whether you need high-level semantic shifts or fine-grained local tweaks, this guide will help you maximize output quality and speed.
1. Getting Started
- Input Image: ≥ 512×512 px JPEG/PNG, via URL or upload
- Prompt: natural-language instructions, e.g. "Change the sky to a starry night", "Rotate the product label 45° clockwise", "Replace English text with Chinese 保持原有字体 ('keep the original font')"
- Negative Prompt (advanced): e.g. negative_prompt="noise, blur, extra objects"; use it to suppress artifacts.
2. Core Parameters
Parameter | Default | Range | Purpose |
---|---|---|---|
steps | 40 | 1–100 | Denoising passes. More = finer detail. |
guidance_scale | 4 | 1–25 | Prompt adherence. Higher = stronger control. |
seed | 760941192 | -1 to 999999999999999 | Reproducibility. -1 for full randomness. |
(In the request payload, guidance_scale is sent as the guidance field.)
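These parameters can be gathered into a small payload builder; a sketch under the field names and ranges from the request example and Attributes section above (`build_payload` is a hypothetical helper, not part of the API):

```python
def build_payload(image_b64, prompt, *, steps=30, guidance=3.5, seed=-1,
                  negative_prompt="", image_format="png", quality=90,
                  base64=False):
    """Assemble the JSON body for the qwen-image-edit request.

    Defaults mirror the request example above; ranges follow the
    Attributes section.
    """
    if not 1 <= steps <= 100:
        raise ValueError("steps must be in [1, 100]")
    if not 1 <= guidance <= 25:
        raise ValueError("guidance must be in [1, 25]")
    if not 10 <= quality <= 100:
        raise ValueError("quality must be in [10, 100]")
    return {
        "image": image_b64,
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": steps,
        "guidance": guidance,
        "seed": seed,
        "image_format": image_format,
        "quality": quality,
        "base64": base64,
    }
```

The resulting dict can be passed directly as the `json=` argument of `requests.post`.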
3. Preset Configurations
- Quick Iteration: steps=20, guidance_scale=3, seed=-1
  → Fast previews, lower detail; great for A/B tests.
- High-Fidelity Edits: steps=70, guidance_scale=8, negative_prompt="low quality, artifacts", seed=760941192
  → Ultra-detailed, reproducible results.
- Balanced Output: steps=40, guidance_scale=5, seed=12345678
  → Good detail + speed trade-off.
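The three presets can be kept as plain dictionaries and merged over a base payload; a sketch (`PRESETS` and `apply_preset` are illustrative names; `guidance` follows the payload field name from the request example):

```python
# Preset values taken from the configurations listed above.
PRESETS = {
    "quick_iteration": {"steps": 20, "guidance": 3, "seed": -1},
    "high_fidelity": {"steps": 70, "guidance": 8, "seed": 760941192,
                      "negative_prompt": "low quality, artifacts"},
    "balanced": {"steps": 40, "guidance": 5, "seed": 12345678},
}

def apply_preset(base_payload, preset_name):
    """Overlay a named preset onto a payload without mutating the original."""
    merged = dict(base_payload)
    merged.update(PRESETS[preset_name])
    return merged
```

For example, `apply_preset({"prompt": "add a hat", "steps": 30}, "balanced")` yields a payload with steps=40 while leaving the base dict untouched.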
4. Use Case Recommendations
- UI/UX Mockups
  Prompt: "Update button text to '提交' ('Submit') in matching font."
  Params: steps=50, guidance_scale=6
- E-commerce & Packaging
  Prompt: "Swap label to 'Organic Green Tea' and rotate by 15°."
  Params: steps=60, guidance_scale=7, negative_prompt="blurry edges"
- Creative Style Transfer
  Prompt: "Apply Van Gogh style to this portrait."
  Params: steps=70, guidance_scale=8
- Social Media Graphics
  Prompt: "Make sky vibrant orange at sunset."
  Params: steps=45, guidance_scale=5
- Asset Localization
  Prompt: "Translate English sign to Chinese: '欢迎' ('Welcome'), keeping font identical."
  Params: steps=50, guidance_scale=6
5. Pro Tips
- High-res inputs: always use ≥ 512 px images.
- Layered prompts: combine semantic and appearance cues, e.g. "Change dress color to red, preserve folds."
- Iterate with seeds: lock the seed for A/B consistency.
- Batch edits: automate via the API by looping over JSON parameter sets.
- Monitor artifacts: add more negative_prompt terms if you see noise.
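The batch-edit tip can be sketched as a loop over parameter sets. In this sketch the HTTP function is injected (pass `requests.post` in real use) so the helper stays testable without network access; `batch_edit` is an illustrative name, not part of the API:

```python
API_URL = "https://api.segmind.com/v1/qwen-image-edit"

def batch_edit(jobs, api_key, post):
    """Send one qwen-image-edit request per payload dict in `jobs`.

    `post` is the HTTP function to use, e.g. requests.post; each call
    mirrors the single-request example above.
    """
    results = []
    for payload in jobs:
        results.append(post(API_URL, json=payload,
                            headers={"x-api-key": api_key}))
    return results
```

For example: `batch_edit(job_list, api_key, requests.post)`, where each entry in `job_list` is a full request payload.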
By tuning these knobs (prompt clarity, steps, guidance scale, seed, and negative prompts), you'll unlock Qwen-Image-Edit's full power for any image-editing workflow. Happy editing!
Other Popular Models
Discover other models you might be interested in.
sdxl-img2img
SDXL Img2Img performs text-guided image-to-image translation. It uses Stable Diffusion weights to generate new images from an input image via the StableDiffusionImg2ImgPipeline from diffusers.

sdxl-controlnet
SDXL ControlNet gives unprecedented control over text-to-image generation. ControlNet introduces conditioning inputs, which provide additional information to guide the image generation process.

fooocus
Fooocus enables high-quality image generation effortlessly, combining the best of Stable Diffusion and Midjourney.

sdxl1.0-txt2img
The SDXL model is the official upgrade to the v1.5 model and is released as open-source software.
