Bria Expand Image
Bria Expand enables precise image manipulation and enhancement with generative AI, trained exclusively on licensed data for safe, risk-free commercial use.
API
If you're looking for an API, code samples are available in your preferred programming language.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to convert a list of image URLs to base64
def image_urls_to_base64(image_urls):
    return [image_url_to_base64(url) for url in image_urls]

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/bria-expand-image"

# Request payload
data = {
    "image": "https://segmind-resources.s3.amazonaws.com/input/3f05def3-776b-4944-a0f6-c23cec41a09e-cbf61f2b859662c0.jpg",
    "aspect_ratio": "4:3",
    "prompt_content_moderation": True,
    "seed": 12345,
    "negative_prompt": "Do not include buildings or modern architecture",
    "preserve_alpha": True,
    "visual_input_content_moderation": False,
    "visual_output_content_moderation": False
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
Attributes
image: Input image for processing. Use an accessible URL or Base64 string.
aspect_ratio: Defines the output image aspect ratio. For landscapes use 16:9, for portraits use 2:3.
prompt: Text guidance for image expansion. Use keywords like 'sunset' or 'ocean'.
prompt_content_moderation: Scans the prompt for NSFW content. Keep enabled for safe usage.
seed: Sets the randomization seed for reproducibility. Use any fixed integer for consistent results.
negative_prompt: Excludes elements from the generated image. For example, use 'cityscapes' to avoid urban features.
preserve_alpha: Keeps transparency in images. Use with PNGs to maintain transparency.
visual_input_content_moderation: Moderates input visuals for NSFW content. Enable for safer uploads.
visual_output_content_moderation: Moderates output visuals for NSFW content. Use to ensure clean results.
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
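For example, after the request above you can read the header directly (a minimal sketch; the header name is as documented here):

remaining = response.headers.get("x-remaining-credits")
print("Remaining credits:", remaining)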
Resources to get you started
Everything you need to know to get the most out of Bria Expand Image
# Bria Image Editing API v2: Quickstart Guide
Unlock Bria's generative editing power by fine-tuning core parameters to match your use case. This guide walks you through best practices, recommended settings, and tips for reproducible, high-quality outputs.
## 1. Core Parameter Overview
- **image** (required): URL or Base64 string of your source.
- **aspect_ratio**: Choose from `1:1`, `16:9`, `9:16`, etc.
- **canvas_size** [width, height]: Defines overall output size in pixels.
- **original_image_size** [w, h]: Controls the input image's footprint on the canvas.
- **original_image_location** [x, y]: Offsets the image within the canvas.
- **prompt / negative_prompt**: Guides generative fill or background replacement.
- **seed**: Any integer to ensure consistency across runs.
- **preserve_alpha**: Keep transparency when working with PNGs.
- **prompt_content_moderation**, **visual_input_content_moderation**, **visual_output_content_moderation**: Auto-filter NSFW content.
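As a rough sketch, the overview above maps to a request payload like the one below. The image URL and values are placeholders; `aspect_ratio` is omitted because `canvas_size` already fixes the output dimensions here, and whether the two can be combined is not covered in this guide.

```python
data = {
    "image": "https://example.com/source.png",   # placeholder source image
    "canvas_size": [1920, 1080],                 # overall output size in pixels
    "original_image_size": [960, 540],           # footprint of the source on the canvas
    "original_image_location": [480, 270],       # x/y offset of the source within the canvas
    "prompt": "rolling hills at golden hour",
    "negative_prompt": "buildings, power lines",
    "seed": 12345,                               # fixed seed for reproducible runs
    "preserve_alpha": True,
    "prompt_content_moderation": True,
    "visual_input_content_moderation": True,
    "visual_output_content_moderation": True,
}
```

Send it exactly like the earlier example: `requests.post(url, json=data, headers={'x-api-key': api_key})`.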
## 2. Best Practices
1. **Lock Your Seed**
Use `seed=12345` (or any fixed integer) for reproducible edits.
2. **Moderate Content**
Enable `prompt_content_moderation=true` and visual moderation flags for safe deployments.
3. **Mask Precision**
For Eraser and Fill operations, supply clean binary masks to isolate the edit region.
4. **Leverage Negative Prompts**
Exclude unwanted elements (e.g., `"Do not include buildings"`) to refine generative outputs.
## 3. Parameter Tuning by Use Case
### E-commerce & Product Photography
- aspect_ratio: `1:1`
- Remove Background: `preserve_alpha=false`
- Increase Resolution: `canvas_size=[2000,2000]`, `seed=42`
- prompt: leave unused; use `negative_prompt` to avoid shadows.
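A payload sketch for this recipe (the product image URL is a placeholder):

```python
data = {
    "image": "https://example.com/product.jpg",  # placeholder product shot
    "aspect_ratio": "1:1",
    "canvas_size": [2000, 2000],                 # higher-resolution output
    "seed": 42,
    "negative_prompt": "harsh shadows",
    "preserve_alpha": False,
}
```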
### Marketing & Advertising Banners
- aspect_ratio: `16:9`
- Replace Background: `prompt="vibrant city skyline at dusk"`
- canvas_size: `[1920,1080]`, original_image_location:`[100,50]`
- seed: `2023`
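The same recipe as a payload sketch (placeholder image URL):

```python
data = {
    "image": "https://example.com/hero.jpg",     # placeholder hero asset
    "aspect_ratio": "16:9",
    "canvas_size": [1920, 1080],
    "original_image_location": [100, 50],        # offset the source within the banner
    "prompt": "vibrant city skyline at dusk",
    "seed": 2023,
}
```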
### Digital Art & Concept Sketches
- Expand Image: `canvas_size=[3000,2000]`
- original_image_size:[1000,1000], original_image_location:[500,500]
- prompt: `"mystical forest with glowing mushrooms"`
- negative_prompt: `"modern buildings"`
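Expressed as a payload sketch (placeholder image URL):

```python
data = {
    "image": "https://example.com/sketch.png",   # placeholder concept sketch
    "canvas_size": [3000, 2000],
    "original_image_size": [1000, 1000],
    "original_image_location": [500, 500],
    "prompt": "mystical forest with glowing mushrooms",
    "negative_prompt": "modern buildings",
}
```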
### Social Media Portrait Touch-Ups
- aspect_ratio: `9:16` or `2:3`
- Generative Fill: `prompt="soft skin smoothing, warm lighting"`
- negative_prompt: `"blemishes, harsh shadows"`
- preserve_alpha: `false` for JPEG-style export
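As a payload sketch (placeholder image URL):

```python
data = {
    "image": "https://example.com/portrait.jpg", # placeholder portrait
    "aspect_ratio": "9:16",
    "prompt": "soft skin smoothing, warm lighting",
    "negative_prompt": "blemishes, harsh shadows",
    "preserve_alpha": False,                     # JPEG-style export
}
```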
### UI/UX Prototypes & Transparent Assets
- preserve_alpha: `true`
- canvas_size: match screen dimensions (e.g., `[1440,1024]`)
- prompt: `"transparent background"`
- seed: fixed for consistent mockups
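And as a payload sketch (placeholder asset URL; the fixed seed value is arbitrary):

```python
data = {
    "image": "https://example.com/ui-asset.png", # placeholder transparent asset
    "canvas_size": [1440, 1024],                 # match the target screen dimensions
    "prompt": "transparent background",
    "preserve_alpha": True,
    "seed": 7,                                   # fixed for consistent mockups
}
```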
## 4. Pro Tips
- Always preview low-res drafts before final upscaling.
- Combine `visual_output_content_moderation=true` with brand safety workflows.
- Use `original_image_size` & `original_image_location` to composite multiple elements precisely (see the sketch below).
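For example, assuming `original_image_location` is the top-left pixel offset of the source on the canvas (this guide does not state the coordinate convention explicitly), centering a 1000x1000 source on a 3000x2000 canvas works out to:

```python
canvas_w, canvas_h = 3000, 2000
img_w, img_h = 1000, 1000
# Offset that centers the source: (canvas - image) / 2 on each axis
original_image_location = [(canvas_w - img_w) // 2, (canvas_h - img_h) // 2]  # [1000, 500]
```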
By tailoring these parameters, you'll harness Bria Image Editing API v2 to produce professional, pixel-perfect results across any visual workflow.
Other Popular Models
Discover other models you might be interested in.
sdxl-img2img
SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image using StableDiffusionImg2ImgPipeline from diffusers.

face-to-many
Turn a face into 3D, emoji, pixel art, video game, claymation or toy

codeformer
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.

sd2.1-faceswapper
Take a picture/gif and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.
