Bria Generative Fill
Bria AI enables precise generative image editing for seamless creative enhancements and transformations.
API
If you're integrating via the API, the example below can be adapted to your preferred programming language.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to convert a list of image URLs to base64
def image_urls_to_base64(image_urls):
    return [image_url_to_base64(url) for url in image_urls]

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/bria-gen-fill"

# Request payload
data = {
    "image": "https://segmind-resources.s3.amazonaws.com/input/84380902-5a75-4ae6-b499-4e8c08777792-6e2fc83c-b77b-4f89-869e-76fdbf746c81.jpeg",
    "mask_type": "manual",
    "prompt": "Place a wooden bench on the grass",
    "prompt_content_moderation": True,
    "negative_prompt": "No skyscrapers or ground vehicles",
    "preserve_alpha": True,
    "seed": 42,
    "visual_input_content_moderation": False,
    "visual_output_content_moderation": False
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
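The helper functions in this example are not actually called; they are there for when your source image is not already hosted. As a rough sketch, assuming the `image` field also accepts a Base64-encoded string (as described in the attributes below) and using a placeholder file name, you could send a local file like this:

```python
# Send a local file instead of a hosted URL, assuming the "image" field also
# accepts a Base64-encoded string (see the attributes below). "my_photo.jpeg"
# is a placeholder path.
local_image_b64 = image_file_to_base64("my_photo.jpeg")
data_local = dict(data, image=local_image_b64)  # copy the payload, swap the image

response = requests.post(url, json=data_local, headers=headers)
with open("output.png", "wb") as f:
    f.write(response.content)  # save the generated image to disk
```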
Attributes
- **image**: Provide the source image via URL or Base64. Use different formats to test input handling.
- **mask**: Define the generation area with a mask. Test different regions by altering the mask shape.
- **mask_type**: Specify whether the mask is manual or automatic. Use `manual` for a custom mask, `automatic` for an algorithm-generated one. Allowed values: `manual`, `automatic`.
- **prompt**: Enter a prompt to guide object generation. Try creative or detailed prompts for varied outputs.
- **prompt_content_moderation**: Moderate the prompt for safety. Enable in sensitive environments.
- **negative_prompt**: Exclude elements from generation using this field. Use it for undesired features or details.
- **preserve_alpha**: Decide whether the alpha channel is retained. Keep `true` for transparency needs.
- **seed**: Select a seed for reproducibility. Use a fixed seed for consistent results.
- **visual_input_content_moderation**: Enable to check input images for inappropriate content. Useful for platform compliance.
- **visual_output_content_moderation**: Check output images for content issues. Activate in moderated environments.
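When `mask_type` is `manual`, you will typically also supply the mask itself. The sketch below reuses the helpers and the `url` / `headers` variables from the code example above; the `mask` field name mirrors the attribute description but is not shown in that example, so treat it (and the file paths) as assumptions to verify against the endpoint's schema.

```python
# Manual-mask request, reusing the helpers and the `url` / `headers` variables
# from the code example above. The "mask" field name follows the attribute
# description but is not shown in that example; file paths are placeholders.
data_manual = {
    "image": image_file_to_base64("garden.jpeg"),
    "mask": image_file_to_base64("bench_mask.png"),  # white pixels typically mark the area to regenerate
    "mask_type": "manual",
    "prompt": "Place a wooden bench on the grass",
    "seed": 42,
}

response = requests.post(url, json=data_manual, headers=headers)
```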
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
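For example, with the `requests` call shown above, the header can be read directly from the response object:

```python
# Read the remaining-credit count from the response headers of the call above.
remaining = response.headers.get("x-remaining-credits")
if remaining is not None:
    print(f"Credits remaining: {remaining}")
```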
Resources to get you started
Everything you need to know to get the most out of Bria Generative Fill
# Bria AI: A Practical Guide to Generative Image Editing
Bria AI is a versatile image-editing API that empowers developers, designers, and product teams to apply pixel-perfect edits at scale. Whether you're swapping backgrounds, adding details, or upscaling old photos, this guide will help you choose the right parameters to get consistent, high-quality results.
## Getting Started
1. **Prepare Your Assets**
   - Source image (URL or Base64 PNG/JPEG/WebP)
   - Mask image (a manual PNG mask, or let Bria's segmentation run automatically)
2. **Decide on Operation**
   - Generative Fill: add new objects or patterns
   - Foreground Erase & Reconstruct: remove subjects and fill in the background
   - Canvas Expansion: extend edges with AI-generated scenery
   - Background Blur / Enhancement / Upscale
3. **Basic Request Flow**
   - Submit an asynchronous job via an API call
   - Poll the job ID until status = "succeeded" (a generic polling sketch follows this list)
   - Download the edited image (alpha is preserved by default)
## Key Parameters & Use-Case Recommendations
| Parameter | Use Case | Suggested Value |
|-----------------------------------|---------------------------------|--------------------------------------------------|
| **mask_type** | Precise control | `"manual"` for custom shapes |
| | Quick background removal | `"automatic"` to leverage AI segmentation |
| **prompt / negative_prompt** | Creative fill | Detailed positive prompt; use negative to exclude unwanted elements ("No reflections") |
| **seed** | Reproducible output | Fixed integer (e.g. `42`); omit for variety |
| **preserve_alpha** | Icons / overlays | `true` to keep transparency |
| **visual_input_content_moderation** / **visual_output_content_moderation** | Compliance | `true` in regulated or UGC-heavy environments |
| **prompt_content_moderation** | Sensitive themes | `true` to filter inappropriate text |
### Sample Scenarios
- **Product Photography**
mask_type: `manual`; prompt: "Studio shot of a sleek bottle on white background"; seed: 123; preserve_alpha: `false`.
- **Marketing Assets**
canvas_expansion + prompt: "Fantasy forest edge with soft light"; negative_prompt: "No animals"; seed omitted for creative variance.
- **Mobile Hero Images**
background_blur + mask_type: `automatic`; prompt: N/A; preserve_alpha: `true` to overlay text.
- **Digital Restoration & Upscaling**
use `upscale` endpoint; seed: same value for consistency; enable enhancement filters.
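As an illustration, the first scenario could be expressed with the request shape from the code example at the top of this page. The `mask` field and the file paths are placeholders rather than documented requirements:

```python
# Sketch of the "Product Photography" scenario, reusing the helpers and the
# `url` / `headers` variables from the code example at the top of the page.
# The "mask" field and file paths are illustrative placeholders.
product_payload = {
    "image": image_file_to_base64("bottle_raw.jpeg"),
    "mask": image_file_to_base64("bottle_mask.png"),
    "mask_type": "manual",
    "prompt": "Studio shot of a sleek bottle on white background",
    "seed": 123,
    "preserve_alpha": False,
}

response = requests.post(url, json=product_payload, headers=headers)
```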
## Prompt Tips for Best Results
- Define a **precise mask** in PNG for tight control.
- Write **concise but descriptive prompts** ("A neon city skyline at dusk").
- Add a **negative_prompt** to eliminate artifacts.
- Lock the **seed** for repeatable edits or leave blank for fresh outputs.
- Leverage **content moderation** flags to ensure brand safety.
By tailoring these parameters to your workflow, Bria AI will produce crisp, reliable, and on-brand visuals every time. Happy editing!
Other Popular Models
Discover other models you might be interested in.
sdxl-img2img
SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image via the StableDiffusionImg2ImgPipeline from diffusers.

faceswap-v2
Take a picture/GIF and replace the face in it with a face of your choice. You only need one image of the desired face; no dataset, no training.

sdxl-inpaint
This model is capable of generating photo-realistic images from any text input, with the added capability of inpainting images using a mask.

codeformer
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.
