Bria Generate Background
Transform images through advanced background editing and generative content creation for diverse applications.

Resources to get you started
Everything you need to know to get the most out of Bria Generate Background
Bria AI Image Editing API v2: Generative Image Editing Model
What is Bria AI Image Editing API v2?
Bria AI Image Editing API v2 is an advanced generative image editing solution designed for developers, creators, and product managers. It offers a full suite of background operations (removal, replacement, and blurring) and precise content editing tools such as erasing and generative fill within masked regions. By preserving original image quality and supporting asynchronous processing with request tracking, Bria AI streamlines complex workflows and accelerates your integration of AI-driven visual enhancements.
Key Features
- Background Operations: Remove, replace, or blur backgrounds, with optional transparent background detection (`force_background_detection`).
- Masked Region Editing: Erase unwanted areas and fill them with contextually generated content.
- Guided Generation: Use text prompts (`prompt`) or reference images (`ref_images`) for custom backgrounds.
- High-Fidelity Output: Retain native resolution via the `original_quality` toggle.
- Asynchronous Processing: Submit jobs, poll status, and retrieve results; ideal for large batches (see the sketch after this list).
- Content Moderation: Enable `prompt_content_moderation`, `visual_input_content_moderation`, and `visual_output_content_moderation` for safety.
- Fast Mode: Accelerate previews with `fast`, then switch it off for full-quality renders.
- Reproducibility: Control randomness using the `seed` parameter.
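To illustrate how these parameters might be combined in a single request, here is a minimal Python sketch that submits a background-generation job. The endpoint URL, auth header name, input-image field, and response field are assumptions for illustration only; the parameter names come from the feature list above, and the exact contract should be checked against the official API reference.

```python
import requests

# Hypothetical endpoint and auth header -- verify against the Bria API reference.
API_URL = "https://api.example.com/v2/image-editing/generate-background"
API_KEY = "YOUR_API_KEY"

payload = {
    "image_url": "https://example.com/product.jpg",  # input image (assumed field name)
    "prompt": "sunny beach with palm trees",         # text-guided background generation
    "force_background_detection": True,              # detect transparent backgrounds
    "original_quality": True,                        # keep the input's native resolution
    "fast": True,                                    # quick preview mode
    "seed": 42,                                      # reproducible output
    "prompt_content_moderation": True,               # moderation flags
    "visual_input_content_moderation": True,
    "visual_output_content_moderation": True,
}

# Submit the asynchronous job; the response is assumed to contain a job ID.
response = requests.post(API_URL, json=payload, headers={"api-key": API_KEY})
response.raise_for_status()
job_id = response.json().get("request_id")  # assumed response field
print(f"Submitted job: {job_id}")
```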
Best Use Cases
- E-commerce: Automate product photo background replacement and enhancement.
- Marketing: Generate hero images and banners with on-brand backgrounds.
- Social Media: Create engaging posts by erasing distractions and filling scenes.
- Real Estate: Virtually stage interiors with new furniture or décor.
- Compliance: Integrate moderation flags to filter user-generated content.
Prompt Tips and Output Quality
- Use clear, concise prompts (e.g., "sunny beach with palm trees").
- Supply high-quality `ref_images` for style consistency.
- Add a `negative_prompt` (e.g., "no cars") to exclude elements.
- Enable `refine_prompt` for intricate or multi-object scenes.
- Toggle `original_quality` when resolution matters most.
- Try `fast` mode for quick iterations, then disable it for final renders.
- Set a specific `seed` to reproduce outputs across calls (see the payload sketch after this list).
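To make these tips concrete, the sketch below gathers the prompt-guidance options into one request body. The parameter names come from this page; the overall payload structure, the input-image field, and the way reference images are passed are assumptions to verify against the API reference.

```python
# A request body combining the prompt-guidance options discussed above.
# The "image_url" field and the list format for ref_images are assumptions.
payload = {
    "image_url": "https://example.com/living-room.jpg",
    "prompt": "bright Scandinavian interior with light wood furniture",
    "negative_prompt": "no cars, no people",                     # exclude unwanted elements
    "ref_images": ["https://example.com/style-reference.jpg"],   # style consistency
    "refine_prompt": True,                                       # help with multi-object scenes
    "original_quality": True,                                    # keep native resolution
    "fast": False,                                                # full-quality final render
    "seed": 1234,                                                 # reproduce the same output later
}
```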
FAQs
Q: How do I integrate Bria AI Image Editing API v2?
A: Call our RESTful endpoints with your API key, provide the image URL, set parameters, and poll the job endpoint to fetch results.
Q: What is asynchronous processing?
A: After submitting your request, you receive a job ID. Poll the status endpoint until it's complete, then download the edited image.
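As a rough illustration of that flow, the polling loop below checks a status endpoint until the job finishes and then downloads the result. The status URL, response fields, and status values are placeholders; only the submit-then-poll pattern itself is described on this page.

```python
import time
import requests

# Hypothetical status endpoint and response fields -- verify against the API reference.
STATUS_URL = "https://api.example.com/v2/image-editing/status/{job_id}"
API_KEY = "YOUR_API_KEY"

def wait_for_result(job_id: str, poll_interval: float = 2.0) -> bytes:
    """Poll the job status until it completes, then download the edited image."""
    while True:
        status = requests.get(
            STATUS_URL.format(job_id=job_id), headers={"api-key": API_KEY}
        ).json()
        if status.get("status") == "COMPLETED":   # assumed status value
            result_url = status["result_url"]     # assumed response field
            return requests.get(result_url).content
        if status.get("status") == "FAILED":
            raise RuntimeError(f"Job {job_id} failed: {status}")
        time.sleep(poll_interval)
```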
Q: How do I preserve original resolution?
A: Include "original_quality": true
in your JSON payload to maintain the input imageâs native resolution.
Q: How does content moderation work?
A: Turn on `prompt_content_moderation` and the visual moderation flags to automatically filter unsafe or inappropriate content.
Q: Can I generate entirely new backgrounds?
A: Yes. Use the Generate Background model with text prompts or reference images to create bespoke scenes tailored to your needs.
Other Popular Models
Discover other models you might be interested in.
SDXL Img2Img
SDXL Img2Img performs text-guided image-to-image translation. It uses Stable Diffusion weights to generate new images from an input image via the `StableDiffusionImg2ImgPipeline` from diffusers.
Faceswap V2
Take a picture or GIF and replace the face in it with a face of your choice. You only need one image of the desired face; no dataset and no training are required.
Stable Diffusion XL 1.0
The SDXL model is the official upgrade to the v1.5 model and is released as open-source software.
Codeformer
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.