Bria Reimagine
Bria AI Reimagine transforms reference images into detailed, styled visuals with creative flexibility.
Resources to get you started
Everything you need to know to get the most out of Bria Reimagine
Bria AI Reimagine Model
What is Bria AI Reimagine?
Bria AI Reimagine is an advanced text-to-image generation model in the Bria AI suite that preserves a source layout while applying new styles and visual treatments. Unlike the Base, Fast, or HD variants, which focus on speed or fidelity, Reimagine uses a structure reference image (via ControlNet) to guide composition, depth, and edges, so you get a consistent scene structure with full creative flexibility over texture, color, and lighting.
Key Features
- Structure Reference Control
  - Use `structure_image_url` or `structure_image_file` to anchor your composition
  - `structure_ref_influence` slider (0–1) controls adherence to the reference
- ControlNet Guidance Methods
  - Edge detection, depth maps, recoloring, and color grids for fine-grained control
- Speed vs. Quality Modes
  - Fast Mode (boolean): prioritize throughput for quick iterations
  - HD Mode: maximize detail with higher-resolution output
- Reproducibility & Refinement
  - `seed` parameter ensures identical results across runs
  - `steps_num` (4–50) sets the number of diffusion iterations for more or less detail
- Optional Enhancements & Safety
  - `enhance_image`: boost clarity and sharpness
  - `prompt_content_moderation` & `content_moderation` toggles enforce brand and safety policies
  - `ip_signal` detection flags potential copyright issues
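The features above map onto request parameters. A minimal sketch of a request body, assuming a JSON HTTP API; the prompt and URL values are illustrative, and only the parameter names documented on this page are used:

```python
# Hypothetical Bria Reimagine request payload. Field names follow the
# parameters documented above; the values are placeholder examples.
payload = {
    "prompt": "a watercolor rendering of the reference scene at golden hour",
    "structure_image_url": "https://example.com/reference.png",  # or structure_image_file
    "structure_ref_influence": 0.8,  # 0-1: higher = closer adherence to the reference layout
    "fast": False,                   # Fast Mode boolean: trade detail for throughput
    "seed": 42,                      # fixed seed -> identical results across runs
    "steps_num": 20,                 # 4-50 diffusion iterations
    "enhance_image": True,           # boost clarity and sharpness
    "prompt_content_moderation": True,
    "content_moderation": True,
}
```

You would send this payload as the JSON body of a POST request with your API credentials; consult the official API reference for the exact endpoint and authentication headers.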
Best Use Cases
- Creative Storyboarding: Maintain camera angles and scene blocks while iterating on color and style.
- Product Mockups & UI Layouts: Preserve layout grids or wireframes and restyle for different themes.
- Illustration & Concept Art: Generate consistent character or environment sketches with unique artistic treatments.
- Marketing & Social Media Assets: Produce templated visuals that align across campaigns with varied moods.
Prompt Tips and Output Quality
- Be Descriptive: Detailed prompts yield richer textures and more accurate scene elements.
- High-Resolution References: Supply clear structure images to avoid artifacts in complex compositions.
- Adjust Influence: Lower `structure_ref_influence` (~0.3–0.5) for looser interpretation; raise it (>0.8) for strict adherence to the reference.
- Tweak Refinement: Increase `steps_num` (20–30) for ultra-fine details; lower it (8–12) for rough drafts.
- Seed Consistency: Use the same `seed` to reproduce previous outputs exactly, which is perfect for A/B testing.
- Moderation Settings: Toggle off moderation flags if you require unfiltered creative exploration.
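These tips suggest a two-pass workflow: iterate on cheap drafts, then rerun the chosen prompt at full quality. A sketch under the parameter names above; the prompt, seed, and exact values are illustrative:

```python
# Illustrative draft-then-final workflow for Reimagine parameters.
# Parameter names come from this page; values are examples, not defaults.
base = {"prompt": "isometric city block, pastel palette", "seed": 7}

draft = {**base, "fast": True, "steps_num": 8,
         "structure_ref_influence": 0.4}   # loose interpretation, quick turnaround

final = {**base, "fast": False, "steps_num": 28,
         "structure_ref_influence": 0.85}  # strict layout adherence, fine detail
```

Because both passes share the same `seed` and prompt, the final render is a refined version of the draft you approved rather than an unrelated composition.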
FAQs
Q: How do I maintain the original layout of my reference image?
A: Upload a clear base image via `structure_image_file` or URL and set `structure_ref_influence` high (0.75–1.0).
Q: Can I speed up rendering without sacrificing all detail?
A: Enable `fast` mode for quicker drafts, then rerun in HD mode or increase `steps_num` for final versions.
Q: What’s the default seed and steps?
A: By default, `seed` is 42 and `steps_num` is 12. Adjust both for reproducibility and quality control.
Q: How can I enforce brand-safe outputs?
A: Turn on `prompt_content_moderation` and `content_moderation` to filter generated images against unsafe or IP-sensitive content.
Q: Is custom color control supported?
A: Yes—combine ControlNet color grid guidance with descriptive prompts for consistent color theming.
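Several FAQ answers boil down to choosing the right parameter defaults. A hypothetical helper that bundles the documented defaults (`seed=42`, `steps_num=12`) and moderation toggles; the function itself is an assumption for illustration, not part of the Bria API:

```python
def reimagine_params(seed=42, steps_num=12, moderated=True, **overrides):
    """Build a Reimagine parameter dict (hypothetical convenience helper).

    Defaults mirror the ones documented in the FAQ: seed=42, steps_num=12.
    Set moderated=False only when unfiltered exploration is acceptable.
    """
    params = {
        "seed": seed,
        "steps_num": steps_num,
        "prompt_content_moderation": moderated,
        "content_moderation": moderated,
    }
    params.update(overrides)  # e.g. structure_ref_influence, fast, enhance_image
    return params

# Reproduce a previous render exactly by reusing its seed:
rerun = reimagine_params(seed=42, structure_ref_influence=0.9)
```

Keeping both moderation flags behind one switch avoids the easy mistake of filtering prompts but not the generated images, or vice versa.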