Each parameter of the model controls a specific aspect of the furniture staging process. Here's a breakdown:
- prompt: This describes the scene or subject you want to stage. Write a detailed description of how the staged furniture or design should look.
- main_image: The primary image of the room or space where the furniture will be staged. Ensure the URL is accessible and points to a clear, high-quality image.
- overlay_image: The secondary image containing the furniture or staging items to overlay on the main image. Provide the URL of the furniture/staging image.
- main_image_mask: A mask for the main image. This defines the areas to focus on for staging (e.g., walls, floors). If not provided, staging might be applied globally.
- overlay_image_mask (optional):
  - If the furniture in the overlay image has a white background, the model will automatically create a mask for it.
  - If the overlay image is a lifestyle image or contains more than one furniture item, you should provide a custom mask.
- steps: This determines the number of processing steps. Higher values lead to better quality but take longer.
- seed: Sets a random seed for reproducibility. Change this if you want different variations of the same setup.
- guidance: Controls how strictly the output adheres to the prompt. Higher values ensure the output matches your prompt but can reduce creativity.
- image_format: Specifies the format of the output image. Options are usually png, jpeg, etc. Choose png for lossless quality.
- image_quality: Sets the quality of the output image. Higher values produce better quality but larger file sizes.
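
To see how these parameters fit together, here is a minimal sketch of a request payload in Python. The endpoint URL, API-key header, image URLs, and response handling are placeholders for illustration only; substitute the actual values from your provider's documentation.

```python
import requests

# Hypothetical endpoint and API key -- replace with your provider's actual values.
API_URL = "https://api.example.com/v1/furniture-staging"
API_KEY = "YOUR_API_KEY"

payload = {
    # Scene description: be specific about furniture type, arrangement, and style.
    "prompt": "Place a modern gray sofa against the back wall, minimalist style",
    # Publicly accessible URLs for the room and the furniture overlay.
    "main_image": "https://example.com/images/empty-living-room.png",
    "overlay_image": "https://example.com/images/gray-sofa-white-bg.png",
    # Optional mask; overlay_image_mask is omitted here because the overlay
    # has a white background and the model can build its own mask.
    "main_image_mask": "https://example.com/images/living-room-floor-mask.png",
    # Quality and reproducibility controls.
    "steps": 30,          # more steps = better quality, but slower
    "seed": 42,           # change this for different variations of the same setup
    "guidance": 7.5,      # higher = stricter adherence to the prompt
    "image_format": "png",
    "image_quality": 95,
}

response = requests.post(API_URL, json=payload, headers={"x-api-key": API_KEY})
response.raise_for_status()

# Assuming the API returns the generated image bytes directly.
with open("staged_room.png", "wb") as f:
    f.write(response.content)
```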
Prepare Your Inputs
Make sure all inputs are ready:
- Images:
  - Upload the main_image and overlay_image to a cloud service or use an accessible URL.
  - If masks are needed, ensure they match the size and format of their respective images (a quick local check is sketched below).
- Prompt:
  - Be specific about the furniture type, arrangement, and style (e.g., "Place a modern gray sofa").
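
Here is a small sketch, using Pillow, of how you might confirm locally that a mask has the same dimensions as its image before uploading. The file names are placeholders.

```python
from PIL import Image

def mask_matches(image_path: str, mask_path: str) -> bool:
    """Return True if the mask has the same width and height as the image."""
    with Image.open(image_path) as img, Image.open(mask_path) as mask:
        return img.size == mask.size

# Placeholder file names -- point these at your own local copies before uploading.
if not mask_matches("living-room.png", "living-room-mask.png"):
    raise ValueError("Mask dimensions do not match the main image")
```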
Other Popular Models
sdxl-img2img
SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image via the StableDiffusionImg2ImgPipeline from diffusers.
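
For context, a typical img2img call with that pipeline looks roughly like the sketch below. The checkpoint name, input image, and prompt are placeholders, not necessarily what this hosted model uses under the hood.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Placeholder checkpoint -- the hosted model may use different weights.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("room.png").convert("RGB").resize((768, 512))

result = pipe(
    prompt="a bright living room with a modern gray sofa",
    image=init_image,       # the starting image to translate
    strength=0.6,           # how far the output may deviate from the input
    guidance_scale=7.5,     # adherence to the prompt
).images[0]
result.save("img2img_output.png")
```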

faceswap-v2
Take a picture/gif and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training required.

codeformer
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.

sd2.1-faceswapper
Take a picture/gif and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training required.
