Story Diffusion

Story Diffusion turns your written narratives into stunning image sequences.

Playground

Try the model in real time below.

Examples

Check out what others have created with Story Diffusion
Example preview (seed: 42, guidance_scale: 5)

API

If you're looking for an API, code examples are available in your preferred programming language; a Python example is shown below.

POST
```python
import requests

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/storydiffusion"

# Prepare data and files
data = {}
files = {}

data['seed'] = 42
data['num_ids'] = 3
data['sd_model'] = "Unstable"
data['num_steps'] = 25
# For parameter "ref_image", you can send a raw file or a URI:
# files['ref_image'] = open('IMAGE_PATH', 'rb')  # To send a file
# data['ref_image'] = 'IMAGE_URI'  # To send a URI
data['image_width'] = 768
data['image_height'] = 768
data['sa32_setting'] = 0.5
data['sa64_setting'] = 0.5
data['output_format'] = "webp"
data['guidance_scale'] = 5
data['output_quality'] = 80
data['negative_prompt'] = "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs"
data['character_description'] = "a man, wearing black suit"
data['comic_description'] = "at home, read new paper #at home, The newspaper says there is a treasure house in the forest.\non the road, near the forest\n[NC] The car on the road, near the forest #He drives to the forest in search of treasure.\n[NC]A tiger appeared in the forest, at night \nvery frightened, open mouth, in the forest, at night\nrunning very fast, in the forest, at night\n[NC] A house in the forest, at night #Suddenly, he discovers the treasure house!\nin the house filled with treasure, laughing, at night #He is overjoyed inside the house."
data['style_strength_ratio'] = 20
data['style_name'] = "Disney Charactor"
data['comic_style'] = "Classic Comic Style"

headers = {'x-api-key': api_key}
response = requests.post(url, data=data, files=files, headers=headers)
print(response.content)  # The response is the generated image
```
RESPONSE
image/jpeg
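Since the response body is the raw image bytes, a minimal sketch for persisting it to disk (the filename is an illustration; the extension should match the `output_format` you requested):

```python
def save_output(content: bytes, path: str) -> int:
    """Write the binary image payload to disk; returns bytes written."""
    with open(path, "wb") as f:
        return f.write(content)

# With a real call, `content` would be `response.content`:
# save_output(response.content, "story.webp")
```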
HTTP Response Codes
200 - OK: Image generated
401 - Unauthorized: User authentication failed
404 - Not Found: The requested URL does not exist
405 - Method Not Allowed: The requested HTTP method is not allowed
406 - Not Acceptable: Not enough credits
500 - Server Error: The server had an issue processing the request
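A small sketch for turning these documented status codes into readable outcomes before touching the payload (the helper name is an illustration; the messages mirror the table above):

```python
# Documented Story Diffusion response codes mapped to readable messages.
STATUS_MESSAGES = {
    200: "OK: image generated",
    401: "Unauthorized: user authentication failed",
    404: "Not Found: the requested URL does not exist",
    405: "Method Not Allowed: the requested HTTP method is not allowed",
    406: "Not Acceptable: not enough credits",
    500: "Server Error: the server had an issue processing the request",
}

def describe_status(code: int) -> str:
    """Return the documented meaning of an HTTP status code."""
    return STATUS_MESSAGES.get(code, f"Unexpected status {code}")
```

In practice you would check `response.status_code` against 200 before saving `response.content` as an image.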

Attributes


seed (int, default: 42)

Random seed. Leave blank to randomize the seed


num_ids (int, default: 3)

Number of identity (ID) images among the total images. This should not exceed the total number of line-separated prompts.


sd_model (enum: str, default: Unstable)

Allowed values:


num_steps (int, default: 25)

Number of sampling steps

min: 20, max: 50


ref_image (str, default: 1)

Reference image for the character


image_width (enum: str, default: 768)

Allowed values:


image_height (enum: str, default: 768)

Allowed values:


sa32_setting (float, default: 0.5)

The degree of Paired Attention at 32 x 32 self-attention layers

min: 0, max: 1


sa64_setting (float, default: 0.5)

The degree of Paired Attention at 64 x 64 self-attention layers

min: 0, max: 1


output_format (enum: str, default: webp)

Allowed values:


guidance_scale (float, default: 5)

Scale for classifier-free guidance

min: 0.1, max: 10


output_quality (int, default: 80)

Quality of the output images, from 0 to 100; 100 is the best quality, 0 the lowest.

min: 0, max: 100


negative_prompt (str, default: bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs)

Describe things you do not want to see in the output


character_description (str, default: a man wearing black suit)

General description of the character. Add the trigger word 'img' when using ref_image: if ref_image is provided, make sure to follow the class word you want to customize with the trigger word 'img', such as 'man img', 'woman img', or 'girl img'.


comic_description (str, default: at home, read new paper #at home, The newspaper says there is a treasure house in the forest. on the road, near the forest [NC] The car on the road, near the forest #He drives to the forest in search of treasure. [NC]A tiger appeared in the forest, at night very frightened, open mouth, in the forest, at night running very fast, in the forest, at night [NC] A house in the forest, at night #Suddenly, he discovers the treasure house! in the house filled with treasure, laughing, at night #He is overjoyed inside the house.)

Each frame is separated by a new line; only the first 10 prompts are used, for demo speed. Remove [NC] when using ref_image. For comic_description NOT using ref_image: (1) Typesetting style and captioning are supported. By default, the prompt is used as the caption for each image. To change the caption, add a '#' at the end of a line; only the part after the '#' will be added as the caption for that image. (2) The [NC] symbol is a flag indicating that no characters should be present in the generated scene image. To use it, prepend '[NC]' at the beginning of the line.
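The per-frame rules above can be sketched as a small helper that assembles a comic_description string from a list of frames (the function name and tuple layout are illustrations, not part of the API):

```python
def build_comic_description(frames):
    """Join per-frame prompts into the comic_description format:
    one frame per line, an optional '#caption' suffix, and an optional
    '[NC] ' prefix for frames that should contain no characters."""
    lines = []
    for prompt, caption, no_characters in frames:
        line = ("[NC] " if no_characters else "") + prompt
        if caption:
            line += " #" + caption
        lines.append(line)
    return "\n".join(lines)
```

For example, two frames from the default prompt would serialize as `"at home, read new paper #The newspaper says...\n[NC] A house in the forest, at night #Suddenly..."`.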


style_strength_ratio (int, default: 20)

Style strength of the reference image (%). Only used if ref_image is provided.

min: 15, max: 50


style_name (enum: str, default: Disney Charactor)

Allowed values:


comic_style (enum: str, default: Classic Comic Style)

Allowed values:

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.


Pricing

Serverless Pricing

Buy credits that can be used anywhere on Segmind

$0.0015 per second
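A back-of-envelope cost estimate at the serverless rate quoted above (the helper is an illustration; actual billing is determined by Segmind):

```python
PRICE_PER_SECOND_USD = 0.0015  # serverless rate quoted above

def estimate_cost(generation_seconds: float) -> float:
    """Approximate charge in USD for a generation of the given duration."""
    return round(generation_seconds * PRICE_PER_SECOND_USD, 4)
```

For instance, a 20-second generation would cost roughly $0.03.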
FEATURES

PixelFlow allows you to use all these features

Unlock the full potential of generative AI with Segmind. Create stunning visuals and innovative designs with total creative control. Take advantage of powerful development tools to automate processes and models, elevating your creative workflow.

Segmented Creation Workflow

Gain greater control by dividing the creative process into distinct steps, refining each phase.

Customized Output

Customize at various stages, from initial generation to final adjustments, ensuring tailored creative outputs.

Layering Different Models

Integrate and utilize multiple models simultaneously, producing complex and polished creative results.

Workflow APIs

Deploy Pixelflows as APIs quickly, without server setup, ensuring scalability and efficiency.

Story Diffusion

Story Diffusion leverages the power of diffusion models to generate a series of images that cohesively depict your story's scenes. It's like having a visual effects team at your fingertips, translating your words into a captivating visual experience. Story Diffusion goes beyond simply generating individual images. Its core strength lies in maintaining consistency across the entire sequence. Characters, settings, and overall mood remain thematically linked, ensuring a visually cohesive story arc.

Application of Story Diffusion

This innovative model has far-reaching implications for various creative fields:

  • Concept Art and Illustration: Story Diffusion empowers artists and designers by generating visual references that perfectly capture the essence of their ideas. It acts as a springboard for further creative exploration.

  • Storyboarding and Pre-visualization: Filmmakers and animators can use Story Diffusion to create dynamic storyboards that visualize key scenes and plot points. This streamlines the pre-production process, saving time and resources.

  • Graphic Novels and Comics: Breathe life into static panels with Story Diffusion. Generate visuals that showcase dynamic action sequences and character emotions, enhancing the reading experience.

  • Interactive Storytelling: Integrate Story Diffusion into interactive storytelling platforms. Users can shape the narrative, and the model generates corresponding visuals on the fly, creating a truly personalized and engaging experience.

F.A.Q.

Frequently Asked Questions

Take creative control today and thrive.

Start building with a free account or consult an expert for your Pro or Enterprise needs. Segmind's tools empower you to transform your creative visions into reality.
