LLaVA 13B
LLaVA 13B is a vision-language model that accepts both images and text as inputs.
LLaVA 13B is a vision-language model (VLM) trained on instruction-following data generated by open-source (OSS) LLMs. Its architecture connects a vision encoder to a language model, letting it reason jointly over visual content and textual prompts. FireLLaVA supports multi-image and multi-prompt generation: you can include several images in a single query to add context and specificity.
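To illustrate the multi-image support, here is a minimal sketch of how a multi-image request might be assembled in the OpenAI-style chat format that many LLaVA hosts accept. The `llava-13b` model id and the exact payload schema are assumptions for illustration, not official values; check your provider's API reference for the real ones.

```python
# Sketch: build a chat request that mixes one text prompt with
# multiple images, using OpenAI-style "content parts".
# The model id below is a placeholder, not an official identifier.
import base64


def image_part(path: str) -> dict:
    """Encode a local image file as a base64 data-URL content part."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "type": "image_url",
        "image_url": {"url": f"data:image/png;base64,{b64}"},
    }


def build_request(prompt: str, image_paths: list[str]) -> dict:
    """Combine one text prompt and any number of images into one message."""
    content = [{"type": "text", "text": prompt}]
    content.extend(image_part(p) for p in image_paths)
    return {
        "model": "llava-13b",  # placeholder model id (assumption)
        "messages": [{"role": "user", "content": content}],
    }
```

The resulting dictionary can be JSON-serialized and POSTed to a chat-completions endpoint; the key point is that each image travels as its own content part alongside the text.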
Applications
- Image Captioning: Generate descriptive captions for images, enriching content across social media, e-commerce, and more.
- Visual Question Answering (VQA): Pose questions about images, and FireLLaVA provides accurate answers.
- Creative Writing: Fuel your imagination by combining visual cues with textual prompts.
Other Popular Models
Discover other models you might be interested in.
SDXL Controlnet
SDXL ControlNet gives unprecedented control over text-to-image generation. SDXL ControlNet models introduce conditioning inputs, which provide additional information to guide the image generation process.
IDM VTON
Best-in-class virtual clothing try-on in the wild.
Fooocus
Fooocus enables high-quality image generation effortlessly, combining the best of Stable Diffusion and Midjourney.
Codeformer
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.