17 models
Meta Models
Meta leads open-source AI with the Llama model family, SAM for universal image segmentation, and MusicGen for AI music creation. Llama models deliver frontier language capabilities with full open-weight access. SAM and SAM 3 segment any object in images and video with remarkable accuracy. MusicGen composes original tracks from text descriptions. Access every Meta model through the Segmind API with a single endpoint call.
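The "single endpoint call" above can be sketched with the standard library alone. This is a minimal illustration, not official client code: the base URL and `x-api-key` header follow Segmind's REST convention, but the model slug and the request payload shape are assumptions — check the individual model page for the exact values.

```python
import json
import urllib.request

SEGMIND_BASE = "https://api.segmind.com/v1"  # Segmind REST base URL


def build_request(model_slug: str, payload: dict, api_key: str) -> urllib.request.Request:
    """Build a POST request for a Segmind model endpoint.

    The slug names the model (one slug per catalog entry); the payload
    schema differs per model, so consult the model's page for fields.
    """
    return urllib.request.Request(
        url=f"{SEGMIND_BASE}/{model_slug}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )


# Hypothetical example: prompt a Llama 3.1 model (slug is an assumption).
req = build_request(
    "llama-v3p1-8b-instruct",
    {"messages": [{"role": "user", "content": "Summarize SAM in one line."}]},
    api_key="YOUR_API_KEY",
)
# urllib.request.urlopen(req) would send it; omitted here to avoid a live call.
```

Every model in this list is reached the same way — only the slug and payload change.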
Sam Audio Large
Isolate any described sound from mixed audio tracks.
Sam 3D Object
Convert a single 2D image into a detailed 3D object model.
Sam 3D Body
Reconstruct 3D human body meshes from a single photo.
Sam3 Video
Real-time video segmentation and multi-object tracking.
Sam3 Image
Precise, promptable object segmentation in images.
Sam V2.1 Hiera Large
Meta's next-gen segmentation model for images and video.
Llama 4 Scout Instruct Basic
Unlock powerful multimodal AI with Llama 4 Scout, a model with 17 billion active parameters offering leading text and image understanding.
Llama 4 Maverick Instruct Basic
Llama 4 Maverick Instruct Basic is a 400B-parameter powerhouse with 128 experts for unparalleled text and image understanding.
Meta MusicGen Medium
Transform text into music with MusicGen, Meta's AI model for generating unique, high-quality audio from simple text descriptions.
Sam V2 Image
SAM v2, the next-gen segmentation model from Meta AI, revolutionizes computer vision. Building on SAM's success, it excels at accurately segmenting objects in images, offering robust and efficient solutions for various visual contexts.
Sam V2 Video
SAM v2 Video by Meta AI allows promptable segmentation of objects in videos.
Llama 3.1 405b
Meta developed and released the Llama 3.1 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B, 70B, and 405B sizes. The instruction-tuned models are optimized for dialogue use cases and outperform many available open-source chat models on common industry benchmarks.
Llama 3.1 70b
Meta developed and released the Llama 3.1 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B, 70B, and 405B sizes. The instruction-tuned models are optimized for dialogue use cases and outperform many available open-source chat models on common industry benchmarks.
Llama 3.1 8b
Meta developed and released the Llama 3.1 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B, 70B, and 405B sizes. The instruction-tuned models are optimized for dialogue use cases and outperform many available open-source chat models on common industry benchmarks.
Llama 3 8b
Meta developed and released the Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many available open-source chat models on common industry benchmarks.
Llama 3 70b
Meta developed and released the Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many available open-source chat models on common industry benchmarks.
Segment Anything Model
The Segment Anything Model (SAM) produces high-quality object masks from input prompts such as points or boxes, and it can generate masks for all objects in an image.
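As a sketch of how point prompts produce masks in Meta's open-source `segment_anything` package: the helper below wraps the package's `SamPredictor.predict` call. Checkpoint loading and image decoding are omitted, so the `predictor` argument is assumed to be an already-initialized `SamPredictor`; the point coordinates in the docstring are purely illustrative.

```python
import numpy as np


def segment_with_points(predictor, image, points, labels):
    """Return the best point-prompted mask for `image`.

    predictor : an initialized segment_anything SamPredictor
    image     : HxWx3 uint8 RGB array
    points    : list of (x, y) pixel coordinates, e.g. [(500, 375)]
    labels    : list of 1 (foreground) or 0 (background), one per point
    """
    # Compute the image embedding once; subsequent prompts reuse it.
    predictor.set_image(image)

    # Ask for multiple candidate masks and keep the highest-scoring one.
    masks, scores, _logits = predictor.predict(
        point_coords=np.asarray(points, dtype=np.float32),
        point_labels=np.asarray(labels, dtype=np.int32),
        multimask_output=True,
    )
    return masks[np.argmax(scores)]  # boolean HxW mask
```

The same prompt-then-predict pattern underlies the hosted SAM endpoints above, where points and boxes travel as JSON fields instead of NumPy arrays.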