SAM v2.1 Hiera Large
SAM v2.1 (Segment Anything Model 2.1) represents the next evolution in promptable visual segmentation by Meta AI, delivering more accurate and efficient mask generation for a wide range of image types and contexts.
Model Information
- Architecture: SAM v2.1 extends the original SAM framework with optimized Hiera-based encoders for higher accuracy and speed.
- Flexible Outputs: Supports overlay images, polygon coordinates, COCO RLE encodings, and individual PNG masks.
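The COCO RLE output mentioned above stores each binary mask as alternating run lengths in column-major order, always starting with the zero-run. A minimal sketch of the uncompressed variant in NumPy (function names are illustrative; real pipelines typically use `pycocotools.mask` for the compressed form):

```python
import numpy as np

def rle_encode(mask: np.ndarray) -> dict:
    """Encode a binary mask as uncompressed COCO RLE (illustrative sketch)."""
    flat = mask.flatten(order="F").astype(np.uint8)  # column-major order
    change = np.flatnonzero(np.diff(flat)) + 1       # run boundaries
    counts = np.diff(np.concatenate(([0], change, [flat.size]))).tolist()
    if flat[0] == 1:          # RLE must begin with the count of zeros
        counts = [0] + counts
    return {"size": list(mask.shape), "counts": counts}

def rle_decode(rle: dict) -> np.ndarray:
    """Decode uncompressed COCO RLE back to a binary mask."""
    h, w = rle["size"]
    flat = np.zeros(h * w, dtype=np.uint8)
    pos, val = 0, 0
    for run in rle["counts"]:
        flat[pos:pos + run] = val
        pos += run
        val = 1 - val         # runs alternate between 0s and 1s
    return flat.reshape((h, w), order="F")
```

Round-tripping a mask through `rle_encode` and `rle_decode` returns the original array, which is a useful sanity check when consuming the model's RLE output.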
How to Use the Model
- Upload an image (JPG, PNG, or WebP) — either as a URL or base64 string.
- Configure output options:
  - `make_overlay` – Generate a visual overlay of masks on the input image.
  - `save_polygons` – Return polygon coordinates for each segmented region.
  - `save_rle` – Export COCO RLE encodings for mask data.
  - `save_pngs` – Save individual masks as PNGs in a ZIP file.
- Adjust advanced parameters for quality and precision:
  - `points_per_side`, `points_per_batch`, `pred_iou_thresh`, `stability_score_thresh`, `min_mask_region_area`, `nms_iou_thresh`, `max_masks`, `polygon_epsilon`, `tile_size`, `tile_stride`.
- Click “Generate” to obtain segmentation results.
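The steps above can be sketched as a request payload. The field names mirror the options listed; the exact endpoint, schema, and defaults depend on the hosting service, so treat the values here as assumptions:

```python
import base64

def build_request(image_path: str) -> dict:
    """Assemble a hypothetical request payload for the segmentation call.

    Field names follow the options documented above; default values
    are illustrative, not the service's actual defaults.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "image": image_b64,            # or pass an image URL instead
        "make_overlay": True,          # overlay masks on the input image
        "save_polygons": True,         # polygon coordinates per region
        "save_rle": False,             # COCO RLE encodings
        "save_pngs": False,            # per-mask PNGs in a ZIP
        # Advanced quality/precision parameters (values are assumptions):
        "points_per_side": 32,
        "pred_iou_thresh": 0.88,
        "stability_score_thresh": 0.95,
        "min_mask_region_area": 100,
        "max_masks": 100,
    }
```

Tightening `pred_iou_thresh` and `stability_score_thresh` yields fewer, higher-confidence masks; raising `points_per_side` increases coverage at the cost of runtime.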
Use Cases
- Assisted Image Labeling: Speeds up dataset annotation with automatic mask proposals.
- AR/VR Applications: Enables precise object isolation for immersive environments.
- Autonomous Vehicles: Supports accurate perception and obstacle detection.
- Environmental Monitoring: Segments satellite and aerial imagery for analysis.
- Industrial & Sonar Imaging: Identifies regions of interest in complex visual data.
