
SAM-pose2seg: Pose-Guided Human Instance Segmentation in Crowds

About

Segment Anything (SAM) provides an unprecedented foundation for human segmentation, but it struggles under occlusion, where keypoints can be partially or fully invisible. We adapt SAM 2.1 for pose-guided segmentation with minimal encoder modifications, retaining its strong generalization. Using a fine-tuning strategy called PoseMaskRefine, we incorporate pose keypoints with high visibility into the iterative correction process originally employed by SAM, yielding improved robustness and accuracy across multiple datasets. During inference, we simplify prompting by selecting only the three keypoints with the highest visibility. This strategy reduces sensitivity to common errors, such as missing body parts or misclassified clothing, and enables accurate mask prediction from as few as a single keypoint. Our results demonstrate that pose-guided fine-tuning of SAM enables effective, occlusion-aware human segmentation while preserving the generalization capabilities of the original model. Code and pretrained models will be available at https://mirapurkrabek.github.io/BBox-Mask-Pose/.
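The keypoint-selection step described in the abstract (keep only the highest-visibility keypoints as point prompts) can be sketched as follows. This is an illustrative sketch, not the authors' code: the function name, the COCO-style (x, y, visibility) keypoint layout, and the SAM-style point-coordinate/label output format are assumptions.

```python
import numpy as np

def select_prompt_keypoints(keypoints, k=3):
    """Pick the k keypoints with highest visibility as foreground point prompts.

    `keypoints` is an (N, 3) array of (x, y, visibility) rows, the common
    COCO-style pose format (assumed here). Returns point coordinates and
    foreground labels in the layout SAM-style predictors expect.
    """
    keypoints = np.asarray(keypoints, dtype=float)
    order = np.argsort(-keypoints[:, 2])      # sort by visibility, descending
    top = keypoints[order[:k]]
    top = top[top[:, 2] > 0]                  # drop invisible keypoints; even one can suffice
    point_coords = top[:, :2]                 # (x, y) prompt locations
    point_labels = np.ones(len(top), dtype=int)  # 1 = foreground point
    return point_coords, point_labels

# Example: four keypoints, one fully invisible; the three most visible are kept.
coords, labels = select_prompt_keypoints(
    [(10, 20, 0.9), (30, 40, 0.1), (50, 60, 0.8), (70, 80, 0.0)]
)
```

With a SAM-style predictor, `coords` and `labels` would be passed as the point prompts for a single person instance.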

Constantin Kolomiiets, Miroslav Purkrabek, Jiri Matas • 2026

Related benchmarks

| Task                        | Dataset                | Metric  | Result | Rank |
|-----------------------------|------------------------|---------|--------|------|
| Instance Segmentation       | OCHuman (test)         | Mask AP | 34.7   | 38   |
| Pose Estimation             | OCHuman (val)          | AP      | 69.5   | 24   |
| Pose Estimation             | COCO 2017 (val)        | AP      | 61.6   | 23   |
| Human Instance Segmentation | COCOPersons 2017 (val) | AP      | 60.3   | 5    |
| Human Instance Segmentation | COCO 2017 (val)        | AP      | 44.6   | 3    |
| Pose-to-segmentation        | OCHuman (test)         | AP      | 70.0   | 3    |
