Provides code and benchmarks for evaluating the adversarial robustness of Meta's Segment Anything Model (SAM), specifically red-teaming the model against noise and directed attacks.
Defensibility
Stars: 3
This project is a classic "research artifact" repository designed to accompany a CVPR workshop paper. With only 3 stars and 0 forks over a two-year lifespan, it has failed to gain traction as a reusable tool or library. From a competitive standpoint, it carries significant obsolescence risk: Meta has already released SAM 2, and foundation model providers (OpenAI, Google, Meta) are increasingly internalizing red-teaming workflows or using enterprise-grade platforms such as Giskard, Robust Intelligence, or HiddenLayer. The code likely applies standard adversarial attack methods (such as PGD or FGSM) to the SAM architecture, a common academic exercise that lacks a technical moat. There is no evidence of a community, and commit velocity is zero, suggesting the project is no longer maintained.
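To illustrate the kind of attack the repository likely implements, here is a minimal FGSM sketch. Everything in it is hypothetical: a tiny per-pixel logistic classifier stands in for SAM (which is far larger and promptable), and the gradient is computed analytically rather than via autodiff. It only demonstrates the attack pattern: one signed-gradient step on the input, bounded by an L-infinity budget `eps`.

```python
import numpy as np

# Hypothetical stand-in for a segmentation model: a per-pixel logistic
# classifier with fixed weights w. This is NOT SAM; it only lets us
# demonstrate the FGSM update rule end to end.
rng = np.random.default_rng(0)
H = W = 8
w = rng.normal(size=(H, W))                    # per-pixel weights
x = rng.normal(size=(H, W))                    # input "image"
y = (rng.random((H, W)) > 0.5).astype(float)   # ground-truth binary mask

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x):
    # Mean binary cross-entropy between predicted and true mask.
    p = sigmoid(w * x)
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)).mean()

# Analytic gradient of the mean BCE loss w.r.t. the input:
# dL/dx = (sigmoid(w*x) - y) * w / (H*W)
grad = (sigmoid(w * x) - y) * w / (H * W)

# FGSM: a single step in the direction of the gradient's sign,
# which keeps the perturbation inside an L-infinity ball of radius eps.
eps = 0.1
x_adv = x + eps * np.sign(grad)

print(bce_loss(x), bce_loss(x_adv))
```

A PGD attack, the other method named above, is essentially this step iterated with a projection back into the `eps`-ball after each update.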
TECH STACK
INTEGRATION: reference_implementation
READINESS