Implementation of the Boundary Attack algorithm, a decision-based black-box adversarial attack that searches for a minimal perturbation that changes a model's prediction, using only the model's final decision and no gradient access.
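The core idea can be sketched in a few lines: start from a point the model already misclassifies, then alternate random steps along the decision boundary with contraction steps toward the original input, keeping only candidates that remain adversarial. The sketch below is a simplified illustration, not the repository's code: the `decision` oracle is a toy stand-in for a hard-label classifier, and the paper's dynamic adaptation of the step sizes `delta` and `eps` is omitted.

```python
import numpy as np

def decision(x):
    # Toy black-box oracle: True means x is (mis)classified as adversarial.
    # Here the label flips when the first coordinate is negative; in practice
    # this would be a single hard-label query to the target model.
    return x[0] < 0.0

def boundary_attack(original, starting_adv, steps=500, delta=0.1, eps=0.1, seed=0):
    """Minimal Boundary Attack sketch: a random walk along the decision
    boundary that shrinks the distance to `original` while every accepted
    point stays adversarial. Step-size adaptation is omitted for brevity."""
    rng = np.random.default_rng(seed)
    x = starting_adv.astype(float).copy()
    for _ in range(steps):
        # 1. Orthogonal step: random direction, re-projected onto the sphere
        #    around `original` so the distance to it is unchanged.
        d = np.linalg.norm(x - original)
        perturb = rng.normal(size=x.shape)
        perturb *= delta * d / np.linalg.norm(perturb)
        candidate = x + perturb
        candidate = original + (candidate - original) * d / np.linalg.norm(candidate - original)
        if decision(candidate):
            x = candidate
        # 2. Contraction step toward the original, kept only if still adversarial.
        contracted = x + eps * (original - x)
        if decision(contracted):
            x = contracted
    return x

original = np.array([1.0, 0.0])    # correctly classified point (x[0] >= 0)
start = np.array([-3.0, 2.0])      # any point the oracle labels adversarial
adv = boundary_attack(original, start)
print(np.linalg.norm(adv - original))  # distance shrinks toward the boundary
```

Because every accepted point must satisfy the oracle, the walk converges to an adversarial example close to the boundary nearest `original` — which is why the attack needs only hard-label access.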
Defensibility
stars: 98
forks: 22
The 'Boundary Attack' is a seminal paper in adversarial machine learning, but this repository is a legacy implementation (nearly 8 years old). With only 98 stars and zero recent activity, it has been superseded by industry-standard libraries like Foolbox (maintained by the original paper's authors), the IBM Adversarial Robustness Toolbox (ART), and CleverHans. From a competitive perspective, this project offers no moat; the algorithm is well-documented and better-integrated elsewhere. Frontier labs (OpenAI, Anthropic) incorporate these techniques into their internal red-teaming and safety alignment pipelines (e.g., jailbreaking and robustness testing) as native platform features. The displacement is already complete, as modern researchers and engineers use consolidated libraries that support a wider array of frameworks (PyTorch, JAX) and more recent attack variants.
TECH STACK
INTEGRATION
algorithm_implementable
READINESS