A visual analytics system designed to explain and analyze the vulnerabilities of machine learning models to adversarial attacks, specifically focusing on how perturbations affect model decision boundaries.
Defensibility
Stars: 9
Forks: 2
This project is an academic artifact associated with a paper from approximately 2017. With only 9 stars and 2 forks accumulated over nearly 7 years, it has no community traction or developer momentum. Defensibility is extremely low (2) because adversarial machine learning has moved well beyond the simple pixel-level perturbations and decision-boundary visualizations provided here. Modern libraries such as IBM's Adversarial Robustness Toolbox (ART) and CleverHans, as well as integrated observability suites like Weights & Biases and Arize, offer more robust, actively maintained versions of these capabilities. While combining visual analytics with adversarial ML was novel at the time of publication, the project is now an outdated reference implementation. Frontier labs face little competitive risk here: the field's attention has shifted to LLM safety and jailbreaking techniques, leaving this tool for older CNN/MLP architectures largely obsolete.
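To make concrete what "simple pixel-level perturbations" means, the canonical example of this class of attack is the Fast Gradient Sign Method (FGSM), which nudges an input across the model's decision boundary along the sign of the loss gradient. The sketch below is illustrative only and is not taken from this repository; it assumes a toy logistic-regression model (`w`, `b`) and a label convention `y ∈ {-1, +1}`.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """FGSM on a logistic-regression model (illustrative sketch).

    x: input vector; y: label in {-1, +1}; (w, b): model parameters;
    eps: L-infinity perturbation budget.
    """
    margin = y * (w @ x + b)
    # Gradient of the logistic loss log(1 + exp(-margin)) w.r.t. x
    grad = -y * w / (1.0 + np.exp(margin))
    # Step in the direction that increases the loss, bounded by eps
    return x + eps * np.sign(grad)

# Hypothetical demo: a correctly classified point, then its adversarial twin
w = np.array([1.0, -2.0]); b = 0.0
x = np.array([0.5, -0.5]); y = 1            # w @ x + b = 1.5 > 0: correct
x_adv = fgsm_perturb(x, y, w, b, eps=1.0)   # crosses the decision boundary
```

Visual analytics tools of this project's era plotted exactly such before/after pairs relative to the learned boundary; maintained libraries like ART expose the same attack for deep networks.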
TECH STACK
INTEGRATION: reference_implementation
READINESS