Research code for simulating adversarial attacks on medical imaging models, used to evaluate diagnostic robustness relative to human radiologists.
Defensibility
Stars: 5 · Forks: 2
This project is a static academic artifact associated with a paper from circa 2020. With only 5 stars and zero recent activity (velocity 0), it lacks any form of community momentum or technical moat. The repository serves primarily as a reproducibility package for the specific paper 'A machine and human reader study on AI diagnosis model safety under attacks of adversarial images.' In the context of the current market, it is largely obsolete. Medical AI robustness is now a mature sub-field with highly maintained, production-grade libraries such as the IBM Adversarial Robustness Toolbox (ART), Foolbox, and CleverHans, which offer broader attack coverage and framework support. Furthermore, frontier labs and medical technology giants (Google Health, Siemens Healthineers, GE Healthcare) have integrated much more sophisticated safety-testing suites into their internal development pipelines to meet regulatory requirements (FDA/EU MDR). The risk of platform domination is high because diagnostic safety is becoming a baked-in feature of MLOps platforms for healthcare, rather than a standalone toolkit.
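To make the comparison with toolkits like ART and Foolbox concrete, below is a minimal sketch of the kind of attack such libraries implement: the Fast Gradient Sign Method (FGSM), applied here to a toy linear softmax classifier rather than a real medical imaging model. All names and the model itself are illustrative assumptions, not code from this repository.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad_wrt_x(W, x, y):
    """Gradient of cross-entropy loss w.r.t. the input x
    for a linear model with logits = W @ x."""
    p = softmax(W @ x)                    # predicted class probabilities
    onehot = np.eye(W.shape[0])[y]        # one-hot true label
    return W.T @ (p - onehot)             # d(cross-entropy)/dx

def fgsm_attack(x, grad, epsilon):
    """FGSM: step in the sign of the loss gradient, bounded by
    epsilon in the L-infinity norm, clipped to the valid pixel range."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy setup: a 2-class model on a 4-"pixel" input in [0, 1].
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))
x = rng.uniform(size=4)                   # clean input
y = 0                                     # true label

grad = loss_grad_wrt_x(W, x, y)
x_adv = fgsm_attack(x, grad, epsilon=0.1)

# The perturbation never exceeds epsilon per pixel.
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-9
```

Production toolkits such as ART wrap this same idea behind framework-agnostic estimator interfaces and add many stronger attacks (PGD, Carlini-Wagner, etc.), which is why a single-paper reproducibility package offers little durable advantage.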
TECH STACK
INTEGRATION: reference_implementation
READINESS