Implementation of foundational privacy attacks (Membership Inference, Attribute Inference, Model Inversion) against Machine Learning models using PyTorch.
Defensibility
Stars: 66 | Forks: 8
This project serves as a clear educational reference for classic privacy attacks in ML, but it lacks the characteristics of a defensible or modern software product. With only 66 stars over nearly five years and zero current velocity, it is effectively a stale repository. Technically, it implements well-known research papers (e.g., Shokri et al. for Membership Inference) which have since been incorporated into much more robust and maintained libraries like IBM's Adversarial Robustness Toolbox (ART), TensorFlow Privacy, and PrivacyRaven. Frontier labs and cloud providers (AWS, Google Cloud, Azure) are increasingly baking 'Responsible AI' and privacy auditing directly into their MLOps pipelines, rendering standalone script collections like this obsolete for production use. Its primary value today is as a simple, readable code example for students or researchers looking to understand the mechanics of these attacks without the overhead of a large framework.
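To illustrate the mechanics the paragraph above refers to, here is a minimal, self-contained sketch of a shadow-model membership inference attack in the style of Shokri et al. The synthetic dataset, model sizes, and training settings are illustrative assumptions for demonstration, not code from the repository: a shadow model is trained on "member" data, and an attack classifier then tries to distinguish members from non-members using the shadow model's sorted confidence scores.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def train(model, X, y, epochs=50):
    """Simple full-batch training loop used for both shadow and attack models."""
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(X), y)
        loss.backward()
        opt.step()

# Synthetic 2-class task (hypothetical stand-in for a real dataset).
X = torch.randn(400, 10)
y = (X[:, 0] > 0).long()

# Shadow model: trained only on the "member" half; the other half is held out.
member_X, member_y = X[:200], y[:200]
nonmember_X, nonmember_y = X[200:], y[200:]
shadow = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
train(shadow, member_X, member_y)

# Attack features: sorted softmax confidence vectors from the shadow model.
# Overfit models tend to be more confident on training members.
with torch.no_grad():
    conf_in = F.softmax(shadow(member_X), dim=1).sort(dim=1, descending=True).values
    conf_out = F.softmax(shadow(nonmember_X), dim=1).sort(dim=1, descending=True).values

attack_X = torch.cat([conf_in, conf_out])
attack_y = torch.cat([torch.ones(200), torch.zeros(200)]).long()  # 1 = member

# Attack model: a binary classifier over the confidence vectors.
attack = nn.Linear(2, 2)
train(attack, attack_X, attack_y, epochs=100)

with torch.no_grad():
    acc = (attack(attack_X).argmax(dim=1) == attack_y).float().mean().item()
print(f"attack accuracy: {acc:.2f}")
```

An attack accuracy meaningfully above 0.5 would indicate membership leakage; on this toy data, where the task is easy and overfitting is mild, the gap between member and non-member confidences (and hence the attack's edge) is small. The real papers train multiple shadow models and per-class attack models, which this sketch collapses into one of each for brevity.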
TECH STACK
INTEGRATION: reference_implementation
READINESS