Empirically evaluating privacy leakage and auditing the epsilon guarantees of differentially private machine learning (DPML) models using membership inference attacks.
Defensibility
Stars: 135
Forks: 48
EvaluatingDPML is a legacy research repository (over 7 years old) associated with early academic work on auditing Differential Privacy. While it holds 135 stars and 48 forks, its commit velocity is zero, indicating an archived academic artifact rather than a living software project. Its functionality has largely been absorbed by production-grade libraries such as Google's TensorFlow Privacy and the PyTorch-ecosystem library Opacus, which ship more robust privacy auditing tooling (e.g., privacy loss distribution accounting and modern membership inference attack suites). The project remains a valuable reference for the methodology of 'empirical epsilon' measurement, but it lacks the infrastructure and community support to be considered defensible in a commercial or engineering context. Frontier labs are unlikely to compete with this specific repo, since they have already integrated superior auditing capabilities into their core DP frameworks. The risk of displacement is high: the methodology has evolved significantly since 2017, and modern tools offer better performance and tighter integration with current ML stacks.
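The core 'empirical epsilon' idea the repo pioneered can be summarized compactly: differential privacy bounds how well any membership inference attack can distinguish members from non-members, so an attack's observed true/false positive rates imply a lower bound on the epsilon the model actually provides. Below is a minimal sketch of that conversion, using the standard DP hypothesis-testing inequality (TPR ≤ e^ε·FPR + δ and its complement); the function name `empirical_epsilon` is illustrative and not taken from the repository's API.

```python
import math

def empirical_epsilon(tpr: float, fpr: float, delta: float = 0.0) -> float:
    """Lower bound on epsilon implied by a membership inference attack.

    Uses the DP hypothesis-testing bounds:
        TPR <= e^eps * FPR + delta
        (1 - FPR) <= e^eps * (1 - TPR) + delta
    Illustrative sketch; not the EvaluatingDPML implementation.
    """
    bounds = []
    if fpr > 0 and tpr > delta:
        bounds.append(math.log((tpr - delta) / fpr))
    if tpr < 1 and (1 - fpr) > delta:
        bounds.append(math.log((1 - fpr - delta) / (1 - tpr)))
    return max(bounds, default=0.0)

# An attack achieving 60% TPR at 10% FPR (delta = 0) rules out any
# epsilon below log(0.6 / 0.1) ~= 1.79.
print(round(empirical_epsilon(0.6, 0.1), 2))
```

If a model was trained with a claimed epsilon of, say, 1.0 but a concrete attack yields this bound of ~1.79, the claimed guarantee is falsified; this falsification logic is what the repository's auditing experiments operationalize.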
TECH STACK
INTEGRATION: reference_implementation
READINESS