Supplementary research implementation for attacking Split Learning systems, specifically targeting data reconstruction (model inversion), model replication (model stealing), and label leakage.
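The core idea behind the data-reconstruction (model inversion) attack can be sketched in a few lines: the attacker observes the intermediate activations ("smashed data") sent across the split and optimizes a dummy input until it reproduces them. The sketch below is illustrative only, not the repository's code: it substitutes a single linear layer for the client-side network, assumes white-box access to its weights, and uses plain gradient descent; all names (`W`, `x_true`, `z`, `x_hat`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the client-side split: one linear layer.
# (The actual attack targets deeper convolutional splits.)
W = rng.normal(size=(8, 4))      # client model weights (white-box assumption)
x_true = rng.normal(size=4)      # private client input the attacker wants back
z = W @ x_true                   # "smashed data" observed at the split point

# Invert the split activations by gradient descent on a dummy input,
# minimizing the reconstruction loss ||W @ x_hat - z||^2.
x_hat = np.zeros(4)
lr = 0.01
for _ in range(10_000):
    grad = 2.0 * W.T @ (W @ x_hat - z)   # gradient of the loss w.r.t. x_hat
    x_hat -= lr * grad

reconstruction_error = np.linalg.norm(x_hat - x_true)
```

With an over-determined linear split like this, the dummy input converges to the private input; the full attack applies the same optimization through a nonlinear network, optionally learning a clone of the client model at the same time (which is where the model-stealing component comes in).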
Defensibility
stars: 15 · forks: 4
UnSplit is a classic academic reference implementation for security vulnerabilities in Split Learning (SL). While the paper itself was influential in the privacy-preserving machine learning (PPML) community, the repository is effectively a 'frozen' research artifact with minimal stars (15) and zero velocity over the last five years. It lacks the infrastructure, API, or maintenance required to be considered a tool or library.

Its defensibility is near zero because it is a transparent implementation of a known research technique intended for reproducibility, not for production use or as a standalone service. Frontier labs are unlikely to build specific 'attack tools' for Split Learning, as their focus is on general-purpose foundation models and defensive alignment; however, the techniques demonstrated here (like Model Inversion) are well-understood by security teams at Microsoft (Counterfit) and Google.

The project is already displaced by more modern adversarial ML frameworks and newer research that addresses more complex architectures beyond the simple split layers used here. It serves primarily as a historical benchmark for red-teaming distributed ML systems.
TECH STACK
INTEGRATION
reference_implementation
READINESS