Empirical evaluation of the trade-offs between model compression (pruning, quantization, distillation) and adversarial robustness in Code LMs (e.g., CodeBERT, PLBART).
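To make the trade-off concrete, the sketch below applies post-training dynamic quantization to CodeBERT and checks how far a semantics-preserving identifier-renaming perturbation moves the model's representation before and after compression. This is an illustrative assumption, not the paper's protocol: the `microsoft/codebert-base` checkpoint, the [CLS]-embedding cosine probe, and the renaming attack are stand-ins for the study's actual benchmarks.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed checkpoint: the public CodeBERT release on the Hugging Face hub.
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base").eval()

# Post-training dynamic quantization of the Linear layers to int8,
# one of the compression settings the project description names.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

original = "def add(a, b): return a + b"
# Identifier renaming: a common semantics-preserving adversarial
# perturbation for code models (illustrative, not the paper's attack suite).
perturbed = "def add(v0, v1): return v0 + v1"

def cls_embedding(m, snippet):
    inputs = tokenizer(snippet, return_tensors="pt")
    with torch.no_grad():
        return m(**inputs).last_hidden_state[:, 0]  # [CLS] token vector

# A robust model keeps the two embeddings close; a larger drop after
# quantization would signal a compression/robustness trade-off.
for tag, m in [("fp32", model), ("int8", quantized)]:
    sim = torch.nn.functional.cosine_similarity(
        cls_embedding(m, original), cls_embedding(m, perturbed)
    ).item()
    print(f"{tag}: cos(original, perturbed) = {sim:.4f}")
```

A full evaluation would replace this single similarity probe with attack success rates over a benchmark suite, and would cover pruning and distillation settings alongside quantization.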
Defensibility
citations: 0
co_authors: 3
This project is an academic artifact accompanying a research paper (arXiv:2508.03949); it lacks the characteristics of a software product or infrastructure project. Defensibility is low because the value lies in the data and insights the study generates rather than in a novel tool or proprietary codebase. With 0 stars and 3 forks shortly after release, it is a standard 'code for reproducibility' repository. Frontier labs (OpenAI, Anthropic) are unlikely to compete directly, as this is a niche study of specific (likely BERT-era) code models, but they would likely incorporate similar findings into their internal safety and optimization pipelines. Displacement risk is high: empirical studies of specific models age rapidly, and newer architectures (e.g., Llama 3, DeepSeek-Coder) make benchmarks on older models like PLBART less relevant to the current frontier.
TECH STACK
INTEGRATION: reference_implementation
READINESS