Pre-trained CodeBERT model fine-tuned for CWE-672 (Operation on a Resource after Expiration or Release) vulnerability detection in source code
Downloads: 2
Likes: 0
This is a single fine-tuned model checkpoint uploaded to Hugging Face with no associated repository, documentation, training code, evaluation methodology, or community engagement. The project shows essentially zero traction (2 downloads, 0 likes, no velocity), no history, and appears to be an isolated model weight dump rather than a maintained project.

CodeBERT fine-tuning for vulnerability detection is a well-established pattern (Microsoft research, 2020 onward, widely replicated). The specific CWE-672 focus is niche but not novel; vulnerability-specific BERT models are commodity tooling at this point. Frontier labs (OpenAI, Anthropic, Google, Meta) have already shipped code-understanding capabilities as part of their LLM stacks and would more likely integrate general-purpose code LLMs than adopt a single-CWE classifier.

The lack of supporting infrastructure (training pipeline, evaluation harness, dataset documentation) means this is not defensible: anyone with access to labeled CWE-672 examples could replicate it in hours. Frontier risk is high because Copilot, CodeStorm, and similar systems already subsume vulnerability detection as a feature; a specialized single-CWE model adds no value they could not generate themselves. This scores as a reference implementation at best; more accurately, it is a model artifact without project substance.
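To illustrate the replication claim above: given even a handful of labeled examples, a minimal detector for release-then-use ordering is a few dozen lines. The sketch below is a toy stand-in, not CodeBERT — a stdlib-only perceptron over adjacent-token bigrams, with invented example snippets — but it shows the shape of the pipeline (featurize labeled code, train a classifier, predict) that a single-CWE model amounts to.

```python
import re
from collections import Counter

def feats(code: str) -> Counter:
    # Crude lexer: identifiers and single punctuation chars, paired into
    # adjacent-token bigrams so ordering (use-before-free vs. after) is visible.
    toks = re.findall(r"[A-Za-z_]\w*|[^\sA-Za-z_]", code)
    return Counter(zip(toks, toks[1:]))

def score(w: Counter, bias: float, f: Counter) -> float:
    return bias + sum(w[t] * c for t, c in f.items())

def train(samples, labels, epochs=10):
    # Plain perceptron over bigram counts; converges when the data is separable.
    w, bias = Counter(), 0.0
    for _ in range(epochs):
        for code, y in zip(samples, labels):
            f = feats(code)
            if (score(w, bias, f) > 0) != (y == 1):
                sign = 1.0 if y == 1 else -1.0
                for t, c in f.items():
                    w[t] += sign * c
                bias += sign
    return w, bias

def predict(model, code: str) -> int:
    w, bias = model
    return 1 if score(w, bias, feats(code)) > 0 else 0

# Invented toy data: label 1 = operation-after-release pattern, 0 = safe order.
vuln = ["free(p); use(p);", "close(fd); read(fd, buf, n);"]
safe = ["use(p); free(p);", "read(fd, buf, n); close(fd);"]
model = train(vuln + safe, [1, 1, 0, 0])
print(predict(model, "free(q); use(q);"))  # → 1 (flags release-then-use)
```

A real replication would swap the bigram perceptron for `microsoft/codebert-base` with a classification head, but the data requirement — labeled CWE-672 positives and negatives — is the same, which is why the checkpoint alone carries little defensible value.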
TECH STACK:
INTEGRATION: library_import
READINESS: