Library for applying parameter-efficient fine-tuning (PEFT) techniques such as LoRA, QLoRA, and prefix tuning to large pre-trained models.
Stars: 20,904 | Forks: 2,241
PEFT is the category-defining project for model adaptation in the open-source ecosystem. Its defensibility is near-absolute due to its position as a core dependency within the Hugging Face ecosystem (transformers, diffusers, accelerate) and its massive community adoption. While frontier labs offer fine-tuning as a service, PEFT provides the critical infrastructure for the open-weights movement to operate on consumer and enterprise hardware. It effectively turned academic research papers into production-grade engineering tools.
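The core idea behind LoRA, PEFT's flagship technique, can be sketched without the library itself: the pretrained weight matrix stays frozen while a small low-rank product is trained and added to it. The NumPy sketch below is an illustration of the math, not PEFT's actual implementation; the dimensions, rank, and scaling factor are illustrative assumptions.

```python
import numpy as np

# Illustrative dimensions: a 768x768 attention projection, LoRA rank 8.
d, k, r, alpha = 768, 768, 8, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight (not trained)
A = rng.standard_normal((r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized
                                         # so the adapted model starts identical to the base

def lora_forward(x):
    # Base path plus scaled low-rank update: x @ (W + (alpha/r) * B @ A)^T
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

# Trainable parameters drop from d*k to r*(d+k): ~2% of the original here.
fraction = r * (d + k) / (d * k)
print(f"trainable fraction: {fraction:.4f}")
```

Because B starts at zero, the adapted model initially reproduces the base model exactly; training only A and B is what makes fine-tuning feasible on consumer hardware.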
TECH STACK
INTEGRATION: library_import
READINESS