Implementations of classic Chinese Named Entity Recognition (NER) models including HMM, CRF, BiLSTM, and BiLSTM+CRF.
Stars: 2,282 | Forks: 530
This project serves primarily as a historical and educational archive for Chinese NLP. With over 2,200 stars and 500 forks, it clearly provided significant value during the 'pre-Transformer' era (circa 2017-2019) as a clean reference for BiLSTM+CRF architectures. However, its 'velocity' is currently 0.0, and it has not evolved to include modern SOTA techniques like BERT-based fine-tuning or LLM prompting. In the current market, NER is either a solved commodity feature provided by frontier labs (OpenAI, Anthropic) via zero-shot extraction or handled by robust industrial frameworks like HanLP, PaddleNLP, or Hugging Face Transformers. The defensibility is low because the techniques are now standard textbook examples and the codebase is aging. While it remains a useful pedagogical resource for students learning the mechanics of sequence labeling, it lacks the infrastructure, data gravity, or model performance required to compete with modern NLP stacks.
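As a pedagogical illustration of the sequence-labeling mechanics mentioned above, the decoding step shared by CRF and BiLSTM+CRF models is Viterbi search over tag transitions. The following is a minimal sketch in pure Python (not code from this repository); the `emissions`, `transitions`, and tag names are hypothetical toy inputs, and in a real BiLSTM+CRF the emission scores would come from the BiLSTM layer.

```python
def viterbi_decode(emissions, transitions, tags):
    """Return the highest-scoring tag sequence.

    emissions   -- list of {tag: score} dicts, one per token (toy values here;
                   a BiLSTM would produce these in a real BiLSTM+CRF)
    transitions -- {(prev_tag, tag): score} learned transition scores
    tags        -- list of all tag names
    """
    # score[t] = best score of any path ending in tag t at the current token
    score = dict(emissions[0])
    backpointers = []
    for step in emissions[1:]:
        new_score, ptr = {}, {}
        for t in tags:
            # pick the best previous tag to transition from
            best_prev = max(tags, key=lambda p: score[p] + transitions[(p, t)])
            new_score[t] = score[best_prev] + transitions[(best_prev, t)] + step[t]
            ptr[t] = best_prev
        score = new_score
        backpointers.append(ptr)
    # backtrack from the best final tag to recover the full path
    last = max(tags, key=lambda t: score[t])
    path = [last]
    for ptr in reversed(backpointers):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

For example, with two tags and transition scores that penalize consecutive "B" tags, the decoder trades off emission and transition scores globally rather than picking the best tag per token independently, which is exactly what distinguishes BiLSTM+CRF from a plain BiLSTM classifier.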
TECH STACK:
INTEGRATION: reference_implementation
READINESS: