A lightweight C++ distributed deep learning framework specifically designed for feature engineering and model training using a Parameter Server (PS) architecture.
Defensibility
stars: 30
forks: 16
ps-dnn is a legacy-style distributed training framework built on ps-lite (the communication layer originally behind MXNet). While it offers specific feature engineering operators (bucket, combine, group) that are useful for Click-Through Rate (CTR) prediction in ad-tech and recommendation systems, the project is effectively stagnant, with no activity for over four years. With only 30 stars and 16 forks, it lacks the community momentum to compete with modern equivalents. Technically, it has been superseded by more robust ecosystems such as NVIDIA Merlin (HugeCTR), Alibaba's DeepRec, and standard PyTorch/TensorFlow distributed strategies, which now handle sparse feature sets more efficiently. The claim of being 'lightweight C++' for production inference is no longer a unique moat given the maturity of ONNX Runtime and TensorRT. From a competitive standpoint, this project serves as a reference implementation for ps-lite usage rather than a viable modern platform. Any enterprise use case would likely choose a supported framework with active security patches and hardware acceleration support (CUDA/NCCL), which this project lacks.
TECH STACK
INTEGRATION
cli_tool
READINESS