Provides a quantized version (i1-GGUF format) of the MNLP_M2 model for efficient inference on resource-constrained hardware using llama.cpp.
Downloads: 10 · Likes: 0
This is a standard quantization export of an existing model: common tooling (llama.cpp) is used to convert the model into the GGUF format. While useful to the community, it contains no original code or novel intellectual property and can be reproduced in minutes by anyone with the base weights.
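Such an export typically follows the standard llama.cpp workflow: convert the Hugging Face checkpoint to a full-precision GGUF file, then quantize it. A minimal sketch is shown below; the directory, file names, and quantization type are placeholders, and the imatrix file (used for "i1"-style weighted quantization) is assumed to have been computed beforehand with `llama-imatrix`.

```shell
# Convert the HF checkpoint to a full-precision GGUF file
# (paths and names are illustrative, not taken from this model card).
python convert_hf_to_gguf.py ./MNLP_M2 --outfile mnlp_m2-f16.gguf

# Quantize using a precomputed importance matrix (imatrix),
# here to the IQ4_XS type as an example.
llama-quantize --imatrix imatrix.dat mnlp_m2-f16.gguf mnlp_m2-IQ4_XS.gguf IQ4_XS
```

The imatrix-weighted ("i1") quantization generally preserves more quality at low bit-widths than plain quantization, at no extra inference cost.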
TECH STACK:
INTEGRATION: reference_implementation
READINESS: