Search and evaluate GitHub repos, arXiv papers, and HuggingFace models — scored for defensibility, threat profile, and composability.
Model definition, loading, and fine-tuning framework for transformer-based architectures across text, vision, audio, and multimodal domains with unified APIs for inference and training.
stars: 159,006
Official NVIDIA high-performance inference optimization library for Large Language Models on NVIDIA hardware, providing advanced kernels, quantization, and orchestration.
stars: 13,319
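The quantization this entry mentions can be illustrated with a toy sketch of symmetric per-tensor int8 weight quantization, one technique such inference libraries apply; the code is pure Python with hypothetical names, not the library's actual API.

```python
# Illustrative sketch of symmetric int8 weight quantization.
# All names are hypothetical; real libraries operate on tensors
# with per-channel scales and calibrated ranges.

def quantize_int8(weights):
    """Map float weights to int8 with a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.003]
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
```

Small weights collapse toward zero at int8 resolution, which is why production libraries calibrate scales per channel rather than per tensor.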
Distributed training and inference optimization library that enables scaling deep learning models to trillions of parameters through memory-saving techniques like ZeRO and various forms of parallelism.
stars: 42,020
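The memory-saving idea behind ZeRO-style partitioning can be sketched in a few lines: instead of every rank replicating the full optimizer state, each rank owns the state for only a 1/world_size slice of the parameters. This is a hypothetical illustration, not the library's implementation.

```python
# Sketch of ZeRO stage-1 style partitioning: each rank owns the
# optimizer state for a disjoint slice of the parameters, so per-rank
# optimizer memory shrinks roughly by world_size. Hypothetical names.

def shard(params, world_size):
    """Assign each parameter index to exactly one owning rank."""
    shards = {rank: [] for rank in range(world_size)}
    for i, p in enumerate(params):
        shards[i % world_size].append(p)
    return shards

params = list(range(10))            # stand-ins for parameter tensors
shards = shard(params, world_size=4)
```

During training, ranks then gather or reduce-scatter only the slices they own, trading communication for memory.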
Workflow orchestration platform for authoring, scheduling, and monitoring data pipelines and ETL processes.
stars: 44,952
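The scheduling core of such orchestrators is a DAG of tasks where each task runs only after its upstream dependencies complete; Python's standard-library `graphlib` can sketch that ordering (the task names here are hypothetical).

```python
# Hypothetical sketch of DAG-based task ordering: each key maps a task
# to the set of tasks it depends on, and the sorter emits an order in
# which every dependency runs before its dependents.
from graphlib import TopologicalSorter

dag = {
    "load": {"transform"},      # "load" depends on "transform"
    "transform": {"extract"},   # "transform" depends on "extract"
    "extract": set(),
}
order = list(TopologicalSorter(dag).static_order())
```

A real scheduler layers retries, backfills, and time-based triggers on top of this topological core.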
Cross-platform Python library for differentiable quantum programming, enabling the integration of quantum computing hardware and simulators with classical machine learning frameworks like PyTorch, JAX, and TensorFlow.
stars: 3,141
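Differentiable quantum programming libraries commonly obtain exact gradients of circuit expectation values via the parameter-shift rule, which needs just two circuit evaluations. As a sketch, the "circuit" below is the analytic expectation ⟨Z⟩ = cos(θ) for a single-qubit rotation, standing in for real hardware or a simulator.

```python
# Parameter-shift rule sketch: for expectation E(theta) = cos(theta),
# the exact gradient is (E(theta + pi/2) - E(theta - pi/2)) / 2.
# The analytic expectation stands in for a quantum circuit evaluation.
import math

def expectation(theta):
    return math.cos(theta)

def parameter_shift_grad(theta, shift=math.pi / 2):
    return (expectation(theta + shift) - expectation(theta - shift)) / 2

theta = 0.3
analytic = -math.sin(theta)          # d/dtheta cos(theta)
estimated = parameter_shift_grad(theta)
```

Because the rule uses only forward evaluations, it works on hardware where backpropagation through the circuit is impossible.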
Open-source autopilot flight control software stack for multirotor, fixed-wing, and VTOL unmanned aerial vehicles (UAVs), providing real-time control algorithms, sensor fusion, navigation, and hardware abstraction across diverse embedded platforms.
stars: 11,461
Official Go implementation of the Model Context Protocol (MCP) for building interoperable AI clients and servers that connect LLMs to data sources and tools.
stars: 4,316
Model Context Protocol: a standard specification and reference implementation for connecting AI models to external data sources, tools, and context via a unified protocol.
stars: 7,745
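The protocol is built on JSON-RPC 2.0; a client request asking a server to enumerate its tools looks roughly like the message below. The method name follows the spec's tools capability, but treat field details as illustrative rather than normative.

```python
# Sketch of an MCP-style JSON-RPC 2.0 request listing a server's tools.
# Field values here are illustrative; consult the spec for the
# normative schema.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}
wire = json.dumps(request)
```

The server replies with a matching-`id` result containing tool names and JSON-Schema descriptions of their inputs.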
End-to-end open-source software stack for autonomous driving, providing modules for sensing, localization, perception, planning, and control.
stars: 11,344
High-throughput, memory-efficient LLM inference and serving engine with optimized batching, KV-cache management, and multi-GPU/hardware support.
stars: 75,746
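The KV-cache management mentioned above can be sketched in the style of paged allocation: each sequence's cache grows in fixed-size blocks drawn from a shared free pool, so memory is claimed on demand instead of being reserved for the maximum sequence length. Names and sizes are hypothetical.

```python
# Sketch of paged KV-cache block allocation: a sequence acquires a new
# fixed-size block from a shared pool each time it crosses a block
# boundary, rather than pre-reserving its maximum length. Hypothetical.

BLOCK_SIZE = 16  # tokens per cache block

class BlockAllocator:
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))
        self.tables = {}  # seq_id -> list of owned block ids

    def append_token(self, seq_id, position):
        """Return the block holding this position, allocating on boundaries."""
        table = self.tables.setdefault(seq_id, [])
        if position % BLOCK_SIZE == 0:
            table.append(self.free.pop())
        return table[position // BLOCK_SIZE]

alloc = BlockAllocator(num_blocks=64)
for pos in range(40):                # a 40-token sequence
    alloc.append_token("seq-A", pos)
```

A 40-token sequence holds three 16-token blocks instead of a worst-case reservation, which is what lets such engines batch far more concurrent sequences.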
Production-grade WebAssembly runtime with JIT compilation, sandboxing, and WASI support for executing WASM modules across multiple platforms.
stars: 17,866
Official Python implementation of the Model Context Protocol (MCP), enabling standardized communication between AI models and local or remote data sources and tools.
stars: 22,555
Production-grade inference serving platform for deploying and managing machine learning models across cloud and edge environments with multi-backend support, batching, and dynamic loading.
stars: 10,530
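The batching such serving platforms perform can be sketched as dynamic batching: queued requests are grouped up to a maximum batch size so the model executes once per group instead of once per request. This is a toy illustration with hypothetical names.

```python
# Sketch of dynamic batching: drain a request queue into groups of at
# most max_batch items, so each group triggers one model execution.
from collections import deque

def drain_batches(queue, max_batch):
    batches = []
    while queue:
        take = min(max_batch, len(queue))
        batches.append([queue.popleft() for _ in range(take)])
    return batches

q = deque(range(7))                  # 7 queued request ids
batches = drain_batches(q, max_batch=4)
```

Real servers add a short wait window so late-arriving requests can join a batch, trading a little latency for much higher throughput.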
Standardized, machine-readable format (XML, JSON, YAML) for representing security controls, system security plans, and assessment results, enabling automated compliance and risk management.
stars: 869
Meta-operating system and middleware for robot development, providing communication, coordination, and tooling across distributed robot systems.
stars: 5,312
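The communication layer of such robot middleware centers on topic-based publish/subscribe: nodes publish messages to named topics, and every callback subscribed to a topic receives each message. A minimal sketch, with hypothetical names:

```python
# Sketch of topic-based publish/subscribe: subscribers register
# callbacks on named topics; publishing fans the message out to all of
# them. Hypothetical stand-in, not the middleware's real API.

class Broker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers.get(topic, []):
            cb(message)

bus = Broker()
received = []
bus.subscribe("/cmd_vel", received.append)
bus.publish("/cmd_vel", {"linear": 0.5, "angular": 0.0})
```

Decoupling publishers from subscribers this way is what lets perception, planning, and control nodes be developed and restarted independently.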
LLM application framework providing abstractions for chaining language models, memory, retrieval, and agents with pluggable integrations across 100+ external services.
stars: 132,812
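The chaining abstraction such frameworks provide amounts to composing small callable steps, each consuming the previous step's output; the sketch below uses hypothetical stand-ins (including a fake model call), not the library's real API.

```python
# Sketch of chain composition: prompt templating, a model call, and
# output parsing composed as sequential callables. The "LLM" here is a
# deterministic stand-in.

class Chain:
    def __init__(self, *steps):
        self.steps = steps

    def invoke(self, value):
        for step in self.steps:
            value = step(value)
        return value

prompt = lambda q: f"Answer briefly: {q}"
fake_llm = lambda p: p.upper()       # stand-in for a model call
parse = lambda text: text.rstrip("?")

chain = Chain(prompt, fake_llm, parse)
result = chain.invoke("why chain?")
```

Keeping each step a plain callable is what makes memory, retrievers, and tools pluggable: any of them slots into the same pipeline position.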
High-performance LLM inference and serving framework that optimizes structured generation and throughput via RadixAttention and a specialized domain-specific language.
stars: 25,582
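The prefix-reuse idea behind RadixAttention can be sketched with a token trie: cached KV state is keyed by token prefixes, so a request sharing a prompt prefix with an earlier one recomputes only its divergent suffix. A toy illustration, not the engine's data structure:

```python
# Sketch of prefix caching with a trie: lookup counts how many leading
# tokens are already cached (reusable KV state) and inserts the rest.
# Hypothetical stand-in for radix-tree based KV reuse.

class PrefixCache:
    def __init__(self):
        self.root = {}

    def lookup(self, tokens):
        """Return the number of leading tokens already cached; cache the rest."""
        node, hits = self.root, 0
        for t in tokens:
            if t in node:
                hits += 1
            else:
                node[t] = {}
            node = node[t]
        return hits

cache = PrefixCache()
cache.lookup(["sys", "you", "are", "helpful"])      # cold path
reused = cache.lookup(["sys", "you", "are", "brief"])  # shares 3 tokens
```

Shared system prompts and few-shot prefixes make such reuse common, which is where much of the throughput gain comes from.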
Graph-based Retrieval-Augmented Generation (RAG) system that extracts entities and relationships from documents to build knowledge graphs for improved LLM context retrieval.
stars: 32,039
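The graph-based retrieval described above can be sketched end to end: extracted (subject, relation, object) triples form a graph, and retrieval expands a query entity's neighborhood into context lines for the LLM. Toy data, hypothetical names.

```python
# Sketch of graph-based RAG retrieval: triples become an adjacency map
# and a query entity's one-hop neighborhood is rendered as prompt
# context. Hypothetical stand-in for a full extraction pipeline.

graph = {}  # entity -> list of (relation, entity)

def add_triple(subj, rel, obj):
    graph.setdefault(subj, []).append((rel, obj))

add_triple("Ada Lovelace", "collaborated_with", "Charles Babbage")
add_triple("Ada Lovelace", "wrote_about", "Analytical Engine")
add_triple("Charles Babbage", "designed", "Analytical Engine")

def retrieve_context(entity):
    """One-hop neighborhood, rendered as context lines for a prompt."""
    return [f"{entity} {rel} {obj}" for rel, obj in graph.get(entity, [])]

context = retrieve_context("Ada Lovelace")
```

Relationship-aware retrieval like this surfaces facts that pure embedding similarity over raw chunks tends to miss.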
Official catalog and reference implementations of Model Context Protocol (MCP) servers for standardizing AI model access to tools, data sources, and external systems.
stars: 2,923
Data integration platform providing ETL/ELT pipelines connecting APIs, databases, and files to data warehouses, lakes, and lakehouses with self-hosted and cloud deployment options.
stars: 21,035
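The ETL/ELT flow such connectors implement composes naturally as generators: a source yields records, a transform normalizes them, and a destination collects them. A minimal sketch with hypothetical names and toy data:

```python
# Sketch of an extract -> transform -> load pipeline as composable
# generators. The inline records stand in for an API or database
# source; the list stands in for a warehouse destination.

def extract():
    yield {"id": 1, "amount": "10.50"}
    yield {"id": 2, "amount": "3.00"}

def transform(records):
    for r in records:
        yield {**r, "amount": float(r["amount"])}   # normalize types

def load(records, destination):
    destination.extend(records)
    return destination

warehouse = load(transform(extract()), [])
```

Because each stage is lazy, records stream through without the whole dataset being held in memory, which is the same reason production connectors process sources incrementally.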