A curated bibliography and resource hub for Vision-Language-Action (VLA) models, tracking research papers, datasets, and open-source models in the robotics foundation model space.
Defensibility
Stars: 482
Forks: 14
The project is a standard "Awesome List" repository. While it has achieved significant traction (482 stars) within the robotics research community, it has no technical moat: its value lies entirely in the human effort of curation. The 355-day age and 14 forks suggest it is a recognized entry point for researchers, but it is trivially reproducible by any motivated individual or LLM-driven research agent. The content it tracks is produced by frontier labs and robotics startups (e.g., RT-2 and Gato from Google DeepMind, Figure-01 from Figure AI), making the list's relevance entirely dependent on the output of those organizations. Platform-domination risk is high because Hugging Face, arXiv, and the labs themselves (e.g., DeepMind's research blog) provide more authoritative and automated discovery mechanisms. The displacement horizon is short because the VLA field is moving at a velocity that exceeds manual Markdown updates.
TECH STACK
INTEGRATION: reference_implementation
READINESS