A theoretical framework and survey reviewing the shift in LLM agent architecture from internal model weights to externalized runtime components (memory, skills, protocols, and harnesses).
Defensibility
citations: 0
co_authors: 21
This project is a survey paper (arXiv:2604.08224) rather than a software tool. While it identifies a critical trend, the 'externalization' of logic from model weights into the system harness, it lacks a technical moat or unique dataset. The defensibility score is low because the paper is an intellectual synthesis of existing trends (LangChain, OpenAI Assistants, AutoGPT) rather than a functional product. The fork-heavy repository profile (21 forks, 0 stars) suggests immediate interest from the research community for citation or reference, but no public adoption as a tool. Frontier labs (OpenAI, Anthropic) are the primary drivers of this trend; their move toward 'system-level' agents (e.g., OpenAI Operator) directly competes with the independent frameworks the paper categorizes. Displacement risk is high: newer surveys and more robust, code-first frameworks emerge frequently in the fast-moving agentic space. The project's value lies in providing a shared vocabulary for investors and developers discussing agent infrastructure, but it does not constitute a defensible business or technical asset.
TECH STACK
INTEGRATION
theoretical_framework
READINESS