Curated list of research papers on large/foundation models for time-series forecasting and analytics.
Defensibility
Stars: 48
Forks: 1
Defensibility (score: 2/10): This repo is an "awesome list" of papers rather than a codebase, dataset, model, or production system. The primary artifact is curation in markdown, which is easily duplicated: another group could recreate the list by searching the same literature, adding or removing links, and updating tags. There is no evidence of a unique technical pipeline (e.g., automated bibliography extraction, a benchmark harness, or proprietary dataset/model integration) that would create switching costs.

Quantitative signals reinforce the low defensibility: 48 stars is modest, and 1 fork indicates limited community adoption. Velocity (~0.0507/hr, i.e., 0.0507 x 24 ≈ 1.2/day) is not negligible, but it is too low to signal strong ongoing investment in the artifact beyond passive reading.

Frontier risk (medium): Frontier labs are unlikely to compete directly with a bibliography, but they can easily absorb the underlying work by (a) building time-series foundation models into their broader model platforms and (b) publishing their own curated resources, surveys, or internal paper catalogs. The repo's value proposition is informational rather than architectural; once major labs start releasing dedicated time-series model documentation or reference implementations, the relative utility of a third-party curated list is likely to decline.

Threat axis analysis:
- Platform domination risk (high): Major platforms (OpenAI/Anthropic/Google/Microsoft) can dominate the narrative and tooling around time-series foundation models by shipping end-to-end capabilities (APIs, benchmarks, libraries) and publishing their own curated resources. Since this project is not a deployable system, the platforms do not need to replicate it line by line; they only need to provide better first-class discovery and evaluation for time-series foundation models.
- Market consolidation risk (medium): Research discovery and curation tend to consolidate around a few widely used channels (official docs, major benchmark leaderboards, university- or lab-sponsored "awesome" lists, and periodically updated surveys). However, multiple curators can coexist without direct substitution if they cover different subtopics (e.g., forecasting vs. anomaly detection, time-series LLM prompting vs. tokenization approaches), so consolidation is not guaranteed to be total.
- Displacement horizon (6 months): Bibliography-style projects are relatively fragile: when popular model ecosystems add time-series sections, reference implementations, and benchmark suites, users shift from ad-hoc paper lists to those productized resources. Given the modest adoption (48 stars) and low fork count (1), the repo likely has limited inertia.

Key opportunities: The repo could become more defensible if it evolved from static curation into an actively maintained, structured research index with standardized metadata (tasks, datasets, metrics, code availability), automated link checking, and, critically, a benchmark harness or reproducibility layer, even a lightweight one (see the sketch after the risk list below). Adding downloadable evaluation results or curated best-performing baselines by task would create higher switching costs than links alone.

Key risks:
(1) No technical moat: the project is documentation only and easy to clone.
(2) Information redundancy: paper catalogs and aggregator platforms (Semantic Scholar, arXiv search, OpenReview, GitHub orgs, and lab pages) can make the curation less differentiating.
(3) Platform release risk: major labs publishing time-series foundation model docs and code will reduce the incremental value of third-party lists.
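As a concrete illustration of the "structured index plus automated link checking" direction named under key opportunities, here is a minimal sketch in Python. Everything in it is a hypothetical assumption rather than part of the repository under review: the PaperEntry schema, the README.md path, and the User-Agent string are illustrative, and only the standard library is used.

```python
"""Minimal sketch: standardized paper metadata plus automated link
checking for an "awesome list" README. All names here (PaperEntry,
README.md, the User-Agent) are illustrative assumptions, not artifacts
of the repository under review."""
import re
import urllib.error
import urllib.request
from dataclasses import dataclass, field


# One possible standardized metadata record for a curated paper.
@dataclass
class PaperEntry:
    title: str
    url: str
    tasks: list[str] = field(default_factory=list)     # e.g. ["forecasting"]
    datasets: list[str] = field(default_factory=list)  # e.g. ["ETTh1"]
    code_available: bool = False


# Matches inline markdown links of the form [title](https://...).
LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")


def extract_entries(markdown_text: str) -> list[PaperEntry]:
    """Turn every inline markdown link into a bare PaperEntry.

    A fuller implementation would parse tasks/datasets from the
    surrounding text; here those fields stay empty.
    """
    return [PaperEntry(title=t, url=u) for t, u in LINK_RE.findall(markdown_text)]


def url_is_live(url: str, timeout: float = 10.0) -> bool:
    """HEAD-request the URL; treat HTTP errors and timeouts as broken."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "awesome-list-link-checker"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, TimeoutError, ValueError):
        return False


if __name__ == "__main__":
    with open("README.md", encoding="utf-8") as fh:  # assumed list location
        entries = extract_entries(fh.read())
    for entry in entries:
        if not url_is_live(entry.url):
            print(f"BROKEN: {entry.title} -> {entry.url}")
```

Run on a schedule (for instance in CI), a checker like this would turn the static list into a maintained index, which is exactly the kind of ongoing investment the velocity metric above currently fails to show.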
TECH STACK
INTEGRATION: theoretical_framework
READINESS