Provides a framework and reference implementation for adapting Large Language Models (LLMs) to time-series forecasting using a two-stage fine-tuning approach (supervised fine-tuning followed by task-specific tuning).
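The two-stage recipe can be illustrated with a toy stand-in for the LLM: stage 1 (supervised fine-tuning) updates both the backbone and the forecasting head on the series; stage 2 (task-specific tuning) freezes the backbone and tunes only the head. The toy linear model, function names, and data below are illustrative assumptions, not LLM4TS's actual API.

```python
# Sketch of two-stage fine-tuning, assuming a toy model:
# prediction = head * (backbone * x). In LLM4TS the "backbone"
# would be a pretrained LLM; here it is a single scalar weight.

def two_stage_finetune(series, lr=0.01, epochs=200):
    backbone, head = 1.0, 1.0
    # Autoregressive next-step pairs (x_t, x_{t+1}).
    pairs = [(series[i], series[i + 1]) for i in range(len(series) - 1)]

    # Stage 1: supervised fine-tuning -- adapt backbone AND head
    # to the time-series domain.
    for _ in range(epochs):
        for x, y in pairs:
            err = head * (backbone * x) - y
            grad_b = 2 * err * head * x      # dL/d(backbone)
            grad_h = 2 * err * backbone * x  # dL/d(head)
            backbone -= lr * grad_b
            head -= lr * grad_h

    # Stage 2: task-specific tuning -- backbone frozen, only the
    # forecasting head is updated.
    for _ in range(epochs):
        for x, y in pairs:
            err = head * (backbone * x) - y
            head -= lr * 2 * err * backbone * x

    return backbone, head

# Toy geometric series with ratio 1.1; the learned product
# backbone * head should approach that ratio.
series = [1.0, 1.1, 1.21, 1.331, 1.4641, 1.61051]
backbone, head = two_stage_finetune(series)
```

Stage 2 mirrors the common practice of freezing the pretrained weights and tuning only a small task head, which is the cheaper half of the two-stage approach.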
STARS: 563
FORKS: 48
LLM4TS is a notable research-oriented project that emerged during the initial wave of applying LLMs to non-textual data. With 563 stars and 48 forks, it has earned a respectable position as a reference point for academic and experimental time-series forecasting. However, its defensibility is limited (4/10) because it functions primarily as a 'frozen' research artifact rather than a living production library; the 0.0 velocity indicates a lack of active maintenance.

From a competitive standpoint, the project faces immense pressure from two sides: specialized time-series foundation models (TSFMs) and frontier labs. Models like Google's TimesFM, Amazon's Chronos, and Salesforce's Moirai are built from the ground up for time-series and often outperform the 'LLM-adaptation' approach proposed here. Simultaneously, platforms like Nixtla are consolidating the ecosystem with production-ready tools (TimeGPT).

Frontier risk is high because hyperscalers (AWS, Google) are already integrating specialized forecasting capabilities directly into their cloud AI suites (e.g., SageMaker Canvas, Vertex AI). While LLM4TS was a novel combination of LLM architecture and TS data at its inception (~992 days ago), it is being rapidly displaced by native foundation models that do not require the overhead of a multi-billion-parameter language model to predict numerical trends.
TECH STACK
INTEGRATION: reference_implementation
READINESS