AI for Time Series and Forecasting
<div style="background-color: #4B0082; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
{{BloomIntro}}
AI for time series and forecasting applies machine learning and deep learning techniques to sequential, time-indexed data to predict future values, detect anomalies, and extract patterns. Time series data is ubiquitous: stock prices, electricity demand, web traffic, sensor readings, weather measurements, and patient vital signs all evolve over time. Traditional forecasting relied on statistical models like ARIMA; modern AI-driven approaches – including LSTMs, Temporal Fusion Transformers, and foundation models for time series – now achieve state-of-the-art performance across domains.
</div>
__TOC__
<div style="background-color: #000080; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Remembering</span> ==
* '''Time series''' – A sequence of data points indexed in time order, typically at regular intervals.
* '''Forecasting''' – Predicting future values of a time series based on its historical patterns.
* '''Univariate time series''' – A single variable measured over time (e.g., daily sales).
* '''Multivariate time series''' – Multiple variables measured simultaneously over time (e.g., temperature, humidity, and pressure together).
* '''Trend''' – The long-term direction of a time series (upward, downward, or flat).
* '''Seasonality''' – Regular, periodic patterns that repeat at known intervals (daily, weekly, yearly).
* '''Residuals''' – The component remaining after removing trend and seasonality; ideally random noise.
* '''Stationarity''' – A time series is stationary if its statistical properties (mean, variance) do not change over time. Many models require stationarity.
* '''Autocorrelation''' – The correlation of a time series with its own past values (lags).
* '''Lag''' – A prior time step. Lag-1 is yesterday's value; lag-7 is last week's value.
* '''ARIMA''' – AutoRegressive Integrated Moving Average; a classical statistical model for univariate forecasting.
* '''LSTM (Long Short-Term Memory)''' – A type of RNN with gating mechanisms that captures long-range dependencies in sequences.
* '''Temporal Fusion Transformer (TFT)''' – A transformer-based model for multi-horizon time series forecasting, incorporating attention across time.
* '''Anomaly detection''' – Identifying data points, intervals, or patterns that deviate significantly from expected behavior.
* '''Horizon''' – The number of future time steps to forecast (1-step-ahead vs. multi-step/multi-horizon).
* '''Rolling forecast''' – Re-fitting or updating the model as new data arrives, maintaining accuracy over time.
</div>
<div style="background-color: #006400; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Understanding</span> ==
Time series forecasting is inherently a sequential problem: the order of observations matters, and the past contains information about the future. This distinguishes it from tabular classification, where rows are exchangeable.

'''The decomposition framework''' is key to understanding time series:
<syntaxhighlight lang="text">
Observed = Trend × Seasonal × Residual   (multiplicative)
         = Trend + Seasonal + Residual   (additive)
</syntaxhighlight>
Decomposing a series into these components enables targeted modeling: model the trend with regression, the seasonality with Fourier features or indicator variables, and the residual with a neural network or ARIMA.

'''Why deep learning?''' Classical models like ARIMA excel at capturing simple autocorrelation but struggle with:
* Non-linear relationships between variables
* Multiple interacting series (multivariate)
* Complex, multi-scale seasonality
* Incorporating exogenous variables (weather, holidays, promotions)
LSTMs can capture non-linear temporal dependencies and handle arbitrary-length sequences.
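A minimal sketch of such an LSTM forecaster, assuming PyTorch (the window size, hidden size, and the synthetic sine-wave data below are illustrative, not from this article):

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """One-step-ahead forecaster: maps a window of past values to the next value."""
    def __init__(self, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                    # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)                # out: (batch, seq_len, hidden)
        return self.head(out[:, -1, :])      # last hidden state -> (batch, 1)

# Toy usage: predict the next point of a sine wave from the previous 30 points
t = torch.arange(0, 100, 0.1)
series = torch.sin(t)
window = 30
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```

Because the network sees the whole window through its recurrent state, lag features need not be engineered by hand, which is exactly the advantage over the linear autoregressive form of ARIMA.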
Transformers add the ability to attend to any past time step directly, avoiding the vanishing gradient problem over long sequences. Foundation models for time series (TimeGPT, MOIRAI, Chronos) pre-trained on billions of time points can zero-shot forecast on new series.

'''Evaluation discipline''': A critical mistake in time series is using random train/test splits. This causes data leakage – future data leaks into the training set. Always use chronological splits: train on the first 70–80%, validate on the next 10–15%, test on the most recent 10–15%.
</div>
<div style="background-color: #8B0000; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Applying</span> ==
'''Multi-horizon forecasting with Temporal Fusion Transformer (PyTorch Forecasting):'''
<syntaxhighlight lang="python">
import pandas as pd
from pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer
from pytorch_forecasting.metrics import QuantileLoss
import lightning.pytorch as pl

# Load data: each row = one time step for one series
# (parse dates so the .dt accessor below works)
df = pd.read_csv("sales_data.csv", parse_dates=["date"])
df["time_idx"] = (df["date"] - df["date"].min()).dt.days  # integer time index

max_encoder_length = 60     # use 60 past days as context
max_prediction_length = 14  # forecast 14 days ahead

# Training dataset: hold out the final horizon for validation
training = TimeSeriesDataSet(
    df[lambda x: x.time_idx <= x.time_idx.max() - max_prediction_length],
    time_idx="time_idx",
    target="sales",
    group_ids=["store_id", "product_id"],  # multiple series
    min_encoder_length=30,
    max_encoder_length=max_encoder_length,
    max_prediction_length=max_prediction_length,
    static_categoricals=["store_id", "product_id"],
    time_varying_known_reals=["time_idx", "price", "day_of_week", "is_holiday"],
    time_varying_unknown_reals=["sales"],  # only the target is "unknown" in the future
    target_normalizer="auto",
)
validation = TimeSeriesDataSet.from_dataset(training, df, predict=True, stop_randomization=True)
train_dl = training.to_dataloader(train=True, batch_size=64)
val_dl = validation.to_dataloader(train=False, batch_size=64)

# TFT model
tft = TemporalFusionTransformer.from_dataset(
    training,
    learning_rate=0.03,
    hidden_size=64,
    attention_head_size=4,
    dropout=0.1,
    hidden_continuous_size=32,
    output_size=7,  # 7 quantile predictions (p10 to p90)
    loss=QuantileLoss(),
    log_interval=10,
)

trainer = pl.Trainer(max_epochs=30, accelerator="gpu", gradient_clip_val=0.1)
trainer.fit(tft, train_dl, val_dl)
</syntaxhighlight>
; Model selection guide by forecasting scenario
: '''Simple univariate, clean seasonality''' – SARIMA, Prophet (Meta), ETS
: '''Univariate with complex patterns''' – N-BEATS, N-HiTS, PatchTST
: '''Multivariate with known future covariates''' – Temporal Fusion Transformer, DeepAR
: '''Very short series or irregular intervals''' – Gaussian Processes, ARIMA
: '''Many series, zero-shot''' – TimeGPT, Chronos, MOIRAI (foundation models)
: '''Anomaly detection''' – Isolation Forest (tabular features), LSTM-AD, Anomaly Transformer
</div>
<div style="background-color: #8B4500; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Analyzing</span> ==
{| class="wikitable"
|+ Time Series Model Comparison
! Model !! Type !! Strengths !! Weaknesses
|-
| ARIMA/SARIMA || Statistical || Interpretable, fast, works on small data || Assumes linearity, one series at a time
|-
| Prophet || Statistical || Handles holidays, trend changepoints || Single series at a time; limited covariate support (extra regressors only)
|-
| DeepAR || Deep Learning (LSTM) || Probabilistic, many series || Needs lots of data, slow training
|-
| TFT || Transformer || Multi-horizon, covariate-rich, interpretable || Complex, high data requirement
|-
| N-BEATS || Deep Learning (MLP) || Fast, competitive, no feature engineering || Limited covariate support
|-
| Chronos (foundation) || LLM-style || Zero-shot, no training needed || No covariate support yet; large model
|}
'''Failure modes:'''
* '''Chronological leakage''' – Random train/test splits allow future data to inform past predictions, producing falsely optimistic results. Always split chronologically.
* '''Ignoring non-stationarity''' – Many models assume stationarity. Differencing (ARIMA) or per-series normalization is required.
* '''Ignoring distributional shift''' – Retail models trained pre-COVID performed terribly during COVID. Extreme events cause structural breaks that no model trained on historical data anticipates.
* '''Point forecast overconfidence''' – Reporting only mean forecasts without uncertainty intervals. Downstream planning needs to understand the range of outcomes, not just the median.
* '''Evaluation on last segment only''' – Evaluating only on the final test period may not represent the model's general quality. Use rolling window backtesting across multiple historical windows.
</div>
<div style="background-color: #483D8B; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Evaluating</span> ==
Expert time series evaluation uses multiple metrics and rigorous experimental design:

'''Regression metrics''': MAE (Mean Absolute Error), RMSE, MAPE (Mean Absolute Percentage Error), sMAPE.
MAPE is undefined when actual = 0 and is skewed by near-zero values; sMAPE or MAE are more robust.

'''Probabilistic metrics''': For probabilistic forecasts (quantile or interval), use CRPS (Continuous Ranked Probability Score) or the Winkler score. These reward well-calibrated uncertainty.

'''Rolling window backtesting''': Instead of one train/test split, slide a window across history – train on windows [0:T], [0:T+1], … and evaluate on each subsequent step. This tests the model across many historical regimes and avoids cherry-picking a favorable test period.

'''Naive benchmarks''': Always compare to: naive (last value), seasonal naive (same period last cycle), and exponential smoothing. If a complex deep learning model cannot beat seasonal naive, it's not adding value.

Expert practitioners report backtesting results as distributions (mean ± std across windows) rather than a single number, and explicitly test for robustness during unusual periods (holidays, pandemics, market crashes).
</div>
<div style="background-color: #2F4F4F; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Creating</span> ==
Designing a production time series forecasting system:

'''1. Data architecture'''
<syntaxhighlight lang="text">
Raw time series sources (databases, IoT, APIs)
  ↓
[Time-indexed storage: InfluxDB, TimescaleDB, or Parquet partitioned by date]
  ↓
[Feature engineering pipeline:]
  ├── Temporal features: hour, day of week, month, quarter
  ├── Lag features: lag-1, lag-7, lag-28, rolling mean/std
  ├── Fourier features for seasonality
  └── External covariates: weather, holidays, promotions
  ↓
[Stationarity tests + differencing if needed]
  ↓
[Train/val/test split: chronological]
</syntaxhighlight>
'''2. Model training and selection'''
<syntaxhighlight lang="text">
Train multiple models (baseline naive, SARIMA, TFT, foundation model)
  ↓
Evaluate each with rolling window backtesting
  ↓
Select winner by held-out test MAPE and CRPS
  ↓
Train ensemble: weighted average of top-3 models (often beats any single model)
  ↓
Register in model registry with evaluation metrics
</syntaxhighlight>
'''3. Production serving and retraining'''
* Serve forecasts via API with caching (a same-day forecast is rarely regenerated)
* Nightly retrain on the latest data window (rolling retrain strategy)
* Monitor forecast accuracy vs. actuals in real time; alert on anomalies
* Detect distribution shift: plot forecast distribution vs. actuals weekly
* Trigger manual review when MAE exceeds its historical 95th percentile
[[Category:Artificial Intelligence]]
[[Category:Machine Learning]]
[[Category:Time Series]]
</div>
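The rolling window backtesting and naive benchmarks from the Evaluating section can be sketched in a few lines of NumPy. Everything here is illustrative: `rolling_backtest` is a hypothetical helper, and the data is a synthetic series with weekly seasonality.

```python
import numpy as np

def rolling_backtest(series, horizon, n_windows, forecaster):
    """Expanding-window backtest: train on series[:t], forecast the next `horizon` steps."""
    maes = []
    for i in range(n_windows):
        t = len(series) - (n_windows - i) * horizon
        train, actual = series[:t], series[t:t + horizon]
        maes.append(np.mean(np.abs(forecaster(train, horizon) - actual)))
    return np.array(maes)  # one MAE per backtest window

# The two naive benchmarks every model must beat
naive = lambda train, h: np.repeat(train[-1], h)   # repeat the last observed value
seasonal_naive = lambda train, h: train[-7:][:h]   # same period last weekly cycle

# Synthetic year of daily data with a weekly cycle plus noise
rng = np.random.default_rng(0)
days = np.arange(365)
series = 10 + 3 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 0.5, 365)

for name, f in [("naive", naive), ("seasonal naive", seasonal_naive)]:
    maes = rolling_backtest(series, 7, 8, f)
    print(f"{name}: MAE {maes.mean():.2f} ± {maes.std():.2f}")
```

Reporting the mean ± std across the eight windows, rather than a single test-set score, is exactly the "distributions, not point numbers" discipline described above; on this seasonal series the seasonal-naive benchmark should come out clearly ahead of plain naive.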