Core Module Documentation#

class neuralprophet.forecaster.NeuralProphet(growth='linear', changepoints=None, n_changepoints=10, changepoints_range=0.8, trend_reg=0, trend_reg_threshold=False, trend_global_local='global', yearly_seasonality='auto', weekly_seasonality='auto', daily_seasonality='auto', seasonality_mode='additive', seasonality_reg=0, season_global_local='global', n_forecasts=1, n_lags=0, num_hidden_layers=0, d_hidden=None, ar_reg=None, learning_rate=None, epochs=None, batch_size=None, loss_func='Huber', optimizer='AdamW', newer_samples_weight=2, newer_samples_start=0.0, quantiles=None, impute_missing=True, impute_linear=10, impute_rolling=10, drop_missing=False, collect_metrics=True, normalize='auto', global_normalization=False, global_time_normalization=True, unknown_data_normalization=False)#

NeuralProphet forecaster.

A simple yet powerful forecaster that models trend, seasonality, events, holidays, auto-regression, lagged covariates, and future-known regressors. It can be regularized and configured to model nonlinear relationships.

Parameters:
  • growth ({'off', 'linear', 'discontinuous'}, default 'linear') –

    Set the trend growth type.

    Options:
    • off: no trend.

    • (default) linear: fits a piece-wise linear trend with n_changepoints + 1 segments

    • discontinuous: For advanced users only. Not a conventional trend; allows arbitrary jumps at each trend changepoint

  • changepoints ({list of str, list of np.datetimes or np.array of np.datetimes}, optional) –

    Manually set dates at which to include potential changepoints.

    Note

    Does not accept np.array of np.str. If not specified, potential changepoints are selected automatically.

  • n_changepoints (int) –

    Number of potential trend changepoints to include.

    Note

    Changepoints are selected uniformly from the first changepoints_range proportion of the history. Ignored if a manual changepoints list is supplied.

  • changepoints_range (float) –

    Proportion of history in which trend changepoints will be estimated.

    e.g. set to 0.8 to allow changepoints only in the first 80% of training data. Ignored if manual changepoints list is supplied.

  • trend_reg (float, optional) –

    Parameter modulating the flexibility of the automatic changepoint selection.

    Note

    Large values (~1-100) will limit the variability of changepoints. Small values (~0.001-1.0) will allow changepoints to change faster. default: 0 will fully fit a trend to each segment.

  • trend_reg_threshold (bool, optional) –

    Allowance for trend to change without regularization.

    Options
    • True: Automatically set to a value that leads to a smooth trend.

    • (default) False: All changes in changepoints are regularized

  • trend_global_local (str, default 'global') –

    Modelling strategy of the trend when multiple time series are present.

    Options:
    • global: All the elements are modelled with the same trend.

    • local: Each element is modelled with a different trend.

    Note

    When only one time series is input, this parameter should not be provided. Internally it will be set to global, meaning that all the elements (only one in this case) are modelled with the same trend.

  • yearly_seasonality (bool, int) –

    Fit yearly seasonality.

    Options
    • True or False

    • auto: set automatically

    • value: number of Fourier/linear terms to generate

  • weekly_seasonality (bool, int) –

    Fit weekly seasonality.

    Options
    • True or False

    • auto: set automatically

    • value: number of Fourier/linear terms to generate

  • daily_seasonality (bool, int) –

    Fit daily seasonality.

    Options
    • True or False

    • auto: set automatically

    • value: number of Fourier/linear terms to generate

  • seasonality_mode (str) –

    Specifies mode of seasonality

    Options
    • (default) additive

    • multiplicative

  • seasonality_reg (float, optional) –

    Parameter modulating the strength of the seasonality model.

    Note

    Smaller values (~0.1-1) allow the model to fit larger seasonal fluctuations, larger values (~1-100) dampen the seasonality. default: 0, no regularization

  • season_global_local (str, default 'global') –

    Modelling strategy of the seasonality when multiple time series are present. Options:

    • global: All the elements are modelled with the same seasonality.

    • local: Each element is modelled with a different seasonality.

    Note

    When only one time series is input, this parameter should not be provided. Internally it will be set to global, meaning that all the elements (only one in this case) are modelled with the same seasonality.

  • n_lags (int) – Previous time series steps to include in auto-regression (i.e. the AR order)

  • ar_reg (float, optional) –

    How much sparsity to induce in the AR-coefficients.

    Note

    Large values (~1-100) will limit the number of nonzero coefficients dramatically. Small values (~0.001-1.0) will allow more non-zero coefficients. default: None, no regularization of coefficients.

  • n_forecasts (int) – Number of steps ahead of prediction time step to forecast.

  • num_hidden_layers (int, optional) – number of hidden layers to include in the AR-Net (defaults to 0)

  • d_hidden (int, optional) – dimension of hidden layers of the AR-Net. Ignored if num_hidden_layers == 0.

  • learning_rate (float) –

    Maximum learning rate setting for 1cycle policy scheduler.

    Note

    Default None: Automatically sets the learning_rate based on a learning rate range test. For manual input, try values ~0.001-10.

  • epochs (int) –

    Number of epochs (complete iterations over dataset) to train model.

    Note

    Default None: Automatically sets the number of epochs based on dataset size. For best results, also leave batch_size as None. For manual values, try ~5-500.

  • batch_size (int) –

    Number of samples per mini-batch.

    If not provided, batch_size is approximated based on dataset size. For manual values, try ~8-1024. For best results, also leave epochs as None.

  • newer_samples_weight (float, default 2.0) –

    Sets factor by which the model fit is skewed towards more recent observations.

    Controls the factor by which final samples are weighted more compared to initial samples. Applies a positional weighting to each sample’s loss value.

    e.g. newer_samples_weight = 2: final samples are weighted twice as much as initial samples.

  • newer_samples_start (float, default 0.0) –

    Sets beginning of ‘newer’ samples as fraction of training data.

    Throughout the range of ‘newer’ samples, the weight is increased from 1.0/newer_samples_weight initially to 1.0 at the end, in a monotonically increasing function (cosine from pi to 2*pi).

  • loss_func (str, torch.nn.functional.loss) –

    Type of loss to use:

    Options
    • (default) Huber: Huber loss function

    • MSE: Mean Squared Error loss function

    • MAE: Mean Absolute Error loss function

    • torch.nn.functional.loss: a loss function or callable for a custom loss, e.g. L1 loss

    Examples

    >>> from neuralprophet import NeuralProphet
    >>> import torch
    >>> m = NeuralProphet(loss_func=torch.nn.L1Loss)
    

  • collect_metrics (list of str, bool) –

    Set metrics to compute.

    Options
    • (default) True: [mae, rmse]

    • False: No metrics

    • list: Valid options: [mae, rmse, mse]

    Examples

    >>> from neuralprophet import NeuralProphet
    >>> m = NeuralProphet(collect_metrics=["MSE", "MAE", "RMSE"])
    

  • quantiles (list, default None) – A list of float values in (0, 1) indicating the set of quantiles to be estimated.

  • impute_missing (bool) –

    Whether to automatically impute missing dates/values

    Note

    imputation follows a linear method up to 20 missing values, more are filled with trend.

  • impute_linear (int) – maximal number of missing dates/values to be imputed linearly (default: 10)

  • impute_rolling (int) – maximal number of missing dates/values to be imputed using rolling average (default: 10)

  • drop_missing (bool) –

    Whether to automatically drop missing samples from the data

    Options
    • (default) False: Samples containing NaN values are not dropped.

    • True: Any sample containing at least one NaN value will be dropped.

  • normalize (str) –

    Type of normalization to apply to the time series.

    Options
    • off: bypasses data normalization

    • (default, binary timeseries) minmax: scales the minimum value to 0.0 and the maximum value to 1.0

    • standardize: zero-centers and divides by the standard deviation

    • (default) soft: scales the minimum value to 0.0 and the 95th quantile to 1.0

    • soft1: scales the minimum value to 0.1 and the 90th quantile to 0.9

  • global_normalization (bool) –

    Activation of global normalization

    Options
    • True: data normalization parameters are computed globally, across all time series

    • (default) False: local normalization

  • global_time_normalization (bool) –

    Specifies global time normalization

    Options
    • (default) True: only valid in case of global modeling with local normalization

    • False: set time data_params locally

  • unknown_data_normalization (bool) –

    Specifies unknown data normalization

    Options
    • True: test data is normalized with global data params even if trained with local data params (global modeling with local normalization)

    • (default) False: no global modeling with local normalization
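
Examples

A minimal usage sketch (df is a pd.DataFrame with columns ds and y, following the conventions used throughout this page):

>>> from neuralprophet import NeuralProphet
>>> # auto-regression over the last 7 steps, forecasting 3 steps ahead
>>> m = NeuralProphet(n_lags=7, n_forecasts=3, weekly_seasonality=True)
>>> metrics = m.fit(df, freq="D")
>>> forecast = m.predict(df)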

add_country_holidays(country_name, lower_window=0, upper_window=0, regularization=None, mode='additive')#

Add a country to the NeuralProphet object to include country-specific holidays, and create the corresponding configs such as lower and upper windows and the regularization parameter.

Holidays can only be added for a single country. Calling the function multiple times will override already added country holidays.

Parameters:
  • country_name (string) – name of the country

  • lower_window (int) – the lower window for all the country holidays

  • upper_window (int) – the upper window for all the country holidays

  • regularization (float) – optional scale for regularization strength

  • mode (str) – additive (default) or multiplicative.
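
Examples

A brief sketch; the country code and one-day event windows are illustrative (the chaining style mirrors the add_events example further below):

>>> m = NeuralProphet()
>>> m = m.add_country_holidays("US", lower_window=-1, upper_window=1)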

add_events(events, lower_window=0, upper_window=0, regularization=None, mode='additive')#

Add user-specified events, with their corresponding lower and upper windows and regularization parameters, to the NeuralProphet object.

Parameters:
  • events (str, list) – name or list of names of user specified events

  • lower_window (int) – the lower window for the events in the list of events

  • upper_window (int) – the upper window for the events in the list of events

  • regularization (float) – optional scale for regularization strength

  • mode (str) – additive (default) or multiplicative.
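
Examples

A sketch mirroring the events workflow shown under make_future_dataframe below; df and events_df (columns ds and event) are assumed to exist:

>>> m = NeuralProphet()
>>> m = m.add_events(["playoff", "superbowl"], lower_window=-1, upper_window=1)
>>> # expand the events into the training data
>>> history_df = m.create_df_with_events(df, events_df)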

add_future_regressor(name, regularization=None, normalize='auto', mode='additive')#

Add a regressor as lagged covariate with order 1 (scalar) or as known in advance (also scalar).

The dataframe passed to fit() and predict() will have a column with the specified name to be used as a regressor. When normalize=True, the regressor will be normalized unless it is binary.

Note

Future regressors have to be known for the entire forecast horizon, i.e. n_forecasts steps into the future.

Parameters:
  • name (string) – name of the regressor.

  • regularization (float) – optional scale for regularization strength

  • normalize (bool) –

    optional, specify whether this regressor will be normalized prior to fitting.

    Note

    if auto, binary regressors will not be normalized.

  • mode (str) – additive (default) or multiplicative.
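
Examples

An illustrative sketch; the regressor name ‘temperature’ is hypothetical, and its future values must later be supplied via make_future_dataframe:

>>> m = NeuralProphet(n_forecasts=3)
>>> m = m.add_future_regressor("temperature", mode="additive")
>>> # the df passed to fit() must now contain a 'temperature' column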

add_lagged_regressor(names, n_lags: Union[int, Literal['auto', 'scalar']] = 'auto', regularization: Optional[float] = None, normalize='auto')#

Add a covariate or list of covariate time series as additional lagged regressors to be used for fitting and predicting. The dataframe passed to fit and predict will have the column with the specified name to be used as lagged regressor. When normalize=True, the covariate will be normalized unless it is binary.

Parameters:
  • names (string or list) – name of the regressor/list of regressors.

  • n_lags (int or str) –

    Previous regressor time steps to use as input in the predictor (covariate order).

    Options
    • (default) auto: time steps will be equivalent to the AR order

    • scalar: all the regressors will only use the last known value as input

  • regularization (float) – optional scale for regularization strength

  • normalize (bool) – optional, specify whether this regressor will be normalized prior to fitting. If auto, binary regressors will not be normalized.
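
Examples

An illustrative sketch; the covariate name ‘price’ is hypothetical and must be present as a column in the df passed to fit() and predict():

>>> m = NeuralProphet(n_lags=7, n_forecasts=3)
>>> m = m.add_lagged_regressor("price", n_lags="auto")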

add_seasonality(name, period, fourier_order)#

Add a seasonal component with specified period, number of Fourier components, and regularization.

Increasing the number of Fourier components allows the seasonality to change more quickly (at risk of overfitting). Note: regularization and mode (additive/multiplicative) are set in the main init.

Parameters:
  • name (string) – name of the seasonality component.

  • period (float) – number of days in one period.

  • fourier_order (int) – number of Fourier components to use.
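
Examples

A sketch adding a custom monthly component; the 30.5-day period is an approximation and the order is illustrative:

>>> m = NeuralProphet(weekly_seasonality=False)
>>> m = m.add_seasonality(name="monthly", period=30.5, fourier_order=5)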

create_df_with_events(df, events_df)#

Create a concatenated dataframe with the time series data along with the events data expanded.

Parameters:
  • df (pd.DataFrame) – dataframe containing columns ds, y, and optionally ID, with all data

  • events_df (dict, pd.DataFrame) – containing columns ds and event

Returns:

columns y, ds, and other user-specified events

Return type:

dict, pd.DataFrame

crossvalidation_split_df(df, freq='auto', k=5, fold_pct=0.1, fold_overlap_pct=0.5, global_model_cv_type='global-time')#

Splits timeseries data in k folds for crossvalidation.

Parameters:
  • df (pd.DataFrame) – dataframe containing columns ds, y, and optionally ID, with all data

  • freq (str) –

    Data step sizes, i.e. frequency of data recording.

    Note

    Any valid frequency for pd.date_range, such as 5min, D, MS or auto (default) to automatically set frequency.

  • k (int) – number of CV folds

  • fold_pct (float) – percentage of overall samples to be in each fold

  • fold_overlap_pct (float) – percentage of overlap between the validation folds.

  • global_model_cv_type (str) –

    Type of crossvalidation to apply to the dict of time series.

    Options
    • (default) global-time: crossvalidation is performed according to a timestamp threshold.

    • local: each episode will be crossvalidated locally (may cause time leakage among different episodes)

    • intersect: only the time intersection of all the episodes will be considered. A considerable amount of data may not be used. However, this approach guarantees an equal number of train/test samples for each episode.

Returns:

training data

validation data

Return type:

list of k tuples [(df_train, df_val), …]

See also

split_df

Splits timeseries df into train and validation sets.

double_crossvalidation_split_df

Splits timeseries data in two sets of k folds for crossvalidation on training and testing data.

Examples

>>> df1 = pd.DataFrame({'ds': pd.date_range(start = '2022-12-01', periods = 10, freq = 'D'),
...                     'y': [9.59, 8.52, 8.18, 8.07, 7.89, 8.09, 7.84, 7.65, 8.71, 8.09]})
>>> df2 = pd.DataFrame({'ds': pd.date_range(start = '2022-12-02', periods = 10, freq = 'D'),
...                     'y': [8.71, 8.09, 7.84, 7.65, 8.02, 8.52, 8.18, 8.07, 8.25, 8.30]})
>>> df3 = pd.DataFrame({'ds': pd.date_range(start = '2022-12-03', periods = 10, freq = 'D'),
...                     'y': [7.67, 7.64, 7.55, 8.25, 8.32, 9.59, 8.52, 7.55, 8.25, 8.09]})
>>> df3
    ds              y
0   2022-12-03      7.67
1   2022-12-04      7.64
2   2022-12-05      7.55
3   2022-12-06      8.25
4   2022-12-07      8.32
5   2022-12-08      9.59
6   2022-12-09      8.52
7   2022-12-10      7.55
8   2022-12-11      8.25
9   2022-12-12      8.09

You can create folds for a single dataframe.

>>> folds = m.crossvalidation_split_df(df3, k = 2, fold_pct = 0.2)
>>> folds
[(  ds            y
    0 2022-12-03  7.67
    1 2022-12-04  7.64
    2 2022-12-05  7.55
    3 2022-12-06  8.25
    4 2022-12-07  8.32
    5 2022-12-08  9.59
    6 2022-12-09  8.52,
    ds            y
    0 2022-12-10  7.55
    1 2022-12-11  8.25),
(   ds            y
    0 2022-12-03  7.67
    1 2022-12-04  7.64
    2 2022-12-05  7.55
    3 2022-12-06  8.25
    4 2022-12-07  8.32
    5 2022-12-08  9.59
    6 2022-12-09  8.52
    7 2022-12-10  7.55,
    ds            y
    0 2022-12-11  8.25
    1 2022-12-12  8.09)]

We can also create a df with many IDs.

>>> df1['ID'] = 'data1'
>>> df2['ID'] = 'data2'
>>> df3['ID'] = 'data3'
>>> df = pd.concat((df1, df2, df3))

When using a df with many IDs, there are three types of possible crossvalidation. The default crossvalidation is performed according to a timestamp threshold. In this case, we can have a different number of samples for each time series per fold. This approach prevents time leakage.

>>> folds = m.crossvalidation_split_df(df, k = 2, fold_pct = 0.2)

Notice how each of the folds has a different number of samples in the validation set. Nonetheless, time leakage does not occur.

>>> folds[0][1]
    ds      y       ID
0   2022-12-10      8.09    data1
1   2022-12-10      8.25    data2
2   2022-12-11      8.30    data2
3   2022-12-10      7.55    data3
4   2022-12-11      8.25    data3
>>> folds[1][1]
    ds      y       ID
0   2022-12-11      8.30    data2
1   2022-12-11      8.25    data3
2   2022-12-12      8.09    data3

In some applications, crossvalidating each of the time series locally may be more appropriate.

>>> folds = m.crossvalidation_split_df(df, k = 2, fold_pct = 0.2, global_model_cv_type = 'local')

This way, we avoid a different number of validation samples in each fold.

>>> folds[0][1]
    ds      y       ID
0   2022-12-08      7.65    data1
1   2022-12-09      8.71    data1
2   2022-12-09      8.07    data2
3   2022-12-10      8.25    data2
4   2022-12-10      7.55    data3
5   2022-12-11      8.25    data3
>>> folds[1][1]
    ds      y       ID
0   2022-12-09      8.71    data1
1   2022-12-10      8.09    data1
2   2022-12-10      8.25    data2
3   2022-12-11      8.30    data2
4   2022-12-11      8.25    data3
5   2022-12-12      8.09    data3

The last type of global model crossvalidation uses the time intersection among all the time series. There is no time leakage in this case, and we preserve the same number of samples per fold. The only drawback of this approach is that some of the samples may not be used (those not in the time intersection).

>>> folds = m.crossvalidation_split_df(df, k = 2, fold_pct = 0.2, global_model_cv_type = 'intersect')
>>> folds[0][1]
    ds      y       ID
0   2022-12-09      8.71    data1
1   2022-12-09      8.07    data2
2   2022-12-09      8.52    data3
>>> folds[1][1]
    ds      y       ID
0   2022-12-10      8.09    data1
1   2022-12-10      8.25    data2
2   2022-12-10      7.55    data3

double_crossvalidation_split_df(df, freq='auto', k=5, valid_pct=0.1, test_pct=0.1)#

Splits timeseries data in two sets of k folds for crossvalidation on training and testing data.

Parameters:
  • df (pd.DataFrame) – dataframe containing columns ds, y, and optionally ID, with all data

  • freq (str) –

    Data step sizes, i.e. frequency of data recording.

    Note

    Any valid frequency for pd.date_range, such as 5min, D, MS or auto (default) to automatically set frequency.

  • k (int) – number of CV folds

  • valid_pct (float) – percentage of overall samples to be in validation

  • test_pct (float) – percentage of overall samples to be in test

Returns:

elements same as crossvalidation_split_df() returns

Return type:

tuple of k tuples [(folds_val, folds_test), …]
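
Examples

A brief sketch, analogous to crossvalidation_split_df() above:

>>> folds_val, folds_test = m.double_crossvalidation_split_df(df, k=5, valid_pct=0.1, test_pct=0.1)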

fit(df, freq='auto', validation_df=None, progress='bar', minimal=False)#

Train, and potentially evaluate, the model.

Training/validation metrics may be distorted in case of auto-regression, if a large number of NaN values are present in df and/or validation_df.

Parameters:
  • df (pd.DataFrame) – containing columns ds, y, and optionally ID, with all data

  • freq (str) –

    Data step sizes, i.e. frequency of data recording.

    Note

    Any valid frequency for pd.date_range, such as 5min, D, MS or auto (default) to automatically set frequency.

  • validation_df (pd.DataFrame, dict) – if provided, model performance will be evaluated after each training epoch on this data.

  • progress (str) –

    Method of progress display

    Options
    • (default) bar: display an updating progress bar (tqdm)

    • print: print out progress (fallback option)

    • plot: plot a live updating graph of the training loss; requires the [live] install or the livelossplot package

    • plot-all: extended to all recorded metrics

  • minimal (bool) – whether to train without any printouts or metrics collection

Returns:

metrics with training and potentially evaluation metrics

Return type:

pd.DataFrame
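
Examples

A sketch of training with per-epoch validation; split_df() is documented further below:

>>> df_train, df_val = m.split_df(df, valid_p=0.2)
>>> metrics = m.fit(df_train, freq="D", validation_df=df_val)
>>> metrics.tail(1)  # metrics of the final epoch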

get_latest_forecast(fcst, df_name=None, include_history_data=False, include_previous_forecasts=0)#

Get the latest NeuralProphet forecast, optionally including historical data.

Parameters:
  • fcst (pd.DataFrame, dict) – output of self.predict.

  • df_name (str) – ID of the time series to be forecast

  • include_history_data (bool) – specifies whether to include historical data

  • include_previous_forecasts (int) – specifies how many forecasts before latest forecast to include

Returns:

columns ds, y, and [origin-<i>]

Note

where origin-<i> refers to the (i+1)-th latest prediction for this row’s datetime. e.g. origin-3 is the prediction for this datetime, predicted 4 steps before the last step. The very latest prediction is origin-0.

Return type:

pd.DataFrame

Examples

We may get the df of the latest forecast:

>>> forecast = m.predict(df)
>>> df_forecast = m.get_latest_forecast(forecast)

A number of forecasts preceding the latest one can be included:

>>> df_forecast = m.get_latest_forecast(forecast, include_previous_forecasts=3)

Historical data can be included as well; however, be aware that the resulting df may be large:

>>> df_forecast = m.get_latest_forecast(forecast, include_history_data=True)

handle_negative_values(df, handle='remove', columns=None)#

Handle negative values in the given columns. If no columns or handling are specified, negative values in all numeric columns are removed.

Parameters:
  • df (pd.DataFrame) – dataframe containing column ds, y with all data

  • handle ({str, int, float}, optional) –

    specifies handling of negative values in the given columns. Can be one of the following options:

    Options
    • (default) remove: Remove all negative values in the specified columns.

    • error: Raise an error in case of a negative value.

    • float or int: Replace negative values with the provided value.

  • columns (list of str, optional) – names of the columns to process

Returns:

input df with negative values handled

Return type:

pd.DataFrame
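
Examples

An illustrative sketch; each line shows one handling mode on a hypothetical df:

>>> df = m.handle_negative_values(df, handle="remove", columns=["y"])  # remove negative values
>>> df = m.handle_negative_values(df, handle=0.0, columns=["y"])       # replace negatives with 0.0
>>> df = m.handle_negative_values(df, handle="error", columns=["y"])   # raise on negatives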

highlight_nth_step_ahead_of_each_forecast(step_number=None)#

Set which forecast step to focus on for metrics evaluation and plotting.

Parameters:

step_number (int) –

i-th step ahead forecast to use for statistics and plotting.

Note

Set to None to reset.
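
Examples

A sketch for a multi-step model (step_number may not exceed n_forecasts):

>>> m = NeuralProphet(n_lags=7, n_forecasts=3)
>>> m.highlight_nth_step_ahead_of_each_forecast(step_number=3)
>>> # reset the focus
>>> m.highlight_nth_step_ahead_of_each_forecast(None)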

make_future_dataframe(df, events_df=None, regressors_df=None, periods=None, n_historic_predictions=False)#

Extends dataframe a number of periods (time steps) into the future.

Only use if you predict into the unknown future. New timestamps are added to the historic dataframe, with the ‘y’ column being NaN, as it remains to be predicted. Further, the given future events and regressor values are added to the new timestamps. The returned dataframe will include the historic data needed to additionally produce n_historic_predictions, for which there are historic observations of the series ‘y’.

Parameters:
  • df (pd.DataFrame) – History to date. DataFrame containing all columns up to present

  • events_df (pd.DataFrame) – Future event occurrences corresponding to periods steps into future. Contains columns ds and event. The event column contains the name of the event.

  • regressors_df (pd.DataFrame) – Future regressor values corresponding to periods steps into future. Contains column ds and one column for each of the external regressors.

  • periods (int) – number of steps to extend the DataFrame into the future

  • n_historic_predictions (bool, int) – Includes historic data needed to predict n_historic_predictions timesteps, for which there are historic observations of the series ‘y’. False: drop historic data except for needed inputs to predict the future. True: include the entire history.

Returns:

input df with ds extended into the future, y set to NaN, and future events and regressors added.

Return type:

pd.DataFrame

Examples

>>> from neuralprophet import NeuralProphet
>>> m = NeuralProphet()
>>> # set the model to expect these events
>>> m = m.add_events(["playoff", "superbowl"])
>>> # create the data df with events
>>> history_df = m.create_df_with_events(df, events_df)
>>> metrics = m.fit(history_df, freq="D")
>>> # forecast with events known ahead
>>> future = m.make_future_dataframe(
...     history_df, events_df, periods=365, n_historic_predictions=180
... )
>>> # get 180 past and 365 future predictions
>>> forecast = m.predict(df=future)

plot(fcst, df_name=None, ax=None, xlabel='ds', ylabel='y', figsize=(10, 6), plotting_backend='default')#

Plot the NeuralProphet forecast, including history.

Parameters:
  • fcst (pd.DataFrame) – output of self.predict.

  • df_name (str) – ID from time series that should be plotted

  • ax (matplotlib axes) – optional, matplotlib axes on which to plot.

  • xlabel (string) – label name on X-axis

  • ylabel (string) – label name on Y-axis

  • figsize (tuple) – width, height in inches. default: (10, 6)

  • plotting_backend (str) –

    optional, overwrites the default plotting backend.

    Options
    • plotly: use plotly for plotting

    • matplotlib: use matplotlib for plotting

    • (default) default: use the global default for plotting
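
Examples

An illustrative sketch (assuming a fitted model m; the axis labels are arbitrary):

>>> forecast = m.predict(df)
>>> fig = m.plot(forecast, xlabel="date", ylabel="value")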

plot_components(fcst, df_name='__df__', figsize=None, forecast_in_focus=None, residuals=False, plotting_backend='default')#

Plot the NeuralProphet forecast components.

Parameters:
  • fcst (pd.DataFrame) – output of self.predict

  • df_name (str) – ID from time series that should be plotted

  • figsize (tuple) –

    width, height in inches.

    Note

    None (default): automatic (10, 3 * npanel)

  • plotting_backend (str) –

    optional, overwrites the default plotting backend.

    Options
    • plotly: use plotly for plotting

    • matplotlib: use matplotlib for plotting

    • (default) default: use the global default for plotting

Returns:

plot of NeuralProphet components

Return type:

matplotlib.axes.Axes
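
Examples

A sketch focusing the components plot on the 3-step-ahead forecast of a fitted multi-step model:

>>> forecast = m.predict(df)
>>> fig = m.plot_components(forecast, forecast_in_focus=3)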

plot_latest_forecast(fcst, df_name=None, ax=None, xlabel='ds', ylabel='y', figsize=(10, 6), include_previous_forecasts=0, plot_history_data=None, plotting_backend='default')#

Plot the latest NeuralProphet forecast(s), including history.

Parameters:
  • fcst (pd.DataFrame) – output of self.predict.

  • df_name (str) – ID from time series that should be plotted

  • ax (matplotlib axes) – Optional, matplotlib axes on which to plot.

  • xlabel (str) – label name on X-axis

  • ylabel (str) – label name on Y-axis

  • figsize (tuple) – width, height in inches. default: (10, 6)

  • include_previous_forecasts (int) – number of previous forecasts to include in plot

  • plot_history_data (bool) – specifies plot of historical data

  • plotting_backend (str) –

    optional, overwrites the default plotting backend.

    Options
    • plotly: use plotly for plotting

    • matplotlib: use matplotlib for plotting

    • (default) default: use the global default for plotting

Returns:

plot of the NeuralProphet forecast

Return type:

matplotlib.axes.Axes

plot_parameters(weekly_start=0, yearly_start=0, figsize=None, forecast_in_focus=None, df_name=None, plotting_backend='default', quantile=None)#

Plot the NeuralProphet model parameters.

Parameters:
  • weekly_start (int) –

    specifying the start day of the weekly seasonality plot.

    Note

    0 (default) starts the week on Sunday. 1 shifts by 1 day to Monday, and so on.

  • yearly_start (int) –

    specifying the start day of the yearly seasonality plot.

    Note

    0 (default) starts the year on Jan 1. 1 shifts by 1 day to Jan 2, and so on.

  • df_name (str) – name of dataframe to refer to data params from original keys of train dataframes (used for local normalization in global modeling)

  • figsize (tuple) –

    width, height in inches.

    Note

    None (default): automatic (10, 3 * npanel)

  • plotting_backend (str) –

    optional, overwrites the default plotting backend.

    Options
    • plotly: use plotly for plotting

    • matplotlib: use matplotlib for plotting

    • (default) default: use the global default for plotting

Returns:

plot of the NeuralProphet parameters

Return type:

matplotlib.axes.Axes

predict(df, decompose=True, raw=False)#

Runs the model to make predictions.

Expects all data needed to be present in dataframe. If you are predicting into the unknown future and need to add future regressors or events, please prepare data with make_future_dataframe.

Parameters:
  • df (pd.DataFrame) – dataframe containing columns ds, y, and optionally ID, with data

  • decompose (bool) – whether to add individual components of forecast to the dataframe

  • raw (bool) –

    whether to return raw forecasts

    Options
    • (default) False: returns forecasts sorted by target (highlighting forecast age)

    • True: return the raw forecasts sorted by forecast start date

Returns:

dependent on raw

Note

raw == True: columns ds, y, and [step<i>] where step<i> refers to the i-step-ahead prediction made at this row’s datetime, e.g. step3 is the prediction for 3 steps into the future, predicted using information up to (excluding) this datetime.

raw == False: columns ds, y, trend and [yhat<i>] where yhat<i> refers to the i-step-ahead prediction for this row’s datetime, e.g. yhat3 is the prediction for this datetime, predicted 3 steps ago, “3 steps old”.

Return type:

pd.DataFrame
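
Examples

A sketch; with n_forecasts=3 the default (raw=False) output contains columns yhat1 through yhat3:

>>> forecast = m.predict(df)
>>> forecast[["ds", "y", "yhat1"]].tail()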

predict_seasonal_components(df, quantile=0.5)#

Predict seasonality components.

Parameters:
  • df (pd.DataFrame) – dataframe containing columns ds, y, and optionally ID, with all data

  • quantile (float) – the quantile in (0, 1) that needs to be predicted

Returns:

seasonal components with columns of name <seasonality component name>

Return type:

pd.DataFrame, dict

predict_trend(df, quantile=0.5)#

Predict only trend component of the model.

Parameters:
  • df (pd.DataFrame) – dataframe containing columns ds, y, and optionally ID, with all data

  • quantile (float) – the quantile in (0, 1) that needs to be predicted

Returns:

trend on prediction dates.

Return type:

pd.DataFrame, dict
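
Examples

An illustrative sketch for both component predictors above, using the median quantile:

>>> trend = m.predict_trend(df, quantile=0.5)
>>> seasonal = m.predict_seasonal_components(df, quantile=0.5)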

set_plotting_backend(plotting_backend)#

Set plotting backend.

Parameters:
  • plotting_backend (str) –

    Specifies the plotting backend to use for all plots. Can be configured individually for each plot.

    Options
    • plotly: Use the plotly backend for plotting

    • (default) matplotlib: use matplotlib for plotting
set_true_ar_for_eval(true_ar_weights)#

Configures model to evaluate closeness of AR weights to true weights.

Parameters:

true_ar_weights (np.array) – true AR-parameters, if known.

split_df(df, freq='auto', valid_p=0.2, local_split=False)#

Splits timeseries df into train and validation sets. Prevents leakage of targets. Sharing/Overbleed of inputs can be configured. Also performs basic data checks and fills in missing data, unless impute_missing is set to False.

Parameters:
  • df (pd.DataFrame) – dataframe containing columns ds, y, and optionally ID, with all data

  • freq (str) –

    Data step sizes, i.e. frequency of data recording.

    Note

    Any valid frequency for pd.date_range, such as 5min, D, MS or auto (default) to automatically set frequency.

  • valid_p (float) – fraction of data to use for holdout validation set, targets will still never be shared.

  • local_split (bool) – Each dataframe will be split according to valid_p locally (in case of a dict of dataframes or a df with multiple IDs)

Returns:

training data

validation data

Return type:

tuple of two pd.DataFrames

See also

crossvalidation_split_df

Splits timeseries data in k folds for crossvalidation.

double_crossvalidation_split_df

Splits timeseries data in two sets of k folds for crossvalidation on training and testing data.

Examples

>>> df1 = pd.DataFrame({'ds': pd.date_range(start = '2022-12-01', periods = 5,
...                     freq='D'), 'y': [9.59, 8.52, 8.18, 8.07, 7.89]})
>>> df2 = pd.DataFrame({'ds': pd.date_range(start = '2022-12-09', periods = 5,
...                     freq='D'), 'y': [8.71, 8.09, 7.84, 7.65, 8.02]})
>>> df3 = pd.DataFrame({'ds': pd.date_range(start = '2022-12-09', periods = 5,
...                     freq='D'), 'y': [7.67, 7.64, 7.55, 8.25, 8.3]})
>>> df3
    ds              y
0   2022-12-09      7.67
1   2022-12-10      7.64
2   2022-12-11      7.55
3   2022-12-12      8.25
4   2022-12-13      8.30

You can split a single dataframe, which also may contain NaN values. Please be aware this may affect training/validation performance.

>>> (df_train, df_val) = m.split_df(df3, valid_p = 0.2)
>>> df_train
    ds              y
0   2022-12-09      7.67
1   2022-12-10      7.64
2   2022-12-11      7.55
3   2022-12-12      8.25
>>> df_val
    ds              y
0   2022-12-13      8.3

One can define a single df with many time series identified by an ‘ID’ column.

>>> df1['ID'] = 'data1'
>>> df2['ID'] = 'data2'
>>> df3['ID'] = 'data3'
>>> df = pd.concat((df1, df2, df3))

You can use a df with many IDs (especially useful for global modeling), which by default will account for the time range of the whole group of time series.

>>> (df_train, df_val) = m.split_df(df, valid_p = 0.2)
>>> df_train
    ds      y       ID
0   2022-12-01      9.59    data1
1   2022-12-02      8.52    data1
2   2022-12-03      8.18    data1
3   2022-12-04      8.07    data1
4   2022-12-05      7.89    data1
5   2022-12-09      8.71    data2
6   2022-12-10      8.09    data2
7   2022-12-11      7.84    data2
8   2022-12-09      7.67    data3
9   2022-12-10      7.64    data3
10  2022-12-11      7.55    data3
>>> df_val
    ds      y       ID
0   2022-12-12      7.65    data2
1   2022-12-13      8.02    data2
2   2022-12-12      8.25    data3
3   2022-12-13      8.30    data3

In some applications, splitting each time series locally may be helpful. In this case, set local_split to True.

>>> (df_train, df_val) = m.split_df(df, valid_p = 0.2, local_split = True)
>>> df_train
    ds      y       ID
0   2022-12-01      9.59    data1
1   2022-12-02      8.52    data1
2   2022-12-03      8.18    data1
3   2022-12-04      8.07    data1
4   2022-12-09      8.71    data2
5   2022-12-10      8.09    data2
6   2022-12-11      7.84    data2
7   2022-12-12      7.65    data2
8   2022-12-09      7.67    data3
9   2022-12-10      7.64    data3
10  2022-12-11      7.55    data3
11  2022-12-12      8.25    data3
>>> df_val
    ds      y       ID
0   2022-12-05      7.89    data1
1   2022-12-13      8.02    data2
2   2022-12-13      8.30    data3

test(df)#

Evaluate model on holdout data.

Parameters:

df (pd.DataFrame) – dataframe containing columns ds, y, and optionally ID, with holdout data

Returns:

evaluation metrics

Return type:

pd.DataFrame
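
Examples

A sketch of a train/holdout evaluation; split_df() is documented above:

>>> df_train, df_test = m.split_df(df, valid_p=0.2)
>>> metrics_train = m.fit(df_train, freq="D")
>>> metrics_test = m.test(df_test)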