Core Module Documentation#

class neuralprophet.configure.AR(n_lags: 'int', ar_reg: 'Optional[float]' = None, ar_layers: 'Optional[List[int]]' = None)#
regularize(weights, original=False)#

Regularization of AR coefficients

Parameters
  • weights (torch.Tensor) – Model weights to be regularized towards zero

  • original (bool) – If True, do not penalize non-zero weights

Returns

Regularization loss

Return type

numeric
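
As a usage sketch (the values are illustrative, not library defaults), an AR block can be constructed from the documented signature and its coefficients penalized via regularize; the weight tensor below merely stands in for the model's actual AR coefficients:

    import torch
    from neuralprophet.configure import AR

    # Illustrative AR configuration: 3 autoregressive lags, a mild sparsity
    # penalty, and a single hidden layer of width 8.
    ar_config = AR(n_lags=3, ar_reg=0.1, ar_layers=[8])

    # Toy stand-in for the model's AR coefficients.
    weights = torch.tensor([0.6, -0.2, 0.0])

    # Penalty pushing the coefficients towards zero, as documented above.
    penalty = ar_config.regularize(weights)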

neuralprophet.configure.ConfigCountryHolidays#

alias of neuralprophet.configure.Holidays

class neuralprophet.configure.ConfigFutureRegressors(model: 'str', d_hidden: 'int', num_hidden_layers: 'int')#
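
A minimal construction sketch; the "linear" model name and the zero-sized hidden layers are assumptions, not documented defaults:

    from neuralprophet.configure import ConfigFutureRegressors

    # Assumed values: a linear future-regressor model needs no hidden layers.
    future_reg_config = ConfigFutureRegressors(
        model="linear",        # assumption: the model is selected by name
        d_hidden=0,
        num_hidden_layers=0,
    )
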
class neuralprophet.configure.ConfigSeasonality(mode: 'np_types.SeasonalityMode' = 'additive', computation: 'str' = 'fourier', reg_lambda: 'float' = 0, yearly_arg: 'np_types.SeasonalityArgument' = 'auto', weekly_arg: 'np_types.SeasonalityArgument' = 'auto', daily_arg: 'np_types.SeasonalityArgument' = 'auto', global_local: 'np_types.SeasonGlobalLocalMode' = 'global', seasonality_local_reg: 'Optional[Union[bool, float]]' = None, yearly_global_local: 'np_types.SeasonalityArgument' = 'auto', weekly_global_local: 'np_types.SeasonalityArgument' = 'auto', daily_global_local: 'np_types.SeasonalityArgument' = 'auto', condition_name: 'Optional[str]' = None)#
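
For example, a multiplicative seasonality configuration that forces the yearly component on and the daily component off could look like the sketch below; the argument values are assumptions consistent with the documented 'auto' defaults:

    from neuralprophet.configure import ConfigSeasonality

    season_config = ConfigSeasonality(
        mode="multiplicative",
        yearly_arg=True,      # force yearly seasonality on
        weekly_arg="auto",    # leave weekly to the automatic heuristic
        daily_arg=False,      # force daily seasonality off
        global_local="global",
    )
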
class neuralprophet.configure.Event(lower_window: 'int', upper_window: 'int', reg_lambda: 'Optional[float]', mode: 'str')#
class neuralprophet.configure.Holidays(country: 'Union[str, List[str]]', lower_window: 'int', upper_window: 'int', mode: 'str' = 'additive', reg_lambda: 'Optional[float]' = None)#
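
Both classes describe an effect window around an event date, and ConfigCountryHolidays above is simply an alias of Holidays. A minimal sketch with illustrative values:

    from neuralprophet.configure import Event, Holidays

    # User-defined event active from one day before to two days after the date.
    promo = Event(lower_window=-1, upper_window=2, reg_lambda=None, mode="additive")

    # Country holidays (here US and Germany), affecting only the holiday itself.
    holidays = Holidays(country=["US", "DE"], lower_window=0, upper_window=0)
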
class neuralprophet.configure.LaggedRegressor(reg_lambda: 'Optional[float]', as_scalar: 'bool', normalize: 'Union[bool, str]', n_lags: 'int', lagged_reg_layers: 'Optional[List[int]]')#
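
A sketch of a single lagged covariate that feeds its last five observations into the model; the field values are illustrative only:

    from neuralprophet.configure import LaggedRegressor

    temperature = LaggedRegressor(
        reg_lambda=None,         # no sparsity penalty
        as_scalar=False,
        normalize="auto",
        n_lags=5,                # use the last 5 observed values
        lagged_reg_layers=None,  # no dedicated hidden layers
    )
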
class neuralprophet.configure.MissingDataHandling(impute_missing: 'bool' = True, impute_linear: 'int' = 10, impute_rolling: 'int' = 10, drop_missing: 'bool' = False)#
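
A sketch of a stricter missing-data policy than the defaults; the exact semantics of the imputation limits are assumed from the field names:

    from neuralprophet.configure import MissingDataHandling

    missing = MissingDataHandling(
        impute_missing=True,
        impute_linear=10,    # assumed: cap on linearly interpolated gaps
        impute_rolling=20,   # assumed: cap on rolling-average imputed gaps
        drop_missing=True,   # assumed: drop samples that still contain gaps
    )
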
class neuralprophet.configure.Model(lagged_reg_layers: 'Optional[List[int]]')#
class neuralprophet.configure.Normalization(normalize: 'str', global_normalization: 'bool', global_time_normalization: 'bool', unknown_data_normalization: 'bool', local_data_params: 'dict' = <factory>, global_data_params: 'dict' = <factory>)#
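
A construction sketch; "soft" is assumed to be one of the accepted normalization names, and the data-parameter dicts fall back to their factory defaults:

    from neuralprophet.configure import Normalization

    norm_config = Normalization(
        normalize="soft",                 # assumption: per-series soft scaling
        global_normalization=False,
        global_time_normalization=True,
        unknown_data_normalization=False,
    )
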
class neuralprophet.configure.Regressor(reg_lambda: 'Optional[float]', normalize: 'Union[str, bool]', mode: 'str')#
class neuralprophet.configure.Season(resolution: 'int', period: 'float', arg: 'np_types.SeasonalityArgument', condition_name: 'Optional[str]', global_local: 'np_types.SeasonGlobalLocalMode' = 'local')#
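
A combined sketch of a future regressor and a single seasonal component; resolution is assumed to be the number of Fourier terms, in line with the 'fourier' computation mode documented for ConfigSeasonality:

    from neuralprophet.configure import Regressor, Season

    # Future regressor added in additive mode.
    price = Regressor(reg_lambda=None, normalize="auto", mode="additive")

    # Weekly seasonal component with 6 Fourier terms, shared across all series.
    weekly = Season(resolution=6, period=7.0, arg="auto",
                    condition_name=None, global_local="global")
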
class neuralprophet.configure.Train(learning_rate: 'Optional[float]', epochs: 'Optional[int]', batch_size: 'Optional[int]', loss_func: 'Union[str, torch.nn.modules.loss._Loss, Callable]', optimizer: 'Union[str, Type[torch.optim.Optimizer]]', quantiles: 'List[float]' = <factory>, optimizer_args: 'dict' = <factory>, scheduler: 'Optional[Type[torch.optim.lr_scheduler.OneCycleLR]]' = None, scheduler_args: 'dict' = <factory>, newer_samples_weight: 'float' = 1.0, newer_samples_start: 'float' = 0.0, reg_delay_pct: 'float' = 0.5, reg_lambda_trend: 'Optional[float]' = None, trend_reg_threshold: 'Optional[Union[bool, float]]' = None, lr_finder_args: 'dict' = <factory>)#
set_lr_finder_args(dataset_size, num_batches)#

Set the lr_finder_args. This is the range of learning rates to test.

set_optimizer()#

Set the optimizer and optimizer args. If the optimizer is given as a string, it is converted to the corresponding torch optimizer. The optimizer is not instantiated here; that happens in configure_optimizers in TimeNet.

set_scheduler()#

Set the scheduler and scheduler args. The scheduler is not instantiated here; that happens in configure_optimizers in TimeNet.
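
Putting the pieces of the training configuration together might look like the sketch below; the numeric values and the "AdamW" string are illustrative assumptions, and, as noted above, set_optimizer and set_scheduler only resolve the optimizer and scheduler, while instantiation happens later in configure_optimizers in TimeNet:

    import torch
    from neuralprophet.configure import Train

    train_config = Train(
        learning_rate=1e-3,
        epochs=40,
        batch_size=64,
        loss_func=torch.nn.SmoothL1Loss(),  # any torch loss module per the signature
        optimizer="AdamW",                  # assumed to be an accepted string name
    )

    train_config.set_optimizer()   # resolve the string to a torch optimizer
    train_config.set_scheduler()   # resolve the scheduler (typed above as OneCycleLR) and its args
    train_config.set_lr_finder_args(dataset_size=1_000, num_batches=16)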

class neuralprophet.configure.Trend(growth: 'np_types.GrowthMode', changepoints: 'Optional[list]', n_changepoints: 'int', changepoints_range: 'float', trend_reg: 'float', trend_reg_threshold: 'Optional[Union[bool, float]]', trend_global_local: 'str', trend_local_reg: 'Optional[Union[bool, float]]' = None)#
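
Finally, a sketch of a piecewise-linear trend with automatically placed changepoints; the values are illustrative only:

    from neuralprophet.configure import Trend

    trend_config = Trend(
        growth="linear",
        changepoints=None,           # let the model place changepoints itself
        n_changepoints=10,
        changepoints_range=0.8,      # restrict changepoints to the first 80% of history
        trend_reg=0.0,
        trend_reg_threshold=False,
        trend_global_local="global",
    )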