Core Module Documentation
- class neuralprophet.logger.MetricsLogger(**kwargs: Any)
- after_save_checkpoint(checkpoint_callback) → None
Called after model checkpoint callback saves a new checkpoint.
- Parameters
checkpoint_callback – the model checkpoint callback instance
- log_metrics(metrics: Mapping[str, float], step: Optional[int] = None) → None
Records metrics. This method logs metrics as soon as they are received.
- Parameters
metrics – Dictionary with metric names as keys and measured quantities as values
step – Step number at which the metrics should be recorded
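The following minimal sketch illustrates calling log_metrics directly. It assumes MetricsLogger follows the usual PyTorch Lightning logger interface; the save_dir keyword and the metric names are illustrative assumptions, not part of the documented signature.

```python
from neuralprophet.logger import MetricsLogger

# Hypothetical sketch: MetricsLogger(**kwargs) forwards keyword arguments to
# its parent Lightning logger; ``save_dir`` is an assumed example kwarg.
logger = MetricsLogger(save_dir="./logs")

# Record a dictionary of metric names and measured values at training step 10.
# The metric names ("MAE", "RMSE") are illustrative, not prescribed.
logger.log_metrics({"MAE": 0.42, "RMSE": 0.61}, step=10)
```

In normal use the Trainer invokes log_metrics automatically at each logging step; a direct call like the one above is mainly useful for testing a logger configuration.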
- class neuralprophet.logger.ProgressBar(*args, **kwargs)
Custom progress bar for PyTorch Lightning that updates only once per epoch, not on every batch.
- on_train_batch_end(trainer: pytorch_lightning.trainer.trainer.Trainer, pl_module: pytorch_lightning.core.module.LightningModule, *_) → None
Called when the train batch ends.
Note
The value outputs["loss"] here will be the normalized value w.r.t. accumulate_grad_batches of the loss returned from training_step.
- on_train_epoch_start(trainer: pytorch_lightning.trainer.trainer.Trainer, *_) → None
Called when the train epoch begins.
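As a usage illustration for the class above, the sketch below registers the progress bar as a Trainer callback so that output advances once per epoch. The constructor is only documented as (*args, **kwargs), so the epochs keyword here is an assumption about what the class expects; verify against the installed version.

```python
import pytorch_lightning as pl
from neuralprophet.logger import ProgressBar

# Hypothetical sketch: ``epochs`` is an assumed keyword argument; the class
# documents only a generic (*args, **kwargs) constructor.
progress_bar = ProgressBar(epochs=10)

# Register the callback; the Trainer then invokes on_train_epoch_start and
# on_train_batch_end, refreshing the bar once per epoch rather than per batch.
trainer = pl.Trainer(callbacks=[progress_bar], max_epochs=10)
```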