Building load forecasting: Hospital in SF¶
We can train a forecaster on another common energy problem. In this case, we train a 1-step-ahead forecaster to predict the hourly electricity consumption of a building.
The dataset contains one year of hourly observations. We train on the first 11 months of the data and reserve the last month for evaluation.
[1]:
if 'google.colab' in str(get_ipython()):
    !pip install git+https://github.com/ourownstory/neural_prophet.git  # may take a while
    # !pip install neuralprophet  # much faster, but may not have the latest upgrades/bugfixes

import pandas as pd
from neuralprophet import NeuralProphet, set_log_level

# set_log_level("ERROR")
[2]:
data_location = "https://raw.githubusercontent.com/ourownstory/neuralprophet-data/main/datasets/"
# data_location = '../../../neuralprophet-data/datasets/'  # local copy, if available
sf_load_df = pd.read_csv(data_location + 'energy/SF_hospital_load.csv')
[3]:
sf_load_df.head(3)
[3]:
|   | ds                  | y          |
|---|---------------------|------------|
| 0 | 2015-01-01 01:00:00 | 778.007969 |
| 1 | 2015-01-01 02:00:00 | 776.241750 |
| 2 | 2015-01-01 03:00:00 | 779.357338 |
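As a quick sanity check (not part of the original notebook), we can confirm that the series is hourly and spans roughly one year:

# Optional sanity check: confirm hourly spacing and a one-year span
sf_load_df['ds'] = pd.to_datetime(sf_load_df['ds'])
print(sf_load_df['ds'].diff().value_counts().head(1))  # the dominant step should be 1 hour
print(len(sf_load_df), 'rows, from', sf_load_df['ds'].min(), 'to', sf_load_df['ds'].max())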
Generic forecast: Time-based features only¶
In this first section, we train a model with time-based features only, much like we would with Facebook Prophet.
[4]:
m = NeuralProphet(
    weekly_seasonality=6,
    daily_seasonality=10,
    trend_reg=1,
    learning_rate=0.01,
)
df_train, df_test = m.split_df(sf_load_df, freq='H', valid_p=1.0 / 12)
metrics = m.fit(df_train, freq='H', validation_df=df_test, progress='plot')

log-SmoothL1Loss
training (min: -5.110, max: -0.354, cur: -5.110)
validation (min: -4.747, max: -0.412, cur: -4.552)
[5]:
metrics.tail(1)
[5]:
|     | SmoothL1Loss | MAE      | RMSE      | RegLoss  | SmoothL1Loss_val | MAE_val   | RMSE_val  |
|-----|--------------|----------|-----------|----------|------------------|-----------|-----------|
| 108 | 0.006034     | 45.99676 | 63.704651 | 0.002061 | 0.010543         | 60.436569 | 86.829773 |
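As an optional aside (not in the original notebook), the validation error can be put into perspective by relating it to the average hourly load:

# Optional aside: express the final validation MAE relative to the mean hourly load
mean_load = sf_load_df['y'].mean()
mae_val = metrics['MAE_val'].iloc[-1]
print(f'Validation MAE is about {mae_val / mean_load:.1%} of the mean load')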
[6]:
forecast = m.predict(df_train)
fig = m.plot(forecast)
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.988% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.988% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H

[7]:
forecast = m.predict(df_test)
m = m.highlight_nth_step_ahead_of_each_forecast(1)
fig = m.plot(forecast[-7*24:])
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.863% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.863% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H

[8]:
fig_param = m.plot_parameters()

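Because this model uses no lagged inputs, it can also extrapolate beyond the observed data. A minimal sketch (not part of the original notebook), assuming m still refers to the model fitted in cell [4]:

# Sketch: extrapolate one week past the end of the data with the time-features-only model
future = m.make_future_dataframe(df_test, periods=7 * 24)
forecast_future = m.predict(future)
fig_future = m.plot(forecast_future)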
1-step ahead forecast with Auto-Regression¶
Next, we turn off trend and seasonality and instead let the model auto-regress on the previous 3 days (72 hours) of load, regularizing the AR coefficients via ar_reg.
[9]:
m = NeuralProphet(
    growth='off',
    yearly_seasonality=False,
    weekly_seasonality=False,
    daily_seasonality=False,
    n_lags=3 * 24,
    ar_reg=1,
    learning_rate=0.01,
)
df_train, df_test = m.split_df(sf_load_df, freq='H', valid_p=1.0 / 12)
metrics = m.fit(df_train, freq='H', validation_df=df_test, progress='plot')

log-SmoothL1Loss
training (min: -6.431, max: -0.740, cur: -6.293)
validation (min: -6.479, max: -1.179, cur: -6.295)
[10]:
metrics.tail(1)
[10]:
|     | SmoothL1Loss | MAE       | RMSE      | RegLoss | SmoothL1Loss_val | MAE_val   | RMSE_val  |
|-----|--------------|-----------|-----------|---------|------------------|-----------|-----------|
| 108 | 0.001849     | 23.766048 | 35.326185 | 0.00114 | 0.001845         | 23.998619 | 36.330311 |
[11]:
forecast = m.predict(df_train)
fig = m.plot(forecast)
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.988% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.988% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H

[12]:
forecast = m.predict(df_test)
m = m.highlight_nth_step_ahead_of_each_forecast(1)
fig = m.plot(forecast[-7*24:])
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.874% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.875% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H

[13]:
fig_param = m.plot_parameters()

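For context (not part of the original notebook), these errors can be compared against a naive persistence baseline that simply repeats the previous hour's load:

# Sketch: naive persistence baseline (predict each hour with the previous hour's value)
naive = df_test.copy()
naive['yhat_naive'] = naive['y'].shift(1)
mae_naive = (naive['y'] - naive['yhat_naive']).abs().mean()
print(f'Persistence baseline MAE on the test month: {mae_naive:.2f}')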
1-step ahead forecast with AR-Net: Using a Neural Network¶
Here, we use the power of a neural network (AR-Net) with hidden layers to fit non-linear autoregressive patterns.
[14]:
m = NeuralProphet(
    growth='off',
    yearly_seasonality=False,
    weekly_seasonality=False,
    daily_seasonality=False,
    n_lags=3 * 24,
    num_hidden_layers=4,
    d_hidden=32,
    learning_rate=0.003,
)
df_train, df_test = m.split_df(sf_load_df, freq='H', valid_p=1.0 / 12)
metrics = m.fit(df_train, freq='H', validation_df=df_test, progress='plot')

log-SmoothL1Loss
training (min: -8.839, max: -2.890, cur: -8.839)
validation (min: -8.656, max: -3.286, cur: -8.642)
[15]:
metrics.tail(1)
[15]:
|     | SmoothL1Loss | MAE      | RMSE    | RegLoss | SmoothL1Loss_val | MAE_val  | RMSE_val  |
|-----|--------------|----------|---------|---------|------------------|----------|-----------|
| 108 | 0.000145     | 6.901227 | 9.84333 | 0.0     | 0.000176         | 7.593913 | 11.234858 |
[16]:
forecast = m.predict(df_train)
fig = m.plot(forecast)
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.988% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.988% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H

[17]:
forecast = m.predict(df_test)
m = m.highlight_nth_step_ahead_of_each_forecast(1)
fig = m.plot(forecast[-7*24:])
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.874% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H
INFO - (NP.df_utils._infer_frequency) - Major frequency H corresponds to 99.875% of the data.
INFO - (NP.df_utils._infer_frequency) - Defined frequency is equal to major frequency - H

[18]:
fig_comp = m.plot_components(forecast[-7*24:])

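To produce a true out-of-sample forecast for the hour after the data ends, here is a minimal sketch (not in the original notebook), assuming m is still the AR-Net model fitted in cell [14]:

# Sketch: forecast the next hour beyond the test data with the AR-Net model
future = m.make_future_dataframe(df_test, n_historic_predictions=3 * 24)
forecast_next = m.predict(future)
fig_next = m.plot(forecast_next)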