This function corrects biases in the model forecasts by training a machine-learning model (by default, a random forest) on previous time points.
Usage
sda.bias.correction(
settings,
t,
t.start,
dates,
all.X,
obs.mean,
state.interval,
cov.dir,
residual.lag = FALSE,
py.init = NULL
)
Arguments
- settings
PEcAn settings object.
- t
numeric: the index of the current time point (e.g., t = 1 for the first time point).
- t.start
numeric: the user-defined time point at which bias correction begins, used to skip the initial burn-in period.
- dates
vector: dates used to extract covariates through time.
- all.X
list: a list of data frames of model forecasts from the beginning to the current time point. Each data frame has n (ensemble size) rows and n.var (number of state variables) times n.site (number of locations) columns (e.g., 100 ensembles, 4 variables, and 8,000 locations yield a data frame of 100 rows and 32,000 columns).
- obs.mean
list: a list of date-times named by time point, each containing a list of sites named by site ID, which in turn holds the observation mean for each state variable at that site and time point.
- state.interval
matrix: containing the upper and lower boundaries for each state variable.
- cov.dir
character: path to the directory containing the time-series covariate maps.
- residual.lag
logical: whether to include the lagged residual (the difference in residuals between consecutive time points) as a predictor in the ML training. Default is FALSE.
- py.init
R function: a function that initializes the Python-based ML functions. Default is NULL, in which case the default random forest is used.
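Examples
A minimal usage sketch. This assumes an ongoing SDA workflow that has already produced `settings`, `all.X`, and `obs.mean` up to the current time point; the variable names, state-variable count, and covariate path below are illustrative, not prescribed by the function.

```r
# Illustrative only: `settings`, `all.X`, and `obs.mean` are assumed to
# come from an ongoing state data assimilation run.
obs.times <- names(obs.mean)              # time points named in obs.mean

# Example bounds for two non-negative state variables (one row per variable).
state.interval <- cbind(lower = c(0, 0),
                        upper = c(Inf, Inf))

corrected.X <- sda.bias.correction(
  settings       = settings,
  t              = 5,                     # current time point index
  t.start        = 3,                     # skip the first two points as burn-in
  dates          = obs.times,
  all.X          = all.X,
  obs.mean       = obs.mean,
  state.interval = state.interval,
  cov.dir        = "/path/to/covariate/maps",  # hypothetical path
  residual.lag   = TRUE                   # include lagged residuals as predictors
)
```

With `py.init` left as NULL, the default random forest is used for the correction.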