Score model output predictions
score_model_out.Rd
Scores model outputs with a single output_type against observed data.
Usage
score_model_out(
  model_out_tbl,
  oracle_output,
  metrics = NULL,
  relative_metrics = NULL,
  baseline = NULL,
  summarize = TRUE,
  by = "model_id",
  output_type_id_order = NULL
)
Arguments
- model_out_tbl
Model output tibble with predictions.
- oracle_output
Predictions that would have been generated by an oracle model that knew the observed target data values in advance.
- metrics
Character vector of scoring metrics to compute. If NULL (the default), appropriate metrics are chosen automatically. See Details for more.
- relative_metrics
Character vector of scoring metrics for which to compute relative skill scores. The relative_metrics should be a subset of metrics and should only include proper scores (e.g., it should not contain interval coverage metrics). If NULL (the default), no relative metrics are computed. Relative metrics are only computed if summarize = TRUE, and require that "model_id" is included in by.
- baseline
String naming a model to use as a baseline for relative skill scores. If a baseline is given, a scaled relative skill with respect to that baseline is returned. By default (NULL), relative skill is not scaled with respect to a baseline model.
- summarize
Boolean indicator of whether summaries of forecast scores should be computed. Defaults to TRUE.
- by
Character vector naming columns to summarize by. For example, by = "model_id" (the default) computes average scores for each model.
- output_type_id_order
For ordinal variables in pmf format, a vector of the levels for pmf forecasts, in increasing order. Ignored for all other output types.
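As a minimal sketch of how summarize and by interact (not run; forecasts and oracle_data are hypothetical placeholder objects, not real data):
# One row of scores per individual forecast:
raw_scores <- score_model_out(
  model_out_tbl = forecasts,
  oracle_output = oracle_data,
  summarize = FALSE
)
# Mean scores per combination of model and location:
summary_scores <- score_model_out(
  model_out_tbl = forecasts,
  oracle_output = oracle_data,
  by = c("model_id", "location")
)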
Details
See the hubverse documentation for the expected format of the oracle output data.
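As an illustrative sketch only (the hubverse documentation is authoritative), oracle output pairs the task ID columns with output_type, output_type_id, and an oracle_value column holding the observed value; the column names and values below are assumptions for illustration:
# Sketch of a single oracle output row for a quantile forecast task
# (output_type_id is NA because the oracle value applies to all quantiles):
oracle_output <- tibble::tibble(
  location = "25",
  target_end_date = as.Date("2022-12-17"),
  target = "wk inc flu hosp",
  output_type = "quantile",
  output_type_id = NA,
  oracle_value = 1050
)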
Default metrics are provided by the scoringutils package. You can select metrics by passing a character vector of metric names to the metrics argument.
The following metrics can be selected for the different output_types (all are used by default):
Quantile forecasts (output_type == "quantile"):
wis
overprediction
underprediction
dispersion
bias
ae_median
"interval_coverage_XX": interval coverage at the "XX" level. For example, "interval_coverage_95" is the 95% interval coverage rate, which would be calculated based on quantiles at the probability levels 0.025 and 0.975.
See scoringutils::get_metrics.forecast_quantile for details.
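Following the "interval_coverage_XX" naming convention above, coverage at other levels can be requested by name. A sketch (not run), reusing the hubExamples data from the Examples section and assuming the model output includes quantiles at the required probability levels:
# Request 50% and 95% interval coverage alongside WIS; the 95% coverage
# rate needs quantiles at probability levels 0.025 and 0.975.
coverage_scores <- score_model_out(
  model_out_tbl = hubExamples::forecast_outputs |>
    dplyr::filter(.data[["output_type"]] == "quantile"),
  oracle_output = hubExamples::forecast_oracle_output,
  metrics = c("wis", "interval_coverage_50", "interval_coverage_95")
)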
Nominal forecasts (output_type == "pmf" and output_type_id_order is NULL):
log_score
(scoring for ordinal forecasts will be added in the future).
See scoringutils::get_metrics.forecast_nominal for details.
Median forecasts (output_type == "median"):
ae_point: absolute error of the point forecast (recommended for the median, see Gneiting (2011))
See scoringutils::get_metrics.forecast_point for details.
Mean forecasts (output_type == "mean"):
se_point: squared error of the point forecast (recommended for the mean, see Gneiting (2011))
See scoringutils::add_relative_skill for details on relative skill scores.
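For example, relative WIS can be scaled against a named baseline model, so that the baseline itself receives a scaled relative skill of 1. A sketch (not run); "Flusight-baseline" is one of the model_ids in the example data used below:
# Relative WIS scaled with respect to the Flusight-baseline model:
relative_scores <- score_model_out(
  model_out_tbl = hubExamples::forecast_outputs |>
    dplyr::filter(.data[["output_type"]] == "quantile"),
  oracle_output = hubExamples::forecast_oracle_output,
  metrics = "wis",
  relative_metrics = "wis",
  baseline = "Flusight-baseline"
)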
References
Gneiting, Tilmann. 2011. "Making and Evaluating Point Forecasts." Journal of the American Statistical Association 106 (494): 746–62. doi:10.1198/jasa.2011.r10138.
Examples
# compute WIS and interval coverage rates at 80% and 90% levels based on
# quantile forecasts, summarized by the mean score for each model
quantile_scores <- score_model_out(
  model_out_tbl = hubExamples::forecast_outputs |>
    dplyr::filter(.data[["output_type"]] == "quantile"),
  oracle_output = hubExamples::forecast_oracle_output,
  metrics = c("wis", "interval_coverage_80", "interval_coverage_90"),
  relative_metrics = "wis",
  by = "model_id"
)
quantile_scores
#> Key: <model_id>
#> model_id wis interval_coverage_80 interval_coverage_90
#> <char> <num> <num> <num>
#> 1: Flusight-baseline 329.4545 0.0 0.1250
#> 2: MOBS-GLEAM_FLUH 315.2393 0.5 0.5625
#> 3: PSI-DICE 227.9527 0.5 0.5000
#> wis_relative_skill
#> <num>
#> 1: 1.1473659
#> 2: 1.0978597
#> 3: 0.7938733
# compute log scores based on pmf predictions for categorical targets,
# summarized by the mean score for each combination of model, location,
# and horizon.
# Note: if the model_out_tbl had forecasts for multiple targets using a
# pmf output_type with different bins, it would be necessary to score the
# predictions for those targets separately (see the sketch after this
# example's output).
pmf_scores <- score_model_out(
  model_out_tbl = hubExamples::forecast_outputs |>
    dplyr::filter(.data[["output_type"]] == "pmf"),
  oracle_output = hubExamples::forecast_oracle_output,
  metrics = "log_score",
  by = c("model_id", "location", "horizon")
)
head(pmf_scores)
#> model_id location horizon log_score
#> <char> <char> <int> <num>
#> 1: Flusight-baseline 25 0 0.02107606
#> 2: Flusight-baseline 25 1 6.69652380
#> 3: Flusight-baseline 25 2 17.73313203
#> 4: Flusight-baseline 25 3 Inf
#> 5: Flusight-baseline 48 0 2.18418007
#> 6: Flusight-baseline 48 1 7.49960792
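As noted in the comment above, pmf targets with different bins would need to be scored separately, by filtering to one target at a time. A sketch (not run); the target name "wk flu hosp rate change" is an assumption about the example data:
# Score a single pmf target on its own:
pmf_target_scores <- score_model_out(
  model_out_tbl = hubExamples::forecast_outputs |>
    dplyr::filter(
      .data[["output_type"]] == "pmf",
      .data[["target"]] == "wk flu hosp rate change"
    ),
  oracle_output = hubExamples::forecast_oracle_output,
  metrics = "log_score",
  by = "model_id"
)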