healthyverse_tsa

Time Series Analysis, Modeling and Forecasting of the Healthyverse Packages

Steven P. Sanderson II, MPH - Date: 2025-12-24

Introduction

This analysis follows a Nested Modeltime Workflow from the modeltime package, along with the NNS package. I use it to monitor the downloads of all of my packages:

Get Data

glimpse(downloads_tbl)
Rows: 163,894
Columns: 11
$ date      <date> 2020-11-23, 2020-11-23, 2020-11-23, 2020-11-23, 2020-11-23,…
$ time      <Period> 15H 36M 55S, 11H 26M 39S, 23H 34M 44S, 18H 39M 32S, 9H 0M…
$ date_time <dttm> 2020-11-23 15:36:55, 2020-11-23 11:26:39, 2020-11-23 23:34:…
$ size      <int> 4858294, 4858294, 4858301, 4858295, 361, 4863722, 4864794, 4…
$ r_version <chr> NA, "4.0.3", "3.5.3", "3.5.2", NA, NA, NA, NA, NA, NA, NA, N…
$ r_arch    <chr> NA, "x86_64", "x86_64", "x86_64", NA, NA, NA, NA, NA, NA, NA…
$ r_os      <chr> NA, "mingw32", "mingw32", "linux-gnu", NA, NA, NA, NA, NA, N…
$ package   <chr> "healthyR.data", "healthyR.data", "healthyR.data", "healthyR…
$ version   <chr> "1.0.0", "1.0.0", "1.0.0", "1.0.0", "1.0.0", "1.0.0", "1.0.0…
$ country   <chr> "US", "US", "US", "GB", "US", "US", "DE", "HK", "JP", "US", …
$ ip_id     <int> 2069, 2804, 78827, 27595, 90474, 90474, 42435, 74, 7655, 638…

The last day in the data set is 2025-12-22 22:35:48, the file was birthed on 2025-10-31 10:47:59.603742, and at report knit time it is 1255.8 hours old. Happy analyzing!

Now that we have our data, let's take a look at it using the skimr package.

skim(downloads_tbl)
   
Name downloads_tbl
Number of rows 163894
Number of columns 11
_______________________  
Column type frequency:  
character 6
Date 1
numeric 2
POSIXct 1
Timespan 1
________________________  
Group variables None

Data summary

Variable type: character

skim_variable n_missing complete_rate min max empty n_unique whitespace
r_version 120349 0.27 5 7 0 50 0
r_arch 120349 0.27 1 7 0 6 0
r_os 120349 0.27 7 19 0 24 0
package 0 1.00 7 13 0 8 0
version 0 1.00 5 17 0 62 0
country 15318 0.91 2 2 0 166 0

Variable type: Date

skim_variable n_missing complete_rate min max median n_unique
date 0 1 2020-11-23 2025-12-22 2023-11-12 1849

Variable type: numeric

skim_variable n_missing complete_rate mean sd p0 p25 p50 p75 p100 hist
size 0 1 1124004.74 1486497.76 355 29851.5 310352.5 2348557.0 5677952 ▇▁▂▁▁
ip_id 0 1 11285.05 21919.35 1 229.0 2859.5 11895.5 299146 ▇▁▁▁▁

Variable type: POSIXct

skim_variable n_missing complete_rate min max median n_unique
date_time 0 1 2020-11-23 09:00:41 2025-12-22 22:35:48 2023-11-12 12:11:55 103536

Variable type: Timespan

skim_variable n_missing complete_rate min max median n_unique
time 0 1 0 59 51 60

We can see that the following columns are missing a lot of data and are most likely not useful for our purposes anyway, so we will drop them: c(r_version, r_arch, r_os).
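
A minimal sketch of that drop with dplyr (the downloads_clean_tbl name is just for illustration):

library(dplyr)

# Drop the mostly-missing metadata columns; they are not used downstream
downloads_clean_tbl <- downloads_tbl %>%
  select(-c(r_version, r_arch, r_os))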

Plots

Now let's take a look at a time-series plot of the total daily downloads by package. We will use a log scale and place a vertical line at each version release for each package.
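
A hedged sketch of how such a plot might be put together with ggplot2; the daily-aggregate tibble downloads_daily_tbl and the releases_tbl of version release dates are assumptions, not objects created above:

library(ggplot2)

# downloads_daily_tbl: one row per package per day with a `value` count column (assumed)
# releases_tbl: one row per package version with its `release_date` (assumed)
ggplot(downloads_daily_tbl, aes(x = date, y = value)) +
  geom_line() +
  geom_vline(
    data = releases_tbl,
    aes(xintercept = release_date),
    linetype = "dashed"
  ) +
  scale_y_log10() +
  facet_wrap(~ package, scales = "free_y") +
  theme_minimal() +
  labs(x = "Date", y = "Daily downloads (log scale)")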

(Eight plots: total daily downloads by package on a log scale, with a vertical line at each version release.)

Now let's take a look at some time series decomposition graphs.
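
A sketch of one way to generate these with timetk's plot_stl_diagnostics(), one plot per package (downloads_daily_tbl is the assumed daily-aggregate tibble from above):

library(dplyr)
library(purrr)
library(timetk)

# One STL decomposition plot per package
downloads_daily_tbl %>%
  group_split(package) %>%
  map(
    ~ plot_stl_diagnostics(
      .data        = .x,
      .date_var    = date,
      .value       = value,
      .interactive = FALSE
    )
  )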

(Two sets of eight plots: time series decomposition diagnostics for each package.)

Seasonal Diagnostics:
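
These could be produced the same way with timetk's plot_seasonal_diagnostics() (again a sketch over the assumed downloads_daily_tbl):

library(dplyr)
library(purrr)
library(timetk)

# One set of seasonal diagnostic plots per package
downloads_daily_tbl %>%
  group_split(package) %>%
  map(
    ~ plot_seasonal_diagnostics(
      .data        = .x,
      .date_var    = date,
      .value       = value,
      .interactive = FALSE
    )
  )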

(Eight plots: seasonal diagnostics, one per package.)

ACF and PACF Diagnostics:
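
And likewise with timetk's plot_acf_diagnostics() (sketch, same assumed tibble):

library(dplyr)
library(purrr)
library(timetk)

# ACF and PACF plots per package, out to 90 lags
downloads_daily_tbl %>%
  group_split(package) %>%
  map(
    ~ plot_acf_diagnostics(
      .data        = .x,
      .date_var    = date,
      .value       = value,
      .lags        = 90,
      .interactive = FALSE
    )
  )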

(Eight plots: ACF and PACF diagnostics, one per package.)

Feature Engineering

Now that we have our basic data and a snapshot of what it looks like, let's add some features that can be very helpful in modeling. Let's start by making a tibble aggregated by day and package, since we are interested in forecasting the next 4 weeks (28 days) for each package. First, let's get our base data.
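
A hedged sketch of how base_data might be built with timetk's summarise_by_time(); the actual construction in the source script may differ:

library(dplyr)
library(timetk)

# Aggregate the raw download log to one row per package per day
base_data <- downloads_tbl %>%
  group_by(package) %>%
  summarise_by_time(
    .date_var = date,
    .by       = "day",
    value     = n()
  ) %>%
  ungroup()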

Call:
stats::lm(formula = .formula, data = df)

Residuals:
    Min      1Q  Median      3Q     Max 
-147.37  -36.48  -11.20   26.99  819.76 

Coefficients:
                                                     Estimate Std. Error
(Intercept)                                        -1.863e+02  5.959e+01
date                                                1.138e-02  3.158e-03
lag(value, 1)                                       1.083e-01  2.310e-02
lag(value, 7)                                       8.966e-02  2.381e-02
lag(value, 14)                                      7.666e-02  2.377e-02
lag(value, 21)                                      8.327e-02  2.384e-02
lag(value, 28)                                      6.842e-02  2.375e-02
lag(value, 35)                                      5.116e-02  2.375e-02
lag(value, 42)                                      6.499e-02  2.386e-02
lag(value, 49)                                      6.326e-02  2.380e-02
month(date, label = TRUE).L                        -1.026e+01  4.969e+00
month(date, label = TRUE).Q                         9.837e-01  4.868e+00
month(date, label = TRUE).C                        -1.545e+01  4.900e+00
month(date, label = TRUE)^4                        -5.726e+00  4.903e+00
month(date, label = TRUE)^5                        -6.417e+00  4.872e+00
month(date, label = TRUE)^6                         1.418e+00  4.894e+00
month(date, label = TRUE)^7                        -4.387e+00  4.835e+00
month(date, label = TRUE)^8                        -4.016e+00  4.810e+00
month(date, label = TRUE)^9                         2.803e+00  4.823e+00
month(date, label = TRUE)^10                        9.203e-01  4.839e+00
month(date, label = TRUE)^11                       -4.060e+00  4.826e+00
fourier_vec(date, type = "sin", K = 1, period = 7) -1.124e+01  2.200e+00
fourier_vec(date, type = "cos", K = 1, period = 7)  7.174e+00  2.277e+00
                                                   t value Pr(>|t|)    
(Intercept)                                         -3.126 0.001798 ** 
date                                                 3.603 0.000323 ***
lag(value, 1)                                        4.688 2.96e-06 ***
lag(value, 7)                                        3.766 0.000171 ***
lag(value, 14)                                       3.226 0.001278 ** 
lag(value, 21)                                       3.493 0.000489 ***
lag(value, 28)                                       2.881 0.004016 ** 
lag(value, 35)                                       2.153 0.031415 *  
lag(value, 42)                                       2.724 0.006512 ** 
lag(value, 49)                                       2.658 0.007920 ** 
month(date, label = TRUE).L                         -2.064 0.039184 *  
month(date, label = TRUE).Q                          0.202 0.839896    
month(date, label = TRUE).C                         -3.153 0.001645 ** 
month(date, label = TRUE)^4                         -1.168 0.242979    
month(date, label = TRUE)^5                         -1.317 0.187936    
month(date, label = TRUE)^6                          0.290 0.772060    
month(date, label = TRUE)^7                         -0.907 0.364290    
month(date, label = TRUE)^8                         -0.835 0.403809    
month(date, label = TRUE)^9                          0.581 0.561169    
month(date, label = TRUE)^10                         0.190 0.849195    
month(date, label = TRUE)^11                        -0.841 0.400320    
fourier_vec(date, type = "sin", K = 1, period = 7)  -5.109 3.60e-07 ***
fourier_vec(date, type = "cos", K = 1, period = 7)   3.150 0.001658 ** 
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 59.2 on 1777 degrees of freedom
  (49 observations deleted due to missingness)
Multiple R-squared:  0.2295,    Adjusted R-squared:   0.22 
F-statistic: 24.07 on 22 and 1777 DF,  p-value: < 2.2e-16
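
For reference, a hedged reconstruction of the kind of formula behind the summary above, inferred from the coefficient names (the df data frame and value column are assumptions):

library(dplyr)     # lag()
library(lubridate) # month()
library(timetk)    # fourier_vec()

# Trend, weekly-multiple lags, monthly seasonality, and a weekly Fourier pair
.formula <- value ~ date +
  lag(value, 1)  + lag(value, 7)  + lag(value, 14) + lag(value, 21) +
  lag(value, 28) + lag(value, 35) + lag(value, 42) + lag(value, 49) +
  month(date, label = TRUE) +
  fourier_vec(date, type = "sin", K = 1, period = 7) +
  fourier_vec(date, type = "cos", K = 1, period = 7)

summary(stats::lm(.formula, data = df))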

NNS Forecasting

This is something I have been wanting to try for a while. The NNS package is a great tool for forecasting time series data.

NNS GitHub

library(NNS)

data_list <- base_data |>
    select(package, value) |>
    group_split(package)

data_list |>
    imap(
        \(x, idx) {
            obj <- x
            x <- obj |> pull(value) |> tail(7*52)
            train_set_size <- length(x) - 56
            pkg <- obj |> pluck(1) |> unique()
#            sf <- NNS.seas(x, modulo = 7, plot = FALSE)$periods
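            # Grid-search seasonal periods 1:25 instead, keeping the one with the lowest 28-step RMSE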
            seas <- t(
                sapply(
                    1:25, 
                    function(i) c(
                        i,
                        sqrt(
                            mean((
                                NNS.ARMA(x, 
                                         h = 28, 
                                         training.set = train_set_size, 
                                         method = "lin", 
                                         seasonal.factor = i, 
                                         plot=FALSE
                                         ) - tail(x, 28)) ^ 2)))
                    )
                )
            colnames(seas) <- c("Period", "RMSE")
            sf <- seas[which.min(seas[, 2]), 1]
            
            cat(paste0("Package: ", pkg, "\n"))
            NNS.ARMA.optim(
                variable = x,
                h = 28,
                training.set = train_set_size,
                #seasonal.factor = seq(12, 60, 7),
                seasonal.factor = sf,
                pred.int = 0.95,
                plot = TRUE
            )
            title(
                sub = paste0("\n",
                             "Package: ", pkg, " - NNS Optimization")
            )
        }
    )
Package: healthyR
[1] "CURRNET METHOD: lin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'lin' , seasonal.factor =  c( 7 ) ...)"
[1] "CURRENT lin OBJECTIVE FUNCTION = 5.2175924447278"
[1] "BEST method = 'lin' PATH MEMBER = c( 7 )"
[1] "BEST lin OBJECTIVE FUNCTION = 5.2175924447278"
[1] "CURRNET METHOD: nonlin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'nonlin' , seasonal.factor =  c( 7 ) ...)"
[1] "CURRENT nonlin OBJECTIVE FUNCTION = 6.43358060428004"
[1] "BEST method = 'nonlin' PATH MEMBER = c( 7 )"
[1] "BEST nonlin OBJECTIVE FUNCTION = 6.43358060428004"
[1] "CURRNET METHOD: both"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'both' , seasonal.factor =  c( 7 ) ...)"
[1] "CURRENT both OBJECTIVE FUNCTION = 6.13044794418033"
[1] "BEST method = 'both' PATH MEMBER = c( 7 )"
[1] "BEST both OBJECTIVE FUNCTION = 6.13044794418033"

Package: healthyR.ai
[1] "CURRNET METHOD: lin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'lin' , seasonal.factor =  c( 4 ) ...)"
[1] "CURRENT lin OBJECTIVE FUNCTION = 17.1728128240806"
[1] "BEST method = 'lin' PATH MEMBER = c( 4 )"
[1] "BEST lin OBJECTIVE FUNCTION = 17.1728128240806"
[1] "CURRNET METHOD: nonlin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'nonlin' , seasonal.factor =  c( 4 ) ...)"
[1] "CURRENT nonlin OBJECTIVE FUNCTION = 20.1758406149785"
[1] "BEST method = 'nonlin' PATH MEMBER = c( 4 )"
[1] "BEST nonlin OBJECTIVE FUNCTION = 20.1758406149785"
[1] "CURRNET METHOD: both"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'both' , seasonal.factor =  c( 4 ) ...)"
[1] "CURRENT both OBJECTIVE FUNCTION = 23.7028058827973"
[1] "BEST method = 'both' PATH MEMBER = c( 4 )"
[1] "BEST both OBJECTIVE FUNCTION = 23.7028058827973"

Package: healthyR.data
[1] "CURRNET METHOD: lin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'lin' , seasonal.factor =  c( 23 ) ...)"
[1] "CURRENT lin OBJECTIVE FUNCTION = 10.8670352018108"
[1] "BEST method = 'lin' PATH MEMBER = c( 23 )"
[1] "BEST lin OBJECTIVE FUNCTION = 10.8670352018108"
[1] "CURRNET METHOD: nonlin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'nonlin' , seasonal.factor =  c( 23 ) ...)"
[1] "CURRENT nonlin OBJECTIVE FUNCTION = 7.53966708732917"
[1] "BEST method = 'nonlin' PATH MEMBER = c( 23 )"
[1] "BEST nonlin OBJECTIVE FUNCTION = 7.53966708732917"
[1] "CURRNET METHOD: both"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'both' , seasonal.factor =  c( 23 ) ...)"
[1] "CURRENT both OBJECTIVE FUNCTION = 7.93788670604438"
[1] "BEST method = 'both' PATH MEMBER = c( 23 )"
[1] "BEST both OBJECTIVE FUNCTION = 7.93788670604438"

Package: healthyR.ts
[1] "CURRNET METHOD: lin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'lin' , seasonal.factor =  c( 19 ) ...)"
[1] "CURRENT lin OBJECTIVE FUNCTION = 21.2659409870562"
[1] "BEST method = 'lin' PATH MEMBER = c( 19 )"
[1] "BEST lin OBJECTIVE FUNCTION = 21.2659409870562"
[1] "CURRNET METHOD: nonlin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'nonlin' , seasonal.factor =  c( 19 ) ...)"
[1] "CURRENT nonlin OBJECTIVE FUNCTION = 21.141859716198"
[1] "BEST method = 'nonlin' PATH MEMBER = c( 19 )"
[1] "BEST nonlin OBJECTIVE FUNCTION = 21.141859716198"
[1] "CURRNET METHOD: both"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'both' , seasonal.factor =  c( 19 ) ...)"
[1] "CURRENT both OBJECTIVE FUNCTION = 21.0401615913755"
[1] "BEST method = 'both' PATH MEMBER = c( 19 )"
[1] "BEST both OBJECTIVE FUNCTION = 21.0401615913755"

Package: healthyverse
[1] "CURRNET METHOD: lin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'lin' , seasonal.factor =  c( 4 ) ...)"
[1] "CURRENT lin OBJECTIVE FUNCTION = 7.34866993992594"
[1] "BEST method = 'lin' PATH MEMBER = c( 4 )"
[1] "BEST lin OBJECTIVE FUNCTION = 7.34866993992594"
[1] "CURRNET METHOD: nonlin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'nonlin' , seasonal.factor =  c( 4 ) ...)"
[1] "CURRENT nonlin OBJECTIVE FUNCTION = 11.8963659382357"
[1] "BEST method = 'nonlin' PATH MEMBER = c( 4 )"
[1] "BEST nonlin OBJECTIVE FUNCTION = 11.8963659382357"
[1] "CURRNET METHOD: both"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'both' , seasonal.factor =  c( 4 ) ...)"
[1] "CURRENT both OBJECTIVE FUNCTION = 14.9202080980749"
[1] "BEST method = 'both' PATH MEMBER = c( 4 )"
[1] "BEST both OBJECTIVE FUNCTION = 14.9202080980749"

Package: RandomWalker
[1] "CURRNET METHOD: lin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'lin' , seasonal.factor =  c( 23 ) ...)"
[1] "CURRENT lin OBJECTIVE FUNCTION = 11.6072700751069"
[1] "BEST method = 'lin' PATH MEMBER = c( 23 )"
[1] "BEST lin OBJECTIVE FUNCTION = 11.6072700751069"
[1] "CURRNET METHOD: nonlin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'nonlin' , seasonal.factor =  c( 23 ) ...)"
[1] "CURRENT nonlin OBJECTIVE FUNCTION = 8.43659918453205"
[1] "BEST method = 'nonlin' PATH MEMBER = c( 23 )"
[1] "BEST nonlin OBJECTIVE FUNCTION = 8.43659918453205"
[1] "CURRNET METHOD: both"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'both' , seasonal.factor =  c( 23 ) ...)"
[1] "CURRENT both OBJECTIVE FUNCTION = 9.32424584198708"
[1] "BEST method = 'both' PATH MEMBER = c( 23 )"
[1] "BEST both OBJECTIVE FUNCTION = 9.32424584198708"

Package: tidyAML
[1] "CURRNET METHOD: lin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'lin' , seasonal.factor =  c( 1 ) ...)"
[1] "CURRENT lin OBJECTIVE FUNCTION = 142.277623209315"
[1] "BEST method = 'lin' PATH MEMBER = c( 1 )"
[1] "BEST lin OBJECTIVE FUNCTION = 142.277623209315"
[1] "CURRNET METHOD: nonlin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'nonlin' , seasonal.factor =  c( 1 ) ...)"
[1] "CURRENT nonlin OBJECTIVE FUNCTION = 126.94013917144"
[1] "BEST method = 'nonlin' PATH MEMBER = c( 1 )"
[1] "BEST nonlin OBJECTIVE FUNCTION = 126.94013917144"
[1] "CURRNET METHOD: both"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'both' , seasonal.factor =  c( 1 ) ...)"
[1] "CURRENT both OBJECTIVE FUNCTION = 95.9576377056693"
[1] "BEST method = 'both' PATH MEMBER = c( 1 )"
[1] "BEST both OBJECTIVE FUNCTION = 95.9576377056693"

Package: TidyDensity
[1] "CURRNET METHOD: lin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'lin' , seasonal.factor =  c( 7 ) ...)"
[1] "CURRENT lin OBJECTIVE FUNCTION = 10.9730610088761"
[1] "BEST method = 'lin' PATH MEMBER = c( 7 )"
[1] "BEST lin OBJECTIVE FUNCTION = 10.9730610088761"
[1] "CURRNET METHOD: nonlin"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'nonlin' , seasonal.factor =  c( 7 ) ...)"
[1] "CURRENT nonlin OBJECTIVE FUNCTION = 8.34108303662068"
[1] "BEST method = 'nonlin' PATH MEMBER = c( 7 )"
[1] "BEST nonlin OBJECTIVE FUNCTION = 8.34108303662068"
[1] "CURRNET METHOD: both"
[1] "COPY LATEST PARAMETERS DIRECTLY FOR NNS.ARMA() IF ERROR:"
[1] "NNS.ARMA(... method =  'both' , seasonal.factor =  c( 7 ) ...)"
[1] "CURRENT both OBJECTIVE FUNCTION = 9.4617809019988"
[1] "BEST method = 'both' PATH MEMBER = c( 7 )"
[1] "BEST both OBJECTIVE FUNCTION = 9.4617809019988"


Pre-Processing

Now we are going to do some basic pre-processing.

data_padded_tbl <- base_data %>%
  pad_by_time(
    .date_var  = date,
    .pad_value = 0
  )

# Get log interval and standardization parameters
log_params  <- liv(data_padded_tbl$value, limit_lower = 0, offset = 1, silent = TRUE)
limit_lower <- log_params$limit_lower
limit_upper <- log_params$limit_upper
offset      <- log_params$offset

data_liv_tbl <- data_padded_tbl %>%
  # Get log interval transform
  mutate(value_trans = liv(value, limit_lower = 0, offset = 1, silent = TRUE)$log_scaled)

# Get Standardization Params
std_params <- standard_vec(data_liv_tbl$value_trans, silent = TRUE)
std_mean   <- std_params$mean
std_sd     <- std_params$sd

data_transformed_tbl <- data_liv_tbl %>%
  group_by(package) %>%
  # get standardization
  mutate(value_trans = standard_vec(value_trans, silent = TRUE)$standard_scaled) %>%
  tk_augment_fourier(
    .date_var = date,
    .periods  = c(7, 14, 30, 90, 180),
    .K        = 2
  ) %>%
  tk_augment_timeseries_signature(
    .date_var = date
  ) %>%
  ungroup() %>%
  select(-c(value, year.iso))

Since this is panel data we can follow one of two modeling strategies: search for a global model across the panel data, or use nested forecasting and find the best model for each individual time series. Since we only have 8 panels, we will use nested forecasting.

To do this we will use the extend_timeseries, nest_timeseries, and split_nested_timeseries functions to create a nested tibble.

horizon <- 4*7

nested_data_tbl <- data_transformed_tbl %>%

    # 0. Filter out column where package is NA
    filter(!is.na(package)) %>%
    
    # 1. Extending: We'll predict n days into the future.
    extend_timeseries(
        .id_var        = package,
        .date_var      = date,
        .length_future = horizon
    ) %>%
    
    # 2. Nesting: We'll group by id, and create a future dataset
    #    that forecasts n days of extended data and
    #    an actual dataset that contains n*2 days
    nest_timeseries(
        .id_var        = package,
        .length_future = horizon
        #.length_actual = horizon*2
    ) %>%
    
    # 3. Splitting: We'll take the actual data and create splits
    #    for accuracy and confidence interval estimation of n days (test)
    #    and the rest is training data
    split_nested_timeseries(
        .length_test = horizon
    )

nested_data_tbl
# A tibble: 8 × 4
  package       .actual_data          .future_data       .splits          
  <fct>         <list>                <list>             <list>           
1 healthyR.data <tibble [1,840 × 50]> <tibble [28 × 50]> <split [1812|28]>
2 healthyR      <tibble [1,832 × 50]> <tibble [28 × 50]> <split [1804|28]>
3 healthyR.ts   <tibble [1,775 × 50]> <tibble [28 × 50]> <split [1747|28]>
4 healthyverse  <tibble [1,745 × 50]> <tibble [28 × 50]> <split [1717|28]>
5 healthyR.ai   <tibble [1,574 × 50]> <tibble [28 × 50]> <split [1546|28]>
6 TidyDensity   <tibble [1,425 × 50]> <tibble [28 × 50]> <split [1397|28]>
7 tidyAML       <tibble [1,032 × 50]> <tibble [28 × 50]> <split [1004|28]>
8 RandomWalker  <tibble [455 × 50]>   <tibble [28 × 50]> <split [427|28]> 

Now it is time to make some recipes and models using the modeltime workflow.

Modeltime Workflow

Recipe Object

recipe_base <- recipe(
  value_trans ~ .
  , data = extract_nested_test_split(nested_data_tbl)
  )

recipe_base

recipe_date <- recipe(
  value_trans ~ date
  , data = extract_nested_test_split(nested_data_tbl)
  )

Models

# Models ------------------------------------------------------------------

# Auto ARIMA --------------------------------------------------------------

model_spec_arima_no_boost <- arima_reg() %>%
  set_engine(engine = "auto_arima")

wflw_auto_arima <- workflow() %>%
  add_recipe(recipe = recipe_date) %>%
  add_model(model_spec_arima_no_boost)

# NNETAR ------------------------------------------------------------------

model_spec_nnetar <- nnetar_reg(
  mode              = "regression"
  , seasonal_period = "auto"
) %>%
  set_engine("nnetar")

wflw_nnetar <- workflow() %>%
  add_recipe(recipe = recipe_base) %>%
  add_model(model_spec_nnetar)

# TSLM --------------------------------------------------------------------

model_spec_lm <- linear_reg() %>%
  set_engine("lm")

wflw_lm <- workflow() %>%
  add_recipe(recipe = recipe_base) %>%
  add_model(model_spec_lm)

# MARS --------------------------------------------------------------------

model_spec_mars <- mars(mode = "regression") %>%
  set_engine("earth")

wflw_mars <- workflow() %>%
  add_recipe(recipe = recipe_date) %>%
  add_model(model_spec_mars)

Nested Modeltime Tables

nested_modeltime_tbl <- modeltime_nested_fit(
  # Nested Data
  nested_data = nested_data_tbl,
   control = control_nested_fit(
     verbose = TRUE,
     allow_par = FALSE
   ),
  # Add workflows
  wflw_auto_arima,
  wflw_lm,
  wflw_mars,
  wflw_nnetar
)
nested_modeltime_tbl <- nested_modeltime_tbl[!is.na(nested_modeltime_tbl$package),]

Model Accuracy

nested_modeltime_tbl %>%
  extract_nested_test_accuracy() %>%
  filter(!is.na(package)) %>%
  knitr::kable()
package .model_id .model_desc .type mae mape mase smape rmse rsq
healthyR.data 1 ARIMA Test 0.7283596 172.32578 0.7084059 147.2195 0.8200320 0.0004834
healthyR.data 2 LM Test 0.6659373 224.99973 0.6476937 119.7955 0.7994281 0.0597221
healthyR.data 3 EARTH Test 1.1029543 504.29529 1.0727384 118.6934 1.3431714 0.0127157
healthyR.data 4 NNAR Test 0.7460935 241.31352 0.7256539 133.9408 0.8950873 0.0001319
healthyR 1 ARIMA Test 0.5033889 266.71707 0.8604906 140.9911 0.6257854 0.0029176
healthyR 2 LM Test 0.4826050 449.38269 0.8249626 104.8226 0.5793575 0.1385617
healthyR 3 EARTH Test 0.7516971 764.05415 1.2849474 111.9812 0.8811326 0.0935887
healthyR 4 NNAR Test 0.4123123 199.75087 0.7048046 113.8066 0.5017408 0.2345093
healthyR.ts 1 ARIMA Test 0.5978137 114.60855 0.7762101 159.8154 0.7455624 0.0359105
healthyR.ts 2 LM Test 0.6703808 145.24418 0.8704324 129.1245 0.8358226 0.0020344
healthyR.ts 3 EARTH Test 0.5339835 134.58600 0.6933321 101.8256 0.7018769 0.0618091
healthyR.ts 4 NNAR Test 0.7694902 156.72354 0.9991175 151.3359 0.9422121 0.0097233
healthyverse 1 ARIMA Test 0.6743785 97.39765 0.6736234 116.1145 0.8869407 0.0007238
healthyverse 2 LM Test 0.7683652 137.40024 0.7675049 120.0752 0.9013803 0.0081141
healthyverse 3 EARTH Test 1.0643946 283.55476 1.0632029 110.6372 1.2432926 0.0190930
healthyverse 4 NNAR Test 0.7618218 140.95879 0.7609689 126.3094 0.9045790 0.0236366
healthyR.ai 1 ARIMA Test 0.7858894 114.14279 0.9024941 151.7625 0.9138999 0.3042449
healthyR.ai 2 LM Test 0.8758550 163.91053 1.0058082 132.2528 1.0784017 0.0727373
healthyR.ai 3 EARTH Test 2.3989266 774.03191 2.7548623 135.5210 2.8445943 0.3968339
healthyR.ai 4 NNAR Test 0.8588094 137.16037 0.9862335 137.9731 1.0572319 0.0222469
TidyDensity 1 ARIMA Test 1.0149567 179.17488 0.6486094 161.2578 1.1526099 0.1309207
TidyDensity 2 LM Test 0.9471489 122.40199 0.6052768 158.3925 1.1349209 0.0193278
TidyDensity 3 EARTH Test 0.9200465 94.22875 0.5879569 165.6449 1.1413176 0.0263725
TidyDensity 4 NNAR Test 1.0618212 163.31818 0.6785583 157.1827 1.1974671 0.0183726
tidyAML 1 ARIMA Test 0.5536957 124.77945 0.6741471 124.8103 0.7349982 0.0000091
tidyAML 2 LM Test 0.5792351 206.72125 0.7052425 130.8985 0.7383391 0.1035182
tidyAML 3 EARTH Test 1.0589417 476.42069 1.2893049 125.8480 1.2096838 0.0023937
tidyAML 4 NNAR Test 0.7146013 333.95111 0.8700562 115.6533 0.8383834 0.0811746
RandomWalker 1 ARIMA Test 0.6568350 120.29770 0.6858403 141.7613 0.8165203 0.0662191
RandomWalker 2 LM Test 0.6917081 146.56179 0.7222533 152.2283 0.8247919 0.0470930
RandomWalker 3 EARTH Test 0.7981137 195.75073 0.8333577 164.1493 0.8595254 0.1113190
RandomWalker 4 NNAR Test 0.6779859 179.35184 0.7079252 154.6735 0.8192475 0.0648154

Plot Models

nested_modeltime_tbl %>%
  extract_nested_test_forecast() %>%
  group_by(package) %>%
  filter_by_time(.date_var = .index, .start_date = max(.index) - 60) %>%
  ungroup() %>%
  plot_modeltime_forecast(
    .interactive = FALSE,
    .conf_interval_show  = FALSE,
    .facet_scales = "free"
  ) +
  theme_minimal() +
  facet_wrap(~ package, nrow = 3) +
  theme(legend.position = "bottom")

Best Model

best_nested_modeltime_tbl <- nested_modeltime_tbl %>%
  modeltime_nested_select_best(
    metric = "rmse",
    minimize = TRUE,
    filter_test_forecasts = TRUE
  )

best_nested_modeltime_tbl %>%
  extract_nested_best_model_report()
# Nested Modeltime Table
  

# A tibble: 8 × 10
  package      .model_id .model_desc .type   mae  mape  mase smape  rmse     rsq
  <fct>            <int> <chr>       <chr> <dbl> <dbl> <dbl> <dbl> <dbl>   <dbl>
1 healthyR.da…         2 LM          Test  0.666 225.  0.648  120. 0.799 5.97e-2
2 healthyR             4 NNAR        Test  0.412 200.  0.705  114. 0.502 2.35e-1
3 healthyR.ts          3 EARTH       Test  0.534 135.  0.693  102. 0.702 6.18e-2
4 healthyverse         1 ARIMA       Test  0.674  97.4 0.674  116. 0.887 7.24e-4
5 healthyR.ai          1 ARIMA       Test  0.786 114.  0.902  152. 0.914 3.04e-1
6 TidyDensity          2 LM          Test  0.947 122.  0.605  158. 1.13  1.93e-2
7 tidyAML              1 ARIMA       Test  0.554 125.  0.674  125. 0.735 9.05e-6
8 RandomWalker         1 ARIMA       Test  0.657 120.  0.686  142. 0.817 6.62e-2
best_nested_modeltime_tbl %>%
  extract_nested_test_forecast() %>%
  #filter(!is.na(.model_id)) %>%
  group_by(package) %>%
  filter_by_time(.date_var = .index, .start_date = max(.index) - 60) %>%
  ungroup() %>%
  plot_modeltime_forecast(
    .interactive = FALSE,
    .conf_interval_alpha = 0.2,
    .facet_scales = "free"
  ) +
  facet_wrap(~ package, nrow = 3) +
  theme_minimal() +
  theme(legend.position = "bottom")

Refitting and Future Forecast

Now that we have the best models, we can make our future forecasts.

nested_modeltime_refit_tbl <- best_nested_modeltime_tbl %>%
    modeltime_nested_refit(
        control = control_nested_refit(verbose = TRUE)
    )
nested_modeltime_refit_tbl
# Nested Modeltime Table
  

# A tibble: 8 × 5
  package       .actual_data .future_data .splits           .modeltime_tables 
  <fct>         <list>       <list>       <list>            <list>            
1 healthyR.data <tibble>     <tibble>     <split [1812|28]> <mdl_tm_t [1 × 5]>
2 healthyR      <tibble>     <tibble>     <split [1804|28]> <mdl_tm_t [1 × 5]>
3 healthyR.ts   <tibble>     <tibble>     <split [1747|28]> <mdl_tm_t [1 × 5]>
4 healthyverse  <tibble>     <tibble>     <split [1717|28]> <mdl_tm_t [1 × 5]>
5 healthyR.ai   <tibble>     <tibble>     <split [1546|28]> <mdl_tm_t [1 × 5]>
6 TidyDensity   <tibble>     <tibble>     <split [1397|28]> <mdl_tm_t [1 × 5]>
7 tidyAML       <tibble>     <tibble>     <split [1004|28]> <mdl_tm_t [1 × 5]>
8 RandomWalker  <tibble>     <tibble>     <split [427|28]>  <mdl_tm_t [1 × 5]>
nested_modeltime_refit_tbl %>%
  extract_nested_future_forecast() %>%
  group_by(package) %>%
  mutate(across(.value:.conf_hi, .fns = ~ standard_inv_vec(
    x    = .,
    mean = std_mean,
    sd   = std_sd
  )$standard_inverse_value)) %>%
  mutate(across(.value:.conf_hi, .fns = ~ liiv(
    x = .,
    limit_lower = limit_lower,
    limit_upper = limit_upper,
    offset      = offset
  )$rescaled_v)) %>%
  filter_by_time(.date_var = .index, .start_date = max(.index) - 60) %>%
  ungroup() %>%
  plot_modeltime_forecast(
    .interactive = FALSE,
    .conf_interval_alpha = 0.2,
    .facet_scales = "free"
  ) +
  facet_wrap(~ package, nrow = 3) +
  theme_minimal() +
  theme(legend.position = "bottom")