14.5 Model Building

We now build, fit, or train a model. R provides access to most machine learning algorithms. We will begin with a simple favourite, the decision tree algorithm, using rpart::rpart(). We record this choice using the generic variables mtype (the type of the model) and mdesc (a human readable description of the model type).

mtype <- "rpart"
mdesc <- "decision tree"

The model will be built from the training dplyr::slice() of the dataset, using tidyselect::all_of() to dplyr::select() the variables of interest. The training slice is identified by the row numbers stored in tr and the column names stored in vars. This training dataset is piped on to rpart::rpart() together with the specification of the model to be built, as stored in form. Using generic variables allows us to change the formula, the dataset, the observations, and the variables used in building the model, yet retain the same programming code. The resulting model is saved into the variable model.
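Before running the pipeline below, the generic variables tr, vars, and form need to hold appropriate values. Their actual construction appears in earlier sections; the following is only a sketch, with a small synthetic dataset standing in for the weather dataset ds, and assumed variable names (rain_tomorrow as the target):

```r
library(dplyr)

# Hypothetical small dataset standing in for the book's ds
# (the real ds is the weather dataset loaded in earlier sections).
set.seed(42)
ds <- tibble(humidity_3pm    = runif(100, 0, 100),
             wind_gust_speed = runif(100, 10, 90),
             rain_tomorrow   = factor(sample(c("No", "Yes"), 100, replace = TRUE)))

target <- "rain_tomorrow"                  # assumed target variable name
vars   <- c("humidity_3pm",                # assumed input variables
            "wind_gust_speed", target)
form   <- formula(paste(target, "~ ."))    # target modelled on all other vars

nobs <- nrow(ds)                           # number of observations
tr   <- sample(nobs, 0.7 * nobs)           # a 70% random training sample
```

With these in place the same model-building code runs unchanged whatever the dataset, formula, or variable selection happens to be.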

ds %>%
  select(all_of(vars)) %>%
  slice(tr) %>%
  rpart(form, .) ->
model

To view the model, simply reference the generic variable model on the command line. This asks R to base::print() the model.

model

## n= 139059 
## node), split, n, loss, yval, (yprob)
##       * denotes terminal node
##  1) root 139059 29437 No (0.7883129 0.2116871)  
##    2) humidity_3pm< 71.5 116662 16101 No (0.8619859 0.1380141) *
##    3) humidity_3pm>=71.5 22397  9061 Yes (0.4045631 0.5954369)  
##      6) humidity_3pm< 82.5 12318  5585 No (0.5465985 0.4534015)  
##       12) wind_gust_speed< 42 7441  2658 No (0.6427899 0.3572101) *
##       13) wind_gust_speed>=42 4877  1950 Yes (0.3998360 0.6001640) *
##      7) humidity_3pm>=82.5 10079  2328 Yes (0.2309753 0.7690247) *

This textual version of the model provides the basic structure of the tree. We present the details in Chapter 20. Different model builders will base::print() different information.
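Beyond base::print(), rpart models support further generic functions that expose more of the tree's structure. The following sketch fits a small illustrative tree on R's built-in iris dataset (not the weather dataset used above) to show two of them:

```r
library(rpart)

# Fit a small illustrative decision tree on a built-in dataset.
model <- rpart(Species ~ ., data = iris)

summary(model)   # detailed splits, surrogate splits, variable importance
printcp(model)   # complexity parameter table, useful when pruning
```

Each model builder in R tends to supply its own print() and summary() methods, so the level of detail reported varies from one algorithm to the next.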

This is our first predictive model. Be sure to spend some time understanding and reflecting on the knowledge that the model exposes.
