14.5 Model Building
We now build, fit, or train a model. Most machine learning algorithms are available in R. We will begin with a simple favourite, the decision tree algorithm, using rpart::rpart(). We record this information using the generic variables mdesc (a human readable description of the model type) and mtype (the type of the model).
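A minimal sketch of recording these generic variables, with illustrative values based on the model type named above:

mtype <- "rpart"          # type of the model
mdesc <- "decision tree"  # human readable description of the model type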
The model will be built from the training dplyr::slice() of the dataset, using the variables dplyr::select()'ed via tidyselect::all_of(). The training slice is identified by the row numbers stored in tr and the column names stored in vars. This training dataset is piped on to rpart::rpart() together with a specification of the model to be built, as stored in form. Using generic variables allows us to change the formula, the dataset, the observations, and the variables used in building the model, yet retain the same programming code. The resulting model is saved into the variable model.
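A sketch of this pipeline follows, assuming ds names the full dataset (the dataset variable is not named above) and that tr, vars, and form have already been defined:

library(dplyr)   # provides %>%, select(), and slice()

# Build the decision tree from the training slice of the dataset,
# restricted to the modelling variables, using the formula stored in form.
model <- ds %>%
  dplyr::slice(tr) %>%
  dplyr::select(tidyselect::all_of(vars)) %>%
  rpart::rpart(form, data=.)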
To view the model, simply reference the generic variable model on the command line. This asks R to base::print() the model.
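For example, at the R prompt:

model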
## n= 158807
##
## node), split, n, loss, yval, (yprob)
##       * denotes terminal node
##
##  1) root 158807 34106 No (0.7852362 0.2147638)
##    2) humidity_3pm< 72.5 134559 19156 No (0.8576387 0.1423613) *
##    3) humidity_3pm>=72.5 24248 9298 Yes (0.3834543 0.6165457)
##      6) humidity_3pm< 82.5 12367 5806 No (0.5305248 0.4694752)
##       12) rainfall< 1.25 7253 2688 No (0.6293947 0.3706053) *
##       13) rainfall>=1.25 5114 1996 Yes (0.3903011 0.6096989) *
##      7) humidity_3pm>=82.5 11881 2737 Yes (0.2303678 0.7696322) *
This textual version of the model provides the basic structure of the tree. We present the details in Chapter 20. Different model builders will base::print() different information.
This is our first predictive model. Be sure to spend some time understanding and reflecting on the knowledge that the model exposes.