14.8 Evaluating the Model

Before deploying a model we want some measure of confidence in its predictions. This is the role of evaluation—we evaluate the performance of a model to gain an expectation of how well it will perform on new observations.

We evaluate a model by making predictions on observations that were not used in building the model. These observations must have a known outcome so that we can compare the model's predictions against the known outcomes. This is the purpose of the test dataset, as explained in Section 8.12.

# Compare the first few predictions against the corresponding actual outcomes.
head(predict_te) == head(actual_te)
## [1] TRUE TRUE TRUE TRUE TRUE TRUE

# Count how many of those first six predictions are correct.
sum(head(predict_te) == head(actual_te))
## [1] 6

# Count the correct predictions over the full test dataset.
sum(predict_te == actual_te)
## [1] 28242
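
The raw count of agreements is more informative as a proportion of the test dataset. A minimal sketch follows, assuming predict_te and actual_te are aligned vectors of predicted and observed outcomes over the test dataset; a simple cross tabulation then breaks the agreement down by class.

# Proportion of test observations predicted correctly (overall accuracy).
sum(predict_te == actual_te) / length(actual_te)

# Cross tabulate actual against predicted outcomes (a confusion matrix).
table(Actual = actual_te, Predicted = predict_te)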

