10.15 Imputation

20240816

Imputation is the process of filling in the gaps (or missing values) in data. We need to be careful when imputing missing values, as doing so effectively invents new data and is not always wise. There is considerable debate about whether imputation is a good idea, since we end up inventing data to suit the needs of the tool we are using. We won't cover that debate here, but be sure to understand the pros and cons, and do be aware that imputation can be problematic.

Often, data will contain some degree of missing values, and this can cause a problem for some modelling algorithms, though not all. For example, randomForest::randomForest() silently removes any observation with any missing value by default, whilst rpart::rpart() has a particularly well-developed approach to dealing with missing values. For datasets with a very large number of variables and a reasonable number of missing values, removing observations or variables with missing values may well result in a small, unrepresentative dataset, or even no dataset at all!
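To illustrate how quickly listwise deletion can shrink a dataset, here is a small sketch on hypothetical data:

```r
# Hypothetical dataset with missing values scattered across variables.
ds <- data.frame(a = c(1, NA, 3, 4),
                 b = c(NA, 2, 3, NA))

nrow(ds)                        # 4 observations to start with.
nrow(ds[complete.cases(ds), ])  # Only 1 observation has no missing value.
```

With just two variables and a handful of missing values, three quarters of the observations are lost; the effect compounds as the number of variables grows.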

There are many types of imputation available, only some of which are directly supported in Rattle.

If the missing data pattern is monotonic, then imputation can be simplified. The pattern of missing values is also useful in suggesting which variables could be candidates for imputing the missing values of other variables. Refer to the Show Missing check button of the Summary option of the Explore tab for details.
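Outside the Rattle interface, the pattern of missing values can also be summarised directly in R; a base-R sketch on hypothetical data:

```r
# Hypothetical dataset with missing values.
ds <- data.frame(age    = c(25, NA, 41, 33, NA),
                 income = c(50000, 60000, NA, 45000, 52000),
                 state  = c("qld", "nsw", NA, NA, "vic"))

# Count of missing values per variable.
colSums(is.na(ds))

# Distinct missingness patterns across observations (TRUE marks a missing value).
unique(is.na(ds))
```

The per-variable counts suggest which variables most need attention, while the distinct patterns indicate whether the missingness is close to monotonic.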

When Rattle performs an imputation it will store the results in a variable of the dataset which has the same name as the variable that is imputed, but prefixed with IMP_. Such variables, whether they are imputed by Rattle or already existed in the dataset loaded into Rattle (e.g., a dataset from SAS), will be treated as input variables, and the original variable marked to be ignored.
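The naming convention can be mimicked manually. A sketch of the idea (not Rattle's actual code), using a hypothetical age variable and a simple median fill:

```r
# Hypothetical dataset with a variable containing missing values.
ds <- data.frame(age = c(25, NA, 41, 33, NA))

# Copy the variable, prefixing with IMP_, and impute only the copy.
ds$IMP_age <- ds$age
ds$IMP_age[is.na(ds$IMP_age)] <- median(ds$age, na.rm = TRUE)

# The original age would then be marked as ignored, with IMP_age as an input.
```

Keeping the original variable untouched makes it easy to audit or undo the imputation later.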

A simple tool for imputing missing values using a model is randomForest::na.roughfix() from randomForest (Breiman et al. 2024). This function provides, as the name implies, a rather basic algorithm for imputing missing values. Because of this we will demonstrate the process but then restore the original dataset, since we do not want this imputation to remain in our actual dataset and override the original variable values.

# Load the required packages.

library(randomForest)   # Provides na.roughfix().
library(magrittr)       # Provides the %>% and %<>% pipes.

# Backup the dataset so we can restore it as required.

ods <- ds

# Count the number of missing values.

ds[vars] %>% is.na() %>% sum()

# Impute missing values.

ds[vars] %<>% na.roughfix()

# Confirm that no missing values remain.

ds[vars] %>% is.na() %>% sum()
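For reference, na.roughfix() fills missing numeric values with the column median and missing factor values with the most frequent level. A base-R sketch of the same idea (a hypothetical helper, not the randomForest implementation):

```r
# Median/mode fill, mimicking the behaviour of na.roughfix() (illustrative only).
rough_fix <- function(df) {
  for (v in names(df)) {
    idx <- is.na(df[[v]])
    if (!any(idx)) next
    if (is.numeric(df[[v]])) {
      # Numeric: fill with the column median.
      df[[v]][idx] <- median(df[[v]], na.rm = TRUE)
    } else {
      # Factor/character: fill with the most frequent value.
      tab <- table(df[[v]])
      df[[v]][idx] <- names(tab)[which.max(tab)]
    }
  }
  df
}

ds  <- data.frame(x = c(1, NA, 3),
                  y = c("a", "a", NA),
                  stringsAsFactors = FALSE)
out <- rough_fix(ds)
```

Such a rough fix returns a complete dataset but shrinks the variance of each imputed variable, which is why it suits quick experiments more than final models.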

We now restore the dataset with its original contents.

# Restore the original dataset.

ds <- ods

References

Breiman, Leo, Adele Cutler, Andy Liaw, and Matthew Wiener. 2024. randomForest: Breiman and Cutler's Random Forests for Classification and Regression. https://www.stat.berkeley.edu/~breiman/RandomForests/.
