
r - Different results with randomForest() and caret's randomForest (method = "rf")

I am new to caret, and I just want to ensure that I fully understand what it’s doing. Towards that end, I’ve been attempting to replicate the results I get from a randomForest() model using caret’s train() function for method="rf". Unfortunately, I haven’t been able to get matching results, and I’m wondering what I’m overlooking.

I’ll also add that, given that randomForest uses bootstrapping to generate the samples used to fit each of the ntree trees and estimates error from the out-of-bag predictions, I’m a little fuzzy on the difference between specifying "oob" and "boot" in the trainControl() call. These options generate different results, but neither matches the randomForest() model.
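For reference, here is how I have been reading the two options (the comments below are my own understanding, not quoted from the caret docs):

library(caret)

# "oob": no resampling loop; caret fits one forest per tuning value and
# scores it using that forest's own out-of-bag error estimate
ctrl.oob <- trainControl(method = "oob")

# "boot": caret draws `number` bootstrap samples of the data, fits a forest
# on each, and scores it on the rows left out of that sample. This is a
# resampling layer on top of the per-tree bootstrapping that randomForest
# already does internally.
ctrl.boot <- trainControl(method = "boot", number = 50)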

Although I’ve read the caret package website (http://topepo.github.io/caret/index.html), as well as various StackOverflow questions that seem potentially relevant, I haven’t been able to figure out why the caret method = "rf" model produces different results from randomForest(). Thank you very much for any insight you might be able to offer.

Here’s a reproducible example, using the built-in CO2 dataset (it ships with base R’s datasets package, not MASS).

data(CO2)   # CO2 ships with base R's datasets package

library(randomForest)
set.seed(1)
rf.model <- randomForest(uptake ~ .,
                         data = CO2,
                         ntree = 50,
                         nodesize = 5,
                         mtry = 2,
                         importance = TRUE,
                         metric = "RMSE")   # NB: randomForest() has no metric argument; it is silently ignored

library(caret)
set.seed(1)
caret.oob.model <- train(uptake ~ .,
                         data = CO2,
                         method = "rf",
                         ntree = 50,
                         tuneGrid = data.frame(mtry = 2),
                         nodesize = 5,
                         importance = TRUE,
                         metric = "RMSE",
                         trControl = trainControl(method = "oob",
                                                  allowParallel = FALSE))  # allowParallel is a trainControl() argument, not a train() argument

set.seed(1)
caret.boot.model <- train(uptake ~ .,
                          data = CO2,
                          method = "rf",
                          ntree = 50,
                          tuneGrid = data.frame(mtry = 2),
                          nodesize = 5,
                          importance = TRUE,
                          metric = "RMSE",
                          trControl = trainControl(method = "boot", number = 50,
                                                   allowParallel = FALSE))

print(rf.model)
print(caret.oob.model$finalModel)
print(caret.boot.model$finalModel)

Produces the following:

print(rf.model)

      Mean of squared residuals: 9.380421
                % Var explained: 91.88

print(caret.oob.model$finalModel)

      Mean of squared residuals: 38.3598
                % Var explained: 66.81

print(caret.boot.model$finalModel)

      Mean of squared residuals: 42.56646
                % Var explained: 63.16

And the code to look at variable importance:

importance(rf.model)
importance(caret.oob.model$finalModel)
importance(caret.boot.model$finalModel)

1 Reply


Using the formula interface in train() converts factors to dummy variables, so caret's forest is grown on a different predictor matrix. To compare results from caret with randomForest(), you should use the non-formula interface.
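You can see the expansion directly (a quick sketch; the exact column count depends on the default contrasts, e.g. the ordered Plant factor expands into polynomial contrast columns):

dim(CO2[, -5])                             # 84 x 4: the four predictors as-is
dim(model.matrix(uptake ~ ., data = CO2))  # 84 x 15: intercept plus 14 expanded columns

So mtry = 2 means something different in the two fits: sampling 2 of the 4 original variables versus 2 of the 14 expanded columns.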

In your case, you should also provide a seed inside trainControl to get the same result as randomForest().

In the training section of the caret website, there are some notes on reproducibility that explain how to use seeds.

library("randomForest")
set.seed(1)
rf.model <- randomForest(uptake ~ ., 
                         data = CO2,
                         ntree = 50,
                         nodesize = 5,
                         mtry = 2,
                         importance = TRUE, 
                         metric = "RMSE")

library("caret")
caret.oob.model <- train(CO2[, -5], CO2$uptake, 
                         method = "rf",
                         ntree = 50,
                         tuneGrid = data.frame(mtry = 2),
                         nodesize = 5,
                         importance = TRUE, 
                         metric = "RMSE",
                         trControl = trainControl(method = "oob", seed = 1),
                         allowParallel = FALSE)

If you are doing resampling, you should provide one seed for each resampling iteration plus an additional one for the final model. The examples in ?trainControl show how to create them.

In the following example there are 25 bootstrap resamples (the default for method = "boot"), so the seeds list has 26 elements; the last one is for the final model, and I set it to 1.

# one seed per bootstrap resample (25 by default), plus one for the final model
seeds <- as.list(1:26)

# for the final model, matching set.seed(1) above
seeds[[26]] <- 1

caret.boot.model <- train(CO2[, -5], CO2$uptake, 
                          method = "rf",
                          ntree = 50,
                          tuneGrid = data.frame(mtry = 2),
                          nodesize = 5,
                          importance = TRUE, 
                          metric = "RMSE",
                          trControl = trainControl(method = "boot",
                                                   seeds = seeds,
                                                   allowParallel = FALSE))

With the non-formula interface defined correctly and the seeds set in trainControl(), you will get the same results in all three models:

rf.model
caret.oob.model$finalModel
caret.boot.model$finalModel
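
To confirm the fits really line up (a quick check, assuming the seeds take effect as described above), compare the out-of-bag predictions; predict() on a randomForest fit with no newdata returns them, and identical forests give identical values:

all.equal(predict(rf.model), predict(caret.oob.model$finalModel))
all.equal(predict(rf.model), predict(caret.boot.model$finalModel))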
