
r - Statistical test with test-data

If I am using two methods (NN and kNN) with caret and I then want to run a significance test, how can I do a Wilcoxon test?

I have provided a sample of my data as follows:

structure(list(Input = c(25, 193, 70, 40), Output = c(150, 98, 
        27, 60), Inquiry = c(75, 70, 0, 20), File = c(60, 36, 12, 12), 
        FPAdj = c(1, 1, 0.8, 1.15), RawFPcounts = c(1750, 1902, 535, 
        660), AdjFP = c(1750, 1902, 428, 759), Effort = c(102.4, 
        105.2, 11.1, 21.1)), row.names = c(NA, 4L), class = "data.frame")

    library(caret)
    library(farff)    # assuming readARFF() comes from the farff package

    d <- readARFF("albrecht.arff")
    index <- createDataPartition(d$Effort, p = .70, list = FALSE)
    tr <- d[index, ]
    ts <- d[-index, ]

    boot <- trainControl(method = "repeatedcv", number = 100)

    cart1 <- train(log10(Effort) ~ ., data = tr,
                   method = "knn",
                   metric = "MAE",
                   preProc = c("center", "scale", "nzv"),
                   trControl = boot)

    postResample(predict(cart1, ts), log10(ts$Effort))

    cart2 <- train(log10(Effort) ~ ., data = tr,
                   method = "knn",
                   metric = "MAE",
                   preProc = c("center", "scale", "nzv"),
                   trControl = boot)

    postResample(predict(cart2, ts), log10(ts$Effort))

How can I perform wilcox.test() here?

    Warm regards

1 Reply


One way to deal with your problem is to generate several performance values for kNN and NN, which you can then compare using a statistical test. This can be achieved with nested resampling.

In nested resampling you perform the train/test split multiple times and evaluate the model on each test set.

Let's use the BostonHousing data, for instance:

library(caret)
library(mlbench)

data(BostonHousing)

Let's select just the numerical columns to keep the example simple:

d <- BostonHousing[,sapply(BostonHousing, is.numeric)]

As far as I know there is no way to perform nested CV in caret out of the box, so a simple wrapper is needed.

Generate the outer folds for the nested CV:

outer_folds <- createFolds(d$medv, k = 5)

Let's use bootstrap resampling as the inner resampling loop to tune the hyperparameters:

boot <- trainControl(method = "boot",
                     number = 100)

Now loop over the outer folds, performing hyperparameter optimization on each train set and predicting on the corresponding test set:

CV_knn <- lapply(outer_folds, function(index){
  tr <- d[-index, ]
  ts <- d[index,]
  
  cart1 <- train(medv ~ ., data = tr,
                 method = "knn",
                 metric = "MAE",
                 preProc = c("center", "scale", "nzv"),
                 trControl = boot,
                 tuneLength = 10) # to keep it short we will probe just 10 hyperparameter values
  
  postResample(predict(cart1, ts), ts$medv)
})

Extract just the MAE from the results:

sapply(CV_knn, function(x) x[3]) -> CV_knn_MAE
CV_knn_MAE
#output
Fold1.MAE Fold2.MAE Fold3.MAE Fold4.MAE Fold5.MAE 
 2.503333  2.587059  2.031200  2.475644  2.607885 

Do the same for the glmnet learner, for instance:

CV_glmnet <- lapply(outer_folds, function(index){
  tr <- d[-index, ]
  ts <- d[index,]
  
  cart1 <- train(medv ~ ., data = tr,
                 method = "glmnet",
                 metric = "MAE",
                 preProc = c("center", "scale", "nzv"),
                 trControl = boot,
                 tuneLength = 10)
  
  postResample(predict(cart1, ts), ts$medv)
})

sapply(CV_glmnet, function(x) x[3]) -> CV_glmnet_MAE

CV_glmnet_MAE
#output
Fold1.MAE Fold2.MAE Fold3.MAE Fold4.MAE Fold5.MAE 
 3.400559  3.383317  2.830140  3.605266  3.525224

Now compare the two using wilcox.test(). Since the performance for both learners was generated on the same data splits, a paired test is appropriate:

wilcox.test(CV_knn_MAE,
            CV_glmnet_MAE,
            paired = TRUE)
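
With the five folds shown above, every kNN MAE is lower than the corresponding glmnet MAE, so the exact paired Wilcoxon test should report V = 0 and a two-sided p-value of 0.0625, the smallest value attainable with only five paired observations.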

If comparing more than two algorithms, one can use friedman.test(), as sketched below.
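
For instance, a minimal sketch assuming a third learner was also evaluated on the same outer folds (here a hypothetical random forest run yielding CV_rf_MAE): bind the per-fold MAEs into a matrix and pass it to friedman.test(), which takes the rows as blocks (folds) and the columns as groups (algorithms):

# hypothetical third learner: CV_rf_MAE is assumed to hold per-fold MAEs from
# a random forest tuned and evaluated on the same outer_folds as above
mae_matrix <- cbind(knn    = CV_knn_MAE,
                    glmnet = CV_glmnet_MAE,
                    rf     = CV_rf_MAE)

# rows of the matrix are the blocks (folds), columns are the groups (learners)
friedman.test(mae_matrix)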

