Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share

conv neural network - Deeplearning4j DataSetIterator reset not supported, due to which model.fit() does not work

I am trying to run a classifier model on the Cats and Dogs dataset. Everything works fine until the code reaches the model.fit(dataSetIterator, nEpochs) step. Debugging shows that the RecordReaderDataSetIterator hangs at this step because of the following check in MultiLayerNetwork:

Preconditions.checkArgument(numEpochs == 1 || iterator.resetSupported(), "Cannot perform multiple epochs training using " + "iterator that does not support resetting (iterator.resetSupported() returned false)");

The complete code is as under:

public class ImageRecordReaderExample {

    static final int COUNTIN = 30;

public static void main(String[] args){

    /*
    *  Note:
    *  1) Download the image datasets from imagenet for a number of different labels (For eg: dogs, cats, bears etc..).
    *     URL -> http://www.image-net.org/
    *  2) Create different sub directories for the category of image files that was downloaded.
    *     The subdirectory folder names indicates the possible outcomes (labels) of the model.
    *
    *  This is a simple example to check whether your data is properly extracted from source.
    *
    * */
    try {
        FileSplit fileSplit = new FileSplit(new File("E:\\Thesis\\Databases\\dogs-vs-cats\\train"), NativeImageLoader.ALLOWED_FORMATS, new Random(42));
        int numLabels = fileSplit.getRootDir().listFiles(File::isDirectory).length;
       // INDArray weightsArray = Nd4j.create(new double[]{0.35, 0.65});
       // wtsArray = new LossMCXENT(weightsArray)
        ParentPathLabelGenerator parentPathLabelGenerator = new ParentPathLabelGenerator();
        BalancedPathFilter balancedPathFilter = new BalancedPathFilter(
                new Random(42),
                NativeImageLoader.ALLOWED_FORMATS,
                parentPathLabelGenerator
        );
        System.out.println("filesplit.....");
        InputSplit[] inputSplits = fileSplit.sample(balancedPathFilter,80,20);
        InputSplit trainData = inputSplits[0];
        InputSplit testData = inputSplits[1];
        System.out.println("train and test data split carried out");
      
        DataNormalization scaler = new ImagePreProcessingScaler(0,1);

        // Neural Network
        MultiLayerConfiguration config;
        config = new NeuralNetConfiguration.Builder()
                //.weightInit(WeightInit.DISTRIBUTION)
                //.dist(new NormalDistribution(0.0, 0.01))
                //.activation(Activation.RELU)
                //.updater(new Nesterovs(new StepSchedule(ScheduleType.ITERATION, 1e-2, 0.1, 100000), 0.9))
                //.biasUpdater(new Nesterovs(new StepSchedule(ScheduleType.ITERATION, 2e-2, 0.1, 100000), 0.9))
                //.gradientNormalization(GradientNormalization.RenormalizeL2PerLayer) // normalize to prevent vanishing or exploding gradients
                //.l2(5 * 1e-4)
                 .weightInit(WeightInit.XAVIER)
                 .updater(new Nesterovs(0.008D,0.9D))
                .list()
                .layer(new ConvolutionLayer.Builder(2,2)
                        .nIn(COUNTIN)
                        .nOut(15)
                        .stride(2,2)
                        .activation(Activation.RELU)
                        .build())
                .layer(1, new LocalResponseNormalization.Builder().name("lrn1").build())
                .layer(new SubsamplingLayer.Builder(PoolingType.MAX)
                        .kernelSize(2,2)
                        .build())
                .layer(new ConvolutionLayer.Builder(2,2)
                        .nOut(7)
                        .stride(2,2)
                        .activation(Activation.RELU)
                        .build())
                .layer(2, new LocalResponseNormalization.Builder().name("lrn2").build())
                .layer(new SubsamplingLayer.Builder(PoolingType.MAX)
                        .kernelSize(2,2)
                        .build())
                .layer(new DenseLayer.Builder()
                        .nOut(100)
                        .dist(new NormalDistribution(0.001, 0.005))
                        .activation(Activation.RELU)
                        .build())
                .layer(new DenseLayer.Builder()
                        .nOut(100)
                        .dist(new NormalDistribution(0.001, 0.005))
                        .activation(Activation.RELU)
                        .build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nOut(numLabels)
                        .activation(Activation.SOFTMAX)
                        .build())
                .setInputType(InputType.convolutional(30,30,3)).backpropType(BackpropType.Standard)
                .build();
       
      //train without transformations
        ImageRecordReader imageRecordReader = new ImageRecordReader(30,30,3,parentPathLabelGenerator);
        imageRecordReader.initialize(trainData,null);
        DataSetIterator dataSetIterator = new RecordReaderDataSetIterator(imageRecordReader,4,1,numLabels);
        scaler.fit(dataSetIterator);
        dataSetIterator.setPreProcessor(scaler);
        
        MultiLayerNetwork model = new MultiLayerNetwork(config);
        model.init();
        model.setListeners(new ScoreIterationListener(10)); //PerformanceListener for optimized training
        System.out.println("Fitting Model..");
        model.fit(dataSetIterator,1);
        
       

                  
        imageRecordReader.initialize(testData);
        dataSetIterator = new RecordReaderDataSetIterator(imageRecordReader,4,1,numLabels);
        scaler.fit(dataSetIterator);
        dataSetIterator.setPreProcessor(scaler);

        Evaluation evaluation = model.evaluate(dataSetIterator);
        System.out.println("args = [" + evaluation.stats() + "]");

        File modelFile = new File("cnntrainedmodel.zip");
        ModelSerializer.writeModel(model,modelFile,true);
        ModelSerializer.addNormalizerToModel(modelFile,scaler);
        
        
    } catch(IllegalArgumentException e){
        System.out.println("Please provide proper image directory path in place of: Path/to/image-files ");
        System.out.println("For more details, please refer to the instructions listed in comment section");
    } catch (IOException e) {
        e.printStackTrace();
    }
}

}
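For context, the precondition quoted above simply rejects multi-epoch training whenever the iterator cannot rewind. The logic can be sketched in plain Java without any DL4J dependency (ToyIterator and ResetCheckSketch are illustrative names, not DL4J types):

```java
// Toy model of DL4J's precondition: multi-epoch fit() requires a resettable iterator.
interface ToyIterator {
    boolean resetSupported();
}

class ResetCheckSketch {

    // Mirrors Preconditions.checkArgument(numEpochs == 1 || iterator.resetSupported(), ...)
    static void fit(ToyIterator iterator, int numEpochs) {
        if (!(numEpochs == 1 || iterator.resetSupported())) {
            throw new IllegalArgumentException(
                    "Cannot perform multiple epochs training using an iterator "
                    + "that does not support resetting");
        }
        // ... actual training would iterate over the data here ...
    }

    public static void main(String[] args) {
        ToyIterator nonResettable = () -> false; // like an iterator whose reader cannot rewind

        fit(nonResettable, 1);   // accepted: a single epoch never needs a reset
        try {
            fit(nonResettable, 5);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

So fit(iterator, 1) always passes the check; anything above one epoch requires resetSupported() to return true, or the caller must re-create/re-initialize the iterator per epoch.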

question from: https://stackoverflow.com/questions/65873306/deeplearning4j-datasetiterator-reset-not-supported-due-to-which-model-fit-does


1 Reply


Thanks, I got the solution:

  1. First, javacpp 1.5.4 and javacv-platform 1.5.3 are compatible.
  2. In the first layer of config = new NeuralNetConfiguration.Builder(), nIn must be the number of input channels (nChannels = 3 in my case), and nOut was kept the same as the width/height of the resized image.
  3. To avoid long processing, I removed the extra convolutional layers and used only the first convolutional layer followed by the output layer.
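Applied to the code in the question, points 2 and 3 translate roughly into a first layer like the following (a sketch only, assuming 30×30 RGB images so nChannels = 3; the exact nOut value is a design choice, not a requirement):

```java
// Sketch of the corrected first (and only) convolution layer:
// nIn must equal the number of input channels, not the image width.
.layer(new ConvolutionLayer.Builder(2, 2)
        .nIn(3)              // nChannels = 3 for RGB, not 30
        .nOut(30)            // kept the same as the resized image width/height
        .stride(2, 2)
        .activation(Activation.RELU)
        .build())
// ... followed directly by the output layer, with the extra conv layers removed.
```

With nIn matching the channel count declared in setInputType(InputType.convolutional(30, 30, 3)), the layer shapes line up and training proceeds.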
