I have implemented a method that loops over a set of CSV files containing data on a number of different modules, and adds each 'moduleName' to a HashSet. (Code shown below.)
I used a HashSet because it guarantees no duplicates are inserted, whereas an ArrayList would force me to call the contains() method, which iterates through the whole list to check whether the element is already there.
I believe the hash set therefore has better performance than an array list.
Am I correct in stating that?
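As a small sketch of why no contains() check is needed with a HashSet (the class name here is made up for illustration): HashSet.add() already reports whether the element was new, and silently ignores duplicates.

```java
import java.util.HashSet;
import java.util.Set;

public class AddReturnDemo {
    public static void main(String[] args) {
        Set<String> modulesUploaded = new HashSet<>();
        // add() returns true only when the element was not already present,
        // so no separate contains() scan is needed before inserting.
        System.out.println(modulesUploaded.add("CS101")); // true
        System.out.println(modulesUploaded.add("CS101")); // false (duplicate ignored)
        System.out.println(modulesUploaded.size());       // 1
    }
}
```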
Also, can somebody explain to me:
- How do I work out the performance of each data structure if used?
- What is the complexity in big-O notation?
HashSet<String> modulesUploaded = new HashSet<String>();

for (File f : marksheetFiles) {
    try {
        csvFileReader = new CSVFileReader(f);
        csvReader = csvFileReader.readFile();
        csvReader.readHeaders();

        while (csvReader.readRecord()) {
            String moduleName = csvReader.get("Module");
            if (!moduleName.isEmpty()) {
                // duplicates are ignored automatically by the set
                modulesUploaded.add(moduleName);
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        // close the reader even when an exception is thrown
        csvReader.close();
    }
}

return modulesUploaded;
}
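To make the complexity difference concrete, here is a minimal sketch (method and class names are my own, not from the code above) contrasting the two approaches. With an ArrayList, each insert must scan the list via contains(), so deduplicating n items costs O(n) per insert and O(n^2) overall; with a HashSet, add() hashes the element, giving O(1) average per insert and O(n) overall.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DedupComparison {
    // ArrayList approach: contains() is a linear scan, so each insert
    // is O(n) and deduplicating n items is O(n^2) overall.
    static List<String> dedupWithList(List<String> input) {
        List<String> result = new ArrayList<>();
        for (String s : input) {
            if (!result.contains(s)) { // O(n) scan on every insert
                result.add(s);
            }
        }
        return result;
    }

    // HashSet approach: add() hashes the element, so each insert is
    // O(1) on average and deduplicating n items is O(n) overall.
    static Set<String> dedupWithSet(List<String> input) {
        Set<String> result = new HashSet<>();
        for (String s : input) {
            result.add(s); // duplicates are silently ignored
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> modules = List.of("CS101", "CS102", "CS101", "CS103", "CS102");
        System.out.println(dedupWithList(modules));        // [CS101, CS102, CS103]
        System.out.println(dedupWithSet(modules).size());  // 3
    }
}
```

Note the O(1) figure for HashSet assumes a reasonable hashCode(); in the degenerate case where all elements collide, lookups degrade toward O(n).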
question from:
https://stackoverflow.com/questions/10196343/hash-set-and-array-list-performances