First of all, based on a more general question on StackOverflow, it is not possible to detect the encoding of a file with 100% certainty.
I've struggled with this many times and have come to a non-automatic solution:
Use iconvlist() to get all possible encodings:
codepages <- setNames(iconvlist(), iconvlist())
Then read the data using each of them:
x <- lapply(codepages, function(enc) try(read.table("encoding.asc",
                                                    fileEncoding=enc, nrows=3,
                                                    header=TRUE, sep=""))) # you get lots of errors/warnings here
The important thing here is to know the structure of your file (separator, headers). Set the encoding with the fileEncoding argument and read only a few rows.
Now you can look at the results:
unique(do.call(rbind, sapply(x, dim)))
#         [,1] [,2]
# 437       14    2
# CP1200     3   29
# CP12000    0    1
It seems the correct one is the one with 3 rows and 29 columns, so let's look at them:
maybe_ok <- sapply(x, function(x) isTRUE(all.equal(dim(x), c(3,29))))
codepages[maybe_ok]
#   CP1200   UCS-2LE   UTF-16   UTF-16LE   UTF16   UTF16LE
# "CP1200" "UCS-2LE" "UTF-16" "UTF-16LE" "UTF16" "UTF16LE"
You could look at the data too:
x[maybe_ok]
For your file all of these encodings return identical data (partly because there is some redundancy, as you can see).
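If you want to check that programmatically rather than by eye, a minimal sketch (reusing x and maybe_ok from above) is to compare every candidate against the first one:

# compare every candidate data frame to the first one; given the answer
# above this should come out TRUE for this particular file
all(sapply(x[maybe_ok], function(d) isTRUE(all.equal(d, x[maybe_ok][[1]]))))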
If you don't know the specifics of your file, you need to use readLines with some changes in the workflow (e.g. you can't use the fileEncoding argument, must use length instead of dim, and do more magic to find the correct ones).
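A rough sketch of that readLines-based variant could look like this (assuming the same file name; the length-based filter is only a first cut and you still have to inspect the candidates by eye):

codepages <- setNames(iconvlist(), iconvlist())
x <- lapply(codepages, function(enc) {
  con <- file("encoding.asc", encoding=enc)  # encoding goes on the connection now
  on.exit(close(con))
  try(readLines(con, n=3))                   # read only a few lines
})
# keep encodings that read three lines without an error
maybe_ok <- sapply(x, function(r) !inherits(r, "try-error") && length(r) == 3)
x[maybe_ok]  # the "magic" part: inspect these candidates by hand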