Providing example data would help. However, you might be able to adapt the following to your needs.
I created an example data file, which is just a text file containing the following:
1sep2sep3
1sep2sep3
1sep2sep3
1sep2sep3
1sep2sep3
1sep2sep3
1sep2sep3
I saved it as 'test.csv'. The separator here is the string 'sep'. I believe read.csv() is built on scan(), which only accepts a single character for sep. To work around that, consider the following:
dat <- readLines('test.csv')
dat <- gsub("sep", " ", dat)
dat <- textConnection(dat)
dat <- read.table(dat)
readLines() just reads the lines in. gsub() substitutes a single ' ' for the multi-character separation string, or whatever replacement is convenient for your data. Then textConnection() and read.table() read everything back in conveniently. For smaller datasets this should be fine. If you have very large data, consider preprocessing with something like AWK to substitute the multi-character separation string. The above is from http://tolstoy.newcastle.edu.au/R/e4/help/08/04/9296.html .
Update
Regarding your comment, if you have spaces in your data, use a different replacement separator. Consider changing test.csv
to:
1sep2 2sep3
1sep2 2sep3
1sep2 2sep3
1sep2 2sep3
1sep2 2sep3
1sep2 2sep3
1sep2 2sep3
Then, with the following function:
readMulti <- function(x, sep, replace, as.is = TRUE)
{
    dat <- readLines(x)                    # read the raw lines
    dat <- gsub(sep, replace, dat)         # swap in the single-character separator
    dat <- textConnection(dat)
    dat <- read.table(dat, sep = replace, as.is = as.is)
    return(dat)
}
Try:
readMulti('test.csv', sep = "sep", replace = "\t", as.is = TRUE)
Here, you replace the original separator with tabs ("\t"). The as.is argument is passed to read.table() to prevent strings from being read in as factors, but that's your call. If you have more complicated white space within your data, you might find the quote argument in read.table() helpful, or pre-process with AWK, perl, etc.
Something similar with crippledlambda's strsplit() approach is most likely equivalent for moderately sized data. If performance becomes an issue, try both and see which works for you.
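A minimal sketch of that strsplit() route, assuming the same 'sep'-delimited lines as above (the variable names here are mine, not crippledlambda's):

```r
# Example lines with a multi-character separator, as in test.csv above.
dat   <- c("1sep2sep3", "4sep5sep6")
# Split each line on the literal string "sep" (fixed = TRUE avoids regex
# interpretation), convert each piece to numeric, and bind rows together.
parts <- strsplit(dat, "sep", fixed = TRUE)
df    <- as.data.frame(do.call(rbind, lapply(parts, as.numeric)))
```

In practice you would get dat from readLines() on your file; the rest is the same.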