I'm processing data from government sources (FEC, state voter databases, etc.). It's inconsistently malformed, which breaks my CSV parser in all sorts of delightful ways.
It's externally sourced and authoritative. I must parse it, and I cannot have it re-input, validated on input, or the like. It is what it is; I don't control the input.
Properties:
- Fields contain malformed UTF-8 (e.g. Foo \xAB bar); see the scrubbing sketch just after this list.
- The first field of a line specifies the record type, drawn from a known set. Once you know the record type you know how many fields the line has and what their data types are, but you don't know any of that until you've read that first field (see the schema sketch after this list).
- Any given line within a file might use quoted strings ("foo",123,"bar") or unquoted ones (foo,123,bar). I haven't yet encountered a line that mixes the two (i.e. "foo",123,bar), but it's probably in there.
- Strings may include internal newline, quote, and/or comma character(s).
- Strings may include comma-separated numbers.
- Data files can be very large (millions of rows), so this needs to still be reasonably fast.
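For the malformed UTF-8 on its own, I can at least scrub the raw bytes before the CSV layer ever sees them. A minimal sketch, assuming it's acceptable to replace invalid sequences with U+FFFD (the filename is made up); for the really big files I'd do this per-chunk rather than slurping the whole file, but the idea is the same:

    # Read raw bytes, then replace anything that isn't valid UTF-8 with U+FFFD.
    raw = File.read("fec_dump.csv", mode: "rb").force_encoding("UTF-8")

    clean =
      if raw.respond_to?(:scrub)   # Ruby >= 2.1
        raw.scrub("\uFFFD")
      else                         # 1.9: round-trip through UTF-16 to force validation
        raw.encode("UTF-16", invalid: :replace, undef: :replace, replace: "\uFFFD")
           .encode("UTF-8")
      end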
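Once a line has been split into fields, dispatching on the record type is the straightforward part. Something like the table below, where the type codes and field layouts are pure placeholders for the real FEC/state schemas:

    # Hypothetical schema table: record type (first field) -> per-field converters.
    SCHEMAS = {
      "HDR" => [:to_s, :to_s, :to_s],                # made-up type codes and
      "SA"  => [:to_s, :to_s, :to_i, :to_s, :to_f],  # made-up field layouts
    }

    def coerce(fields)
      schema = SCHEMAS[fields.first] or raise "unknown record type #{fields.first.inspect}"
      if fields.size != schema.size
        raise "#{fields.first}: expected #{schema.size} fields, got #{fields.size}"
      end
      fields.zip(schema).map { |value, conv| value.send(conv) }
    end

The field-count check doubles as the only signal I can see for detecting rows that got split in the wrong place.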
I'm using Ruby FasterCSV (known as just CSV in 1.9), but the question should be language-agnostic.
My guess is that a solution will require a preprocessing pass that substitutes unambiguous record-separator / quote characters (e.g. ASCII RS, STX) before the CSV parser ever sees the data. I've started a bit on that (sketched below), but it doesn't handle everything I'm getting.
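Here's roughly the kind of preprocessing I mean: one pass that tracks whether we're inside a quoted field and rewrites only the real delimiters to ASCII unit / record separators, with two heuristics: a quote only opens a field when it sits at a field boundary, and only closes one when it's immediately followed by a comma or newline. This is a rough sketch under those assumptions, not something I'd trust yet:

    US = "\x1F"   # unit separator   -> unambiguous field delimiter
    RS = "\x1E"   # record separator -> unambiguous row delimiter

    def to_unambiguous(text)
      out = ""
      in_quotes = false
      field_start = true

      chars = text.chars.to_a
      chars.each_with_index do |c, i|
        nxt = chars[i + 1]

        if in_quotes
          if c == '"' && (nxt.nil? || nxt == "," || nxt == "\n" || nxt == "\r")
            in_quotes = false          # closing quote
          else
            out << c                   # commas, quotes, newlines kept literally
          end
        elsif c == '"' && field_start
          in_quotes = true             # opening quote
        elsif c == ","
          out << US
          field_start = true
          next
        elsif c == "\n"
          out << RS
          field_start = true
          next
        elsif c == "\r"
          # swallow; the following \n (if any) ends the record
        else
          out << c                     # stray quotes in unquoted fields pass through
        end

        field_start = false
      end
      out
    end

    rows = to_unambiguous(clean).split(RS).map { |line| line.split(US, -1) }

This copes with the stray quotes in the second line of the sample below (with "an" internal comes through intact), but it still mis-splits the multi-line record, where a quote happens to be followed by a comma in the middle of a field ("internal quote", 1 comma and ...). That ambiguity is exactly the part I don't know how to resolve cleanly.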
How can I process this kind of dirty data robustly?
ETA: Here's a simplified example of what may be in a single file:
"this","is",123,"a","normal","line"
"line","with "an" internal","quote"
"short line","with
an
"internal quote", 1 comma and
linebreaks"
un "quot" ed,text,with,1,2,3,numbers
"quoted","number","series","1,2,3"
"invalid xAB utf-8"