Some packages do allow working with a CSV file through some kind of record type that you can then read by index or header name (I cannot find the one I remember). But considering the size of the input, I am unsure how regular CSV deserializers will perform.
Please consider that if there is no underlying class to represent a record, then at some point you will have to tell the code what type to use (each time you access a property). You could write, for example, a Python script that generates the *.cs file for the class based on the first two lines (the header and one sample record), and compile it into the project.
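A minimal sketch of such a generator, assuming the first line holds the headers and the second line a representative record; the type inference here is a naive guess between `int`, `double`, and `string`, and `Record` is just a placeholder class name:

```python
import csv

def infer_type(value: str) -> str:
    # Naive inference: try int, then float, else fall back to string.
    try:
        int(value)
        return "int"
    except ValueError:
        pass
    try:
        float(value)
        return "double"
    except ValueError:
        return "string"

def generate_class(csv_path: str, class_name: str = "Record") -> str:
    # Read the header row and one sample row to guess property types.
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        headers = next(reader)
        sample = next(reader)
    props = "\n".join(
        f"    public {infer_type(v)} {h.strip().replace(' ', '_')} {{ get; set; }}"
        for h, v in zip(headers, sample)
    )
    return f"public class {class_name}\n{{\n{props}\n}}\n"
```

Pipe the returned string into a `.cs` file and add it to the project; if the sample row happens to contain an atypical value (e.g. an empty field), the inferred type will of course be wrong, so a quick manual review of the generated class is advisable.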
Regarding not using any packages: well, you could write some simple code that splits each line. If it is guaranteed that no field contains a comma (or whatever the separator is) or a line break, that could work; but you would still have to write dynamic code that matches CSV columns to properties and somehow finds a proper deserializer for each type. I would strongly suggest using a library for this, such as CsvHelper.
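To illustrate why that guarantee matters (sketched in Python rather than C# for brevity), a naive split falls apart as soon as a field is quoted and contains the separator, while a real CSV parser handles the quoting:

```python
import csv
import io

data = 'id,comment\n1,"hello, world"\n'

# Naive approach: split the data line on the separator.
naive = data.splitlines()[1].split(",")
# The quoted field is torn apart: ['1', '"hello', ' world"']

# A proper CSV parser respects the quoting rules.
rows = list(csv.reader(io.StringIO(data)))
# rows[1] is ['1', 'hello, world']
```

The same failure mode exists in C# with `line.Split(',')`, which is exactly what libraries like CsvHelper save you from.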
As a side note, if you are willing to consider alternatives, I would load this thing into a key-value database (you can simulate one with an RDBMS, though it won't be super fast). It might be easier to work with via SQL.
TL;DR
- Option 1: generate a class with a script, then use a NuGet package to handle the deserialization (kind of a 'spray and pray' method); LINQ will be available as normal
- Option 2: use a database, which is better prepared for large datasets
He who fights with dragons too long becomes a dragon himself; gaze long into the abyss, and the abyss gazes back into you…