There's nothing wrong with the file. "ANSI as UTF-8" means there's no BOM but Notepad++ has definitely identified the encoding as UTF-8 by analyzing byte patterns. I tested this by creating a file with Russian, Greek and Polish text in it and saving it as UTF-8 without a BOM. Here it is:
# Russian
Следующая
# Greek
Επόμενη
# Polish
Więcej
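If you want to reproduce that first step without a hex editor, here's a minimal Python sketch (the filename is my own placeholder, not part of the original test):

    # Save the test text as UTF-8 with no BOM, then check the first bytes.
    text = "# Russian\nСледующая\n# Greek\nΕπόμενη\n# Polish\nWięcej\n"

    with open("test.txt", "w", encoding="utf-8") as f:   # Python's utf-8 codec writes no BOM
        f.write(text)

    with open("test.txt", "rb") as f:
        head = f.read(3)
    print(head.hex(" "))               # 23 20 52 ("# R"), not ef bb bf
    assert head != b"\xef\xbb\xbf"     # confirms the BOM is absent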
I did this in a different editor (EditPad Pro) and used hex mode to make sure the BOM wasn't there. When I opened it in NPP it showed the encoding as "ANSI as UTF-8" and all of the characters displayed correctly. Then, still in hex mode, I removed the first byte of the first Russian character (the D0 of D0 A1, the UTF-8 encoding of С). When I opened it in NPP again, it showed the encoding as "ANSI" and displayed the non-ASCII parts of the text as mojibake:
; Russian
¡Ð»ÐµÐ´ÑƒÑŽÑ‰Ð°Ñ
; Greek
Î•Ï€ÏŒÎ¼ÎµÎ½Î·
; Polish
WiÄ™cej
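That corruption is easy to simulate in code, too. A sketch, using cp1252 as a stand-in for whatever your platform's default ANSI code page happens to be:

    data = "Следующая".encode("utf-8")    # d0 a1 d0 bb d0 b5 ...
    corrupt = data[1:]                    # drop the leading 0xD0, leaving an orphaned 0xA1

    try:
        corrupt.decode("utf-8")
    except UnicodeDecodeError as e:
        print("not valid UTF-8:", e)      # a bare continuation byte can't start a character

    # A single-byte code page happily maps (almost) every byte to something,
    # which is exactly where the mojibake above comes from.
    print(corrupt.decode("cp1252", errors="replace"))   # ¡Ð»ÐµÐ´ÑƒÑŽÑ‰Ð°Ñ plus a replacement char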
Back to EditPad, and this time I added a BOM but didn't repair the Cyrillic character. This time NPP reported the encoding as "UTF-8" and everything displayed correctly except that first Russian character, as shown below. "A1" is the hex representation of what should have been the second byte of that character in UTF-8. It was displayed in an inverted color scheme to indicate an error.
# Russian
A1ледующая
# Greek
Επόμενη
# Polish
Więcej
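The same effect, sketched in Python: once a BOM commits the decoder to UTF-8, only the orphaned byte is treated as an error:

    bom = b"\xef\xbb\xbf"
    corrupt = "Следующая".encode("utf-8")[1:]   # still missing its first byte

    # "utf-8-sig" strips the BOM; backslashreplace marks the lone bad byte,
    # the way NPP's inverted "A1" cell does.
    print((bom + corrupt).decode("utf-8-sig", errors="backslashreplace"))
    # -> \xa1ледующая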
To summarize: In the absence of a BOM, Notepad++ looks for bytes that can't represent ASCII characters because their values are greater than 127 (7F hex). If it finds any, but they all conform to the patterns required by UTF-8, it decodes the file as UTF-8 and reports the encoding in the status bar as "ANSI as UTF-8".
But if it finds even one byte that doesn't toe the UTF-8 line, it decodes the file as "ANSI", meaning the default single-byte encoding for the underlying platform. If your file had been corrupted, that's what you would be seeing.
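Put differently, the heuristic amounts to "try strict UTF-8 first, fall back to ANSI." Here's my rough approximation of that logic (a guess at the behavior, not NPP's actual source):

    def guess_encoding(raw: bytes) -> str:
        """Approximate Notepad++'s status-bar label (an assumption, not its real code)."""
        if raw.startswith(b"\xef\xbb\xbf"):
            return "UTF-8"              # a BOM settles it
        try:
            raw.decode("utf-8")
        except UnicodeDecodeError:
            return "ANSI"               # at least one byte broke the UTF-8 rules
        if any(b > 0x7F for b in raw):
            return "ANSI as UTF-8"      # non-ASCII bytes, all valid UTF-8
        return "ANSI"                   # pure ASCII: the two are indistinguishable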
EDIT: Although your file is valid without it, you could add a BOM by manually writing the three bytes "EF BB BF" at the very beginning of the file, but there should be a better way. How are you generating the content now? The file is definitely UTF-8, with at least one non-ASCII character in there somewhere; otherwise, NPP would report it as "ANSI".
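If you end up doing it programmatically after all, prepending those three bytes is only a couple of lines; in Python, for instance (the filename is just a placeholder):

    with open("data.csv", "rb") as f:
        body = f.read()

    if not body.startswith(b"\xef\xbb\xbf"):   # don't add a second BOM
        with open("data.csv", "wb") as f:
            f.write(b"\xef\xbb\xbf" + body)

Better still, whatever generates the file could write it with a BOM-emitting encoding in the first place (Python calls that one "utf-8-sig").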
Another possibility to consider: if you have any influence over the process that consumes your CSV file, maybe you can configure it to expect UTF-8 without a BOM. Technically, any software that can decode UTF-8 with a BOM but not without one is broken. The Unicode Consortium actually discourages use of the UTF-8 BOM, not that anyone's listening.