It would seem that Ruby thinks the string's encoding is already UTF-8, so when you do
line.encode!('UTF-8', :undef => :replace, :invalid => :replace, :replace => "")
it doesn't actually do anything, because the destination encoding is the same as the current encoding (at least, that's my interpretation of the code in transcode.c).
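To illustrate (a minimal sketch; the string literal is a made-up example of broken UTF-8):
line = "abc\xE2def"    # tagged as UTF-8, but \xE2 starts a multi-byte sequence that never completes
line.valid_encoding?   # => false
line.encode!('UTF-8', :undef => :replace, :invalid => :replace, :replace => "")
line.valid_encoding?   # => still false - nothing was replaced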
The real question here is whether your starting data is valid in some encoding other than UTF-8, or whether it is data that is supposed to be UTF-8 but has a few warts in it that you want to discard.
In the first case, the correct thing to do is tell Ruby what this encoding is. You can do this when you open the file:
File.open('somefile', 'r:iso-8859-1')
will open the file, interpreting its contents as ISO-8859-1.
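For example (assuming somefile exists and contains Latin-1 text):
f = File.open('somefile', 'r:iso-8859-1')
line = f.gets
line.encoding    # => #<Encoding:ISO-8859-1>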
You can even get Ruby to transcode for you:
File.open('somefile', 'r:iso-8859-1:utf-8')
will open the file as ISO-8859-1, but when you read data from it the bytes will be converted to UTF-8 for you.
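With the same hypothetical file:
f = File.open('somefile', 'r:iso-8859-1:utf-8')
line = f.gets
line.encoding    # => #<Encoding:UTF-8>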
You can also call force_encoding to tell Ruby what a string's encoding is (this doesn't modify the bytes at all; it just tells Ruby how to interpret them).
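For instance, File.binread hands back bytes tagged as ASCII-8BIT, and force_encoding simply relabels them in place:
bytes = File.binread('somefile')
bytes.encoding    # => #<Encoding:ASCII-8BIT>
bytes.force_encoding('iso-8859-1')
bytes.encoding    # => #<Encoding:ISO-8859-1>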
In the second case, where you just want to dump whatever nasty stuff has got into your UTF-8, you can't just call encode! as you have, because that's a no-op. In Ruby 2.1 and higher you can use String#scrub.
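As a sketch, scrub takes the replacement text as an argument, so an empty string just drops the bad bytes:
line.scrub!("")
In previous versions you can do this: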
line.encode!('UTF-16', :undef => :replace, :invalid => :replace, :replace => "")
line.encode!('UTF-8')
We first convert to UTF-16. Since this is a different encoding, Ruby will actually replace our invalid sequences. We can then convert back to UTF-8. This won't lose any extra data, because UTF-8 and UTF-16 are just two different ways of encoding the same underlying character set.
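Putting it together (again with a made-up broken string):
line = "caf\xE2e"
line.encode!('UTF-16', :undef => :replace, :invalid => :replace, :replace => "")
line.encode!('UTF-8')
line                   # => "cafe"
line.valid_encoding?   # => true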