ruby 1.9, force_encoding, but check

I have a string I have read from some kind of input.

To the best of my knowledge, it is UTF-8. Okay:

str.force_encoding("utf-8")

But if this string has bytes in it that are not in fact legal UTF8, I want to know now and take action.

Ordinarily, will force_encoding("utf8") raise if it encounters such bytes? I believe it will not.

If I were doing an #encode, I could choose from the handy options for what to do with characters that are invalid in the source encoding (or destination encoding).

But I'm not doing an #encode, I'm doing a #force_encoding. It has no such options.
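For reference, this is what #encode's handling options look like (a sketch; the source encoding and replacement string here are illustrative, not from the question):

```ruby
s = "caf\xE9 ok"   # raw bytes that are actually ISO-8859-1

# #encode takes options for bytes that are invalid in the source
# encoding and for characters undefined in the destination encoding;
# #force_encoding just relabels the bytes and takes no options.
utf8 = s.encode("UTF-8", "ISO-8859-1",
                :invalid => :replace,  # bad bytes in the source encoding
                :undef   => :replace,  # chars undefined in the destination
                :replace => "?")       # the substitute string
# utf8 is now valid UTF-8: "café ok"
```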

Would it make sense to

str.encode("utf-8")

to get an exception right away? Normally encoding from UTF-8 to UTF-8 doesn't make any sense. But maybe this is the way to get it to raise right away if there are invalid bytes? Or use the :replace option etc. to do something different with invalid bytes?

But no, can't seem to make that work either.

Anyone know?

1.9.3-p0 :032 > a = "bad: \xc3\x28 okay".force_encoding("utf-8")
=> "bad: \xC3( okay"
1.9.3-p0 :033 > a.valid_encoding?
=> false

Okay, but how do I find and eliminate those bad bytes? Oddly, this does NOT raise:

1.9.3-p0 :035 > a.encode("utf-8")
 => "bad: \xC3( okay"

If I were converting to a different encoding, it would!

1.9.3-p0 :039 > a.encode("ISO-8859-1")
Encoding::InvalidByteSequenceError: "\xC3" followed by "(" on UTF-8

Or, if I told it to, it'd replace them with a "?":

1.9.3-p0 :040 > a.encode("ISO-8859-1", :invalid => :replace)
=> "bad: ?( okay"

So Ruby has the smarts to know which bytes are bad in UTF-8, and to replace them with something else -- when converting to a different encoding. But I don't want to convert to a different encoding; I want to stay UTF-8. I might want to raise if there's an invalid byte in there, or I might want to replace invalid bytes with replacement chars.

Isn't there some way to get ruby to do this?

Update: I believe this has finally been added to Ruby in 2.1, with String#scrub present in the 2.1 preview release to do this. So look for that!



So I coded up a solution to what I needed here:

But only much more recently did I realize this actually IS built into the stdlib, you just need to, somewhat counter-intuitively, pass 'binary' as the "source encoding":

a = "bad: \xc3\x28 okay".force_encoding("utf-8")
a.encode("utf-8", "binary", :undef => :replace)
=> "bad: �( okay"

Yep, that's exactly what I wanted. So turns out this IS built into 1.9 stdlib, it's just undocumented and few people know it (or maybe few people that speak English know it?). Although I saw these arguments used this way on a blog somewhere, so someone else knew it!

In ruby 2.1, the stdlib finally supports this with scrub.
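A quick sketch of what scrub does on the string from above (by default it substitutes the Unicode replacement char; a string argument or a block lets you choose the replacement yourself):

```ruby
a = "bad: \xc3\x28 okay".force_encoding("UTF-8")

a.scrub        # => "bad: \uFFFD( okay"  (default: unicode replacement char)
a.scrub("")    # => "bad: ( okay"        (strip the invalid bytes)
a.scrub { |bytes| "0x" + bytes.unpack("H*")[0] }  # decide per bad sequence
```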

Make sure that your script file itself is saved as UTF-8, and try the following:

# encoding: UTF-8
p [a = "bad: \xc3\x28 okay", a.valid_encoding?]
p [a.force_encoding("utf-8"), a.valid_encoding?]
p [a.encode!("ISO-8859-1", :invalid => :replace), a.valid_encoding?]

This gives the following on my Windows 7 system:

["bad: \xC3( okay", false]
["bad: \xC3( okay", false]
["bad: ?( okay", true]

So your bad char is replaced. You can do it right away as follows:

a = "bad: \xc3\x28 okay".encode!("ISO-8859-1", :invalid => :replace)
=> "bad: ?( okay"

EDIT: here is a solution that works with any arbitrary encoding; the first re-encodes only the bad chars, the second simply replaces them with a ?:

def validate_encoding(str)
  str.chars.collect do |c|
    c.valid_encoding? ? c : c.encode!(Encoding.locale_charmap, :invalid => :replace)
  end.join
end

def validate_encoding2(str)
  str.chars.collect do |c|
    c.valid_encoding? ? c : '?'
  end.join
end

a = "bad: \xc3\x28 okay"

puts validate_encoding(a)                  #=>bad: ?( okay
puts validate_encoding(a).valid_encoding?  #=>true

puts validate_encoding2(a)                  #=>bad: ?( okay
puts validate_encoding2(a).valid_encoding?  #=>true

To check that a string has no invalid sequences, try to convert it to the binary encoding:

# Returns true if the string has only valid sequences
def valid_encoding?(string)
  string.encode('binary', :undef => :replace)
  true
rescue Encoding::InvalidByteSequenceError
  false
end

p valid_encoding?("\xc0".force_encoding('iso-8859-1'))    # true
p valid_encoding?("\u1111")                               # true
p valid_encoding?("\xc0".force_encoding('utf-8'))         # false

This code replaces undefined characters, because we don't care if there are valid sequences that cannot be represented in binary. We only care if there are invalid sequences.

A slight modification to this code returns the actual error, which has valuable information about the improper encoding:

# Returns the encoding error, or nil if there isn't one.
def encoding_error(string)
  string.encode('binary', :undef => :replace)
  nil
rescue Encoding::InvalidByteSequenceError => e
  e.message
end

# Returns truthy if the string has only valid sequences
def valid_encoding?(string)
  !encoding_error(string)
end

puts encoding_error("\xc0".force_encoding('iso-8859-1'))    # nil
puts encoding_error("\u1111")                               # nil
puts encoding_error("\xc0".force_encoding('utf-8'))         # "\xC0" on UTF-8

About the only thing I can think of is to transcode to something and back that won't damage the string in the round-trip:


Seems rather wasteful, though.
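For the record, that round-trip idea looks something like this (a sketch, using UTF-16 as the intermediate since it can represent anything UTF-8 can):

```ruby
str = "bad: \xc3\x28 okay".force_encoding("UTF-8")

# Converting to a different encoding lets us use :invalid => :replace;
# converting back leaves a valid UTF-8 string.
cleaned = str.encode("UTF-16", :invalid => :replace, :replace => "?")
             .encode("UTF-8")
# => "bad: ?( okay"
```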

Okay, here's a really lame pure-Ruby way to do it that I figured out myself. It probably performs terribly. What the heck, Ruby? Not selecting my own answer for now, hoping someone else will show up and give us something better.

# Pass in a string; will raise an Encoding::InvalidByteSequenceError
# if it contains an invalid byte for its encoding; otherwise
# returns an equivalent string.
#
# OR, like String#encode, pass in option `:invalid => :replace`
# to replace invalid bytes with a replacement string in the
# returned string. Pass in the char you'd like with option
# `:replace`, or it will, like String#encode, use the unicode
# replacement char if it thinks it's a unicode encoding, else
# ascii '?'.
#
# In any case, the method will raise, or return a new string
# that is #valid_encoding?
def validate_encoding(str, options = {})
  str.chars.collect do |c|
    if c.valid_encoding?
      c
    elsif options[:invalid] != :replace
      # it ought to be filled out with all the metadata
      # this exception usually has, but what a pain!
      raise Encoding::InvalidByteSequenceError.new
    else
      options[:replace] || (
        # surely there's a better way to tell if
        # an encoding is a 'Unicode encoding form'
        # than this? What's wrong with you, ruby 1.9?
        str.encoding.name.start_with?('UTF') ?
          "\uFFFD" :
          "?" )
    end
  end.join
end


If you are doing this for a "real-life" use case - for example parsing different strings entered by users, and not just trying to "decode" a totally random file that could mix as many encodings as you wish - then I guess you could at least assume that all characters of each string have the same encoding.

Then, in this case, what would you think about this?

strings = [ "UTF-8 string with some utf8 chars \xC3\xB2 \xC3\x93",
            "ISO-8859-1 string with some iso-8859-1 chars \xE0 \xE8", "..." ]

strings.each { |s|
  s.force_encoding "utf-8"
  unless s.valid_encoding?
    while s.valid_encoding? == false
      s.force_encoding "ISO-8859-1"
      s.force_encoding "..."  # try further candidate encodings here
    end
    s.encode! "utf-8"
  end
}

I am not a Ruby "pro" in any way, so please forgive me if my solution is wrong or even a bit naive.

I just try to give back what I can, and this is what I've come to, while I was (I still am) working on this little parser for arbitrarily encoded strings, which I am doing for a study-project.

While I'm posting this, I must admit that I haven't even fully tested it; I just got a couple of "positive" results. But I felt so excited at possibly having found what I was struggling to find (after all the time I spent reading about this on SO) that I just felt the need to share it as quickly as possible, hoping it could save some time for anyone who has been looking for this as long as I have... if it works as expected :)

A simple way to provoke an exception seems to be:

untrusted_string.match /./
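Indeed, matching any regexp against a string containing a bad byte raises an ArgumentError, so this works as a cheap validity check (a sketch using the string from the question):

```ruby
s = "bad: \xc3\x28 okay".force_encoding("UTF-8")

begin
  # Any regexp match forces Ruby to walk the bytes, raising on bad ones.
  s.match(/./)
rescue ArgumentError => e
  puts e.message  # "invalid byte sequence in UTF-8"
end
```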

Here are two common situations and how to deal with them in Ruby 2.1+. I know the question refers to Ruby 1.9, but maybe this is helpful for others finding this question via Google.

Situation 1

You have a UTF-8 string with possibly a few invalid bytes. Remove the invalid bytes:

str = "Partly valid\xE4 UTF-8 encoding: äöüß"
str.scrub('')
 # => "Partly valid UTF-8 encoding: äöüß"

Situation 2

You have a string that could be in either UTF-8 or ISO-8859-1 encoding. Check which encoding it is and convert to UTF-8 (if necessary):

str = "String in ISO-8859-1 encoding: \xE4\xF6\xFC\xDF"

unless str.valid_encoding?
  str.encode!( 'UTF-8', 'ISO-8859-1', invalid: :replace, undef: :replace, replace: '?' )
end #unless
 # => "String in ISO-8859-1 encoding: äöüß"


  • The above code snippets assume that Ruby encodes all your strings in UTF-8 by default. Even though this is almost always the case, you can make sure of it by starting your scripts with # encoding: UTF-8.

  • If invalid, it is programmatically possible to detect most multi-byte encodings like UTF-8 (in Ruby, see: #valid_encoding?). However, it is NOT (easily) possible to programmatically detect invalidity of single-byte-encodings like ISO-8859-1. Thus the above code snippet does not work the other way around, i.e. detecting if a String is valid ISO-8859-1 encoding.

  • Even though UTF-8 has become increasingly popular as the default encoding on the web, ISO-8859-1 and other Latin-1 flavors are still very popular in Western countries, especially in North America. Be aware that there are several single-byte encodings out there that are very similar to, but slightly different from, ISO-8859-1. Examples: CP1252 (a.k.a. Windows-1252), ISO-8859-15.
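To illustrate that last point: a byte like 0x80 decodes differently under the two encodings, yet both interpretations count as "valid", which is why a wrong single-byte-encoding guess can't be detected with #valid_encoding? (the byte value here is chosen for illustration):

```ruby
b = "\x80"

# Any byte is a valid character in these single-byte encodings:
b.dup.force_encoding("Windows-1252").valid_encoding?  # => true
b.dup.force_encoding("ISO-8859-1").valid_encoding?    # => true

# ...but they mean different things:
b.dup.force_encoding("Windows-1252").encode("UTF-8")  # => "€"
b.dup.force_encoding("ISO-8859-1").encode("UTF-8")    # => "\u0080" (an invisible control char)
```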
