Test-Mojibake
DESCRIPTION
Many modern text editors automatically save files using the UTF-8
encoding; however, the perl interpreter does not expect it by default.
While this is not a big deal for (most) backend-oriented programs,
applications based on web frameworks (Catalyst
<http://www.catalystframework.org/>, Mojolicious <http://mojolicio.us/>)
will suffer from so-called Mojibake <http://en.wikipedia.org/wiki/Mojibake>
(lit. "unintelligible sequence of characters").
Even worse: if an editor saves a BOM (Byte Order Mark, the U+FEFF
character in Unicode) at the start of a script with the executable bit
set (on Unix systems), the script won't execute at all, due to shebang
corruption.
Avoiding encoding problems is quite simple:
* Always use utf8/use common::sense when saving source as UTF-8;
* Always specify =encoding UTF-8 when saving POD as UTF-8;
* Do neither of the above when saving as ISO-8859-1;
* Never save a BOM (not that it's wrong; just avoid it, as you'll barely
notice its presence when in trouble).
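Putting the checklist together: a minimal UTF-8-clean source file declares both the code encoding and the POD encoding. A sketch (the strings and POD text here are illustrative, not from the module):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use utf8;    # the source below contains UTF-8 string literals

=encoding UTF-8

=head1 NAME

example - POD may contain UTF-8 text such as "Olá" once declared

=cut

# With "use utf8" in effect, the literal is decoded at compile time,
# so length() counts characters rather than octets.
my $greeting = "Olá, mundo";
print length($greeting), "\n";    # prints "10" (not 11, the octet count)
```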
However, if you find yourself upgrading old code to use UTF-8, or trying
to standardize a big project with many developers, each using a
different platform/editor, reviewing all files manually can be quite
painful. Especially in cases where some files have multiple encodings
(note: it all started when I realized that Gedit & derivatives are
unable to open files with character conversion tables).
Enter Test::Mojibake ;)
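The module can then be dropped into a distribution's test suite. A minimal sketch (it checks the running script itself via $0 purely to stay self-contained; in a real distribution you would point file_encoding_ok at your own files, or use the all_files_encoding_ok convenience wrapper if your version provides it):

```perl
#!/usr/bin/perl
# t/encoding.t - sketch of a Test::Mojibake-based author test.
use strict;
use warnings;

BEGIN {
    # Skip gracefully where Test::Mojibake is not installed.
    unless (eval { require Test::Mojibake; 1 }) {
        print "1..0 # SKIP Test::Mojibake required\n";
        exit 0;
    }
    Test::Mojibake->import;
    require Test::More;
    Test::More->import(tests => 1);
}

# Check this very file for encoding sanity (plain ASCII, so it passes).
file_encoding_ok($0, 'test script itself is encoding-clean');
```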
In fact, Test::Mojibake only cares about UTF-8, as it can be detected
fairly reliably. So, when UTF-8 characters are found without a preceding
declaration, an error is reported. Conversely, non-UTF-8 characters in
UTF-8 mode are also reported as errors.
If present, the Unicode::CheckUTF8 module (an XS wrapper) will be used
to validate UTF-8 strings; note that it is 30 times faster, and far more
compliant with the Unicode Consortium specification, than the built-in
pure-Perl implementation!
UTF-8 BOM (Byte Order Mark) is also detected as an error. While Perl is
OK handling BOM, your OS probably isn't. Check out:
./bom.pl: line 1: $'\357\273\277#!/usr/bin/perl': command not found
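That corrupted shebang is simply the three BOM octets (EF BB BF) glued onto #!. Detecting and stripping the mark is straightforward; a sketch (the file handling and subroutine names are mine, not the module's):

```perl
use strict;
use warnings;

# Read the first three octets of a file and test for the UTF-8 BOM.
sub has_utf8_bom {
    my ($path) = @_;
    open my $fh, '<:raw', $path or die "open $path: $!";
    read $fh, my $lead, 3;
    close $fh;
    return defined $lead && $lead eq "\xEF\xBB\xBF";
}

# Strip it: slurp raw octets, drop a leading BOM, write the rest back.
sub strip_utf8_bom {
    my ($path) = @_;
    open my $fh, '<:raw', $path or die "open $path: $!";
    my $content = do { local $/; <$fh> };
    close $fh;
    return 0 unless $content =~ s/\A\xEF\xBB\xBF//;
    open my $out, '>:raw', $path or die "write $path: $!";
    print {$out} $content;
    close $out;
    return 1;
}
```

From the shell, the classic in-place fix is the one-liner `perl -i -pe 's/^\xEF\xBB\xBF// if $. == 1' script.pl`.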
Caveats
Whole-line source comments, like:
# this is a whole-line comment...
print "### hello world ###\n"; # ...and this is not
    }

    my $use_utf8 = 0;
    my $pod      = 0;
    my $pod_utf8 = 0;
    my $n        = 1;
    my %pod      = ();

    while (my $line = <$fh>) {
        if (($n == 1) && $line =~ /^\x{EF}\x{BB}\x{BF}/x) {
            $Test->ok(0, $name);
            $Test->diag("UTF-8 BOM (Byte Order Mark) found in $file");
            return;
        } elsif ($line =~ /^=+cut\s*$/x) {
            $pod = 0;
        } elsif ($line =~ /^=+encoding\s+([\w\-]+)/x) {
            my $pod_encoding = lc $1;
            $pod_encoding =~ y/-//d;

            # perlpod states:
            # =encoding affects the whole document, and must occur only once.
            ++$pod{$pod_encoding};
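The lc plus y/-//d normalization above folds the common spellings of an encoding name into a single hash key, so "UTF-8", "utf-8" and "utf8" all count as the same declaration. A standalone illustration (the subroutine name is mine):

```perl
use strict;
use warnings;

# Normalize an encoding name the same way the loop above does:
# lowercase it, then delete hyphens.
sub normalize_encoding {
    my ($name) = @_;
    my $norm = lc $name;
    $norm =~ y/-//d;
    return $norm;
}

print normalize_encoding($_), "\n" for qw(UTF-8 utf-8 Utf8 ISO-8859-1);
# prints: utf8, utf8, utf8, iso88591
```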
no strict 'vars';
use Test::Mojibake;
file_encoding_ok($file, 'Valid encoding');
done_testing($num_tests);
=head1 DESCRIPTION
Many modern text editors automatically save files using the UTF-8 encoding; however, the L<perl> interpreter does not expect it I<by default>. While this is not a big deal for (most) backend-oriented programs, applications based on web frameworks (L<Catalyst|http://www.catalystframework.org/>, L<Mojolicious|http://mojolicio.us/>) will suffer from so-called L<Mojibake|http://en.wikipedia.org/wiki/Mojibake> (lit. "unintelligible sequence of characters").
Even worse: if an editor saves a BOM (Byte Order Mark, the C<U+FEFF> character in Unicode) at the start of a script with the executable bit set (on Unix systems), the script won't execute at all, due to shebang corruption.
Avoiding encoding problems is quite simple:
=over 4
=item *
Always C<use utf8>/C<use common::sense> when saving source as UTF-8;
=item *
Always specify C<=encoding UTF-8> when saving POD as UTF-8;
=item *
Do neither of the above when saving as ISO-8859-1;
=item *
B<Never> save a BOM (not that it's wrong; just avoid it, as you'll barely notice its presence when in trouble).
=back
However, if you find yourself upgrading old code to use UTF-8, or trying to standardize a big project with many developers, each using a different platform/editor, reviewing all files manually can be quite painful. Especially in cases where some files have multiple encodings (note: it all started when I realized that Gedit & derivatives are unable to open files with character conversion tables).
Enter L<Test::Mojibake> C<;)>
=head1 FUNCTIONS
=head2 file_encoding_ok( FILENAME[, TESTNAME ] )
Similarly, POD encoding can be changed via:
=encoding UTF-8
Correspondingly, C<no utf8>/C<=encoding latin1> put Perl back into ISO-8859-1 mode.
In fact, L<Test::Mojibake> only cares about UTF-8, as it can be detected fairly reliably. So, when UTF-8 characters are found without a preceding declaration, an error is reported. Conversely, non-UTF-8 characters in UTF-8 mode are also reported as errors.
If present, the L<Unicode::CheckUTF8> module (an XS wrapper) will be used to validate UTF-8 strings; note that it is B<30 times faster>, and far more compliant with the Unicode Consortium specification, than the built-in pure-Perl implementation!
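What the pure-Perl path has to do amounts to a structural check of the octet sequences. The sketch below is not the module's code; it follows the well-known W3C validation pattern, which rejects overlong forms and surrogate halves. In practice you would prefer the XS module when available, e.g. guarded by eval { require Unicode::CheckUTF8 }:

```perl
use strict;
use warnings;

# Structural check: every character must be one of the valid UTF-8
# octet sequences. Returns 1 for valid input, 0 otherwise.
sub looks_like_utf8 {
    my ($octets) = @_;
    return $octets =~ /\A(?:
        [\x00-\x7F]                         # ASCII
      | [\xC2-\xDF][\x80-\xBF]              # 2-byte, no overlongs
      | \xE0[\xA0-\xBF][\x80-\xBF]          # 3-byte, no overlongs
      | [\xE1-\xEC\xEE\xEF][\x80-\xBF]{2}   # 3-byte
      | \xED[\x80-\x9F][\x80-\xBF]          # 3-byte, no surrogates
      | \xF0[\x90-\xBF][\x80-\xBF]{2}       # 4-byte, no overlongs
      | [\xF1-\xF3][\x80-\xBF]{3}           # 4-byte
      | \xF4[\x80-\x8F][\x80-\xBF]{2}       # 4-byte, up to U+10FFFF
    )*\z/x ? 1 : 0;
}
```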
UTF-8 BOM (Byte Order Mark) is also detected as an error. While Perl is OK handling BOM, your OS probably isn't. Check out:
./bom.pl: line 1: $'\357\273\277#!/usr/bin/perl': command not found
=head2 Caveats
Whole-line source comments, like:
# this is a whole-line comment...
print "### hello world ###\n"; # ...and this is not
BEGIN {
    use_ok('Test::Mojibake');
}

BAD: {
    my $name = 'Byte Order Mark is unnecessary!';
    my $file = 't/bad/bom.pl_';

    test_out("not ok 1 - $name");
    file_encoding_ok($file, $name);
    test_fail(-1);
    test_diag("UTF-8 BOM (Byte Order Mark) found in $file");
    test_test("$name is bad");
}