Data-Localize
# RATIONALE
Functionality-wise, Locale::Maketext does what it advertises to do.
Here are a few reasons why you might (or might not) choose Data::Localize
over Locale::Maketext-based localizers:
## Object-Oriented
Data::Localize is completely object-oriented. YMMV.
## Faster
In some of my benchmarks, Data::Localize is faster than Locale::Maketext
by 50~80%. (But see PERFORMANCE)
## Scalable For Large Numbers Of Lexicons
Whereas Locale::Maketext generally stores the lexicons in memory,
Data::Localize allows you to store this data in alternate storage.
By default Data::Localize comes with a BerkeleyDB backend.
# BASIC WORKING
## STRUCTURE
Data::Localize is a wrapper around various Data::Localize::Localizer
implementers (localizers). So if you don't specify any localizers,
Data::Localize will do... nothing (unless you specify `auto`).
Localizers are the objects that do the actual localization. Localizers must
register themselves with the Data::Localize parent, noting which languages they
can handle (usually determined by the presence of data files such as
en.po, ja.po, etc.). A special language ID of '\*' is used for fallback
cases: localizers registered to handle '\*' are tried _after_ all other
language possibilities have been exhausted.
If a particular localizer cannot deal with the requested string, it
simply returns nothing.
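The dispatch described above can be modeled in a few lines of plain Perl. This is a hypothetical toy sketch of the behavior, not the real Data::Localize internals: handlers are registered per language, each requested language is tried in order, a handler that can't localize a key returns nothing, and '\*' is consulted last as the fallback.

```perl
use strict;
use warnings;

# Toy registry: language ID => localizer callback (NOT the real API).
my %localizers = (
    'en' => sub { my ($key) = @_; return +{ hello => 'Hello' }->{$key} },
    'ja' => sub { my ($key) = @_; return +{ hello => 'Konnichiwa' }->{$key} },
    '*'  => sub { my ($key) = @_; return "[$key]" },  # fallback handler
);

sub localize {
    my ($key, @languages) = @_;
    # '*' is appended last: fallback runs only after every requested
    # language has been exhausted.
    for my $lang (@languages, '*') {
        my $handler = $localizers{$lang} or next;
        my $result  = $handler->($key);
        # a localizer that can't handle the string "returns nothing",
        # so we move on to the next candidate
        return $result if defined $result;
    }
    return;
}

print localize('hello',   'ja'), "\n";  # Konnichiwa
print localize('goodbye', 'ja'), "\n";  # [goodbye], via the '*' fallback
```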
## AUTO-GENERATING LEXICONS
Locale::Maketext allows you to supply an "\_AUTO" key in the lexicon hash,
which lets you pass a non-existent key to the localize() method and use
the key itself as the lexicon if no other applicable lexicon exists.
Locale::Maketext attaches this to the lexicon hash itself, but Data::Localize
differs in that the flag is attached to the Data::Localize object itself, so you
don't have to place \_AUTO everywhere.
    # here, we're deliberately not setting any localizers
    my $loc = Data::Localize->new(auto => 1);

    # the auto => 1 above forces Data::Localize to fall back to
    # using the key ('Hello, [_1]') as the localization token
    print $loc->localize('Hello, [_1]', 'John Doe'), "\n";
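For illustration, the Maketext-style placeholder expansion that the auto fallback applies to the key can be sketched in plain Perl. The `auto_format` function below is a hypothetical stand-in, not the real implementation: it only handles positional `[_1]`, `[_2]`, ... placeholders.

```perl
use strict;
use warnings;

# Hypothetical sketch of Maketext-style bracket expansion: replace each
# [_N] placeholder with the N-th argument (1-based).
sub auto_format {
    my ($key, @args) = @_;
    $key =~ s/\[_(\d+)\]/$args[$1 - 1]/ge;
    return $key;
}

print auto_format('Hello, [_1]', 'John Doe'), "\n";  # Hello, John Doe
```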
# UTF8
All data is expected to be decoded utf8 (Perl's internal string
representation). You must "use utf8" or decode all values yourself before
passing them to Data::Localize. We won't try to be smart for you. USE UTF8!
- Using Explicit decode()
    use Encode qw(decode decode_utf8);
    use Data::Localize;

    my $loc = Data::Localize->new(...);
    $loc->localize( $key, decode( 'iso-2022-jp', $value ) );

    # if $value is encoded utf8...
    # $loc->localize( $key, decode_utf8( $value ) );
- Using utf8
"use utf8" is simpler, but do note that it affects ALL literal strings
in the current scope
    use utf8;

    $loc->localize( $key, "some-utf8-key-here" );
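The difference between decoded strings and raw utf8 bytes is easy to see with only the core Encode module. This self-contained example (no Data::Localize needed) shows why decoding matters: the same text is 5 characters when decoded, but 6 bytes when encoded, because "é" occupies two bytes in utf8.

```perl
use strict;
use warnings;
use utf8;                               # literals below are decoded
use Encode qw(encode_utf8 decode_utf8);

my $greeting = "héllo";                 # decoded: length() counts characters
my $bytes    = encode_utf8($greeting);  # raw utf8: length() counts bytes
my $again    = decode_utf8($bytes);     # decoding restores character semantics

print length($greeting), " ", length($bytes), " ", length($again), "\n";  # 5 6 5
```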
# USING ALTERNATE STORAGE
By default all lexicons are stored in memory, but if you're building an app
with thousands and thousands of long messages, this might not be the ideal
solution. In such cases, you can change where the lexicons get stored:
    my $loc = Data::Localize->new();
    $loc->add_localizer(
        class         => 'Gettext',
        path          => '/path/to/data/*.po',
        storage_class => 'BerkeleyDB',
        storage_args  => {
            dir => '/path/to/really/fast/device',
        },
    );
This causes Data::Localize to put all the lexicon data in several BerkeleyDB
files under /path/to/really/fast/device.

Note that this approach buys you no gain if you use Data::Localize::Namespace,
as that localizer by default expects everything to be in memory.
# DEBUGGING
## DEBUG
To enable debug tracing, either set the DATA\_LOCALIZE\_DEBUG environment
variable,

    DATA_LOCALIZE_DEBUG=1 ./yourscript.pl
or explicitly define a function before loading Data::Localize:
    BEGIN {
        *Data::Localize::DEBUG = sub () { 1 };
    }
    use Data::Localize;
# METHODS