NAME

   Text::Util::Chinese - A collection of subroutines for processing
   Chinese Text

DESCRIPTION

   The subroutines provided by this module are for processing Chinese
   text. Conventionally, all input strings are assumed to be decoded
   wide-character strings. No `decode_utf8` or `utf8::decode` is done in
   this module. Users of this module should decode their input before
   passing values to these subroutines.

   Given the fact that corpus files are usually large, it may be a good
   idea to avoid slurping the entire input stream. Conventionally,
   subroutines in this module accept an "input iterator" as their way to
   receive a small piece of the corpus at a time. The "input iterator" is
   a CodeRef that returns a string each time it is called, or undef when
   there is nothing more to be processed. Here's a trivial example that
   opens a file as an input iterator:

       use Encode qw(decode_utf8);

       sub open_as_iterator {
           my ($path) = @_;
           open my $fh, '<', $path or die "Cannot open $path: $!";
           return sub {
               my $line = <$fh>;
               return undef unless defined($line);
               return decode_utf8($line);
           };
       }

       my $input_iter = open_as_iterator("/data/corpus.txt");

   This $input_iter can then be passed as an argument to different
   subroutines.

EXPORTED SUBROUTINES

   extract_words( $input_iter ) #=> ArrayRef[Str]

     This extracts words from Chinese text. A word in Chinese text is a
     token of N characters. These N characters are often used together in
     the input and therefore should form a meaningful unit.

     The input parameter is an iterator -- a subroutine that must return
     a string of Chinese text each time it is invoked, or undef when the
     input is exhausted. For example:

         use Encode qw(decode_utf8);

         open my $fh, '<', 'book.txt' or die "Cannot open book.txt: $!";
         my $words = extract_words(
             sub {
                 my $x = <$fh>;
                 return defined($x) ? decode_utf8($x) : undef;
             });

     The return value is an ArrayRef[Str].

     It is likely that this subroutine returns an empty ArrayRef. It is
     only useful when the volume of input is at least a few thousand
     characters. The more, the better.
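
     For instance, the returned ArrayRef can be consumed like any other
     (a minimal sketch, reusing the $input_iter from DESCRIPTION;
     encode_utf8 from the Encode module is used only to print the words
     back out as UTF-8):

         use Encode qw(encode_utf8);

         my $words = extract_words($input_iter);
         for my $word (@$words) {
             print encode_utf8($word), "\n";
         }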

   extract_presuf( $input_iter, $output_cb, $opts ) #=> HashRef

     This subroutine extracts meaningful tokens that are prefixes or
     suffixes of the input. Compared to extract_words, it yields
     extracted tokens incrementally by calling $output_cb.

     It is used like this:

         my $extracted = extract_presuf(
             \&next_input,
             sub {
                 my ($token, $extracted) = @_;

                 ...
             },
             +{
                 threshold => 9,
                 lengths => [ 2,3 ],
             }
         );

     The $output_cb callback is passed two arguments. The first one is
     the new $token that appears more than $threshold times as a prefix
     and as a suffix. The second argument is a HashRef whose keys are the
     set of all extracted tokens so far. The very same HashRef is also
     going to be the return value of this subroutine.

     The 3rd argument is a HashRef with parameters for the internal
     algorithm: threshold should be an Int, and lengths should be an
     ArrayRef[Int] that constrains the lengths of prefixes and suffixes
     to be extracted.

     The default value for threshold is 9, while the default value for
     lengths is [2,3].
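
     For instance, a minimal callback could simply print each token as it
     is discovered (a sketch, reusing the $input_iter from DESCRIPTION;
     encode_utf8 is from the Encode module):

         use Encode qw(encode_utf8);

         my $extracted = extract_presuf(
             $input_iter,
             sub {
                 my ($token, $extracted) = @_;
                 # Print every newly extracted token as UTF-8.
                 print encode_utf8($token), "\n";
             },
             +{ threshold => 9, lengths => [ 2, 3 ] },
         );

         # The keys of the returned HashRef are the extracted tokens.
         my @tokens = keys %$extracted;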

   sentences_iterator( $input_iter ) #=> CodeRef

     This subroutine splits input into sentences. It takes a text
     iterator, and returns another one.
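
     For example, the returned iterator can be drained like any other (a
     sketch, reusing the $input_iter from DESCRIPTION):

         my $sentence_iter = sentences_iterator($input_iter);
         while ( defined( my $sentence = $sentence_iter->() ) ) {
             # One sentence per call; undef once input is exhausted.
             ...
         }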

   phrase_iterator( $input_iter ) #=> CodeRef

     This subroutine splits input into smaller phrases. It takes a text
     iterator, and returns another one.
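
     Since both subroutines take a text iterator and return one, chaining
     them should be possible; treat this composition as an assumption
     rather than documented behaviour:

         # Assumption: iterate phrases within each sentence.
         my $phrase_iter = phrase_iterator( sentences_iterator($input_iter) );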

   tokenize_by_script( $text ) #=> Array[Str]

     This subroutine splits text into tokens, where each token consists
     of characters from the same writing script.
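
     For example, a string mixing Han characters and Latin text should
     come back as script-homogeneous tokens (a sketch; the exact
     boundaries, e.g. how whitespace is attached, are an assumption):

         my @tokens = tokenize_by_script("我家門前有 a river");
         # Presumably something like: ("我家門前有", " a river")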

AUTHORS

   Kang-min Liu <[email protected]>

LICENCE

   Unlicense https://unlicense.org/