QUICK DESCRIPTION
 Sybase::Xfer transfers data between two Sybase servers. It offers
 multiple options, like specifying a where_clause and a smart
 auto_delete option, and it can pump data from a perl subroutine or
 take a plain flat file as the source. It also has an option, similar
 to the default behaviour in Sybase::BCP, to capture failed rows in a
 batch.

 Also comes with a command line wrapper, sybxfer.


INSTALLATION (the usual)
  perl Makefile.PL [ LIB=<alternate installation> ]
  make
  make test
  make install


DEPENDENCIES
  Requires Perl version 5.005 or later

  Requires packages:
     Sybase::DBlib
     Getopt::Long
     Tie::IxHash


SYNOPSIS
  #from perl
     #!/usr/bin/perl5.005
     use Sybase::Xfer;
     $h = new Sybase::Xfer( %options );
     $h->xfer();
     $h->done();

  #from shell
     #!/usr/ksh
     sybxfer <options>
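
  For illustration, a filled-in %options hash might look like the
  sketch below. The server and login switch names here are assumptions
  for the example; see perldoc Sybase::Xfer for the authoritative list.

     #!/usr/bin/perl5.005
     use Sybase::Xfer;

     my %options = (
        -from_server   => 'SRC_SRV',         #placeholder server names
        -from_user     => 'me',
        -from_password => 'secret',
        -from_table    => 'db1..the_table',
        -to_server     => 'TGT_SRV',
        -to_user       => 'me',
        -to_password   => 'secret',
        -to_table      => 'db2..the_table',
        -batchsize     => 500,               #rows per bcp_batch
     );

     my $h = new Sybase::Xfer( %options );
     $h->xfer();
     $h->done();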


DESCRIPTION (a little bit from the pod)

 If you're in an environment with multiple servers and you don't want
 to use cross-server joins then this module may be worth a gander. It
 transfers data from one server to another row-by-row, in memory,
 without using an intermediate file.

 To juice things up it can take data from any set of  sql  commands  as
 long as the output of the sql matches the  definition  of  the  target
 table. And it can take data from a  perl  subroutine  if  you're  into
 that.

 It also has some smarts to delete rows in the target table, by
 several methods, before the data is transferred. See the
 -truncate_flag, -delete_flag and -auto_delete switches.

 Everything is controlled by switch settings sent as a hash to the
 module. In essence, one describes the from source and the to target
 and the module takes it from there.
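
 As a sketch of the sql-command source (the -from_sql switch name and
 the login switches are assumptions for this example; the query and
 table names are placeholders):

    my $h = new Sybase::Xfer(
       -from_server => 'SRC_SRV',      #login switches omitted for brevity
       -from_sql    => 'select acct, qty from db1..pos where qty > 0',
       -to_server   => 'TGT_SRV',
       -to_table    => 'db2..pos_copy', #columns must match the sql output
    );
    $h->xfer();
    $h->done();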

 Error handling:

   An attempt was made to build in hooks for robust error reporting via
   perl callbacks. By default, it will print to stderr the data, the
   column names, and their datatypes upon error. This is especially
   useful when Sybase reports an 'attempt to load an oversized row'
   warning message.
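
   A minimal sketch of hooking in your own handler (the callback
   arguments and return value shown are assumptions; perldoc
   Sybase::Xfer documents the exact interface):

      my $h = new Sybase::Xfer(
         -from_server => 'SRC_SRV',       #placeholders, as above
         -from_table  => 'db1..t1',
         -to_server   => 'TGT_SRV',
         -to_table    => 'db2..t1',
         -callback_err_batch => sub {
            my %args = @_;                #assumed: error text plus batch rows
            warn "bcp_batch error: $args{DB_ERROR}\n";
            return 1;                     #assumed: true asks xfer to resend
         },
      );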


 Auto delete:

   More recently the code has been tweaked to handle the condition
   where data is bcp'ed into a table but the row already exists and the
   desired result is to replace the row. Originally, the -delete_flag
   option was meant for this condition, i.e. clean out the table via
   the -where_clause before the bcp in occurs. If this action is too
   drastic, however, by using the -auto_delete option one can be more
   precise and force only those rows about to be inserted to be
   deleted before the bcp in begins. It will bcp the 'key' information
   to a temp table, run a delete (in a loop so as not to blow any log
   space) via a join between the temp table and the target table, and
   then begin the bcp in. It's weird but in the right situation it may
   be exactly what you want. Typically used to manually replicate a
   table.
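
   For instance, to replace rows keyed on (acct_id, trade_date) - the
   column names are placeholders, and the exact -auto_delete argument
   format is described in the pod:

      my $h = new Sybase::Xfer(
         -from_server => 'SRC_SRV',
         -from_table  => 'db1..positions',
         -to_server   => 'TGT_SRV',
         -to_table    => 'db2..positions',
         -auto_delete => 'acct_id, trade_date',  #key columns to match on
      );
      $h->xfer();  #deletes matching keys in chunks, then bcp's the data in
      $h->done();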


CONTACTS
  my e-mail: [email protected]



CHANGE LOG:
------------------------
Version .30  11-feb-2001
------------------------

Some promised mods.

* New/modified switches as follows:

       -error_handling 'continue|abort|retry'
          added value 'retry'. Will behave like Sybase::BCP upon  error
          on bcp_batch. In other words, set  the  batchsize  to  1  and
          resend the data. Prints failed  records  to -error_data_file.
          (thanks to Michael Peppler for allowing me to raid his code.)

       -retry_max 'n'
          number of times to  retry  a  failed  bcp_batch.  Default  is
          three.

       -retry_deadlock_sleep 'sec'
          number of seconds to sleep before resending the bcp_batch
          if a deadlock error is detected. Default is 120.

       -callback_err_batch 'coderef'
          implemented it. Now users can modify the rows  in  the  batch
          and request that xfer resend it or not. Default is Xfer's own
          routine   that   implements   the   most   useful   behaviour
          (hopefully.)

       -error_data_file | -edf <filename>
          contains the records that failed to transfer, along with the
          source record number and the Sybase error message for each
          failed record.

       -from_file | -ff <filename>
          new source. Takes the data from a flat file. Must be a
          delimited file. Steals a chapter from Sybase's bcp.

       -from_file_delimiter | -ffd  <regex for delimiter>
          used in conjunction with -from_file. The delimiter can be a
          regular expression (similar to the -t switch in Sybase's
          bcp). See the example after this list.

       -holdlock
          if switch -from_table is specified then this switch appends
          a 'holdlock' to the internally generated select statement.

       -app_name <name>
          sets the program_name in the master..sysprocesses table.
          Defaults to the script name.

       -delete_flag
          modified such that rows are deleted in -batchsize increments
          so the log doesn't get blown.
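
To tie a few of the new switches together, a sybxfer invocation along
these lines would load a pipe-delimited flat file, retry failed
batches, and capture rejects (the file, server and table names are
placeholders; the target-side switch names are assumed and the rest
follow the list above):

   sybxfer -from_file /tmp/trades.dat -from_file_delimiter '\|' \
           -to_server TGT_SRV -to_table db2..trades \
           -error_handling retry -retry_max 3 \
           -error_data_file /tmp/trades.rej -app_name trade_load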

* changed the final summary message slightly to report:
   1) num rows read from source
   2) num rows resent to target table (only if errors)
   3) num rows failed to be transferred


* if no args are given it'll exit gracefully with a message to run perldoc