AI-TensorFlow-Libtensorflow


Changes  view on Meta::CPAN


0.0.5 2023-01-30 11:46:31-0500

  - Features

      - Docker images with dependencies for notebooks.
      - Support for running notebooks in Binder.

  - Documentation

      - Add manual index and quickstart guide.
      - Add InferenceUsingTFHubEnformerGeneExprPredModel tutorial.

0.0.4 2022-12-21 15:57:53-0500

  - Features

      - Add Data::Printer and stringification support for several classes.
      - Add `::TFLibrary` class. Move `GetAllOpList()` method there.

  - Documentation

MANIFEST

lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/Variant/PackableMaybeArrayRef.pm
lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/Variant/RecordArrayRef.pm
lib/AI/TensorFlow/Libtensorflow/Lib/Types.pm
lib/AI/TensorFlow/Libtensorflow/Lib/_Alloc.pm
lib/AI/TensorFlow/Libtensorflow/Manual.pod
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
lib/AI/TensorFlow/Libtensorflow/Manual/GPU.pod
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod
lib/AI/TensorFlow/Libtensorflow/Manual/Quickstart.pod
lib/AI/TensorFlow/Libtensorflow/Operation.pm
lib/AI/TensorFlow/Libtensorflow/OperationDescription.pm
lib/AI/TensorFlow/Libtensorflow/Output.pm
lib/AI/TensorFlow/Libtensorflow/Session.pm
lib/AI/TensorFlow/Libtensorflow/SessionOptions.pm
lib/AI/TensorFlow/Libtensorflow/Status.pm
lib/AI/TensorFlow/Libtensorflow/TFLibrary.pm
lib/AI/TensorFlow/Libtensorflow/TString.pm
lib/AI/TensorFlow/Libtensorflow/Tensor.pm
lib/AI/TensorFlow/Libtensorflow/_Misc.pm

lib/AI/TensorFlow/Libtensorflow.pm


=head1 SYNOPSIS

  use aliased 'AI::TensorFlow::Libtensorflow' => 'Libtensorflow';

=head1 DESCRIPTION

The C<libtensorflow> library provides low-level C bindings
for TensorFlow with a stable ABI.

For more detailed information about this library including how to get started,
see L<AI::TensorFlow::Libtensorflow::Manual>.

=head1 CLASS METHODS

=head2 Version

  my $version = Libtensorflow->Version();
  like $version, qr/(\d|\.)+/, 'Got version';

B<Returns>

lib/AI/TensorFlow/Libtensorflow/Lib/_Alloc.pm

	my $el = INT8;
	my $el_size = $el->Size;

	my $max_alignment = $alignments[0];
	my $req_size = 2 * $max_alignment + $el_size;
	# All data that is sent to TF_NewTensor here is within the block of
	# memory allocated at $ptr_base.
	my $ptr_base = malloc($req_size);
	defer { free($ptr_base); }

	# start at offset that is aligned with $max_alignment
	my $ptr = $ptr_base + ( $max_alignment - $ptr_base % $max_alignment );

	my $create_tensor_at_alignment = sub {
		my ($n, $dealloc_called) = @_;
		my $offset = $n - $ptr % $n;
		my $ptr_offset = $ptr + $offset;
		my $space_for_data = $req_size - $offset;

		window(my $data, $ptr_offset, $space_for_data);
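The alignment arithmetic used above can be illustrated with plain integers (a standalone sketch; C<$base> here is just a number standing in for a C<malloc()> result, not a real allocation):

```perl
use strict;
use warnings;

# Same formula as the $ptr computation above: advance $base to the
# next multiple of $align. When $base is already aligned this moves
# a full $align forward, which the 2 * $align slack in $req_size
# accommodates.
my $align = 64;
my $base  = 1000;                                 # pretend malloc() result
my $ptr   = $base + ( $align - $base % $align );  # next 64-byte boundary

printf "base = %d, aligned ptr = %d\n", $base, $ptr;  # base = 1000, aligned ptr = 1024
die "not aligned"    if $ptr % $align;
die "moved too far"  if $ptr - $base > $align;
```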

lib/AI/TensorFlow/Libtensorflow/Manual.pod

=encoding UTF-8

=head1 NAME

AI::TensorFlow::Libtensorflow::Manual - Index of manual

=head1 TABLE OF CONTENTS

=over 4

=item L<AI::TensorFlow::Libtensorflow::Manual::Quickstart>

Start here to get an overview of the library.

=item L<AI::TensorFlow::Libtensorflow::Manual::GPU>

GPU-specific installation and usage information.

=item L<AI::TensorFlow::Libtensorflow::Manual::CAPI>

Appendix of all C API functions with their signatures. These are linked from

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod


=over 2

  Creates a TF_WhileParams for creating a while loop in `g`. `inputs` are
  outputs that already exist in `g` used as initial values for the loop
  variables.
  
  The returned TF_WhileParams will have all fields initialized except
  `cond_output`, `body_outputs`, and `name`. The `body_outputs` buffer will be
  allocated to size `ninputs`. The caller should build `cond_graph` and
  `body_graph` starting from the inputs, and store the final outputs in
  `cond_output` and `body_outputs`.
  
  If `status` is OK, the caller must call either TF_FinishWhile or
  TF_AbortWhile on the returned TF_WhileParams. If `status` isn't OK, the
  returned TF_WhileParams is not valid, and the caller should not call
  TF_FinishWhile() or TF_AbortWhile().
  
  Missing functionality (TODO):
  - Gradients
  - Reference-type inputs

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

           an input. We can relax this constraint if there are
           compelling use cases).
           - If an array is given (`num_opers` >= 0), all operations
           in it will become part of the function. In particular, no
           automatic skipping of dummy input operations is performed.
   ninputs - number of elements in `inputs` array
   inputs - array of TF_Outputs that specify the inputs to the function.
            If `ninputs` is zero (the function takes no inputs), `inputs`
            can be null. The names used for function inputs are normalized
            names of the operations (usually placeholders) pointed to by
            `inputs`. These operation names should start with a letter.
            Normalization will convert all letters to lowercase and
            non-alphanumeric characters to '_' to make resulting names match
            the "[a-z][a-z0-9_]*" pattern for operation argument names.
            `inputs` cannot contain the same tensor twice.
   noutputs - number of elements in `outputs` array
   outputs - array of TF_Outputs that specify the outputs of the function.
             If `noutputs` is zero (the function returns no outputs), `outputs`
             can be null. `outputs` can contain the same tensor more than once.
   output_names - The names of the function's outputs. `output_names` array
                  must either have the same length as `outputs`
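The input-name normalization rule described above (lowercase letters, non-alphanumerics mapped to C<'_'>) can be sketched in Perl. This is a hypothetical helper illustrating the documented rule, not TensorFlow's actual implementation:

```perl
use strict;
use warnings;

# Normalize an operation name so it matches the documented
# "[a-z][a-z0-9_]*" pattern for function argument names:
# lowercase all letters, replace non-alphanumerics with '_'.
sub normalize_op_name {
    my ($name) = @_;
    my $n = lc $name;
    $n =~ s/[^a-z0-9]/_/g;
    return $n;
}

print normalize_op_name('MyPlaceholder:0'), "\n";  # myplaceholder_0
```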

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod


=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteSession(TF_Session*, TF_Status* status);

=head2 TF_SessionRun

=over 2

  Run the graph associated with the session starting with the supplied inputs
  (inputs[0,ninputs-1] with corresponding values in input_values[0,ninputs-1]).
  
  Any NULL and non-NULL value combinations for (`run_options`,
  `run_metadata`) are valid.
  
     - `run_options` may be NULL, in which case it will be ignored; or
       non-NULL, in which case it must point to a `TF_Buffer` containing the
       serialized representation of a `RunOptions` protocol buffer.
     - `run_metadata` may be NULL, in which case it will be ignored; or
       non-NULL, in which case it must point to an empty, freshly allocated

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextDim(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* shape_handle, int64_t i,
      TF_DimensionHandle* result);

=head2 TF_ShapeInferenceContextSubshape

=over 2

  Returns in <*result> a sub-shape of <shape_handle>, with dimensions
  [start:end]. <start> and <end> can be negative, to index from the end of the
  shape. <start> and <end> are set to the rank of <shape_handle> if > rank of
  <shape_handle>.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextSubshape(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* shape_handle, int64_t start,
      int64_t end, TF_ShapeHandle* result, TF_Status* status);
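The slicing semantics described above (negative indices counting from the end, values clamped to the rank) can be sketched on a plain Perl array of dimension sizes. This is an illustration of the documented behavior, not a binding to the C function:

```perl
use strict;
use warnings;

# Sketch of the [start:end] sub-shape rule: negative indices count
# from the end of the shape, and out-of-range indices are clamped
# to the rank.
sub subshape {
    my ($dims, $start, $end) = @_;
    my $rank = @$dims;
    for ($start, $end) {
        $_ += $rank if $_ < 0;      # index from the end of the shape
        $_  = $rank if $_ > $rank;  # clamp to the rank
    }
    return [ @{$dims}[ $start .. $end - 1 ] ];
}

my $sub = subshape([2, 3, 4, 5], 1, -1);
print "@$sub\n";  # 3 4
```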

=head2 TF_ShapeInferenceContextSetUnknownShape

=over 2

  Places an unknown shape in all outputs for the given inference context. Used
  for shape inference functions with ops whose output shapes are unknown.

=back

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

  TF_CAPI_EXPORT extern void TF_DefaultThreadOptions(TF_ThreadOptions* options);

=head2 TF_StartThread

=over 2

  Returns a new thread that is running work_func and is identified
  (for debugging/performance-analysis) by thread_name.
  
  The given param (which may be null) is passed to work_func when the thread
  starts. In this way, data may be passed from the thread back to the caller.
  
  Caller takes ownership of the result and must call TF_JoinThread on it
  eventually.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern TF_Thread* TF_StartThread(const TF_ThreadOptions* options,
                                                  const char* thread_name,
                                                  void (*work_func)(void*),

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_GetInput(TF_OpKernelContext* ctx, int i,
                                         TF_Tensor** tensor, TF_Status* status);

=head2 TF_InputRange

=over 2

  Retrieves the start and stop indices, given the input name. Equivalent to
  OpKernel::InputRange(). `args` will contain the result indices and status.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_InputRange(TF_OpKernelContext* ctx,
                                           const char* name,
                                           TF_InputRange_Args* args);

=head2 TF_SetOutput

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

  TF_CAPI_EXPORT extern void TFE_ContextExportRunMetadata(TFE_Context* ctx,
                                                          TF_Buffer* buf,
                                                          TF_Status* status);

=head2 TFE_ContextStartStep

=over 2

  Some TF ops need a step container to be set to limit the lifetime of some
  resources (mostly TensorArray and Stack, used in while loop gradients in
  graph mode). Calling this on a context tells it to start a step.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextStartStep(TFE_Context* ctx);

=head2 TFE_ContextEndStep

=over 2

  Ends a step. When there is no active step (that is, every started step has
  been ended) step containers will be cleared. Note: it is not safe to call
  TFE_ContextEndStep while ops that rely on the step container may be running.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextEndStep(TFE_Context* ctx);

=head2 TFE_HandleToDLPack

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

  `device_name` must not name an existing physical or custom device. It must
  follow the format:
  
     /job:<name>/replica:<replica>/task:<task>/device:<type>:<device_num>
  
  If the device is successfully registered, `status` is set to TF_OK. Otherwise
  the device is not usable. In case of a bad status, `device.delete_device` is
  still called on `device_info` (i.e. the caller does not retain ownership).
  
  This API is highly experimental, and in particular is expected to change when
  it starts supporting operations with attributes and when tf.function support
  is added.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_RegisterCustomDevice(TFE_Context* ctx,
                                                      TFE_CustomDevice device,
                                                      const char* device_name,
                                                      void* device_info,
                                                      TF_Status* status);

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod


=head2 TF_InitPlugin

=over 2

  /// Initializes a TensorFlow plugin.
  ///
  /// Must be implemented by the plugin DSO. It is called by TensorFlow runtime.
  ///
  /// Filesystem plugins can be loaded on demand by users via
  /// `Env::LoadLibrary` or during TensorFlow's startup if they are on certain
  /// paths (although this has a security risk if two plugins register for the
  /// same filesystem and the malicious one loads before the legitimate one -
  /// but we consider this to be something that users should care about and
  /// manage themselves). In both of these cases, core TensorFlow looks for
  /// the `TF_InitPlugin` symbol and calls this function.
  ///
  /// For every filesystem URI scheme that this plugin supports, the plugin must
  /// add one `TF_FilesystemPluginInfo` entry in `plugin_info->ops` and call
  /// `TF_SetFilesystemVersionMetadata` for that entry.
  ///

lib/AI/TensorFlow/Libtensorflow/Manual/GPU.pod

=head1 INSTALLATION

In order to use a GPU with C<libtensorflow>, you will need to check that the
L<hardware requirements|https://www.tensorflow.org/install/pip#hardware_requirements> and
L<software requirements|https://www.tensorflow.org/install/pip#software_requirements> are
met. Please refer to the official documentation for the specific
hardware capabilities and software versions.

An alternative to installing all the software listed on the "bare metal" host
machine is to use C<libtensorflow> via a Docker container and the
NVIDIA Container Toolkit. See L<AI::TensorFlow::Libtensorflow::Manual::Quickstart/DOCKER IMAGES>
for more information.

=head1 RUNTIME

When running C<libtensorflow>, your program will attempt to acquire quite a bit
of GPU VRAM. You can check if you have enough free VRAM by using the
C<nvidia-smi> command which displays resource information as well as which
processes are currently using the GPU.  If C<libtensorflow> is not able to
allocate enough memory, it will crash with an out-of-memory (OOM) error. This
is typical when running multiple programs that all use the GPU.
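One mitigation worth knowing about (this is general TensorFlow runtime behavior, not something specific to this Perl binding) is the C<TF_FORCE_GPU_ALLOW_GROWTH> environment variable, which tells the runtime to allocate VRAM incrementally rather than grabbing most of it up front. It must be set before the library initializes the GPU device:

```perl
use strict;
use warnings;

# Must be in the environment before libtensorflow initializes the
# GPU device, so set it in a BEGIN block (or in the shell) before
# loading the library.
BEGIN { $ENV{TF_FORCE_GPU_ALLOW_GROWTH} = 'true' }

# ... load AI::TensorFlow::Libtensorflow and run the model afterwards.
print "TF_FORCE_GPU_ALLOW_GROWTH=$ENV{TF_FORCE_GPU_ALLOW_GROWTH}\n";
```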

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod


}

package Interval {
    use Bio::Location::Simple ();

    use parent qw(Bio::Location::Simple);

    sub center {
        my $self = shift;
        my $center = int( ($self->start + $self->end ) / 2 );
        my $delta = ($self->start + $self->end ) % 2;
        return $center + $delta;
    }

    sub resize {
        my ($self, $width) = @_;
        my $new_interval = $self->clone;

        my $center = $self->center;
        my $half   = int( ($width-1) / 2 );
        my $offset = ($width-1) % 2;

        $new_interval->start( $center - $half - $offset );
        $new_interval->end(   $center + $half  );

        return $new_interval;
    }

    use overload '""' => \&_op_stringify;

    sub _op_stringify { sprintf "%s:%s", $_[0]->seq_id // "(no sequence)", $_[0]->to_FTstring }
}
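To see what C<center> and C<resize> do numerically, here is the same arithmetic on plain 1-based, inclusive C<(start, end)> pairs — a standalone sketch that avoids the BioPerl dependency:

```perl
use strict;
use warnings;

# Same arithmetic as Interval::center above, on a plain
# 1-based inclusive (start, end) pair.
sub center {
    my ($start, $end) = @_;
    return int( ($start + $end) / 2 ) + ($start + $end) % 2;
}

# Same arithmetic as Interval::resize: widen (or shrink) the
# interval to $width while keeping the same center.
sub resize {
    my ($start, $end, $width) = @_;
    my $c      = center($start, $end);
    my $half   = int( ($width - 1) / 2 );
    my $offset = ($width - 1) % 2;
    return ( $c - $half - $offset, $c + $half );
}

my ($s, $e) = resize(5, 8, 8);   # grow a length-4 interval to length 8
printf "[%d, %d] -> length %d\n", $s, $e, $e - $s + 1;  # [3, 10] -> length 8
```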

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

    die "Wrong interval size for $interval --($to)--> $resized_interval"
        unless $resized_interval->length == $to;

    say sprintf "Interval: %s -> %s, length %2d : %s",
        $interval,
        $resized_interval, $resized_interval->length,
        $msg;
}

for my $interval_spec ( [4, 8], [5, 8], [5, 9], [6, 9]) {
    my ($start, $end) = @$interval_spec;
    my $test_interval = Interval->new( -seq_id => 'chr11', -start => $start, -end => $end );
    say sprintf "Testing interval %s with length %d", $test_interval, $test_interval->length;
    say "-----";
    for(0..5) {
        my $base = $test_interval->length;
        my $to = $base + $_;
        _debug_resize $test_interval, $to, "$base -> $to (+ $_)";
    }
    say "";
}

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

use Bio::DB::HTS::Faidx;

my $hg_db = Bio::DB::HTS::Faidx->new( $hg_bgz_path );

sub extract_sequence {
    my ($db, $interval) = @_;

    my $chrom_length = $db->length($interval->seq_id);

    my $trimmed_interval = $interval->clone;
    $trimmed_interval->start( List::Util::max( $interval->start, 1               ) );
    $trimmed_interval->end(   List::Util::min( $interval->end  , $chrom_length   ) );

    # Bio::DB::HTS::Faidx is 0-based for both start and end points
    my $seq = $db->get_sequence2_no_length(
        $trimmed_interval->seq_id,
        $trimmed_interval->start - 1,
        $trimmed_interval->end   - 1,
    );

    my $pad_upstream   = 'N' x List::Util::max( -($interval->start-1), 0 );
    my $pad_downstream = 'N' x List::Util::max( $interval->end - $chrom_length, 0 );

    return join '', $pad_upstream, $seq, $pad_downstream;
}

sub seq_info {
    my ($seq, $n) = @_;
    $n ||= 10;
    if( length $seq > $n ) {
        sprintf "%s...%s (length %d)", uc substr($seq, 0, $n), uc substr($seq, -$n), length $seq;

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod


####

{

say "Testing sequence extraction:";

say "1 base: ",   seq_info
    extract_sequence( $hg_db,
        Interval->new( -seq_id => 'chr11',
            -start => 35_082_742 + 1,
            -end   => 35_082_742 + 1 ) );

say "3 bases: ",  seq_info
    extract_sequence( $hg_db,
        Interval->new( -seq_id => 'chr11',
            -start => 1,
            -end   => 1 )->resize(3) );

say "5 bases: ", seq_info
    extract_sequence( $hg_db,
        Interval->new( -seq_id => 'chr11',
            -start => $hg_db->length('chr11'),
            -end   => $hg_db->length('chr11') )->resize(5) );

say "chr11 is of length ", $hg_db->length('chr11');
say "chr11 bases: ", seq_info
    extract_sequence( $hg_db,
        Interval->new( -seq_id => 'chr11',
            -start => 1,
            -end   => $hg_db->length('chr11') )->resize( $hg_db->length('chr11') ) );
}

my $target_interval = Interval->new( -seq_id => 'chr11',
    -start => 35_082_742 +  1, # BioPerl is 1-based
    -end   => 35_197_430 );

say "Target interval: $target_interval with length @{[ $target_interval->length ]}";

die "Target interval is not $model_central_base_pairs_length bp long"
    unless $target_interval->length == $model_central_base_pairs_length;

say "Target sequence is ", seq_info extract_sequence( $hg_db, $target_interval );


lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

my $plot_output_path = 'enformer-target-interval-tracks.png';
my $gp = gpwin('pngcairo', font => ",10", output => $plot_output_path, size => [10,2. * @tracks], aa => 2 );

$gp->multiplot( layout => [1, scalar @tracks], title => $target_interval );

$gp->options(
    offsets => [ graph => "0.01, 0, 0, 0" ],
    lmargin => "at screen 0.05",
);

my $x = zeroes($predictions_p->dim(1))->xlinvals($target_interval->start, $target_interval->end);

my @tics_opts = (mirror => 0, out => 1);

for my $i (0..$#tracks) {
    my ($title, $id, $y) = @{$tracks[$i]};
    $gp->plot( {
            title => $title,
            border => [2],
            ytics => { @tics_opts, locations => [ ceil(($y->max-$y->min)/2)->sclr ] },
            ( $i == $#tracks

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod


Now we need a way to deal with the sequence interval. We're going to use 1-based coordinates as BioPerl does. In fact, we'll extend a BioPerl class.

  package Interval {
      use Bio::Location::Simple ();
  
      use parent qw(Bio::Location::Simple);
  
      sub center {
          my $self = shift;
          my $center = int( ($self->start + $self->end ) / 2 );
          my $delta = ($self->start + $self->end ) % 2;
          return $center + $delta;
      }
  
      sub resize {
          my ($self, $width) = @_;
          my $new_interval = $self->clone;
  
          my $center = $self->center;
          my $half   = int( ($width-1) / 2 );
          my $offset = ($width-1) % 2;
  
          $new_interval->start( $center - $half - $offset );
          $new_interval->end(   $center + $half  );
  
          return $new_interval;
      }
  
      use overload '""' => \&_op_stringify;
  
      sub _op_stringify { sprintf "%s:%s", $_[0]->seq_id // "(no sequence)", $_[0]->to_FTstring }
  }
  

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

      die "Wrong interval size for $interval --($to)--> $resized_interval"
          unless $resized_interval->length == $to;
  
      say sprintf "Interval: %s -> %s, length %2d : %s",
          $interval,
          $resized_interval, $resized_interval->length,
          $msg;
  }
  
  for my $interval_spec ( [4, 8], [5, 8], [5, 9], [6, 9]) {
      my ($start, $end) = @$interval_spec;
      my $test_interval = Interval->new( -seq_id => 'chr11', -start => $start, -end => $end );
      say sprintf "Testing interval %s with length %d", $test_interval, $test_interval->length;
      say "-----";
      for(0..5) {
          my $base = $test_interval->length;
          my $to = $base + $_;
          _debug_resize $test_interval, $to, "$base -> $to (+ $_)";
      }
      say "";
  }
  

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  use Bio::DB::HTS::Faidx;
  
  my $hg_db = Bio::DB::HTS::Faidx->new( $hg_bgz_path );
  
  sub extract_sequence {
      my ($db, $interval) = @_;
  
      my $chrom_length = $db->length($interval->seq_id);
  
      my $trimmed_interval = $interval->clone;
      $trimmed_interval->start( List::Util::max( $interval->start, 1               ) );
      $trimmed_interval->end(   List::Util::min( $interval->end  , $chrom_length   ) );
  
      # Bio::DB::HTS::Faidx is 0-based for both start and end points
      my $seq = $db->get_sequence2_no_length(
          $trimmed_interval->seq_id,
          $trimmed_interval->start - 1,
          $trimmed_interval->end   - 1,
      );
  
      my $pad_upstream   = 'N' x List::Util::max( -($interval->start-1), 0 );
      my $pad_downstream = 'N' x List::Util::max( $interval->end - $chrom_length, 0 );
  
      return join '', $pad_upstream, $seq, $pad_downstream;
  }
  
  sub seq_info {
      my ($seq, $n) = @_;
      $n ||= 10;
      if( length $seq > $n ) {
          sprintf "%s...%s (length %d)", uc substr($seq, 0, $n), uc substr($seq, -$n), length $seq;

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  
  ####
  
  {
  
  say "Testing sequence extraction:";
  
  say "1 base: ",   seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => 35_082_742 + 1,
              -end   => 35_082_742 + 1 ) );
  
  say "3 bases: ",  seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => 1,
              -end   => 1 )->resize(3) );
  
  say "5 bases: ", seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => $hg_db->length('chr11'),
              -end   => $hg_db->length('chr11') )->resize(5) );
  
  say "chr11 is of length ", $hg_db->length('chr11');
  say "chr11 bases: ", seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => 1,
              -end   => $hg_db->length('chr11') )->resize( $hg_db->length('chr11') ) );
  }

B<STREAM (STDOUT)>:

  Testing sequence extraction:
  1 base: G (length 1)
  3 bases: NNN (length 3)
  5 bases: NNNNN (length 5)
  chr11 is of length 135086622
  chr11 bases: NNNNNNNNNN...NNNNNNNNNN (length 135086622)

B<RESULT>:

  1

Now we can use the same target interval that is used in the example notebook which recreates part of L<figure 1|https://www.nature.com/articles/s41592-021-01252-x/figures/1> from the Enformer paper.

  my $target_interval = Interval->new( -seq_id => 'chr11',
      -start => 35_082_742 +  1, # BioPerl is 1-based
      -end   => 35_197_430 );
  
  say "Target interval: $target_interval with length @{[ $target_interval->length ]}";
  
  die "Target interval is not $model_central_base_pairs_length bp long"
      unless $target_interval->length == $model_central_base_pairs_length;
  
  say "Target sequence is ", seq_info extract_sequence( $hg_db, $target_interval );
  
  

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  my $plot_output_path = 'enformer-target-interval-tracks.png';
  my $gp = gpwin('pngcairo', font => ",10", output => $plot_output_path, size => [10,2. * @tracks], aa => 2 );
  
  $gp->multiplot( layout => [1, scalar @tracks], title => $target_interval );
  
  $gp->options(
      offsets => [ graph => "0.01, 0, 0, 0" ],
      lmargin => "at screen 0.05",
  );
  
  my $x = zeroes($predictions_p->dim(1))->xlinvals($target_interval->start, $target_interval->end);
  
  my @tics_opts = (mirror => 0, out => 1);
  
  for my $i (0..$#tracks) {
      my ($title, $id, $y) = @{$tracks[$i]};
      $gp->plot( {
              title => $title,
              border => [2],
              ytics => { @tics_opts, locations => [ ceil(($y->max-$y->min)/2)->sclr ] },
              ( $i == $#tracks

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod

          })
      );
  }

B<DISPLAY>:

=for html <span style="display:inline-block;margin-left:1em;"><p><table style="width: 100%"><tr><td><tt>apple</tt></td><td><a href="https://upload.wikimedia.org/wikipedia/commons/1/15/Red_Apple.jpg"><img alt="apple" src="https://upload.wikimedia.org/...

=head2 Download the test images and transform them into suitable input data

We now fetch these images and prepare them to be in the format needed by using C<Imager> to resize and add padding. Then we turn the C<Imager> data into a C<PDL> ndarray. Since the C<Imager> data is stored as 32-bits with 4 channels in the order ...

We then take all the PDL ndarrays and concatenate them. Again, note that the dimension lists for the PDL ndarray and the TFTensor are reversed.

  sub imager_paste_center_pad {
      my ($inner, $padded_sz, @rest) = @_;
  
      my $outer = Imager->new( List::Util::mesh( [qw(xsize ysize)], $padded_sz ),
          @rest
      );
  

lib/AI/TensorFlow/Libtensorflow/Manual/Quickstart.pod

# ABSTRACT: Start here for an overview of the library
# PODNAME: AI::TensorFlow::Libtensorflow::Manual::Quickstart

__END__

=pod

=encoding UTF-8

=head1 NAME

AI::TensorFlow::Libtensorflow::Manual::Quickstart - Start here for an overview of the library

=head1 DESCRIPTION

This document provides a tour of C<libtensorflow> to help you get started with
the library.

=head1 CONVENTIONS

The library uses the UpperCamelCase naming convention for method names in order
to match the underlying C library (for compatibility with future API changes)
and to make translating code from C easier, as this is a low-level API.

As such, constructors for objects that correspond to C<libtensorflow> data
structures are typically called C<New>. For example, a new

lib/AI/TensorFlow/Libtensorflow/Session.pm

B<C API>: L<< C<TF_NewSession>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_NewSession >>

=head2 LoadFromSavedModel

B<C API>: L<< C<TF_LoadSessionFromSavedModel>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_LoadSessionFromSavedModel >>

=head1 METHODS

=head2 Run

Run the graph associated with the session starting with the supplied
C<$inputs> with corresponding values in C<$input_values>.

The values at the outputs given by C<$outputs> will be placed in
C<$output_values>.

B<Parameters>

=over 4

=item Maybe[TFBuffer] $run_options

xt/author/pod-snippets.t

use With::Roles;

my @modules = qw/
	AI::TensorFlow::Libtensorflow
	AI::TensorFlow::Libtensorflow::ApiDefMap
	AI::TensorFlow::Libtensorflow::Buffer
	AI::TensorFlow::Libtensorflow::DataType
	AI::TensorFlow::Libtensorflow::Graph
	AI::TensorFlow::Libtensorflow::Tensor

	AI::TensorFlow::Libtensorflow::Manual::Quickstart
/;

plan tests => 0 + @modules;

package # hide from PAUSE
	Test::Pod::Snippets::Role::PodLocatable {

	use Moose::Role;
	use Pod::Simple::Search;


